record_id,title,abstract,year,label_included,duplicate_record_id
1,A Conceptual Model of ICT-Supported Unified Process of International Outsourcing of Software Production,"This is an ongoing research in international outsourcing software production. This research examines how software production through the ICT-supported unified process of international outsourcing could be executed and managed effectively. To address this research question, the results of an in-depth literature review in the areas of outsourcing, international outsourcing, information technology, and international software production is presented. This study proposes the information communication technologies' (ICT)-supported unified process model of international outsourcing of software production (SUPMIOSP). ICT-SUPMIOSP provides a detailed guideline on how to manage the entire process of international outsourcing by integrating a number of key issues such as relationship and risks management. Both theoretical and practical aspects of ICT-SUPMIOSP are presented. At the theoretical level, the model can be used as a basis for further research, while at the practical level, it helps managers and other stakeholders to understand the multiple activities involved in offshore outsourcing, improve, systematize, and execute the ICT-SUPMIOSP more effectively and efficiently.",2006,1,
2,A Quantitative Assessment of Requirements Engineering Publications - 1963-2006,"
Requirements engineering research has been conducted for over 40 years. It is important to recognize the plethora of results accumulated to date to: (a) improve researchers' understanding of the historical roots of our field in the real-world and the problems that they are trying to solve, (b) expose researchers to the breadth and depth of solutions that have been proposed, (c) provide a synergistic basis for improving those solutions or building new ones to solve real-world problems facing the industry today, and (d) increase practitioner awareness of available solutions. A detailed meta-analysis of the requirements engineering literature will provide an objective overview of the advances and current state of the discipline. This paper represents the first step in a planned multi-year analysis. It presents the results of a demographic analysis by date, type, outlet, author, and author affiliation for an existing database of over 4,000 requirements engineering publications.
",2007,1, 3,A survey and taxonomy of approaches for mining software repositories in the context of software evolution,"
A comprehensive literature survey on approaches for mining software repositories (MSR) in the context of software evolution is presented. In particular, this survey deals with those investigations that examine multiple versions of software artifacts or other temporal information. A taxonomy is derived from the analysis of this literature and presents the work via four dimensions: the type of software repositories mined (what), the purpose (why), the adopted/invented methodology used (how), and the evaluation method (quality). The taxonomy is demonstrated to be expressive (i.e., capable of representing a wide spectrum of MSR investigations) and effective (i.e., facilitates similarities and comparisons of MSR investigations). Lastly, a number of open research issues in MSR that require further investigation are identified.
",2007,1, 4,An analysis of data sets used to train and validate cost prediction systems,"OBJECTIVE - to build up a picture of the nature and type of data sets being used to develop and evaluate different software project effort prediction systems. We believe this to be important since there is a growing body of published work that seeks to assess different prediction approaches.METHOD - we performed an exhaustive search from 1980 onwards from three software engineering journals for research papers that used project data sets to compare cost prediction systems.RESULTS - this identified a total of 50 papers that used, one or more times, a total of 71 unique project data sets. We observed that some of the better known and easily accessible data sets were used repeatedly making them potentially disproportionately influential. Such data sets also tend to be amongst the oldest with potential problems of obsolescence. We also note that only about 60% of all data sets are in the public domain. Finally, extracting relevant information from research papers has been time consuming due to different styles of presentation and levels of contextural information.CONCLUSIONS - first, the community needs to consider the quality and appropriateness of the data set being utilised; not all data sets are equal. Second, we need to assess the way results are presented in order to facilitate meta-analysis and whether a standard protocol would be appropriate.",2005,1, 5,Challenges in Collaborative Modeling: A Literature Review,"Modeling is a key activity in conceptual design and system design. Users as well as stakeholders, experts and entrepreneurs need to be able to create shared understanding about a system representation. In this paper we conducted a literature review to provide an overview of studies in which collaborative modeling efforts have been conducted to give first insights in the challenges of collaborative modeling, specifically with respect to group composition, collaboration & participation methods, modeling methods and quality in collaborative modeling. We found a critical challenge in dealing with the lack of modeling skills, such as having a modeler to support the group, or create the model for the group versus training to empower participants to actively participate in the modeling effort, and another critical challenge in resolving conflicting (parts of) models and integration of submodels or models from different perspectives. The overview of challenges presented in this paper will inspire the design of methods and support systems that will ultimately advance the efficiency and effectiveness of collaborative modeling tasks.",2014,1, 6,Controversy Corner: A new research agenda for tool integration,"This article highlights tool integration within software engineering environments. Tool integration concerns the techniques used to form coalitions of tools that provide an environment supporting some, or all, activities within a software engineering process. These techniques have been used to create environments that attempt to address aspects of software development, with varying success. 
This article provides a timely analysis and review of many of the significant projects in the field and, combined with evidence collected from industry, concludes by proposing an empirical manifesto for future research, where we see the need for work to justify tool integration efforts in terms of relevant socio-economic indicators.",2007,1, 7,Data sets and data quality in software engineering,"OBJECTIVE - to assess the extent and types of techniques used to manage quality within software engineering data sets. We consider this a particularly interesting question in the context of initiatives to promote sharing and secondary analysis of data sets. METHOD - we perform a systematic review of available empirical software engineering studies. RESULTS - only 23 out of the many hundreds of studies assessed, explicitly considered data quality. CONCLUSIONS - first, the community needs to consider the quality and appropriateness of the data set being utilised; not all data sets are equal. Second, we need more research into means of identifying, and ideally repairing, noisy cases. Third, it should become routine to use sensitivity analysis to assess conclusion stability with respect to the assumptions that must be made concerning noise levels.",2005,1, 8,Developing Open Source Software: A Community-Based Analysis of Research,"Open source software (OSS) creates the potential for the inclusion of large and diverse communities in every aspect of the software development and consumption life cycle. However, despite 6 years of effort by an ever growing research community, we still don’t know exactly what we do and don’t know about OSS, nor do we have a clear idea about the basis for our knowledge. This paper presents an analysis of 155 research artefacts in the area of open source software. The purpose of the study is to identify the kinds of open source project communities that have been researched, the kinds of research questions that have been asked, and the methodologies used by researchers. Emerging from the study is a clearer understanding of what we do and don’t know about open source software, and recommendations for future research efforts ",2015,1, 9,Effectiveness of Requirements Elicitation Techniques: Empirical Results Derived from a Systematic Review,"This paper reports a systematic review of empirical studies concerning the effectiveness of elicitation techniques, and the subsequent aggregation of empirical evidence gathered from those studies. The most significant results of the aggregation process are as follows: (I) interviews, preferentially structured, appear to be one of the most effective elicitation techniques; (2) many techniques often cited in the literature, like card sorting, ranking or thinking aloud, tend to be less effective than interviews; (3) analyst experience does not appear to be a relevant factor; and (4) the studies conducted have not found the use of intermediate representations during elicitation to have significant positive effects. It should be noted that, as a general rule, the studies from which these results were aggregated have not been replicated, and therefore the above claims cannot be said to be absolutely certain. 
However, they can be used by researchers as pieces of knowledge to be further investigated and by practitioners in development projects, always taking into account that they are preliminary findings",2006,1, 10,Evidence-Based Guidelines for Assessment of Software Development Cost Uncertainty,"Several studies suggest that uncertainty assessments of software development costs are strongly biased toward overconfidence, i.e., that software cost estimates typically are believed to be more accurate than they really are. This overconfidence may lead to poor project planning. As a means of improving cost uncertainty assessments, we provide evidence-based guidelines for how to assess software development cost uncertainty, based on results from relevant empirical studies. The general guidelines provided are: 1) Do not rely solely on unaided, intuition-based uncertainty assessment processes, 2) do not replace expert judgment with formal uncertainty assessment models, 3) apply structured and explicit judgment-based processes, 4) apply strategies based on an outside view of the project, 5) combine uncertainty assessments from different sources through group work, not through mechanical combination, 6) use motivational mechanisms with care and only if greater effort is likely to lead to improved assessments, and 7) frame the assessment problem to fit the structure of the relevant uncertainty information and the assessment process. These guidelines are preliminary and should be updated in response to new evidence.",2005,1, 11,Experimental context classification: incentives and experience of subjects,"There is a need to identify factors that affect the result of empirical studies in software engineering research. It is still the case that seemingly identical replications of controlled experiments result in different conclusions due to the fact that all factors describing the experiment context are not clearly defined and hence controlled. In this article, a scheme for describing the participants of controlled experiments is proposed and evaluated. It consists of two main factors, the incentives for participants in the experiment and the experience of the participants. The scheme has been evaluated by classifying a set of previously conducted experiments from literature. It can be concluded that the scheme was easy to use and understand. It is also found that experiments that are classified in the same way to a large extent point at the same results, which indicates that the scheme addresses relevant factors.",2005,1, 12,How Does a Measurement Programme Evolve in Software Organizations?,"
Establishing a software measurement programme within an organization is not a straightforward task. Previous literature surveys have focused on software process improvement in general and software measurement has been analysed in case studies. This literature survey collects the data from separate cases and presents the critical success factors that are specific to software measurement programmes. We present a categorization of the success factors based on organizational roles that are involved in measurement. Furthermore, the most essential elements of success in different phases of the life cycle of the measurement programme are analysed. It seems that the role of upper management is crucial when starting measurement and the individual developers' impact increases in the later phases. Utilization of the measurement data and improvement of the measurement and development processes requires active management support again.
",2008,1, 13,Improving Evidence about Software Technologies: A Look at Model-Based Testing,"Model-based testing (MBT) approaches help automatically generate test cases using models extracted from software artifacts, and hold the promise to greatly affect how we build software. A review of the literature shows that certain specialized domains are applying MBT, but it does not yet seem to be a mainstream approach. The authors therefore conducted a systematic review of the literature to investigate how much evidence is available on MBT's costs and benefits, especially regarding how these techniques compare to other common testing approaches. They use these results to derive suggestions regarding what types of studies might further increase the deployment of these techniques. ",2008,1, 14,In search of `architectural knowledge',"The software architecture community puts more and more emphasis on 'architectural knowledge'. However, there appears to be no commonly accepted definition of what architectural knowledge entails, which makes it a fuzzy concept. In order to obtain a better understanding of how different authors view 'architectural knowledge', we have conducted a systematic review to examine how architectural knowledge is defined and how the different definitions in use are related. From this review it became clear that many authors do not provide a concrete definition of what they think architectural knowledge entails. What is more intriguing, though, is that those who do give a definition seem to agree that architectural knowledge spans from problem domain through decision making to solution; an agreement that is not obvious from the definitions themselves, but which is only brought to light after careful systematic comparison of the different studies.",2010,1, 15,Measurement in software engineering: From the roadmap to the crossroads.,"Research on software measurement can be organized around five key conceptual and methodological issues: how to apply measurement theory to software, how to frame software metrics, how to develop metrics, how to collect core measures, and how to analyze measures. The subject is of special concern for the industry, which is interested in improving practices - mainly in developing countries, where the software industry represents an opportunity for growth and usually receives institutional support for matching international quality standards. Academics are also in need of understanding and developing more effective methods for managing the software process and assessing the success of products and services, as a result of an enhanced awareness about the emergency of aligning business processes and information systems. This paper unveils the fundamentals of measurement in software engineering and discusses current issues and foreseeable trends for the subject. A literature review was performed within major academic publications in the last decade, and findings suggest a sensible shift of measurement interests towards managing the software process as a whole - without losing from sight the customary focus on hard issues like algorithm efficiency and worker productivity",1997,1, 16,Mobile Systems Development: A Literature Review,"This article reviews 105 representative contributions to the literature on mobile systems development. The contributions are categorized according to a simple conceptual framework. 
The framework comprises four perspectives: the requirements perspective, the technology perspective, the application perspective, and the business perspective. Our literature review shows that mobile systems development is overlooked in the current debate. From the review, we extend the traditional view on systems development to encompass mobile systems and, based on the identified perspectives, we propose core characteristics for mobile systems. We also extend the traditional focus found in systems development on processes in a development project to encompass the whole of the development company as well as interorganizational linkage between development companies. Finally, we point at research directions emerging from the review that are relevant to the field of mobile systems development.",1989,1, 17,"Quality, productivity and economic benefits of software reuse: a review of industrial studies","
Systematic software reuse is proposed to increase productivity and software quality and lead to economic benefits. Reports of successful software reuse programs in industry have been published. However, there has been little effort to organize the evidence systematically and appraise it. This review aims to assess the effects of software reuse in industrial contexts. Journals and major conferences between 1994 and 2005 were searched to find observational studies and experiments conducted in industry, returning eleven papers of observational type. Systematic software reuse is significantly related to lower problem (defect, fault or error) density in five studies and to decreased effort spent on correcting problems in three studies. The review found evidence for significant gains in apparent productivity in three studies. Other significant benefits of software reuse were reported in single studies or the results were inconsistent. Evidence from industry is sparse and combining results was done by vote-counting. Researchers should pay more attention to using comparable metrics, performing longitudinal studies, and explaining the results and impact on industry. For industry, evaluating reuse of COTS or OSS components, integrating reuse activities in software processes, better data collection and evaluating return on investment are major challenges.
",2007,1, 18,Reflections on 10 Years of Software Process Simulation Modeling,"Software process simulation modeling (SPSM) has become an increasingly active research area since its introduction in the late 1980s. Particularly during the last ten years the related research community and the number of publications have been growing. The objective of this research is to provide insights about the evolution of SPSM research during the last 10 years. A systematic literature review was proposed with two subsequent stages to achieve this goal. This paper presents the preliminary results of the first stage of the review that is exclusively focusing on a core set of publication sources. More than 200 relevant publications were analyzed in order to find answers to the research questions, including the purposes and scopes of SPSM, application domains, and predominant research issues. From the analysis the following conclusions could be drawn: (1) Categories for classifying software process simulation models as suggested by the authors of a landmark publication in 1999 should be adjusted and refined to better capture the diversity of published models. (2) Research improving the efficiency of SPSM is gaining importance. (3) Hybrid process simulation models have attracted interest as a possibility to more realistically capture complex real-world software processes. ",2014,1, 19,Software process improvement in small and medium software enterprises: a systematic review,"Small and medium enterprises are a very important cog in the gears of the world economy. The software industry in most countries is composed of an industrial scheme that is made up mainly of small and medium software enterprises--SMEs. To strengthen these types of organizations, efficient Software Engineering practices are needed--practices which have been adapted to their size and type of business. Over the last two decades, the Software Engineering community has expressed special interest in software process improvement (SPI) in an effort to increase software product quality, as well as the productivity of software development. However, there is a widespread tendency to make a point of stressing that the success of SPI is only possible for large companies. In this article, a systematic review of published case studies on the SPI efforts carried out in SMEs is presented. Its objective is to analyse the existing approaches towards SPI which focus on SMEs and which report a case study carried out in industry. A further objective is that of discussing the significant issues related to this area of knowledge, and to provide an up-to-date state of the art, from which innovative research activities can be thought of and planned.",2011,1, 20,Software project economics: a roadmap,"The objective of this paper is to consider research progress in the field of software project economics with a view to identifying important challenges and promising research directions. I argue that this is an important sub-discipline since this will underpin any cost-benefit analysis used to justify the resourcing, or otherwise, of a software project. To accomplish this I conducted a bibliometric analysis of peer reviewed research articles to identify major areas of activity. My results indicate that the primary goal of more accurate cost prediction systems remains largely unachieved. 
However, there are a number of new and promising avenues of research including: how we can combine results from primary studies, integration of multiple predictions and applying greater emphasis upon the human aspects of prediction tasks. I conclude that the field is likely to remain very challenging due to the people-centric nature of software engineering, since it is in essence a design task. Nevertheless the need for good economic models will grow rather than diminish as software becomes increasingly ubiquitous.",2007,1, 21,Status of Empirical Research in Software Engineering,"We provide an assessment of the status of empirical software research by analyzing all refereed articles that appeared in the Journal of Empirical Software Engineering from its first issue in January 1996 through June 2006. The journal publishes empirical software research exclusively and it is the only journal to do so. The main findings are: 1. The dominant empirical methods are experiments and case studies. Other methods (correlational studies, meta analysis, surveys, descriptive approaches, ex post facto studies) occur infrequently; long-term studies are missing. About a quarter of the experiments are replications. 2. Professionals are used somewhat more frequently than students as subjects. 3. The dominant topics studied are measurement/metrics and tools/methods/frameworks. Metrics research is dominated by correlational and case studies without any experiments. 4. Important topics are underrepresented or absent, for example: programming languages, model driven development, formal methods, and others. The narrow focus on a few empirically researched topics is in contrast to the broad scope of software research. ",2013,1, 22,Systematic review of organizational motivations for adopting CMM-based SPI,"Background: Software Process Improvement (SPI) is intended to improve software engineering, but can only be effective if used. To improve SPI's uptake, we should understand why organizations adopt SPI. CMM-based SPI approaches are widely known and studied. Objective: We investigated why organizations adopt CMM-based SPI approaches, and how these motivations relate to organizations' size. Method: We performed a systematic review, examining reasons reported in more than forty primary studies. Results: Reasons usually related to product quality and project performance, and less commonly, to process. Organizations reported customer reasons infrequently and employee reasons very rarely. We could not show that reasons related to size. Conclusion: Despite its origins in helping to address customer-related issues for the USAF, CMM-based SPI has mostly been adopted to help organizations improve project performance and product quality issues. This reinforces a view that the goal of SPI is not to improve process per se, but instead to provide business benefits.",2008,1, 23,Systematic review: A systematic review of effect size in software engineering experiments,"An effect size quantifies the effects of an experimental treatment. Conclusions drawn from hypothesis testing results might be erroneous if effect sizes are not judged in addition to statistical significance. This paper reports a systematic review of 92 controlled experiments published in 12 major software engineering journals and conference proceedings in the decade 1993-2002. The review investigates the practice of effect size reporting, summarizes standardized effect sizes detected in the experiments, discusses the results and gives advice for improvements. 
Standardized and/or unstandardized effect sizes were reported in 29% of the experiments. Interpretations of the effect sizes in terms of practical importance were not discussed beyond references to standard conventions. The standardized effect sizes computed from the reviewed experiments were equal to observations in psychology studies and slightly larger than standard conventions in behavioral science. ",2007,1, 24,Tailoring and Introduction of the Rational Unified Process,RUP is a comprehensive software development process framework that has gained a lot of interest by the industry. One major challenge of taking RUP into use is to tailor it to specific needs and then to introduce it into a development organization. This study presents a review and a systematic assembly of existing studies on the tailoring and introduction of RUP. From a systematic search for study reports on this topic we found that most research is anecdotal and focus on the effects of RUP itself. Only a few number of studies address tailoring and introduction. We have found that tailoring RUP is a considerable challenge by itself and that it must be closely related to existing best practices. We see a tendency of turning from large complete process frameworks towards smaller and more light-weight processes which may impose a smoother transition from process model to process in use. ,2009,1, 25,Techniques for developing more accessible web applications: a survey towards a process classification,"
The Web has become one of the most important communication media, since it is spread all over the world. In order to enable everyone to access this medium, Web accessibility has become an emerging topic, and many techniques have been evolved to support the development of accessible Web content. This paper presents a survey on techniques for Web accessibility and proposes a classification into the processes of ISO/IEC 12207 standard. The survey was carried out applying systematic review principles during the literature review. The results include analysis obtained from the synthesis of 53 studies, selected from an initial set of 844. Although the survey results indicate a growth in research on techniques for design and evaluation of Web applications, they also indicate that several development activities have been poorly addressed by scientific research efforts.
",2007,1, 26,The Role of Deliberate Artificial Design Elements in Software Engineering Experiments,"Increased realism in software engineering experiments is often promoted as an important means of increasing generalizability and industrial relevance. In this context, artificiality, e.g., the use of constructed tasks in place of realistic tasks, is seen as a threat. In this paper, we examine the opposite view that deliberately introduced artificial design elements may increase knowledge gain and enhance both generalizability and relevance. In the first part of this paper, we identify and evaluate arguments and examples in favor of and against deliberately introducing artificiality into software engineering experiments. We find that there are good arguments in favor of deliberately introducing artificial design elements to 1) isolate basic mechanisms, 2) establish the existence of phenomena, 3) enable generalization from particularly unfavorable to more favorable conditions (persistence of phenomena), and 4) relate experiments to theory. In the second part of this paper, we summarize a content analysis of articles that report software engineering experiments published over a 10-year period from 1993 to 2002. The analysis reveals a striving for realism and external validity, but little awareness of for what and when various degrees of artificiality and realism are appropriate. Furthermore, much of the focus on realism seems to be based on a narrow understanding of the nature of generalization. We conclude that an increased awareness and deliberation as to when and for what purposes both artificial and realistic design elements are applied is valuable for better knowledge gain and quality in empirical software engineering experiments. We also conclude that time spent on studies that have obvious threats to validity that are due to artificiality might be better spent on studies that investigate research questions for which artificiality is a strength rather than a weakness. However, arguments in favor of artificial design elements should not be used to justify studies - - that are badly designed or that have research questions of low relevance.",2008,1, 27,The type of evidence produced by empirical software engineers,"This paper reports on the research published between the years 1997 and 2003 inclusive in the journal of Empirical Software Engineering, drawing on the taxonomy developed by Glass et al. in [3]. We found that the research was somewhat narrow in topic with about half the papers focusing on measurement/metrics, review and inspection; that researchers were almost as interested in formulating as in evaluating; that hypothesis testing and laboratory experiments dominated evaluations; that research was not very likely to focus on people and extremely unlikely to refer to other disciplines. We discuss our findings in the context of making empirical software engineering more relevant to practitioners.",2005,1, 28,Where Is the Proof? - A Review of Experiences from Applying MDE in Industry,"
Model-Driven Engineering (MDE) has been promoted as a solution to handle the complexity of software development by raising the abstraction level and automating labor-intensive and error-prone tasks. However, few efforts have been made at collecting evidence to evaluate its benefits and limitations, which is the subject of this review. We searched several publication channels in the period 2000 to June 2007 for empirical studies on applying MDE in industry, which produced 25 papers for the review. Our findings include industry motivations for investigating MDE and the different domains it has been applied to. In most cases the maturity of third-party tool environments is still perceived as unsatisfactory for large-scale industrial adoption. We found reports of improvements in software quality and of both productivity gains and losses, but these reports were mainly from small-scale studies. There are a few reports on advantages of applying MDE in larger projects, however, more empirical studies and detailed data are needed to strengthen the evidence. We conclude that there is too little evidence to allow generalization of the results at this stage.
",2008,1, 29,Cross versus Within-Company Cost Estimation Studies: A Systematic Review,"The objective of this paper is to determine under what circumstances individual organizations would be able to rely on cross-company-based estimation models. We performed a systematic review of studies that compared predictions from cross-company models with predictions from within-company models based on analysis of project data. Ten papers compared cross-company and within-company estimation models; however, only seven presented independent results. Of those seven, three found that cross-company models were not significantly different from within-company models, and four found that cross-company models were significantly worse than within-company models. Experimental procedures used by the studies differed making it impossible to undertake formal meta-analysis of the results. The main trend distinguishing study results was that studies with small within-company data sets (i.e., $20 projects) that used leave-one-out cross validation all found that the within-company model was significantly different (better) from the cross-company model. The results of this review are inconclusive. It is clear that some organizations would be ill-served by cross-company models whereas others would benefit. Further studies are needed, but they must be independent (i.e., based on different data bases or at least different single company data sets) and should address specific hypotheses concerning the conditions that would favor cross-company or within-company models. In addition, experimenters need to standardize their experimental procedures to enable formal meta-analysis, and recommendations are made in Section 3.",2007,1, 30,A Systematic Review of Software Development Cost Estimation Studies,"This paper aims to provide a basis for the improvement of software-estimation research through a systematic review of previous work. The review identifies 304 software cost estimation papers in 76 journals and classifies the papers according to research topic, estimation approach, research approach, study context and data set. A Web-based library of these cost estimation papers is provided to ease the identification of relevant estimation research results. The review results combined with other knowledge provide support for recommendations for future software cost estimation research, including: 1) increase the breadth of the search for relevant studies, 2) search manually for relevant papers within a carefully selected set of journals when completeness is essential, 3) conduct more studies on estimation methods commonly used by the software industry, and 4) increase the awareness of how properties of the data sets impact the results when evaluating estimation methods",2007,1, 31,A Survey of Controlled Experiments in Software Engineering,"The classical method for identifying cause-effect relationships is to conduct controlled experiments. This paper reports upon the present state of how controlled experiments in software engineering are conducted and the extent to which relevant information is reported. Among the 5,453 scientific articles published in 12 leading software engineering journals and conferences in the decade from 1993 to 2002, 103 articles (1.9 percent) reported controlled experiments in which individuals or teams performed one or more software engineering tasks. 
This survey quantitatively characterizes the topics of the experiments and their subjects (number of subjects, students versus professionals, recruitment, and rewards for participation), tasks (type of task, duration, and type and size of application) and environments (location, development tools). Furthermore, the survey reports on how internal and external validity is addressed and the extent to which experiments are replicated. The gathered data reflects the relevance of software engineering experiments to industrial practice and the scientific maturity of software engineering research.",2005,1, 32,A Survey on Software Estimation in the Norwegian Industry,"Abstract: We provide an overview of the estimation methods that software companies apply to estimate their projects, why those methods are chosen, and how accurate they are. In order to improve estimation accuracy, such knowledge is essential. We conducted an in-depth survey, where information was collected through structured interviews with senior managers from 18 different companies and project managers of 52 different projects. We analyzed information about estimation approach, effort estimation accuracy and bias, schedule estimation accuracy and bias, delivered functionality and other estimation related information. Our results suggest, for example, that average effort overruns are 41%, that the estimation performance has not changed much the last 10-20 years, that expert estimation is the dominating estimation method, that estimation accuracy is not much impacted by use of formal estimation models, and that software managers tend to believe that the estimation accuracy of their company is better than it actually is.",2004,1, 33,A Systematic Review of Theory Use in Software Engineering Experiments,"Empirically based theories are generally perceived as foundational to science. However, in many disciplines, the nature, role and even the necessity of theories remain matters for debate, particularly in young or practical disciplines such as software engineering. This article reports a systematic review of the explicit use of theory in a comprehensive set of 103 articles reporting experiments, from of a total of 5,453 articles published in major software engineering journals and conferences in the decade 1993-2002. Of the 103 articles, 24 use a total of 40 theories in various ways to explain the cause-effect relationship(s) under investigation. The majority of these use theory in the experimental design to justify research questions and hypotheses, some use theory to provide post hoc explanations of their results, and a few test or modify theory. A third of the theories are proposed by authors of the reviewed articles. The interdisciplinary nature of the theories used is greater than that of research in software engineering in general. We found that theory use and awareness of theoretical issues are present, but that theory-driven research is, as yet, not a major issue in empirical software engineering. Several articles comment explicitly on the lack of relevant theory. We call for an increased awareness of the potential benefits of involving theory, when feasible. 
To support software engineering researchers who wish to use theory, we show which of the reviewed articles on which topics use which theories for what purposes, as well as details of the theories' characteristics",2007,1, 34,A systematic review of Web engineering research,"Abstract: This paper uses a systematic literature review as means of investigating the rigor of claims arising from Web engineering research. Rigor is measured using criteria combined from software engineering research. We reviewed 173 papers and results have shown that only 5% would be considered rigorous methodologically. In addition to presenting our results, we also provide suggestions for improvement of Web engineering research based on lessons learnt by the software engineering community.",2005,1, 35,Are Two Heads Better than One? On the Effectiveness of Pair Programming,"Pair programming is a collaborative approach that makes working in pairs rather than individually the primary work style for code development. Because PP is a radically different approach than many developers are used to, it can be hard to predict the effects when a team switches to PP. Because projects focus on different things, this article concentrates on understanding general aspects related to effectiveness, specifically project duration, effort, and quality. Not unexpectedly, our meta-analysis showed that the question of whether two heads are better than one isn't precise enough to be meaningful. Given the evidence, the best answer is ""it depends"" - on both the programmer's expertise and the complexity of the system and tasks to be solved. Two heads are better than one for achieving correctness on highly complex programming tasks. They might also have a time gain on simpler tasks. Additional studies would be useful. For example, further investigation is clearly needed into the interaction of complexity and programmer experience and how they affect the appropriateness of a PP approach; our current understanding of this phenomenon rests chiefly on a single (although large) study. Only by understanding what makes pairs work and what makes them less efficient can we take steps to provide beneficial work conditions, to avoid detrimental conditions, and to avoid pairing altogether when conditions are detrimental. With the right cooks and the right combination of ingredients, the broth has the potential to be very good indeed.",2007,1, 36,Do SQA Programs Work - CMM Works. A Meta Analysis,"Many software development professionals and managers of software development organizations are not fully convinced in the profitability of investments for the advancement of SQA systems. The results included in each of the articles we found, cannot lead to general conclusions on the impact of investments in upgrading an SQA system. Our meta analysis was based on CMM level transition (CMMLT) analysis of available publications and was for the seven most common performance metric. The CMMLT analysis is applicable for combined analysis of empirical data from many sources. Each record in our meta analysis database is calculated as ""after-before ratio"", which is nearly free of the studied organization's characteristics. Because the CMM guidelines and SQA requirement are similar, we claim that the results for CMM programs are also applicable to investments in SQA systems. 
The extensive database of over 1,800 projects from a variety of 19 information sources leading to the meta analysis results - proved that investments in CMM programs and similarly in SQA systems contribute to software development performance.",2005,1, 37,Are CMM Program Investments Beneficial? Analyzing Past Studies,"CMM experts strongly believe that investments in programs promoting an organization's CMM maturity yield substantial organizational and economic benefits. In particular, they argue that CMM programs that implement software process improvements can provide more benefits",2006,1, 38,"Capture-recapture in software inspections after 10 years research - theory, evaluation and application.","Software inspection is a method to detect faults in the early phases of the software life cycle. In order to estimate the number of faults not found, capture-recapture was introduced for software inspections in 1992 to estimate remaining faults after an inspection. Since then, several papers have been written in the area, concerning the basic theory, evaluation of models and application of the method. This paper summarizes the work made in capture-recapture for software inspections during these years. Furthermore, and more importantly, the contribution of the papers are classified as theory, evaluation or application, in order to analyse the performed research as well as to highlight the areas of research that need further work. It is concluded that (1) most of the basic theory is investigated within biostatistics, (2) most software engineering research is performed on evaluation, a majority ending up in recommendation of the Mh-JK model, and (3) there is a need for application experiences. In order to support the application, an inspection process is presented with decision points based on capture-recapture estimates.",2004,1, 39,A systematic review of statistical power in software engineering experiments.,"Statistical power is an inherent part of empirical studies that employ significance testing and is essential for the planning of studies, for the interpretation of study results, and for the validity of study conclusions. This paper reports a quantitative assessment of the statistical power of empirical software engineering research based on the 103 papers on controlled experiments (of a total of 5,453 papers) published in nine major software engineering journals and three conference proceedings in the decade 1993–2002. The results show that the statistical power of software engineering experiments falls substantially below accepted norms as well as the levels found in the related discipline of information systems research. Given this study's findings, additional attention must be directed to the adequacy of sample sizes and research designs to ensure acceptable levels of statistical power. Furthermore, the current reporting of significance tests should be enhanced by also reporting effect sizes and confidence intervals.",2007,1, 40,Software effort estimation terminology: The tower of Babel.,"It is well documented that the software industry suffers from frequent cost overruns. A contributing factor is, we believe, the imprecise estimation terminology in use. A lack of clarity and precision in the use of estimation terms reduces the interpretability of estimation accuracy results, makes the communication of estimates difficult, and lowers the learning possibilities. 
This paper reports on a structured review of typical software effort estimation terminology in software engineering textbooks and software estimation research papers. The review provides evidence that the term ?effort estimate? is frequently used without sufficient clarification of its meaning, and that estimation accuracy is often evaluated without ensuring that the estimated and the actual effort are comparable. Guidelines are suggested on how to reduce this lack of clarity and precision in terminology.",2006,1, 41,In Search of What We Experimentally Know about Unit Testing,"Gathering evidence in any discipline is a lengthy procedure, requiring experimentation and empirical confirmation to transform information from mere opinion to undisputed fact. Software engineering is a relatively young field and experimental SE is even younger, so undisputed facts are few and far between. Nevertheless, ESE's relevance is growing because experimental results can help practitioners make better decisions. We have aggregated results from unit-testing experiments with the aim of identifying information with some experimental basis that might help practitioners make decisions. Most of the experiments focus on two important characteristics of testing techniques: effectiveness and efficiency. Some other experiments study the quality of test-case sets according to different criteria",2006,1, 42,Precise Identification of Side-Effect-Free Methods in Java,"Knowing which methods do not have side effects is necessary in a variety of software tools for program understanding, restructuring, optimization, and verification. We present a general approach for identifying side-effect-free methods in Java software. Our technique is parameterized by class analysis and is designed to work on incomplete programs. We present empirical results from two instantiations of the approach, based on rapid type analysis and on points-to analysis. In our experiments with several components, on average 22% of the investigated methods were identified as free of side effects. We also present a precision evaluation which shows that the approach achieves almost perfect precision - i.e., it almost never misses methods that in reality have no side effects. These results indicate that very precise identification of side-effect-free methods is possible with simple and inexpensive analysis techniques, and therefore can be easily incorporated in software tools.",2004,1, 43,Reviewing 25 Years of Testing Technique Experiments,"Mature knowledge allows engineering disciplines the achievement of predictable results. Unfortunately, the type of knowledge used in software engineering can be considered to be of a relatively low maturity, and developers are guided by intuition, fashion or market-speak rather than by facts or undisputed statements proper to an engineering discipline. Testing techniques determine different criteria for selecting the test cases that will be used as input to the system under examination, which means that an effective and efficient selection of test cases conditions the success of the tests. The knowledge for selecting testing techniques should come from studies that empirically justify the benefits and application conditions of the different techniques. This paper analyzes the maturity level of the knowledge about testing techniques by examining existing empirical studies about these techniques. 
We have analyzed their results, and obtained a testing technique knowledge classification based on their factuality and objectivity, according to four parameters.",2004,1, 44,What Do We Know about Defect Detection Methods?,"A survey of defect detection studies comparing inspection and testing techniques yields practical recommendations: use inspections for requirements and design defects, and use testing for code. Evidence-based software engineering can help software practitioners decide which methods to use and for what purpose. EBSE involves defining relevant questions, surveying and appraising avail able empirical evidence, and integrating and evaluating new practices in the target environment. This article helps define questions regarding defect detection techniques and presents a survey of empirical studies on testing and inspection techniques. We then interpret the findings in terms of practical use. The term defect always relates to one or more underlying faults in an artifact such as code. In the context of this article, defects map to single faults",2006,1, 45,On the success of empirical studies in the international conference on software engineering,"Critiques of the quantity and quality of empirical evaluations in software engineering have existed for quite some time. However such critiques are typically not empirically evaluated. This paper fills this gap by empirically analyzing papers published by ICSE, the prime research conference on Software Engineering. We present quantitative and qualitative results of a quasi-random experiment of empirical evaluations over the lifetime of the conference. Our quantitative results show the quantity of empirical evaluation has increased over 29 ICSE proceedings but we still have room to improve the soundness of empirical evaluations in ICSE proceedings. Our qualitative results point to specific areas of improvement in empirical evaluations.",2006,1, 46,A critical evaluation of literature on visual aesthetics for the web,
This paper reviews the current state of literature on visual aesthetics for the web. This was done by referring to recent contributions of authors in the area of visual aesthetics. Specific focus areas included: authors' perception of the importance of visual aesthetics; how visual aesthetics affect communication; and guidelines and suggestions on how to apply visual aesthetics. The authors also briefly suggest an appropriate research approach when studying visual aesthetics.
,2004,0, 47,A fault-tolerant approach to test control utilizing dual-redundant processors,"A simple dual-redundant fault-tolerant test control system architecture has been designed, developed, and demonstrated in a real-time environment using inexpensive personal computers. A survey of existing fault-tolerant control systems was performed to assess the relative cost and capabilities of currently available technology. A cost-benefit analysis was performed comparing the relative benefit of this system to triplex systems and non-fault-tolerant systems for various applications. Functionally identical implementations of a prototype proof-of-concept software design were constructed in two different languages and tested using a unit-under-test model. Bugs (faults) were injected into this model to verify the ability of the system to reliably detect anomalous test hardware operation. Also, simulated bugs (faults) were introduced to verify smooth control transfer between primary and standby, both nominally and in the presence of hardware-under-tests anomalies. Results indicate significant improvement in system reliability, sufficient to justify the additional cost of the proposed duplex system for many potential users.",2008,0, 48,A First Approach to a Data Quality Model for Web Portals,"Advances in technology and the use of the Internet have favoured the emergence of a large number of Web applications, including Web Portals. Web portals provide the means to obtain a large amount of information therefore it is crucial that the information provided is of high quality. In recent years, several research projects have investigated Web Data Quality; however none has focused on data quality within the context of Web Portals. Therefore, the contribution of this research is to provide a framework centred on the point of view of data consumers, and that uses a probabilistic approach for Web portal's data quality evaluation. This paper shows the definition of operational model, based in our previous work.",2007,0, 49,A Framework for Understanding the Factors Influencing Pair Programming Success,"
Pair programming is one of the more controversial aspects of several Agile system development methods, in particular eXtreme Programming (XP). Various studies have assessed factors that either drive the success or suggest advantages (and disadvantages) of pair programming. In this exploratory study the literature on pair programming is examined and factors distilled. These factors are then compared and contrasted with those discovered in our recent Delphi study of pair programming. Gallis et al. (2003) have proposed an initial framework aimed at providing a comprehensive identification of the major factors impacting team programming situations including pair programming. However, this study demonstrates that the framework should be extended to include an additional category of factors that relate to organizational matters. These factors will be further refined, and used to develop and empirically evaluate a conceptual model of pair programming (success).
",2005,0, 50,A Methodology for Identifying Critical Success Factors That Influence Software Process Improvement Initiatives: An Application in the Brazilian Software Industry,"
Continuous improvement of software development capability is fundamental for organizations to thrive in competitive markets. Nevertheless, Software Process Improvement (SPI) initiatives have demonstrated limited results because SPI managers usually fail to cope with factors that have influence on the success of SPI. In this paper, we present the results of a multistrategy approach aiming to identify critical success factors (CSF) that have influence on SPI. The study results were confirmed by the literature review. The CSF were identified through a combination of qualitative and quantitative analyses of the results of a survey we conducted with SPI practitioners involved in Brazilian software industry experiences. We also identified the relationships of major factors that emerged from the survey. We expect that the major CSF presented in this paper can be used by SPI managers in the definition of SPI strategies aiming to enhance SPI initiatives success.
",2007,0, 51,A Model for Requirements Change Management: Implementation of CMMI Level 2 Specific Practice,"
OBJECTIVE --- The objective of this research is to implement CMMI Level 2 specific practice --- SP 1.3-1 manage requirements changes. In this paper we have proposed a model for requirements change management and also discussed initial validation of this model. This model is based on both an empirical study that we have carried out and our extensive literature review of software process improvement (SPI) and requirements engineering (RE).

METHOD --- For data collection we have interviewed SPI experts from reputed organisations. Further work includes analysing research articles, published experience reports and case studies. The initial evaluation of the model was performed via an expert review process.

RESULTS --- Our model is based on five core elements identified from literature and interviews: request, validate, implement, verify and update. Within each of these elements we have identified specific activities that need to take place during requirements change management process.

CONCLUSIONS --- The initial evaluation of the model shows that the requirements change management model is clear, easy to use and can effectively manage the requirements change process. However, more case studies are needed to evaluate this model in order to further evaluate its effectiveness in the domain of RE process.

",2008,0, 52,A Strategic Descriptive Review of the Intelligent Decision-making Support Systems Research: the 1980?2004 Period,"Abstract About 25 years ago, the Nobel laureate Herbert A. Simon and other top Management Science/Operations Research (MS/OR) and Artificial Intelligence (AI) researchers, suggested that an integration of the two disciplines would improve the design of decision-making support tools in organizations. The suggested integrated system has been called an intelligent decision-making support system (i-DMSS). In this chapter, we use an existing conceptual framework posed to assess the capabilities and limitations of the i-DMSS concept, and through a conceptual metaanalysis research of the Decision Support System (DSS) and AI literature from 1980 to 2004, we develop a strategic assessment of the initial proposal. Such an analysis reveals support gaps that suggest further development of the initial i-DMSS concept is needed. We offer recommendations for making the indicated improvements in i-DMSS design, development, and application.",2006,0, 53,A Survey of Software Estimation Techniques and Project Planning Practices,"Paper provides in depth review of software and project estimation techniques existing in industry and literature, its strengths and weaknesses. Usage, popularity and applicability of such techniques are elaborated. In order to improve estimation accuracy, such knowledge is essential. Many estimation techniques, models, methodologies exists and applicable in different categories of projects. None of them gives 100% accuracy but proper use of them makes estimation process smoother and easier. Organizations should automate estimation procedures, customize available tools and calibrate estimation approaches as per their requirements. Proposed future work is to study factors involved in Software Engineering Approaches (Software Estimation in focus) for Offshore and Outsourced Software Development taking Pakistani IT Industry as a Case Study",2006,0, 54,A survey on model-based testing approaches: a systematic review,"This paper describes a systematic review performed on model-based testing (MBT) approaches. A selection criterion was used to narrow the initially identified four hundred and six papers to focus on seventy-eight papers. Detailed analysis of these papers shows where MBT approaches have been applied, the characteristics, and the limitations. The comparison criteria includes representation models, support tools, test coverage criteria, the level of automation, intermediate models, and the complexity. This paper defines and explains the review methodology and presents some results.",2009,0, 55,A Systematic Review Measurement in Software Engineering: State-of-the-Art in Measures,"The present work provides a summary of the state of art in software measures by means of a systematic review on the current literature. Nowadays, many companies need to answer the following questions: How to measure?, When to measure and What to measure?. There have been a lot of efforts made to attempt to answer these questions, and this has resulted in a large amount of data what is sometimes confusing and unclear information. This needs to be properly processed and classified in order to provide a better overview of the current situation. We have used a Measurement Software Ontology to classify and put the amount of data in this field in order. 
We have also analyzed the results of the systematic review, to show the trends in the software measurement field and the software process on which the measurement efforts have focused. It has allowed us to discover what parts of the process are not supported enough by measurements, to thus motivate future research in those areas.",2012,0, 56,Adoption-Centric Software Maintenance Process Improvement via Information Integration,"Software process improvement is an iterative activity, normally involving measurement, analysis, and change. For most organizations, the existing software process has substantial momentum and is seemingly immovable. Any change to existing process activities causes turbulence in the organization, which can be a significant barrier to adoption of the quality improvement initiative. This paper presents a quiescent, non-invasive, and adoption-centric approach to process improvement for software maintenance. The approach realizes the goal of improving the efficiency of existing processes by minimizing changes to existing workflows and focusing on integrating enhancements at the micro-level of the system. By leveraging information buried in existing data, making it explicit, and integrating the results with known facts, more informed decision-making is made possible. The approach is illustrated with a model problem concerning redocumentation of an embedded control system in the context of performing higher-quality software maintenance",2005,0, 57,Advances in dataflow programming languages.,"Many developments have taken place within dataflow programming languages in the past decade. In particular, there has been a great deal of activity and advancement in the field of dataflow visual programming languages. The motivation for this article is to review the content of these recent developments and how they came about. It is supported by an initial review of dataflow programming in the 1970s and 1980s that led to current topics of research. It then discusses how dataflow programming evolved toward a hybrid von Neumann dataflow formulation, and adopted a more coarse-grained approach. Recent trends toward dataflow visual programming languages are then discussed with reference to key graphical dataflow languages and their development environments. Finally, the article details four key open topics in dataflow programming languages.",1994,0, 58,Agile Methods: The Gap between Theory and Practice,"Since the software crisis of the 1960’s, numerous methodologies have been developed to impose a disciplined process upon software development. Today, these methodologies are noted for being unsuccessful and unpopular due to their increasingly bureaucratic nature. Many researchers and academics are calling for these heavyweight methodologies to be replaced by agile methods. However, there is no consensus as to what constitutes an agile method. An Agile Manifesto was put forward in 2001, but many variations, such as XP, SCRUM and Crystal exist. Each adheres to some principles of the Agile Manifesto and disregards others. My research proposes that these principles lack grounding in theory, and lack a respect for the concept of agility outside the field of Information Systems Development (ISD). This study aims to develop a comprehensive framework of ISD agility, to determine if this framework is adhered to in practice and to determine if such adherence is rewarded. The framework proposes that it is insufficient to just accept agile methods as superior to all others. 
In actual fact, an ISD team have to identify whether they need to be agile, and to compare this to their agile capabilities before deciding how agile their eventual method should be. Furthermore, this study proposes that an agile method is not just accepted and used. Rather it may be selected from a portfolio of methods, it may be constructed from parts of methods, or indeed it may be the product of the ISD team’s deviation from a different method altogether. Finally, this study recognises that agility does not simply come from a method. In actual fact, a cross-disciplinary literature review suggests that it is important to classify sources of agility, which could be the people on the team, the way they are organised, the technology they use or the external environment with which they interact. A three-phase research method is adopted, incorporating a set of pilot interviews, a large-scale survey and finally, a set of case studies. The survey is intended to produce generalisable results while the case studies are carried out to obtain much-needed qualitative information in an emerging field where little is currently known.",2015,0, 59,An Empirical Exploration of the Distributions of the Chidamber and Kemerer Object-Oriented Metrics Suite,"

The object-oriented metrics suite proposed by Chidamber and Kemerer (CK) is a measurement approach towards improved object-oriented design and development practices. However, existing studies evidence traces of collinearity between some of the metrics and low ranges of other metrics, two facts which may endanger the validity of models based on the CK suite. As high correlation may be an indicator of collinearity, in this paper, we empirically determine to what extent high correlations and low ranges might be expected among CK metrics.

To draw conclusions that are as general as possible, we extract the CK metrics from a large data set (200 public domain projects) and we apply statistical meta-analysis techniques to strengthen the validity of our results. Homogeneously across the projects, we found a moderate (∼0.50) to high correlation (>0.80) between some of the metrics and low ranges of other metrics.

Results of this empirical analysis supply researchers and practitioners with three main pieces of advice: a) to avoid using, in prediction systems, CK metrics that have a correlation higher than 0.80; b) to test for collinearity those metrics that present moderate correlations (between 0.50 and 0.60); c) to avoid using metrics presenting low variance as the response in continuous parametric regression analysis. This might therefore suggest that a prediction system may not be based on the whole CK metrics suite, but only on a subset consisting of those metrics that do not present either high correlation or low ranges.

",2005,0, 60,An Empirical Study to Investigate Software Estimation Trend in Organizations Targeting CMMI^SM,"This paper discusses software estimation practices existing in industry and literature, its strengths and weaknesses. Main focus is the gap analysis of organization with respect to CMMI level 3 for SE/SW/IPPD/SS. Data collection reveals that company makes use of heuristic approaches in which expert judgment supplemented with wideband Delphi, was mainly used for software estimation. In the light of CMMI level 3 for SE/SW, it was suggested that formal methods for estimating size, effort and cost for the project should be implemented apart from heuristics used for estimation. Different estimation methodologies are applicable in different categories of projects. None of them gives 100% accuracy but proper use of them makes estimation process smoother. Future work is calibration of parametric software estimation approaches for the organization under study by making use of organizational process database to plan, estimate and tailor project variables that best suits organization's processes and procedures",2006,0, 61,Architecture-Based Software Reliability Analysis: Overview and Limitations,"With the growing size and complexity of software applications, research in the area of architecture-based software reliability analysis has gained prominence. The purpose of this paper is to provide an overview of the existing research in this area, critically examine its limitations, and suggest ways to address the identified limitations",2007,0, 62,Autonomic and Trusted Computing Paradigms,"The emerging autonomic computing technology has been hailed by world-wide researchers and professionals in academia and industry. Besides four key capabilities, well known as self-CHOP, we propose an additional self-regulating capability to explicitly emphasize the policy-driven self-manageability and dynamic policy derivation and enactment. Essentially, these five capabilities, coined as Self-CHROP, define an autonomic system along with other minor properties. Trusted computing targets guaranteed secured systems. Self-protection alone does not ensure the trustworthiness in autonomic systems. The new trend is to integrate both towards trusted autonomic computing systems. This paper presents a comprehensive survey of the autonomic and trusted computing paradigms and a preliminary conceptual architecture towards trustworthy autonomic grid computing.",2013,0, 63,Building Reverse Engineering Tools with Software Components: Ten Lessons Learned,"My dissertation explores a new approach to construct tools in the domain of reverse engineering. The approach uses already available software components as building blocks, combining and customizing them programmatically. This approach can be characterized as component-based tool-building. The goal of the dissertation is to advance the current state of component-based tool-building towards a discipline that is more predictable and formal. This is achieved with three research contributions: (1) an in-depth literature survey that identifies requirements for reverse engineering tools, (2) a number of tool case studies that utilize component-based tool-building, (3) and ten lessons learned for tool builders that have been distilled from these case studies.",2007,0, 64,Case Study of Breakdown Analysis on Identification of Remote Team Communication Problems,"

The purpose is to apply breakdown analysis in identifying problems of distributed communication. Sample comprises Intranet and Internet teams. The methodology is breakdown analysis. Research framework comprises user-user, user-tool and user-task. The tools include videoconferencing and data conferencing. Transcript coding and qualitative analysis were followed. Procedures include literature review, development of framework, sampling, tool setup and breakdown analysis. Five problem indicators of user-user included unclearness of participant's oral expression, disagreement, off-task, no answer and keep silence. Problem indicators of user-tool were incorrect configuration, unstable facilities and broadband, unfamiliarity with application and facilities. User-task problem indicators included uncompleted task, participant's lateness and ignorance of assigned task. Causes of problems included participant's familiarity, ignorance of task and lateness in meeting. There was no difference of problem indicators between Intranet and Internet connection. Implications included consideration of participant's familiarity, asynchronous communica-tion is in need during inter-meeting and better planning and preparation of facilitator.

",2005,0, 65,Changing perceptions of CASE technology,"The level to which CASE technology has been successfully deployed in IS and software development organisations has been at best variable. Much has been written about an apparent mismatch between user expectations of the technology and the products which are developed for the growing marketplace. In this paper we explore how this tension has developed over time, with the aim of identifying and characterising the major factors contributing to it. We identify three primary themes: volatility and plurality in the marketplace; the close relationship between tools and development methods; and the context sensitivity of feature assessment. By exploring the tension and developing these themes we hope to further the debate on how to improve evaluation of CASE prior to adoption.",2009,0, 66,Cognitive differences between procedural programming and object oriented programming,"Software development is moving from procedural programming towards object-oriented programming (OOP). Past studies in cognitive aspects of programming have focused primarily on procedural programming languages. Object-oriented programming is a new paradigm for computing. Industry is finding that programmers are having difficulty shifting to this new programming paradigm. Findings in prior research revealed that procedural programming requires Piaget's formal operation cognitive level. New from this research is that OOP also requires Piaget's formal operation cognitive level. Also new is that OOP appears to be unrelated to hemispheric cognitive style. OOP appears to be hemispheric style friendly, while procedural programming is preferential to left hemispheric cognitive style. The conclusion is that cognitive requirements are not the cause for the difficulty in shifting from procedural to OOP. An alternative possibility to the difficulty is proactive interference of learning procedural programming prior to learning object oriented programming.",2005,0, 67,Combining Behaviour and Structure,,2010,0, 68,Commonalities in Risk Management and Agile Process Models,"On the surface, agile and risk management process models seem to constitute two contrasting approaches. Risk management follows a heavyweight approach whereas agile process models oppose it. In this paper, we identify commonalities in these two process models. Our results show that they have much in common, and that a merge between them is possible.",2007,0, 69,Component airbag: a novel approach to develop dependable component-based applications,"The increasing use of ""commercial off-the-shelf"" (COTS) components in safety critical scenarios, arises new issues related to the ""dependable"" use of third-party software in such contexts. The characteristics of these components, designed for a generic use, are such to make unpredictable the effects of their use whenever they are integrated in the entire system. The author's Ph.D project aim at proposing an approach to improve dependability of COTS based application, which consists of the following phases: i) each component is stimulated by proper workloads in order to learn the failure behavior; ii) from failure behaviors, the component failure model is defined; and iii) once the failure model is known for each component, the ""component airbag"" is thus created, i.e. a container able of exploiting the failure model in order to monitor and prevent the component from failing. 
An existent literature analysis, regarding the more used dependability assessment and improvement strategies, is also presented.",2007,0, 70,Conceptual Modeling for Simulation: Issues and Research Requirements,"It is generally recognized that conceptual modeling is one of the most vital parts of a simulation study. At the same time, it also seems to be one of the least understood. A review of the extant literature on conceptual modeling reveals a range of issues that need to be addressed: the definition of conceptual model(ling), conceptual model requirements, how to develop a conceptual model, conceptual model representation and communication, conceptual model validation, and teaching conceptual modeling. It is clear that this is an area ripe for further research, for the clarification of ideas and the development of new approaches. Some areas in which further research could be carried out are identified",2006,0, 71,Critic systems - Towards human-computer collaborative problem solving,"Human-computer collaboration is extremely necessary for solving ill-structured problems and critic systems can effectively facilitate human-computer collaborative problem solving. This paper conducts a systematic study on critic systems. First, the concepts of critic systems are presented. Then, a literature review is presented on critic systems. Afterwards, a generic architecture is put forward for critic systems, with its important aspects being analyzed. Finally, two case studies are given to illustrate critic systems.",2016,0, 72,Critical success factors for software process improvement implementation: An empirical study,"In this article, we present findings from our recent empirical study of the critical success factors (CSFs) for software process improvement (SPI) implementation with 34 SPI practitioners. The objective of this study is to provide SPI practitioners with sufficient knowledge about the nature of issues that play a positive role in the implementation of SPI programmes in order to assist them in effectively planning SPI implementation strategies. Through our empirical study we identified seven factors (higher management support, training, awareness, allocation of resources, staff involvement, experienced staff and defined SPI implementation methodology) that are generally considered critical for successfully implementing SPI. We also report on a literature survey of CSFs that impact SPI and identify six factors (senior management commitment, staff involvement, staff time and resources, training and mentoring, creating process action teams and reviews). We compared our empirical study results with the literature and confirmed the factors identified in the literature, and also identified two new CSFs (SPI awareness and defined SPI implementation methodology) that were not identified in the literature. Finally, we analyzed the CSFs identified by different groups of practitioners and found that they are aware of what is imperative for the successful implementation of SPI programmes. ",2011,0, 73,Designing Mobile Shared Workspaces for Loosely Coupled Workgroups,"

Recent advances in mobile computing devices and wireless communication have brought the opportunity to transport the shared workspace metaphor to mobile work scenarios. Unfortunately, there are few guidelines to support the design of these mobile shared workspaces. This paper proposes a design process and several guidelines to support the modeling of these groupware systems. Particularly, workspaces that support loosely coupled workgroups. The process and guidelines are based on a literature review and authors' experience in the development of mobile shared workspaces.

",2007,0, 74,Developing an algorithm for si engine diagnosis using parity relations,"Diagnosis is an algorithm for finding and isolating faults in a dynamic system. In 1994, California designated some regulations which were called OBD II. According to these regulations, there is a system installed in an automobile which can analyze the function of the automobile continuously. The decrease of pollution for the expansion of diagnostic system is necessary in the future. To reach the aims of diagnosis, some redundancies are required in the system, either hardware or soft ware. In the hardware redundancy methods, the installation of additional sensors or actuators on the system is required which is costly and takes up a lot of space, whereas in software redundancy methods, this is done with no expense. In this article, one of the software redundancy methods or analytical methods is implied for solving the problem. At first a discussion on literature survey is mentioned, and then a modified mathematical model for SI engine is acquired. The usage of this method and parity space relations, which is a model based method, accomplished the process of diagnosis. Developing a modified SI engine model and diagnosis of MAT sensor which less has been considered besides other components are this article contributions.",2006,0, 75,Development of software engineering: A research perspective.,"In the past 40 years, software engineering has emerged as an important sub-field of computer science and has made significant contribution to the software industry. Now it is gradually becoming a new independent discipline. This paper presents a survey of software engineering development from a research perspective. Firstly, the history of software engineering is reviewed with focus on the driving forces of software technology, the software engineering framework and the milestones of software engineering development. Secondly, after reviewing the past academic efforts, the current research activities are surveyed and new challenges brought by Internet are analyzed. Software engineering researches and activities in China are also reviewed. The work in Peking University is described as a representative.",2013,0, 76,Distributed real time database systems: background and literature review,"

Today's real-time systems (RTS) are characterized by managing large volumes of dispersed data making real-time distributed data processing a reality. Large business houses need to do distributed processing for many reasons, and they often must do it in order to stay competitive. So, efficient database management algorithms and protocols for accessing and manipulating data are required to satisfy timing constraints of supported applications. Therefore, new research in distributed real-time database systems (DRTDBS) is needed to investigate possible ways of applying database systems technology to real-time systems. This paper first discusses the performance issues that are important to DRTDBS, and then surveys the research that has been done so far on the issues like priority assignment policy, commit protocols and optimizing the use of memory in non-replicated/replicated environment pertaining to distributed real time transaction processing. In fact, this study provides a foundation for addressing performance issues important for the management of very large real time data and pointer to other publications in journals and conference proceedings for further investigation of unanswered research questions.

",2008,0, 77,Effective Data Interpretation,"Data interpretation is an essential element of mature software project management and empirical software engineering. As far as project management is concerned, data interpretation can support the assessment of the current project status and the achievement of project goals and requirements. As far as empirical studies are concerned, data interpretation can help to draw conclusions from collected data, support decision making, and contribute to better process, product, and quality models. With the increasing availability and usage of data from projects and empirical studies, effective data interpretation is gaining more importance. Essential tasks such as the data-based identification of project risks, the drawing of valid and usable conclusions from individual empirical studies, or the combination of evidence from multiple studies require sound and effective data interpretation mechanisms. This article sketches the progress made in the last years with respect to data interpretation and states needs and challenges for advanced data interpretation. In addition, selected examples for innovative data interpretation mechanisms are discussed. ",2016,0, 78,Embedded Systems Development: Quest for Productivity and Reliability,"It is widely agreed that the state of art in methodologies, techniques and tools for embedded systems development is many years behind their desktop counterparts. The job of the embedded software developer is further complicated by the increasing tendency of system designers to shift functionality and complexity away from hardware and into software. As part of an ongoing research at Philips Semiconductors, we have been investigating solutions to the two main problems of productivity and reliability in embedded software development. This paper describes our research effort in this investigation. Specifically, the paper first describes the requirements on embedded systems and their development challenges. It then provides a literature survey of some techniques that address the issues of productivity and reliability. With this background the paper proposes a model driven architectural approach to embedded software development.",2006,0, 79,Empiricism in Computer Science,"As the name computer science already implies, the study of computers is a field of science. Computer Science (CS) as it exists today lacks to a certain extent, what other sciences rely most on: An empirical body of knowledge. This paper looks at several meta studies which have analyzed the presence of empirical data on CS subjects. It also provides an overview of empiricism in general, some empirical concepts and where computer science and empiricism intersect.",1992,0, 80,Empowering the users? A critical textual analysis of the role of users in open source software development,"

This paper outlines a critical, textual approach for the analysis of the relationship between different actors in information technology (IT) production, and further concretizes the approach in the analysis of the role of users in the open source software (OSS) development literature. Central concepts of the approach are outlined. The role of users is conceptualized as reader involvement aiming to contribute to the configuration of the reader (to how users and the parameters for their work practices are defined in OSS texts). Afterwards, OSS literature addressing reader involvement is critically reviewed. In OSS context, the OSS writers as readers configure the reader and other readers are assumed to be capable of and interested in commenting the texts. A lack of OSS research on non-technical reader involvement is identified. Furthermore, not only are the OSS readers configured, but so are OSS writers. In OSS context while writers may be empowered, this clearly does not apply to the non-technical OSS readers. Implication for research and practice are discussed.

",2008,0, 81,Engineering the Ontology for the SWEBOK: Issues and Techniques,"Auyang [2] described engineering as “the science of production”. This and many other definitions of engineering put an emphasis on disciplined artifact creation as the essence of any engineering discipline. However, the material object produced by every engineering discipline is not necessarily of a similar nature. The case of software engineering is particularly relevant in the illustration of such differences, since software as an artifact is",2003,0, 82,"Ethnography, scenario-based observational usability study, and other reviews inform the design of a web-based E-notebook","As users turn to the World Wide Web to accomplish an increasing variety of daily tasks, many engage in information assimilation (IA), a process defined as the gathering, editing, annotating, organizing, and saving of Web information, and the tracking of ongoing Web work processes. The process of IA, which is similar to traditional note taking but in the Web environment, emerges from a literature review and an ethnographic field study, as presented in this article. Despite strong evidence which suggests that IA is critical to many Web users, however, a scenario-based observational usability study and a heuristic evaluation indicate that it is currently not well supported by existing software applications. This article, which culminates in the presentation of NetNotes-a Web-based e-notebook developed specifically to support the process of IA-illustrates how design requirements can be effectively extracted and synthesized from a variety of complementary background user studies.",2004,0, 83,Evaluating Quality in Model-Driven Engineering,"In model-driven engineering (MDE), models are the prime artifacts, and developing high-quality systems depends on developing high-quality models and performing transformations that preserve quality or even improve it. This paper presents quality goals in MDE and states that the quality of models is affected by the quality of modeling languages, tools, modeling processes, the knowledge and experience of modelers, and the quality assurance techniques applied. The paper further presents related work on these factors and identifies pertinent research challenges. Some quality goals such as well-formedness and precision are especially important in MDE. Research on quality in MDE can promote adoption of MDE for complex system engineering.",2007,0, 84,Evaluating Software Project Prediction Systems,"The problem of developing usable software project cost prediction systems is perennial and there are many competing approaches. Consequently, in recent years there have been exhortations to conduct empirically based evaluations in order that our understanding of project prediction might be based upon real world evidence. We now find ourselves in the interesting position of possessing this evidence in abundance. For example, a review of just three software engineering journals identified 50 separate studies and overall several hundred studies have been published. This naturally leads to the next step of needing to construct a body of knowledge, particularly when not all evidence is consistent. This process of forming a body of knowledge is generally referred to as metaanalysis. It is an essential activity if we are to have any hope of making sense of, and utilising, results from our empirical studies. 
However, it becomes apparent that when systematically combining results many difficulties are encountered",2005,0, 85,Evolutionary Scheduling: A Review,"Early and seminal work which applied evolutionary computing methods to scheduling problems from 1985 onwards laid a strong and exciting foundation for the work which has been reported over the past decade or so. A survey of the current state-of-the-art was produced in 1999 for the European Network of Excellence on Evolutionary Computing EVONET¿this paper provides a more up-to-date overview of the area, reporting on current trends, achievements, and suggesting the way forward.",2007,0, 86,Experiences from Conducting Semi-structured Interviews in Empirical Software Engineering Research,"Many phenomena related to software development are qualitative in nature. Relevant measures of such phenomena are often collected using semi-structured interviews. Such interviews involve high costs, and the quality of the collected data is related to how the interviews are conducted. Careful planning and conducting of the interviews are therefore necessary, and experiences from interview studies in software engineering should consequently be collected and analyzed to provide advice to other researchers. We have brought together experiences from 12 software engineering studies, in which a total of 280 interviews were conducted. Four areas were particularly challenging when planning and conducting these interviews; estimating the necessary effort, ensuring that the interviewer had the needed skills, ensuring good interaction between interviewer and interviewees, and using the appropriate tools and project artifacts. The paper gives advice on how to handle these areas and suggests what information about the interviews should be included when reporting studies where interviews have been used in data collection. Knowledge from other disciplines is included. By sharing experience, knowledge about the accomplishments of software engineering interviews is increased and hence, measures of high quality can be achieved",2005,0, 87,Exploring the Computing Literature Using Temporal Graph Visualization,"We present a system for the visualization of computing literature with an emphasis on collaboration patterns, interactions between related research specialties and the evolution of these characteristics through time. Our computing literature visualization system, has four major components: A mapping of bibliographical data to relational schema coupled with an RDBMS to store the relational data, an interactive GUI that allows queries and the dynamic construction of graphs, a temporal graph layout algorithm, and an interactive visualization tool. We use a novel technique for visualization of large graphs that evolve through time. Given a dynamic graph, the layout algorithm produces two-dimensional representations of each timeslice, while preserving the mental map of the graph from one slice to the next. A combined view, with all the timeslices can also be viewed and explored. For our analysis we use data from the Association of Computing Machinery's Digital Library of Scientific Literature which contains more than one hundred thousand research papers and authors. Our system can be found online at http://tgrip.cs.arizona.edu.",2004,0, 88,Figure Out the Current Software Requirements Engineering - What Practitioners Expect to Requirements Engineering? 
-,"This research aims to grasp and describe what Requirements Engineering(RE) covers, what RE tries to solve and what should RE be in the future. For these purposes, the authors did the literature survey and interviews with the authorities of RE and practitioners. The literature survey targeted over 700 papers and reports published from 2001 to 2005 in major RE conferences and journals in order to capture the influential papers and the trend of topics. The interviews targeted 13 authorities in RE academic field and 7 practitioners who have much knowledge and experiences of RE. One of the most important results of this study is the RE area quadrant which shows the overview of RE field. This quadrant supports to find out what topics of RE would be effective for practitioners' issue. Another important finding is the gap between practitioners' expectation and researchers work from the interviews to both sides. This research helps to know the current figure of RE and helps to know what RE should tackle with.",2007,0, 89,Forward and Bidirectional Planning Based on Reinforcement Learning and Neural Networks in a Simulated Robot,"Building intelligent systems that are capable of learning, acting reactively and planning actions before their execution is a major goal of artificial intelligence. This paper presents two reactive and planning systems that contain important novelties with respect to previous neural-network planners and reinforcement-learning based planners: (a) the introduction of a new component (matcher) allows both planners to execute genuine taskable planning (while previous reinforcement-learning based models have used planning only for speeding up learning); (b) the planners show for the first time that trained neural-network models of the world can generate long prediction chains that have an interesting robustness with regards to noise; (c) two novel algorithms that generate chains of predictions in order to plan, and control the flows of information between the systems different neural components, are presented; (d) one of the planners uses backward predictions to exploit the knowledge of the pursued goal; (e) the two systems presented nicely integrate reactive behavior and planning on the basis of a measure of confidence in action. The soundness and potentialities of the two reactive and planning systems are tested and compared with a simulated robot engaged in a stochastic path-finding task. The paper also presents an extensive literature review on the relevant issues.",2003,0, 90,From Autonomy to AOC,"Autonomy oriented computing (AOC) is a new bottom-up paradigm for problem solving and complex systems modeling. In this book, our goal is to substantiate this very statement and to demonstrate useful AOC methodologies and applications. But, before we do so, we need to understand some of the most fundamental issues involved: What are the general characteristics of complex systems consisting of autonomous entities? What types of behavior can a single or a collection of autonomous entities exhibit or generate? How can we give a definition of autonomy based on the notion of behavior? In a bottom-up computing system, how can the property of autonomy be modeled and utilized? What types of problem is such a bottom-up computing paradigm indented to solve? How different is this AOC paradigm from other previous or current computing paradigms?",2007,0, 91,History and literature review,,1959,0, 92,How Artificial Intelligent Agents Do Shopping in a Virtual Mall: A ?Believable? and ?Usable? 
Multiagent-Based Simulation of Customers? Shopping Behavior in a Mall,"

Our literature review revealed that several applications successfully simulate certain kinds of human behaviors in spatial environments, but they have some limitations related to the 'believability' and the 'usability' of the simulations. This paper aims to present a set of requirements for multiagent-based simulations in terms of 'believability' and 'usability'. It also presents how these requirements have been put into use to develop a multiagent-based simulation prototype of customers' shopping behavior in a mall. Using software agents equipped with spatial and cognitive capabilities, this prototype can be considered sufficiently 'believable' and 'usable' for end-users, mainly mall managers in our case. We show how the shopping behavior simulator can support the decision-making process with respect to the spatial configuration of the shopping mall.

",2006,0, 93,Implementing requirements engineering processes throughout organizations: success factors and challenges,"This paper aims at identifying critical factors affecting organization-wide implementation of requirements engineering (RE) processes. The paper is based on a broad literature review and three longitudinal case studies that were carried out using an action research method. The results indicate that RE process implementation is a demanding undertaking, and its success greatly depends on such human factors as motivation, commitment and enthusiasm. Therefore, it is essential that the RE process is useful for its individual users. Furthermore, the results indicate that organizations can gain benefits from RE by defining a simple RE process, by focusing on a small set of RE practices, and by supporting the systematic usage of these practices.",2004,0, 94,Integrating XML and Relational Database Systems,"Relational databases get more and more employed in order to store the content of a web site. At the same time, XML is fast emerging as the dominant standard at the hypertext level of web site management describing pages and links between them. Thus, the integration of XML with relational database systems to enable the storage, retrieval, and update of XML documents is of major importance. Data model heterogeneity and schema heterogeneity, however, make this a challenging task. In this respect, the contribution of this paper is threefold. First, a comparison of concepts available in XML schema specification languages and relational database systems is provided. Second, basic kinds of mappings between XML concepts and relational concepts are presented and reasonable mappings in terms of mapping patterns are determined. Third, design alternatives for integrating XML and relational database systems are examined and X-Ray, a generic approach for integrating XML with relational database systems is proposed. Finally, an in-depth evaluation of related approaches illustrates the current state of the art with respect to the design goals of X-Ray.",1996,0, 95,Investigating the Role of Trust in Agile Methods Using a Light Weight Systematic Literature Review,"Abstract In this paper we use a cut down systematic literature review to investigate the role of trust in agile methods. Our main motivation is to investigate the impact of the enhanced role of developers in agile methods. It is important to investigate the role of trust in agile methods because according to the agile manifesto the role of individual developers is central in an agile team: Individuals and Interactions over processes and tools and Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done [1]. This suggests that managers must trust their staff to make decisions. The most direct forum for trust in agile projects is in the daily stand-up meeting. Project managers must trust that what developers say in the standup they are going to achieve during the day is what they actually achieve. In this paper we investigate the role trust plays in agile methods.",2008,0, 96,Looking Back and Looking Forward: Diffusion and Adoption of Information Technology Research in IFIP WG 8.6?Achievements and Future Challenges,"Working Group 8.6 has existed for more than 10 years now. During this period, members have continuously challenged the work of the group. 
Recently, researchers at the Copenhagen Business School conducted an interim review of the group's work in the form of a literature analysis of all WG 8.6 conference contributions. That review concludes that WG 8.6 works toward and within its own aim and scope declaration, but that there are a number of challenges. One is that WG 8.6 has no joint terminology and no shared theoretical basis. One recommendation from the review team, therefore, was that beyond researching new technologies like mobile information systems and management fashions and fads such as business agility, WG 8.6 should stay with its roots and do work to explicitly contribute to IT diffusion theory and terminology. On the basis of this interim review, a group of founding, regular, less-regular, and more-recent members of WG 8.6 take a brief look back and a more extended look forward to discuss the achievements and the future challenges of WG 8.6",2005,0, 97,MDE for BPM: A Systematic Review,"Due to the rapid change in the business processes of organizations, Business Process Management (BPM) has come into being. BPM helps business analysts to manage all concerns related to business processes, but the gap between these analysts and people who build the applications is still large. The organization's value chain changes very rapidly; to modify simultaneously the systems that support the business management process is impossible. MDE (Model Driven Engineering) is a good support for transferring these business process changes to the systems that implement these processes. Thus, by using any MDE approach, such as MDA, the alignment between business people and software engineering should be improved. To discover the different proposals that exist in this area, a systematic review was performed. As a result, the OMG's Business Process Definition Metamodel (BPDM) has been identified as the standard that will be the key for the application of MDA for BPM.",2015,0, 98,Measuring Effective Data Visualization,"In this paper, we systematically examine two fundamental questions in information visualization – how to define effective visualization and how to measure it. Through a literature review, we point out that the existing definitions of effectiveness are incomplete and often inconsistent – a problem that has deeply affected the design and evaluation of visualization. There is also a lack of standards for measuring the effectiveness of visualization as well as a lack of standardized procedures. We have identified a set of basic research issues that must be addressed. Finally, we provide a more comprehensive definition of effective visualization and discuss a set of quantitative and qualitative measures. The work presented in this paper contributes to the foundational research of information visualization.",2006,0, 99,Motivation in Software Engineering: A systematic literature review,"Objective In this paper, we present a systematic literature review of motivation in Software Engineering. The objective of this review is to plot the landscape of current reported knowledge in terms of what motivates developers, what de-motivates them and how existing models address motivation. Methods We perform a systematic literature review of peer reviewed published studies that focus on motivation in Software Engineering. Systematic reviews are well established in medical research and are used to systematically analyse the literature addressing specific research questions. Results We found 92 papers related to motivation in Software Engineering.
Fifty-six percent of the studies reported that Software Engineers are distinguishable from other occupational groups. Our findings suggest that Software Engineers are likely to be motivated according to three related factors: their ‘characteristics’ (for example, their need for variety); internal ‘controls’ (for example, their personality) and external ‘moderators’ (for example, their career stage). The literature indicates that de-motivated engineers may leave the organisation or take more sick-leave, while motivated engineers will increase their productivity and remain longer in the organisation. Aspects of the job that motivate Software Engineers include problem solving, working to benefit others and technical challenge. Our key finding is that the published models of motivation in Software Engineering are disparate and do not reflect the complex needs of Software Engineers in their career stages, cultural and environmental settings. Conclusions The literature on motivation in Software Engineering presents a conflicting and partial picture of the area. It is clear that motivation is context dependent and varies from one engineer to another. The most commonly cited motivator is the job itself, yet we found very little work on what it is about that job that Software Engineers find motivating. Furthermore, surveys are often aimed at how Software Engineers feel about ‘the organisation’, rather than ‘the profession’. Although models of motivation in Software Engineering are reported in the literature, they do not account for the changing roles and environment in which Software Engineers operate. Overall, our findings indicate that there is no clear understanding of the Software Engineers’ job, what motivates Software Engineers, how they are motivated, or the outcome and benefits of motivating Software Engineers.",2014,0, 100,Postmortem reviews: purpose and approaches in software engineering.,"Conducting postmortems is a simple and practical method for organisational learning. Yet, not many companies have implemented such practices, and in a survey, few expressed satisfaction with how postmortems were conducted. In this article, we discuss the importance of postmortem reviews as a method for knowledge sharing in software projects, and give an overview of known such processes in the field of software engineering. In particular, we present three lightweight methods for conducting postmortems found in the literature, and discuss what criteria companies should use in defining their way of conducting postmortems.",2003,0, 101,Process Design Theory for Digital Information Services,"Information services transfer information goods from a creator to a user. Information services have three design aspects, i. e. content, value, and revenue, and their design has an evolutionary nature, i. e. that information gained in the service’s usage stages is part of their (re)design efforts. The literature abounds of fragmented insights for information services design. This article gives a literature review of methods and techniques that are useful in the representation and analysis of the above-mentioned aspects for each evolving step of information service design. The article also describes several scenarios for information service design projects. 
These insights have considerable consequences for information services design practices and a list of topics for new design theory research is given.",2010,0, 102,Refactoring test suites versus test behaviour: a TTCN-3 perspective,"As a software engineering discipline, refactoring offers the opportunity for reversal of software 'decay' and preservation of a level of software quality. In a recent paper by Zeiss et al. [23], a set of fifteen refactorings were found applicable to Testing and Test Control Notation (TTCN-3) test behaviour and a set of thirteen refactorings to improving the overall structure of a TTCN-3 test suite. All twenty-eight refactorings were taken from the set of seventy-two described in the seminal text by Fowler [10]. An important issue with any refactoring is the testing effort required during implementation of its mechanics. In this paper, we explore the trade-offs between, and the contrasting characteristics of, the two TTCN-3 sets of refactorings from a refactoring mechanics perspective. Firstly, we use a meta-analysis of the same twenty-eight refactorings based on a dependency matrix developed through scrutiny of the mechanics of all seventy-two refactorings in [10] and then an analysis of the refactoring chains emerging from each of the same twenty-eight refactorings. Results suggest that there are compelling reasons for avoiding test suite structure refactorings when the dependencies and chains of the test suite refactorings are considered. Refactoring test behaviour potentially offers a far simpler, less demanding set of tasks required of the developer both from a re-testing and dependency viewpoint.",2007,0, 103,Reporting Experiments in Software Engineering,"One major problem for integrating study results into a common body of knowledge is the heterogeneity of reporting styles: (1) it is difficult to locate relevant information and (2) important information is often missing. Reporting guidelines are expected to support a systematic, standardized presentation of empirical research, thus improving reporting in order to support readers in (1) finding the information they are looking for, (2) understanding how an experiment is conducted, and (3) assessing the validity of its results. The objective of this paper is to survey the most prominent published proposals for reporting guidelines, and to derive a unified standard that which can serve as a starting point for further discussion. We provide detailed guidance on the expected content of the sections and subsections for reporting a specific type of empirical studies, i.e., controlled experiments. Before the guidelines can be evaluated, feedback from the research community is required. For this purpose, we propose to adapt guideline development processes from other disciplines.",2005,0, 104,Requirements Analysis: A Review,"Many software organizations often bypass the requirements analysis phase of the software development life cycle process and skip directly to the implementation phase in an effort to save time and money. The results of such an approach often leads to projects not meeting the expected deadline, exceeding budget, and not meeting user needs or expectations. One of the primary benefits of requirements analysis is to catch problems early and Minimize thier impact with respect to time and money. This paper is a literature review of the requirements analysis phase and the multitude of techniques available to perform the analysis. 
It is hoped that by compiling the information into a single document, readers will be more in a position to understand the requirements engineering process and provide analysts a compelling argument as to why it should be employed in modern day software development. ",2015,0, 105,Requirements of Software Visualization Tools: A Literature Survey,"Our objective is to identify requirements (i.e., quality attributes and functional requirements) for software visualization tools. We especially focus on requirements for research tools that target the domains of visualization for software maintenance, reengineering, and reverse engineering. The requirements are identified with a comprehensive literature survey based on relevant publications in journals, conference proceedings, and theses. The literature survey has identified seven quality attributes (i.e., rendering scalability, information scalability, interoperability, customizability, interactivity, usability, and adoptability) and seven functional requirements (i.e., views, abstraction, search, filters, code proximity, automatic layouts, and undo/history). The identified requirements are useful for researchers in the software visualization field to build and evaluate tools, and to reason about the domain of software visualization.",2007,0, 106,Research Directions in Requirements Engineering,"In this paper, we review current requirements engineering (RE) research and identify future research directions suggested by emerging software needs. First, we overview the state of the art in RE research. The research is considered with respect to technologies developed to address specific requirements tasks, such as elicitation, modeling, and analysis. Such a review enables us to identify mature areas of research, as well as areas that warrant further investigation. Next, we review several strategies for performing and extending RE research results, to help delineate the scope of future research directions. Finally, we highlight what we consider to be the ""hot"" current and future research topics, which aim to address RE needs for emerging systems of the future.",2007,0, 107,Review article: A review of structured document retrieval (SDR) technology to improve information access performance in engineering document management,"Information retrieval (IR) is a well-established research and development area. Document formats such as SGML (Standard Generalised Mark-up Language) and XML (eXtensible Mark-up Language) have become widely used in recent years. Traditional IR systems demonstrate limitations when dealing with such documents, which motivated the emergence of structured document retrieval (SDR) technology intending to overcome these limitations. This paper reviews the work carried out from the inception to the development and application of SDR in engineering document management. The key issues of SDR are discussed and the state of the art of SDR to improve information access performance has been surveyed. A comparison of selected papers is provided and possible future research directions identified. The paper concludes with the expectation that SDR will make a positive impact on the process of engineering document management from document construction to its delivery in the future, and undoubtedly provide better information retrieval performance in terms of both precision and functionality.",2008,0, 108,Role of annotation in Electronic Process Guide,Annotations play a major part in our daily life. 
Similarly, the electronic process guide (EPG) plays an important role in software development in an organization. An EPG can guide the developers about the process used or followed in an environment. The paper describes annotation in an electronic process guide for developers. We first introduce the background of the topic and some of the related research done in the area of annotation systems. Some of the annotation systems for the Web are available either free or commercially. We then focus on the literature survey on the use of annotation tools and techniques in different areas along with the usage of EPG in different scenarios. We also focus on Web-based annotation for the Jasmine EPG, and a conclusion is given along with future work.,2007,0, 109,SArt: Towards Innovation at the intersection of Software engineering and art,"Abstract Computer science and art have been in contact since the 1960s. Our hypothesis is that software engineering can benefit from multidisciplinary research at the intersection with art for the purpose of increasing innovation and creativity. To do so, we have designed and planned a literature review in order to identify the existing knowledge base in this interdisciplinary field. A preliminary analysis of both results of our review and observations of software development projects with artist participation reveals four main issues. These are software development issues, which include requirement management, tools, development and business models; educational issues, with focus on multidisciplinary education; aesthetics of both code and user interface; and social and cultural implications of software and art. The identified issues and associated literature should help researchers design research projects at the intersection of software engineering and art. Moreover, they should help artists to increase awareness about software engineering methods and tools when conceiving and implementing their software-based artworks.",2009,0, 110,Scenario-Based Application Requirements Engineering,"In product line engineering, the application requirements engineers have to ensure both a high degree of reuse and the satisfaction of stakeholder needs. The vast number of possible variant combinations and the influences of the selection of one variant on different requirements models is a challenge for the consistent reuse of product line requirements. Only if the requirements engineers are aware of all product line capabilities (variabilities and commonalities) are they able to decide whether a stakeholder requirement can be satisfied by the product line or not. In this chapter we present a novel approach for the development of application requirements specifications. For this approach, we use an orthogonal variability model with associated requirements scenarios to support requirements engineers during the elicitation, negotiation, documentation, and validation of product line requirements. The presented approach tackles the existing challenges during application requirements engineering by the iterative use of the orthogonal variability model (abstract view) and the requirements scenarios (concrete view) of the product line.",2012,0, 111,Software Architecture Visualization: An Evaluation Framework and Its Application,"In order to characterize and improve software architecture visualization practice, the paper derives and constructs a qualitative framework, with seven key areas and 31 features, for the assessment of software architecture visualization tools. 
The framework is derived by the application of the Goal Question Metric paradigm to information obtained from a literature survey and addresses a number of stakeholder issues. The evaluation is performed from multiple stakeholder perspectives and in various architectural contexts. Stakeholders can apply the framework to determine if a particular software architecture visualization tool is appropriate to a given task. The framework is applied in the evaluation of a collection of six software architecture visualization tools. The framework may also be used as a design template for a comprehensive software architecture visualization tool.",2008,0, 112,Software Components Evaluation: an Overview,"Objective: To contribute with an overview on the current state of the art concerning metrics-based quality evaluation of software components and component-based assemblies. Method: Comparison of several approaches available in the literature, in terms of their scope, intent, definition technique and maturity. Results: Common shortcomings of current approaches, such as ambiguity in definition, lack of adequacy of the specifying formalisms and insufficient validation of current quality models and metrics for software components. Conclusions: Quality evaluation of components and component-based infrastructures presents new challenges to the Experimental Software Engineering community which are not conveniently dealt with by current approaches. Keywords: Component-Based Software Engineering; Component Evaluation; Software Metrics; Software Quality.",2011,0, 113,Software Multi-project Resource Scheduling: A Comparative Analysis,"Software organizations are always multi-project-oriented, in which situation the traditional project management for individual project is not enough. Related scientific research on multi-project is yet scarce. This paper reports result from a literature review aiming to organize, analyze and make sense out of the dispersed field of multi-project resource scheduling methods. A comparative analysis was conducted according to 6 aspects of application situations: value orientation, centralization, homogeneity, complexity, uncertainty and executive ability. The findings show that, traditional scheduling methods from general project management community have high degree of centralization and limited capability to deal with uncertainty, and do not well catered for software projects. In regard to these aspects agile methods are better, but most of them lack scalability to high complexity. Some methods have balanced competence and special attention should be paid to them. In brief, methods should be chosen according to different situations in practice.",2015,0, 114,Software Reliability Management: Techniques and Applications,"In this chapter we discuss three new stochastic models for assessing software reliability and the degree of software testing progress, and a software reliability management tool.",2005,0, 115,Software Reliability Models: A Selective Survey and New Directions,"Software development, design, and testing have become very intricate with the advent of modern highly distributed systems, networks, middleware, and interdependent applications. The demand for complex software systems has increased more rapidly than the ability to design, implement, test, and maintain them, and the reliability of software systems has become a major concern for our modern society. 
Within the last decade of the 20th century and the first few years of the 21st century, many reported system outages or machine crashes were traced back to computer software failures. Consequently, recent literature is replete with horror stories due to software problems.",2003,0, 116,Special Characteristics of Software and Software Markets - Implications for Managing Software Business,"This paper examines software markets, and especially how market effects affect how value is created and captured. We propose an initial model that incorporates both the special effects related to software markets, how these effects affect value of software and what factors should be considered to leverage the effects. The model is based on a literature review that resulted in identifying four specific market effects: network externalities, returns to complements, lock-in, and positive feedback. Furthermore, we identified four issues the firms should consider when pursuing the desired market effects: market definition, value configuration, contracts and legal actions, and customers. The literature-based model was evaluated and complemented by two case studies with Finnish software firms incorporating altogether 17 interviews. The model gained initial support on the basis of the interviews. This paper proposes that the identified market effects should be considered when the value of software engineering decisions is evaluated",2006,0, 117,Supporting software development with roles.,"Software development tools are very important in software engineering. Although roles have been acknowledged and applied for many years in several areas related to software engineering, there is a lack of research on software development tools based on roles. Most significantly, there is no complete and consistent consideration of roles in all the phases of software development. Considering the increasing importance and applications of roles in software development, this paper intends to discuss the importance of roles in software engineering and that of role-based software development; review the literature relevant to role mechanisms in software engineering; propose and describe a role-based software process; and implement a prototype tool for developing complex software systems with the help of role mechanisms",2006,0, 118,Supporting the selection of model-based testing approaches for software projects,"Software technologies, such as model-based testing approaches, have specific characteristics and limitations that can affect their use in software projects. To make available knowledge regarding such technologies is important to support the decision regarding their use in software projects. In particular, a choice of model-based testing approach can influence testing success or failure. Therefore, this paper aims at describing knowledge acquired from a systematic review regarding model-based testing approaches and proposing an infrastructure towards supporting their selection for software projects.",2010,0, 119,Ten Strategies for Successful Distributed Development,"This paper presents an overview of the field of distributed development of software systems and applications (DD). Based on an analysis of the published literature, including its use in different industrial contexts, we provide a preliminary analysis that structures existing DD knowledge, indicating opportunities but identifying threats to communication, coordination, and control caused by temporal distance, geographical distance, and socio-cultural distance. 
An analysis of the case and field study literature has been used to identify strategies considered effective for countering the identified threats. The paper synthesizes from these a set of 10 general strategies for successful DD which, if adopted, should lead to increased company resilience.",2014,0, 120,The affordable application of formal methods to software engineering,"The purpose of this research paper is to examine (1) why formal methods are required for software systems today; (2) the Praxis High Integrity Systems' Correctness-by-Construction methodology; and (3) an affordable application of a formal methods methodology to software engineering. The cultivated research for this paper included literature reviews of documents found across the Internet and in publications as well as reviews of conference proceedings including the 2004 High Confidence Software and Systems Conference and the 2004 Special Interest Group on Ada Conference. This research realized that (1) our reliance on software systems for national, business and personal critical processes outweighs the trust we have in our systems; (2) there is a growing demand for the ability to trust our software systems; (3) methodologies such as Praxis' Correctness-by-Construction are readily available and can provide this needed level of trust; (4) tools such as Praxis' SparkAda when appropriately applied can be an affordable approach to applying formal methods to a software system development process; (5) software users have a responsibility to demand correctness; and finally, (6) software engineers have the responsibility to provide this correctness. Further research is necessary to determine what other methodologies and tools are available to provide affordable approaches to applying formal methods to software engineering. In conclusion, formal methods provide an unprecedented ability to build trust in the correctness of a system or component. Through the development of methodologies such as Praxis' Correctness by Construction and tools such as SparkAda, it is becoming ever more cost advantageous to implement formal methods within the software engineering lifecycle. As the criticality of our IT systems continues to steadily increase, so must our trust that these systems will perform as expected. Software system clients, such as government, businesses and all other IT users, must demand that their IT systems be delivered with a proven level of correctness or trust commensurate to the criticality of the function they perform.",2014,0, 121,"The Brave New World of Ambient Intelligence: An Analysis of Scenarios Regarding Privacy, Identity and Security Issues","
The success of Ambient Intelligence (AmI) will depend on how secure it can be made, how privacy and other rights of individuals can be protected and how individuals can come to trust the intelligent world that surrounds them and through which they move. This contribution presents an analysis of ambient intelligence scenarios, particularly in regard to AmI's impacts on and implications for individual privacy. The analysis draws on our review of more than 70 AmI projects, principally in Europe. It notes the visions as well as the specifics of typical AmI scenarios. Several conclusions can be drawn from the analysis, not least of which is that most AmI scenarios depict a rather too sunny view of our technological future. Finally, reference is made to the SWAMI project (Safeguards in a World of Ambient Intelligence) which, inter alia, has constructed “dark” scenarios, as we term them, to show how things can go wrong in AmI and where safeguards are needed.
",2006,0, 122,The Clients? Impact on Effort Estimation Accuracy in Software Development Projects,"This paper focuses on the clients' impact on estimation accuracy in software development projects. Client related factors contributing to effort overruns as well as factors preventing overruns are investigated. Based on a literature review and a survey of 300 software professionals we find that: 1) software professionals perceive that clients impact estimation accuracy. Changed and new requirements are perceived as the clients' most frequent contribution to overruns, while overruns are prevented by the availability of competent clients and capable decision makers. 2) Survey results should not be used in estimation accuracy improvement initiatives without further analysis. Surveys typically identify directly observable and project specific causes for overruns, while substantial improvement is only possible when the underlying causes are understood",2005,0, 123,The design of participatory agent-based social simulations.,"It is becoming widely accepted that applied social simulation research is more effective if potential users and stakeholders are closely involved in model specification, design, testing and use, using the principles of participatory research. In this paper, a review of software engineering principles and accounts of the development of simulation models are used as the basis for recommendations about some useful techniques that can aid in the development of agent-based social simulation models in conjunction with users. The authors' experience with scenario analysis, joint analysis of design workshops, prototyping and user panels in a collaborative participatory project is described and, in combination with reviews from other participatory projects, is used to suggest how these techniques might be used in simulation-based research.",2014,0, 124,The evolution of goal-based information modelling: Literature review,"Purpose – The first in a series on goal?based information modelling, this paper presents a literature review of two goal?based measurement methods. The second article in the series will build on this background to present an overview of some recent case?based research that shows the applicability of the goal?based methods for information modelling (as opposed to measurement). The third and concluding article in the series will present a new goal?based information model – the goal?based information framework (GbIF) – that is well suited to the task of documenting and evaluating organisational information flow. Design/methodology/approach – Following a literature review of the goal?question?metric (GQM) and goal?question?indicator?measure (GQIM) methods, the paper presents the strengths and weaknesses of goal?based approaches. Findings – The literature indicates that the goal?based methods are both rigorous and adaptable. With over 20 years of use, goal?based methods have achieved demonstrable and quantifiable results in both practitioner and academic studies. The down side of the methods are the potential expense and the “expansiveness” of goal?based models. The overheads of managing the goal?based process, from early negotiations on objectives and goals to maintaining the model (adding new goals, questions and indicators), could make the method unwieldy and expensive for organisations with limited resources. An additional challenge identified in the literature is the narrow focus of “top?down” (i.e. goal?based) methods. 
Since the methods limit the focus to a pre?defined set of goals and questions, the opportunity for discovery of new information is limited. Research limitations/implications – Much of the previous work on goal?based methodologies has been confined to software measurement contexts in larger organisations with well?established information gathering processes. Although the next part of the series presents goal?based methods outside of this native context, and within low maturity organisations, further work needs to be done to understand the applicability of these methods in the information science discipline. Originality/value – This paper presents an overview of goal?based methods. The next article in the series will present the method outside the native context of software measurement. With the universality of the method established, information scientists will have a new tool to evaluate and document organisational information flow.",2011,0, 125,The Golden Age of Software Architecture: A Comprehensive Survey,"This retrospective on nearly two decades of software architecture research examines the maturation of the software architecture research area by tracing the evolution of research questions and results through their maturation cycle. We show how early qualitative results set the stage for later precision, formality, and automation, how results have built up over time, and how the research results have moved into practice.",2009,0, 126,The Need for a Paradigm Shift in Addressing Privacy Risks in Social Networking Applications,"Abstract New developments on the Internet in the past years have brought up a number of online social networking applications within the so-called Web 2.0 world that experienced phenomenal growth and a tremendous attention in the public. Online social networking services build their business model on the myriad of sensitive personal data provided freely by their users, a fact that is increasingly getting the attention of privacy advocates. After explaining the economic meaning and importance of online social networks to eCommerce in general and reiterating the basic principles of Web 2.0 environments and their enterprise mechanisms in particular, this paper addresses the main informational privacy risks of Web 2.0 business models with a focus on online social networking sites. From literature review and current expert discussions, new privacy research questions are proposed for the future development of privacyenhancing technologies used within Web 2.0 environments. The resulting paradigm shift needed in addressing privacy risks in social networking applications is likely to focus less on access protection, anonymity and unlinkability type of PET-solutions and more on privacy safeguarding measures that enable greater transparency and that directly attach context and purpose limitation to the personally identifiable data itself. The FIDIS/IFIP workshop discussion has resulted in the idea to combine existing privacy-enhancing technologies and protection methods with new safeguarding measures to accommodate the Web 2.0 dynamics and to enhance the informational privacy of Web 2.0 users.",2008,0, 127,Tightening knowledge sharing in distributed software communities by applying semantic technologies,This report describes the state of the research and practice in the areas of Knowledge Management in Software Engineering. 
Special emphasis is laid upon specific knowledge representation and reasoning requirements coming from the open-source communities and outsourced software development,2009,0, 128,Tool integration in software engineering: The state of the art in 2004,"The aim of this paper is to identify and investigate previous research in the area of software engineering environments, and how tools are integrated to form such a facility, with the goal of identifying future research questions. The paper consists of an explanation of the method used to identify papers, with each placed in a candidate category. Each of the seven categories is examined in turn, with each paper in each category then reviewed, with any possible arising research questions identified by the author of this paper included. Also included in this review is a section that discusses additional material being papers that have been referenced more than once. The paper concludes by making summing up all the suggestions for further research that were identified in the reviewing of particular papers. Full references and a glossary of commonly used terms completes the paper.",2004,0, 129,Towards management of software as assets: A literature review with additional sources.,"How should and how can software be managed? What is the management concept or paradigm? Software professionals, if they think about management of software at all, think in terms of Configuration Management. This is not a method for over-all software management; it merely controls software items' versions. This is much too fine a level of granularity. Management begins with accurate and timely information. Managers tend to view software as something (unfortunately) very necessary but troubling because, they have very little real information about it and control is still nebulous, at best. Accountants view software as an incomprehensible intangible, neither wholly an expense nor really an asset. They do not have, nor do they produce information concerning it. Their data concerning software barely touches on direct outlays and contains no element of effort. Part of this disorientation is the basic confusion between ''business software'' and ''engineering software''. This ''Gordian Knot'' must be opened; it needs to be made much more clear. This article shows a direction how such clarity may be achieved.",2008,0, 130,Transformational Approaches to Model Driven Architecture - A Review,"The model driven architecture (MDA) has been widely used as a paradigm in software development. This paper presents an overview on the current research in the model driven architecture. We analyze the key concepts of the MDA by illustrative examples, explore the existing approaches and tools that support model transformation - the essential part of the MDA, and classify these methods based on a multidimensional scheme. Furthermore, this paper summarizes the current technical achievements of model transformation techniques in software development at different abstraction levels of a system.",2007,0, 131,Troubleshooting large-scale new product development embedded software projects,"
Many modern new product development (NPD) embedded software projects are required to be run under turbulent conditions. Both the business and the technological environments are often volatile. Uncertainty is then an inherent part of the project management. In such cases, traditional detailed up-front planning with supporting risk management is often inadequate, and more adaptive project management tools are needed. This industrial paper investigates the typical problem space of those embedded software projects. Based on a literature survey coupled with our practical experiences, we compose an extensive structured matrix of different potential project problem factors, and propose a method for assessing the project's problem profile with the matrix. The project manager can then utilize that information for problem-conscious project management. Some industrial case examples of telecommunications products embedded software development are illustrated.
",2006,0, 132,What Architects Should Know About Reverse Engineering and Rengineering,"Architecture reconstruction is a form of reverse engineering that reconstructs architectural views from an existing system. It is often necessary because a complete and authentic architectural description is not available. This paper puts forward the goals of architecture reconstruction, revisits the technical difficulties we are facing in architecture reconstruction, and presents a summary of a literature survey about the types of architectural viewpoints addressed in reverse engineering research.",2005,0, 133,A Comparison of Software Project Overruns-Flexible versus Sequential Development Models,"Flexible software development models, e.g., evolutionary and incremental models, have become increasingly popular. Advocates claim that among the benefits of using these models is reduced overruns, which is one of the main challenges of software project management. This paper describes an in-depth survey of software development projects. The results support the claim that projects which employ a flexible development model experience less effort overruns than do those which employ a sequential model. The reason for the difference is not obvious. We found, for example, no variation in project size, estimation process, or delivered proportion of planned functionality between projects applying different types of development model. When the managers were asked to provide reasons for software overruns and/or estimation accuracy, the largest difference was that more of flexible projects than sequential projects cited good requirement specifications-and good collaboration/communication with clients as contributing to accurate estimates.",2005,0, 134,A meta-analysis of the technology acceptance model,"The paper will address issues related to 3D capture, documentation, storage and management of virtual replicas of museum objects for documentation purposes in view of their inclusion in Europeana. 3D cultural objects present a number of challenges concerning their management. The 3D-COFORM project will provide solutions aiming at making 3D documentation a common practice in the cultural heritage sector. This requires the definition of good practices for data acquisition and storage, together with the design of a novel documentation system for the acquisition and simplification procedures. In fact, these initial steps are often undocumented, and this makes the outcome unreliable for the strict criteria of heritage documentation. The following steps of storing, managing, searching and displaying 3D objects is still an uneasy process, and the project aims at providing state-of-the-art tools to improve the performance in all these stages. Finally, the project will address business processes, mainly through the design and start-up of a Virtual Competence Centre in order to provide guidance to cultural heritage institutions and practitioners wishing to incorporate 3D into their everyday practice.",2010,0, 135,A Probabilistic Model for Predicting Software Development Effort,"Recently, Bayesian probabilistic models have been used for predicting software development effort. One of the reasons for the interest in the use of Bayesian probabilistic models, when compared to traditional point forecast estimation models, is that Bayesian models provide tools for risk estimation and allow decision-makers to combine historical data with subjective expert estimates. 
In this paper, we use a Bayesian network model and illustrate how a belief updating procedure can be used to incorporate decision-making risks. We develop a causal model from the literature and, using a data set of 33 real-world software projects, we illustrate how decision-making risks can be incorporated in the Bayesian networks. We compare the predictive performance of the Bayesian model with popular nonparametric neural-network and regression tree forecasting models and show that the Bayesian model is a competitive model for forecasting software development effort.",2005,0, 136,A survey of literature on the teaching of introductory programming,"This paper reports the authors' experiences in teaching introductory programming for engineers in an interactive classroom.. The authors describe how the course has evolved from the traditional course, the structure of the classroom, the choice of software, and the elements involving interactive, active, and collaborative learning. They discuss their strategy for assessment. They describe the assessment results including a retrospective assessment of the previous course. They suggest how the course relates to the nontraditional student. They conclude with some suggestions for future modifications",2001,0,311 137,"A Survey on Hair Modeling: Styling, Simulation, and Rendering","Realistic hair modeling is a fundamental part of creating virtual humans in computer graphics. This paper surveys the state of the art in the major topics of hair modeling: hairstyling, hair simulation, and hair rendering. Because of the difficult, often unsolved problems that arise in alt these areas, a broad diversity of approaches is used, each with strengths that make it appropriate for particular applications. We discuss each of these major topics in turn, presenting the unique challenges facing each area and describing solutions that have been presented over the years to handle these complex issues. Finally, we outline some of the remaining computational challenges in hair modeling",2007,0, 138,An assessment of systems and software engineering scholars and institutions (1999-2003),"This paper presents the findings of a five-year study of the top scholars and institutions in the Systems and Software Engineering field, as measured by the quantity of papers published in the journals of the field. The top scholar is Khaled El Emam of the Canadian National Research Council, and the top institution is Carnegie Mellon University and its Software Engineering Institute. This paper is part of an ongoing study, conducted annually, that identifies the top 15 scholars and institutions in the most recent five-year period.",2004,0, 139,An Empirical Analysis of the Impact of Software Vulnerability Announcements on Firm Stock Price,"Security defects in software cost millions of dollars to firms in terms of downtime, disruptions, and confidentiality breaches. However, the economic implications of these defects for software vendors are not well understood. Lack of legal liability and the presence of switching costs and network externalities may protect software vendors from incurring significant costs in the event of a vulnerability announcement, unlike such industries as auto and pharmaceuticals, which have been known to suffer significant loss in market value in the event of a defect announcement. 
Although research in software economics has studied firms' incentives to improve overall quality, there have not been any studies which show that software vendors have an incentive to invest in building more secure software. The objectives of this paper are twofold. 1) We examine how a software vendor's market value changes when a vulnerability is announced. 2) We examine how firm and vulnerability characteristics mediate the change in the market value of a vendor. We collect data from leading national newspapers and industry sources, such as the Computer Emergency Response Team (CERT), by searching for reports on published software vulnerabilities. We show that vulnerability announcements lead to a negative and significant change in a software vendor's market value. In our sample, on average, a vendor loses around 0.6 percent value in stock price when a vulnerability is reported. We find that a software vendor loses more market share if the market is competitive or if the vendor is small. To provide further insight, we use the information content of the disclosure announcement to classify vulnerabilities into various types. We find that the change in stock price is more negative if the vendor fails to provide a patch at the time of disclosure. Also, more severe flaws have a significantly greater impact. Our analysis provides many interesting implications for software vendors as well as policy make- rs.",2007,0, 140,Applications of agent technology in communications: a review.,"Distance education that transmits information on the global Internet has become the trend of educational development in the coming years; however, it still has some drawbacks and shortcomings. This article focuses on how to apply Multi-Agent technology in distance learning systems. The systems are supposed to teach students individualized according to their personality characteristics and cognitive abilities by establishing Student Agent and Teacher Agent, thus, to improve the intelligence and personalization of distance education system, in order to fully tap the potential of learners and improve teaching effectiveness and learning efficiency.",2010,0, 141,Computer vision in the interface,"In this paper, we present a promising approach to systematically testing graphical user interfaces (GUI) in a platform independent manner. Our framework uses standard computer vision techniques through a python-based scripting language (Sikuli script) to identify key graphical elements in the screen and automatically interact with these elements by simulating keypresses and pointer clicks. The sequence of inputs and outputs resulting from the interaction is analyzed using grammatical inference techniques that can infer the likely internal states and transitions of the GUI based on the observations. Our framework handles a wide variety of user interfaces ranging from traditional pull down menus to interfaces built for mobile platforms such as Android and iOS. Furthermore, the automaton inferred by our approach can be used to check for potentially harmful patterns in the interface's internal state machine such as design inconsistencies (eg,. a keypress does not have the intended effect) and mode confusion that can make the interface hard to use. 
We describe an implementation of the framework and demonstrate its working on a variety of interfaces including the user-interface of a safety critical insulin infusion pump that is commonly used by type-1 diabetic patients.",2013,0, 142,Determining the impact of software engineering research on practice.,"The impact project provides a solid and scholarly assessment of the impact software engineering research has had on software engineering practice. The assessment takes the form of a series of studies and briefings, each involving literature searches and, where possible, personal interviews.",2008,0, 143,Development of integrated quality information system for continuous improvement,"Based on the extensive literature review and the philosophy of integration of quality tools, the paper develops integrated quality information system (IQIS) software, and applies it to a manufacturing company. The software integrates the quality tools as well as the process quality information. It provides guidance for locating bottleneck process through integrated data analysis and also supports six sigma process improvement. The result shows that the application of IQIS can optimize the process of design and manufacturing, shorten the cycle time of product, reduce the cost, and realize quality improvement continuously",2006,0, 144,Digital Human Modeling for Product Lifecycle Management,The paper presents a new methodology for displaying the user's functional demands and evaluating product design for older people. Digital design models are integrated with virtual users (digital human models) generated in Jack software and refined from 3D body scanning data. The real interaction between physical design prototypes and potential users are captured by a 3D motion capture system. The captured motion is imposed onto the virtual human in Jack to perform design task analysis graphically.,2009,0, 145,Eliciting Requirements by Analysing Threats Caused by Users,"Eliciting requirements is an important issue of system development projects. Some approaches propose to identify requirements by analysing system malfunctioning. Different sources of malfunctioning are dealt with by these approaches: obstacles, conflicts, risks, etc. Our proposal is to analyse each of these sources of malfunctioning using a single notion that we call ""threat"". We propose to use this notion in a method that guides the identification and analysing of each of these sources of malfunctioning. The method helps eliciting requirements to prevent the threat. A threat is defined by a number of variables. This paper presents a literature review of all threats that relate to users. The review is based on a framework that includes several perspectives to analyse user error. The user threats is part of a global threats classification that also covers hardware, environment, design and project types of threats.",2005,0, 146,Empirical study of the effects of open source adoption on software development economics,"In this paper, we present the results of empirical study of the effects of open source software (OSS) components reuse on software development economics. Specifically, we examined three economic factors - cost, productivity, and quality. This study started with an extensive literature review followed by an exploratory study conducted through interviews with 18 senior project/quality managers, and senior software developers. 
Then, the result of the literature review and the exploratory study was used to formulate research model, hypotheses, and survey questionnaire. Software intensive companies from Canada and the US were targeted for this study. The period of study was between September 2004 and March 2006. Our findings show that there are strong significant statistical correlations between the factors of OSS components reuse and software development economics. The conclusion from this study shows that software organizations can achieve some economic gains in terms of software development productivity and product quality if they implement OSS components reuse adoption in a systematic way. A big lesson learned in this study is that OSS components are of highest quality and that open source community is not setting a bad example (contrary to some opinion) so far as 'good practices' are concerned.",2007,0, 147,Evolving Conditional Value Sets of Cost Factors for Estimating Software Development Effort,"The software cost estimation process is one of the most critical managerial activities related to project planning, resource allocation and control. As software development is a highly dynamic procedure, the difficulty of providing accurate cost estimations tends to increase with development complexity. The inherent problems of the estimation process stem from its dependence on several complex variables, whose values are often imprecise, unknown, or incomplete, and their interrelationships are not easy to comprehend. Current software cost estimation models do not inspire enough confidence and accuracy with their predictions. This is mainly due to the models' sensitivity to project data values, and this problem is amplified because of the vast variances found in historical project attribute data. This paper aspires to provide a framework for evolving value ranges for cost attributes and attaining mean effort values using the Al-oriented problem-solving approach of genetic algorithms, with a twofold aim. Firstly, to provide effort estimations by analogy to the projects classified in the evolved ranges and secondly, to identify any present correlations between effort and cost attributes.",2007,0, 148,Exploiting ?Interface Capabilities? in Overseas Markets: Lessons from Japanese Mobile Phone Handset Manufacturers in the US,,2006,0, 149,Formalizing Informal Stakeholder Decisions--A Hybrid Method Approach,"Decisions are hard to make when available information is incomplete, inconsistent, and ambiguous. Moreover, good-sufficiently complete, consistent, traceable, and testable-requirements are a prerequisite for successful projects. Without understanding what the stakeholders really want and need and writing these requirements in a concise, understandable and testable manner, projects will not develop what the stakeholders wanted leading to either major late rework or project termination. During the development of the WinWin negotiation model and the EasyWinWin requirements negotiation method, we have gained considerable experience in capturing decisions made by stakeholders in over 100 projects. However, the transition from informal decisions to requirements specification is still a challenging problem. 
Based on our analysis of the projects to date, we have developed an integrated set of gap-bridging methods as a hybrid method to support stakeholders making better decisions in order to eliminate requirements related problems and ease the process of formality transition",2007,0, 150,General principles of construction of knowledge computer systems,"Steep development of computer systems has created a great variety of important technical and scientific problems: for development of an architecture of algorithms, control circuits for access to common resources, and computing structures and distributed database structures. In this paper, the principles of parallelism for the construction of multilevel distributed computer networks are given.",2003,0, 151,Higher Order Pheromone Models in Ant Colony Optimisation,"As a meta-heuristic approach, Ant colony optimization (ACO) has many applications. In the algorithm selection of pheromone models is the top priority. Selecting pheromone models that don't suffer negative biases is a natural choice. Specifically for the travelling salesman problem, the first order pheromone is widely recognized.When come across travelling salesman problem, we study the reasons for the success of ant colony optimization from the perspective of pheromone models,and unify different order pheromone models. In tests, we have introduced the concept of sample locations and the similarity coefficient to pheromone models. The first order pheromone model and the second order pheromone model are compared and are further analysed. We illustrate that the second order pheromone model has better global search ability and diversity of population than the former. With appropriate-scale travelling salesman problems, the second order model performs better than the first order pheromone model.",2017,0, 152,How influential is Brooks' Law? A longitudinal citation context analysis of Frederick Brooks' The Mythical Man-Month,"
Citation context analysis is used to demonstrate the diversity of concept symbols that a book-length publication can represent and the diffusion of influence of these concepts over time and across scholarly disciplines. A content analysis of 574 citation contexts from 497 journal articles citing an edition of Frederick P. Brooks, Jr's The Mythical Man-Month (MMM) over the period 1975-1999 showed that MMM represents a variety of different concepts and is cited in a wide range of subject areas. Over time, a high level of interest in MMM spread from software engineering and computer science to management and information systems, with different areas showing different patterns of focus on concepts within the work. 'Brooks' Law' (the 'mythical man-month' or 'adding more people to a late project makes it later'), accounted for less than 30% of the classified citation contexts. The findings contribute to our understanding of the diffusion of ideas in scholarly communication, and the diversity that can underlie the creation of a reference in a scholarly publication.
",2006,0, 153,Immune System Approaches to Intrusion Detection ? A Review,"The Battery-Sensing Intrusion Protection System (B-SIPS) [1] initially took a non-conventional approach to intrusion detection by recognizing attacks based on anomalous Instantaneous Current (IC) drain. An extension of B-SIPS, the Multi-Vector Portable Intrusion Detection System (MVP-IDS) validates the idea of recognizing attacks based on anomalous IC drain by correlating the detected anomalies with wireless attack traffic from both the Wi-Fi and Bluetooth mediums. To effectively monitor the Wi-Fi and Bluetooth mediums for malicious packet streams, the Snort-Based Wi-Fi and Bluetooth Attack Detection and Signature System (BADSS) modules were introduced. This paper illustrates how a blended strategy of using a low overhead tripwire can be combined with more sophisticated detection mechanisms to provide an effective protection system for limited resource wireless information technology devices.",2010,0, 154,In-house software development: What project management practices lead to success?,"Project management is an important part of software development, both for organizations that rely on third-party software development and for those whose software is developed primarily in-house. Moreover, quantitative survey-based research regarding software development's early, nontechnical aspects is lacking. To help provide a project management perspective for managers responsible for in-house software development, we conducted a survey in an attempt to determine the factors that lead to successful projects. We chose a survey because of its simplicity and because we hoped to find relationships among variables. Also, a survey let us cover more projects at a lower cost than would an equivalent number of interviews or a series of case studies. Our results provide general guidance for business and project managers to help ensure that their projects succeed.",2005,0, 155,Mathematical Methods for Shape Analysis and form Comparison in 3D Anthropometry: A Literature Review,"
Form comparison is a fundamental part of many anthropometric, biological, anthropological, archaeological and botanical research areas. In traditional anthropometric form comparison methods, geometric characteristics and the internal structure of surface points are not adequately considered. Form comparison of 3D anthropometric data can make up for the deficiencies of traditional methods. In this paper, methods for analyzing 3D rather than 2D objects are highlighted. We summarize the advances in form comparison techniques over the last decades. According to whether they are based upon anatomical landmarks, we partition them into two main categories, landmark-based methods and landmark-free methods. The former methods are further sub-divided into deformation methods, superimposition methods, and methods based on linear distances, while the latter methods are sub-divided into shape statistics-based methods, methods based on function analysis, view-based methods, topology-based methods, and hybrid methods. Examples for each method are presented. Their advantages and disadvantages are also discussed.
",2007,0, 156,Microformats: The Next (Small) Thing on the Semantic Web?,"Clever application of existing XHTML elements and class attributes can make it easier to describe people, places, events, and other semistructured information in human-readable form. In this paper, the author takes a more detailed look at some examples of microformats, the general principles by which they can be constructed, and how a community of users is forming around these seemingly ad hoc specifications to advance the cause of what some call an alternative to the semantic Web, the ""lowercase semantic Web"".",2006,0, 157,Mobrex: Visualizing Users' Mobile Browsing Behaviors,"This paper deals with the Mobile Browsing Explorer (Mobrex) to give analysts a set of interactive visualizations that highlight various aspects of how users browse an information space. Here, we describe the tool and demonstrate its support of a user study of three browsing techniques for mobile maps. Although we mainly focus here on PDAs and mobile map browsing, Mobrex can easily support analysts studying user interaction with other information spaces and other devices, including mobile phones and desktop computers.",2008,0, 158,Never the CS and IS Twain Shall Meet?,"An enormous intellectual distance exists between the fields of computer science and information systems, which needs to be fixed soon.",2005,0, 159,On the adaptation of Grounded Theory procedures: Insights from the evolution of the 2G method,"Purpose ? To articulate the interpretations and adaptations of Grounded Theory made within the 2G method, and the motivations behind them. Design/methodology/approach ? Literature review and conceptual approach reflecting on the authors' experience of having developed the 2G method. Findings ? Identifies six adaptations of Grounded Theory as being of particular interest. Five relate to method procedures, namely: developing a core category; coding interview data; exposing evolving theories to stakeholders; developing multiple concept frameworks; and inter?linking concepts. The sixth relates to expectations on method users, and the tension between expertise relating to the phenomenon being analysed, and openness in interpreting the data. Research limitations/implications ? Shows how Grounded Theory procedures have been adapted and used in IS methods. Specifically, the paper illustrates and makes explicit how a specific method (the 2G method) has evolved. Practical implications ? Provides insights for users of Grounded Theory (GT) and developers of IS methods on how GT procedures have been interpreted and adapted in previous and the authors' own research. Originality/value ? Provides insights into how Grounded Theory (GT) procedures have been adapted for use in other IS methods, with insights from the authors' own experience of having developed the 2G method. Reflects on the use of GT procedures in a number of case studies related to tool evaluation. Identifies six areas in which specific interpretations or adaptations of GT were considered necessary in the contexts in which the studies were undertaken, and justifies these six departures from standard interpretations of GT procedures.",2005,0, 160,Open Source Software in Industry,"Many of today's most innovative products and solutions are developed on the basis of free and open source software (FOSS). Most of us can no longer imagine the world of software engineering without open source operating systems, databases, application servers, Web servers, frameworks, and tools. 
Brands such as Linux, MySQL, Apache, and Eclipse have shaped product and service development. They facilitate competition and open markets as well as innovation to meet new challenges. De facto FOSS standards such as Eclipse and Corba simplify the integration of products, whether they're all from one company or from multiple suppliers. IEEE Software has assembled this theme section to provide a brief yet practical overview of where FOSS is heading.",2008,0, 161,Practical Guidelines for Expert-Judgment-Based Software Effort Estimation,"This article presents seven guidelines for producing realistic software development effort estimates. The guidelines derive from industrial experience and empirical studies. While many other guidelines exist for software effort estimation, these guidelines differ from them in three ways: 1) They base estimates on expert judgments rather than models. 2) They are easy to implement. 3) They use the most recent findings regarding judgment-based effort estimation. Estimating effort on the basis of expert judgment is the most common approach today, and the decision to use such processes instead of formal estimation models shouldn't be surprising. Simple process changes such as reframing questions can lead to more realistic estimates of software development efforts.",2005,0, 162,Predicting Bugs from History,"The author gives an indication of the accuracy attainable with RF numerical methods in the prediction of those quantities which depend upon the near-field of radiating structures. A wire-mesh RF mathematical model of a medium size airliner is described. Using this model, and with NEC as the solution code, predictions are presented of the terminal reactance of an installed `long-wire' HF antenna. These represent true predictions rather than `post-dictions' because the RF mathematical model does not incorporate any empirically derived information. The difficulties involved in obtaining the corroborative measurements are examined and direct comparison with the predictions is made. The particular antenna configuration chosen for this study is considered to constitute a stringent test of the predictions. The agreement between calculation and subsequent measurement is fair",1991,0, 163,Project Management within Virtual Software Teams,"When implementing software development in a global environment, a popular strategy is the establishment of virtual teams. The objective of this paper is to examine the effective project management of this type of team. In the virtual team environment problems arise due to the collaborative nature of software development and the impact distance introduces. Distance specifically impacts coordination, visibility, communication and cooperation within a virtual team. In these circumstances the project management of a virtual team must be carried out in a different manner to that of a team in a single-site location. Results from this research highlighted six specific project management related areas that need to be addressed to facilitate successful virtual team operation. Organizational structure, risk management, infrastructure, process, conflict management and team structure and organization. Additional related areas are the sustained support of senior management and the provision of effective infrastructure",2006,0, 164,Protocols in the use of empirical software engineering artifacts,"Ethnography is a powerful qualitative empirical approach which can be used to understand and hence improve work practice. 
Ethnographically-informed methods are widely adopted in the Social Sciences but are not so popular with software engineering researchers. As with many inter-disciplinary approaches, ethnographic methods can be misunderstood and misapplied, leading to results being dismissed with a “so what?” response. Drawing on my own and others' experience of applying this approach in empirical studies of software practice, I will provide an overview of the role of ethnography in Software Engineering research. I will describe the use of ethnographic methods as a means to provide an in-depth understanding of the socio-technological realities surrounding everyday software development practice. The knowledge gained can be used to understand developers' work practices, to inform the development of new processes, methods and tools, and to evaluate and evolve existing practices.",2012,0, 165,"Software engineering: The past, the future, and your TCSE.","Although it has seen some spectacular successes, the software engineering field still has room for improvement. The Technical Council on Software Engineering is committed to advancing the development, application, and adoption of software engineering.",2005,0, 166,Statistical significance testing - a panacea for software technology experiments?,"
Empirical software engineering has a long history of utilizing statistical significance testing, and in many ways, it has become the backbone of the topic. What is less obvious is how much consideration has been given to its adoption. Statistical significance testing was initially designed for testing hypotheses in a very different area, and hence the question must be asked: does it transfer into empirical software engineering research? This paper attempts to address this question. The paper finds that this transference is far from straightforward, resulting in several problems in its deployment within the area. Principally, problems exist in: formulating hypotheses, the calculation of the probability values and their associated cut-off value, and the construction of the sample and its distribution. Hence, the paper concludes that the topic should explore other avenues of analysis, in an attempt to establish which analysis approaches are preferable under which conditions, when conducting empirical software engineering studies.

",2004,0, 167,The art and science of software architecture.,"In this paper we propose some security mechanisms that can be used in a multimedia content distribution system with digital rights management (DRM) to ensure that the software tools used in the client side are trusted not only in the moment when they are installed but during their whole life operation. For this purpose, certification, verification, reverification and recertification mechanisms are described, discussing the advantages as well as the drawbacks of using such techniques. A complex use case is presented to complement the description of the proposed mechanisms. The presented architecture and mechanisms are being implemented in the AXMEDIS project, which aims to create an innovative technology framework for the automatic production, protection and distribution of digital cross media contents over a range of different media channels, including PC (on the Internet), PDA, kiosks, mobile phones and i-TV.",2007,0, 168,Using Repository of Repositories (RoRs) to Study the Growth of F/OSS Projects: A Meta-Analysis Research Approach,"AbstractFree/Open Source Software (F/OSS) repositories contain valuable data and their usefulness in studying software development and community activities continues to attract a lot of research attention. A trend in F/OSS studies is the use of metadata stored in a repository of repositories or RoRs. This paper utilizes data obtained from such RoRs -FLOSSmole- to study the types of projects being developed by the F/OSS community. We downloaded projects by topics data in five areas (Database, Internet, Software Development, Communications, and Games/Entertainment) from Flossmoles raw and summary data of the sourceforge repository. Time series analysis show the numbers of projects in the five topics are growing linearly. Further analysis supports our hypothesis that F/OSS development is moving up the stack from developer tools and infrastructure support to end-user applications such as Databases. The findings have implications for the interpretation of the F/OSS landscape, the utilization and adoption of open source databases, and problems researchers might face in obtaining and using data from RoRs.",2007,0, 169,Using RFID Technologies to Capture Simulation Data in a Hospital Emergency Department,"Simulation professionals understand the importance of accurate data for model validation. Traditional sources of simulation data come from information technology systems, manual records from staff, observations, and estimates by subject matter experts. This paper discusses how radio frequency identification (RFID) technologies were used on a recent consulting engagement at a hospital. Data collected through RFID can validate or replace activity duration estimates from traditional sources. However, the accuracy and cost effectiveness of RFID is not guaranteed. A sound methodology was developed, which included rigorous planning and testing of hardware, processes and data analysis. Hardware vendors needed to understand what the simulation required so they could properly setup equipment and software. Also, ED staff needed to understand the purpose of this data collection to avoid anxiety about personnel evaluations. 
Finally, efficient and reliable issue and collection of patient tags was crucial to the success of this effort",2006,0, 170,Web engineering security: a practitioner's perspective,"Security is an elusive target in today's high-speed and extremely complex, Web enabled, information rich business environment. This paper presents the idea that there are essential, basic organizational elements that need to be identified, defined and addressed before examining security aspects of a Web engineering development process. These elements are derived from empirical evidence based on a Web survey and supporting literature. This paper makes two contributions. The first contribution is the identification of the Web engineering specific elements that need to be acknowledged and resolved prior to the assessment of a Web engineering process from a security perspective. The second contribution is that these elements can be used to help guide security improvement initiatives in Web engineering",2007,0, 171,2D-3D MultiAgent GeoSimulation with Knowledge-Based Agents of Customers? Shopping Behavior in a Shopping Mall,"

In this paper we present a simulation prototype of the customers' shopping behavior in a mall using a knowledge-based multiagent geosimulation approach. The shopping behavior in a shopping mall is performed in a geographic environment (a shopping mall) and is influenced by several of the shopper's characteristics (internal factors) and factors which are related to the shopping mall (external or situational factors). After identifying these factors from a large literature review we grouped them in what we called “dimensions”. Then we used these dimensions to design the knowledge-based agents' models for the shopping behavior simulation. These models are created from empirical data and implemented in the MAGS geosimulation platform. The empirical data have been collected from questionnaires in the Square One shopping mall in Toronto (Canada). After presenting the main characteristics of our prototype, we discuss how the managers of the Square One mall can use the Mall_MAGS prototype to make decisions about the mall's spatial configuration by comparing different simulation scenarios. The simulation results are presented to the mall's managers through a user-friendly tool that we developed to carry out data analysis.

",2005,0, 172,2nd International Workshop on Realising Evidence-Based Software Engineering (REBSE-2),"The REBSE international workshops are concerned with exploring the adaptation and use of the evidence-based paradigm in software engineering research and practice. The workshops address this goal through a mix of presentations and discussion, drawing upon ideas and experiences from other disciplines where appropriate.",2007,0, 173,2nd InternationalWorkshop on Realising Evidence-Based Software Engineering (REBSE-2): Overview and Introduction,"The REBSE international workshops are concerned with exploring the adaptation and use of the evidence-based paradigm in software engineering research and practice, through a mix of presentations and discussion. Here, we provide some background about evidence-based software engineering and its current state.",2007,0, 174,A Bayesian Approach to Modelling Users? Information Display Preferences,"

This paper describes the process by which we constructed a user model for ERST – an External Representation Selection Tutor – which recommends external representations (ERs) for particular database query task types based upon individual preferences, in order to enhance ER reasoning performance. The user model is based on experimental studies which examined the effect of background knowledge of ERs upon performance and preferences over different types of tasks.

",2005,0, 175,A Bayesian Model for Predicting Reliability of Software Systems at the Architectural Level,"

Modern society relies heavily on complex software systems for everyday activities. Dependability of these systems thus has become a critical feature that determines which products are going to be successfully and widely adopted. In this paper, we present an approach to modeling reliability of software systems at the architectural level. Dynamic Bayesian Networks are used to build a stochastic reliability model that relies on standard models of software architecture, and does not require implementation-level artifacts. Reliability values obtained via this approach can aid the architect in evaluating design alternatives. The approach is evaluated using sensitivity and uncertainty analysis.

",2007,0, 176,A brief survey of program slicing,"This paper presents a survey about different types of fuzzy information measures. A number of schemes have been proposed to combine the fuzzy set theory and its application to the entropy concept as a fuzzy information measurements. The entropy concept, as a relative degree of randomness, has been utilized to measure the fuzziness in a fuzzy set or system. However, a major difference exists between the classical Shannon entropy and the fuzzy entropy. In fact while the later deals with vagueness and ambiguous uncertainties, the former tackles probabilistic uncertainties (randomness)",2001,0, 177,A Case History of International Space Station Requirement Faul,"There is never enough time or money to perform verification and validation (V&V) or independent verification and validation (IV&V) on all aspects of a software development project, particularity for complex computer systems. We have only high-level knowledge of how the potential existence of specific requirements faults increases project risks, and of how specific V&V techniques (requirements tracing, code analysis, etc.) contribute to improved software reliability and reduced risk. An approach to this problem, fault-based analysis, is proposed and a case history of the National Aeronautics and Space Administration's (NASA) International Space Station (ISS) project is presented to illustrate its use. Specifically, a tailored requirement fault taxonomy was used to perform trend analysis of the historical profiles of three ISS computer software configuration items as well as to build a prototype common cause tree. ISS engineers evaluated the results and extracted lessons learned",2006,0, 178,A case study of combining i* framework and the Z notation,"Agent-oriented conceptual modeling (AoCM) frameworks are gaining wider popularity in software engineering. In this paper, we are using AoCM framework i* and the Z notation together for requirements engineering (RE). Most formal techniques like Z are suitable for and designed to work in the later phases of RE and early design stages of system development. We argue that early requirements analysis is a very crucial phase of software development. Understanding the organisational environment, reasoning and rationale underlying requirements along with the goals and social dependencies of its stakeholders are important to model and build effective computing systems. The i* framework is one such language which addresses early stage RE issues cited above extremely well. It supports the modeling of social dependencies between agents with respect to tasks and goals both functional and non-functional. We have developed a methodology involving the combined use of i* and the Z notation for agent-oriented RE. In our approach we suggest to perform one-to-one mapping between i* framework and Z. At the first instance general i* model has been mapped into Z schemas, and then i* diagrams of the Emergency Flood Rescue Management Case Study are mapped into Z. Some steps explaining further information refinement with examples are also provided. Using Z specification schemas, we are in a position to express properties that are not restricted to the current state of the system, but also to its past and future history. The case study described in this paper is taken from one of the most important responsibilities of the emergency services agency, managing flood rescue and evacuation operations. 
By using this case study, we have tested the effectiveness of our methodology to a real-life application",2004,0, 179,A Case Study of Reading Techniques in a Software Company,"Software inspection is an efficient method to detect faults early in the software lifecycle. This has been shown in several empirical studies together with experiments on reading techniques. However, experiments in industrial settings are often considered expensive for a software organization. Hence, many evaluations are performed in the academic environment with artificial documents. In this paper, we describe an empirical study in a software organization where a requirements document under development is used to compare two reading techniques. There are several benefits as well as drawbacks of using this kind of approach, which are extensively discussed in the paper. The reading techniques compared is the standard technique used in the organization (checklist-based) with the test perspective of perspective-based reading. The main result is that the test perspective of perspective-based reading seems more effective and efficient than the company standard method. The impact of this study is that the software organization will apply the new reading technique in future requirements inspections.",2004,0, 180,A Case Study: CRM Adoption Success Factor Analysis and Six Sigma DMAIC Application,"With today's increasingly competitive economy, many organizations have initiated customer relationship management (CRM) projects to improve customer satisfaction, revenue growth and employee productivity gains. However, only a few successful CRM implementations have successfully completed. In order to enhance the CRM implementation process and increase the success rate, in this paper, first we present the most significant success factors for CRM implementation identified by the results of literature reviews and a survey we conducted. Then we propose a strategy to integrate Six Sigma DMAIC methodology with the CRM implementation process addressing five critical success factors (CSF). Finally, we provide a case study to show how the proposed approach can be applied in the real CRM implementation projects. We conclude that by considering the critical success factors, the proposed approach can emphasize the critical part of implementation process and provide high possibility of CRM adoption success.",2007,0, 181,A cautionary note on checking software engineering papers for plagiarism.,"Several tools are marketed to the educational community for plagiarism detection and prevention. This article briefly contrasts the performance of two leading tools, TurnItIn and MyDropBox, in detecting submissions that were obviously plagiarized from articles published in IEEE journals. Both tools performed poorly because they do not compare submitted writings to publications in the IEEE database. Moreover, these tools do not cover the Association for Computing Machinery (ACM) database or several others important for scholarly work in software engineering. Reports from these tools suggesting that a submission has ldquopassedrdquo can encourage false confidence in the integrity of a submitted writing. Additionally, students can submit drafts to determine the extent to which these tools detect plagiarism in their work. Because the tool samples the engineering professional literature narrowly, the student who chooses to plagiarize can use this tool to determine what plagiarism will be invisible to the faculty member. 
An appearance of successful plagiarism prevention may in fact reflect better training of students to avoid plagiarism detection.",2008,0, 182,"A citation analysis of the ACE2005--2007 proceedings, with reference to the June 2007 CORE conference and journal rankings","

This paper compares the CORE rankings of computing education conferences and journals to the frequency of citation of those journals and conferences in the ACE2005, 2006 and 2007 proceedings. The assumption underlying this study is that citation rates are a measure of esteem, and so there should be a positive relationship between citation rates and rankings. The CORE conference rankings appear to broadly reflect the ACE citations, but there are some inconsistencies between citation rates and the journal rankings. The paper also identifies the most commonly cited books in these ACE proceedings. Finally, in the spirit of ""Quis custodiet ipsos custodes?"" the paper discusses some ways in which the CORE rankings process itself might in future be made more transparent and open to scholarly discourse.

",2008,0, 183,A Classification Proposal for Computer-Assisted Knee Systems,"This paper compares classification techniques to identify several power quality disturbances in a frame of smart metering design for smart grids with high penetration of PV systems. These techniques are: Linear Discriminant Analysis (LDA), Nearest Neighbor Method (kNN), Learning Vector Quantization (LVQ) and Support Vector Machine (SVM). For this purpose, fourteen power-quality features based in higher-order statistics are used to assist classification. Special attention is paid to the spectral kurtosis, whose nature enables measurement options related to the impulsiveness of the power quality events. The best technique of those compared is selected according to correlation and mistake rates. Results clearly reveal the potential capability of the methodology in classifying the single disturbances. Concretely, the SVM classifier obtained an average correlation rate of 99%. Hence, concluding that it is a robust classification method.",2015,0, 184,A Cognitive-Based Mechanism for Constructing Software Inspection Teams,"Software inspection is well-known as an effective means of defect detection. Nevertheless, recent research has suggested that the technique requires further development to optimize the inspection process. As the process is inherently group-based, one approach to improving performance is to attempt to minimize the commonality within the process and the group. This work proposes an approach to add diversity into the process by using a cognitively-based team selection mechanism. The paper argues that a team with diverse information processing strategies, as defined by the selection mechanism, maximize the number of different defects discovered.",2004,0, 185,A Collaborative Augmented Reality System Using Transparent Display,"Augmented reality is a technique to integrate the virtual (i.e., computer-generated) information with objects in the real world. Real world objects can thus be augmented in their properties and functions with the help of a computer. One of the authors has proposed an augmented reality system in which a transparent display is used as a means of integration of the real world and the virtual world. This paper discusses an extension of the system so that it can be used in a collaborative working environment. Capturing the gesture as a means of user interaction is realized by a single camera, aiming at simplifying the system setup and image analysis. We propose an idea of utilizing statistical data of human body and human's perceptional characteristics, as well as vision techniques. Two applications are presented to explain the usefulness of the system: one is a game-like application, which we call mind-to-mind communication, and the other a messenger object.",2004,0, 186,A comparative analysis of the efficiency of change metrics and static code attributes for defect prediction,"In this paper we present a comparative analysis of the predictive power of two different sets of metrics for defect prediction. We choose one set of product related and one set of process related software metrics and use them for classifying Java files of the Eclipse project as defective respective defect-free. Classification models are built using three common machine learners: logistic regression, naive Bayes, and decision trees. 
To allow different costs for prediction errors, we perform cost-sensitive classification, which proves to be very successful: >75% of files correctly classified, a recall of >80%, and a false positive rate <30%. Results indicate that for the Eclipse data, process metrics are more efficient defect predictors than code metrics.",2008,0, 187,A Comparative Longitudinal Study of Non-verbal Mouse Pointer,"

A longitudinal study of two non-speech continuous cursor control systems is presented in this paper: Whistling User Interface (U3I) and Vocal Joystick (VJ). This study combines the quantitative and qualitative methods to get a better understanding of novice users' experience over time. Three hypotheses were tested in this study. The quantitative data show that U3I performed better in error rate and in simulating a mouse click; VJ was better on other measures. The qualitative data indicate that the participants' opinions regarding both tools improved day-by-day. U3I was perceived as less fatiguing than VJ. U3I approached the performance of VJ at the end of the study period, indicating that these two systems can achieve similar performances as users get more experienced in using them. This study supports two hypotheses but does not provide enough evidence to support one hypothesis.

",2007,0, 188,A Comparison of Requirements Specification Methods from a Software Architecture Perspective,"One of the key challenges to producing high-quality software architecture is identifying and understanding the software's architecturally significant requirements. These requirements are the ones that have the most far-reaching effect on the architecture. In this report, five methods for the elicitation and expression of requirements are evaluated with respect to their ability to capture architecturally significant requirements. The methods evaluated are requirements specification using natural language, use case analysis, the Quality Attribute Workshop (developed by the Carnegie Mellon Software Engineering Institute), global analysis, and an approach developed by Fergus O'Brien. These methods were chosen because they are in widespread use or emphasize the capture of architecturally significant requirements. Three problems must be solved to systematically transform business and mission goals into architecturally significant requirements: (1) the requirements must be expressed in a form that provides the information necessary for design; (2) the elicitation of the requirements must capture architecturally significant requirements; and (3) the business and mission goals must provide systematic input for elicitation process. The primary finding from the evaluation of these methods is that there are promising solutions to the first two problems. However, there is no method for systematically considering the business and mission goals in the requirements elicitation.",2006,0, 189,A comprehensive synthesis of research,The paper introduces the concept of reflection matrices in the coupling matrix filter reconfiguration. It is shown that reflection matrices are complementary to rotation matrices a useful concept that can be used alternatively to rotation matrices in the similarity transformations that are applied in order to transform the coupling matrix to a suitable form. A cross-coupled filter example is given where both concepts are used.,2015,0, 190,A Computational Method for Viewing Molecular Interactions in Docking,"

A huge amount of molecular data is available in the Protein Data Bank and various other libraries, and this amount is increasing day by day. Devising new and efficient computational methods to extract useful information from this data is a big challenge for the researchers working in the field. Computational molecular docking refers to computational methods which attempt to obtain the best binding conformation of two interacting molecules. Information about the best binding conformation is useful in many applications such as rational drug design, recognition, cellular pathways, macromolecular assemblies, protein folding, etc. Docking has three important aspects: (i) modeling of molecular shape, (ii) shape matching and (iii) scoring and ranking of potential solutions. In this paper, a new approach is proposed for shape matching in rigid body docking. The method gives visual information about the matching conformations of the molecules. In the approach proposed here, a B-spline surface representation technique is used to model the patches of molecular surface. Surface normal and curvature properties are used to match these patches with each other. The 2-D approach used here for the generation of surface patches is useful for the pixellisation paradigm.

",2006,0, 191,A Computerized Infrastructure for Supporting Experimentation in Software Engineering,"Software engineering (SE) is predominantly a team effort that needs close cooperation among several people who may be geographically distributed. It has been recognized that appropriate tool support is a prerequisite to improve cooperation within SE teams. In an effort to contribute to this line of research, we have designed and developed an infrastructure, called ABC4GSD, based on the models of activity theory (AT) and the principles of the activity-based computing (ABC) paradigm. In this paper, we present a study that empirically evaluates the ability of ABC4GSD in supporting teams cooperation. We designed and executed a study based on a scenario that simulated the follow-the-Sun (FTS) strategy of global SE (GSE). Our research design allowed us to ensure cooperation to be both computer-mediated as well as contained within observable short time-windows - the hand-off activities of the FTS strategy. [Results] Overall, the results show that the cooperation support provided by the ABC4GSD system has been positively perceived by the participants. Nonetheless, open issues stimulating further investigations have been raised especially due to a few mixed results. Aware of the limitations of the simulated scenario, we conclude that the approach followed by the ABC4GSD system based on activities is desirable to improve the cooperation support in SE. Finally, our research approach based on simulating a scenario with geographical and temporal distribution can provide useful ideas for assessing collaborative technologies in SE.",2016,0, 192,A Concept-Based Framework for Retrieving Evidence to Support Emergency Physician Decision Making at the Point of Care,"

The goal of evidence-based medicine is to uniformly apply evidence gained from scientific research to aspects of clinical practice. In order to achieve this goal, new applications that integrate increasingly disparate health care information resources are required. Access to and provision of evidence must be seamlessly integrated with existing clinical workflow and evidence should be made available where it is most often required - at the point of care. In this paper we address these requirements and outline a concept-based framework that captures the context of a current patient-physician encounter by combining disease and patient-specific information into a logical query mechanism for retrieving relevant evidence from the Cochrane Library. Returned documents are organized by automatically extracting concepts from the evidence-based query to create meaningful clusters of documents which are presented in a manner appropriate for point of care support. The framework is currently being implemented as a prototype software agent that operates within the larger context of a multi-agent application for supporting workflow management of emergency pediatric asthma exacerbations.

",2007,0, 193,A content based retrieval system for renal scintigraphy images,"Increasing amount of image data raises the importance of content based query systems. Increasing hardware capacity and improving methods makes development of such systems more feasible. In this study, we aim to develop a content based image retrieval for renal (kidney) scintigraphy images. For this purpose, problem analysis, literature survey, data gathering/processing studies are done, and a content-based image retrieval (CBIR) engine software prototype is developed",2006,0, 194,A Cost-Effective Usability Evaluation Progression for Novel Interactive Systems,"This paper reports on user interface design and evaluation for a mobile, outdoor, augmented reality (AR) application. This novel system, called the battlefield augmented reality system (BARS), supports information presentation and entry for situation awareness in an urban war fighting setting. To our knowledge, this is the first time extensive use of usability engineering has been systematically applied to development of a real-world AR system. Our BARS team has applied a cost-effective progression of usability engineering activities from the very beginning of BARS development. We discuss how we first applied cycles of structured expert evaluations to BARS user interface development, employing user interface mockups representing occluded (non-visible) objects. Then we discuss how results of these evaluations informed our subsequent user-based statistical evaluations and formative evaluations, and present these evaluations and their outcomes. Finally, we discuss how and why this sequence of types of evaluation is cost-effective.",2004,0, 195,A Critical Analysis of the Council of Europe Recommendations on e-voting,"

In September 2004, the Council of Europe's Committee of Ministers officially adopted a set of standards recommended by the Multidisciplinary Ad Hoc Group of Specialists on legal, operational and technical standards for e-enabled voting [7].

This paper puts the standards in their historical context, examines them according to established software engineering principles, and finally suggests how they could be restructured.

",2006,0, 196,A Critical Approach to Privacy Research in Ubiquitous Environments ? Issues and Underlying Assumptions,"

This paper explores the different aspects of ubiquitous environments with regard to the protection of individuals' private life. A critical review of the relative research reveals two major trends. First, that there is a shift in the perception of privacy protection, which is increasingly considered as a responsibility of the individual, instead of an individual right protected by a central authority, such as a state and its laws. Second, it appears that current IT research is largely based on the assumption that personal privacy is quantifiable and bargainable. This paper discusses the impact of these trends and underlines the issues and challenges that emerge. The paper stresses that, for the time being, IT research approaches privacy in ubiquitous environments without taking into account the different aspects and the basic principles of privacy. Finally the paper stresses the need for multidisciplinary research in the area, and the importance that IT research receives input from other related disciplines such as law and psychology. The aim of the paper is to contribute to the ongoing discourse about the nature of privacy and its role in ubiquitous environments and provide insights for future research.

",2007,0, 197,A Cross-Cultural Study of Flow Experience in the IT Environment: The Beginning,"

Flow (optimal) experience is being widely investigated in IT environments: in human-computer interaction, computer-mediated communication and exploratory behaviour, consumer and marketing applications, educational practice, playing computer, video and online games, psychological rehabilitation of the disabled, web usability testing, etc. Though a universal experience, flow can be expected to be culture specific and culture dependent. Optimal experience has only rarely been studied from a cross-cultural perspective, mainly in the field of gaming activities. An overview of the earliest works in the field is presented, as well as empirical evidence from a study of the flow experience and interaction patterns inherent in samples of Russian and French online players.

",2007,0, 198,A Cross-Lingual Framework for Web News Taxonomy Integration,"

There are currently many news sites providing online news articles, and many Web news portals have arisen to provide clustered news categories for users to browse more related news reports and understand the news events in depth. However, to the best of our knowledge, most Web news portals only provide monolingual news clustering services. In this paper, we study the cross-lingual Web news taxonomy integration problem, in which news articles of the same news event reported in different languages are to be integrated into one category. Our study is based on cross-lingual classification research results and the cross-training concept to construct SVM-based classifiers for cross-lingual Web news taxonomy integration. We have conducted several experiments with news articles from Google News as the experimental data sets. From the experimental results, we find that the proposed cross-training classifiers outperform the traditional SVM classifiers in an all-round manner. We believe that the proposed framework can be applied to different bilingual environments.

",2006,0, 199,A curriculum for embedded system engineering,"This paper describes an educational achievement in embedded system field from the academic year 2010 to 2011 in a national college of technology called “KOSEN.” We the authors have been continuing specialized education in the field of the embedded system in order to let students be work-ready engineers in various industrial fields. However, it is recently getting harder and harder for students to imagine their future vision as engineers and to excite their spontaneous motivation to learn. One reason is the fact that it is difficult for students to imagine relationship between final embedded-products and elementary contents of lectures that they study in the college. Furthermore, recent embedded systems are too complex to put the whole picture together. To cope with these problems, we have achieved the following four activities for improvement of our curriculum: 1) Maturing educational partnership with local schools through open-lectures to provide early education to pupils who will be our students in future; 2) Developing a multiplicity-carrying microprocessor board as a new teaching material with its online manual and software libraries; 3) Carrying out several special lectures and meetings which are held for various stages of students in order to provide opportunities for them to become interested in the actual industrial fields; 4) Incorporating a lecture on model-based embedded-product design for advanced course students. Through these activities for improvement of our curriculum, we had provided diverse opportunities to students. We verified the effectiveness of our activities by using questionnaires, and then more than 80% of students affirmed the effectiveness.",2012,0, 200,A Data Integration Broker for Healthcare Systems,"A prototype information broker uses a software service model to collect and integrate diverse patient data from autonomous healthcare agencies, potentially solving many problems that challenge current enterprise-based file systems",2007,0, 201,A DEFECT PREDICTION METHOD FOR SOFTWARE VERSION DIFFERENCES,"One of the challenges in any software organization is the prediction of acceptable degree of software. The effort invested in a software project in terms of required hours of work against required number of people for a project is probably one of the most important and most analysed variables in recent years in the process of prediction of project success. Thus, effort estimation with a high grade of reliability remains as one of the risky components where in the project manager has to deal with it since the inception of project development. Over the past decades hence, prediction of product quality within software engineering, preventive and corrective actions within the various project phases are constantly improved. This paper therefore introduces a novel hybrid method of random forest (RF) and Fuzzy C Means (FCM) clustering for building defect prediction model. Initially, random forest algorithm is used to perform a preliminary screening of variables and to gain an importance ranks. Subsequently, the new dataset is input into the FCM technique, which is responsible for building interpretable models for predicting defects. The capability of this combination method is evaluated using basic performance measurements along with a 10-fold cross validation. FCM and RF technique is applied to software components such as people, process, which act as major decision making model for project success. 
Experimental results show that the proposed method provides a higher accuracy and a relatively simple model enabling a better prediction of software defects.",2014,0, 202,A Deferrable Scheduling Algorithm for Real-Time Transactions Maintaining Data Freshness,"Periodic update transaction model has been used to maintain freshness (or temporal validity) of real-time data. Period and deadline assignment has been the main focus in the past studies such as the more-less scheme by Xiong and Ramamrithan (2004) in which update transactions are guaranteed by the deadline monotonic scheduling algorithm by Leung and Whitehead (1982) to complete by their deadlines. In this paper, we propose a novel algorithm, namely deferrable scheduling, for minimizing imposed workload while maintaining temporal validity of real-time data. In contrast to previous work, update transactions scheduled by the deferrable scheduling algorithm follow a sporadic task model. The deferrable scheduling algorithm exploits the semantics of temporal validity constraint of real-time data by judiciously deferring the sampling times of update transaction jobs as late as possible. We present a theoretical analysis of its processor utilization, which is verified in our experiments. Our experimental results also demonstrate that the deferrable scheduling algorithm is a very effective approach, and it significantly outperforms the more-less scheme in terms of reducing processor workload",2005,0, 203,A design for evidence - based soft research,"Data processing is a key issue of the design of RFID middleware, and the general processing mode is based on Event. The data processed and output by this solution canpsilat really meet the needs of various application systems; nevertheless, the complex event processing technology can overcome the disadvantage effectively. According to the principles that can make it effective to manage RFID data, this paper proposes a RFID data processing model based on complex event processing (CEP) technique, provides the related definitions, elaborates the function modules involved (the event monitoring module, the buffer module, the event processing module and the event sending/subscribe module) and their solutions.",2008,0, 204,A Domain Engineering Approach to Specifying and Applying Reference Models,"Business process modeling and design, as an essential part of business process management, has gained much attention in recent years. An important tool for this purpose is reference models, whose aim is to capture domain knowledge and assist in the design of enterprise specific business processes. However, while much attention has been given to the content of these models, the actual process of reusing this knowledge has not been extensively addressed. In order to address this lack, we propose to utilize a domain engineering approach, called Applicationbased Domain Modeling (ADOM), for the purpose of specifying and applying reference models. We demonstrate the approach by specifying a sell process reference model and instantiating it for a chocolate manufacturer. 
The benefits of utilizing the ADOM approach for specifying business models are the provisioning of validation templates by the reference models and the ability to apply the approach to various modeling languages and business process views.",2005,0, 205,A family of empirical studies to compare informal and optimization-based planning of software releases,"Replication of experiments, or performing a series of related studies, aims at attaining a higher level of validity of results. This paper reports on a series of empirical studies devoted to comparing informal release planning with two variants of optimization-based release planning. Two research questions were studied: How does optimization-based release planning compare with informal planning in terms of (i) time to generate release plans, and the feasibility and quality of those plans, and (ii) understanding and confidence of generated solutions and trust in the release planning process. For the family of empirical studies, the paper presents two types of results related to (i) the two research questions to compare the release planning techniques, and (ii) the evolution and lessons learned while conducting the studies.",2006,0, 206,A Fast Computation of Inter-class Overlap Measures Using Prototype Reduction Schemes,"

In most Pattern Recognition (PR) applications, it is advantageous if the accuracy (or error rate) of the classifier can be evaluated or bounded prior to testing it in a real-life setting. It is also well known that if the two class-conditional distributions have a large overlapping volume, the classification accuracy is poor. This is because, if we intend to use the classification accuracy as a criterion for evaluating a PR system, the points within the overlapping volume tend to have less significance in determining the prototypes. Unfortunately, the computation of the indices which quantify the overlapping volume is expensive. In this vein, we propose a strategy of using a Prototype Reduction Scheme (PRS) to approximately compute the latter. In this paper, we show that by completely discarding the points not included by the PRS, we can obtain a reduced set of sample points, using which, in turn, the measures for the overlapping volume can be computed. The value of the corresponding figures is comparable to those obtained with the original training set (i.e., the one which considers all the data points) even though the computations required to obtain the prototypes and the corresponding measures are significantly less. The proposed method has been rigorously tested on artificial and real-life data sets, and the results obtained are, in our opinion, quite impressive - sometimes faster by two orders of magnitude.

",2008,0, 207,A first draft of a Model-driven Method for Designing Graphical User Interfaces of Rich Internet Applications,"The design and development of graphical user interfaces for rich Internet applications are well known difficult tasks with tools. The designers must be aware of the computing platform, the user's characteristics (education, social background, among others) and the environment within users must interact with the application. We present a method to design this type of user interfaces that is model-based and applies an iterative series of XSLT transformations to translate the abstract modeled interface into a final user interface that is coded in a specific platform. In order to avoid the proprietary engines dependency for designing tasks. UsiXML is used to model all the levels. Several model based technologies have been proposed and in this paper we review a XML-compliant user interface description language: XAML",2006,0, 208,A flexible method for maintaining software metrics data: a universal metrics repository.,"A neglected aspect of software measurement programs is what will be done with the metrics once they are collected. Often databases of metrics information tend to be developed as an afterthought, with little, if any concessions to future data needs, or long-term, sustaining metrics collection efforts. A metric repository should facilitate an on-going metrics collection effort, as well as serving as the ?corporate memory? of past projects, their histories and experiences. In order to address these issues, we describe a transformational view of software development that treats the software development process as a series of artifact transformations. Each transformation has inputs and produces outputs. The use of this approach supports a very flexible software engineering metrics repository.",2004,0, 209,A formal framework for component deployment,"Prefetching is an effective method for minimizing the number of fetches between the client and the server in a database management system. In this paper, we formally define the notion of prefetching. We also formally propose new notions of the type-level access locality and type-level access pattern. The type-level access locality is a phenomenon that repetitive patterns exist in the attributes referenced. The type-level access pattern is a pattern of attributes that are referenced in accessing the objects. We then develop an efficient capturing and prefetching policy based on this formal framework. Existing prefetching methods are based on object-level or page-level access patterns, which consist of object-ids or page-ids of the objects accessed. However, the drawback of these methods is that they work only when exactly the same objects or pages are accessed repeatedly. In contrast, even though the same objects are not accessed repeatedly, our technique effectively prefetches objects if the same attributes are referenced repeatedly, i.e., if there is type-level access locality. Many navigational applications in object-relational database management systems (ORDBMSs) have type-level access locality. Therefore, our technique can be employed in ORDBMSs to effectively reduce the number of fetches, thereby significantly enhancing the performance. We also address issues in implementing the proposed algorithm. We have conducted extensive experiments in a prototype ORDBMS to show effectiveness of our algorithm. 
Experimental results using the OO7 benchmark, a real GIS application, and an XML application show that our technique reduces the number of fetches by orders of magnitude and improves the elapsed time by several factors over on-demand fetching and context-based prefetching, which is a state-of-the-art prefetching method. These results indicate that our approach provides a new paradigm in prefetching that improves performance of navigational applications significantly and is a practical method that can be implemented in commercial ORDBMSs.",2005,0, 210,A Formalism for Context-Aware Mobile Computing,"Mobile devices, such as mobile phones and PDAs, have gained wide-spread popularity. Applications for this kind of mobile device have to adapt to changes in context, such as variations in network bandwidth, battery power, connectivity, reachability of services and hosts, and so on. In this paper, we define context-aware action systems that provide a systematic method for managing and processing context information. The meaning of context-aware action systems is defined in terms of classical action systems, so that the properties of context-aware action systems can be proved using standard action systems proof techniques. We describe the essential notions of this formalism and illustrate the framework with examples of context-aware services for mobile applications.",2004,0, 211,A framework approach to measure innovation maturity,"Innovation is a central theme in the mission statements of the majority of knowledge economy organizations. It represents a core renewal process in any organization. Competitive organizations innovate continuously. But how are they aware that they are improving enough? Three important phases arise when one looks at an organization with respect to Innovation in its entirety, through a cluster of relationships:",2005,0, 212,A Framework for Assessing Business Value of Service Oriented Architectures,"This paper presents a framework used for analyzing decisions regarding implementations of service oriented architectures. The framework assesses the business value of SOA by measuring the modification cost, i.e. the effort needed to become service oriented, and the benefits that can be gained for an enterprise using SOA.",2007,0, 213,A framework for building reality-based interfaces for wireless-grid applications,"While significant advances have been made to improve speech recognition performance, and gesture and handwriting recognition, speech- and pen-based systems have still not found broad acceptance in everyday life. One reason for this is the inflexibility of each input modality when used alone. Human communication is very natural and flexible because we can take advantage of a multiplicity of communication signals working in concert to supply complementary information or increase robustness with redundancy. We present a multimodal interface capable of jointly interpreting speech, pen-based gestures, and handwriting in the context of an appointment scheduling application. The interpretation engine based on semantic frame merging correctly interprets 80% of a multimodal data set assuming perfect speech and gesture/handwriting recognition; in the presence of recognition errors the interpretation performance is in the range of 35-62%. 
A dialog processing scheme uses task domain knowledge to guide the user in supplying information and permits human-computer interactions to span several related multimodal input events",1996,0, 214,A Framework for Classifying and Comparing Software Architecture Evaluation Methods,"Software architecture evaluation has been proposed as a means to achieve quality attributes such as maintainability and reliability in a system. The objective of the evaluation is to assess whether or not the architecture lead to the desired quality attributes. Recently, there have been a number of evaluation methods proposed. There is, however, little consensus on the technical and nontechnical issues that a method should comprehensively address and which of the existing methods is most suitable for a particular issue. We present a set of commonly known but informally described features of an evaluation method and organizes them within a framework that should offer guidance on the choice of the most appropriate method for an evaluation exercise. We use this framework to characterise eight SA evaluation methods.",2004,0, 215,A framework for developing a knowledge-based decision support system for management of variation orders for institutional buildings,"This study describes the framework for developing a knowledge-based decision support system (KBDSS) for making more informed decisions for managing variation orders in institutional buildings. The KBDSS framework consists of two main components, i.e., a knowledge base and a decision support shell. The database will be developed through collecting data from source documents of 80 institutional projects, questionnaire survey, literature review and in-depth interview sessions with the professionals who were involved in these institutional projects. The knowledge base will be developed through initial sieving and organization of data from the database. The decision support shell would provide decision support through a structured process consisting of building the hierarchy between the main criteria and the suggested controls, rating the controls, and analyzing the controls for selection through multiple analytical techniques. The KBDSS would be capable of displaying variations and their relevant details, a variety of filtered knowledge, and various analyses of available knowledge. This would eventually lead the decision maker to the suggested controls for variations and assist in selecting the most appropriate controls. The KBDSS would assist project managers by providing accurate and timely information for decision-making, and a user-friendly system for analyzing and selecting the controls for variation orders for institutional buildings. The study would assist building professionals in developing an effective variation management system. The system would be helpful for them to take proactive measures for reducing variation orders. The findings from this study would also be valuable for all building professionals in general.",2006,0, 216,A framework for evaluating usability of clinical monitoring technology,"Technology design is a complex task, and acceptability is enhanced when usability is central to its design. Evaluating usability is a challenge for purchasers and developers of technology. We have developed a framework for testing the usability of clinical monitoring technology through literature review and experience designing clinical monitors. The framework can help designers meet key international usability norms. 
The framework includes these direct testing methods: thinking aloud, question asking, co-discovery, performance and psychophysiological measurement. Indirect testing methods include: questionnaires and interviews, observation and ethnographic studies, and self-reporting logs. Inspection, a third usability testing method, is also included. The use of these methods is described and practical examples of how they would be used in the development of an innovative monitor are given throughout. This framework is built on a range of methods to ensure harmony between users and new clinical monitoring technology, and have been selected to be practical to use.",2007,0, 217,A framework for methodologies of visual modeling language evaluation,"We present a framework of syntactic models for the definition and implementation of visual languages. We analyze a wide range of existing visual languages and, for each of them, we propose a characterization according to a syntactic model. The framework has been implemented in the Visual Language Compiler-Compiler (VLCC) system. VLCC is a practical, flexible and extensible tool for the automatic generation of visual programming environments which allows to implement visual languages once they are modeled according to a syntactic model",1997,0, 218,A Framework for Software Engineering Experimental Replications,"Experimental replications are very important to the advancement of empirical software engineering. Replications are one of the key mechanisms to confirm previous experimental findings. They are also used to transfer experimental knowledge, to train people, and to expand a base of experimental evidence. Unfortunately, experimental replications are difficult endeavors. It is not easy to transfer experimental know-how and experimental findings. Based on our experience, this paper discusses this problem and proposes a Framework for Improving the Replication of Experiments (FIRE). The FIRE addresses knowledge sharing issues both at the intra-group (internal replications) and inter-group (external replications) levels. It encourages coordination of replications in order to facilitate knowledge transfer for lower cost, higher quality replications and more generalizable results.",2008,0, 219,A framework for the design and verification of software measurement methods,"The paper presents a method for efficiently verifying service software. This method is used to design a service creation environment (SCE) for NTT's advanced intelligent network (advanced IN). The authors classify types of service software verifications, and then propose a verification method based on these classifications. This verification method consists of three steps: specification verification, simulation, and an actual machine test. The SCE provides a verification environment for each verification step. Use of this SCE shows that manpower required for verification can be reduced. The paper mainly describes verification of service logic programs (SLPs), but verification of management logic programs (MLPs), and operation logic programs (OLPs) is also briefly described",1994,0, 220,A framework for the effective adoption of software development methodologies,"Software reliability has been regarded as one of the most important quality attributes for software intensive systems, especially in defense domain. 
As most of a weapon system's complicated functionalities and controls are implemented by software embedded in hardware systems, it has become more critical to assure high reliability for the software itself. However, many software development organizations in the Korean defense domain have had problems in performing reliability engineered processes for developing mission-critical and/or safety-critical weapon systems. In this paper, we propose an effective framework with which software organizations can identify and select metrics associated with software reliability, analyze the collected data, appraise software reliability, and develop a software reliability prediction/estimation model based on the results of the data analyses.",2008,0, 221,A Framework for Three-Dimensional Simulation of Morphogenesis,"We present COMPUCELL3D, a software framework for three-dimensional simulation of morphogenesis in different organisms. COMPUCELL3D employs biologically relevant models for cell clustering, growth, and interaction with chemical fields. COMPUCELL3D uses design patterns for speed, efficient memory management, extensibility, and flexibility to allow an almost unlimited variety of simulations. We have verified COMPUCELL3D by building a model of growth and skeletal pattern formation in the avian (chicken) limb bud. Binaries and source code are available, along with documentation and input files for sample simulations, at http://compucell.sourceforge.net.",2005,0, 222,A framework for virtual community business success: The case of the internet chess club,"Prior work has identified, in piecemeal fashion, desirable characteristics of virtual community businesses (VCBs) such as inimitable information assets, persistent handles fomenting trust, and an economic infrastructure. The present work develops a framework for the success of a subscription-based VCB by taking into account the above elements and considering as well an interplay of the membership (both regular members and volunteers), technical features of the interface, and an evolutionary business model that supports member subgroups as they form. Our framework is applied by an in-depth survey of use and attitude of regular members and volunteers in the Internet Chess Club (ICC), a popular subscription-based VCB. The survey results reveal that key features of the model are supported in the ICC case: member subgroups follow customized communication pathways; a corps of volunteers is supported and recognized, and the custom interface presents clear navigation pathways to the ICC's key large-scale information asset, a multi-million game database contributed by real-world chess Grandmasters who enjoy complimentary ICC membership. We conclude by discussing VCBs in general and how the framework might apply to other domains.",2004,0, 223,"A Framework of Multi-Agent-Based Modeling, Simulation, and Computational Assistance in an Ubiquitous Environment","The author exemplifies the framework of PSI (Pervasive System for Indoor-GIS) for exploring the spatial model of dynamic human behavior and developing various services in an ubiquitous computational environment. This does not mean merely constructing a software framework; rather, it means attempting to establish an inclusive service framework for ubiquitous computing. The most advantageous aspect of this component-oriented framework is that it contributes toward developing various service applications for ubiquitous computation as the occasion demands. 
That is, service applications are expected to do modeling, simulation, monitoring, Web services, and other applications. The author also discusses the relative issues on making a coordinating bridge between behavioral sciences and multiagent systems.",2004,0, 224,A framework to support multiple reconfiguration strategies,"This paper presents a framework to instantiate software technologies selection approaches by using search techniques. The software technologies selection problem (STSP) is modeled as a Combinatorial Optimization problem aiming attending different real scenarios in Software Engineering. The proposed framework works as a top-level layer over generic optimization frameworks that implement a high number of metaheuristics proposed in the technical literature, such as JMetal and OPT4J. It aims supporting software engineers that are not able to use optimization frameworks during a software project due to short deadlines and limited resources or skills. The framework was evaluated in a case study of a complex real-world software engineering scenario. This scenario was modeled as the STSP and some experiments were executed with different metaheuristics using the proposed framework. The results indicate its feasibility as support to the selection of software technologies.",2014,0, 225,A Functional Semantic Web Architecture,"This paper presents a novel architecture for collaborative multi-agent functional modeling in design on the semantic Web. The proposed architecture is composed of two visiting levels, i.e., local level and global level. The local level is an ontology-based functional modeling framework, which uses Web ontology language (OWL) to build a domain-specific local functional design ontology repository. Using this local ontology repository, the requests coming from the functional design agent can be parsed and performed effectively. The global level is a distributed multi-agent collaborative virtual environment, in which, OWL is used as a content language within the standard FIPA agent communication language (FIPA ACL) messages for describing ontologies, enabling diverse functional design ontologies between different agents to be communicated freely. The proposed architecture facilitates the exchange between diverse knowledge representation schemes in different functional modeling environments, and supports computer supported cooperative work (CSCW) between multiple functional design agents",2006,0, 226,A fuzzy classifier approach to assessing the progression of adolescent idiopathic scoliosis from radiographic indicators,"A fuzzy classifier approach was used to predict the progression of adolescent idiopathic scoliosis (AIS). Past studies indicate that individual indicators of AIS do not reliably predict progression. Complex indicators having improved predictive values have been developed but are unsuitable for clinical use. Based on the hypothesis that combining some common indicators with a fuzzy classifier could produce better results, we conducted a study using radiographic indicators measured from 44 moderate AIS patients. We clustered the data using a fuzzy c-means classifier and designed fuzzy rules to represent each cluster. We classified the records in the dataset using the resulting rules. This approach outperformed a binary logistic regression method and a stepwise linear regression method. 
Less than fifteen minutes per patient is required to measure the indicators, input the data into the system and generate results enabling its use in a clinical environment to aid in the management of AIS.",2004,0, 227,A Generic Visual Critic Authoring Tool,"Critic tools have been used for many domains, including design sketches, education, general engineering, and software design. The focus of this research is to develop a generic visual critic authoring framework embedded within an end user oriented domain specific visual language meta tool. This will allow tool critic support to be rapidly developed in parallel with the tools themselves.",2007,0, 228,A Graph Model for E-Commerce Recommender Systems,"Timed failure propagation graphs (TFPG) are causal models that capture the temporal aspects of failure propagation in dynamic systems. In this paper we present several practical modeling and reasoning considerations that have been addressed based on experience with avionics systems. These include the problem of intermittent faults, handling test alarms, dealing with limited computational resources, and model reduction for large scale systems.",2007,0, 229,A Hands-On Approach for Teaching Systematic Review,"Reviews are an integral part of the software development process. They are one of the key methodologies that undergraduates study in order to develop quality software. Despite their importance, reviews are rarely used in software engineering projects at the baccalaureate level. This paper demonstrates results from a study conducted on students at baccalaureate level enrolled in a one-semester software engineering course at the National University of Computer and Emerging Sciences - Foundation for Advancement of Science and Technology (NUCES-FAST) in Pakistan. The objectives of the study are: to determine how the various team review techniques help to educate students about the importance of the review process and find which technique is more suitable for teaching reviews to undergraduates. Two variations on team review are proposed: Similar Domain Review (SDR) and Cross-Domain Review (CDR) without author. The paper presents a comparison of the proposed and existing team review techniques and measures their effectiveness in terms of defect detection. The results show that the proposed variation SDR is more effective in defect detection than CDR (with/without author). Another interesting result is that the proposed CDR-without author is better than CDR with author (the existing team review approach). Also, early defect detection enabled students to incorporate changes and improve the software quality.",2010,0,230 230,A Hands-On Approach for Teaching Systematic Review,"Reviews are an integral part of the software development process. They are one of the key methodologies that undergraduates study in order to develop quality software. Despite their importance, reviews are rarely used in software engineering projects at the baccalaureate level. This paper demonstrates results from a study conducted on students at baccalaureate level enrolled in a one-semester software engineering course at the National University of Computer and Emerging Sciences - Foundation for Advancement of Science and Technology (NUCES-FAST) in Pakistan. The objectives of the study are: to determine how the various team review techniques help to educate students about the importance of the review process and find which technique is more suitable for teaching reviews to undergraduates. 
Two variations on team review are proposed: Similar Domain Review (SDR) and Cross-Domain Review (CDR) without author. The paper presents a comparison of the proposed and existing team review techniques and measures their effectiveness in terms of defect detection. The results show that the proposed variation SDR is more effective in defect detection than CDR (with/without author). Another interesting result is that the proposed CDR-without author is better than CDR with author (the existing team review approach). Also, early defect detection enabled students to incorporate changes and improve the software quality.",2010,0, 231,A heretofore undisclosed crux of Eosinophilia-Myalgia Syndrome: compromised histamine degradation,"Abstract.In contrast to early epidemiological evidence offering links between eosinophilia-myalgia syndrome (EMS) and microimpurities of L-tryptophan-containing dietary supplements (LTCDS), this account shows why reliance on a finite impurity from one manufacturer is both unnecessary and insufficient to explain the etiology of EMS. Excessive histamine activity has induced blood eosinophilia and myalgia (Greek: mys, muscle + algos, pain). Termination of the multiple actions of histamine is dependent on particular amine oxidases and histamine-N-methyltransferase. Histamine metabolism is rapid when these degradative reactions are operative. The latent effects of incurred histamine can be potentiated and aggravating when these mechanisms are impaired. Overloads of tryptophan supplements cause among other relevant side-effects an increased formation of formate and indolyl metabolites, several of which inhibit the degradation of histamine. Moreover, (non-EMS) subjects with hypothalamic-pituitary- adrenal (HPA) axis dysregulation have also manifested greatly increased sensitivities to incurred tryptophan and histamine. A final common pathway for syndromes characterized by eosinophilia with myalgia is now evident.",2005,0, 232,A Human-Oriented Tuning of Workflow Management Systems,"The resources needed to execute workflows in a Grid environment are commonly highly distributed, heterogeneous, and managed by different organizations. One of the main challenges in the development of Grid infrastructure services is the effective management of those resources in such a way that much of the heterogeneity is hidden from the end-user. This requires the ability to orchestrate the use of various resources of different types. Grid production infrastructures such as EGEE, DEISA and TeraGrid allow sharing of heterogeneous resources in order for supporting large-scale experiments. The interoperability of these infrastructures and the middleware stacks enabling applications migration and/or aggregating the combined resources of these infrastructures, are of particular importance to facilitate a grid with global reach. The relevance of current and emerging standards for such interoperability is the goal of many researches in the recent years. Starting from a first prototype of grid workflow, we have improved the current version introducing several features such as fault tolerance, security and a different management of the jobs. So our grid meta scheduler, named GMS (Grid Meta Scheduler), has been redesigned, able to be interoperable among grid middleware (gLite, Unicore and Globus) when executing workflows. 
It allows the composition of batch, parameter sweep and MPI based jobs.",2010,0, 233,A Hybrid Approach to Concept Extraction and Recognition-Based Matching in the Domain of Human Resources,We describe the Convex system for extracting concepts from resumes and subsequently matching the best qualified candidates to jobs. A blend of knowledge-based and speculative concept extraction provides high quality results even outside the scope of the built-in knowledge. A comparison test shows the results found by Convex are significantly better than those found by engines using a keyword or statistical conceptual approach.,2004,0, 234,A Knowledge-Based Modeling System for Time-Critical Dynamic Decision-Making,"In this paper, we study the both-branch fuzzy dynamic decision-making by using the method of set pair analysis in both-branch fuzzy system. The factors and the up-branch, down-branch, both-branch of the Both-branch fuzzy decision-making are analysed with the set pair analysis respectively. It provides the method of both-branch fuzzy decision-making based on SPA, builds up the model of both-branch fuzzy decision degree based on SPA, puts forward the method of forecasting are strongness and weakness situation of the both-branch fuzzy decision degree. Thereby, the paper provides the new prespective and method for researching the both-branch fuzzy decision-making theory, gives a better instrument and more reasional theory for the application of the both-branch fuzzy decision-making.",2010,0, 235,A Landmarker Selection Algorithm Based on Correlation and Efficiency Criteria,"

Landmarking is a recent and promising meta-learning strategy, which defines meta-features that are themselves efficient learning algorithms. However, the choice of landmarkers is often made in an ad hoc manner. In this paper, we propose a new perspective and set of criteria for landmarkers. Based on the new criteria, we propose a landmarker generation algorithm, which generates a set of landmarkers that are each subsets of the algorithms being landmarked. Our experiments show that the landmarkers formed, when used with linear regression, are able to estimate the accuracy of a set of candidate algorithms well, while only utilising a small fraction of the computational cost required to evaluate those candidate algorithms via ten-fold cross-validation.

",2004,0, 236,A Learner Model for Learning-by-Example Context,"Nowadays learning environments put more and more accent on the intelligence of the system. The intelligence of a learning environment is largely attributed to its ability of adapting to a specific learner during the learning process. The adaptation depends on individual learner's knowledge of the subject to be learned, and other relevant characteristics of the learner. The knowledge and the relevant information about the learner are maintained in the learner model. A learner model can be defined as structured information about the learning process; and this structure contains some values of the learner's characteristics. This paper proposes a new learner model, which is based on the consideration of what is appropriate to the learning-by-example context. The model records five categories of information about the learner: personal data, learner's characteristics, learning state, learner's interactions with the system, and learner's knowledge. This model is being integrated in Sphinx, an educational environment based on learning by means of examples.",2007,0, 237,A Lifecycle Approach towards Business Rules Management,"Automating business rules management has provided significant benefits including greater control, improved flexibility, and the ability to rapidly deploy business rules across processes, information systems and channels (Web, legacy, wireless and otherwise). These benefits, in addition to trends in service orientated architectures, Web semantics, and business process management, have spawned an emerging business rules engine (BRE) market. Despite these developments, little has been published in MIS journals that examine the management of business rules management systems (BRMS) development and deployments. Making use of structuration research methods, we collect data from leading developers, end- users, researchers and thought-leaders from the industry. Data collection results revealed a business rules management lifecycle inclusive of these steps: align, capture, organize, author, distribute, test, apply, maintain. The contextual influences, actors, inputs, outputs and artifacts are identified in each step. Academic and managerial contributions, as well as recommendations for future research are provided.",2008,0, 238,A Lightweight Approach to Semantic Annotation of Research Papers,"

This paper presents a novel application of a semantic annotation system, named Cerno, to analyze research publications in electronic format. Specifically, we address the problem of providing automatic support for authors who need to deal with large volumes of research documents. To this end, we have developed Biblio, a user-friendly tool based on Cerno. The tool directs the user's attention to the most important elements of the papers and provides assistance by automatically generating a list of references and an annotated bibliography given a collection of published research articles. The tool's performance has been evaluated on a set of papers and preliminary evaluation results are promising. The backend of Biblio uses a standard relational database to store the results.

",2007,0, 239,A literature survey of the quality economics of defect-detection techniques,"There are various ways to evaluate defect-detection techniques. However, for a comprehensive evaluation the only possibility is to reduce all influencing factors to costs. There are already some models and metrics for the cost of quality that can be used in that context. The existing metrics for the effectiveness and efficiency of defect-detection techniques and experiences with them are combined with cost metrics to allow a more fine-grained estimation of costs and a comprehensive evaluation of defect-detection techniques. The current model is most suitable for directly comparing concrete applications of different techniques",2005,0, 240,A machine learning approach to TCP throughput prediction,"TCP throughput prediction is an important capability for networks where multiple paths exist between data senders and receivers. In this paper, we describe a new lightweight method for TCP throughput prediction. Our predictor uses Support Vector Regression (SVR); prediction is based on both prior file transfer history and measurements of simple path properties. We evaluate our predictor in a laboratory setting where ground truth can be measured with perfect accuracy. We report the performance of our predictor for oracular and practical measurements of path properties over a wide range of traffic conditions and transfer sizes. For bulk transfers in heavy traffic using oracular measurements, TCP throughput is predicted within 10% of the actual value 87% of the time, representing nearly a threefold improvement in accuracy over prior history-based methods. For practical measurements of path properties, predictions can be made within 10% of the actual value nearly 50% of the time, approximately a 60% improvement over history-based methods, and with much lower measurement traffic overhead. We implement our predictor in a tool called PathPerf, test it in the wide area, and show that PathPerf predicts TCP throughput accurately over diverse wide area paths.",2010,0, 241,A Manifesto for Agent Technology: Towards Next Generation Computing,"

The European Commission's eEurope initiative aims to bring every citizen, home, school, business and administration online to create a digitally literate Europe. The value lies not in the objective itself, but in its ability to facilitate the advance of Europe into new ways of living and working. Just as in the first literacy revolution, our lives will change in ways never imagined. The vision of eEurope is underpinned by a technological infrastructure that is now taken for granted. Yet it provides us with the ability to pioneer radical new ways of doing business, of undertaking science, and, of managing our everyday activities. Key to this step change is the development of appropriate mechanisms to automate and improve existing tasks, to anticipate desired actions on our behalf (as human users) and to undertake them, while at the same time enabling us to stay involved and retain as much control as required. For many, these mechanisms are now being realised by agent technologies, which are already providing dramatic and sustained benefits in several business and industry domains, including B2B exchanges, supply chain management, car manufacturing, and so on. While there are many real successes of agent technologies to report, there is still much to be done in research and development for the full benefits to be achieved. This is especially true in the context of environments of pervasive computing devices that are envisaged in coming years. This paper describes the current state-of-the-art of agent technologies and identifies trends and challenges that will need to be addressed over the next 10 years to progress the field and realise the benefits. It offers a roadmap that is the result of discussions among participants from over 150 organisations including universities, research institutions, large multinational corporations and smaller IT start-up companies. The roadmap identifies successes and challenges, and points to future possibilities and demands; agent technologies are fundamental to the realisation of next generation computing.

",2004,0, 242,A meta-analysis approach to refactoring and XP,"The mechanics of seventy-two different Java refactorings are described fully in Fowler's text. In the same text, Fowler describes seven categories of refactoring, into which each of the seventy-two refactorings can be placed. A current research problem in the refactoring and XP community is assessing the likely time and testing effort for each refactoring, since any single refactoring may use any number of other refactorings as part of its mechanics and, in turn, can be used by many other refactorings. In this paper, we draw on a dependency analysis carried out as part of our research in which we identify the 'Use' and 'Used By' relationships of refactorings in all seven categories. We offer reasons why refactorings in the 'Dealing with Generalisation' category seem to embrace two distinct refactoring sub-categories and how refactorings in the 'Moving Features between Objects' category also exhibit specific characteristics. In a wider sense, our meta-analysis provides a developer with concrete guidelines on which refactorings, due to their explicit dependencies, will prove problematic from an effort and testing perspective.",2007,0, 243,A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces,"

The use of embodied agents, defined as visual human-like representations accompanying a computer interface, is becoming prevalent in applications ranging from educational software to advertisements. In the current work, we assimilate previous empirical studies which compare interfaces with visually embodied agents to interfaces without agents, both using an informal, descriptive technique based on experimental results (46 studies) as well as a formal statistical meta-analysis (25 studies). Results revealed significantly larger effect sizes when analyzing subjective responses (i.e., questionnaire ratings, interviews) than when analyzing behavioral responses such as task performance and memory. Furthermore, the effects of adding an agent to an interface are larger than the effects of animating an agent to behave more realistically. However, the overall effect sizes were quite small (e.g., across studies, adding a face to an interface only explains approximately 2.5% of the variance in results). We discuss the implications for both designers building interfaces as well as social scientists designing experiments to evaluate those interfaces.

",2007,0, 244,A meta-analysis of the technology acceptance model: Investigating subjective norm and moderation effects,"We conducted a quantitative meta-analysis of previous research on the technology acceptance model (TAM) in an attempt to make well-grounded statements on the role of subjective norm. Furthermore, we compared TAM results by taking into account moderating effects of one individual-related factor (type of respondents), one technology-related factor (type of technology), and one contingent factor (culture). Results indicated a significant influence of subjective norm on perceived usefulness and behavioral intention to use. Moderating effects were found for all three factors. The findings yielded managerial implications for both intra-company and market-based settings.",2007,0, 245,A meta-analysis of the training effectiveness of virtual reality surgical simulators,"The increasing use of virtual reality (VR) simulators in surgical training makes it imperative that definitive studies be performed to assess their training effectiveness. Indeed, in this paper we report the meta-analysis of the efficacy of virtual reality simulators in: 1) the transference of skills from the simulator training environment to the operating room, and 2) their ability to discriminate between the experience levels of their users. The task completion time and the error score were the two study outcomes collated and analyzed in this meta-analysis. Sixteen studies were identified from a computer-based literature search (1996-2004). The meta-analysis of the random effects model (because of the heterogeneity of the data) revealed that training on virtual reality simulators did lessen the time taken to complete a given surgical task as well as clearly differentiate between the experienced and the novice trainees. Meta-analytic studies such as the one reported here would be very helpful in the planning and setting up of surgical training programs and for the establishment of reference `learning curves' for a specific simulator and surgical task. If any such programs already exist, they can then indicate the improvements to be made in the simulator used, such as providing for more variety in their case scenarios based on the state and/or rate of learning of the trainee",2006,0, 246,A Meta-analysis of Timbre Perception Using Nonlinear Extensions to CLASCAL,"

Seeking to identify the constituent parts of the multidimensional auditory attribute that musicians know as timbre, music psychologists have made extensive use of multidimensional scaling (MDS), a statistical technique for visualising the geometric spaces implied by perceived dissimilarity. MDS is also well known in the machine learning community, where it is used as a basic technique for dimensionality reduction. We adapt a nonlinear variant of MDS that is popular in machine learning, Isomap, for use in analysing psychological data and re-analyse three earlier experiments on human perception of timbre. Isomap is designed to eliminate undesirable nonlinearities in the input data in order to reduce the overall dimensionality; our results show that it succeeds in these goals for timbre spaces, compressing the output onto well-known dimensions of timbre and highlighting the challenges inherent in quantifying differences in spectral shape.

",2008,0, 247,A Meta-language for Systems Architecting,"A switched beam technique applied to UHF band RFID reader is proposed. The proposed switched beam circuit composed of power divider, phase shifter, and beam controlling logic circuit is implemented on a 1x2 array antenna, and resulted in a -30 to 30 degrees beam switching range in three discrete angles. This capability not only enhances the sensing range and distance but also reduces the possible EMI interference. In addition, this array antenna can help effectively cancel or attenuate mutual interference among tags through antenna arrays directivity. These advantages provided by the antenna array greatly improve the performance of a RFID system in terms of scan efficiency and read rate, which makes RFID applications even more appealing.",2007,0, 248,A method and tools for large scale scenarios,"Performing statistical inference on massive data sets may not be computationally feasible using the conventional statistical inference methodology. In particular, there is a need for methods that are scalable to large volume and variability of data. Moreover, veracity of the inference is crucial. Hence, there is a need to produce quantitative information on the statistical correctness of parameter estimates or decisions. In this paper, we propose a scalable nonparametric bootstrap method that operates with smaller number of distinct data points on multiple disjoint subsets of data. The sampling approach stems from the Bag of Little Bootstraps method and is compatible with distributed storage systems and distributed and parallel processing architectures. Iterative reweighted l1 method is used for each bootstrap replica to find a sparse solution in the face of high-dimensional signal model. The proposed method finds reliable estimates even if the problem is not overdetermined for full data or distinct data subsets by exploiting sparseness. The performance of the proposed method in identifying sparseness in parameter vector and finding confidence intervals and parameter estimates is studied in simulation.",2016,0, 249,A Method for Development of Adequate Requirement Specification in the Plant Control Software Domain,"

This paper proposes a method for the development of adequate requirement specifications in the Plant Control Software (PCSW) domain. Before proposing this method, we analyzed this domain and developed the components in a parameter-customized style in order to facilitate customization. In the proposed method, the PCSW requirement specification is developed from information that is used to customize components. We applied it to five development cases, and achieved a Requirement Coverage of 91% and a Requirement Conformity Rate of 94%. This result indicates that the proposed method has sufficient capabilities to develop exhaustive and adequate PCSW requirement specifications.

",2006,0, 250,A Model Curriculum for Aspect-Oriented Software Development,"As new software engineering techniques emerge, there's a cognitive shift in how developers approach a problem's analysis and how they design and implement its software-based solution. Future software engineers must be appropriately and effectively trained in new techniques' fundamentals and applications. With techniques becoming more mature, such training moves beyond specialized industrial courses into postgraduate curricula (as advanced topics) and subsequently into undergraduate curricula. A model curriculum for aspect-oriented software development provides guidelines about fundamentals, a common framework, and a step toward developing a body of knowledge",2006,0, 251,"A MODEL FOR ASSESSING REUSABLE, REMOTING SENSORS IN TEST AND MEASUREMENT SYSTEMS","In a typical system, sensors communicate with a computer via a communication port, such as a serial port; however, with the recent advances in programming languages and the addition of many low-cost networking devices, new methods for communicating with sensors are essential. A sensor's communication port provides an interface for connecting the sensor with a computer to control the sensor or acquire data. Most sensors utilize the Recommended Standard 232 (RS232) port for communication with a computer; however, this port was created to serve as an interface for computers and modems to communicate, not sensors. Utilizing the RS232 port to communicate with a sensor requires configuring the computer's and sensor's settings, cables, and commands. As a result, a majority of sensors are not ""plug and play"" and additional time is required to configure and develop the software to communicate with the sensor. The use of models, modern computers, and networking solutions can be utilized to alleviate these problems and improve the way sensors are incorporated into the design of a system. Therefore, this thesis creates a model for the development of reusable remote servers which handle the communication interface to a sensor and provide a network interface to a computer to control and acquire data from the sensor. The model created can be modified to support various communication interfaces a sensor may use to communicate with a computer. The model is designed with the Unified Modeling Language (UML) to provide reusable diagrams of the model for the development of a sensor subsystem. The diagrams are applied to several sensors by using development platforms and languages to create a sensor?s communication software. The sensor's communication software is deployed on a network processor and the sensor is connected to the network processor which results in the creation of a sensor subsystem. For each sensor subsystem, a software executable is deployed on a networked processor that communicates with the sensor and allows a client application on a different network processor to connect to the sensor subsystem and interact with the sensor. The sensor subsystems created through the diagrams include a heise pressure subsystem, a vaisala temperature/humidity subsystem, and a mettler mass comparator subsystem. The time required to create the sensor subsystems through the diagrams is measured and recorded to determine if the diagrams created can be reused to reduce the amount of time required to interface and communicate with a sensor through a computer. 
In addition tests are conducted to determine if any advantages can be found by utilizing sensor subsystems in the creation of test and measurement systems. For example, development time of the client application is measured to determine if the use of sensor subsystems can reduce the amount of time required to develop a complete test and measurement system. Additional tests are performed to explore other advantages the sensor subsystems may provide and the results of the tests are compared with prior research. Finally, the results of the tests conducted utilizing the sensor subsystems to develop test and measurement systems have shown reduction of development time of client applications by 60%. In addition, the use of the sensor subsystems also enable multiple client applications the ability to share the sensors allowing the sensor to be reused in the design and deployment of different test and measurement systems. The results of the tests conducted have shown that utilizing UML modeling tools to create diagrams for developing sensor subsystems is an effective means of reducing development time of a sensor's communication software by 90% or more, and improves sensor configuration. Furthermore, the use of sensor subsystems enables reuse of a sensor in other systems.",2007,0, 252,A model for inbound supply risk analysis,"The subject of the research of this paper is modeling of the business intelligence system in area of credit risk analysis on the basis of data contained in the credit bureau. The information contained in a credit bureau give a specific view of the banking sector, based on volume of credit activity, current indebtedness of citizens and companies in banks and in other credit institutions and can be a good input in providing signals for monetary and fiscal policy creators. For the regulators, such as central banks, it is extremely important to control the credit risk exposure and monitor any possible deterioration in order to take adequate measures to prevent adverse situations. Therefore, during the research the basic decision making models for credit risk analysis from this perspective were identified and, in accordance with that, an appropriate data mart and OLAP models were developed. The developed system was tested on data from the case study company, The Central Bank of Montenegro, which is owner of credit registry in Montenegro. The results were very good and significant improvement of the performance of business processes was identified.",2013,0, 253,"A Model for Planning, Implementing and Evaluating Client-Centered IT Education","AbstractThe Department of Research Methodology at the University of Lapland is developing a model for the planning, implementation and evaluation of education in the field of Information Technology (IT). The model draws on the concept of client-centeredness, and makes use of regional cooperation in order to carry out the universities third task. The principal method used in the research is a constructive approach. The model is being developed, and its usability evaluated, in the context of two extensive degree programs at the Department of Research Methodology.",2005,0, 254,A Model for the Implementation of Software Process Improvement: An Empirical Study,"Currently, a number of specific international standards are made available within software engineering discipline to support Software Process Improvement (SPI) such as Capability Maturity Model Integration (CMMI), ISO/IEC 15504, ISO/IEC 90003 and ISO/IEC 12207. 
Some suggest integrating and harmonizing these standards to reduce risks and enhance practicality; however, no official initiative has been made to date to make this a reality. Integrated Software Process Assessment (iSPA) is a proposed initiative being developed on the premise of harmonizing and integrating a number of existing software process assessments and practices, including improvement standards, models and benchmarks. A survey was conducted on thirty software practitioners to measure the strengths and weaknesses of their organization's current software process. The survey also attempts to evaluate the acceptance and implementation needs of a customized SPI model for Malaysia's SMEs.",2011,0, 255,"A Model of IT Evaluation Management: Organizational Characteristics, IT Evaluation Methodologies, and B2BEC Benefits","

Large organizations have invested substantial financial resources in information technology (IT) over the last few decades. However, many organizations have discovered that they have not yet fully reaped the B2BEC benefits from their IT investments. A model of IT evaluation management is proposed in this paper to examine: (1) the relationship between the adoption of ITEM and B2BEC benefits; and (2) the impact of organizational characteristics (e.g. organizational IT maturity and IT evaluation resources (ITER) allocation) on the relationship between the adoption of ITEM and B2BEC benefits. The cross-national survey results provide empirical evidence in support of our proposed model, and demonstrate that: (a) the level of organizational IT maturity has a direct and significant impact on the adoption of ITEM; (b) the adoption of ITEM has a positive relationship with the ITER allocation; and (c) the ITER allocation has a significant direct influence on B2BEC benefits.

",2007,0, 256,A model-based ontology of the software interoperability problems: preliminary results,"Interoperability usually refers to software system communication. Although there is no widely accepted definition, and therefore no common understanding of the context, there are multiple solutions (protocols, architectures, components) that promise to solve integration issues. The INTEROP network of excellence aims at proposing a large view of interoperability issues, and hence requires a unified definition. As an INTEROP participant, we suggest in this paper, as a first attempt, an ontology of interoperability. We first present the general software engineering concepts our work is based on. We then propose the decisional meta-model and the technical aspects meta-model, as prerequisite to the introduction of the actual interoperability model. Finally, we discuss the pros and cons, as well as different ways the model can be used.",2004,0, 257,A multi-perspective digital library to facilitate integrating teaching research methods across the computing curriculum,"

The computing research methods (CRM) literature is scattered across discourse communities and published in specialty journals and conference proceedings. This dispersion has led to the use of inconsistent terminology when referring to CRM. With no established CRM vocabulary and isolated discourse communities, computing as a field needs to engage in a sense-making process to establish the common ground necessary to support meaningful dialog.

We propose to establish common ground through the construction of the computing research methods multi-perspective digital library (CRM-MPDL), a scholar-produced digital resource for the CRM community. As with its parent design research project on teaching CRM, CRM-MPDL is being developed through iterative and participatory design in an emergent fashion in tandem with the larger CRM community.

For our resource to be viable, we must carefully explore the rich details and nuances of our stakeholder communities and the perspectives they bring to the sense-making process. As a discount alternative to truly having a representative sample of our user population ""in the room"" with us throughout the design and implementation process, we have implemented a development approach for CRM-MPDL using personas as a means to gain insights and feedback from the target user communities.

For this iteration of the development process, we are concentrating on the needs of the faculty.

In this report, we present our evolving understanding of the project, and seek feedback and input on several key aspects of the theoretical and process models. We then present the framework for the faculty personas, as well as an overview of some of the personas at the time the paper was prepared, in the hopes that we can entice readers to visit the project website to help with the ongoing audit and refinement process. We also give an overview of the content model for CRM-MPDL, which will have evolved (and may even be available as a working prototype) by the time this article appears in print. Finally, we conclude with a current status summary, and issue several specific calls for participation in the ongoing work of the project.

",2007,0, 258,A Neuro-fuzzy Inference System for the Evaluation of New Product Development Projects,"

As a vital activity for companies, new product development is also a very risky process due to the high degree of uncertainty encountered at every development stage and the inevitable dependence on how successfully previous steps are accomplished. Hence, there is an apparent need to evaluate new product initiatives systematically and make accurate decisions under uncertainty. Another major concern is the time pressure to launch a significant number of new products to preserve and increase the competitive power of the company. In this work, we propose an integrated decision-making framework based on neural networks and fuzzy logic to make appropriate decisions and accelerate the evaluation process. We are especially interested in the two initial stages, where new product ideas are selected and the implementation order of the corresponding projects is determined. We show that this two-stage intelligent approach allows practitioners to roughly and quickly separate good and bad product ideas by making use of previous experiences, and then analyze the shortened list rigorously.

",2006,0, 259,A New Approach Towards Procurement of Software Models Via Distributed Business Models,"

Nowadays the distributed nature of many modern enterprises leads business strategists to look for new solutions that can address this new requirement. The ever-increasing surge of the e-business trend is another driving force for dealing with this new distributed environment, in addition to the serious need for core software components. On the other hand, the major role of these information systems in the survival of a business under tight competition reveals another requirement, focused on the robust relationship between the business and the system(s) maintaining it. In this paper we aim to introduce a new approach to procure software models by means of the underlying business model. Since the introduction of UML as the latest OMG standard modeling language in 1997, only a few studies have been done on using UML as a tool for business modeling. Unfortunately, recent trends are still immature and confronted with shortages and deficiencies. BSUP, which stands for Business to Software Unified Process, is our new approach to fulfill such a goal by means of a proprietary extension of UML. In this work, while analyzing the issues causing problems in the existing methods, we show how BSUP successfully resolves a few of these problems. Issues such as distributed processes, uncertainty in values and associations, ambiguity in the model, lack of precisely defined semantics, etc. may successfully be addressed and resolved. BSUP is an ongoing work currently being evaluated in Paxan Corp., a mid-scale industrial environment and a leading manufacturer of cosmetics and detergent products in the region. So far, a few encouraging benefits have been revealed, as briefly discussed within this paper.

",2004,0, 260,A new calibration for Function Point complexity weights,"Function Point (FP) is a useful software metric that was first proposed 25 years ago, since then, it has steadily evolved into a functional size metric consolidated in the well-accepted Standardized International Function Point Users Group (IFPUG) Counting Practices Manual - version 4.2. While software development industry has grown rapidly, the weight values assigned to count standard FP still remain same, which raise critical questions about the validity of the weight values. In this paper, we discuss the concepts of calibrating Function Point, whose aims are to estimate a more accurate software size that fits for specific software application, to reflect software industry trend, and to improve the cost estimation of software projects. A FP calibration model called Neuro-Fuzzy Function Point Calibration Model (NFFPCM) that integrates the learning ability from neural network and the ability to capture human knowledge from fuzzy logic is proposed. The empirical validation using International Software Benchmarking Standards Group (ISBSG) data repository release 8 shows a 22% accuracy improvement of mean magnitude relative error (MMRE) in software effort estimation after calibration.",2008,0, 261,A New Hash-Based RFID Mutual Authentication Protocol Providing Enhanced User Privacy Protection,"

The recently proposed Radio Frequency Identification (RFID) authentication protocols based on a hashing function can be divided into two types according to the type of information used for authentication between a reader and a tag: either a fixed value or one updated dynamically in the tag. In this study we classify RFID authentication protocols into static ID-based and dynamic ID-based protocols, then analyze their respective strengths and weaknesses and examine previous protocols from the static/dynamic ID-based perspectives. Also, we define four security requirements that must be considered in designing an RFID authentication protocol: mutual authentication, confidentiality, indistinguishability and forward security. Based on these requirements, we suggest a secure and efficient mutual authentication protocol. The proposed protocol is a dynamic ID-based mutual authentication protocol designed to meet the requirements of both indistinguishability and forward security by ensuring the unlinkability of tag responses among sessions. Thus, the protocol can provide stronger user privacy than previous protocols and recognizes a tag efficiently in terms of the amount of computation required of tags and the database.

",2008,0, 262,A New Method for Traffic Signs Classification Using Probabilistic Neural Networks,"

Traffic signs can provide drivers with very valuable information about the road, in order to make driving safer and easier. In recent years, traffic signs recognition has aroused wide interest among many scholars. It has two main parts: the detection and the classification. This paper presents a new method for traffic signs classification based on probabilistic neural networks (PNN) and Tchebichef moment invariants. It has two hierarchies: the first hierarchy classifier can coarsely classify the input image into one of indicative signs, warning signs or prohibitive signs according to its background color threshold; the second hierarchy classifiers, consisting of three PNN networks, can concretely identify the traffic sign. The inputs of every PNN use the newly developed Tchebichef moment invariants. The simulation results show that the two-hierarchy classifier can improve the classification ability and can be used in a real-time system.

",2006,0, 263,"A new perspective to automatically rank scientific conferences using digital libraries","Citation analysis is performed in order to evaluate authors and scientific collections, such as journals and conference proceedings. Currently, two major systems exist that perform citation analysis: Science Citation Index (SCI) by the Institute for Scientific Information (ISI) and CiteSeer by the NEC Research Institute. The SCI, mostly a manual system up until recently, is based on the notion of the ISI Impact Factor, which has been used extensively for citation analysis purposes. On the other hand the CiteSeer system is an automatically built digital library using agents technology, also based on the notion of ISI Impact Factor. In this paper, we investigate new alternative notions besides the ISI impact factor, in order to provide a novel approach aiming at ranking scientific collections. Furthermore, we present a web-based system that has been built by extracting data from the Databases and Logic Programming (DBLP) website of the University of Trier. Our system, by using the new citation metrics, emerges as a useful tool for ranking scientific collections. In this respect, some first remarks are presented, e.g. on ranking conferences related to databases.",2005,0, 264,A New Regression Based Software Cost Estimation Model Using Power Values,"

The paper aims to provide for the improvement of software estimation research through a new regression model. The study design of the paper is organized as follows. Evaluation of estimation methods based on historical data sets requires that these data sets be representative of current or future projects. For that reason, the International Software Benchmarking Standards Group (ISBSG) data set Release 9 is used as the data set for the software cost estimation model. The data set records true project values from the real world, and can be used to extract information to predict the cost of new projects in terms of effort. Regression models are used as the estimation method. The main contribution of this study is the new cost production function that is used to obtain software cost estimates. The performance of the newly proposed cost estimation function is compared with related work in the literature. In the study, some calibration of the production function is performed in order to obtain maximum performance. There is some important discussion on how the results can be improved and how they can be applied to other estimation models and datasets.

",2007,0, 265,A new web application development methodology: Web service composition.,"One of the biggest problems in the development of high-performance scientific applications is the need for programming environments that allow source code development in an efficient way. However, there is a clear lack of approaches with specific methodologies or optimal working environments to develop high-performance computing software systems. Additionally, existing frameworks are focused on the design and implementation phases, forgetting software component reuse from the earliest stages of the development process. An aspect-oriented and component-based approach is proposed for the development of complex parallel applications from existing functional components and new component definitions, according to business rules established by the users, through a web service entry of the platform. The proposed approach includes a specific methodology to develop high-performance scientific applications through the reuse of components from the earliest stages. Finally, an additional supercomputing-oriented framework aims to facilitate the development of these systems and to make creation, cataloguing, validation and reuse of each application and its components easier.",2012,0, 266,A novel approach to formalize Object-Oriented Design Metrics,"Software testing allows programmers to determine and guarantee the quality of software system. It is one of the essential activities in software development process. Mutation analysis is a branch of software testing. It is classified as a fault-based software testing technology. Unlike other software testing technologies, mutation analysis assesses the quality of software test cases and therefore improves the efficiency of software testing by measuring and improving the quality of test cases set. Mutation analysis works by generating mutants, which describe the faults that may occur in the software programs, from original program. Mutants are generated by applying mutation operators on original software program. Mutation operator is a rule that specifies the syntactic variations of strings. There have been several works about mutation analysis support system for conventional languages such as C and FORTRAN. However, there are a few for object-oriented languages such as C++ and Java. This article aims to propose a novel approach to design and implement mutation operators for object-oriented programming language. The essential idea of proposed method is the usage of JavaML language as the intermediate representation to implement mutation operators: it first converts the original program into JavaML document; then implements mutation operator for JavaML document and gets the mutated JavaML document with the help of DOM - a tool to process JavaML document; finally it converts the mutated JavaML document into mutant program.",2014,0, 267,A Novel Individual Blood Glucose Control Model Based on Mixture of Experts Neural Networks,"AbstractAn individual blood glucose control model (IBGCM) based on the Mixture of Experts (MOE) neural networks algorithm was designed to improve the diabetic care. MOE was first time used to integrate multiple individual factors to give suitable decision advice for diabetic therapy. The principle of MOE, design and implementation of IBGCM were described in details. 
The blood glucose value (BGV) from IBGCM closely approximated the training data (r=0.97 ± 0.05, n=14) and the blood glucose control aim (r=0.95 ± 0.06, n=7).",2004,0, 268,A Numerical Trip to Social Psychology: Long-Living States of Cognitive Dissonance,"

The Heider theory of cognitive dissonance in social groups, formulated recently in terms of differential equations, is generalized here for the case of asymmetric interpersonal ties. The space of initial states is penetrated by starting the time evolution several times with random initial conditions. Numerical results show the fat-tailed distribution of the time when the dissonance is removed. For small groups (N=3) we found some characteristic patterns of the long-living states. There, mutual relations of one of the pairs differ in sign.

PACS numbers: 89.65.-s, 02.50.-r.

",2007,0, 269,A Pattern-Based Framework for the Exploration of Design Alternatives,"We propose an extreme-scale distributed design exploration framework in the inter-cloud infrastructure consisting of supercomputers and cloud computing systems. In design explorations, we need to search optimal configurations of input parameters that maximize or minimize objective function(s). In our framework, simulations are performed in parallel employing multiple supercomputers of the Japanese nation-wide high-performance computing infrastructure, and virtual or real machines in academic cloud systems are employed for parameter surveys and optimizations with machine learning. To enable nation-wide scale collaborations of supercomputers and cloud systems, we employ scalable distributed databases and objective storages to store input design parameters and resulting output information generated by simulations.",2015,0, 270,A pattern-based methodology for multimodal interaction design.,"In this correspondence, we present an approach to identifying and constructing profiles of user interfaces for educational games. Our approach is based on framing games as educational tools that incorporate fun and learning through motivation as the key ingredient in the learning process and multimodal interaction as the medium for conveying educational material. The proposed solution formalizes the design process, describing educational games in terms of estimated effects that they produce on players. Building upon research on learning and motivation theory, we are connecting these effects with player learning preferences and motivation states. The essence of our solution is the educational game metamodel (EGM), which defines platform-independent educational game concepts. Using the EGM, we have explored novel design approaches for educational games. The metamodel can be used as a conceptual basis for creation of platform-independent educational games, allowing authoring for device and network independence.",2011,0, 271,A Performance Analysis of MANET Multicast Routing Algorithms with Multiple Sources,"Since network resource of mobile ad hoc network (MANET) is limited due to the contention-based wireless communication channel at the medium access layer and energy of mobile nodes is constrained due to the energy-limited batteries, the scalability issue is one of main research topics in developing MANET routing algorithms. Therefore, this paper analyzes the message complexities of group shared tree (GST) and source- specific tree (SST) that are implemented in most MANET multicast routing algorithms. Simulation demonstrates that in a wireless ad hoc network where SST and GST are well maintained during the simulation, SST algorithm is able to achieve very competitive performance (i.e. less message complexity) under the multiple packet transmissions, in comparison with GST where no core selection algorithm is adopted.",2007,0, 272,A Performance Measurement System for Virtual and Extended Enterprises,"Working principle of gas brake valve and performance testing methods of tightness, static and dynamic characteristics are introduced. Several characteristic parameters, such as initial pressure differential and dynamic response time, are blended into testing methods. An automatic measurement system of the gas brake valve based on virtual instrument is developed, and realizes double-position independent inspection of one set of PC control system, improving test efficiency. 
It is designed using LabVIEW, a graphical programming environment. Combined with data acquisition and servo movement modules, the system can test the tightness, static characteristic, and dynamic characteristic parameters of the gas brake valve. The results show that the system can effectively perform on-line measurement of the gas brake valve, with a testing time of 120 s, simple operation, and satisfaction of the requirements of the dynamic testing process.",2011,0, 273,A Practical Buses Protocol for Anonymous Network Communication,"An efficient anonymous communication protocol, called MANET Anonymous Peer-to-peer Communication Protocol (MAPCP), for P2P applications over mobile ad-hoc networks (MANETs) is proposed in this work. MAPCP employs broadcasts with probabilistic-based flooding control to establish multiple anonymous paths between communication peers. It requires no hop-by-hop encryption/decryption along anonymous paths and, hence, demands lower computational complexity and power consumption than those MANET anonymous routing protocols. Since MAPCP builds multiple paths to multiple peers within a single query phase without using an extra route discovery process, it is more efficient in P2P applications. Through analysis and extensive simulations, we demonstrate that MAPCP always maintains a higher degree of anonymity than a MANET anonymous single-path routing protocol in a hostile environment. Simulation results also show that MAPCP is resilient to passive attacks.",2007,0, 274,A Probabilistic Approach to Web Portal's Data Quality Evaluation,"Advances in technology and the use of the Internet have favoured the emergence of a large number of Web applications, including Web Portals. Web portals provide the means to obtain a large amount of information; therefore, it is crucial that the information provided is of high quality. In recent years, several research projects have investigated Web Data Quality; however, none has focused on data quality within the context of Web Portals. Therefore, the contribution of this research is to provide a framework centred on the point of view of data consumers, and that uses a probabilistic approach for Web portal's data quality evaluation. This paper presents the definition of the operational model, based on our previous work.",2007,0, 275,A process to reuse experiences via narratives among software project managers,"Software project management is a complex process requiring extensive planning, effective decision-making, and proper monitoring throughout the course of a project. Unfortunately, software project managers rarely capture and reuse the knowledge gained during a project on subsequent projects. To enable the repetition of prior successes and avoidance of previous mistakes, I propose that software project managers can improve their management abilities by reusing their own and others' past experiences with written narratives. I use multiple methodologies---including literature review, grounded theory, design science research, and experimentation---to create a process for software project managers to reuse knowledge gained through experiences on software projects. In the literature review, I examine relevant research areas to inspire ideas on how to reuse knowledge via written narratives in software project management. Interviews with software project managers, analyzed using grounded theory, provide insight into the current challenges of reusing knowledge during a project.
I leverage design science research methodology to develop a process of experience reuse that incorporates narratives and wikis to enable software project managers to share their experiences using written narratives. Experimentation evaluates whether the process developed using the design science research methodology improves the current knowledge reuse practices of software project managers.",2006,0, 276,A proposed framework for the analysis and evaluation of business models,"An actor operating in today's telecom market is exposed to rapid changes of the business situation -market conditions, technology trends, players' picture, regulations. Successful operations depend on robust business models and the assessment of their prospective. Thus, designing business models and analysing their profitability are steadily becoming more important. Getting the results of these activities timely and with sufficient accuracy is easier if the problems are approached in a structured and routine way. Consequently, a framework that structures business modelling and assessment of (novel) business ideas is a helpful tool for the actors operating in the telecom market. This paper describes the BIZTEKON framework, which can be efficiently used to model and analyse any business situation that may emerge. A business situation may alter due to the presence of new roles, competitors, strategic alliances, technologies, products, (target) customers' groups/countries, etc. Theoretical basis of the BIZTEKON is built upon its elements -the terminology, the concepts, and the procedure. These are explained in detail here. Practical applicability of the framework is demonstrated on an example of the ""CDMA450-based broadband-access"" business case.",2007,0, 277,A proposed investigation of the is employee job context,"

High turnover in the IS context has been recognized as an area of concern. Therefore, many IS researchers have focused on the study of issues related to turnover. However, there is still a limited understanding of the idiosyncrasies of the IS context, which could affect employees' attitudes toward the organization and the job. This paper is part of an ongoing research program which focuses on understanding the unique characteristics of the IS job environment. It aims to provide a logical framework for studying the IS context and understanding IS employee job-related outcomes, such as turnover intention, in order to develop better guidelines for IS personnel management.

",2008,0, 278,A Proposed Low-cost Security System Based on Embedded Internet,"In an era when computing systems are approaching towards mobility, smaller size, and the area of computer applications are becoming wider and broader, embedded Internet control facilitates the control of useful devices with the tremendous opportunity of the Internet, ad hoc networks, mobile telecommunications, etc. Embedded Internet control also facilitates the development of low cost systems, which are normally feasible for the requirements of Third World countries. In this paper the objective of the described project was to develop a security system for office or home doors, which could be controlled in embedded Internet technology. The approach is to design the required sensor and actuator, client and server, and thereby enable the control through the mobile phone. The paper extends the use of F-Bus protocol in Nokia 3310 mobile phones to send and receive SMS form servers",2006,0, 279,A Prototype Integrated Decision Support System for Breast Cancer Oncology,"

This paper describes an integrated clinical support system combining data entry and access with prognostic modelling, for use by the clinician, complemented by a patient information system tailored to the particular characteristics of the individual patient. The core of the system comprises a modelling methodology based on the PLANN-ARD neural network which combines risk staging with automatic rule generation to derive an explanation facility for the risk group allocation of each patient. The aim of the system is to promote better informed decision making on the part of both the clinician and the patient, exploiting the combined potential of analytical methodologies and the internet.

",2007,0, 280,A qualitative analysis of reflective and defensive student responses in a software engineering and design course,"

For students encountering difficult material, reflective practice is considered to play an important role in their learning. However, students in this situation sometimes do not behave reflectively, but in less productive and more problematic ways. This paper investigates how educators can recognize and analyze students' confusion, and determine whether students are responding reflectively or defensively. Qualitative data for the investigation comes from an upper-level undergraduate software engineering and design course that students invariably find quite challenging. A phenomenological analysis of the data, based on Heidegger's dynamic of rupture, provides useful insight to students' experience. A clearer understanding of the concepts presented in this paper should enable faculty to bring a more sophisticated analysis to student feedback, and lead to a more informed and productive interpretation by both instructor and administration.

",2006,0, 281,A quantitative approach to software development using IEEE 982.1.,"Although software's complexity and scope have increased tremendously over the past few decades, advances in software engineering techniques have been only moderate at best. Software measurement has remained primarily a labor-intensive effort and thus subject to human limitations. The Space Shuttle's avionics software is an excellent example of how applying standards such as IEEE 982.1 can help alleviate human limitations by providing a quantitative roadmap to answer key questions",2007,0, 282,A Reactive Measurement Framework,"The IEEE 1588 Standard for A Precision Clock Synchronization Protocol for Networked Measurement and Control Systems (PTP) has been developed to provide better quality in-band clock synchronization in distributed measurement and control systems and other network embedded systems. Using IEEE 1588 compliant Ethernet equipment (network interface cards and switches) it is possible to provide sub 1 microsecond precision of slave clocks compared to a reference clock. The centerpiece of an IEEE 1588 based clock is the clock servo disciplining the local clock based on the information provided by IEEE 1588 about the reference time. However, it is an opened question how precision can be increased even further by advanced clock servo design. Furthermore, as only a minority of office and industrial Ethernet switches support IEEE 1588, it is also paramount to investigate the achievable performance for such systems, which is also defined by the clock servo. To design advanced control systems the plant and the disturbances must be modeled, and the model must be built based on representative measurements. In our case, the one-way network delay in between the reference clock and the slave clock must be characterized in Local Area Networks (LAN) for typical network load scenarios for advanced servo design, so 1-10 ns accurate one-way delay and jitter measurements must be achieved for this application. Measuring one-way delay in Local Area Networks requires specialized and expensive equipment; however, we will show that it is possible to build such measurement system using commercial off-the-shelf (COTS) components and open source software. Our system uses Intel network interface cards, the open source Linux operating system, and some self-developed software components (both Linux kernel drivers and user space programs).",2014,0, 283,A Realistic Empirical Evaluation of the Costs and Benefits of UML in Software Maintenance,"The Unified Modeling Language (UML) is the de facto standard for object-oriented software analysis and design modeling. However, few empirical studies exist that investigate the costs and evaluate the benefits of using UML in realistic contexts. Such studies are needed so that the software industry can make informed decisions regarding the extent to which they should adopt UML in their development practices. This is the first controlled experiment that investigates the costs of maintaining and the benefits of using UML documentation during the maintenance and evolution of a real, non-trivial system, using professional developers as subjects, working with a state-of-the-art UML tool during an extended period of time. The subjects in the control group had no UML documentation. 
In this experiment, the subjects in the UML group had on average a practically and statistically significant 54% increase in the functional correctness of changes (p=0.03), and an insignificant 7% overall improvement in design quality (p=0.22) - though a much larger improvement was observed on the first change task (56%) - at the expense of an insignificant 14% increase in development time caused by the overhead of updating the UML documentation (p=0.35).",2008,0, 284,A realization of a reflection of personal information on distributed brainstorming environment,"AbstractA meeting using Same Time / Different Place groupware frees ourselves from the place restrictions. However it cannot free ourselves from the time restrictions as we must stay in front of terminals connected to networks in the same meeting. On the other hand, Different Time / Different Place groupware that makes participants free from the time restrictions cannot use for meetings for creation, coordination and decision as they need mutual interaction. The main purpose of this research is to construct an experimental system that can utter automatically in brainstorming that is, a typical meeting for creation. The utterances are generated from personal information inputted or selected in advance. This system will free participants from the time restrictions in meeting for creation. From the results of evaluations, utterances from system included some useless ones, but they stimulated participants' idea generation and contributed to some extent.",1997,0, 285,A Redesign Framework for Call Centers,"E-commerce is not just the transaction, it is also the customer service. The advent of interactive e-commerce has without doubt made buying and selling on the Web successful. However, it continues to lack the quality of personal contact with the customer, which is essential in building and sustaining customer relationships on the Internet. “Real-time” text communication currently used by some companies lacks the voice and video combination that is needed to fill this communication gap. According to Forrester Research, only 16% of web users actually read the web page word by word. 67% of on-line consumers follow it to the order page but do not complete a transaction. In response to this problem, this research entails developing a software interface that will enable a Web customer to click and talk to a sales representative in real-time and also see the picture and the profile of the sales representative. In this research, a web user initiates a WebClick (live representative) request, that passes through the Internet and the Web server notifies our server of an incoming call. The server in turn notifies the sales representative's computer by generating a “Web pop” showing the particular page the customer was browsing at the time the call was initiated. The sales representative can now speak with the web user or route the call to another sales representative. This research was limited to voice only",2000,0, 286,A Reflective Practice of Automated and Manual Code Reviews for a Studio Project,"In this paper, the target of code review is project management system (PMS), developed by a studio project in a software engineering master's program, and the focus is on finding defects not only in view of development standards, i.e., design rule and naming rule, but also in view of quality attributes of PMS, i.e., performance and security. From the review results, a few lessons are learned. 
First, defects which had not been found in the test stage of PMS development could be detected in this code review. These are hidden defects that affect system quality and that are difficult to find in the test. If the defects found in this code review had been fixed before the test stage of PMS development, productivity and quality enhancement of the project would have been improved. Second, manual review takes much longer than an automated one. In this code review, general check items were checked by automation tool, while project-specific ones were checked by manual method. If project-specific check items could also be checked by automation tool, code review and verification work after fixing the defects would be conducted very efficiently. Reflecting on this idea, an evolution model of code review is studied, which eventually seeks fully automated review as an optimized code review.",2005,0, 287,A replicated experiment of pair-programming in a 2nd-year software development and design computer science course,"This paper presents the results of a replicated pair programming experiment conducted at the University of Auckland (NZ) during the first semester of 2005. It involved 190 second year Computer Science students attending a software design and construction course. We replicated the experiment described in [18], investigating similar issues to those reported in [32] and employing a subset of the questionnaires used in [32]. Our results confirm the use of pair programming as an effective programming/design learning technique.",2006,0, 288,A Replicated Study Comparing Web Effort Estimation Techniques,"

The objective of this paper is to replicate two previous studies that compared at least three techniques for Web effort estimation in order to identify the one that provides the best prediction accuracy. We employed the three effort estimation techniques that were common to the two studies being replicated, namely Forward Stepwise Regression (SWR), Case-Based Reasoning (CBR) and Classification & Regression Trees (CART). We used a cross-company data set of 150 Web projects from the Tukutuku data set. This is the first time such a large number of Web projects is used to compare effort estimation techniques. Results showed that all techniques presented similar predictions, and these predictions were significantly better than those using the mean effort. Thus, all the techniques can be exploited for effort estimation in the Web domain, also using a cross-company data set that is especially useful when companies do not have their own data on past projects from which to obtain their estimates, or that have data on projects developed in different application domains and/or technologies.

",2007,0, 289,A requirements engineering framework for cross-organizational ERP systems,"Security engineering is a new research area in software engineering that covers the definition of processes, plans and designs for security. The researchers are working in this area and however there is a lack in security requirements treatment in this field. Requirements engineering is a major action that begins during the communication activity and continues into the modeling activity. Requirements engineering builds a bridge to design and construction. The security requirements is one of the non functional requirements which acts as constrains on the functions of the system, but our view is that security requirements to be considered as functional requirements and to be analyzed during the earlier phase of software development i.e. Requirements engineering phase. An increasing part of the communication and sharing of information in our society utilizes electronic media. IT security is becoming central to the ability to fulfil business goals, build trustworthy systems, and protect assets. In order to develop systems with adequate security features, it is essential to capture the corresponding security needs and requirements. It is called as the Security requirements engineering, which is emerging as a branch of software engineering, spurred by the realization that security must be dealt with early during requirements phase. In this paper we have proposed a framework for Security Requirements engineering and applied on online trading system. Online trading systems form a critical part of the securities and capital markets today. By using security requirements engineering framework we are able to develop a secure online trading system. The results obtained using Proposed Security Requirements Engineering Framework is simple and better than the Haley and His Colleagues Framework.",2011,0, 290,A research agenda for mobile usability,"With the fast growing popularity of smartphone technology, the percentage of people accessing critical information from their handheld devices such as tablets, and e-readers, is increasing rapidly. Advanced computing devices are changing the paradigm of mobile communication. In this research paper, a solution called e-Onama is described in detail along with its applicability and future enhancement opportunities. e-Onama is an indigenously developed software solution by C-DAC which enables user to access HPC facility while on the move. This is an innovative product for addressing the needs of the open science and engineering community in HPC with ease. e-Onama has been developed for several mobile platforms like Windows, Linux and Android.",2013,0, 291,A Research on Chinese Consumers? Using Intention on 3G Mobile Phones,"AbstractWith the explosion of technology development, information appliance (IA) and digital devices (DD) become more and more popular these years. And the development of the third generation (3G) mobile phone is viewed as the most potential profitable product in the next decade. However, 3G mobile phone is confronting its promotion bottleneck. By using diffusion of innovation and lifestyle theory, hence, this study aims to examine the factors influencing adopting 3G mobile phone intention. 
Result shows only technology cluster is significantly related to adoption intention among four variables examined in this study.",2007,0, 292,A review for mobile commerce research and applications,"As internet access is becoming pervasive there is an increasing number of mobile applications users, Enterprises are now reaching a diversified number of customers through the use of web and mobile applications. However, this improvement in the accessibility means to computing resources does not move in same pace with the improvement of security controls to protect data and services offered through web and mobile applications. This review paper is focused on identification of both practical and theoretical security frameworks for web and mobile applications in use, with an intent of assessing the capability of the frameworks to assist developers build secure mobile web applications. A discussion follows the review by highlighting main characteristics of the frameworks with their merits and demerits. The analysis establishes that, available security frameworks are not adequate for the growing convergence of web and mobile applications, in that there are some security gaps and therefore suggest a need of developing a new security framework for the converged web and mobile applications.",2014,0, 293,A Review of Metrics for Knowledge Management Systems and Knowledge Management Initiatives,"Metrics are essential for the advancement of research and practice in an area. In knowledge management (KM), the process of measurement and development of metrics is made complex by the intangible nature of the knowledge asset. Further, the lack of standards for KM business metrics and the relative infancy of research on KM metrics points to a need for research in this area. This paper reviews KM metrics for research and practice and identifies areas where there is a gap in our understanding. It classifies existing research based on the units of evaluation such as user of KMS, KMS, project, KM process, KM initiative, and organization as a whole. The paper concludes by suggesting avenues for future research on KM and KMS metrics based on the gaps identified.",2004,0, 294,A Review of Public Health Syndromic Surveillance Systems,"

In response to the critical need of early detection of potential infectious disease outbreaks or bioterrorism events, public health syndromic surveillance systems have been rapidly developed and deployed in recent years. This paper surveys major research and system development issues related to syndromic surveillance systems and discusses recent advances in this important area of security informatics study.

",2006,0, 295,A Roadmap for Web Mining: From Web to Semantic Web,"This paper demonstrates the automatic creation of a Web service that chains together existing Web services to achieve a particular goal. The generated service implements the necessary workflows to convert an instance data of one system into an instance data of another. This paper further demonstrates the reconciliation of structural, syntactic, and representational mismatches between the input instance and the desired output instance",2006,0, 296,A roadmap of problem frames research.,"Extension knowledge can be used to solve the problem of the objective world. Using extension strategy to improve the ESGS generation system, human intelligence has important significance, Conduction transform and conduction problem solving contradiction is ESGS involves important issues. This paper analyzes the problem of solving the contradiction between conduction steps, discusses the application of transmission problem, proposed further thinking.",2013,0, 297,A semiotic metrics suite for assessing the quality of ontologies,"A suite of metrics is proposed to assess the quality of an ontology. Drawing upon semiotic theory, the metrics assess the syntactic, semantic, pragmatic, and social aspects of ontology quality. We operationalize the metrics and implement them in a prototype tool called the Ontology Auditor. An initial validation of the Ontology Auditor on the DARPA Agent Markup Language (DAML) library of domain ontologies indicates that the metrics are feasible and highlights the wide variation in quality among ontologies in the library. The contribution of the research is to provide a theory-based framework that developers can use to develop high quality ontologies and that applications can use to choose appropriate ontologies for a given task.",2005,0, 298,A Sensor Positioning System for Functional Near-Infrared Neuroimaging,"

In cognitive studies using functional near-infrared (fNIR) techniques, the optical sensors are placed over the scalp of the subject. In order to document the actual sensor location, a system is needed that can measure the 3D position of an arbitrary point on the scalp with a high precision and repeatability and express sensor location in reference to the international 10-20 system for convenience. In addition, in cognitive studies using functional magnetic resonance imaging (fMRI), the source location is commonly expressed using Talairach system. In order to correlate the results from the fNIR study with that of the fMRI study, one needs to project the source location in Talairach coordinates onto a site on the scalp for the placement of the fNIR sensors. This paper reports a sensor positioning system that is designed to achieve the above goals. Some initial experimental data using this system are presented.

",2007,0, 299,A Service Oriented Architecture Supporting Data Interoperability for Payments Card Processing Systems,"

As the size of an organization grows, so does the tension between a centralized system for the management of data, metadata, derived data, and business intelligence and a distributed system. With a centralized system, it is easier to maintain the consistency, accuracy, and timeliness of data. On the other hand with a distributed system, different units and divisions can more easily customize systems and more quickly introduce new products and services. By data interoperability, we mean the ability of a distributed organization to work with distributed data consistently, accurately and in a timely fashion. In this paper, we introduce a service oriented approach to analytics and describe how this is used to measure and to monitor data interoperability.

",2006,0, 300,A Shared Service Terminology for Online Service Provisioning,"Internet of Things applications using RFID sensors are a challenging task due to the limited capacity of batteries. Thus, energy efficient control has become more critical design with RFID sensor integrating complex tag identification processing techniques. Previous works on power efficient control in multiple RFID tags identification systems often design tag anti-collision protocols in identification process, and seldom consider the features of tags are able to detect energy within its radio rang among each other. This paper is dedicated to developing a share energy provisioning (SEP) strategy for energy-limited multiple RFID tag identification system. First, SEP can dynamically adapt the variable energy resources due to the cognitive radio technique. Second, SEP combines the energy control with tags groups in wait time T, through classifying the tag into different groups according to its distances. Third, it introduces the optimization theoretical analysis energy for multiple RFID tags identification system, so as to minimize the time and energy of it takes to send tags data to reader. Finally, it shares the energy resource as different energy harvest under energy-limited RFID systems. Experimental results demonstrate the energy efficiency of the proposed approach.",2011,0, 301,A Simulation Approach for Evaluating Scalability of a Virtually Fully Replicated Real-time Database,"We use a simulation approach to evaluate large scale resource usage in a distributed real-time database. Scalability is often limited by that resource usage is higher than what is added to the system when a system is scaled up. Our approach of Virtual Full Replication (VFR) makes resource usage scalable, which allows large scale real-time databases. In this paper we simulate a large scale distributed real-time database with VFR, and we compare it to a fully replicated database (FR) for a selected set of system parameters used as independent variables. Both VFR and FR support local timeliness of transactions by ensuring local availability for data objects accessed by transactions. The difference is that VFR has a scalable resource usage due to lower bandwidth usage for data update replication. The simulation shows that a simulator has several advantages for studying large scale distributed real-time databases and for studying scalability in resource usage in such systems.",2006,0, 302,A Strategic Approach Development for a Personal Digital Travel Assistant Used in 2008 Olympic Game,"

The study was to develop a strategic approach by which designers can identify what 2008 Olympic game travelers require for a personal digital travel assistant (PDTA) and how the user requirements can be met in the new PDTA product. To achieve the purpose, the researchers organized two teams co-working across two countries. Beijing Team comprised a professor, three observers, and fifteen travelers including two subjects observed by three observers. Taipei Team was composed of another professor, two designers and twelve subjects using brain storming to predict consumers' requirements for a PDTA. The procedures of the study included technical trial between two sites, scenario approach for briefing, a field trip for concept development according to user requirements observed throughout the travel, and a computer mediated communication between two sites for concept refinement. The findings include a strategic approach developed. It could be applied into any new product development based on user requirements.

",2006,0, 303,A study of developer attitude to component reuse in three IT companies.,"AbstractThe paper describes an empirical study to investigate the state of practice and challenges concerning some key factors in reusing of in-house built components. It also studies the relationship between the companies reuse level and these factors. We have collected research questions and hypotheses from a literature review and designed a questionnaire. 26 developers from three Norwegian companies filled in the questionnaire based on their experience and attitudes to component reuse and component-based development. Most component-based software engineering articles deal with COTS components, while components in our study are in-house built. The results show that challenges are the same in component related requirements (re)negotiation, component documentation and quality attributes specification. The results also show that informal communications between developers are very helpful to supplement the limitation of component documentation, and therefore should be given more attention. The results confirm that component repositories are not a key factor to successful component reuse.",2004,0, 304,A Study of Learners? Perceptions of the Interactivity of Web-Based Instruction,"The Internet breaks the limitations of time and space and provides a flexible platform for learning. Learning is a two-way communication and interactivity is a process to enhance communication among instructors, learners, learning materials, and interfaces. However, little studies exist regarding the effect of WBI interactivity on learners' performances and attitude. This research examines how different degrees of WBI interactivity influence learners' attitude, satisfaction, and performance in using the WBI systems. The results provide useful applications for WBI designers and instructors.",2005,0, 305,A Study of Quality Improvements By Refactoring,"Combined traditional soft ground improvement methods, high vacuum compact method of using alternately high vacuum drainage and dynamic consolidation to accelerate the dissipation of excess pore water pressure and the consolidation of soft soil has been proposed in this paper. The in-situ test has been studied for a hydraulic fill ground improvement in Yellow River Delta region, and construction process, the test parameters, soil deformation, excess pore water pressure and improvement effect have also been analyzed. High vacuum compact method is a time-saving, effective and low-cost soft soil improvement method, which can be suitable for hydraulic fills ground improvement in Yellow River Delta region.",2011,0, 306,A study of subsidiaries' views of information systems strategic planning in multinational organisations,"This research examines information systems strategic planning (ISSP) in multinationals from the perspective of the subsidiaries. The research was carried out through interviews with the IT and business managers in subsidiaries of nine large American, European, and Japanese multinationals. The evidence from this study reveals that, in the majority of these organisations, IS planning is either centralised or moving towards centralisation. The main focus of IS planning, in many of these organisations, is to control cost and achieve scale economies. As centralisation increases IT tends to control the planning process and, as a result, IS planning becomes more tactical than strategic and is dominated by IT infrastructure planning. 
Project implementation was the main criterion used to measure IS planning success. However, due to the dominant role of IT, the subsidiary business managers are often less satisfied with the IS planning approach compared with the subsidiary IT managers. The level of involvement of business managers and their satisfaction with ISSP was related to the degree of decentralisation of responsibility for IS planning.",2007,0, 307,A Study on CRM and Its Customer Segmentation Outsourcing Approach for Small and Medium Businesses,"Supported by technologies of Customer Satisfaction, Information Technology, and Data Mining, etc., CRM aims to enhance the effectiveness and performance of businesses by improving customer satisfaction and loyalty. CRM is now becoming a popular management methodology in manufacturing, sales, marketing, and finance. In China, there are a lot of small and medium businesses. For these businesses, sourcing CRM services on the web is a key business tactic for reducing the total ownership costs and implementation risks linked to big-bang CRM implementations. In this paper, first, the architecture and contents of a CRM approach for small and medium businesses are discussed according to their management characteristics. Second, the paper contributes to the eCRM implementation landscape by providing a detailed account of the business process design and implementation support for customer segmentation outsourcing.",2008,0, 308,A Study on Customer Satisfaction in Mobile Telecommunication Market by Using SEM and System Dynamic Method,"In this article, a new method is presented to research the mechanism of Customer Satisfaction (CS). Firstly, the research model of CS based on the TAM and ACSI is built. Secondly, some important correlation coefficients of the research model are obtained from the SEM method. Thirdly, with these correlation coefficients, the main functions of the system dynamic model are built, and the evolution of the system is simulated with the help of VENSIM. At last, a simple example is designed using the method and some meaningful conclusions are provided.",2008,0, 309,A Study on Feature Analysis for Musical Instrument Classification,"In tackling data mining and pattern recognition tasks, finding a compact but effective set of features has often been found to be a crucial step in the overall problem-solving process. In this paper, we present an empirical study on feature analysis for recognition of classical instruments, using machine learning techniques to select and evaluate features extracted from a number of different feature schemes. It is revealed that there is significant redundancy between and within feature schemes commonly used in practice. Our results suggest that further feature analysis research is necessary in order to optimize feature selection and achieve better results for the instrument recognition problem.",2008,0, 310,A survey of component based system quality assurance and assessment.,"The component-based software development approach is built on the notion to develop software systems by choosing appropriate off-the-shelf components and then to assemble them with a well-defined software architecture. Quality assurance (QA) for component-based software development is a newer concept in the software engineering community, as the new software development paradigm and the traditional approach are much different from each other.
In this paper, we survey current component-based software technologies, describe their advantages and disadvantages, and discuss the features they inherit. We also address QA issues for component-based software. As a major contribution, we propose a QA model for component-based software development, which covers component requirement analysis, component development, component certification, component customization, and system architecture design, integration, testing, and maintenance.",2016,0, 311,A survey of literature on the teaching of introductory programming,"This paper reports the authors' experiences in teaching introductory programming for engineers in an interactive classroom. The authors describe how the course has evolved from the traditional course, the structure of the classroom, the choice of software, and the elements involving interactive, active, and collaborative learning. They discuss their strategy for assessment. They describe the assessment results including a retrospective assessment of the previous course. They suggest how the course relates to the nontraditional student. They conclude with some suggestions for future modifications",2001,0, 312,A Survey of Software Engineering Educational Delivery Methods and Associated Learning Theories,"Software engineering education has acquired a notorious reputation for producing students that are ill-prepared for being productive in real-world software engineering settings. Although much attention has been devoted to improving the state of affairs in recent years, it still remains a difficult problem with no obvious solutions. In this paper, I attempt to discover some of the roots of the problem, and provide suggestions for addressing these difficulties. A survey of software engineering educational approaches is first presented. A categorization of these approaches in terms of the learning theories they leverage then reveals a number of deficiencies and potential areas for improvement. Specifically, there are a number of underutilized learning theories (Learning through Failure, Keller's ARCS, Discovery Learning, Aptitude-Treatment Interaction, Lateral Thinking, and Anchored Instruction), and the majority of existing approaches do not maximize their full educational potential. Furthermore, the approaches that engage the widest range of learning theories (practice-driven curricula, open-ended approaches, and simulation) are also the most infrequently used. Based on these observations, the following recommendations are proposed: Modify existing approaches to maximize their",2005,0, 313,A survey of software infrastructures and frameworks for ubiquitous computing,"This general software architecture is designed to support ubiquitous computing's fundamental challenges, helping the community develop and assess middleware and frameworks for this area.",2008,0, 314,A Survey of Software Refactoring,"The product created during the software development effort has to be tested, since bugs may get introduced during its development. This paper presents a comprehensive survey of the various software testing methodologies available to test for refactoring based software models. In this paper we have discussed how Unit testing, Integration Testing and System Testing are being carried out in the refactoring based software models.
To accommodate these strategies, several new techniques have been proposed like Fault-based Testing, Scenario-based Testing, and also develop a framework for refactoring based software model and brief discussion of these techniques has also been presented.",2011,0, 315,"A System for Semantically Enhanced, Multifaceted, Collaborative Access: Requirements and Architecture","Requirement analysis is an important process in the design and development of information systems. Various tools and methodologies are used to gather the user requirements. These methodologies have their own strength and weakness. In this paper we will apply collaborative requirements engineering process to determine the requirements of the: Semantically Enhanced, Multifaceted, and Collaborative Access to Cultural Heritage (MOSAICA) project. MOSAICA is an EU Framework VI funded project. The process has been carried out in an interactive manner thereby creating a process that permits effective coordination and easy change management among project partners",2007,0, 316,A systematic approach to the development of research-based web design guidelines for older people,"

This paper presents a systematic approach to the development of a set of research-based ageing-centred web design guidelines (SilverWeb Guidelines). The approach included an initial extensive literature review in the area of human–computer interaction and ageing, the development of an initial set of guidelines based on the reviewed literature, a card sorting exercise for their classification, an affinity diagramming exercise for the reduction and further finalisation of the guidelines, and finally a set of heuristic evaluations for the validation and test of robustness of the guidelines. The 38 final guidelines are grouped into eleven distinct categories (target design, use of graphics, navigation, browser window features, content layout design, links, user cognitive design, use of colour and background, text design, search engine, user feedback and support).

",2007,0, 317,A systematic innovation case study: new concepts of domestic appliance drying cycle,"While incremental innovation is for most companies a well assessed process, radical product innovation is often handled with difficulty, mainly due to myriad obstacles in the idea-to cash process which limits company?s ability to innovate. As a typical approach, engineers firstly try to find innovative solutions only inside their technological product space, basically thinking accordingly to their commonly assessed know-how. In this paper an industrial case is analyzed, showing how TRIZ methodology offers to technicians a systematic way to solve problematic contradictions and find effective ideas.",2008,0, 318,A Systematic Review of Automated Test Data Generation Techniques,"Genetic algorithms have been successfully applied in the area of software testing. The demand for automation of test case generation in object oriented software testing is increasing. Extensive tests can only be achieved through a test automation process. The benefits achieved through test automation include lowering the cost of tests and consequently, the cost of whole process of software development. Several studies have been performed using this technique for automation in generating test data but this technique is expensive and cannot be applied properly to programs having complex structures. Since, previous approaches in the area of object-oriented testing are limited in terms of test case feasibility due to call dependences and runtime exceptions. This paper proposes a strategy for evaluating the fitness of both feasible and unfeasible test cases leading to the improvement of evolutionary search by achieving higher coverage and evolving more number of unfeasible test cases into feasible ones.",2013,0, 319,A systematic review of cross- vs. within-company cost estimation studies,"The objective of this paper is to determine under what circumstances individual organizations would be able to rely on cross-company-based estimation models. We performed a systematic review of studies that compared predictions from cross-company models with predictions from within-company models based on analysis of project data. Ten papers compared cross-company and within-company estimation models; however, only seven presented independent results. Of those seven, three found that cross-company models were not significantly different from within-company models, and four found that cross-company models were significantly worse than within-company models. Experimental procedures used by the studies differed making it impossible to undertake formal meta-analysis of the results. The main trend distinguishing study results was that studies with small within-company data sets (i.e., $20 projects) that used leave-one-out cross validation all found that the within-company model was significantly different (better) from the cross-company model. The results of this review are inconclusive. It is clear that some organizations would be ill-served by cross-company models whereas others would benefit. Further studies are needed, but they must be independent (i.e., based on different data bases or at least different single company data sets) and should address specific hypotheses concerning the conditions that would favor cross-company or within-company models. 
In addition, experimenters need to standardize their experimental procedures to enable formal meta-analysis, and recommendations are made in Section 3.",2007,0, 320,A systematic review of site-specific transcription factors in the fruit fly Drosophila melanogaster,"

Summary: We present a manually annotated catalogue of site-specific transcription factors (TFs) in the fruit fly Drosophila melanogaster. These were identified from a list of candidate proteins with transcription-related Gene Ontology (Go) annotation as well as structural DNA-binding domain assignments. For all 1052 candidate proteins, a defined set of rules was applied to classify information from the literature and computational data sources with respect to both DNA-binding and transcriptional regulatory properties. We propose a set of 753 TFs in the fruit fly, of which 23 are confident novel predictions of this function for previously uncharacterized proteins.

Availability: http://www.flytf.org/

Contact: boris@mrc-lmb.cam.ac.uk

Supplementary information: Supplementary data are available at http://www.flytf.org/

",2006,0, 321,A systematic review of software process tailoring,"Software product line (SPL) is a set of software systems that share a common, managed set of features satisfying the specific needs of a particular market segment. Bad smells are symptoms that something may be wrong in system design. Bad smells in SPL are a relative new topic and need to be explored. This paper performed a Systematic Literature Review (SLR) to find and classify published work about bad smells in SPLs and their respective refactoring methods. Based on 18 relevant papers found in the SLR, we identified 70 bad smells and 95 refactoring methods related to them. The main contribution of this paper is a catalogue of bad smells and refactoring methods related to SPL.",2014,0, 322,A Systematic Review of Software Requirements Prioritization,"Software product line (SPL) is a set of software systems that share a common, managed set of features satisfying the specific needs of a particular market segment. Bad smells are symptoms that something may be wrong in system design. Bad smells in SPL are a relative new topic and need to be explored. This paper performed a Systematic Literature Review (SLR) to find and classify published work about bad smells in SPLs and their respective refactoring methods. Based on 18 relevant papers found in the SLR, we identified 70 bad smells and 95 refactoring methods related to them. The main contribution of this paper is a catalogue of bad smells and refactoring methods related to SPL.",2014,0, 323,A Task-Based Framework for Mobile Applications to Enhance Salespersons? Performance,"AbstractThe paper suggests a framework for mobile applications aimed at supporting salespersons tasks for greater performance when they are operating within a highly mobile work environment. To do so the paper starts by providing a review of mobile technologies characteristics in terms of mobile devices, connectivity and mobile applications. After deriving a set of propositions, the paper offers some concluding remarks and suggests areas for future research",2005,0, 324,A task-specific ontology for the application and critiquing of time-oriented clinical guidelines,"AbstractClinical guidelines reuse existing clinical procedural knowledge while leaving room for flexibility by the care provider applying that knowledge. Guidelines can be viewed as generic skeletal-plan schemata that are instantiated and refined dynamically by the care provider over significant periods of time and in highly dynamic environments. In the Asgaard project, we are investigating a set of tasks that support the application of clinical guidelines by a care provider other than the guideline's designer. We are focusing on application of the guideline, recognition of care providers' intentions from their actions, and critique of care providers' actions given the guideline and the patient's medical record. We are developing methods that perform these tasks in multiple clinical domains, given an instance of a properly represented clinical guideline and an electronic medical patient record. In this paper, we point out the precise domain-specific knowledge required by each method, such as the explicit intentions of the guideline designer (represented as temporal patterns to be achieved or avoided). We present a machine-readable language, called Asbru, to represent and to annotate guidelines based on the taskspecific ontology. 
We also introduce an automated tool for acquisition of clinical guidelines based on the same ontology; the tool was developed using the PROTÉGÉ-II framework's suite of tools.",1997,0, 325,A taxonomy of DFA-based string processors,"The paper presents a comprehensive life cycle model for the WSI associative string processor (WASP) which simulates the manufacture, acceptance, and operational life of a hypothetical target device. After 100,000 hours of continuous simulated operation, the device is shown to support an average of over 8.5 K APEs per wafer under a worst case scenario, and an average of over 12 K APEs per wafer under an expected case scenario",1995,0, 326,A Template-Based Markup Tool for Semantic Web Content,"

The Intelligence Community, among others, is increasingly using document metadata to improve document search and discovery on intranets and extranets. Document markup is still often incomplete, inconsistent, incorrect, and limited to keywords via HTML and XML tags. OWL promises to bring semantics to this markup to improve its machine understandability. A usable markup tool is becoming a barrier to the more widespread use of OWL markup in operational settings. This paper describes some of our attempts at building markup tools, lessons learned, and our latest markup tool, the Semantic Markup Tool (SMT). SMT uses automatic text extractors and templates to hide ontological complexity from end users and helps them quickly specify events and relationships of interest in the document. SMT automatically generates correct and consistent OWL markup. This comes at a cost to expressivity. We are evaluating SMT on several pilot semantic web efforts.

",2005,0, 327,A Theoretical Approach to Information Needs Across Different Healthcare Stakeholders,"AbstractIncreased access to medical information can lead to information overload among both the employees in the healthcare sector as well as among healthcare consumers. Moreover, medical information can be hard to understand for consumers who have no prerequisites for interpreting and understanding it. Information systems (e.g. electronic patient records) are normally designed to meet the demands of one professional group, for instance those of physicians. Therefore, the same information in the same form is presented to all the users of the systems regardless of the actual need or prerequisites. The purpose of this article is to illustrate the differences in information needs across different stakeholders in healthcare. A literature review was conducted to collect examples of these different information needs. Based on the findings the role of more user specific information systems is discussed.",2007,0, 328,A toolkit for managing user attention in peripheral displays,"

Traditionally, computer interfaces have been confined to conventional displays and focused activities. However, as displays become embedded throughout our environment and daily lives, increasing numbers of them must operate on the periphery of our attention. <i>Peripheral displays</i> can allow a person to be aware of information while she is attending to some other primary task or activity. We present the Peripheral Displays Toolkit (PTK), a toolkit that provides structured support for managing user attention in the development of peripheral displays. Our goal is to enable designers to explore different approaches to managing user attention. The PTK supports three issues specific to conveying information on the periphery of human attention. These issues are <i>abstraction</i> of raw input, rules for assigning <i>notification levels</i> to input, and <i>transitions</i> for updating a display when input arrives. Our contribution is the investigation of issues specific to attention in peripheral display design and a toolkit that encapsulates support for these issues. We describe our toolkit architecture and present five sample peripheral displays demonstrating our toolkit's capabilities.

",2004,0, 329,A tour of language customization concepts.,"Summary form only given. In order to intelligently process vast information on the Web, we need to make computers understand the meaning of the Web contents and manipulate them taking account of their semantics. Since text is the major medium conveying information, it is thus natural and reasonable to set it as the immediate target that the computer understands the meaning, while there are other types of media such as picture, movie, etc. Toward this direction, the activity of the semantic Web is going on. It aims to establish a standardized machine-readable description format of meta-data. However, the meta-data are only fragments of the Web contents. Unlike the semantic Web, we aim to describe the concept meaning expressed in the whole natural language texts with a common format that the computer can understand. We have designed concept description language (CDL) as a vehicle for this end, and started its standardization activity in W3C. There are several levels of the meaning of the texts, ranging from shallow level to deep one. While it is still difficult to make a consensus on how to describe the deep meaning, we think that a certain consensus can be attained on a way of describing the shallow meaning of the texts, based on the research results accumulated in the field of natural language processing such as machine translation over the last several decades. In CDL, besides lexicons, 45 relations are predefined as being necessary and sufficient for denoting every semantic relation between entities (lexicons in a simple case). These CDL relations can be used universally, while the ontologies in the semantic Web are domain dependent and thus cause some problematic situations. Current issues of CDL are, among others, an easy semi-automatic way of converting natural language texts into the CDL description, and an effective mechanism of executing semantic retrieval on the CDL database. We believe that CDL contributes to build a framework of next-generation Web which - provides the foundation for a variety of semantic computing. Also, CDL may contribute to overcome the language barrier among nations.",2008,0, 330,A UML Profile for Knowledge-Based Systems Modelling,"The Knowledge engineering (KE) techniques are essentially based on the knowledge transfer approach, from domain experts directly to systems. However, this has been replaced by the modelling approach which emphasises using conceptual models to model the problem-solving skill of the domain expert. This paper discusses extending the Unified Modelling Language by means of a profile for modelling knowledge-based system in the context of Model Driven Architecture (MDA) framework. The profile is implemented using the executable Modelling Framework (XMF) Mosaic tool. A case study from the health care domain demonstrates the practical use of this profile; with the prototype implemented in Java Expert System Shell (Jess). The paper also discusses the possible mapping of the profile elements to the platform specific model (FSM) of Jess and provides some discussion on the Production Rule Representation (FRR) standardisation work.",2007,0, 331,A Unifying Multimodel Taxonomy and Agent-Supported Multisimulation Strategy for Decision-Support,"Intelligent agent technology provides a promising basis to develop next generation tools and methods to assist decision-making. 
This chapter elaborates on the emergent requirements of decision support in light of recent advancements in decision science and presents a conceptual framework that serves as an agent-based architecture for decision-support. We argue that in most decision-making problems, the nature of the problem changes as the situation unfolds. Initial parameters, as well as scenarios can be irrelevant under emergent conditions. Relevant contingency decision-making models need to be identified and instantiated to continue exploration. In this paper, we suggest a multi-model framework that subsumes multiple submodels that together constitute the behavior of a complex multi-phased decisionmaking process. It has been argued that situation awareness is a critical component of experience-based decision-making style. Perception, understanding, and anticipation mechanisms are discussed as three major subsystems in realizing the situation awareness model.",2008,0, 332,A usability study on human-computer interface for middle-aged learners,"''Usability'' is considered to be inherent in human-computer interface because it expresses the relationship between end users and computer applications. In this paper, we conducted a study to examine the usability of human-computer interface for middle-aged learners in Taiwan. There are two phases contained in the study: (1) an elementary computer-training task, and (2) a usability analysis of human-computer interface. Making use of a questionnaire survey, correlation analysis, and the grey relational model, some user characteristics and learning behavior were derived. For example, regarding middle-aged learners, the usability of present mouse and monitor devices is preferable to that of the keyboard device and a Windows-based software interface. Educational level is the major factor influencing middle-aged learners' use of computer interfaces. To unemployed middle-aged learners, more males than females were found to exhibit the phenomenon of computerphobia. The younger age learners show lower anxiety and hold more positive attitudes toward computer learning than the older-aged ones. Besides, the higher education learners hold much more positive expectation toward computer learning while the lower education learners pay more attention to their learning capability and deficiency.",2007,0, 333,A Visualization Solution for the Analysis and Identification of Workforce Expertise,"

Keeping sight of the enterprise's workforce strengthens the entire business by helping to avoid poor decision-making and lowering the risk of failure in problem-solving. It is critical for large-scale, global enterprises to have capabilities to quickly identify subject matter experts (SMEs) to staff teams or to resolve domain-specific problems. This requires timely understanding of the kinds of experience and expertise of the people in the firm for any given set of skills. Fortunately, a large portion of the information that is needed to identify SMEs and knowledge communities is embedded in many structured and unstructured data sources. Mining and understanding this information requires non-linear processes to interact with automated tools; along with visualizations of different interrelated data to enable exploration and discovery. This paper describes a visualization solution coupled with an interactive information analytics technique to facilitate the discovery and identification of workforce experience and knowledge community capacity.

",2007,0, 334,A Web- and Problem-Based Learning System in Artificial Intelligence,"This paper aims to construct a Web-based PBL system for the students at the department of computer science and information engineering. The central bases on instructional theory, learning theory, and PBL instructional activities are applied to this paper. The research methods include literature review, experts' visits, PBL instruction, interview, focus group, etc. Based on the above theories and research methods, the authors intend to educate the students' teamwork, data analyzing, and problem-solving capabilities. Moreover, the ability of technological innovation can enable students to enhance their competencies. The research results obviously show that the PBL instruction can help students learn more about artificial intelligence (AI)",2006,0, 335,A web-based group decision support system for the selection and evaluation of educational multimedia,"

Multimedia is now a relatively mature field, having advanced over more than two decades. The combination of text, images, videos, animations, etc. in both presentational and conversational form is now a standard part of most computer applications, web-based and stand-alone alike. As such, Educational Multimedia (EMM) can be a great tool to improve teaching and learning. However, EMM selection and evaluation for higher education is a complex and interdisciplinary problem characterised by uncertainty, dynamics, explicit and implicit knowledge and constraints, and involvement of different stakeholders. In this work, we use a domain-based web oriented Group Decision Support System (GDSS) for the selection and continuous evaluation of EMM for educational providers. We investigate the viability of developing and validating a web-based GDSS, integrated with knowledge management (KM) and instructional design, as a support tool to overcome the difficulties in the selection and evaluation of EMM. The proposed solution manages and supports the six phases of planning, intelligence, design, choice, implementation, and evaluation. In addition to design and implementation, performance evaluation is also presented using data collected from experts, instructors, and EMM producers. The results reveal that the proposed solution can successfully help educational consumers in selecting and evaluating EMM.

",2007,0, 336,A/B Dashboard: The Case for a Virtual Information Systems Development Environment to Support a RAD Project,"The impact of digitization on organizational structures and processes is transforming relationships and work practices. The dynamics of the digital economy fuelled by the high rate of innovation in digital technologies demand that firms adopt an equally rapid and responsive information systems development model/paradigm to keep pace. With reference to Rapid Applications Development (RAD), this case study considers the degree to which an iterative and participative information systems development process is amenable to being conducted in a virtual manner. Using participatory action research, this paper investigates the potential for deploying a virtual learning environment as a CASE tool for the development of web-based applications. In this instance the final IT artefact is an operational prototype - the A/B Dashboard. We demonstrate that a virtual information systems development environment can successfully support a RAD/DSDM approach, resolving some of the existing problems associated with RAD.",2004,0, 337,Abstracting the Grid,"Abstract graphical Grid workflow can adapt the dynamic nature of Grid environment. It is more intuitive and convenient for user to apply Grid services solving the remote sensing distributable computing problems than concrete Grid workflows. This paper firstly introduces the feasibilities and advantages of abstract Grid workflow applying for remote sensing quantitative retrieval services. And then it gives the relative research status about science workflow applied to geosciences' domain. In the design of Grid workflow data structure, the authors illustrate the abstract Grid workflow's data structure for remote sensing quantitative retrieval. And then the authors give the key operation algorithms of the abstract Grid workflow data structure. In the implementation part, the authors have completed the abstract graphical Grid workflow composition system for remote sensing quantitative retrieval service. Using it, users can construct a workflow based on remote sensing application just by dragging and clicking the components of interest provided by the system.",2010,0, 338,Accelerating ATM Simulations Using Dynamic Component Substitution (DCS),"

The steady growth in the multifaceted use of broadband asynchronous transfer mode (ATM) networks for time-critical applications has significantly increased the demands on the quality of service (QoS) provided by the networks. Satisfying these demands requires the networks to be carefully engineered based on inferences drawn from detailed analysis of various scenarios. Analysis of networks is often performed through computer-based simulations. Simulation-based analysis of the networks, including nonquiescent or rare conditions, must be conducted using high-fidelity, high-resolution models that reflect the size and complexity of the network to ensure that crucial scalability issues do not dominate. However, such simulations are time-consuming because significant time is spent in driving the models to the desired scenarios. In an endeavor to address the issues associated with the aforementioned bottleneck, this article proposes a novel, multiresolution modeling-based methodology called dynamic component substitution (DCS). DCS is used to dynamically (i.e., during simulation) change the resolution of the model, which enables more optimal trade-offs between different parameters such as observability, fidelity, and simulation overheads, thereby reducing the total time for simulation. The article presents the issues involved in applying DCS in parallel simulations of ATM networks. An empirical evaluation of the proposed approach is also presented. The experiments indicate that DCS can significantly accelerate the simulation of ATM networks without affecting the overall accuracy of the simulation results.

",2006,0, 339,Acceptance of Visual Search Interfaces for the Web - Design and Empirical Evaluation of a Book Search Interface,"

Theoretically, visual search interfaces are supposed to outperform list interfaces for such task types as nonspecific queries because they make use of additional semantic information (like price, date or review for a book). But why are web sites like Amazon or eBay still using classical textual list interfaces? Many visual interfaces performed well on objective measures (retrieval time, precision or recall). But subjective factors (ease, joy, usefulness) determining their acceptance in practice are often neglected. Therefore, we created a graphical interface for searching books and evaluated it in a 51 participant study. The study builds on the technology acceptance model which measures users’ subjective attitude towards using an interface. We found that the variable enjoyment is of higher relevance in both visual and textual search interfaces than previously stated. Finally, the novel interface yielded significantly better results for book searches than the textual one.

",2005,0, 340,Accumulation and presentation of empirical evidence: problems and challenges,"Understanding the effects of software engineering techniques and processes under varying conditions can be seen as a major prerequisite towards predictable project planning and guaranteeing software quality. Evidence regarding the effects of techniques and processes for specific contexts can be gained by empirical studies. Due to the fact that software development is a human-based and context-oriented activity the effects vary from project environment to project environment. As a consequence, the studies need to be performed in specific environments and the results are typically only valid for these local environments. Potential users of the evidence gained in such studies (e.g., project planners who need to select techniques and processes for a project) are confronted with difficulties such as finding and understanding the relevant results and assessing whether and how they can be applied to their own situation. Thereby, effective transfer and use of empirical findings is hindered. Our thesis is that effective dissemination and exploitation of empirical evidence into industry requires aggregation, integration, and adequate stakeholder-oriented presentation of the results. This position paper sketches major problems and challenges and proposes research issues towards solving the problem.",2005,0, 341,Achieving Quality in Open Source Software,"The open source software community has published a substantial body of research on OSS quality. Focusing on this peer-reviewed body of work lets us draw conclusions from empirical data rather than rely on the large volume of evangelical opinion that has historically dominated this field. This body of published research has become much more critical and objective in its efforts to understand OSS development, and a consensus has emerged on the key components of high-quality OSS delivery. This article reviews this body of research and draws out lessons learned, investigating how the approaches used to deliver high-quality OSS differ from, and can be incorporated into, closed-source software development",2007,0, 342,Adaptation of the Balanced Scorecard Model to the IT Functions,"The essence of keeping score about everything cannot be overemphasized. A performance measurement system may provide an early warning detection system indicating what has happened; diagnose reason for the current situation; and indicate what remedial action should be taken. Businesses rely on performance measurement systems to provide a feedback on the health of the business. The balanced scorecard (BSC) framework suggests the use of non-financial performance measures via three additional perspectives, i.e. customer, internal business process, learning and innovation to supplement traditional financial measures, believing that if used in that way, the scorecard addresses a serious deficiency in traditional management systems: their inability to link a company's long term strategy with its short term actions. This paper highlights the issues and combination of factors that influences the adaptation of the balanced scorecard model to the IT function and draws attention to cases of organisations that have attempted to implement the balanced scorecard to their IT departments, highlighting their challenges and rewards.",2005,0, 343,Adapting PROFES for Use in an Agile Process: An Industry Experience Report,"

Background: Agile methods are becoming established not only in new business organizations, but also in organizations dealing with innovation and early product development in more traditional branches such as the automotive industry. Customers of those organizations demand a specified quality of the delivered products.

Objective: Adapt the PROFES Improvement Methodology for use in an industrial, agile process context, to ensure more predictable product quality.

Method: An explorative case study at BMW Car IT, which included several structured interviews with stakeholders such as customers and developers.

Result: Adapted PROFES methodology with regard to agility and initial product-process dependencies, which partially confirm some of the original PROFES findings.

Conclusion: The cost-value ratio of applying PROFES as an improvement methodology in an agile environment has to be carefully considered.

",2005,0, 344,Adapting to When Students Game an Intelligent Tutoring System,"Game-based learning is considered as a very motivational tool to accelerate active learning of students. As such learning environments usually follow a computer-assisted instruction concept that offers no adaptability to each student, some idea from Intelligent Tutoring Systems (ITS) are borrowed and applied in educational games to teach introductory programming. Thus, we developed a Game-based Intelligent Tutoring System (GITS) in the form of a competitive board game. The board game revises the classic table game ""Snakes and Ladders"" to improve web-based problem solving skills and learning computer programming. Moreover, a mini-game, tic-tac-toe quiz, is applied in GITS to update the Bayesian network used for the process of decision-making in our proposed system. Our future work is to evaluate the GITS by conducting an experimental study using novices.",2015,0, 345,Adaptive Information for Consumers of Healthcare,"This paper presents a novel local threshold segmentation algorithm for digital images incorporating shape information. In image segmentation, most of local threshold algorithms are only based on intensity analysis. In many applications where an image contains objects with a similar shape, besides the intensity information, prior known shape attributes could be exploited to improve the segmentation. The goal of this work is to design a local threshold algorithm that includes shape information to enhance the segmentation quality. The algorithm can be divided into two steps: adaptively selecting local threshold based on maximum likelihood, and then removing unwanted segmented fragments by a supervised classifier. Shape attribute distributions are learned from typical objects in ground truth images. Local threshold for each object in an image to be segmented is chosen to maximize probabilities of these shape attributes according to learned distributions. After local thresholds are picked, the algorithm applies a supervised classifier trained by shape features to reject unwanted fragments. Experiments on oil sand images have shown that the proposed algorithm has superior performance to local threshold approaches based on intensity information in terms of segmentation quality.",2009,0, 346,Adaptive Manufacturing: A Real-Time Simulation-Based Control System,This paper presenst a real-time execution system to support adaptive manufacturing strategies. We propose a functional architecture integrating optimisation and simulation techniques with Enterprise Resource Planning and Manufacturing Execution Systems. The model allows the implementation of real-time decision-making components and feedback-loop mechanisms. A demonstrative example based on a real life production scenario is presented to illustrate how such integration can be achieved.,2006,0, 347,Adopting Model Driven Software Development in Industry ? A Case Study at Two Companies,"

Model Driven Software Development (MDD) is a vision of software development where models play a core role as primary development artifacts. Its industrial adoption depends on several factors, including possibilities of increasing productivity and quality by using models. In this paper we present a case study of two companies willing to adopt the principles of MDD. One of the companies is in the process of adopting MDD while the other withdrew from its initial intentions. The results provide insights into the differences in requirements for MDD in these organizations, factors determining the decision upon adoption and the potentially suitable modeling notation for the purpose of each of the companies. The analysis of the results from this case study, supported by the conclusions from a previous case study of a successful MDD adoption, show also which conditions should be fulfilled in order to increase the chances of succeeding in adopting MDD.

",2006,0, 348,Adoption of Instant Messaging Technologies by University Students,"The main objective of this paper is to better understand the nature and patterns of students? socialization patterns in relation to the adoption of Instant Messaging (IM) systems. A model based on the Extended Planned Behavior Theory (EPBT) was applied to a sample of 80 students of software engineering at the University of New South Wales, Australia. Based on the EPBT model, a questionnaire was administered to these students. A number of key concepts were identified in relation to the students? adoption of IM. It was also found that students use IM to support a number of task-related purposes such as collaborating with their classmates about group work and assignments, as well as for scheduling and coordinating meetings and significant results were obtained.",2007,0, 349,Advancing the State of the Art in Computational Gene Prediction,"

Current methods for computationally predicting the locations and intron-exon structures of protein-coding genes in eukaryotic DNA are largely based on probabilistic, state-based generative models such as hidden Markov models and their various extensions. Unfortunately, little attention has been paid to the optimality of these models for the gene-parsing problem. Furthermore, as the prevalence of alternative splicing in human genes becomes more apparent, the ""one gene, one parse"" discipline endorsed by virtually all current gene-finding systems becomes less attractive from a biomedical perspective. Because our ability to accurately identify all the isoforms of each gene in the genome is of direct importance to biomedicine, our ability to improve gene-finding accuracy both for human and non-human DNA clearly has a potential to significantly impact human health. In this paper we review current methods and suggest a number of possible directions for further research that may alleviate some of these problems and ultimately lead to better and more useful gene predictions.

",2006,0, 350,Advantages of Spoken Language Interaction in Dialogue-Based Intelligent Tutoring Systems,"AbstractThe ability to lead collaborative discussions and appropriately scaffold learning has been identified as one of the central advantages of human tutorial interaction [6]. In order to reproduce the effectiveness of human tutors, many developers of tutorial dialogue systems have taken the approach of identifying human tutorial tactics and then incorporating them into their systems. Equally important as understanding the tactics themselves is understanding how human tutors decide which tactics to use. We argue that these decisions are made based not only on student actions and the content of student utterances, but also on the meta-communicative information conveyed through spoken utterances (e.g. pauses, disfluencies, intonation). Since this information is less frequent or unavailable in typed input, tutorial dialogue systems with speech interfaces have the potential to be more effective than those without. This paper gives an overview of the Spoken Conversational Tutor (SCoT) that we have built and describes how we are beginning to make use of spoken language information in SCoT.",2004,0, 351,Agent Based Simulation Architecture for Evaluating Operational Policies in Transshipping Containers,"

An agent based simulator for evaluating operational policies in the transshipment of containers in a container terminal is described. The simulation tool, called SimPort, is a decentralized approach to simulating managers and entities in a container terminal. Real data from two container terminals are used as input for evaluating eight transshipment policies. The policies concern the sequencing of ships, berth allocation, and stacking rule. They are evaluated with respect to a number of aspects, such as, turn-around time for ships and traveled distance of straddle carriers. The simulation results indicate that a good choice in yard stacking and berthing position policies can lead to faster ship turn-around times. For instance, in the terminal studied the Overall-Time-Shortening policy offers fast turn-around times when combined with a Shortest-Job-First sequencing of arriving ships.

",2009,0, 352,Agent Shell for the Development of Tutoring Systems for Expert Problem Solving Knowledge,"

This paper introduces the concept of learning and tutoring agent shell as a general and powerful tool for rapid development of a new type of intelligent assistants that can learn complex problem solving expertise directly from human experts, can support human experts in problem solving and decision making, and can teach their problem solving expertise to non-experts. This shell synergistically integrates general problem solving, learning and tutoring engines and has been used to build a complex cognitive assistant for intelligence analysts. This assistant has been successfully used and evaluated in courses at US Army War College and George Mason University. The goal of this paper is to provide an intuitive overview of the tutoring-related capabilities of this shell which rely heavily on its problem solving and learning capabilities. They include the capability to rapidly acquire the basic abstract problem solving strategies of the application domain, directly from a subject matter expert. They allow an instructional designer to rapidly design lessons for teaching these abstract problem solving strategies, without the need of defining examples because they are automatically generated by the system from the domain knowledge base. They also allow rapid learning of test questions to assess students' problem solving knowledge. The proposed type of cognitive assistant, capable of learning, problem solving and tutoring, as well as the learning and tutoring agent shell used to build it, represent a very promising and expected evolution for the knowledge-based agents for ""ill-defined"" domains.

",2008,0, 353,Agent-based Dynamic Scheduling for Distributed Manufacturing,"At Fujitsu AMD Semiconductor Limited (FASL), the original scheduling and dispatching process was time consuming and inadequate to understand the effects on equipment utilization and cycle times caused by sudden changes in product mix or volume. The re-prioritizing of products due to dynamic changes in equipment status was difficult to achieve. As with any semiconductor manufacturing facility, the flow of materials through the factory was nonlinear. This meant that linear mathematical models could not predict the behavior of the flow accurately, and that initial metrics of the factory were not sufficient to predict status in the future. Furthermore, stochastic components inherent to process flows, such as equipment failures, yields, dynamic queues and reentrant flow compounded complexity. Consequently, FASL implemented MS/X OnTime from TYECIN Systems because of its modeling, scheduling, and dispatching capabilities. To update the simulation-based scheduler with actual WIP and equipment status, the scheduler was interfaced to the MES. The coupled systems provided an almost real-time dispatching capability. As a result of implementation, planners were able to model, schedule, and dispatch all products with 95% accuracy, generate dispatch lists by operator, and perform what-if analyses of changing factory conditions. Manual operations were also reduced to a minimum",1997,0, 354,Agent-Based Ontology Management towards Interoperability,"We explore how computational ontologies can be impactful vis-à-vis the developing discipline of ""data science."" We posit an approach wherein management theories are represented as formal axioms, and then applied to draw inferences about data that reside in corporate databases. That is, management theories would be implemented as rules within a data analytics engine. We demonstrate a case study development of such an ontology by formally representing an accounting theory in First-Order Logic. Though quite preliminary, the idea that an information technology, namely ontologies, can potentially actualize the academic cliché, ""From Theory to Practice,"" and be applicable to the burgeoning domain of data analytics is novel and exciting.",2016,0, 355,Agent-based simulation of open source evolution,"Aiming at the problem of the characteristics of different participants influencing on the evolution of the community project, a simulation model is established based on the agent-based method. The mechanism of participant selection and performing tasks and the mechanism of cooperation between participants are added in the model. The evolution situation under different composition structures of participants is compared and analyzed, and a sensitivity analysis of the cooperation probability between participants is introduced. The results shows that the joining of innovation leader could impact the evolution of the community project largely, but the impact is weaken with the increase of the number of innovation leaders. The increase of the number of core developers can promote the evolution of the community project, but the impact is limited. The cooperation probability of participants has a greater influence on the evolution community project. 
Finally, the management recommendations are given based on the results.",2016,0, 356,AgentZ: Extending object-Z for multi-agent systems specification.,"The paper presents a logical framework to model some aspects of contextuality; i.e., generating contexts in a multi-context setting. Following the existing work on extended logic programming and multi-agent systems, a contextual reasoning procedure for a particular class of multi-context systems, the law ones, is proposed based on Grice's maxims (1975), which in turn are used to support a larger set of contexts by combining contexts into compound structures, thus defining a logic of contexts. The notion of compound contexts reflects the beliefs, desires, intentions and obligations, among others, that depend on the problem, leading to a variety of dynamic context formations. In its applied form, the Portuguese Public Prosecution Service is considered, which is the state body entrusted with representing the state, bringing criminal cases to court, defending democratic legality, and any other interests that the law determines",1997,0, 357,Aggregation of Empirical Evidence,"Although asset growth has little effect on the behavior of the typical fund, funds should alter investment behavior as assets under management increase. We find that large funds diversify their portfolios in response to growth. Greater diversification, especially for large funds, is associated with better performance.",2010,0, 358,Aggregation Process with multiple evidence levels for experimental studies in Software Engineering,Current meta-analysis-based procedures for aggregating experimental studies borrowed from other branches of science have proved not to be suitable for real-world software engineering. This paper presents an alternative aggregation process to the standard. It is based on an aggregation strategy with multiple evidence levels. Each evidence level is linked to a specific aggregation technique which is assigned depending on the quality and quantity of identified experimental studies.,2007,0,359 359,"AGGREGATION PROCESS WITH MULTIPLE EVIDENCE LEVELS FOR EXPERIMENTAL STUDIES IN SOFTWARE ENGINEERING","Decaps are the panacea for the noise-related issues. Due to the short distance advantage, decaps are embedded in the fan-out wafer-level package instead of the printed circuit board. These decaps, generally thicker than chips, will have a crucial influence on the molding process as well. A lot of issues are encountered in the molding process, especially with lots of decaps, i.e., voids issues, incomplete filling issues, and die shift issues. In an optimized design, all these issues should be prevented or reduced as much as possible. In this paper, design flow for the wafer-level molding process is demonstrated and design guidelines are provided. Three important evaluation standards are used to evaluate the design, i.e., incomplete filling, drag force, and voids. Two kinds of design parameters, structure parameters (i.e., die placement, die size and thickness, and so on) and process parameters (i.e., vacuum pressure, filling speed, and so on), are optimized in the whole study. Optimization of these parameters helps the real wafer-level molding process to be conducted smoothly.",2015,0, 360,"Ahaa --agile, hybrid assessment method for automotive, safety critical smes","The need for software is increasingly growing in the automotive industry. 
Software development projects are, however, often troubled by time and budget overruns, resulting in systems that do not fulfill customer requirements. Both research and industry lack strategies to combine reducing the long software development lifecycles (as required by time-to-market demands) with increasing the quality of the software developed. Software process improvement (SPI) provides the first step in the move towards software quality, and assessments are a vital part of this process. Unfortunately, software process assessments are often expensive and time consuming. Additionally, they often provide companies with a long list of issues without providing realistic suggestions. The goal of this paper is to describe a new low-overhead assessment method that has been designed specifically for small-to-medium-sized (SMEs) organisations wishing to be automotive software suppliers. This assessment method integrates the structured-ness of the plan-driven SPI models of Capability Maturity Model Integration (CMMI) and Automotive SPICEtrade with the flexibleness of agile practices.",2008,0, 361,Algorithm and care pathway: Clinical guidelines and healthcare processes,"AbstractThis paper reports ongoing work in the project PRESTIGE: Guidelines in Healthcare. An approach has been developed to representing the knowledge content of clinical guideline and protocols, using a declarative model incorporating a lifecycle model of clinical acts and activities. We also encountered the need to analyse and model the healthcare processes in which the use of a clinical guideline is embedded. A business process re-engineering (BPRE) methodology, developed in the previous AIM programme project SHINE, has been used for this purpose, enabling the mapping of the knowledge components of a clinical guideline to the specific healthcare processes where they are applicable. We review the need for a combination of algorithmic and process-oriented views of guideline knowledge in order to enable effective delivery of guideline-based clinical decision support.",1997,0, 362,Algorithm Visualization: The State of the Field,"In terms of the concepts of state and state transition, a new algorithm-State Transition Algorithm (STA) is proposed in order to probe into classical and intelligent optimization algorithms. On the basis of state and state transition, it becomes much simpler and easier to understand. As for continuous function optimization problems, three special operators named rotation, translation and expansion are presented. While for discrete function optimization problems, an operator called general elementary transformation is introduced. Finally, with 4 common benchmark continuous functions and a discrete problem used to test the performance of STA, the experiment shows that STA is a promising algorithm due to its good search capability.",2011,0, 363,Aligning learning objectives with service-learning outcomes in a mobile computing application,"We propose the development of a mobile, location-aware tour of the Bonsai Exhibition Garden of the North Carolina Arboretum. The tour will be a web-based, customizable, multimedia presentation on handheld Personal Digital Assistants. The complete tour, including all presentation materials and system installation, will be developed via a series of three classes at the University of North Carolina at Asheville. These classes, Database Management Systems, Human Computer Interface, and Systems Integration, will occur over a period of two semesters. 
The objective of this work is to create relevant and effective coursework that empowers students. Students produce state-of-the-art technology that serves their community thereby demonstrating the value of both the technology and their understanding of that technology for the betterment of others.",2006,0, 364,Ambient Intelligence and Multimodality,"This paper proposes a multidimensional recursive ambient modal analysis algorithm called recursive frequency domain decomposition (recursive FDD or RFDD). The method enables simultaneous processing of a large number of synchrophasor measurements for real-time ambient modal estimation. The method combines a previously proposed multidimensional block processing algorithm FDD with a single input recursive least square (RLS) algorithm into developing a new frequency domain multidimensional recursive algorithm. First, an auto-regressive model is fitted onto the sampled data of each signal using the time-domain RLS approach. Subsequent modal analysis is carried out in frequency domain in the spirit of FDD. The conventional FDD method uses non-parametric methods for power spectrum density (PSD) estimation. The proposed method in this paper by estimating PSD with a parametric method provides smoother PSD estimation which results in less standard deviation in RFDD estimates compared to FDD. The algorithm is tested on archived synchrophasor data from a real power system.",2017,0, 365,Amorphous Slicing of C Programs with TXL,"Interactive learning environments (ILE) supports discovery learning. According to Lewis et al. (1993), the development of which would improve retention, deepen understanding, and enhance motivation. Existing ILEs on the programming domain generally provide programming exercises and simulations of code as their main sources of interaction. This paper describes the design and implementation of two (2) additional components of the ILE, namely: the program slicer and the text generator. The program slicer determines the slices in the code that represent some algorithm. The text generator explains the algorithm in English.",2005,0, 366,An Agent-based Approach for the Maintenance of Database Applications,"Database systems lie at the core of almost every modern software application. The interaction between the application source code and the underlying database schema results in a dependency relationship that affects the application 's maintainability by raising a number of additional maintenance issues. To assess this effect and facilitate the maintenance process, a software engineering approach based on software agents is introduced. The distributed and cooperative nature of a software agent system provides the flexibility required to analyze modern multi-tier database applications such as web-based applications. A prototype system, which employs agent architecture in order to satisfy the requirements of the suggested approach, is presented.",2007,0, 367,An agent-based approach to intrastream synchronization for multimedia applications,"Nowadays, many multimedia cluster-to-cluster applications exist, such as 3D tele-immersion (3DTI), computer-supported collaborative workspaces (CSCWs), distributed multimedia presentations (DMP).... All these applications have sophisticated data transport requirements due to the use of multiple, semantically related flows of information. 
A synchronization mechanism must be used to synchronize the playout of the streams, regardless of the number of receivers and the number of streams played on the receiver clusters. In this paper, we present a new solution for multimedia group synchronization in such applications. Rather than base the solution on the definition of a new synchronization protocol (as other authors do), we base it on small modifications or extensions to RTP/RTCP standard protocols already used in most multimedia applications. Due to this, the overload introduced by the approach is minimal. The suitability of the approach was evaluated in a real one-way cluster-to-cluster application, with satisfactory results.",2009,0, 368,An Analysis Framework of Factors Influencing China Software and Information Service Offshore Outsourcing,"Offshore outsourcing has become popular because of cost or skill advantage. China is considered as one of major global software and information service offshore destinations, but it cannot compete with India right now. Based on literature review, this paper first presents an overview of the industry. Then 38 service providers in China are investigated to give a firm-level analysis of the industry. A framework of factors influencing the industry is presented. Influence factors are classified into macro factors, middle factors, and micro factors, each of which is analyzed in detail. It is concluded that China software and information service offshore outsourcing industry is of great potential, but needs marketing and advertising.",2008,0, 369,An Analysis of IFIP TC 8 WG 8.6,AbstractThe IFIP TC 8 WG 8.6 focuses on the transfer and diffusion of information technology. Since the working group was established in 1993 there have been a number of events where members of the group have produced contributions analyzing transfer and diffusion of IT in different settings and from different perspectives. In this paper we report the result of an analysis of the theoretical perspectives the contributors have applied in the studies. Our analysis suggests that even though there is an even distribution of factor and process oriented studies reported in proceedings the theoretical denominator for the long standing members of WG 8.6 is the process oriented approach to the study of transfer and diffusion of IT.,2006,0, 370,An analysis of research in computing disciplines,"From the perspective of systems engineering, this paper analyzes the composition, structure, functions and characteristics of the tourism system and presents that the tourism system is a complex one which consists of the tourism system, tourist service facilities system, ecological system and tourism administrator system. It also puts forward that the development of tourism is a systematic project and discusses the application of the systems engineering to the tourism system.",2011,0, 371,An Annotated Bibliography of Personnel Scheduling and Rostering,"THERE HAVE BEEN relatively few systematic empirical investigations of the behavior of Research and Development personnel. However, as the findings of sociologists and psychologists working in such areas as creativity, leadership, occupational values, and organizational change have been made available, the Transactions has tried to keep its readers posted on new developments through the publication of relevant articles. 
In accord with this policy, the present section has attempted to bring to Transactions readers summaries of recent research on researcher behavior with which they may not be familiar. Most of the studies summarized here were first published in professional sociology and psychology journals, although a few books and unpublished papers are also included.",1963,0, 372,An Approach for Assessing Suitability of Agile Solutions: A Case Study,"

A dynamic market situation and changing customer requirements place growing demands on product development. Product releases should be developed and managed in short iterations, responding to rapid external changes while maintaining a high quality level. Agile practices (such as the best practices in Extreme Programming and Scrum) offer a great way of monitoring and controlling rapid product development cycles and release development. One problem in product development projects, however, is how to apply agile methods and principles as part of complex product development. The purpose of this paper is to describe how Agile Assessment was conducted in a case company in order to support product development and customer support improvement. During the experiment it was found that Agile Assessment is an efficient method for clarifying which agile practices are suitable for the organization's product development and customer co-operation. Another finding was that the use of the most suitable agile practices would improve incremental development monitoring and traceability of requirements.

",2005,0, 373,An approach for experimentally evaluating effectiveness and efficiency of coverage criteria for software testing,"

Experimental work in software testing has generally focused on evaluating the effectiveness and effort requirements of various coverage criteria. The important issue of testing efficiency has not been sufficiently addressed. In this paper, we describe an approach for comparing the effectiveness and efficiency of test coverage criteria using mutation analysis. For each coverage criterion under study, we generate multiple coverage-adequate minimal test suites for a test-program from a test-pool, which are then executed on a set of systematically generated program mutants to obtain the fault data. We demonstrate the applicability of the proposed approach by describing the results of an experiment comparing the three code-based testing criteria, namely, block coverage, branch coverage, and predicate coverage. Our results suggest that there is a trade-off between effectiveness and efficiency of a coverage criterion. Specifically, the predicate coverage criterion was found to be most effective but least efficient whereas the block coverage criterion was most efficient but least effective. We observed high variability in the performance of block test suites whereas branch and predicate test suites were relatively consistent. Overall results suggest that the branch coverage criterion performs consistently with good efficiency and effectiveness, and it appears to be the most viable option for code-based control flow testing.

",2008,0, 374,An architecture for privacy-sensitive ubiquitous computing,"In our previous paper, we proposed metabolic computing model in order to realize sustainable information system. We think that metabolic computing model has high fault tolerance and sustainability. We also proposed a realistic architecture of metabolic computing model. Metaboloid is a processing unit in this architecture. A set of metaboloids is organized as mesh connected NORMA. However, in simple metabolism, the specification of architecture is not changed. So, even if manufacturing technology of hardware is innovative, the performance will not be improved. In this paper, we propose an evolutional architecture for metabolic computing model. In this architecture, if the specification is portable and if the difference between specifications is only 1, a set of metaboloids of which specification is different can work together.",2011,0, 375,An Assessment of the Currency of Free Science Information on the Web,"

As the Internet has become a ubiquitous tool in modern science, it is increasingly important to evaluate the currency of free science information on the web. However, there are few empirical studies which have specifically focused on this issue. In this paper, we used the search engines Google, Yahoo and Altavista to generate a list of web pages about 32 terms. Sample pages were examined according to the criteria developed in this study. Results revealed that the mean currency of free science information on the web was 2.6482 (n=2814); only 982 (34.90%) of the samples scored above this average. Sample pages with different domain names or subjects differed significantly (P<0.05). In conclusion, the currency of free science information on the web is unsatisfactory. The criteria set developed here could be a useful instrument for researchers and the public to assess information currency on the web by themselves.

",2007,0, 376,"An Attribute Proposal for Same Vendor, Version-to-Version COTS Upgrade Decisions","Commercial Off-the-Shelf (COTS) Productivity Applications have provided a reliable, convenient, and consistent productivity environment for commercial, government, and personal users for over two decades. Vendors today, such as Microsoft and Adobe, typically deliver new capabilities as follow-on versions or upgrades to existing product lines. Annually, corporations needlessly spend millions of dollars investing in follow-on versions of currently deployed applications without adequately evaluating upgrade decisions. Exacerbating the problem, vendors release new versions of their productivity applications with increasingly shorter intervals, advertising them as full of new and ‘‘must have’’ capabilities. The decision to upgrade is challenging. Unfortunately, there is a lack of ‘‘same vendor, version-to-version’’ upgrade decision support models to assist Information Technology (IT) decision-makers whether to upgrade or not. As a result, IT decision makers typically employ general strategies that are not based on clear, well-defined decision attributes, which, in turn, waste valuable IT resources. This research effort proposes an upgrade decision support model for COTS productivity Information Technology (IT) decision makers, who leverage COTS applications, are regularly faced with the challenge of upgrading to the latest version of their COTS products. Microsoft, as well as other vendors, are releasing upgrades to their products every 18-24 months, each purported to be full of new ?features? and capabilities. Every new release represents another IT decision and investment for the organization. There are advantages and disadvantages of using COTS applications. Many of these disadvantages surface when new software versions are released by vendors. As a result, IT decision makers typically employ general strategies that are not based on clear, well-defined decision attributes which, in turn, place the organization at risk. In addition, currently available COTS decision models do not specifically address same vendor, versionto-version upgrade decisions. This research effort proposes as set of 25 specific decision attributes and their values, organized into 9 categories, which are considered key considerations for same vendor, version-to-version COTS application upgrades. These proposed attributes are based on research and literature reviews that contribute to same vendor, version-to-version upgrade decisions. Finally, this paper ends with recommendations for further research in this subject area. This paper will build on existing research and literature to enumerate the challenges faced by IT decision makers and why current approaches are inadequate. This paper is organized into the following sections: Section 1 will describe the advantages and disadvantages of using COTS applications. Section 2 will describe the challenges current IT decision makers face. Section 3 will describe strategies and their limitations associated with same vendor, version-to-version upgrades. Section 4 will propose a set of attributes and value ranges, and Section 5 will conclude on recommendations for further research. ",2005,0, 377,An automated testing strategy targeted for efficient use in the consulting domain,"Test automation can decrease release cycle time for software systems compared to manual test execution. Manual test execution is also considered inefficient and error-prone. 
However, few companies have gotten far within the field of test automation. This thesis investigates how testing and test automation is conducted in a test consulting setting. It has been recognized that low test process maturity is common in customer projects and this has led to equally low system testability and stability. The study started with a literature survey which summarized the current state within the field of automated testing. This was followed by a consulting case study. In the case study it was investigated how the identified test process maturity problems affect the test consulting services. The consulting automated testing strategy (CATS) been developed to meet the current identified challenges in the domain. Customer guidelines which aim to increase the test process maturity in the customer organization have also been developed as a support to the strategy. Furthermore, the study has included both industrial and academic validation which has been conducted through interviews with consultant practitioners and researchers.",2007,0, 378,An Economic Analysis of Market for Software Vulnerabilities,"Software vulnerability identification and their disclosure has been a critical area of concern for policy makers. Traditionally, computer emergency response team (CERT) has been acting as an infomediary between benign identifiers who report vulnerability information and users of the software. After verifying a reported vulnerability, and obtaining the remediation in the form of a patch from the software vendor, the infomediary - CERT - sends out a public ""advisory"" to inform software users about it. In the CERT type mechanism, reporting vulnerabilities is voluntary with no explicit monetary gains to benign identifiers. Of late, firms such as iDefense have been proposing a different market based mechanism. In this market based mechanism, the infomediary rewards identifiers for each vulnerability disclosed to it. The infomediary then shares this information with its clients who are users of this software. Using this information, clients can protect themselves against attacks that exploit those specific vulnerabilities. The key issue addressed in this paper is whether movement towards such a market based mechanism for vulnerabilities leads to a better social outcome? We study this problem by characterizing the behavior of software users benign and malign identifiers (or hackers).",2004,0, 379,An Effective Method for Analyzing Intrusion Situation Through IP-Based Classification,"

Because current intrusion detection systems produce false alerts and large volumes of alerts, system administrators have difficulty analyzing an intrusion in real time. To address this problem, analysis of the intrusion situation and alert correlation has been studied. However, the existing situation analysis method groups alerts by the similarity of their measures, which makes it hard to respond appropriately to an elaborate attack. Also, the result of the method is so abstract that the raw information before reduction must be analyzed to understand the intrusion. In this paper, we reduce the number of alerts using aggregation and correlation and classify the alerts by IP addresses and attack types. Through this method, our tool can find a cunningly cloaked attack flow as well as the general attack situation, and the results are visualized so that an administrator can easily understand the attack flow.

",2005,0, 380,An Effective Real-Parameter Genetic Algorithm with Parent Centric Normal Crossover for Multimodal Optimisation,"AbstractEvolutionary Algorithms (EAs) are a useful tool to tackle real-world optimisation problems. Two important features that make these problems hard are multimodality and high dimensionality of the search landscape.In this paper, we present a real-parameter Genetic Algorithm (GA) which is effective in optimising high dimensional, multimodal functions. We compare our algorithm with two previously published GAs which the authors claim gives good results for high dimensional, multimodal functions. For problems with only few local optima, our algorithm does not perform as well as one of the other algorithm. However, for problems with very many local optima, our algorithm performed significantly better. A wider comparison is made with previously published algorithms showing that our algorithm has the best performance for the hardest function tested.",2004,0, 381,An Effective Scheduling Scheme for Information Searching with Computational Resources Scattered over a Large-Scale Network,"

In this article, we propose an efficient resource distribution scheduling scheme for information retrieval from multiple information sources. This scheme can contain the costs of information searching and retrieval by limiting the effective area of connection from an information source to computational resources. Furthermore, we show the effectiveness of the scheme through computer simulations under various assumption settings.

",2008,0, 382,An efficient intrusion detection system using a boosting-based learning algorithm,"Although there are many anomaly detection systems based on learning algorithms that are able to detect unknown attacks or variants of known attacks, most systems require sophisticated training data for supervised learning. Because it is difficult to prepare the training data, anomaly detection systems are not widely used in the practical environment. In this paper, we propose an anomaly detection system based on machine learning that requires no prepared training data. The system generates sophisticated training data that is applicable to the learning by processing alerts that a signature based intrusion detection system (IDS) outputs. We evaluated the system using two types of traffic: the 1999 DARPA IDS evaluation data and the security scanner data. The results show that the training data generated by the system is suitable for learning attack behaviors and the system is able to detect variants of worms and known attacks.",2005,0, 383,An Efficient Multimodal 2D-3D Hybrid Approach to Automatic Face Recognition,"We present a fully automatic face recognition algorithm and demonstrate its performance on the FRGC v2.0 data. Our algorithm is multimodal (2D and 3D) and performs hybrid (feature based and holistic) matching in order to achieve efficiency and robustness to facial expressions. The pose of a 3D face along with its texture is automatically corrected using a novel approach based on a single automatically detected point and the Hotelling transform. A novel 3D spherical face representation (SFR) is used in conjunction with the scale-invariant feature transform (SIFT) descriptor to form a rejection classifier, which quickly eliminates a large number of candidate faces at an early stage for efficient recognition in case of large galleries. The remaining faces are then verified using a novel region-based matching approach, which is robust to facial expressions. This approach automatically segments the eyes- forehead and the nose regions, which are relatively less sensitive to expressions and matches them separately using a modified iterative closest point (ICP) algorithm. The results of all the matching engines are fused at the metric level to achieve higher accuracy. We use the FRGC benchmark to compare our results to other algorithms that used the same database. Our multimodal hybrid algorithm performed better than others by achieving 99.74 percent and 98.31 percent verification rates at a 0.001 false acceptance rate (FAR) and identification rates of 99.02 percent and 95.37 percent for probes with a neutral and a nonneutral expression, respectively.",2007,0, 384,An E-Health Community of Practice: Online Communication in an E-Health Service Delivery Environment,"

Results of a series of studies of consumer response to online interactive communication and video-based technologies for the delivery of health care services are presented. The studies include development, evaluation and usability studies of two interactive video-conferencing web sites: Caring for Others© [CFO], designed for older adults caring for a family member with a chronic disease, and Caring for Me© [CFM], designed to support an e-health program for obese adolescents. Stages of web site development, usability analyses, and evaluation of consumer response to the customized e-health programs are reported.

",2007,0, 385,An Elaboration on Dynamically Re-configurable Communication Protocols Using Key Identifiers,"In this paper we present a novel concept and methodology for generating tailored communication protocols specific to an application's requirements and the operating environment for a mobile node roaming among different access networks within the global Internet. Since the scheme that we present employs a universal technique, it can be also deployed in small-scale independent networks such as sensor networks to generate application-specific lightweight transport protocols as is appropriate to its energy-constrained operating environments. We present preliminary experimental and analytical results that confirm and justify the feasibility of our method based on a practical example applicable to the sensor network environment.",2005,0, 386,An empirical analysis of risk components and performance on software projects,"Risk management and performance enhancement have always been the focus of software project management studies. The present paper shows the findings from an empirical study based on 115 software projects on analyzing the probability of occurrence and impact of the six dimensions comprising 27 software risks on project performance. The MANOVA analysis revealed that the probability of occurrence and composite impact have significant differences on six risk dimensions. Moreover, it indicated that no association between the probability of occurrence and composite impact among the six risk dimensions exists and hence, it is a crucial consideration for project managers when deciding the suitable risk management strategy. A pattern analysis of risks across high, medium, and low-performance software projects also showed that (1) the ''requirement'' risk dimension is the primary area among the six risk dimensions regardless of whether the project performance belongs to high, medium, or low; (2) for medium-performance software projects, project managers, aside from giving importance to ''requirement risk'', must also continually monitor and control the ''planning and control'' and the ''project complexity'' risks so that the project performance can be improved; and, (3) improper management of the ''team'', ''requirement'', and ''planning and control'' risks are the primary factors contributing to a low-performance project.",2007,0, 387,An Empirical Approach to Characterizing Risky Software Projects Based on Logistic Regression Analysis,"

During software development, projects often experience risky situations. If projects fail to detect such risks, they may exhibit confused behavior. In this paper, we propose a new scheme for characterizing the level of confusion exhibited by projects, based on an empirical questionnaire. First, we designed a questionnaire covering five project viewpoints: requirements, estimates, planning, team organization, and project management activities. Each of these viewpoints was assessed using questions that determine experience and knowledge of software risks. Second, we classified projects into ""confused"" and ""not confused"" using the resulting metrics data. Third, we analyzed the relationship between responses to the questionnaire and the degree of confusion of the projects using logistic regression analysis, constructing a model to characterize confused projects. The experimental results, using actual project data, show that 28 projects out of 32 were characterized correctly. As a result, we concluded that the characterization of confused projects was successful. Furthermore, we applied the constructed model to data from other projects in order to detect risky projects. The result of this application showed that 7 out of 8 projects were classified correctly. Therefore, we concluded that the proposed scheme is also applicable to the detection of risky projects.

",2005,0, 388,An Empirical Assessment of the Perception of Computer Security between US and Korea : Focused on Rootkits,"The surveys were conducted from 400 students of five Korea and three U.S. universities to compare their knowledge and experience with various forms of malware.They provide an empirical assessment of the cross-cultural similarities and differences between students in the two countries. The variables examined include knowledge of computer viruses, spyware, and rootkits as well as perceptions of the damage that can result from various computer malware.While the two groups are similar with respect to their relative familiarity of rootkits compared with that of spyware and viruses, and in terms of how they perceive the malware knowledge of their peers, they exhibit significant differences in self-reported perceptions of rootkit familiarity. U.S. students report higher levels for all tested malware types, including the fictional ""Trilobyte"" virus. These comparisons reveal that little is known about rootkits today. However, there is hope for an accelerated rootkit awareness because of the rapid assimilation of Spyware knowledge in recent years.",2007,0, 389,"AN EMPIRICAL EVALUATION OF CLIENT - VENDOR RELATIONSHIPS IN INDIAN SOFTWARE OUTSOURCING COMPANIES","Efficient server selection algorithms reduce retrieval time for objects replicated on different servers and are an important component of Internet cache architectures. This paper empirically evaluates six client-side server selection algorithms. The study compares two statistical algorithms, one using median bandwidth and the other median latency, a dynamic probe algorithm, two hybrid algorithms, and random selection. The server pool includes a topologically dispersed set of United States state government Web servers. Experiments were run on three clients in different cities and on different regional networks. The study examines the effects of time-of day, client resources, and server proximity. Differences in performance highlight the degree of algorithm adaptability and the effect that network upgrades can have on statistical estimators. Dynamic network probing performs as well or better than the statistical bandwidth algorithm and the two probe bandwidth hybrid algorithms. The statistical latency algorithm is clearly worse, but does outperform random selection",2000,0, 390,An empirical examination of a process-oriented IT business success model,"

The value of information technology (IT) to modern organizations is almost undeniable. However, the determination of that value has been elusive in research and practice. We used a process-oriented research model developed using two streams of IT research to examine the value of IT in business organizations. One stream is characterized by examining how IT and non-IT variables affect other so-called IT success variables. The second stream is commonly referred to as IT business value, defined as the contribution of IT to firm performance. The resulting research model is referred to in our paper as the IT business success model. Data was collected from 225 top IS executives in fairly large organizations to empirically examine several hypotheses derived from theory concerning the causal nature of the IT business success model. A set of measures for the IT business success model was developed through an intense investigation of the IT literature. The measures were tested for validity and reliability using confirmatory factor analysis. The hypotheses that resulted from past research and conceptually illustrated in the research model were assessed using structural equation analysis. The implications of these findings and the limitations of the study are discussed in an effort to contribute to building a process-oriented theory base for IT business success at the organizational level of analysis.

",2006,0, 391,An empirical framework for comparing effectiveness of testing and property-based formal analysis,"Today, many formal analysis tools are not only used to provide certainty but are also used to debug software systems - a role that has traditional been reserved for testing tools. We are interested in exploring the complementary relationship as well as tradeoffs between testing and formal analysis with respect to debugging and more specifically bug detection. In this paper we present an approach to the assessment of testing and formal analysis tools using metrics to measure the quantity and efficiency of each technique at finding bugs. We also present an assessment framework that has been constructed to allow for symmetrical comparison and evaluation of tests versus properties. We are currently beginning to conduct experiments and this paper presents a discussion of possible outcomes of our proposed empirical study.",2005,0, 392,An empirical investigation of software reuse benefits in a large telecom product,

Background. This article describes a case study on the benefits of software reuse in a large telecom product. The reused components were developed in-house and shared in a product-family approach. Methods. Quantitative data mined from company repositories are combined with other quantitative data and qualitative observations. Results. We observed significantly lower fault density and less modified code between successive releases of the reused components. Reuse and standardization of software architecture and processes allowed easier transfer of development when organizational changes happened. Conclusions. The study adds to the evidence of quality benefits of large-scale reuse programs and explores organizational motivations and outcomes.

",2008,0, 393,An empirical investigation of the key determinants of data warehouse adoption,"Data warehousing (DW) has emerged as one of the most powerful decision support technologies during the last decade. However, despite the fact that it has been around for some time, DW has experienced limited spread/use and relatively high failure rates. Treating DW as a major IT infrastructural innovation, we propose a comprehensive research model - grounded in IT adoption and organizational theories - that examines the impact of various organizational and technological (innovation) factors on DW adoption. Seven factors - five organizational and two technological - are tested in the model. The study employed rigorous measurement scales of the research variables to develop a survey instrument and targeted 2500 organizations in both manufacturing and services segments within two major states in the United States. A total of 196 firms (276 executives), of which nearly 55% were adopters, responded to the survey. The results from a logistic regression model, initially conceptualizing a direct effect of each of the seven variables on adoption, indicate that five of the seven variables (three organizational factors - commitment, size, and absorptive capacity - and two innovation characteristics - relative advantage and low complexity) are key determinants of DW adoption. Although scope for DW and preexisting data environment within the organization were favorable for adopter firms, they did not emerge as key determinants. However, the study provided an opportunity to explore a more complex set of relationships. This alternative structural model (using LISREL) provides a much richer explanation of the relationships among the antecedent variables and with adoption, the dependent variable. The study, especially the revised conceptualization, contributes to existing research by proposing and empirically testing a fairly comprehensive model of organizational adoption of an information technology (IT) innovation, more specifically a DSS technology. The findings of the study have interesting implications with respect to IT/DW adoption, both for researchers and practitioners.",2008,0, 394,An Empirical Investigation of the Key Factors for Success in Software Process Improvement,"Understanding how to implement software process improvement (SPI) successfully is arguably the most challenging issue facing the SPI field today. The SPI literature contains many case studies of successful companies and descriptions of their SPI programs. However, the research efforts to date are limited and inconclusive and without adequate theoretical and psychometric justification. This paper extends and integrates models from prior research by performing an empirical investigation of the key factors for success in SPI. A quantitative survey of 120 software organizations was designed to test the conceptual model and hypotheses of the study. The results indicate that success depends critically on six organizational factors, which explained more than 50 percent of the variance in the outcome variable.
The main contribution of the paper is to increase the understanding of the influence of organizational issues by empirically showing that they are at least as important as technology for succeeding with SPI and, thus, to provide researchers and practitioners with important new insights regarding the critical factors of success in SPI.",2005,0, 395,An Empirical Process for Building and Validating Software Engineering Parametric Models,"Parametric modeling is a statistical technique whereby a dependent variable is estimated based on the values of and the relationships between the independent variable(s). The nature of the dependent variable can vary greatly based on one's domain of interest. In software engineering, parametric models are often used to help predict a system's development schedule, cost-to-build, and quality at various stages of the software lifecycle. In this paper, we discuss the use of parametric modeling in software engineering and present a nine-step parametric modeling process for creating, validating, and refining software engineering parametric models. We illustrate this process with three software engineering parametric models. Each of these models followed the nine steps in different ways due to the research technique, the nature of the model, and the variability of the data. The three models have been shown to be effective estimators of their respective independent variables. This paper aims to assist other software engineers in creating parametric models by establishing important steps in the modeling process and by demonstrating three variations on following the nine-step process.",2005,0, 396,An Empirical Research of Successful ERP Implementation Based on TAM,"From a perspective based on behavioural science and social psychology, we analyze the behavioral factors that influence the ERP (enterprise resource planning) implementation. We integrate the TAM (technology acceptance model) with the TOE (technical-organization-environment) model, and add the third-party consulting institutions factor as a new external variable. We present an improved planning model for discrete manufacturing enterprise informationization. According to the features of discrete manufacturing enterprise informationization, we present an informationization process model. Based on the two models, we subdivide the indicator variable to advance a discrete manufacturing enterprise information adoption evaluation indicator system. Finally, according to a case analysis, we draw some conclusions and recommendations in order to improve the effect and efficiency of enterprise informationization.",2010,0, 397,An empirical study of factors that affect user performance when using UML interaction diagrams,"During the requirements process it is of key importance that all representations used are clearly understood by those who must use them. Therefore it is essential to ensure that those representations are presented as effectively as possible. The research reported in this paper relates to an empirical study carried out to investigate factors which might affect user performance when using UML interaction diagrams. Several variables were investigated in the study; these were identified from the related literature and earlier research by us as being important in understanding interaction diagrams. The independent variables investigated in the study were diagram type, user pre-test and post-test preference, individual's cognitive style, text direction, scenario type and question type.
Time taken to formulate the correct answer was the dependent variable used as the measure of performance. Statistical analysis of the data showed significant differences for several variables, including diagram type, preference, and scenario type (p<0.05).",2005,0, 398,An Empirical Study of Open-Source and Closed-Source Software Products,"We describe an empirical study of open-source and closed-source software projects. The motivation for this research is to quantitatively investigate common perceptions about open-source projects, and to validate these perceptions through an empirical study. We investigate the hypothesis that open-source software grows more quickly, but do not find evidence to support this. The project growth is similar for all the projects in the analysis, indicating that other factors may limit growth. The hypothesis that creativity is more prevalent in open-source software is also examined, and evidence to support this hypothesis is found using the metric of functions added over time. The concept of open-source projects succeeding because of their simplicity is not supported by the analysis, nor is the hypothesis of open-source projects being more modular. However, the belief that defects are found and fixed more rapidly in open-source projects is supported by an analysis of the functions modified. We find support for two of the five common beliefs and conclude that, when implementing or switching to the open-source development model, practitioners should ensure that an appropriate metrics collection strategy is in place to verify the perceived benefits.",2004,0, 399,AN EMPIRICAL STUDY OF RUN-TIME COUPLING AND COHESION SOFTWARE METRICS,"Developing software is expensive; thus keeping it useful to its users is important. On the other hand, due to constant maintenance performed to meet the changing needs of users, software undergoes degradation of its internal structure, particularly in coupling and cohesion. Monitoring the development of software by using some of its versions can aid software engineers with relevant information to guide their maintenance activities. In this paper, we present a view of the evolution of versions of software. For this, a study was conducted on 10 versions of FindBugs using coupling and cohesion metrics calculated from the VizzMaintenance and Metric plug-ins. In this study, we applied Pearson linear correlation analysis among the measurements. The results showed that there is some correlation between these metrics, because coupling metrics directly influenced the cohesion metrics, with undesirable characteristics such as high coupling and low cohesion compromising software quality.",2015,0, 400,"An Empirical Study of the Complex Relationships between Requirements Engineering Processes and Other Processes that Lead to Payoffs in Productivity, Quality, and Risk Management","Requirements engineering is an important component of effective software engineering, yet more research is needed to demonstrate the benefits to development organizations. While the existing literature suggests that effective requirements engineering can lead to improved productivity, quality, and risk management, there is little evidence to support this. We present empirical evidence showing how requirements engineering practice relates to these claims. This evidence was collected over the course of a 30-month case study of a large software development project undergoing requirements process improvement.
Our findings add to the scarce evidence on RE payoffs and, more importantly, represent an in-depth explanation of the role of requirements engineering processes in contributing to these benefits. In particular, the results of our case study show that an effective requirements process at the beginning of the project had positive outcomes throughout the project lifecycle, improving the efficacy of other project processes, ultimately leading to improvements in project negotiation, project planning, and managing feature creep, testing, defects, rework, and product quality. Finally, we consider the role collaboration had in producing the effects we observed and the implications of this work to both research and practice",2006,0, 401,"An empirical study of the effect of complexity, platform, and program type on software development effort of business applications","

Several popular cost estimation models like COCOMO and function points use adjustment variables, such as software complexity and platform, to modify original estimates and arrive at final estimates. Using data on 666 programs from 15 software projects, this study empirically tests a research model that studies the influence of three adjustment variables--software complexity, computer platform, and program type (batch or online programs) on software effort. The results confirm that all the three adjustment variables have a significant effect on effort. Further, multiple comparison of means also points to two other results for the data examined. Batch programs involve significantly higher software effort than online programs. Programs rated as complex have significantly higher effort than programs rated as average.

",2006,0, 402,"An empirical study of the relationships between IT infrastructure flexibility, mass customization, and business performance","Information technology (IT) infrastructure deserves serious attention from both the practitioner and academic communities, especially concerning the factors for IT infrastructure flexibility. The issue of flexibility is viewed as a critical aspect of IT infrastructure, because organizations are faced with an ever-increasing rate of change in their business environments. One effort most business sectors have made to prepare for this change is the trend toward mass customization. Recently, many organizations have embraced mass customization in an attempt to provide unique value to their customers in a cost-efficient manner.The purpose of this study is to empirically investigate a sequential relationship between IT infrastructure flexibility, mass customization, and business performance. The process involves an investigation of the critical factors for IT infrastructure flexibility, along with the firm's mass customization and business performance indicators. The findings of this study provide evidence that integration and modularity of an organization's IT infrastructure facilitate the organization's effort to accommodate mass customization. Additionally, the flexibility of the IT personnel, the human component of IT infrastructure, and mass customization directly affect the organization's business performance.",2005,0, 403,An Empirical Study on Business-to- Government Data Exchange Strategies to Reduce the Administrative Costs for Businesses,"AbstractIn recently developed policies the electronic exchange of data with governmental organisations is seen as a means to help reduce the administrative burden for businesses. Even laws have become active to enforce electronic data filing. However, we do not know whether these eGovemment applications do help reduce the administrative burden, so we do not know whether this new legislation is effective either, Although many business-to-government systems are currently being implemented, the adoption of these data interchange systems in a governmental context has not yet been studied extensively. In the study reported in this paper we investigate data exchange related adoption strategies in order to be able to address (in)effective strategies for the reduction of the administrative burden. We present an analysis of adoption factors that influence adoption decisions of SME companies in this context. Based on a representative survey we found some factors that seem to be relevant for the (non)adoption of business-to-government data exchange systems. We found that especially small companies tend to outsource eGovernment related data exchange processes. Therefore we conclude that it is very unlikely that the governments aims to reduce administrative burden are met using current implementation strategies. We suggest an adapted strategy.",2006,0, 404,An Environment of Conducting Families of Software Engineering Experiments,"An important goal of most empirical software engineering research is the transfer of research results to industrial applications. Two important obstacles for this transfer are the lack of control of variables of case studies, i.e., the lack of explanatory power, and the lack of realism of controlled experiments. While it may be difficult to increase the explanatory power of case studies, there is a large potential for increasing the realism of controlled software engineering experiments. 
To convince industry about the validity and applicability of the experimental results, the tasks, subjects and the environments of the experiments should be as realistic as practically possible. Such experiments are, however, more expensive than experiments involving students, small tasks and pen-and-paper environments. Consequently, a change towards more realistic experiments requires a change in the amount of resources spent on software engineering experiments. This paper argues that software engineering researchers should apply for resources enabling expensive and realistic software engineering experiments similar to how other researchers apply for resources for expensive software and hardware that are necessary for their research. The paper describes experiences from recent experiments that varied in size from involving one software professional for 5 days to 130 software professionals, from 9 consultancy companies, for one day each.",2002,0, 405,An Environment to Support Large Scale Experimentation in Software Engineering,"Experimental studies have been used as a mechanism to acquire knowledge through a scientific approach based on measurement of phenomena in different areas. However it is hard to run such studies when they require models (simulation), produce large amounts of information, and explore science at scale. In this case, a computerized infrastructure is necessary and constitutes a complex system to be built. In this paper we discuss an experimentation environment that is being built to support large scale experimentation and scientific knowledge management in software engineering.",2008,0, 406,An evaluation of a model-based testing method for information systems,"Potential faults in safety critical systems may lead to system failures and thus cause serious human injuries. How to ensure the correctness of the system during system development is very important. System function testing has been regarded as an effective approach which is normally applied in the final stage of system development to ensure the consistency between system functions and specifications. In this paper, an integrated model-based test case generation method combining Hybrid Communicating Sequential Processes (HCSP) and Timed Automata is introduced, in which HCSP is used to formally model the scenarios of the system, while Timed Automata is used to verify the system properties in HCSP models. To bridge the gap between the HCSP model and the Timed Automata model, transition rules are defined according to the characteristics of systems. Based on the Network Timed Automaton model, a tool chain (UPPAAL and CoVer) is presented to automatically generate test cases with coverage criteria in a simple and flexible manner. The tool chain is also applied to analyze the typical Radio Block Center (RBC) handover scenario in Chinese Train Control System Level 3 (CTCS-3). Logical and timing properties of the case study are verified and different test case suites of Vital Computer (VC) components in the RBC handover model are automatically generated with different coverage criteria.",2016,0, 407,An Evaluation of Two Bug Pattern Tools for Java,"Automated static analysis is a promising technique to detect defects in software. However, although considerable effort has been spent on developing sophisticated detection possibilities, the effectiveness and efficiency have not been treated in equal detail. This paper presents the results of two industrial case studies in which two tools based on bug patterns for Java are applied and evaluated.
First, the economic implications of the tools are analysed. It is estimated that only 3-4 potential field defects need to be detected for the tools to be cost-efficient. Second, the capabilities of detecting field defects are investigated. No field defects have been found that could have been detected by the tools. Third, the identification of fault-prone classes based on the results of such tools is investigated and found to be possible. Finally, methodological consequences are derived from the results and experiences in order to improve the use of bug pattern tools in practice.",2008,0, 408,An Evolution Model for Software Modularity Assessment,"The value of software design modularity largely lies in the ability to accommodate potential changes. Each modularization technique, such as aspect-oriented programming and object-oriented design patterns, provides one way to let some part of a system change independently of all other parts. A modularization technique benefits a design if the potential changes to the design can be well encapsulated by the technique. In general, questions in software evolution, such as which modularization technique is better and whether it is worthwhile to refactor, should be evaluated against potential changes. In this paper, we present a decision-tree-based framework to generally assess design modularization in terms of its changeability. In this framework, we formalize design evolution questions as decision problems, model software designs and potential changes using augmented constraint networks (ACNs), and represent design modular structure before and after envisioned changes using design structure matrices (DSMs) derived from ACNs. We formalize change impacts using an evolution vector to precisely capture well-known informal design principles. As a preliminary evaluation, we use this model to compare the aspect-oriented and object-oriented observer pattern in terms of their ability to accommodate envisioned changes. The results confirm previous published results, but in formal and quantitative ways.",2007,0, 409,An experiment on the role of graphical elements in architecture visualization,"The evolution and maintenance of large-scale software systems requires first an understanding of its architecture before delving into lower level details. Tools facilitating the architecture comprehension tasks by visualization provide different sets of graphical elements. We conducted a controlled experiment that exemplifies the critical role of such graphical elements when aiming at understanding the architecture. The results show that a different configuration of graphical elements influences program comprehension tasks significantly. In particular, a gain of effectiveness by 63% in basic architectural analysis tasks was achieved simply by choosing a different set of graphical elements. Based on the results we claim that significant effort should be spent on the configuration of architecture visualization tools",2006,0, 410,An Experimental Evaluation of Information Visualization Techniques and Decision Style,"

This study aims to investigate the extent to which information visualization (IV) techniques and decision style affect decision performance and user preferences in a decision support environment. The study adopted an experimental method. Findings from this study provide theoretical, empirical and practical contributions. The results showed that there were significant differences in decision performance and user preference across IV techniques and decision styles. The findings have important implications for decision support system (DSS) designers, and raise important research issues for future work.

",2007,0, 411,An experimental investigation of formality in UML-based development.,"The object constraint language (OCL) was introduced as part of the Unified Modeling Language (UML). Its main purpose is to make UML models more precise and unambiguous by providing a constraint language describing constraints that the UML diagrams alone do not convey, including class invariants, operation contracts, and statechart guard conditions. There is an ongoing debate regarding the usefulness of using OCL in UML-based development, questioning whether the additional effort and formality is worth the benefit. It is argued that natural language may be sufficient, and using OCL may not bring any tangible benefits. This debate is in fact similar to the discussion about the effectiveness of formal methods in software engineering, but in a much more specific context. This paper presents the results of two controlled experiments that investigate the impact of using OCL on three software engineering activities using UML analysis models: detection of model defects through inspections, comprehension of the system logic and functionality, and impact analysis of changes. The results show that, once past an initial learning curve, significant benefits can be obtained by using OCL in combination with UML analysis diagrams to form a precise UML analysis model. But, this result is however conditioned on providing substantial, thorough training to the experiment participants.",2005,0, 412,An Exploratory Study of How Older Women Use Mobile Phones,"Ageing populations are turning to technology in greater numbers than ever. New technology is being designed to help older people live independently for longer. Despite the usefulness of mobile phones especially older people, the current problems with its complex features and interface designs have intimidated some older people users from using the device. The authors wished to explore exposure to the real-world technology needs of older people by evaluating the mobile phones use among them. Although numerous studies have been reported on the various benefits of interviewing in Human-Computer Interaction (HCI) research, little is known about preparatory interviewing in engaging with ageing population. The purpose of this study was to explore the interviewing technique in eliciting requirements from older people. A qualitative approach and semi-structured interviews were used with a sample size of seven Malaysian elders. This paper reports interview experience with the older people. The results suggest that the interviewing guidelines are recommended to be applied in the future research on HCI and older people.",2013,0, 413,An Improved Platform for Medical E-Learning,"As one of the subject in modern educational technology, E-Learning has not been widely applied yet. The accumulation of knowledge is a tree growing process. From the person's cognitive processes, we propose a methodology for E-Learning design based on tree structure. Knowledge point is stored in a database as a record, and then is bound to a TreeView control, in order to show the form of a tree. Finally, a very visual and friendly user interface E-Learning platform with clear context between knowledge points was constructed by using the latest Microsoft development technologies. 
The trial platform has been praised by teachers and students.",2010,0, 414,An Infrastructure for Indexing and Organizing Best Practices,"Industry best practices are widely held but not necessarily empirically verified software engineering beliefs. Best practices can be documented in distributed web-based public repositories as pattern catalogues or practice libraries. There is a need to systematically index and organize these practices to enable their better practical use and scientific evaluation. In this paper, we propose a semi-automatic approach to index and organise best practices. A central repository acts as an information overlay on top of other pre-existing resources to facilitate organization, navigation, annotation and meta-analysis while maintaining synchronization with those resources. An initial population of the central repository is automated using Yahoo! contextual search services. The collected data is organized using semantic web technologies so that the data can be more easily shared and used for innovative analyses. A prototype has demonstrated the capability of the approach.",2007,0, 415,An Infrastructure for Mining Medical Multimedia Data,Provides notice of upcoming special issues of interest to practitioners and researchers.,2006,0, 416,An initial research agenda for SCM information systems,"Shortcomings of the traditional chain-like information flow control pattern in supply chain systems are analyzed. A net-like information flow control pattern is put forward and its architecture is designed. In this architecture, a Combined Scheduling Center (CSC) serves as the information center, and the node enterprises in the supply chain can directly exchange data with the CSC over the Internet/Intranet. Furthermore, the net-like information flow architecture is successfully applied to the business sub-system of an enterprise logistics system, which integrates business management, decision management and supply management, so that the cooperation and collaboration of supply chain members are supported in a synchronized way. Finally, an integrated architecture for an enterprise logistics system based on Supply Chain Management (SCM) is established. It enhances the integration of enterprises and suppliers in the course of business.",2010,0, 417,An initial study in using audio-visual stimuli in e-commerce,"

This paper investigates the role of multi-modal metaphors for e-commerce applications. More specifically, the use of textual representations (including tabular representations and tables) and graphical representations (including dynamically formed graphs), in the presence and absence of speech, is investigated. The paper describes the initial study performed prior to the development of an experimental platform that was used to investigate these issues. In this study, 40 users took part in order to obtain an overall understanding of the approach taken and to determine the viability of the approach prior to further experiments. The results indicated that the approach of dynamically created graphs in the presence and absence of speech was usable and enabled the development of the experimental platform for further experiments.

",2007,0, 418,"An Initial Study to Develop an Empirical Test for Software Engineering Expertise","This research study analyses impact of E-Learning Tool named R-U-LEXIC for dyslexic children with special reference to selective schools in Tamil Nadu, South India. The objective of the work is to create a combined and interactive environment, where children may screened on a mass scale for dyslexia by means of an online tool named R-U-LEXIC. The research focuses on developing software platforms, integrating man-machine interfaces in the screening and remediation process and elaborating their technical specifications in view of their later integration within a local or national network. This study was carried out by allowing 180 students in all the four Grades who are in the age group of 5-12, to use the tool developed for visual and auditory perception. The parents/teacher who assists the child should also answer a set of questions which comprises of Yes/No/Not sure questions that is used to assess their child's behaviour. Based on the score generated by the student, parent and teacher the tool gives the intensity level in terms of percentage which describes whether the child is Dyslexic or not. The research analysis was performed using SPSS Statistic 17.0. The statistical techniques applied for drawing statistical inferences and conclusions about the study included descriptive statistics, mean and standard deviation, one sample t test, one-way ANOVA and reliability test. The results of this study clearly revealed that there is a positive relationship between the data collected from Parents and Teachers and the students were excited and happy to take the test and could understand and use the E-learning tool easily.",2016,0, 419,An Integrated Framework for Meta Modeling,"In this paper an interdigital n-type CoSi2-Si Schottky diode is fabricated in SMIC 0.18 mum RF CMOS process. A novel and accurate Schottky diode model has been developed base on the DC and RF measured data. In this novel model the losses due to parasitic capacitance dielectric and metal plate are considered. It is shown that the suggested novel model fits the measurement very well for different voltage biases over the wide frequency range of 0.05 GHz to 8.5 GHz. A type of four stages charge pump is designed using this new Schottky diode model, the design charge pump can get efficiency of 40%.",2007,0, 420,An integrated high-resolution hydrometeorological modeling testbed using LIS and WRF,"Interactions between the atmosphere and the land surface have considerable influences on weather and climate. Coupled land-atmosphere systems that can realistically represent these interactions are thus critical for improving our understanding of the atmosphere-biosphere exchanges of energy, water, and their associated feedbacks. NASA's Land Information System (LIS) is a high-resolution land data assimilation system that integrates advanced land surface models, high-resolution satellite and observational data, data assimilation techniques, and high performance computing tools. LIS has been coupled to the Weather Research and Forecasting (WRF) model, enabling a high-resolution land-atmosphere modeling system. Synthetic simulations using the coupled LIS-WRF system demonstrates the interoperable use of land surface models, high-resolution land surface data and other land surface modeling tools through LIS. 
Real case study simulations for a June 2002 International H2O Project (IHOP) day is conducted by executing LIS first in an uncoupled manner to generate high-resolution soil moisture and soil temperature initial conditions. During the case study period, the land surface (LIS) and the atmospheric (WRF) models are executed in a coupled manner using the LIS-WRF system. The results from the simulations illustrate the impact of accurate, high-resolution land surface conditions on improving the prediction of clouds and precipitation. Thus, the coupled LIS-WRF system provides a testbed to enable studies in improving our understanding and predictability of regional and global water and energy cycles.",2008,0, 421,An Integrated Methodology of Manufacturing Business Improvement Strategies,"The paper describes the results of a recent field study of CIM adoption strategies in US manufacturing firms. The purpose of the study was to identify the extent to which CIM technologies are in use in US firms, the impact of a facility's process characteristics on the CIM development process, and the adoption policy being followed implicitly or explicitly. The survey focused on the following aspects:(a) manufacturing process characteristics, (b) the CIM development process, (c) the CIM architecture, and (d) perceived value and benefits. Our results indicate that CIM implementations follow a definite temporal pattern with respect to the adoption of certain information technologies. In addition, the initiative for CIM programs is usually generated from the bottom-up. This gradual bottom-up approach appears to restrain, rather than enable, plant-wide integration for critical business processes such as order fulfilment or product development. While most CIM users find that their CIM projects successfully meet their initial operational goals, the technology seems to be poorly integrated still. More crucially, it appears that CIM is not being adopted as a strategic information system for competitive missions",1995,0, 422,An integrated product development process for mobile software,"The rising penetration of smartphones enables new mobile services and business models. A huge number of different operating systems, available functionalities, open questions regarding revenue sources and streams, legal issues, as well as a lack of knowledge in designing mobile user experiences call for a holistic product development process in this domain. This paper describes a five-step product development process for mobile software and services encompassing organizational, business and technical issues and reports on the practical experiences made in a real-life project.",2007,0, 423,An Integrative Network Approach to Map the Transcriptome to the Phenome,"

Although many studies have been successful in the discovery of cooperating groups of genes, mapping these groups to phenotypes has proved a much more challenging task. In this paper, we present the first genome-wide mapping of gene coexpression modules onto the phenome. We annotated coexpression networks from 136 microarray datasets with phenotypes from the Unified Medical Language System (UMLS). We then designed an efficient graph-based simulated annealing approach to identify coexpression modules frequently and specifically occurring in datasets related to individual phenotypes. By requiring phenotype-specific recurrence, we ensure the robustness of our findings. We discovered 9,183 modules specific to 47 phenotypes, and developed validation tests combining Gene Ontology, GeneRIF and UMLS. Our method is generally applicable to any kind of abundant network data with defined phenotype association, and thus paves the way for genome-wide, gene network-phenotype maps.

",2008,0, 424,An Intelligent Expert Systems' Approach to Layout Decision Analysis and Design under Uncertainty,"This chapter describes an intelligent soft computing based approach to layout decision analysis and design. The solution methodology involves the use of heuristics, metaheuristics, human intuition as well as soft computing tools like artificial neural networks, fuzzy logic, and expert systems. The research framework and prototype contribute to the field of intelligent decision making in layout analysis and design by enabling explicit representation of experts' knowledge, formal modeling of fuzzy user preferences, and swift generation/manipulation of superior layout alternatives to facilitate the cognitive, ergonomic, and economic efficiency of layout designers.",2008,0, 425,An Interoperability Classification Framework for Method Chunk Repositories,"The competitiveness and efficiency of an enterprise is dependent on its ability to interact with other enterprises and organisations. In this context interoperability is defined as the ability of business processes as well as enterprise software and applications to interact. Interoperability remains a problem and there are numerous issues to be resolved in different situations. We propose method engineering as an approach to organise interoperability knowledge in a method chunk repository. In order to organise the knowledge repository we need an interoperability classification framework associated to it. In this paper we propose a generic architecture for a method chunk repository, elaborate on a classification framework and associate it to some existing bodies of knowledge. We also show how the proposed framework can be applied in a working example.",2007,0, 426,"An Investigation into E-Commerce Adoption Profile for Small and Medium-Sized Enterprises in Bury, Greater Manchester, UK","

E-commerce is the commercial transaction between and among organizations and individuals enabled by digital technologies. Most recently, it mainly refers to Internet-based electronic commerce. E-commerce provides many benefits for organisations to conduct business on the Internet. Since 1994, millions of companies have stepped into the digital world. However, due to the lack of knowledge and expertise in new technologies and many other reasons, the e-adoption rate for Small and Medium-sized Enterprises (SMEs) still lags behind. This paper investigates the e-adoption status of SMEs in the Bury area of Greater Manchester, UK. To conduct the research, a survey method is employed and questionnaires are distributed to local SMEs. The collected data are analysed and the results are compared with national statistics. Results show that the adoption rate of information and communication technology (ICT) and e-commerce in Bury is a step ahead of the UK in general.

",2007,0, 427,An Investigation of Dispute Resolution Mechanisms on Power and Trust: A Domain Study of Online Trust in e-Auctions,"

Auctions have experienced one of the most successful transitions from a 'bricks and mortar' presence into an online environment. However, online auctions have one of the highest percentages of disputes and online fraud. This research investigates people's perceptions of dispute resolution prior to an online transaction. People's perceptions of the 'power to resolve' a dispute are investigated as a factor that may impact people's perceptions of online trust in e-business.

A research model is proposed that is founded in online trust theory. The research design is a quantitative study. Data collected is analysed using analysis of variance testing and structural equation modeling (SEM). This research provides a better understanding of the dispute resolution phenomenon, and potentially opens up a new direction of research into dispute avoidance. A new 'power to resolve' construct is developed to extend theory.

",2005,0, 428,An Iterative Approach for Web Catalog Integration with Support Vector Machines,"

Web catalog integration is an emerging problem in current digital content management. Past studies show that further improvement in integration accuracy can be achieved with advanced classifiers. Because the Support Vector Machine (SVM) has shown its superiority in recent research, we propose an iterative SVM-based approach (SVM-IA) to improve integration performance. We have conducted experiments on real-world catalog integration to evaluate the performance of SVM-IA and cross-training SVM. The results show that SVM-IA achieves prominent accuracy, and its performance is more stable.

",2005,0, 429,An Iterative Empirical Strategy for the Systematic Selection of a Combination of Verification and Validation Technologies,"The rapid development of verification and validation (V&V) has resulted in a multitude of V&V technologies, making V&V selection difficult for practitioners. Since most V&V technologies will be combined it is important to be aware of how they should be combined and the cost-effectiveness of these combinations. This paper presents a strategy for selecting and evaluating particular V&V combinations that focuses on maximising completeness and minimising effort. The strategy includes a systematic approach for applying empirical information regarding the costs and capabilities of V&V technologies.",2007,0, 430,An operations perspective on strategic alliance success factors - An exploratory study of alliance managers in the software industry.,"Purpose ? To explore alliance managers' perceptions of the most significant determinants of strategic alliance success in the software sector. Design/methodology/approach ? The study is based on 30 key informant interviews and a survey of 143 alliance managers. Findings ? While both structural and process factors are important, the most significant factors affecting alliance success are the adaptability and openness of the alliance partners, human resource practices and partners' learning capability during implementation. Alliance partners should pay more attention to operational implementation issues as an alliance evolves, in order to achieve successful cooperative relationships. Research limitations/implications ? This research has responded to the call for more empirical study of the underlying causes of successful alliances. It contributes to the ongoing debate about which factors have most impact on strategic alliance outcomes, and complements prior research on several dimensions. First, using selected interview quotations to illuminate the quantitative analysis, it contributes to a deeper understanding of the alliance process, and reduced the ambiguity about which factors are most influential. In particular, the study provides support for those authors who have argued for the relative importance of the alliance implementation process. Second, support has also been found for the prominence of learning capability and the inter?partner learning process as a major component of effective alliance implementation. Third, the results are based on the views of practicing alliance managers, which addresses a recognized gap in the literature. Practical implications ? The results send a signal to senior managers contemplating strategic alliances that they should not underestimate the importance of alliance process factors and the role that alliance managers play in achieving successful alliance relationships. This is particularly important, given the high levels of alliance failure reported in the extant literature. Originality/value ? While past research on strategic alliances has placed more emphasis on the importance of alliance formation than on implementation, there is an ongoing debate about whether structural, formation factors have more influence on alliance success than implementation or process factors. 
There has been only limited empirical work examining this interplay between structure and process, particularly from an operations perspective, and very few studies have examined strategic alliances in the software industry.",2005,0, 431,"An Optimization Model for Sea Port Equipment Configuration Case study: Karlshamn-Klaipeda Short Sea Shipping Link","Today, freight volumes on roads have risen to a level at which there is a need for alternative transport modes. Short Sea Shipping (SSS) is one alternative with the potential to help reduce the high traffic on roads. Most SSS systems use vessels on which cargo is rolled on and off using a ramp, with very small capacities, usually less than 500 TEU; but with increasing cargo traffic, it is not clear if such solutions will be efficient. For ports involved in SSS to meet this new wave of change, it is important to make appropriate investments and to have suitable analysis tools. The type of vessel suitable for an SSS operation (such as roll-on roll-off (RoRo), lift-on lift-off (LoLo), etc.) has been addressed in this thesis based on its compatibility and cost-effectiveness with the terminal equipment. The purpose of this study is to develop an optimization model that can be incorporated into a Computer Decision Support System (DSS) for selecting equipment, including ships, at a strategic level for investments in handling unitised cargo at port terminals in the context of Short Sea Shipping (SSS). The main contribution of the thesis is the application of computer science techniques in the domain of strategic decision making related to the configuration of complex systems (e.g. interrelationships between ships and equipment) with choices of handling equipment. From modelling the selection of port terminal equipment for SSS, we realised that while integer linear programming is a promising approach for studying such systems, it remains a challenge to handle complex issues in depth, especially in relation to the quay crane, due to interdependencies between the time, cost and capacity of equipment. Model results indicate that a LoLo vessel with a capacity between 500 and 1000 TEU, capable of completing an SSS voyage such that handling is done within 48 hours, will be less costly than a RoRo that does it with multiple voyages, or one voyage each for multiple RoRo vessels, for TEU volumes greater than 1000. But RoRo vessels remain useful for trailers that cannot be transported by LoLo vessels.

Many surveys and studies to date have pointed out that there is a considerable gap between expressed interest from potential users and the actual use of e-government information and services. However, the factors influencing that gap have not yet been fully explained and understood. This paper therefore investigates the real driving forces concerning the 'demand' side of e-government and the take-up of public e-services. The paper summarises the findings of similar studies carried out in other countries and compares them with the results of the extensive study carried out in Slovenia during 2004 and 2006, with a focus on user expectations and satisfaction.

",2007,0, 433,Analysis & recommendations for the management of cots: computer off the shelf-software projects,A Laplace technique is used to analyze the most general class E amplifier: that with finite DC-feed inductance and finite output network Q. The analysis is implemented with a computer program using PC-MATLAB software. A listing of the program is provided,1989,0, 434,Analysis and Comparison of Reordering for Two Factorization Methods (LU and WZ) for Sparse Matrices,"

The authors of the article make analysis and comparison of reordering for two factorizations of the sparse matrices --- the traditional factorization into the matrices L and U as well as the factorization into matrices W and Z. The article compares these two factorizations regarding: the produced quantity of non-zero elements alias their susceptibility to a fill-in; the algorithms reorganizing matrix (for LU it will be the algorithm AMD but for WZ it will be a modification of the Markowitz algorithm); as well as the time of the algorithms. The paper also describes the results of a numerical experiment carried for different sparse matrices from Davis Collection.

",2008,0, 435,Analysis of a deployed software,"This paper presents a novel approach to unit testing that lets users of deployed software assist in performing mutation testing of the software. Our technique, MUGAMMA, provisions a software system so that when it executes in the field, it will determine whether users' executions would have killed mutants (without actually executing the mutants), and if so, captures the state information about those executions. In the absence of bug reports, knowledge of executions that would have killed mutants provides additional confidence in the system over that gained by the testing performed before deployment. Captured information about the state before and after execution of units (e.g., methods) can be used to construct test cases for use in unit testing when changes are made to the software. The paper also describes our prototype MuGamma implementation along with a case study that demonstrates its potential efficacy.",2006,0, 436,Analysis of an Infant Cry Recognizer for the Early Identification of Pathologies,"

This work presents the development and analysis of an automatic recognizer of infant cry, with the objective of classifying three classes, normal, hypo acoustics and asphyxia. We use acoustic feature extraction techniques like MFCC, for the acoustic processing of the cry's sound wave, and a Feed Forward Input Delay neural network with training based on Gradient Descent with Adaptive Back-Propagation for classification. We also use principal component analysis (PCA) in order to reduce vector's size and to improve training time. The complete infant cry database is represented by plain text vector files, which allows the files to be easily processed in any programming environment. The paper describes the design, implementation as well as experimentation processes, and the analysis of results of each type of experiment performed.

",2005,0, 437,Analysis of combinatorial user effect in international usability tests,"User effect in terms of influencing the validity and reliability of results derived from standard usability tests has been studied with different approaches during the last decade, but inconsistent findings were obtained. User effect is further complicated by other confounding variables. With the use of various computational models, we analyze the extent of user effect in a relatively complex arrangement of international usability tests in which four different European countries were involved. We explore five aspects of user effect, including optimality of sample size, evaluator effect, effect of heterogeneous subgroups, performance of task variants, and efficiency of problem discovery. Some implications for future research are drawn.",2004,0, 438,Analysis of jitter due to call-level fluctuations,"This paper studies a method to improve the measurement accuracy of in-orbit test (IOT) for the Ka-band communication satellite payload. Since ground telemetry and command station of this satellite operates at low elevation, Ka-band atmospheric attenuation and fluctuation are much large. Due to lack of effective way to separate the influence of atmospheric attenuation, IOT processing has to employ the conventional method for many years, that is to say, tests need to choose sunny days and atmospheric attenuation is supposed to be fixed in this situation. However, this assumption degrades significantly measurement performance of the conventional IOT processing. To overcome this shortcoming, the paper proposes an improved IOT method, in which IOT test data is corrected more accurately after introducing two-channel-microwave-radiometer into IOT procedure to measure atmospheric attenuation of uplink and downlink along signal propagation path. Theoretical analysis for the measurement accuracy of the conventional method and the improved method are investigated in the paper, and comparisons for actual test data of these two methods are also provided, then IOT experiment result proves that the proposed scheme outperforms the conventional scheme remarkably.",2014,0, 439,Analysis of Mobile Commerce Performance by Using the Task-Technology Fit,"AbstractThe rapid growth of investments in mobile commerce (M-commerce) to reach a large and growing body of customers, coupled with low communication costs, has made user acceptance an increasingly critical management issue. The study draws upon the task-technology fit (TTF) model as its theoretical basis and its empirical findings to pragmatically explain the key factors that affect the performance and user acceptance of M-commerce. A total of 110 usable responses were obtained. The findings indicate that the task, technology, and individual user characteristics positively affect task-technology fit and M-commerce usage. The task-technology fit and M-commerce usage are the dominant factors that affect M-commerce performance. The result points out the importance of the fit between technologies and users tasks in achieving individual performance impact from M-commerce. This paper identifies pertinent issues and problems that are critical in the development of M-commerce.",2005,0, 440,Analysis of the influence of communication between researchers on experiment replication,"The replication of experiments is a key undertaking in SE. Successful replications enable a discipline's body of knowledge to grow, as the results are added to those of earlier replications. 
However, replication is extremely difficult in SE, primarily because it is difficult to get a setting that is exactly the same as in the original experiment. Consequently, changes have to be made to the experiment to adapt it to the new site. To be able to replicate an experiment, information also has to be transmitted (usually orally and in writing) between the researchers who ran the experiment earlier and the ones who are going to replicate the experiment. This article examines the influence of the type of communication there is between experimenters on how successful a replication is. We have studied three replications of the same experiment in which different types of communication were used.",2006,0, 441,Analysis on Research and Application of China C2C Websites Evaluating Index System,"With the high-speed development of C2C Electronic Business, the issue of trust among the transactions is rapidly following up, and Credit Evaluation System needs improvement as soon as possible. This paper makes a further study on the existing credit evaluation methods and submits a new credit evaluation model. The new model optimizes the credit evaluation grade, cracks down speculation on the credit behavior by technical methods, and considers some factors such as the trading amount and credit evaluation of customers. The improved model can provide more efficient transaction information, and move forward a single step in reducing transaction risks. It will also help traders to make the correct decision.",2010,0, 442,"Analysis, Testing and Re-Structuring of Web Applications","The current situation in the development of Web applications is reminiscent of the early days of software systems, when quality was totally dependent on individual skills and lucky choices. In fact, Web applications are typically developed without following a formalized process model: requirements are not captured and design is not considered; developers quickly move to the implementation phase and deliver the application without testing it. Not differently from more traditional software system, however, the quality of Web applications is a complex, multidimensional attribute that involves several aspects, including correctness, reliability, maintainability, usability, accessibility, performance and conformance to standards. In this context, aim of this PhD thesis was to investigate, define and apply a variety of conceptual tools, analysis, testing and restructuring techniques able to support the quality of Web applications. The goal of analysis and testing is to assess the quality of Web applications during their development and evolution; restructuring aims at improving the quality by suitably changing their structure.",2004,0, 443,Analyzing Demand Drivers of Enterprise Informatization Based on System Dynamics Method,"With the popularization of networks, digitalization and automation, demand for enterprise informatization becomes more urgent. There are many factors leading to the demand for EIS. Some of the factors are from enterprise development, while others are from policy driven. In this paper, we present a relationship model by using system dynamics method to characterize the cause-result (C-R) of demand drivers of enterprise informatization. Based on the empirical studies, we reveal how the factors affect the demand for enterprise informatization, which form a cluster of causation to be used as the cause variables in our model. 
This procedure is to settle on the interim variables and result variables, which formed systematic dynamics C-R charts. Questionnaires and interviews from dozens of enterprises, Delphi Expert Decision method are made and analyzed, which verified the relationship between variables. The results presented in this paper provide good insights for the enterprise managers' optimal decisions.",2008,0, 444,Analyzing Differences in Operational Disease Definitions Using Ontological Modeling,"

In medicine, there are many diseases which cannot be precisely characterized but are considered as natural kinds. In the communication between health care professionals, this is generally not problematic. In biomedical research, however, crisp definitions are required to unambiguously distinguish patients with and without the disease. In practice, this results in different operational definitions being in use for a single disease. This paper presents an approach to compare different operational definitions of a single disease using ontological modeling. The approach is illustrated with a case-study in the area of severe sepsis.

",2007,0, 445,Analyzing Pathways Using SAT-Based Approaches,"Recent work in network security has focused on the fact that combinations of exploits are the typical means by which an attacker breaks into a network. Researchers have proposed a variety of graph-based analysis approach, and there is often a lack of logical formalism. This paper describes a new approach to represent and analyze network vulnerability. We propose logical exploitation graph, which directly illustrate logical dependencies among exploitation goals and network configure. Our logical exploitation graph generation tool builds upon LEG-NSA, a network security analyzer based on Prolog logical programming. We demonstrate how to reason all exploitation paths using bottom-up and top-down evaluation algorithms in the Prolog logic- programming engine. We show experimental evidence that our logical exploitation graph generation algorithm is very efficient.",2007,0, 446,"Analyzing Software Artifacts Through Singular Value Decomposition to Guide Development Decisions","During development, programming teams will produce numerous types of software development artifacts. A software development artifact is an intermediate or final product that is the result or by-product of software development. Hidden relationships and structures within a software system can be illuminated through singular value decomposition using software development artifacts, and these relationships can be leveraged to help guide software development questions regarding the interactions among software files.

The goal of this research is to build and investigate a framework called Software Development Artifact Analysis (SDAA) that uses software development artifacts to illuminate underlying relationships within a system. SDAA provides guidelines for selecting and gathering software development artifacts, discovering relationships, and then leveraging the insights gained through the analysis of those relationships. We use singular value decomposition (SVD) to generate the relationships from a matrix of software development artifact metrics.

In this research, we use SDAA to create three SVD-based software development analysis techniques: an impact analysis technique, a regression test prioritization technique, and a static analysis alert filtering technique. These techniques were applied and examined on an industrial project and five open source projects and compared with comparable current techniques. In general, our techniques were shown to be more resource efficient than comparable techniques in system resources and time, while still prioritizing developer effort effectively.",2007,0, 447,Anchoring and adjustment in software estimation,"Software effort estimation requires high accuracy, but accurate estimations are difficult to achieve. Increasingly, datamining is used to improve an organization's software process quality, e.g. the accuracy of effort estimations. There are a large number of different method combination exists for software effort estimation, selecting the most suitable combination becomes the subject of research in this paper. In this study data preprocessing is implemented and effort is calculated using COCOMO Model. Then data mining techniques OLS Regression and K Means Clustering are implemented on preprocessed data and results obtained are compared and data mining techniques when implemented on preprocessed data proves to be more accurate then OLS Regression Technique.",2014,0, 448,Antecedents and consequences of team potency in software development projects,"We examined the effect of project team culture on the evolution of project team potency in a sample of 110 project teams. Little is known about the factors responsible for the development of project team potency - the collective belief of a project team that it can be effective. Results revealed that project team culture is related to project team potency, and that project team potency is related to project success. Our findings provide project leaders with a tool on how to enhance project success by influencing project team potency, through a change in project team culture",2006,0, 449,API-Based and Information-Theoretic Metrics for Measuring the Quality of Software Modularization,"We present in this paper a new set of metrics that measure the quality of modularization of a non-object-oriented software system. We have proposed a set of design principles to capture the notion of modularity and defined metrics centered around these principles. These metrics characterize the software from a variety of perspectives: structural, architectural, and notions such as the similarity of purpose and commonality of goals. (By structural, we are referring to intermodule coupling-based notions, and by architectural, we mean the horizontal layering of modules in large software systems.) We employ the notion of API (application programming interface) as the basis for our structural metrics. The rest of the metrics we present are in support of those that are based on API. Some of the important support metrics include those that characterize each module on the basis of the similarity of purpose of the services offered by the module. These metrics are based on information-theoretic principles. We tested our metrics on some popular open-source systems and some large legacy-code business applications. To validate the metrics, we compared the results obtained on human-modularized versions of the software (as created by the developers of the software) with those obtained on randomized versions of the code. 
For randomized versions, the assignment of the individual functions to modules was randomized",2007,0, 450,Application of Time-Series Data Mining for Fault Diagnosis of Induction Motors,"Data driven approaches are gaining popularity in the field of condition monitoring due to their knowledge based fault identification capability for wide range of motor operation. Particularly the method, based on mining the data can encompass the wide behavioral operation of induction motor drive system in industries. Therefore, appropriate low cost instrumentation embedding an efficient algorithm becomes the industrial demand for fault diagnosis of induction motor drive. A hardware friendly algorithm for multi-class fault diagnosis by applying data mining technique is proposed in this paper. Most frequently associated faults like bearing fault, stator inter-turn fault, broken rotor bar fault are investigated for a drive fed induction motor. Discrete wavelet transform-Inverse discrete wavelet transform (DWT-IDWT) algorithm is used to obtain the unique characteristics from each synthesized sub-band and these filtered signals are exploited for feature extraction. A feature selection technique based on Genetic Algorithm (GA) is utilized to identify the potential features for reducing the dimensionality of the feature space. The use of smallest length filter of 2 coefficients (db1) for DWT-IDWT algorithm and 6 relevant features has made the proposed algorithm computationally efficient. The classification accuracy for the investigated multiple faults are found to be quite appreciable. Further, a comparative study is also done using different classifiers: k-NN, MLP and RBF.",2016,0, 451,Application of Video Error Resilience Techniques for Mobile Broadcast Multicast Services (MBMS),"With data throughput for mobile devices constantly increasing, services such as video broadcast and multicast are becoming feasible. The 3GPP (3rd Generation Partnership Project) committee is currently working on a standard for mobile broadcast and multicast services (MBMS). MBMS is expected to enable easier deployment of video and multimedia services on 3G networks. We present an overview of the standard including the proposed architecture and requirements focusing on radio aspects. We discuss the issue of video error resilience in such services that is critical to maintain consistent quality for terminals. The error resilience techniques currently used in video streaming services are not suitable for MBMS services. We analyze the error resilience techniques that are applicable within the context of MBMS standard and present our early research in this area.",2004,0, 452,Applied multi-agent systems principles to cooperating robots in tomorrow's agricultural environment,"Progress in software engineering over the past two or three decades has primarily been made through the development of increasingly powerful and natural abstractions, with which to model and develop complex systems. This progress has led to the notion of Multi-Agent Systems (MAS), which is seen as, yet, an increase of abstraction compared to the object-oriented approach. The MAS society has proposed many theories and concepts, but so far practical experiences with these theories and concepts are a limited resource. In this thesis, a number of MAS principles, including organisation, negotiation and agent capabilities, are applied to a real world context, in this case collaborating robots in an agricultural environment. 
A platform, called LEGOBot II platform is presented that utilises and combines the above mentioned MAS principles, which enable us to use MAS concepts on practical problems. The application of the MAS principles to the practical problems, showed that the investigated concepts were ready and mature, to undergo the transition from theoretical ideas to practical realisations. Furthermore, it was revealed that today's MAS frameworks advantageously could combine concepts from organisation, agent reasoning and capabilities.",2005,0, 453,Applying Dynamic Fuzzy Model in Combination with Support Vector Machine to Explore Stock Market Dynamism,"

In the study, a new dynamic fuzzy model is proposed in combination with support vector machine (SVM) to explore stock market dynamism. The fuzzy model integrates various factors with influential degree as the input variables, and the genetic algorithm (GA) adjusts the influential degree of each input variable dynamically. SVM then serves to predict stock market dynamism in the next phase. In the meanwhile, the multiperiod experiment method is designed to simulate the volatility of stock market. Then, we compare it with other methods. The model from the study does generate better results than others.

",2007,0, 454,Applying Real Options Thinking to Information Security in Networked Organizations,"An information security strategy of an organization participating in a networked business sets out the plans for designing a variety of actions that ensure confidentiality, availability, and integrity of company?s key information assets. The actions are concerned with authentication and nonrepudiation of authorized users of these assets. We assume that the primary objective of security efforts in a company is improving and sustaining resiliency, which means security contributes to the ability of an organization to withstand discontinuities and disruptive events, to get back to its normal operating state, and to adapt to ever changing risk environments. When companies collaborating in a value web view security as a business issue, risk assessment and cost-benefit analysis techniques are necessary and explicit part of their process of resource allocation and budgeting, no matter if security spendings are treated as capital investment or operating expenditures.",2006,0, 455,Applying Systematic Reviews to Diverse Study Types: An Experience Report,"Systematic reviews are one of the key building blocks of evidence-based software engineering. Current guidelines for such reviews are, for a large part, based on standard meta-analytic techniques. However, such quantitative techniques have only limited applicability to software engineering research. In this paper, therefore, we describe our experience with an approach to combine diverse study types in a systematic review of empirical research of agile software development.",2007,0, 456,Applying the Architecture Tradeoff Analysis Method (ATAM) to an industrial multi-agent system application,"This technical report contains the documents used in the course of applying the Architectural Tradeoff Analysis Method (ATAM) to a real world case of Automatic Guided Vehicle (AGV) control, during the EMC2 project. The EMC2 project is a cooperation between K.U.Leuven-DistriNet and Egemin N.V., a manufacturer of AGVs. One of the goals of the project is to propose a decentralized architecture for the control of AGVs, giving them more autonomy than in the current centralized architecture. The decentralized architecture is described thoroughly in this document. As a milestone in the EMC2 project, a one-day ATAM workshop with participation from all stakeholders and architects was held. The goal of the workshop was to discuss the proposed architecture based on the functional and quality attributes. The experiences obtained from the ATAM were then used by the architects to guide their work. The goal of this report is to summarize the application of the ATAM in the AGV case. The ATAM workshop was held in June 16th, 2005. This report contains the presentations given that day, the architectural document that was used (describing the requirements, the specific project chosen, and the software architecture), as well as our experiences with the ATAM, and work that was done in response to the ATAM workshop.",2005,0, 457,Approaching OWL and MDA Through Technological Spaces,"Web Ontology Language (OWL) and Model-Driven Architectures (MDA) are two technologies being developed in parallel, but by different communities. They have common points and issues and can be brought closer together. Many authors have so far stressed this problem and have proposed several solutions. The result of these efforts is the recent OMG's initiative for defining an ontology development platform. 
However, the problem of transformation between an ontology and MDA-based languages has been solved using rather partial and ad hoc solutions, most often by XSLT. In this paper we analyze OWL and MDA-compliant languages as separate technological spaces. In order to achieve a synergy between these technological spaces we define ontology languages in terms of MDA standards, recognize relations between OWL and MDA-based ontology languages, and propose mapping techniques. In order to illustrate the approach, we use an MDA-defined ontology architecture that includes ontology metamodel and ontology UML Profile. Based on this approach, we have implemented a transformation of the ontology UML Profile into OWL representation.",2004,0, 458,Approximate Life Cycle Assessment of Product Concepts Using a Hybrid Genetic Algorithm and Neural Network Approach,"

Environmental impact assessment of products has been a key area of research and development for sustainable product development. Many companies copy these trends and they consider environmental criteria into the product design process. Life Cycle Assessment (LCA) is used to support the decision-making for product design and the best alternative can be selected by its estimated environmental impacts and benefits. The need for analytical LCA has resulted in the development of approximate LCA. This paper presents an optimization strategy for approximate LCA using a hybrid approach which incorporate genetic algorithms (GAs) and neural networks (NNs). In this study, GAs are employed to select feature subsets to eliminate irrelevant factors and determine the number of hidden nodes and processing elements. In addition, GAs will optimize the connection weights between layers of NN simultaneously. Experimental results show that a hybrid GA and NN approach outperforms the conventional backpropagation neural network and verify the effectiveness of the proposed approach.

",2006,0, 459,Architecting-problems rooted in requirements,"Uscreates projects show that capturing requirements to inform the design of products, services, and systems must involve truly engaging people in conversation. This is a creative process that has to be designed to suit the people involved in the discussion. It's easy to speak to the willing, but often these aren't the people from which the information is needed. Uscreates is often commissioned to reach ""hard to reach"" groups. No group of people is hard to reach if the research and designing is done in the right environments and methods to speak to them. The high levels of technology are necessary to gather information and requirements for human centered projects. It's more about the design of the space and the people facilitating the conversation; spaces that encourage creative conversation are low in technology, but high in facilitation.",2010,0, 460,"Architectural knowledge and rationale: issues, trends, challenges","

The second workshop on Sharing and Reusing Architectural Knowledge (SHARK) and Architecture rationale and Design intent (ADI) was held jointly with ICSE 2007 in Minneapolis. This report presents the themes of the workshop, summarizes the results of the discussions held, and suggests some topics for future research.

",2007,0, 461,Architecture Recovery and Evaluation Aiming at Program Understanding and Reuse,"This work focuses on architectural recovery for program understanding and reuse reengineering of legacy object-oriented systems. The proposed method is based on dynamic analysis of the system for the selected test cases that cover relevant use cases. The theory of formal concept analysis is applied to decompose the logical hierarchy of subsystems, so that parts of the system which implement similar functionality are grouped together",2000,0, 462,Artificial Neural Network Based Life Cycle Assessment Model for Product Concepts Using Product Classification Method,"

Many companies are beginning to change the way they develop products due to increasing awareness of environmentally conscious product development. To cope with these trends, designers are being asked to incorporate environmental criteria into the design process. Recently Life Cycle Assessment (LCA) is used to support the decision-making for product design and the best alternative can be selected based on its estimated environmental impacts and benefits. Both the lack of detailed information and time for a full LCA for a various range of design concepts need the new approach for the environmental analysis. This paper presents an artificial neural network (ANN) based approximate LCA model of product concepts for product groups using a product classification method. A product classification method is developed to support the specialization of ANN based LCA model for different classes of products. Hierarchical clustering is used to guide a systematic identification of product groups based upon environmental categories using the C4.5 decision tree algorithm. Then, an artificial neural network approach is used to estimate an approximate LCA for classified products with product attributes and environmental impact drivers identified in this paper.

",2005,0, 463,AS-GC: An efficient generational garbage collector for Java application servers,"

A generational collection strategy utilizing a single nursery cannot efficiently manage objects in application servers due to variance in their lifespans. In this paper, we introduce an optimization technique designed for application servers that exploits an observation that remotable objects are commonly used as gateways for client requests. Objects instantiated as part of these requests (remote objects) often live longer than objects not created to serve these remote requests (local objects). Thus, our scheme creates remote and local objects in two separate nurseries; each is properly sized to match the lifetime characteristic of the residing objects. We extended the generational collector in HotSpot to support the proposed optimization and found that given the same heap size, the proposed scheme can improve the maximum throughput of an application server by 14% over the default collector. It also allows the application server to handle 10% higher workload prior to memory exhaustion.

",2007,0, 464,Aspect-oriented software engineering: An experience of application in a help desk system.,"Aspect-oriented Requirement Engineering provides approaches for eliciting and specifying the concerns and crosscutting concerns in the early stages of software development. In this paper, we present a case study in order to assess a model of the aspect-oriented multidimensional separation of concerns. With the case study, we review their features for elicitation, analysis and traceability of requirements, as well as some limitations of the model and how it can be adapted to the development process. Also, we discuss some issues that might endorse the development and evolution of this model.",2006,0, 465,Assessing Changeability by Investigating the Propagation of Change Types,"We propose an approach to build a changeability assessment model for source code entities. Based on this model, we will assess the changeability of evolving software systems. The changeability assessment is based on a taxonomy of more than 30 change types and a classification of these in terms of change significance levels for consecutive versions of software entities. We consider change type propagation on different levels of granularity ranging from method changes to interface and class changes. We claim that this kind of assessment is effective in pointing to potential causes of maintainability problems in evolving software systems.",2007,0, 466,Assessing Classification Accuracy in the Revision Stage of a CBR Spam Filtering System,"

In this paper we introduce a quality metric for characterizing the solutions generated by a successful CBR spam filtering system called SpamHunting. The proposal is denoted as relevant information amount rate and it is based on combining estimations about relevance and amount of information recovered during the retrieve stage of a CBR system. The results obtained from experimentation show how this measure can successfully be used as a suitable complement for the classifications computed by our SpamHunting system. In order to evaluate the performance of the quality estimation index, we have designed a formal benchmark procedure that can be used to evaluate any accuracy metric. Finally, following the designed test procedure, we show the behaviour of the proposed measure using two well-known publicly available corpus.

",2007,0, 467,Assessing Cognitive Load in Adaptive Hypermedia Systems: Physiological and Behavioral Methods,"AbstractIt could be advantageous in many situations for an adaptive hypermedia system to have information about the cognitive load that the user is currently experiencing. A literature review of the methods proposed to assess cognitive load reveals: (1) that pupil size seems to be one of the most promising indicators of cognitive load in applied contexts and (2) that its suitability for use as an on-line index in everyday situations has not yet been tested adequately. Therefore, the aim of the present study was to evaluate the usefulness of the pupil size index in such situations. To this end, pupil diameter and event-related brain potentials were measured while subjects read texts of different levels of difficulty. As had been hypothesized, more difficult texts led to lower reading speed, higher subjective load ratings, and a reduced P300 amplitude. But text difficulty, surprisingly, had no effect on pupil size. These results indicate that pupil size may not be suitable as an index of cognitive load for adaptive hypermedia systems. Instead, behavioral indicators such as reading speed may be more suitable.",2004,0, 468,Assessing Software Product Maintainability Based on Class-Level Structural Measures,"

A number of structural measures have been suggested to support the assessment and prediction of software quality attributes. The aim of our study is to investigate how class-level measures of structural properties can be used to assess the maintainability of a software product as a whole. We survey, structure and discuss current practices on this topic, and apply alternative strategies on four functionally equivalent systems that were constructed as part of a multi-case study. In the absence of historical data needed to build statistically based prediction models, we apply elements of judgment in the assessment. We show how triangulation of alternative strategies as well as sensitivity analysis may increase the confidence in assessments that contain elements of judgment. This paper contributes to more systematic practices in the application of structural measures. Further research is needed to evaluate and improve the accuracy and precision of judgment-based strategies.

",2006,0, 469,Assessing the Relationship between Software Assertions and Faults: An Empirical Investigation,"The use of assertions in software development is thought to help produce quality software. Unfortunately, there is scant empirical evidence in commercial software systems for this argument to date. This paper presents an empirical case study of two commercial software components at Microsoft Corporation. The developers of these components systematically employed assertions, which allowed us to investigate the relationship between software assertions and code quality. We also compare the efficacy of assertions against that of popular bug finding techniques like source code static analysis tools. We observe from our case study that with an increase in the assertion density in a file there is a statistically significant decrease in fault density. Further, the usage of software assertions in these components found a large percentage of the faults in the bug database",2006,0, 470,Assessing the Reliability of a Human Estimator,"Human-based estimation remains the predominant methodology of choice [1]. Understanding the human estimator is critical for improving the effort estimation process. Every human estimator draws upon their background in terms of domain knowledge, technical knowledge, experience, and education in formulating an estimate. This research uses estimator demographic information to construct over 4000 classifiers which distinguish between the best and worst types of estimators. Various attribute techniques are applied to determine most significant demographics. Best case models produce accuracy rates ranging from 74 to 80 percent. Some of the best case models are presented for gaining insight into how demographics impact effort estimation.",2007,0, 471,Assessing The Value Of Mediators In Collaborative Business Networks,"Along with the fast development of tourism, nation tour information sharing and development of tourism electronic commerce become the key question that the traveling profession urgently awaits to be solved. Nanyang tourism industry has carried on collaborative e-business platform construction based on value network. In the construction process, the value network and collaborative commerce theory have been introduced. The basic thinking of tourism industry constructing collaborative e- business pattern has been proposed, and the system of collaborative e-business is designed. The electronic commerce pattern of tourism industry is provided; the structure level of application service system is constructed based on this. Proposed pattern is based on the value network theory. The proposed pattern can realize the benefit maximization, and have certain guiding sense.",2009,0, 472,Assessing Uncertainty of Software Development Effort Estimates: The Learning from Outcome Feedback,"To enable properly sized software projects budgets and plans it is important to be able to assess the uncertainty of the estimates of most likely effort required to complete the projects. Previous studies show that software professionals tend to be too optimistic about the uncertainty of their effort estimates. This paper reports the results from a preliminary study on the role of outcome feedback in the learning process on effort estimation uncertainty assessment. 
Software developers were given repeated and immediate outcome feedback, i.e., feedback about the discrepancy between the estimated most likely effort and the actual effort, for the purpose of investigating how much, and how, they improve (learn). We found that a necessary condition for improvement of uncertainty assessments of effort estimates may be the use of explicitly formulated uncertainty assessment strategies. By contrast, intuition-based uncertainty assessment strategies may lead to no or little learning",2005,0, 473,Assessment Of Collaborative Networks Structural Stability,"This paper addresses a new approach for predicting the generator rotor angle using an adaptive artificial neural network (AANN) for power system stability. The aim of this work is to predict the stability status for each generator when the system is under a contingency. This is based on the initial condition of an operating point, which is represented by the generator rotor angle at a certain load level. An automatic data generation algorithm is developed for the training and testing process. The proposed method has been successfully tested on the IEEE 9-bus test system and the 87-bus system for Peninsular Malaysia.",2013,0, 474,Assessment of Image-Guided Interventions,"The author describes a software package, running under MSDOS, developed to assist lecturers in the assessment of software assignments. The package itself does not make value judgments upon the work, except when it can do so absolutely, but displays the students' work for assessment by qualified staff members. The algorithms for the package are presented, and the functionality of the components is described. The package can be used for the assessment of software at three stages in the development process: (1) algorithm logic and structure, using Warnier-Orr diagrams; (2) source code structure and syntax in Modula-2; and (3) runtime performance of executable code",1992,0, 475,"Assessment of tailor-made prevention of atherosclerosis with folic acid supplementation: randomized, double-blind, placebo-controlled trials in each MTHFR C677T genotype","This study aimed at assessing the effect of folic acid supplementation quantitatively in each MTHFR C677T genotype and considered the efficiency of tailor-made prevention of atherosclerosis. Study design was genotype-stratified, randomized, double-blind, placebo-controlled trials. The setting was a Japanese company in the chemical industry. Subjects were 203 healthy men after exclusion of those who took folic acid or drugs known to effect folic acid metabolism. Intervention was folic acid 1?mg/day p.o. for 3?months. The primary endpoint was plasma total homocysteine level (tHcy). In all three genotypes, there were significant tHcy decreases. The greatest decrease was in the TT homozygote [6.61 (3.47?9.76)??mol/l] compared with other genotypes [CC: 2.59 (1.81?3.36), CT: 2.64 (2.16?3.13)], and there was a significant trend between the mutated allele number and the decrease. The tHcy were significantly lowered in all the genotypes, but the amount of the decrease differed significantly in each genotype, which was observed at both 1 and 3?months. Using these time-series data, the largest benefit obtained by the TT homozygote was appraised as 2.4 times compared with the CC homozygote. 
Taking into account the high allele frequency of this SNP, this quantitative assessment should be useful when considering tailor-made prevention of atherosclerosis with folic acid.",2005,0, 476,Assimilation patterns in the use of electronic procurement innovations: a cluster analysis,"Electronic procurement innovations (EPI) have been adopted by many firms as a means of improving their procurement efficiency and effectiveness, but little research has been conducted to determine whether the assimilation of EPI really increases procurement productivity and which factors influence its assimilation. Drawing on data from 166 firms, we conducted an exploratory study to address these questions, using cluster analysis that revealed four different clusters or patterns of EPI assimilation: none, focused niche, asymmetric, and broad-based deployment. The level of EPI assimilation was closely related to procurement productivity. Greater levels of EPI assimilation were associated with higher levels of top management support and greater IT sophistication. Also, interesting patterns emerged between the various elements of EPI infrastructure capability, specifically flexibility and comprehensiveness of standards, EPI security, and the level of EPI assimilation.",2006,0, 477,Assisting Concept Assignment using Probabilistic Classification and Cognitive Mapping.,"The problem of concept assignment, that is, the problem of mapping human oriented concepts to elements in the code base of a system under study, and approaches which facilitate concept assignment can be considered as central to assisting software engineers in comprehending the unfamiliar systems they encounter. This paper presents a technique called cognitive assignment that attempts to capture what expert engineers know about the systems they work with and uses that information to generate classifiers that are used to implement a ranked search over a set of software elements.",2006,0, 478,Atome - Binary Translation for Accurate Simulation,"This paper presents a strategy to speed-up the simulation of processors having SIMD extensions using dynamic binary translation. The idea is simple: benefit from the SIMD instructions of the host processor that is running the simulation. The realization is unfortunately not easy, as the nature of all but the simplest SIMD instructions is very different from a manufacturer to an other. To solve this issue, we propose an approach based on a simple 3-addresses intermediate SIMD instruction set on which and from which mapping most existing instructions at translation time is easy. To still support complex instructions, we use a form of threaded code. We detail our generic solution and demonstrate its applicability and effectiveness using a parametrized synthetic benchmark making use of the ARMv7 NEON extensions executed on a Pentium with MMX/SSE extensions.",2011,0, 479,Autobiographic Knowledge for Believable Virtual Characters,"

It has been widely acknowledged in the areas of human memory and cognition that behaviour and emotion are essentially grounded by autobiographic knowledge. In this paper we propose an overall framework of human autobiographic memory for modelling believable virtual characters in narrative story-telling systems and role-playing computer games. We first lay out the background research of autobiographic memory in Psychology, Cognitive Science and Artificial Intelligence. Our autobiographic agent framework is then detailed with features supporting other cognitive processes which have been extensively modelled in the design of believable virtual characters (e.g. goal structure, emotion, attention, memory schema and reactive behaviour-based control at a lower level). Finally we list directions for future research at the end of the paper.

",2006,0, 480,Automata Network Simulator Applied to the Epidemiology of Urban Dengue Fever,"

The main goal of this paper is to describe a software simulating spatio-temporal Dengue epidemic spread based on the utilization of a generalized probabilistic cellular automata computational analysis as the dynamic model of spatial epidemiology. This epidemic spatial model permits to reproduce explicitly the interaction of two types of transmission mechanisms in terms of global and local variables, which in turn can be adjusted to simulate respectively the populational mobility and geographical neighborhood contacts. The resulting virtual laboratory was designed to run spatio-temporal simulation of the Dengue disease spreading based on local and global interactions among two distinct populations (humans and mosquitoes).

",2006,0, 481,Automated Expert Modeling for Automated Student Evaluation,"A computational enterprise model representing key facets of are organization care be are effective tool to consider where planning are enterprise information architecture. For example, a specific organization's quality management business processes and organizational structures can be represented using such a model, and then compared to a reference model of ""good"" processes and structures, such as the ISO 9000 standards. The specific and reference models can be represented using common entities, attributes, and relationships-comprising general schema or data model-which are then formally defined and constrained. These definitions and constraints can be used as inference rules applied to the models. Hence identification of differences between the models as quality problems can be automatically inferred, as can the analysis and correction of problems. In this paper; the TOTE ISO 9000 Micro-Theory is presented as a formal reference model of quality goodness. ISO 9000 requirements represented as inference rules in the micro-theory are applied to facts about an organization's quality management processes and structures, and conformance or nonconformance to requirements is automatically inferred. TOTE Ontologies for Quality Modeling are the common data and logical (formal definitions and constraints) models of the reference and specific organization's models. The example use of the micro-theory demonstrates enterprise model use for a pre-audit, which lowers the cost and time for improving quality through achieving ISO 9000 compliance. Since these enterprise models are constructed using ontologies, benefits of using ontologies such as model re-usability and sharability can be reaped.",2002,0, 482,Automated Information Systems Generation for Process-Oriented Organizations,"Currently, the development of organizational information systems remains a complex task. Final software product quality often does not match expectations. The existence of organizational models is the first step to reduce complexity in the development of information systems. Within the life cycle of an information system, activities are still very dependent in quality, time, and costs on the human resource skills that staff them. The existence of automated mechanisms to transform client requirements into characteristics of running systems would bring added value to the resulting software product, either in product quality and time perspectives. In this proposal, the manipulation of requirements must be done using an understandable model for both software engineers and business process experts. This model should be used to automatically reshape the running organizational information system and be the basis for an automated information system generation. The usage of such mechanism can be done during a development project, but also after its implementation where standalone process experts could change the organization model, knowing that the changes, in an automated mode, would be transferred into the running system.",2007,0, 483,Automatic camera path generation for graph navigation in 3D,"This paper addresses the Focus+Context issues involved in navigating very large graphs in 3D Euclidean space. The main aim of the approach presented in this material is to preserve and perhaps enhance the user mental map during the transition phase from one focus object to the other. 
This approach, NavAssist, accomplishes this task by filling the user's view port with the maximum amount of information without compromising user orientation. A simplified 3D view point path-generating technique is presented that reveals the largest number of graph components during 3D space navigation.",2005,0, 484,Automatic Detection of Critical Epochs in coma-EEG Using Independent Component Analysis and Higher Order Statistics,"

Previous works showed that the joint use of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) allows to extract a few meaningful dominant components from the EEG of patients in coma. A procedure for automatic critical epoch detection might support the doctor in the long time monitoring of the patients, this is why we are headed to find a procedure able to automatically quantify how much an epoch is critical or not. In this paper we propose a procedure based on the extraction of some features from the dominant components: the entropy and the kurtosis. This feature analysis allowed us to detect some epochs that are likely to be critical and that are worth inspecting by the expert in order to assess the possible restarting of the brain activity.

",2006,0, 485,Automatic Extraction of Genomic Glossary Triggered by Query,"

In the domain of genomic research, the understanding of specific gene name is a portal to most Information Retrieval (IR) and Information Extraction (IE) systems. In this paper we present an automatic method to extract genomic glossary triggered by the initial gene name in query. LocusLink gene names and MEDLINE abstracts are employed in our system, playing the roles of query triggers and genomic corpus respectively. The evaluation of the extracted glossary is through query expansion in TREC2003 Genomics Track ad hoc retrieval task, and the experiment results yield evidence that 90.15% recall can be achieved.

",2006,0, 486,AUTOMATING A DESIGN REUSE FACILITY WITH CRITICAL PARAMETERS,"The actual performance of modern spectroscopy amplifiers on the field is critically dependent on the setting of some analog controls and sensitive to some second-order effects, to be taken into account by the designer. The semi-automated pole-zero cancellation control is here introduced; precision adjustment is easily obtained under operating conditions by means of a Pole-Zero Adviser???? circuit, without using auxiliary equipment (oscilloscope, etc.). Automated controls for the threshold and time constant of a gated baseline restorer are also considered. Some aspects in the design and high-resolution testing of spectroscopy amplifiers are discussed.",1982,0, 487,Automating and Simplifying Memory Corruption Attack Response Using Failure-Aware Computing,"Over the last two decades, advances in software engineering have produced new ways of creating robust, reliable software. Unfortunately, the dream of bug-free software still eludes us. When bugs are discovered in deployed software, software failures and service disruption can lead to significant losses, both monetary and otherwise. The typical failure response process is composed of three phases: failure detection, cause analysis, and solution formulation. To minimize the impact of software failures, it is critical that each of these phases be completed as quickly as possible. This thesis is separated into two parts. In the first part, we propose a general conceptual approach called emph{failure-aware computing} that aims to automate as much of the failure response process as possible. We describe the architecture of this proposed framework, some possible applications, and challenges if it were implemented. We then describe how this framework can be applied to responding to memory corruption errors. In the second part, we describe and evaluate an implementation of part of this framework for diagnosing memory corruption failures. In particular, we discuss a root cause analysis tool we have created that analyzes a program's source code to determine which memory-related program events potentially lead to a memory corruption error. Our tool then monitors the afflicted program's execution and outputs useful information to aid the developer in understanding the root cause of the failure. We finally evaluate our tool's effectiveness in identifying the root cause of memory access errors in both self-written and open-source code.",2006,0, 488,Automotive use case standard for embedded systems,"Embedded systems can be engineered using clean room software engineering methodology (CRSE) as it considers all the quality issues as integral part of the CRSE life cycle model and lays stress on the reduced size and effort of testing through statistical use testing. Both CRSE and embedded systems development methodologies are based on stimulus-response models. The stimulus-response models are used for designing the external behavioral requirements. Thus, CRSE, in a revised form, can be conveniently be used for the development of reliable embedded systems. Verification and validation of one model with the other such as verifying the external behavior models (black box structures) with the requirement specifications and vice versa is the most important built-in feature of CRSE. In the literature the verification and validation methods described are manual step by step procedures which are either intuitive or experience based. 
CRSE suffers from lack of formal frameworks to verify box structures with the requirement specification and vice versa. In this paper, a methodology is proposed for verifying black box structures which are derived using end-to-end processing requirements of the embedded systems. The verification mechanism is built around generation of stimulus-response sequences in two different ways and proving that the sequences generated are the same when the system has been properly built. The stimulus-response sequences generation from the perspectives of thin threads and use case models has been presented in this paper.",2009,0, 489,Barriers facing women in the IT work force,"The percentage of women working in Information Technology (IT) is falling as revealed by the 2003 Information Technology Association of America (ITAA) Blue Ribbon Panel on Information Technology (IT) Diversity report; the percentage of women in the IT workforce fell to 34.9% in 2002 down from 41% in 1996. Several studies have indicated this issue is reaching a crisis level and needs to be explored. Women working in IT at a Fortune 500 company were asked what workplace barriers they faced that had influenced their voluntary turnover decisions or the decisions of their female counterparts. Revealed causal mapping was used to evoke representations of the cognitions surrounding the barriers women face in the IT field. A causal map was developed that indicated women's actual turnover was linked to their views of their family responsibilities, the stresses they face within the workplace, various qualities of their jobs, and the flexibility they were given to determine their work schedule. Their statements regarding the barriers they faced in terms of promotion opportunities (both perceived and actual) were linked to the same four concepts. Interestingly, there was no link between promotion opportunities and voluntary turnover. Reciprocal relationships were identified between managing family responsibility and stress, work schedule flexibility and stress, managing family responsibility and job qualities, and job qualities and stress. Discrimination and lack of consistency in how management treated employees, while important, were not central to how the women in this sample thought about issues related to promotion and voluntary turnover.",2006,0, 490,"Be successful, take a hostage or ""outsourcing the outsourcing Manager""","Working on a project where the work is distributed across the globe need not be a dreadful experience. It is possible to set up a distributed team so that the regular team members are not affected by the issues pertaining to the aspects of the global setup. This paper sheds some light on the problems discovered , analyzes what often goes wrong and suggests one of many possible solutions. Take the ""full service"" approach; outsource the management of the outsourced project. 
Using an example of a project done between the USA West Coast and Slovakia, the advantages of the proposed solution are evident.",2007,0, 491,Behavioral anticipation in agent simulation,"In this article, the following is done: (1) a systematic and comprehensive classification of input is given and the relevance of perception as an important type of input in intelligent systems is pointed out, (2) a categorization of perception is given and anticipation is presented as a type of perception, (3) the inclusion of anticipation in simulation studies is clarified and other aspects of perceptions in simulation studies especially in conflict situations are elaborated.",2004,0, 492,Behavioral Similarity Matching using Concrete Source Code Templates in Logic Queries,"

Program query languages and pattern-detection techniques are an essential part of program analysis and manipulation systems. Queries and patterns permit the identification of the parts of interest in a program's implementation through a representation dedicated to the intent of the system (e.g. call-graphs to detect behavioral flaws, abstract syntax trees for transformations, concrete source code to verify programming conventions, etc). This requires that developers understand and manage all the different representations and techniques in order to detect various patterns of interest. To alleviate this overhead, we present a logic-based language that allows the program's implementation to be queried using concrete source code templates. The queries are matched against a combination of structural and behavioral program representations, including call-graphs, points-to analysis results and abstract syntax trees. The result of our approach is that developers can detect patterns in the queried program using source code excerpts (embedded in logic queries) which act as prototypical samples of the structure and behavior they intend to match.

",2007,0, 493,Best Practices for International eSourcing of Software Products and Services,"This research analyzes how eSourcing service providers can execute the Information and Communications Technology-enabled international sourcing of software-intensive systems and services (eSourcing) effectively. The extant literature falls short of providing a detailed enough set of best practices and supporting classes of information systems to help providers to manage and deliver effective services. This research presents a set of best practices for international eSourcing service providers to facilitate the execution, improvement, and management of international eSourcing services. The practices help providers to establish and execute a mature eSourcing life-cycle in order to overcome the cultural, technical, and geographical boundaries in international eSourcing. Future research should examine the introduction and application of these practices and classes in the context of various service providers to validate the proposed set.",2015,0, 494,Better Student Assessing by Finding Difficulty Factors in a Fully Automated Comprehension Measure,"

The multiple choice cloze (MCC) question format is commonly used to assess students' comprehension. It is an especially useful format for ITS because it is fully automatable and can be used on any text. Unfortunately, very little is known about the factors that influence MCC question difficulty and student performance on such questions. In order to better understand student performance on MCC questions, we developed a model of MCC questions. Our model shows that the difficulty of the answer and the student's response time are the most important predictors of student performance. In addition to showing the relative impact of the terms in our model, our model provides evidence of a developmental trend in syntactic awareness beginning around the 2nd grade. Our model also accounts for 10% more variance in students' external test scores compared to the standard scoring method for MCC questions.

",2006,0, 495,Beyond Centrality - Classifying Topological Significance Using Backup Efficiency and Alternative Paths,"

In networks characterized by broad degree distribution, such as the Internet AS graph, node significance is often associated with its degree or with centrality metrics which relate to its reachability and shortest paths passing through it. Such measures do not consider availability of efficient backup of the node and thus often fail to capture its contribution to the functionality and resilience of the network operation. In this paper we suggest the Quality of Backup (QoB) and Alternative Path Centrality (APC) measures as complementary methods which enable analysis of node significance in a manner which considers backup. We examine the theoretical significance of these measures and use them to classify nodes in the Internet AS graph while applying the BGP valley-free routing restrictions. We show that both node degree and node centrality are not necessarily evidence of its significance. In particular, some medium degree nodes with medium centrality measure prove to be crucial for efficient routing in the Internet AS graph.

",2007,0, 496,Beyond source code: The importance of other artifacts in software development (a case study).,"Current software systems contain increasingly more elements that have not usually been considered in software engineering research and studies. Source artifacts, understood as the source components needed to obtain a binary, ready to use version of a program, comprise in many systems more than just the elements written in a programming language (source code). Especially when we move apart from systems-programming and enter the realm of end-user applications, we find files for documentation, interface specifications, internationalization and localization modules and multimedia data. All of them are source artifacts in the sense that developers work directly with them, and that applications are built automatically using them as input. This paper discusses the differences and relationships between source code (usually written in a programming language) and these other files, by analyzing the KDE software versioning repository (with about 6,800,000 commits and 450,000 files). A comprehensive study of those files, and their evolution in time, is performed, looking for patterns and trying to infer from them the related behaviors of developers with different profiles, from where we conclude that studying those 'other' source artifacts can provide a great deal of insight on a software system.",2006,0, 497,Beyond the Short Answer Question with Research Methods Tutor,"Research Methods Tutor is a new intelligent tutoring system created by porting the existing implementation of the AutoTutor system to a new domain, Research Methods in Behavioural Sciences, which allows more interactive dialogues. The procedure of porting allowed for an evaluation of the domain independence of the AutoTutor framework and for the identification of domain related requirements. Specific recommendations for the development of other dialogue-based tutors were derived from our experience.",2002,0, 498,Biomedical Retrieval: How Can a Thesaurus Help?,"Summary form only given. The search for relevant and actionable information is key to achieving clinical and research goals in biomedicine. Biomedical information exists in different forms: as text and illustrations in journal articles and other documents, in ""images"" stored in databases, and as patients' cases in electronic health records. In the context of this work an ""image"" includes not only biomedical images, but also illustrations, charts, graphs, and other visual material appearing in biomedical journals, electronic health records, and other relevant databases. The tutorial will cover methods and techniques to retrieve information from these entities, by moving beyond conventional text-based searching to combining both text and visual features in search queries. The approaches to meeting these objectives use a combination of techniques and tools from the fields of Information Retrieval (IK), Content-Based Image Retrieval (CBIR), and Natural Language Processing (NLP). The tutorial will discuss steps to improve the retrieval of biomedical literature by targeting the text describing the visual content in articles (figures, including illustrations and images), a rich source of information not typically exploited by conventional bibliographic or full-text databases. Taking this a step further we will explore challenges in finding information relevant to a patient's case from the literature and then link it to the patient's health record. 
The case is first represented in structured form using both text and image features, and then literature and EHR databases can be searched for similar cases. Further, we will discuss steps to automatically find semantically similar images in image databases, which is an important step in differential diagnosis. Automatic image annotation and retrieval steps will be described that use image features and a combination of image and text features. We explore steps toward generating a ""visual ontology"", i.e., concepts assigned to image patches. Elements from the visual ontology are called ""visual keywords"" and are used to find images with similar concepts. The tutorial will demonstrate some of these techniques by demonstrating our Image and Text Search Engine (ITSE), a hybrid system combining NLM's Essie text search engine with CEB's image similarity engine.",2010,0, 499,BioMercator: Integrating genetic maps and QTL towards discovery of candidate genes,"

Summary: Breeding programs face the challenge of integrating information from genomics and from quantitative trait loci (QTL) analysis in order to identify genomic sequences controlling the variation of important traits. Despite the development of integrative databases, building a consensus map of genes, QTL and other loci gathered from multiple maps remains a manual and tedious task. Nevertheless, this is a critical step to reveal co-locations between genes and QTL. Another important matter is to determine whether QTL linked to same traits or related ones is detected in independent experiments and located in the same region, and represents a single locus or not. Statistical tools such as meta-analysis can be used to answer this question. BioMercator has been developed to automate map compilation and QTL meta-analysis, and to visualize co-locations between genes and QTL through a graphical interface.

Availability: Available upon request (http://moulon/~bioinfo/BioMercator/). Free of charge for academic use.

",2004,0, 500,Bio-terror Preparedness Exercise in a Mixed Reality Environment,"

The paper presents a dynamic data-driven mixed reality environment to complement a full-scale bio-terror preparedness exercise. The environment consists of a simulation of the virtual geographic locations involved in the exercise scenario, along with an artificially intelligent agent-based population. The crisis scenario, like the epidemiology of a disease or the plume of a chemical spill or radiological explosion, is then simulated in the virtual environment. The public health impact, the economic impact and the public approval rating impact is then calculated based on the sequence of events defined in the scenario, and the actions and decisions made during the full-scale exercise. The decisions made in the live exercise influence the outcome of the simulation, and the outcomes of the simulation influence the decisions being made during the exercise. The mixed reality environment provides the long-term and large-scale impact of the decisions made during the full-scale exercise.

",2007,0, 501,Branch Elimination via Multi-variable Condition Merging,"AbstractConditional branches are expensive. Branches require a significant percentage of execution cycles since they occur frequently and cause pipeline flushes when mispredicted. In addition, branches result in forks in the control flow, which can prevent other code-improving transformations from being applied. In this paper we describe profile-based techniques for replacing the execution of a set of two or more branches with a single branch on a conventional scalar processor. First, we gather profile information to detect the frequently executed paths in a program. Second, we detect sets of conditions in frequently executed paths that can be merged into a single condition. Third, we estimate the benefit of merging each set of conditions. Finally, we restructure the control flow to merge the sets that are deemed beneficial. The results show that eliminating branches by merging conditions can significantly reduce the number of conditional branches performed in non-numerical applications.",2003,0, 502,Break the Habit! Designing an e-Therapy Intervention Using a Virtual Coach in Aid of Smoking Cessation,"

E-therapy offers new means to support smokers during their attempt to quit. An embodied conversational agent can support people as a virtual coach on the internet. In this paper requirements are formulated for such a virtual coach and a global design is proposed. The requirements and the design are based on an extensive analysis of the practice of individual coaching of the Dutch organization STIVORO. In addition, the outcomes of a survey research measuring the acceptance of such a virtual coach by a potential user group are described.

",2006,0, 503,BRIDGING MDA AND OWL ONTOLOGIES,"Ontology is an effective method of knowledge representation, however, the existing methods of constructing ontology are not suitable for the field of patent. It is known that the UML modeling of graphical is more intuitive than OWL. According to the demand of the patent information, the paper gives an ontology modeling method which combined UML with OWL in order to put forward the improvement of existing modeling methods, and gives the detailed steps, then uses the method to construct the domain ontology of patent, finally takes the ""Washer"" subclass of patent field for example and gives a description with OWL. This approach improves the recall ratio and precision of the patent information, also, has some reference value to the construction of other domain ontologies.",2009,0, 504,Bridging the Gap: Exploring Interactions Between Digital Human Models and Cognitive Models,"

For years now, most researchers modeling physical and cognitive behavior have focused on one area or the other, dividing human performance into ""neck up"" and ""neck down."" But the current state of the art in both areas has advanced to the point that researchers should begin considering how the two areas interact to produce behaviors. In light of this, some common terms are defined so researchers working in different disciplines and application areas can understand each other better. Second, a crude ""roadmap"" is presented to suggest areas of interaction where researchers developing digital human form and other physical performance models might be able to collaborate with researchers developing cognitive models of human performance in order to advance the ""state-of-the-art"" in replicating and predicting human performance.

",2007,0, 505,Broadening participation in computing: issues and challenges,"Many believe that Cloud will reshape the entire ICT industry as a revolution. In this paper, we aim to pinpoint the challenges and issues of Cloud computing. We first discuss two related computing paradigms - Service-Oriented Computing and Grid computing, and their relationships with Cloud computing. We then identify several challenges from the Cloud computing adoption perspective. Last, we will highlight the Cloud interoperability issue that deserves substantial further research and development.",2010,0, 506,Build and Release Management: Supporting development of accelerator control software at CERN,"Over the years, number of design methodologies were developed. One of the state-of-the-art modeling approaches is Model Driven Architecture. This thesis is an attempt to utilize the MDA in a specific and complex domain ? real-time systems development. In MDA framework there are three levels of abstraction: computation independent, platform independent and platform specific. The target environment of the method presented in the thesis is Ada 2005 programming language which extended the old version of the language with several new object-oriented features making it suitable for using with the MDA. Application of the MDA in real-time systems domain targeted towards Ada 2005 implementation constitutes a new design method which benefits from the MDA, UML and Ada 2005 advantages. The thesis starts with presentation of the complexity of the real-time systems domain. A few real-time domain aspects are chosen as a main area for elaborating the design method. The utilizes UML Profile for Schedulability, Performance and Time for defining platform independent model. Additionally it provides its extension ? the Ada UML profile ? which constitutes the platform specific model. This is followed by specification of transformations between platform independent and specific model. The specification is used as a base for implementation of the transformations. Guidelines for code generation form the Ada UML profile are also provided. Finally, the thesis describes how the transformations can be implemented in Telelogic TAU tool.",2007,0, 507,Building measure-based prediction models for UML class diagram maintainability.,"The fact that the usage of metrics in the analysis and design of object oriented (OO) software can help designers make better decisions is gaining relevance in software measurement arena. Moreover, the necessity of having early indicators of external quality attributes, such as maintainability, based on early metrics is growing. In addition to this, the aim is to show how early metrics which measure internal attributes, such as structural complexity and size of UML class diagrams, can be used as early class diagram maintainability indicators. For this purpose, we present a controlled experiment and its replication, which we carried out to gather the empirical data, which in turn is the basis of the current study. From the results obtained, it seems that there is a reasonable chance that useful class diagram maintainability models could be built based on early metrics. Despite this fact, more empirical studies, especially using data taken form real projects performed in industrial settings, are needed in order to obtain a comprehensive body of knowledge and experience.",2003,0, 508,Building Reflective Mobile Middleware Framework on Top of the OSGi Platform,"

The literature on mobile middleware is extensive. Numerous aspects of the mobility's effect on middleware have been analysed and the amount of previous work allowed to identify the most important patterns. Although the notion of “most important middleware” depends on the application supported by the middleware, there are traits that can be discovered in most of the connected mobile applications. Based on the experience of several authors, these traits are context-awareness, reflectivity, support for off-line operation and asynchronous (message-based) communication.

This paper presents a mobile middleware system built to support these patterns and demonstrates, how the OSGi service platform can be used to realize these patterns. It will be demonstrated that although OSGi was built to support manageability requirements, the resulting platform is suitable for implementing the 4 major middleware patterns too. The paper presents the components of this context-aware, reflective middleware framework and evaluates its footprint.

",2006,0, 509,Building reusable information security courseware,"A hybrid cloud is a cloud computing environment in which an organization provides and manages some internal resources (private cloud) while the other resources are provisioned externally (public cloud). Rapid deployment of hybrid clouds for utility, cost, effectiveness and flexibility has made it necessary to assure the security and privacy of hybrid clouds as it transcends different domains. Further, successful hybrid cloud implementation requires a well-structured architecture supporting the functionalities of both private and public clouds and the seamless transitions between them. One of the challenges in a hybrid cloud is securing resource access, in particular, enforcing that the owner's policy never gets violated even when the data gets consumed and processed in multiple domains. Existing mechanisms for achieving this, including industry standards such as XACML, SAML, and OAuth, are vulnerable to indirect information leaks as they do not keep track of information flow. The Readers-Writers Flow Model (RWFM) is a novel security model with an intuitive security policy that tracks and controls the flow of information in a decentralized system. In this paper, we present an approach to building a hybrid cloud that preserves the given security and privacy policy by integrating an RWFM security module into a cloud service manager. An advantage of RWFM is that it provides a uniform solution for securing various kinds of hybrid cloud architectures ranging from the simple pairwise federation to the complex interclouds, and supporting varying degrees of flexibility in workload placement ranging from a simple static placement to fully dynamic migration. Further, RWFM framework is forensic-ready by design, because the labels of data and services readily provide the necessary forensic information.",2016,0, 510,Building the bridge between academia and practice,"This article presents the basic principles of operation for model predictive control (MPC), a control methodology that opens a new world of opportunities. MPC is a powerful technique that can fulfill the increased performance and higher efficiency demands of power converters today. The main features of this technique are presented as well as the MPC strategy and basic elements. The two main MPC methods for power converters [continuous-control-set MPC (CCS-MPC) and finite-control-set MPC (FCS-MPC)] are described, and their application to a voltage-source inverter (VSI) is shown to illustrate their capabilities. This article tries to bridge the gap between the powerful but sometimes abstract techniques developed by researchers in the control community and the empirical approach of power electronics practitioners.",2015,0, 511,Building Theories from Multiple Evidence Sources,"A method of constructing and maintaining a grid map using ultrasonic sensors based on Dempster-Shafer evidence theory (D-S evidence theory) is proposed with respect to the problem of unstructured unknown environment exploration and mapping. Mobile robot moving in an environment explores with ultrasonic sensors; D-S evidence theory is used to fuse information; The problem that D-S evidence theory can't be applied to information fusion under certain circumstances and the matter that D-S evidence theory have counter-intuitive behaviors in some cases are discussed; An approximate process algorithm is advanced to avoid above problems; Finally, a two-dimensional grid map is built. 
Application result shows that this method is appropriate for unstructured unknown environment mapping.",2009,0, 512,Building Theories in Software Engineering,"Job Rotation is an organizational practice in which individuals are frequently moved from a job (or project) to another in the same organization. Studies in other areas have found that this practice has both negative and positive effects on individuals' work. However, there are only few studies addressing this issue in software engineering so far. The goal of our study is to investigate the effects of job rotation on work related factors in software engineering by performing a qualitative case study on a large software organization that uses job rotation as an organizational practice. We interviewed senior managers, project managers, and software engineers that had experienced this practice. Altogether, 48 participants were involved in all phases of this research. Collected data was analyzed using qualitative coding techniques and the results were checked and validated with participants through member checking. Our findings suggest that it is necessary to find balance between the positive effects on work variety and learning opportunities, and negative effects on cognitive workload and performance. Further, the lack of feedback resulting from constant movement among projects and teams may have a negative impact on performance feedback. We conclude that job rotation is an important organizational practice with important positive results. However, managers must be aware of potential negative effects and deploy tactics to balance them. We discuss such tactics in this article.",2016,0, 513,"Building Virtual Spaces Games as Gatekeepers for the IT Workforce","The paper is concerned with the concept of a mixed reality (MR) stage environment as networked layering of physical space and virtual environments. The MR stage enables multiple performers to interact through intuitive free body interfaces. The goal is the creation of interface environments which allow participants to communicate in shared and remote physical spaces through their natural senses: hearing, seeing, speaking, gesturing, touching and moving around. Connecting the concept of the stage with the idea of digital information space comprises investigation in digital storytelling and the design of nonlinear structures. What we get is an instrument for the human body. What we see is actual movement of performers integrating virtual sound and images in real time onto the MR stage",2001,0, 514,Business case: the role of the IT Architect,Strategic business and IT alignment (SBITA) is still ranked amongst the top concerns of the enterprise's management executives. Such alignment is an organization-wide issue that influences the company's overall performance and its assessment is a fundamental input for the enterprise's managers to make informed decisions on SBITA enhancement possibilities. This paper reports the application of an enterprise architecture-based SBITA assessment metamodel in a case study conducted in an intensive IT service enterprise. The case study addresses two research questions: How can be applied the proposed enterprise architecture-based SBITA assessment metamodel in enterprises? and What is the quality and use of the results of such application? 
The authors have published the enterprise architecture-based SBITA assessment metamodel as a tool that combines the comprehensive and systematic modeling practices in the field of enterprise architecture with the guidance of tested and benchmarked SBITA assessment expert's method. Luftman's assessment method was selected in this research project.,2008,0, 515,Business Process Modeling: A Maturing Discipline?,"Artifact is the key business entity in the evolution of business process. Artifact-centric business process management is a typical representative of the data-centric business process management. There are many artifacts during the execution of business process system. In a real world application, such as restaurant process, we should check every artifact's correctness. In this paper, we explore the model of artifact-centric business process system from the perspective of knowledge popularization through introducing description logics to modeling, analysis and prove the bisimilar relation between two different system models. Then, we do verification of artifact through finding a pruning of raw system. At last, we apply such system model and verification to restaurant process.",2014,0, 516,Can Brotherhood Be Sold Like Soap...Online? An Online Social Marketing and Advocacy Pilot Study Synopsis,"

Having engaged one billion users by early 2006, the Internet is the world's fastest-growing mass communications medium. As it permeates into countless lives across the planet, it offers social campaigners an opportunity to deploy interactive interventions that encourage populations to adopt healthy living, environmental protection and community development behaviours. Using a classic set of social campaigning criteria, this paper explores relationships between social campaign websites and behavioural change.

",2007,0, 517,Can health care benefit from modeling and simulation methods in the same way as business and manufacturing has?,"It has been increasingly recognized that the application of simulation methods can be instrumental in addressing the multi-faceted challenges health care is facing at present and more importantly in the future. But the application of these methods seems not to be as widespread as in other sectors, where such methods when used as part of their core operation, reap significant benefits. This paper examines the potential use of modeling and simulation in health care, drawing the parallels and marking the mismatches from the business and manufacturing world. Methods from the latter sectors will be reviewed with the intention to assess their potential usefulness to healthcare. To focus this discussion, we propose and discuss seven axes of differentiation: patient fear of death; medical practitioners (for example approach to healing, investigation by experimentation and finance); healthcare support staff; health care managers; political influence and control; 'society's view'; and Utopia.",2007,0, 518,Can We Use Technology to Train Inspectors to Be More Systematic?,"

Inspection quality is dependent on the ability of inspectors to weed out defective items. When inspection is visual in nature, humans play a critical role in ensuring inspection quality with training identified as the primary intervention strategy for improving inspection performance. However, for this strategy to be successful, inspectors must be provided with the needed tools to enhance their inspection skills. In this article we outline efforts pursued at Clemson University, focusing on the development of computer-based training systems for inspection training and discuss the results of some of the research briefly.

",2007,0, 519,Capturing Scientists? Insight for DDDAS,This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.,1984,0, 520,Case study methodology designed research in software engineering methodology validation,"One of the challenging research problems in validating a software engineering methodology (SEM), and a part of its validation process, is to answer ?How to fairly collect, present and analyze the data??. This problem adds complexity, in general, when the SEM involves the use of human knowledge in its methods (phases). How should such created knowledge be captured in the methodology during a SEM process? How can such knowledge be made available for continued SEM process improvement? How can such knowledge be used in validating the SEM? Measuring such knowledge is hard, but we can benefit from the ?Case study research design? which is a valuable and an important empirical research alternative in designing a research plan that establishes a logical link from the data to be collected to the initial questions of study. In this paper, a case study research methodology (CSM) designed is presented with its application to the validation of a software requirements engineering methodology (SREM). The preliminary results show the evidence used to validate the SREM as well as the potential usage of CSM as a goal-oriented research design, practice and teaching methodology",2004,0, 521,"Case Study on Re-Architecting of Established Enterprise Software Product: Major Challenges Encountered and SDM Prescriptions from Lessons Learned","The paper studies a real word project of an enterprise software product re-architecting at a mid-sized telecommunication company. It begins with a description of the company and the software product, as well as an elaboration of the project under study. Using written surveys and follow-up interviews as the primary data gathering tools, the paper collects and tabulates first-hand experience and opinions from key project participants. Based on the survey results, the paper proposes an integrative implementation framework, based primarily on literature reviews in offshore outsourcing, systems and project management (SPM) and product design and development (PDD), for a detailed analysis of key challenges encountered by the project under study. The paper also investigates if specific key challenges could have been managed or influenced by the application of specific methods and tools within the proposed framework.",2005,0, 522,Case Study: Implementing MT for the Translation of Pre-sales Marketing and Post-sales Software Deployment Documentation at Mycom International,AbstractSeveral major telecommunications companies have made significant investment in either controlled language and/or machine translation over the past 10 years.,2004,0, 523,Case-Based Student Modeling Using Concept Maps,"We report the results of applying language technology to the bioinformatics problem of online concept annotation of biomedical text. We extend our concept annotator, CONANN, to find biomedical concepts in using concept language models. The goal of CONANN is to improve annotation speed without losing annotation accuracy as compared to offline systems, facilitating the use of concept annotation in online environments. 
Intrinsic and extrinsic evaluations show accuracy competitive with a state-of-the-art biomedical text concept annotator with a speed improvement of more than four times.",2008,0, 524,Cerebral Vessel Enhancement Using Rigid Registration in Three-Dimensional CT Angiography,"In this paper, we propose a robust 3D rigid registration technique for detecting cerebral aneurysms, arterial stenosis, and other vascular anomalies in a brain CT angiography. Our method is composed of the following four steps. First, a set of feature points are selected using a 3D edge detection technique within skull base. Second, a locally weighted 3D distance map is constructed for leading our similarity measure to robust convergence on the maximum value. Third, the similarity measure between feature points is evaluated repeatedly by selective cross-correlation. Fourth, bone masking is performed for effectively removing bones. Experimental results show that the performance of our method is very promising compared to conventional methods in the aspects of its visual inspection and robustness. In particular, our method is well applied to vasculature anatomy of patients with an aneurysm in the region of the skull base.",2004,0, 525,Challenges in Business Performance Measurement: The Case of a Corporate IT Function,"

Contemporary organisations are increasingly adopting performance measurement activity to assess their level of achievement of strategic objectives and delivery of stakeholder value. This qualitative research sought to increase understanding of the challenges involved in this area. An in-depth case study of the corporate IT services unit of a global company highlighted key challenges pertaining to: (i) deriving value from performance measurement practices; (ii) establishing appropriate and useful performance measures; (iii) implementing effective information collation and dashboard practices. The need to transform performance measurement from a tool for simply monitoring/reporting to one of learning what factors drive results (so as to be able to influence these factors) is suggested as a way to increase the value derived from such practices. This is seen to imply a need to rethink major notions of balance and strategic relevance that have been advanced hitherto as leading design principles.

,2007,0, 526,Change Detection with Kalman Filter and CUSUM,"This paper addresses the problem of voice activity detection in noise environments. The proposed voice activity detection technique described in this paper is based on a statistical model approach, and estimates the statistical models sequentially without a prior knowledge of noise. The crucial factor as regards the statistical model-based approach is noise parameter estimation, especially non-stationary noise. To deal with this problem, a parallel non-linear Kalman filter, that is a multiplied estimator, is used for sequential noise estimation. Also, a backward estimation is used for noise estimation and likelihood calculation for speech / non-speech discrimination. In the evaluation results, we observed that the proposed method significantly outperforms conventional methods as regards voice activity detection accuracy in noisy environments.",2007,0, 527,Change Management for Distributed Ontologies,"This paper reports the summary and results of our research on providing a graph oriented formalism to represent, analyze and validate the evolution of bio-ontologies, with emphasis on the FungalWeb Ontology. In this approach Category theory along with rule-based hierarchical distributed (HD) graph transformation have been employed to propose a more specific semantics for analyzing ontological changes and transformations between different versions of an ontology, as well as tracking the effects of a change in different levels of abstractions.",2011,0, 528,Changing Induced Moods Via Virtual Reality,"

Mood Induction Procedures (MIPs) are designed to induce emotional changes in experimental subjects in a controlled way, manipulating variables inside the laboratory. The induced mood should be an experimental analogue of the mood that would occur in a certain natural situation. Our team has developed an MIP using VR (VR-MIP) in order to induce different moods (sadness, happiness, anxiety and relaxation). The virtual environment is a park, which changes according to the mood to be induced. This work will present data about the efficacy of this procedure not only to induce a mood, but also to change after the mood is induced.

",2006,0, 529,Characterizing a complex J2EE workload: A comprehensive analysis and opportunities for optimizations,"While past studies of relatively simple Java benchmarks like SPECjvm98 and SPECjbb2000 have been integral in advancing the server industry, this paper presents an analysis of a significantly more complex 3-Tier J2EE (Java 2 Enterprise Edition) commercial workload, SPECjAppServer2004. Understanding the nature of such commercial workloads is critical to develop the next generation of servers and identify promising directions for systems and software research. In this study, we validate and disprove several assumptions commonly made about Java workloads. For instance, on a tuned system with an appropriately sized heap, the fraction of CPU time spent on garbage collection for this complex workload is small (<2%) compared to commonly studied client-side Java benchmarks. Unlike small benchmarks, this workload has a rather ""flat"" method profile with no obvious hot spots. Therefore, new performance analysis techniques and tools to identify opportunities for optimizations are needed because the traditional 90/10 rule of thumb does not apply. We evaluate hardware performance monitor data and use insights to motivate future research. We find that this workload has a relatively high CPI and a branch misprediction rate. We observe that almost one half of executed instructions are loads and stores and that the data working set is large. There are very few cache-to-cache ""modified data"" transfers which limits opportunities for intelligent thread co-scheduling. We note that while using large pages for a Java heap is a simple and effective way to reduce TLB misses and improve performance, there is room to reduce translation misses further by placing executable code into large pages. We use statistical correlation to quantify the relationship between various hardware events and an overall system performance. We find that CPI is strongly correlated with branch mispredictions, translation misses, instruction cache misses, and bursty data cache misses that trigger data prefetching. - We note that target address mispredictions for indirect branches (corresponding to Java virtual method calls) are strongly correlated with instruction cache misses. Our observations can be used by hardware and runtime architects to estimate potential benefits of performance enhancements being considered",2007,0, 530,Characterizing Peer-to-Peer Traffic across Internet,"Since the appearance of various P2P IPTV systems which timely broadcast live streaming to peers, they have attracted millions of users from all over the world. It is reported that the online audience have reached more than 1.2 million in peak time by the official website of PPStream, one of the most popular IPTV system in China. However, at the same time the popularity of these systems make the amounts of video traffic grow exponentially. In order to study the global playback performance, users ' behaviors and network characteristics as well, we developed our dedicated crawler of PPStream. Based on the measurements, we make some extensive performance evaluation on this commercially successful P2P IPTV system, and some characteristics on geographic clustering, connection stability, arrival/departure pattern, playback quality, sharing ratio and topology have been revealed. 
We think these findings can help other researchers model such streaming systems and system operators make further optimizations.",2007,0, 531,CHI '07 extended abstracts on Human factors in computing systems,"

This interactive session discusses the quality of recommendations for improving a user interface resulting from a usability evaluation. Problems with the quality of recommendations include recommendations that are not actionable, ones that developers are likely to misunderstand, and ones that may not improve the overall usability of the application. The session will discuss characteristics of useful and usable recommendations, that is, recommendations for solving usability problems that lead to changes that efficiently improve the usability of a product. To make the session as useful as possible we have deliberately left 2-3 panel seats open for people with demonstrated abilities in writing useful and usable recommendations. We intend to fill these seats through a pre-conference contest.

",2007,0, 532,Classification of objects in images based on various object representations,"Detailed and accurate delineation of irrigated and non-irrigated land is critical to water resource management in arid and semi-arid areas, where dependence on groundwater irrigation is high. However, there is no such information available in the national land use and land cover databases such as National Land Cover Dataset and Cropland Data Layer. This study proposed an object-based image classification method to delineate the irrigated and non-irrigated cropland using remote sensing indices, evapotranspiration and other supplemental information. The method has been tested in South-Central Nebraska, and the results showed that the method produced accurate account of irrigated and non-irrigated land classification. The method is expected to be applicable in other arid and semi-arid areas.",2016,0, 533,Classifier Selection Based on Data Complexity Measures,"Tens of thousands of classifiers have been proposed so far. There is no best classifier among them for all the existing data sets. The performance of each classifier often depends on the data sets used for comparison. Even for a single classifier, suitable parameters of the classifier also depend on the data sets. That is, there is a possibility that a suited classifier and its parameter specification can be chosen beforehand if the target data sets or their characteristics were known. In recent years, a number of data complexity measures have been proposed to characterize data sets. The aim of this study is to develop a meta classifier for selecting an appropriate classifier and/or its appropriate parameter specification by means of data complexity measures. In this paper, we focus on the parameter specification of fuzzy classifiers using data complexity measures as a preliminary study. To construct a meta-classifier, we generate a large number of artificial data sets from Keel benchmark data sets. Then we generate meta-patterns which are composed of the values of data complexity measures as inputs and an appropriate fuzzy partition as an output. Using meta-patterns, a meta classifier is designed by multiobjective genetic fuzzy rule selection. We evaluate the proposed method through leave one-group out cross-validation.",2011,0, 534,Coalescing conditional branches into efficient indirect jumps,"AbstractIndirect jumps from tables are traditionally only generated by compilers as an intermediate code generation decision when translating multiway selection statements. However, making this decision during intermediate code generation poses problems. The research described in this paper resolves these problems by using several types of static analysis as a framework for a code improving transformation that exploits indirect jumps from tables. First, control-flow analysis is performed that provides opportunities for coalescing branches generated from other control statements besides multiway selection statements. Second, the optimizer uses various techniques to reduce the cost of indirect jump operations by statically analyzing the context of the surrounding code. Finally, path and branch prediction analysis is used to provide a more accurate estimation of the benefit of coalescing a detected set of branches into a single indirect jump. The results indicate that the coalescing transformation can be frequently applied with significant reductions in the number of instructions executed and total cache work. 
This paper shows that static analysis can be used to implement an effective improving transformation for exploiting indirect jumps.",1997,0, 535,Coalgebraic automata theory: basic results,"A new family of passive sensor radio-frequency identification devices is here proposed for applications in the context of wireless sensor networks. The new tags, working in the ultra-high frequency band, are able to detect the value or the change of some features of the tagged body without using any specific sensor. Such tags are provided with multiple chips embedded either within a cluster of cooperating antennas or in a single multiport antenna, and exploit the natural mismatch of the antenna input impedance caused by the change of the tagged object. A basic theory of multiport sensor tags is formulated with the purpose to describe the possible classification and detection performances in a unitary context. Some numerical examples and a first experiment corroborate the feasibility of the idea.",2008,0, 536,Cognitive complexity in data modeling: causes and recommendations,"

Data modeling is a complex task for novice designers. This paper conducts a systematic study of cognitive complexity to reveal important factors pertaining to data modeling. Four major sources of complexity principles are identified: problem solving principles, design principles, information overload, and systems theory. The factors that lead to complexity are listed in each category. Each factor is then applied to the context of data modeling to evaluate if it affects data modeling complexity. Redundant factors from different sources are ignored, and closely linked factors are merged. The factors are then integrated to come up with a comprehensive list of factors. The factors that cannot largely be controlled are dropped from further analysis. The remaining factors are employed to develop a semantic differential scale for assessing cognitive complexity. The paper concludes with implications and recommendations on how to address cognitive complexity caused by data modeling.

",2007,0, 537,Cognitive evaluation of information modeling methods,This work focuses on establishing coordination models and method with different information in the operation process of established construction project supply chains (CPSCs). A two-level programming model for collaborative decision making is established to find optimal solutions for all stakeholders in CPSCs. An agent-based negotiation framework for CPSCs coordination in dynamic decision environment is designed based on the intelligent agent technology and multi-attribute negotiation theory. This work also presents a relative entropy method to help negotiators (stakeholders) reaches an acceptable solution when negotiations fail or man-made termination. This is a summary of the first author's Ph.D. thesis supervised by Yaowu Wang and Geoffrey Qiping Shen (Hong Kong Polytechnic University) and defended on 25 June 2006 at the Harbin Institute of Technology (China).,2009,0, 538,Cognitive Heuristics in Software Engineering: Applying and Extending Anchoring and Adjustment to Artifact Reuse,"The extensive literature on reuse in software engineering has focused on technical and organizational factors, largely ignoring cognitive characteristics of individual developers. Despite anecdotal evidence that cognitive heuristics play a role in successful artifact reuse, few empirical studies have explored this relationship. This paper proposes how a cognitive heuristic, called anchoring, and the resulting adjustment bias can be adapted and extended to predict issues that might arise when developers reuse code and/or designs. The research proposes that anchoring and adjustment can be manifested in three ways: propagation of errors in reuse artifacts, failure to include requested functionality absent from reuse artifacts, and inclusion of unrequested functionality present in reuse artifacts. Results from two empirical studies are presented. The first study examines reuse of object classes in a programming task, using a combination of practicing programmers and students. The second study uses a database design task with student participants. Results from both studies indicate that anchoring occurs. Specifically, there is strong evidence that developers tend to use the extraneous functionality in the artifacts they are reusing and some evidence of anchoring to errors and omissions in reused artifacts. Implications of these findings for both practice and future research are explored.",2004,0, 539,Cognitive information fusion of georeferenced data for tactical applications,"Georeferenced data from multiple autonomous sources for defined AOI (areas of interest) can be fused and analyzed in support of various decision-making processes such as risk assessment, emergency response, situation awareness and tactical planning. However, data from multiple heterogeneous sources may be in different formats, scales, qualities and coverage. All these characteristics of multi-source spatial data limit the use of traditional statistics methods for information fusion. In this paper, the cognitive belief is proposed to represent georeferenced data from different sources. Uncertainty caused by inaccurate or partial information can be modeled. By applying the belief combination rule, cognitive beliefs can be fused to provide better support for decision makers.",2005,0, 540,Cognitive science implications for enhancing training effectiveness in a serious gaming context,"

Serious games use entertainment principles, creativity, and technology to meet government or corporate training objectives, but these principles alone will not guarantee that the intended learning will occur. To be effective, serious games must incorporate sound cognitive, learning, and pedagogical principles into their design and structure. In this paper, we review cognitive principles that can be applied to improve the training effectiveness in serious games and we describe a process we used to design improvements for an existing game-based training application in the domain of cyber security education.

",2007,0, 541,Collabohab: A Technology Probe into Peer Involvement in Cardiac Rehabilitation,"Given the widely acknowledged impact that social support has on health outcomes, we set out to investigate peer-involvement in cardiac rehabilitation and explore the potential for technological support thereof. We planned to deploy a purpose built technology probe into a 10-week rehabilitation program. This paper presents the findings of the probes pilot study, where rejection of technology and reluctance to involve peers highlighted important considerations for the design of peer-based health promotion technologies and methodological considerations for the study of peer-involvement in behavioural change as well as pervasive health research in general.",2009,0, 542,Collaboration using OSSmole: a repository of FLOSS data and analyses,"This paper introduces a collaborative project OSSmole which collects, shares, and stores comparable data and analyses of free, libre and open source software (FLOSS) development for research purposes. The project is a clearinghouse for data from the ongoing collection and analysis efforts of many disparate research groups. A collaborative data repository reduces duplication and promote compatibility both across sources of FLOSS data and across research groups and analyses. The primary objective of OSSmole is to mine FLOSS source code repositories and provide the resulting data and summary analyses as open source products. However, the OSSmole data model additionally supports donated raw and summary data from a variety of open source researchers and other software repositories. The paper first outlines current difficulties with the typical quantitative FLOSS research process and uses these to develop requirements for such a collaborative data repository. Finally, the design of the OSSmole system is presented, as well as examples of current research and analyses using OSSmole.",2005,0, 543,Collection and Annotation of a Corpus of Human-Human Multimodal Interactions: Emotion and Others Anthropomorphic Characteristics,"

In order to design affective interactive systems, experimental grounding is required for studying expressions of emotion during interaction. In this paper, we present the EmoTaboo protocol for the collection of multimodal emotional behaviours occurring during human-human interactions in a game context. First annotations revealed that the collected data contains various multimodal expressions of emotions and other mental states. In order to reduce the influence of language via a predetermined set of labels and to take into account differences between coders in their capacity to verbalize their perception, we introduce a new annotation methodology based on 1) a hierarchical taxonomy of emotion-related words, and 2) the design of the annotation interface. Future directions include the implementation of such an annotation tool and its evaluation for the annotation of multimodal interactive and emotional behaviours. We will also extend our first annotation scheme to several other characteristics interdependent of emotions.

",2007,0, 544,Column Pruning Beats Stratification in Effort Estimation,"Local calibration combined with stratification, also known as row pruning, is a common technique used by cost estimation professionals to improve model performance. The results presented in this paper raise several serious questions concerning the benefits of row pruning for improving effort estimation indicating the need to rethink standard practice. Firstly, the mean size of improvements from row pruning appears to be relatively small compared to the size of the standard deviations in effort estimation data. Secondly, the advantages of row pruning especially for the purposes of deleting spurious outliers can be achieved using column pruning much more effectively. Hence, we advise against row pruning and advocate column pruning instead.",2007,0, 545,Combining software evidence: arguments and assurance,"We present a novel approach for probabilistic risk assessment (PRA) of systems which require high assurance that they will function as intended. Our approach uses a new model i.e., a dynamic event/fault tree (DEFT) as a graphical and logical method to reason about and identify dependencies between system components, software components, failure events and system outcome modes. The method also explicitly includes software in the analysis and quantifies the contribution of the software components to overall system risk/ reliability. The latter is performed via software quality analysis (SQA) where we use a Bayesian network (BN) model that includes diverse sources of evidence about fault introduction into software; specifically, information from the software development process and product metrics. We illustrate our approach by applying it to the propulsion system of the miniature autonomous extravehicular robotic camera (mini-AERCam). The software component considered for the analysis is the related guidance, navigation and control (GN&C) component. The results of SQA indicate a close correspondence between the BN model estimates and the developer estimates of software defect content. These results are then used in an existing theory of worst-case reliability to quantify the basic event probability of the software component in the DEFT.",2007,0, 546,Combining Study Designs and Techniques Working Group Results,"AbstractOn Tuesday, June 27, 2006, as part of the Empirical Paradigm session of the Dagstuhl Seminar, an ad hoc working group met for approximately 1.5 hours to discuss the topic of combining study designs and techniques. The ad hoc group members were:Carolyn Seaman, Guilherme Horta Travassos, Marek Leszak, Helen Sharp, Hakan Erdogmus, Tracy Hall, Marcus Ciolkowski, Martin Host, James Miller, Sira Vegas, Mike Mahoney.This paper attempts to capture and organize the discussion that took place in the ad hoc working group.",2007,0, 547,Combining the Semantic Web with the Web as Background Knowledge for Ontology Mapping,"

We combine the Semantic Web with the Web, as background knowledge, to provide a more balanced solution for Ontology Mapping. The Semantic Web can provide mappings that are missed by the Web, which can provide many more, but noisy, mappings. We present a combined technique that is based on variations of existing approaches. Our experimental results on two real-life thesauri are compared with previous work, and they reveal that a combined approach to Ontology Mapping can provide more balanced results in terms of precision, recall and confidence measure of mappings. We also discover that a reduced set of 3 appropriate Hearst patterns can eliminate noise in the list of discovered mappings, and thus techniques based exclusively on the Web can be improved. Finally, we also identify open questions derived from building a combined approach.

",2007,0, 548,CommLang: Communication for Coachable Agents,"To preserve ongoing connections and minimize the packet loss when the mobile node (MN) performs a handover, we introduce the low loss mobility agent (LLMA) in to the SIP systems. This agent works as an intermediate B2BUA between the MN and CN to minimize the loss packet when MN experiences a handover. In this paper, we will provide details working procedure to support the UDP session, and brief discussion on the TCP case when a handover occurs.",2009,0, 549,Communicability Criteria of Law Equations Discovery,"

The ""laws"" in science are not the relations established by only the objective features of the nature. They have to be consistent with the assumptions and the operations commonly used in the study of scientists identifying these relations. Upon this consistency, they become communicable among the scientists. The objectives of this literature are to discuss a mathematical foundation of the communicability of the ""scientific law equation"" and to demonstrate ""Smart Discovery System (SDS)"" to discover the law equations based on the foundation. First, the studies of the scientific law equation discovery are briefly reviewed, and the need to introduce an important communicability criterion called ""Mathematical Admissibility"" is pointed out. Second, the axiomatic foundation of the mathematical admissibility in terms of measurement processes and quantity scale-types are discussed. Third, the strong constraints on the admissible formulae of the law equations are shown based on the criterion. Forth, the SDS is demonstrated to discover law equations by successively composing the relations that are derived from the criterion and the experimental data. Fifth, the generic criteria to discover communicable law equations for scientists are discussed in wider view, and the consideration of these criteria in the SDS is reviewed.

",2007,0, 550,Communication of Business?s and Software Engineers,"Communications for the purpose of creating useful software are imperative. Techniques need to be in place to facilitate the flow of correct information from the requestor, and the creator. Without this the software will be of little use and discarded before it should be. In this paper we will present a systematic way to creating a question and answer flow of information. Details of the difference are between using the question and answer system and not will be reported in various ways. Our results will show the increase in efficiency of software creation by having fewer changes later in the project and improved usefulness to the end user.",2007,0, 551,Communication: the neglected technical skill?,Presents the title page from the conference proceedings.,2005,0, 552,Communities of Practice,"The topic of ""transactive energy"" (TE) has received more and more attention over the past 18 months. For example, it has been a part of the New York Reforming the Energy Vision discussions and the topic of activities such as the National Institute of Standards (NIST) TE Challenge. This growing discussion stems, in part, from the realization that new approaches are needed to efficiently and reliably integrate growing numbers of distributed energy resources (DERs).",2016,0, 553,Community Aware Content Adaptation for Mobile Technology Enhanced Learning,"

Mobile technology enhanced learning pertains to the delivery of multimedia learning resources onto mobile end devices such as cell phones and PDAs. It also aims at supporting personalized adaptive learning in a community context. This paper presents a novel approach to supporting both aspects. The community aware content adaptation employs the MPEG-7 and MPEG-21 multimedia metadata standards to present the best possible information to mobile end devices. Meanwhile, interest patterns are derived from a community aware context analysis. We designed and developed a technology enhanced learning platform supporting architecture professionals' study during city excursions and other mobile tasks.

",2006,0, 554,Comparative Assessment of Network-Centric Software Architectures,"Security plays a crucial role in software systems. Existing research efforts have addressed the problem of how to model the security aspect of software at a particular phase of software lifecycle. However, security is still not integrated in all the phases of software lifecycle. In this paper we introduce how classical MDA framework can be extended to consider the security aspect. Such extension offers early assessment and early validation of security requirement, which helps to discover security flaws early in the software development process and reduce the cost of removing flaws.",2009,0, 555,Comparing Rule Measures for Predictive Association Rules,"Credible association rule(CAR) is a new type of association pattern in which items are highly affiliated with each other. The presence of an item in one transaction strongly implies the presence of every other item in the same CAR. And a maximal CAR is a CAR whose each superset isn't a CAR, so the maximal CAR specifies a more compact representation of a group of CARs. In this paper, we introduce some measures for CAR which all represent the affinity of the items in a CAR. A mining method based on maximal clique is also presented to mine all of the maximal CARs. The experimental results demonstrate that the mining method is more effective than the other stand mining methods for association rules.",2009,0, 556,Comparing the Performance of Distributed Hash Tables Under Churn,"Recently, it has been argued that reputation mechanisms could be used to improve routing by conditioning next-hop decisions to the past behavior of peers. However, churn may severely hinder the applicability of reputations mechanisms. In particular, short peer lifetimes imply that reputations are typically generated from a small number of transactions and are few reliable. To examine how high rates of churn affect reputation systems, we present an analytical model to study the potential damage done by malicious peers together with churn. With our model, we show that it cannot be expected in general that reputations are reliable. We then analyze the impact of this result by proposing a new routing protocol for Chord. Mainly, the protocol exploits reputation to improve the decision about which neighbor select as next-hop peer. Our experimental results show that routing algorithms can obtain important benefits from reputation - even when peer lifetimes are short and the fraction of bad users is moderate.",2009,0, 557,Comparison of AI Techniques for Fighting Action Games - Genetic Algorithms/Neural Networks/Evolutionary Neural Networks,"

Recently, many studies have attempted to implement intelligent characters for fighting action games. They used genetic algorithms, neural networks, and evolutionary neural networks to create intelligent characters. This study quantitatively compared the performance of these three AI techniques in the same game and experimental environments, and analyzed the results of the experiments. As a result, the neural network and the evolutionary neural network showed excellent performance in the final convergence score ratio, while the evolutionary neural network and genetic algorithms showed excellent performance in convergence speed. In conclusion, the evolutionary neural network, which showed excellent results in both the final convergence score ratio and the convergence speed, is the most appropriate AI technique for fighting action games.

",2007,0, 558,Comparison of Computerized Image Analyses for Digitized Screen-Film Mammograms and Full-Field Digital Mammography Images,"

We have developed computerized methods for the analysis of mammographic lesions in order to aid in the diagnosis of breast cancer. Our automatic methods include the extraction of the lesion from the breast parenchyma, the characterization of the lesion features in terms of mathematical descriptors, and the merging of these lesion features into an estimate of the probability of malignancy. Our initial development was performed on digitized screen film mammograms. We report our progress here in converting our methods for use with images from full-field digital mammography (FFDM). It is apparent from our initial comparisons on CAD for SFMD and FFDM that the overall concepts and image analysis techniques are similar; however, reoptimization for a particular lesion segmentation or a particular mammographic imaging system is warranted.

",2006,0, 559,Comparison of Selected Survey Instruments for Software Team Communication Research,"One of the factors that influence task productivity is communication pertaining to task, resources, and organizational issues. The objective of this research is to explore the availability of validated survey instruments in the area of organizational and team communication and assess their applicability in software development team research. Based on past studies, we have selected six comprehensive instruments. This paper provides a brief summary of the instruments and the various dimensions that they cover. The instruments are classified using a Group Variable Classification System (GVCS - input-process - output - feedback) model for team research. As most of the instruments have been used in manufacturing or service environments in the past, these instruments need to be tailored for use in software team research. They can also be used as a basis for developing new instruments",2006,0, 560,Comparison of Two Contributive Analysis Methods Applied to an ANN Modeling Facial Attractiveness,"Artificial neural networks (ANNs) are powerful predictors. ANNs, however, essentially function like 'black boxes' because they lack explanatory power regarding input contribution to the model. Various contributive analysis algorithms (CAAs) have been developed to apply to ANNs to illuminate the influences and interactions between the inputs and thus, to enhance understanding of the modeled function. In this study two CAAs were applied to an ANN modeling facial attractiveness. Conflicting results from these CAAs imply that more research is needed in the area of contributive analysis and that researchers should be cautious when selecting a CAA method",2006,0, 561,Comparison of Two Methods to Develop Breast Models for Simulation of Breast Tomosynthesis and CT,"

Dedicated x-ray computed tomography (CT) of the breast using a cone-beam flat-panel detector system is a modality under investigation by a number of research teams. Several teams, including ours, have fabricated prototype, bench-top flat-panel CT breast imaging (CTBI) systems. We also use computer simulation software to optimize system parameters. We are developing a methodology to use high resolution, low noise CT reconstructions of fresh mastectomy specimens in order to generate an ensemble of 3D digital breast phantoms that realistically model 3D compressed and uncompressed breast anatomy. The resulting breast models can then be used to simulate realistic projection data for both Breast Tomosynthesis (BT) and Breast CT (BCT) systems thereby providing a powerful evaluation and optimization mechanism for research and development of novel breast imaging systems as well as the optimization of imaging techniques for such systems. Having the capability of using breast object models and simulation software is clinically significant because prior to a clinical trial of any prototype breast imaging system many parameter tradeoffs can be investigated in a simulation environment. This capability is worthwhile not only for the obvious benefit of improving patient safety during initial clinical trials but also because simulation prior to prototype implementation should result in reduced cost and increased speed of development. The main goal of this study is to compare results obtained using two different methods to develop breast object models in order to select the better technique for developing the entire ensemble.

",2008,0, 562,Comparison of UML and text based requirements engineering,"Inspired by BLAST and related sequence comparison algorithms, we have developed a method for the direct comparison of query text against database text as an improvement upon traditional keyword-based searches. The primary application of our implementation, eTBLAST, is to better select those database entries (abstracts, in the case of MEDLlNE) of most relevance to a given query. eTBLAST takes as input natural text instead of keywords, allows refinement of retrieved hits through iteration, can be applied to any text (demonstrated here on biomedical databases), and allows inspection of the local space around a query through simple visualization methods.",2004,0, 563,Competency Rallying Processes in Virtual Organizations,"The paper's contribution is a description of a knowledge management process and tools for a virtual organization. A virtual organization is described as an interconnected set of management systems. The first connection is conceptual in nature and is seen through a project which defines the need for knowledge management. The second connection is informational and is experienced through electronic management tools. Knowledge management is composed of knowledge creation, assimilation, and dissemination. Management tools connect management systems for improved information flow and rapid knowledge growth to effect project execution. These concepts are demonstrated using a project view of a virtual organization's tasks",1996,0, 564,Competitive Maintenance of Minimum Spanning Trees in Dynamic Graphs,

We consider the problem of maintaining a minimum spanning tree within a graph with dynamically changing edge weights. An online algorithm is confronted with an input sequence of edge weight changes and has to choose a minimum spanning tree after each such change in the graph. The task of the algorithm is to perform as few changes in its minimum spanning tree as possible.

We compare the number of changes in the minimum spanning tree produced by an online algorithm and that produced by an optimal offline algorithm. The number of changes is counted as the number of edges changed between spanning trees in consecutive rounds.

For any graph with $n$ vertices we provide a deterministic algorithm achieving a competitive ratio of $\mathcal{O}(n^2)$. We show that this result is optimal up to a constant. Furthermore we give a lower bound for randomized algorithms of $\Omega(\log n)$. We show a randomized algorithm achieving a competitive ratio of $\mathcal{O}(n\log n)$ for general graphs and $\mathcal{O}(\log n)$ for planar graphs.

",2007,0, 565,Complex Network-Based Information Systems (CNIS) Standards: Toward an Adoption Model,"This paper proposes an adoption model for complex network-based information systems (CNIS) standards which extends current diffusion of innovation theory within a specific technological context, that of ambient intelligence (AmI). The issue of open and closed standards is especially important for networked information systems; however, a range of factors impact the adoption decision and challenge existing models of adoption. Such models are based on DOI theories that have their roots in more simplistic technological innovations. In order to extend the current view on adoption, the adoption context must be closely considered. Agile organizations must constantly survey the external environment to determine the potential of emerging technology. Open standards may make organizations less vulnerable to environmental flux due to uncertainties caused by the lack of transparency of proprietary standards. Accordingly, the proposed model moves toward providing a means to assess factors impacting the adoption of open and proprietary standards.",2006,0, 566,Complexity of indexing: Efficient and learnable large database indexing,"To realize content-based retrieval of large image databases, it is required to develop an efficient index and retrieval scheme. This paper proposes an index algorithm of clustering called CMA, which supports fast retrieval of large image databases. CMA takes advantage of k-means and self-adaptive algorithms. It is simple and works without any user interactions. There are two main stages in this algorithm. In the first stage, it classifies images in a database into several clusters, and automatically gets the necessary parameters for the next stage, the k-means iteration. The CMA algorithm is tested on a large database of more than ten thousand images and compared with the k-means algorithm. Experimental results show that this algorithm is effective in both precision and retrieval time.",2005,0, 567,Complexity of Information Systems Development Projects: Conceptualization and Measurement Development,"Information systems development projects (ISDPs) are known for a high failure rate. This high failure rate can largely be attributed to the complexity and changes involved during ISDP lifecycles. In complex, dynamic environments, ISDP team flexibility to respond to various changes becomes critical to improved project performance. However, there has been a lack of understanding about the relationships between flexibility, complexity and the performance of ISDPs.

The objectives of this dissertation are three-fold. First, the dissertation conceptualizes ISDP flexibility and develops a measurement for the construct. Second, it develops a conceptual framework and a measurement for assessing ISDP complexity. Third, drawing upon the contingency perspectives, it examines the effects of ISDP flexibility on IS performance under varying degrees of ISDP complexity.

This dissertation uses a five-phase research method to achieve the research objectives. It employs field interviews, focus group discussions, pretests, pilot tests, and a large-scale cross-sectional survey with 505 ISDP managers.

The dissertation examines two types of ISDP flexibility: the extent of response by the project team to changes and the efficiency of response. A 25-item measurement of ISDP flexibility is developed and validated using confirmatory factor analysis. A two dimensional conceptual framework is proposed to define four types of ISDP complexity: structural organizational complexity, structural IT complexity, dynamic organizational complexity, and dynamic IT complexity. A 20-item measurement of ISDP complexity is developed and validated.

The research finds that both types of ISDP flexibility have positive main effects on IS performance. However, the effect of ISDP flexibility on IS performance is stronger when ISDP complexity is higher than when it is lower. There was a negative relationship between the two types of ISDP flexibility.

The research provides conceptual frameworks and validated measures of ISDP flexibility and ISDP complexity. With these measures, IS managers can assess and better manage their projects. The research provides empirical evidence about the relationships between flexibility, complexity and the performance of ISDPs. The findings suggest that flexibility generally improves project performance. However, managers need to assess the degree of project complexity to determine the appropriate level of ISDP flexibility.",2003,0, 568,Component and Service Technology Families,"A new design is proposed to enhance an existing architecture for delivering future millimeter-waveband (mm-WB) Radio-over-Fibre (RoF) for wireless services with the use of Dense Wavelength Division Multiplexing (DWDM) architecture over a Passive Optical Network (PON). For the conceptual illustration, we deployed the original DWDM RoF-PON system and then redesigned its infrastructure. In the downlink, the mm-WB RF signal is obtained at each Optical Network Unit (ONU) by using optical Remote Heterodyning Detection (RHD) between two optical carriers simultaneously, which are generated using a single laser source. The generated RF modulated signal has a frequency of 12.5 GHz. Such an RoF system is simple, cost-effective, low-maintenance and is immune to laser phase noise in principle.",2015,0, 569,Computational Intelligence in Bioinformatics,Provides notice of upcoming conference events of interest to practitioners and researchers.,2011,0, 570,Computational investigations of quasirandom sequences in generating test cases for specification-based tests,"This paper presents work on generation of specification-driven test cases based on quasirandom (low-discrepancy) sequences instead of pseudorandom numbers. This approach is novel in software testing. The enhanced uniformity of quasirandom sequences leads to faster generation of test cases covering all possibilities. We demonstrate by examples that quasirandom sequences can be a viable alternative to pseudorandom numbers in generating test cases. In this paper, we present a method that can generate test cases from a decision table specification more effectively via quasirandom numbers. Analysis of a simple problem in this paper shows that quasirandom sequences achieve better data than pseudorandom numbers, and have the potential to converge faster and so reduce the computational burden. The use of different quasirandom sequences for generating test cases is presented in this paper",2006,0, 571,"Computational Math, Science, and Technology: A New Pedagogical Approach to Math and Science Education","We describe math modeling and computer simulations as a new pedagogical approach to math and science education. The Computational approach to Math, Science, and Technology (CMST) involves inquiry-based, project-based, and team-based instruction. It takes the constructivist approach recommended by national learning standards. Our college has formed a partnership with local school districts to study the impact of CMST on student achievement in math and science. We have trained more than 60 middle and high school teachers and teacher candidates. Preliminary results indicate that CMST-based professional development contributed to an increase in the passing rate (from 39% to 85%) of the Rochester City School District in the New York State high school math exam.
This paper establishes relevant literature supporting CMST as an important scientific and educational methodology.",2004,0, 572,"Computational Representation of Collaborative Learning Flow Patterns using IMS Learning Design","This paper proposes collaborative learning flow patterns (CLFPs), which represent best practices in collaborative learning structuring, as a central element of a kind of bi-directional linkage that enables teachers to play the role of designers influencing the behavior of CSCL (computer-supported collaborative learning) technological solutions. Additionally, this paper describes a technological approach for achieving such a scenario. That approach is based on the Collage authoring tool that provides CLFPs as IMS LD templates and the Gridcole system, capable of interpreting the resulting CLFP-based LDs and integrating the service-oriented tools needed to support the (collaborative) learning activities as prescribed in those LDs",2006,0, 573,Computer aided teaching of programming languages,"Sim8086 is a Microsoft Windows-based computer-aided learning (CAL) system for supporting the teaching of assembly language programming to first-year undergraduate students. This paper takes the development of Sim8086 as a case study and uses it to illustrate some of the professional, economic and educational issues faced by teachers who design and develop CAL systems. In particular, the paper charts the interaction between these elements and seeks to offer guidance to educators who find it difficult to embark on CAL developments. Further, we report our efforts to make opportunistic use of resources as and when they become available, in order to improve the chance of delivering CAL software",1996,0, 574,Computer-Assisted Item Generation for Listening Cloze Tests and Dictation Practice in English,"As the steps of globalization accelerate, learning foreign languages has become a modern challenge for everyone. Obtaining a broad range of learning and practice material will boost the efficiency of language learning, and the Web serves as a rich source of text material. We offer methods for algorithmically creating test items that may meet the needs of individual learners and instructors of English. At the current stage, we explore the generation of test items for students' practicing listening cloze in English, using text material obtained from the Web. Relying on the text corpus, voice-synthesizer software, and linguistics-based criteria, our system identifies candidate sentences and selects distractors for composing test items for listening cloze. Teachers can select and compose the machine-generated items as they wish, and allow students to practice the composed items. In addition, the current system records histories of the performance of individual students, so the resulting system paves our way to adaptively supporting students' activities for polishing their competence in English listening.",2005,0, 575,Computerized Assessment Tool for Mouse Operating Proficiency,"This paper substantiates the process of developing a computerized mouse proficiency assessment tool (CAT-MP), which could be used to measure the proficiency of clients in mouse operating skills. Moreover, CAT-MP also helps evaluators to diagnose specific difficulties and provide individual remedies for persons with limitations in accessing a computer.
Based on the results of a task analysis of mouse operation, clinical experience and a related literature review, CAT-MP was designed with four modules responsible for communicating with interfaces and databases, organizing test tasks, collecting data and analyzing data, respectively. Besides the contents of these modules, the tasks of the four subtests, the procedure of measurement, and the results of the reliability and validity of CAT-MP will be addressed in detail in this paper.",2004,0, 576,Computer-Mediated Collaborative Reasoning and Intelligence Analysis,"Fuzzy-timing high-level Petri nets and workflow technology are introduced to model and analyze collaborative design activities. A set of reasoning rules and criteria are proposed to manage the uncertainty of temporal parameters in collaborative activities, verify the temporal logic and analyze quantitatively the collaboration performance. An example is given to explain how to model collaborative design activities and analyze their performance by using these rules and criteria",2006,0, 577,"Computing Curricula 2005, The Overview Report","This paper reports on the activities and results from the 4th International Workshop on Net-Centric Computing (NCC 2005) that was held on September 25, 2005 in Budapest, Hungary as part of the 13th Software Technology and Engineering Practice conference (STEP 2005). The theme of NCC 2005 was ""Middleware: The Next Generation."" The workshop focused on issues related to emerging middleware technologies in an NCC environment. Representative topics included gap analysis of current middleware offerings, empirical studies of middleware as an enabling technology for NCC applications, and forecasts for new directions in middleware in the coming years",2005,0, 578,Computing Ripple Effect for Object Oriented Software,"Reusable product line software architectures and supporting components are the focus of an increasing number of software organizations attempting to reduce software costs. One essential attribute of a product line architecture is that it effectively isolates the logical, or static, aspects of the application from any product specific variations in the physical architecture or execution environment. A primary element of this isolation is hardware and low-level software (e.g., operating system) independence. This paper describes our experiences on developing object-oriented physical architectures for large scale reusable embedded systems, and on various ways that physical architecture attributes can be designed for flexibility without introducing volatility into the application architecture",2001,0, 579,Conceptual Model of Risk: Towards a Risk Modelling Language,"With the rapid development of underground metropolitan transportation, the problem of safety risks, especially fire risks, has always been accorded great attention by researchers and practitioners. In order to reduce the uncertainty of fire risks in underground metropolitan transportation, the main objective of this paper is to devise a proactive approach to dynamically predict and control conditions leading to fire hazards. A literature review of fire risks in underground metropolitan transportation is conducted first. Then, a predictive model that is based on the continuous tracking of fire hazards is applied to predict the fire risks of underground metropolitan transportation.
Using the model, a given underground metropolitan transportation system can be identified as ""under control"" or ""out of control"" based on the methods of sampling and control charts. This research provides an effective and valid method to lessen the uncertainty of fire hazards in underground metropolitan transportation as much as possible.",2008,0, 580,Conceptual Modeling of Structure and Behavior with UML: The Top Level Object-Oriented Framework (TLOOF) Approach,"

In the last decade UML has emerged as the standard object-oriented conceptual modeling language. Since UML is a combination of previous languages, such as Booch, OMT, Statecharts, etc., the creation of multi-views within UML was unavoidable. These views, which represent different aspects of system structure and behavior, overlap, raising consistency and integration problems. Moreover, the object-oriented nature of UML sets the ground for several behavioral views in UML, each of which is a different alternative for representing behavior. In this paper I suggest a Top-Level Object-Oriented Framework (TLOOF) for UML models. This framework, which serves as the glue of use case, class, and interaction diagrams, enables changing the refinement level of a model without losing the comprehension of the system as a whole and without creating contradictions among the mentioned structural and behavioral views. Furthermore, the suggested framework does not add new classifiers to the UML metamodel, hence, it does not complicate UML.

",2005,0, 581,Conceptual-level workflow modeling of scientific experiments using NMR as a case study,"AbstractBackgroundScientific workflows improve the process of scientific experiments by making computations explicit, underscoring data flow, and emphasizing the participation of humans in the process when intuition and human reasoning are required. Workflows for experiments also highlight transitions among experimental phases, allowing intermediate results to be verified and supporting the proper handling of semantic mismatches and different file formats among the various tools used in the scientific process. Thus, scientific workflows are important for the modeling and subsequent capture of bioinformatics-related data. While much research has been conducted on the implementation of scientific workflows, the initial process of actually designing and generating the workflow at the conceptual level has received little consideration.ResultsWe propose a structured process to capture scientific workflows at the conceptual level that allows workflows to be documented efficiently, results in concise models of the workflow and more-correct workflow implementations, and provides insight into the scientific process itself. The approach uses three modeling techniques to model the structural, data flow, and control flow aspects of the workflow. The domain of biomolecular structure determination using Nuclear Magnetic Resonance spectroscopy is used to demonstrate the process. Specifically, we show the application of the approach to capture the workflow for the process of conducting biomolecular analysis using Nuclear Magnetic Resonance (NMR) spectroscopy.ConclusionUsing the approach, we were able to accurately document, in a short amount of time, numerous steps in the process of conducting an experiment using NMR spectroscopy. The resulting models are correct and precise, as outside validation of the models identified only minor omissions in the models. In addition, the models provide an accurate visual description of the control flow for conducting biomolecular analysis using NMR spectroscopy experiment.",2007,0, 582,Conducting Qualitative Research in an International and Distributed Research Team: Challenges and Lessons Learned,"In this paper, we discuss challenges for planning and executing qualitative research conducted by an international research project team. The study comprised an exploratory examination of strategies of offshoring and onshoring for software development. An important methodological challenge is that the members of the research team live in different countries, rely on different languages and originate from different cultures. These challenges are in many ways analogous to those inherent in the subject we are researching, distributed software development. To explore these issues, we present the difficulties we faced on collecting and analyzing the qualitative data. Our main contribution is the identification of challenges, strategies to overcome them, and a set of lessons learned.",2008,0, 583,Confidence in software cost estimation results based on MMRE and PRED,"

Bootstrapping is used to approximate the standard error and 95% confidence intervals of MMRE and PRED for a number of COCOMO I model variations applied to four PROMISE data sets. This is used to illustrate a lack of confidence in numerous published cost estimation research results based on MMRE and PRED comparisons, such as model selection. We show that many such results are of questionable significance due to large possible variations resulting from population sampling error and suggest that a number of inconsistent and contradictory results may be explained by this. By using more standard statistical approaches that account for standard error, we may reduce the incidence of this and obtain greater confidence in cost estimation research results.

",2008,0, 584,Configuration of Dynamic SME Supply Chains Based on Ontologies,"Supply chain management has gained renewed interest among researchers. This is primarily due to the availability of timely information across the various stages of the supply chain, and therefore the need to effectively utilize the information for improved performance. Although information plays a major role in effective functioning of supply chains, there is a paucity of studies that deal specifically with the dynamics of supply chains and how data collected in these systems can be used to improve their performance. In this paper we develop a framework, with machine learning, for automated supply chain configuration. Supply chain configuration used to be mostly a one-shot problem. Once a supply chain was configured, researchers and practitioners were more interested in means to improve performance given that initial configuration. However, developments in eCommerce applications and faster communication over the Internet in general necessitate dynamic (re)configuration of supply chains over time to take advantage of better configurations. We model each actor in the supply chain as an agent who makes independent decisions based on information gathered from the next level upstream. Using examples, we show performance improvements of the proposed adaptive supply chain configuration framework over static configurations.",2004,0, 585,Constraint Patterns and Search Procedures for CP-Based Random Test Generation,"

Constraint Programming (CP) technology has been extensively used in Random Functional Test Generation during the recent years. However, while the existing CP methodologies are well tuned for traditional combinatorial applications e.g. logistics or scheduling, the problem domain of functional test generation remains largely unexplored by the CP community and many of its domain specific features and challenges are still unaddressed. In this paper we focus on the distinctive features of CP for the random functional test generation domain and show how these features can be addressed using a classical CP engine with custom extensions. We present some modeling and solving problems arising in this context and propose solutions. In particular, we address the way of model building in the problem domain of test generation which we refer to as multi-layer modeling. In this context we introduce constraint patterns of composite variable, implied condition and implied composite variable condition, define their semantics and propose schemes for their CSP modeling. The paper also addresses specific problems arising at the solving stage in the problem domain of random test generation. We propose solutions to these problems by means of custom random search algorithms. This approach is illustrated on the examples of the disjunction constraint and conditional variable instantiation. The latter algorithm addresses the feature of dynamic modeling required in the test generation task. To demonstrate the effectiveness of our approach we present experimental results based on the implementation using ILOG Solver as a CP engine.

",2007,0, 586,Constructing a core literature for computing education research,"After four decades of research on a broad range of topics, computing education has now emerged as a mature research community, with its own journals, conferences, and monographs. Despite this success, the computing education research community still lacks a commonly recognized core literature. A core literature can help a research community to develop a common orientation and make it easier for new researchers to enter the community. This paper proposes an approach to constructing and maintaining a core literature for computing education research. It includes a model for classifying research contributions and a methodology for determining whether they should be included in the core. The model and methodology have been applied to produce an initial list of core papers. An annotated list of these papers is given in appendix A.",2005,0, 587,Constructing a market domain model for start-up software technology companies: A case study,"The market for a complex technology product is sometimes called reference business because references are emphasized by corporate customers. A first customer reference is especially important for a start-up technology company attempting to enter the business-to-business market with complex products. Topics relating to customer references have received scant attention from scholars although they embed substantial business relevance. This case study concentrates on evaluating concepts that are central to customer references from the viewpoint of the start-up technology companies. Of concepts prevalent in current literature, those concerning the use of the first customer references, in particular, form an inadequate base for research and are often vague. The purpose here is to introduce a domain model that describes the key concepts and the relationships between them concerning the focus of this present article. The domain modeling technique is a well-known and widely used tool for defining concepts for large-scale information systems. Domain modeling increases understanding of the present problem domain by structuring knowledge into classes, attributes, and relations. This case study approaches the identification of concepts via an illustrative example from software business. Previously known concepts, close to the topic of this present article, are then re-evaluated based on our literature review. Redefinitions of the customer reference and related concepts are introduced.",2007,0, 588,Constructing a Process Model for Decision Analysis and Resolution on COTS Selection Issue of Capability Maturity Model Integration,"The use of commercial off-the-shelf (COTS) software product can potentially reduce cost and time for software system development. But this promise is often not realized in practice because COTS product does not serve organizations' expectations. Thus, decision making on selecting COTS product becomes a critical task which should be performed in a systematic and repeatable manner. The decision analysis and resolution (DAR) process area of capability maturity model integration (CMMI) provides practices for formal evaluation process which could be applied to COTS selection. However, CMMI does not describe how to conduct the process that can achieve its defined goals. This research presents the DAR on COTS selection (DARCS) process model according to DAR process area of the CMMI. 
The model consists of three layers: a core workflow layer, a workflow details layer, and a description layer.",2007,0, 589,Constructing Meta-CASE Workbenches by Exploiting Visual Language Generators,"In this paper, we propose an approach for the construction of meta-CASE workbenches, which suitably integrates the technology of visual language generation systems, UML metamodeling, and interoperability techniques based on the GXL (graph exchange language) format. The proposed system consists of two major components. Environments for single visual languages are generated by using the modeling language environment generator (MEG), which follows a metamodel/grammar-approach. The abstract syntax of a visual language is defined by UML class diagrams, which serve as a base for the grammar specification of the language. The workbench generator (WoG) allows designers to specify the target workbench by means of a process model given in terms of a suitable activity diagram. Starting from the supplied specification, WoG generates the customized workbench by integrating the required environments.",2006,0, 590,Context Aware Body Area Networks for Telemedicine,"In this paper, an impedance matching network is designed for an RF energy harvester used in the context of Body Sensor Area Networks. It is a fully passive circuit and allows compensating for the antenna impedance variations linked with the presence of the body. The designed RF energy harvester has an average RF-to-DC conversion of 40% for an incident power of -10 dBm for a variation of 30 to 120 Ω for the real part and of 0 to 150 Ω for the imaginary part of the antenna impedance.",2015,0, 591,Contexts of Relevance for Information Retrieval System Design,"Recently, the Korean Pharmaceutical Association has run public service advertisements for the prevention of medicinal misuse and for protecting the environment from medicinal substances. In addition, cost and procedural complexity have increased with the specialization of dispensing and medical practice. To overcome this situation, we felt the need for a medicine information system. Ubiquitous and convergence technology have become trends in the IT field. In this research, we designed a prototype of a medicine information system for the management and prevention of medicinal misuse using IT technologies.",2010,0, 592,Contextual Classifier Ensembles,"The idea of contextual classifier ensembles extends the application possibility of additional measures of quality of base and ensemble classifiers in the process of contextual ensemble design. These measures, besides the obvious classifier accuracy and diversity/similarity, take into consideration complexity, interpretability and significance. The complexity (the number of used measures and the multi-level measure structure), the diversity of the scales of the used measures and the necessity of fusing different measures into one assessment value are the reasons for user support in contextual classifier ensemble design using fuzzy logic and multi-criteria analysis. The aim of this paper is to present a framework for the process of contextual ensemble design.",2015,0, 593,Contextual effects on the usability dimensions of mobile value-added services: a conceptual framework,"

The emergence of mobile value-added services has introduced a broad range of new use contexts, which were not faced in the stationary PC environment. Thus, extant usability models need to be modified in order to capture this change. As such, this paper suggests a conceptual mobile usability model, based on Nielsen's (1993) usability definition. The model explores the role of the type of the mobile service and its characteristics in determining the importance of the various usability dimensions. Avenues for validating this framework are then drawn. Overall, this framework forms the foundations for future mobile usability research, and enables stakeholders to focus on relevant mobile usability dimensions.

",2006,0, 594,"Continuous Evaluation of Information System Development A Reference Model","The article builds the coordinating development evaluation index system of energy-economy-environment (3E) which shows for hierarchical structural composed of 44 indexes based on investigation of energy, economic and environmental of Hebei Province, and uses Principal Component Analysis method to calculate the comprehensive development level index of each model; on this basis, it calculates the 3E system coordinating degree by building up Membership Function model. The result shows that Hebei Province' 3E system is compared coordinating status, but the contradiction is standing out in some year and some subsystem, which provides theoretical basis and date support for Hebei Province in formulating sustainable development policy of overall consideration.",2010,0, 595,Continuous SPA: Continuous Assessing and Monitoring Software Process,"In the past ten years many assessment approaches have been proposed to help manage software process quality. However, few of them are configurable and real-time in practice. Hence, it is advantageous to find ways to monitor the current status of software processes and detect the improvement opportunities. In this paper, we introduce a web- based prototype system (Continuous SPA) on continuous assessing and monitoring software process, and perform a practical study in one process area: project management. Our study results are positive and show that features such as global management, well-defined responsibility and visualization can be integrated in process assessment to help improve the software process management.",2007,0, 596,Coordination Network Analysis: A Research Framework for Studying the Organizational Impacts of Service-Orientation in Business Intelligence,"Business intelligence (BI) technology and research is maturing. In evidence, some practitioners have indicated a shift in the nature of their data warehousing challenges from being technical in nature to organizational. This research is based on a case study involving the BI services unit of a large, Fortune 500, financial services organization that is experiencing some of those ""organizational"" challenges. As a solution, this organization decided to implement a service-oriented enterprise (SOE) structure to address some of those challenges such as collaboration and standardization in a complex, interdependent environment. However, because of the newness of SOE and the limited volume of rigorous research in the area, it is difficult understand or estimate the organizational and human impacts it will have. This paper proposes a coordination network analysis as a research methodology for estimating and optimizing the impacts of SOE at the individual and group level",2007,0, 597,Corpus-Based Empirical Account of Adverbial Clauses Across Speech and Writing in Contemporary British English,"

Adverbial subordinators are an important index of different types of discourse and have been used, for example, in automatic text classification. This article reports an investigation of the use of adverbial clauses based on a corpus of contemporary British English. It demonstrates on the basis of empirical evidence that it is simply a misconceived notion that adverbial clauses are typically associated with informal, unplanned types of discourse and hence spoken English. The investigation initially examined samples from both spoken and written English, followed by a contrastive analysis of spontaneous and prepared speech, to be finally confirmed by evidence from a further experiment based on timed and untimed university essays. The three sets of experiments consistently produced empirical evidence which irrefutably suggests that, contrary to claims by previous studies, the proportion of adverbial clauses is consistently much lower in speech than in writing and that adverbial clauses are a significant characteristic of planned, elaborated discourse.

",2006,0, 598,Cost estimation for cross-organizational ERP projects: research perspectives,"

There are many methods for estimating size, effort, schedule and other cost aspects of IS projects, but only one specifically developed for Enterprise Resource Planning (ERP) (Stensrud, Info Soft Technol 43(7):413-423, 2001) and none for simultaneous, interdependent ERP projects in a cross-organizational context. The objective of this paper is to sketch the problem domain of cross-organizational ERP cost estimation, to survey available solutions, and to propose a research program to improve those solutions. In it, we: (i) explain why knowledge in the cost estimation of cross-organizational ERP is fragmented, (ii) assess the need to integrate research perspectives, and (iii) propose research directions that an integrated view of estimating cross-organizational ERP project cost should include.

",2008,0, 599,Coverage Metrics for Continuous Function Charts,"Continuous Function Charts are a diagrammatical language for the specification of mixed discrete-continuous embedded systems, similar to the languages of Matlab/Simulink, and often used in the domain of transportation systems. Both control and data flows are explicitly specified when atomic units of computation are composed. The obvious way to assess the quality of integration test suites is to compute known coverage metrics for the generated code. This production code does not exhibit those structures that would make it amenable to ""relevant"" coverage measurements. We define a translation scheme that results in structures relevant for such measurements, apply coverage criteria for both control and dataflows at the level of composition of atomic computational units, and argue for their usefulness on the grounds of detected errors.",2004,0, 600,Creating a Distributed Field Robot Architecture for Multiple Robots,"Applications in Field Robotics often require synchronous acquiring various environment information using multiple sensors and devices, including digital cameras. For such applications, this paper proposes a software architecture that is designed to cope with large amount of mixed type data, acquired with various sample times, and gathered from multiple sensors and sensing devices. By allowing the integration of cameras having widely available communication interfaces, such as USB and Ethernet, the presented architecture is intended to allow a cost-effective and flexible robotic system configuration, providing consistent data and loss-less images to be further used in automated data processing. The paper presents the software design, which was implemented in LabVIEW environment, and an illustrative hardware configuration.",2015,0, 601,Creating Human Resources for Information Technology - A Systemic Study,"Information Technology has emerged as a key dominant sector of the Indian economy. Various agencies have studied the issue of developing Human Resources for the IT sector. The quantitative need is sought to be filled by opening more and more engineering colleges ? a short-sighted measure. The issue of human resources development for IT cannot be handled through a linear Cause - Effect model. In this report we analyze the needs and constraints of various stakeholders affected by the issue through a Systems Approach. We derive viable design interventions that would balance the needs and constraints. We attempt to map the knowledge and skill components for different educational streams against taxonomy of IT careers. We recognize the constraints in terms of instructional material and faculty. The limited funds available for the education sector have to be judiciously balanced between conventional brick and mortar infrastructure and providing an e-learning environment. We bring out the interconnectedness in the roles of Government, Academia, Industry and Professional bodies. Although we would have liked to tackle the issue of IT education in its entirety, we have focused on Software for the simple reason that we have a lesser understanding about the other components, notably Hardware and Networks. The design solutions contained in the paper need a wide discussion and eventual consensus among educationists, planners and industry if India's potential position as an IT superpower is to be realized. ",2005,0, 602,Creation of a Master Table for Checking Indication and Contraindication of Medicine,"

To develop a system for checking indication and contraindication of medicines in a prescription order entry system, a master table consisting of the disease names corresponding to the medicines adopted in a hospital is needed. The creation of this table requires considerable manpower. We developed a Web-based system for constructing a medicine/disease thesaurus and a knowledge base. By authority management of users, this system enables many specialists to create the thesaurus collaboratively without confusion. It supports the creation of a knowledge base using concept names by referring to the thesaurus, which is automatically converted to the check master table. When a disease name or medicine name was added to the thesaurus, the check table was automatically updated. We constructed a thesaurus and a knowledge base in the field of circulatory system disease. The knowledge base linked with the thesaurus proved to be efficient for making the check master table for indication/contraindication of medicines.

",2004,0, 603,Critic Systems -- Towards Human--Computer Collaborative Problem Solving,"We take a visual analytics approach towards an operational model of the human-computer system. In particular, the approach combines ideas from (human-centered) interactive visualization and cognitive science. The model we derive is a first step on the path to a more complete evaluated and validated model. However, even at this stage important principles can be extracted for visual analytics systems that closely couple automated analyses with human analytic reasoning and decision-making. These improved systems can then be applied effectively to difficult, open-ended problems involving complex data. Another advantage of this approach is that specific gaps are revealed in both visual analytics methods and cognitive science understanding that must be filled in order to create the most effective systems. Related to this is that the resulting visual analytics systems built upon the human-computer model will provide testbeds to further evaluate and extend cognitive science principles.",2016,0,71 604,Critical factors in software outsourcing: a pilot study,"Software outsourcing partnership (SOP) is mutually trusted inter-organisational software development relationship between client and vendor organisations based on shared risks and benefits. SOP is different to conventional software development outsourcing relationship, SOP could be considered as a long term relation with mutual adjustment and renegotiations of tasks and commitment that exceed mere contractual obligations stated in an initial phase of the collaboration. The objective of this research is to identify various factors that are significant for vendors in conversion of their existing outsourcing contractual relationship to partnership. We have performed a systematic literature review for identification of the factors. We have identified a list of factors such as 'mutual interdependence and shared values', 'mutual trust', 'effective and timely communication', 'organisational proximity' and 'quality production' that play vital role in conversion of the existing outsourcing relationship to a partnership.",2014,0, 605,Cross-Cultural Study of Avatars? Facial Expressions and Design Considerations Within Asian Countries,"

Avatars are increasingly used to express our emotions in our online communications. Such avatars are used based on the assumption that avatar expressions are interpreted universally across cultures. However, our earlier study showed that there are cultural differences in interpreting avatar facial expressions. This paper summarizes the results of cross-cultural evaluations of avatar expressions among five Asian countries. The goals of this study are: 1) to investigate cultural differences in avatar expression evaluation and apply findings from psychological studies of human facial expression recognition, 2) to identify design features that cause cultural differences in avatar facial expression interpretation. The results confirmed that 1) there are cultural differences in interpreting avatars' facial expressions among Asian countries, and the psychological theory that suggests physical proximity affects facial expression recognition accuracy is also applicable to avatar facial expressions, 2) use of gestures and gesture marks may sometimes cause counter-effects in recognizing avatar facial expressions.

",2007,0, 606,Cross-cutting techniques in program specification and analysis,"D is a new programming language. This is an object-oriented, imperative, multi-paradigm system programming language. Regression testing on D programming language still untouched by researchers. Our research attempts to bridge this gap by introducing a techniques to revalidate D programs. A framework is proposed which automates both the regression test selection and regression testing processes for D programming language. As part of this approach, special consideration is given to the analysis of the source code of D language. In our approach system dependence graph representation will be used for regression test selection for analyzing and comparing the code changes of original and modified program. First we construct a system dependence graph of the original program from the source code. When some modification is executed in a program, the constructed graph is updated to reflect the changes. Our approach in addition to capturing control and data dependencies represents the dependencies arising from object-relations. The test cases that exercise the affected model elements in the program model are selected for regression testing. Empirical studies carried out by us show that our technique selects on an average of 26.36. % more fault-revealing test cases compared to a UML based technique while incurring about 37.34% increase in regression test suite size.",2014,0, 607,Crossing Boundaries with Web-Based Tools for Learning Object Evaluation,"With the development of computer science and multimedia technology, Web-based learning becomes increasingly popular. User evaluation plays a significant role in the process of guided learning. In recent years, there is great progress in the development of evaluation technology. However, few evaluation methods fully take online learning activity analysis and individual differences into account. This paper proposes a personalized user evaluation model for Web-based learning systems. The model is utilized to record and analyze various learning activities throughout the entire learning process. Considering individual differences, learners are clustered and specific evaluation standards are set for different clusters. Comprehensive evaluation is achieved by combining Analytic Hierarchy Process, Fuzzy C-Means clustering and normalization algorithm. Through the comparison with several other common evaluation methods, experimental results show that the proposed method outperforms existing ones on the accuracy of learner evaluation.",2014,0, 608,Cultural Differences in Temporal Perceptions and its Application to Running Efficient Global Software Teams,"Global software development has been found to be a difficult undertaking, in particular, when members of a single team are not co-located. Studies have looked at the impact of different cultural backgrounds, communication structures and temporal distance on the team's effectiveness. This research proposes to examine the impact of culturally based perceptions of time. A gap analysis is proposed to carry out this examination. The gap that will be measured is the gap between time-based attitudes and behavior in team unit A and team unit B where units A and B are part of the same team but are not co-located. These time-based attitudes and behavior will be compared to measures of team satisfaction and team effectiveness. 
A model of the impact of temporal cultural differences on team performance is presented, and the proposed research for testing this model is described.",2006,0, 609,"Cumulvs: Interacting with High-Performance Scientific Simulations, for Visualization, Steering and Fault Tolerance","

High-performance computer simulations are an increasingly popular alternative or complement to physical experiments or prototypes. However, as these simulations grow more massive and complex, it becomes challenging to monitor and control their execution. CUMULVS is a middleware infrastructure for visualizing and steering scientific simulations while they are running. Front-end ""viewers"" attach dynamically to simulation programs, to extract and collect intermediate data values, even if decomposed over many parallel tasks. These data can be graphically viewed or animated in a variety of commercial or custom visualization environments using a provided viewer library. In response to this visual feedback, scientists can ""close the loop"" and apply interactive control using computational steering of any user-defined algorithmic or model parameters. The data identification interfaces and gathering protocols can also be applied for parallel data exchange in support of coupled simulations, and for application-directed collection of key program data in checkpoints, for automated restart in response to software or hardware failures. CUMULVS was originally based on PVM, but interoperates well with simulations that use MPI or other parallel environments. Several alternate messaging systems are being integrated with CUMULVS to ease its applicability, e.g. to MPI. CUMULVS has recently been integrated with the Common Component Architecture (CCA) for visualization and parallel data redistribution (referred to as ""MxN""), and also with Global Arrays. This paper serves as a comprehensive overview of the CUMULVS capabilities, their usage, and their development over several years.

",2006,0, 610,Curriculum Development for Undergraduate Information Forensics Education,"A description is given of a successful 1987 program in which the Zenith Electronic Corporation in Glenview, Illinois, contracted with the University of Illinois at Urbana-Champaign (UIUC) for a nine-month continuing engineering education program. The program was aimed at the middle-level engineer, the level where the engineers had some individual experience in the target areas. The electrical engineering technology training program consisting of two separate series of courses, each with its own set of learning objectives. Six different courses were taught in the analog systems series and four were taught in the digital systems and microprocessor series. Not only were company retraining needs met, but the UIUC electrical and computer engineering department professors teaching the onsite program said that they were taking back to their classrooms valuable lessons learned from the 140 engineers in the Zenith program. Program schedule and cost as well as lessons learned and the program's future are discussed",1988,0, 611,Customized Predictive Models for Process Improvement Projects,"Currently, a number of specific international standards are made available within software engineering discipline to support Software Process Improvement (SPI) such as Capability Maturity Model Integration (CMMI), ISO/IEC 15504, ISO/IEC 90003 and ISO/IEC 12207. Some suggest on integrating and harmonizing these standards to reduce risks and enhance practicality, however there is no official initiative being made to date to implement this reality. Integrated Software Process Assessment (iSPA) is a proposed initiative being developed on the premise to harmonize and integrate a number of existing software process assessments and practices including improvement standards, models and benchmarks. A survey was conducted on thirty software practitioners to measure the strength and weaknesses of their organization's current software process. The survey also attempts to evaluate the acceptance and needs of implementation of a customized SPI model for Malaysia's SME.",2011,0, 612,CVS integration with notification and chat: lightweight software team collaboration,"Code management systems like Concurrent Version System (CVS) can play an important role in supporting coordination in software development, but often at some time removed from original CVS log entries or removed from the informal conversations around the code. The focus of this paper is one team's long term use of a solution where CVS is augmented with a lightweight event notification system, Elvin, and a tickertape tool where CVS messages are displayed and where developers can also chat with one another. Through a statistical analysis of CVS logs, and a qualitative analysis of tickertape logs and interview data, there is evidence of the tool transforming archival log entries into communicative acts and supporting timely interactions. Developers used the close integration of CVS with chat for growing team culture, stimulating focused discussion, supplementing log information, marking phases of work, coordinating and negotiating work, and managing availability and interruptibility. This has implications for consideration of more lightweight solutions for supporting collaborative software development, as well as managing awareness and interruptions more generally.",2006,0, 613,CyberIR ? A Technological Approach to Fight Cybercrime,"

Fighting cybercrime is an international engagement. Therefore, the understanding and cooperation of legal, organizational and technological affairs across countries become an important issue. Many difficulties exist over this international cooperation and the very first one is the accessing and sharing of related information. Since there are no standards or unification over these affairs across countries, the related information which is dynamically changing and separately stored in free text format is hard to manage. In this study, we have developed a method and information retrieval (IR) system to relieve the difficulty. Techniques of vector space model, genetic algorithm (GA), relevance feedback and document clustering have been applied.

",2008,0, 614,D4.3.1 Ontology Mediation Patterns Library V1,"EU-IST Integrated Project (IP) IST-2003-506826 SEKT Deliverable D4.3.1 (WP4) This deliverable describes a library of ontology mapping patterns, as well as a mapping language based on these patterns. This language, together with the mapping patterns, allows the user to more easily identify mappings and to describe mappings in an intuitive way. The mappings are organized in a library in a hierarchical fashion in order to allow for easy browsing and retrieving of mappings.",2005,0, 615,Data Mining in Tourism Demand Analysis: A Retrospective Analysis,"Due to high competition in today's business and the need for satisfactory communication with customers, companies understand the inevitable necessity to focus not only on preventing customer churn but also on predicting their needs and providing the best services for them. The purpose of this article is to predict future services needed by wireless users, with data mining techniques. For this purpose, the database of customers of an ISP in Shiraz, which logs the customer usage of wireless internet connections, is utilized. Since internet service has three main factors to define (Time, Speed, Traffics) we predict each separately. First, future service demand is predicted by implementing a simple Recency, Frequency, Monetary (RFM) as a basic model. Other factors such as duration from first use, slope of customer's usage curve, percentage of activation, Bytes In, Bytes Out and the number of retries to establish a connection and also customer lifetime value are considered and added to RFM model. Then each one of R, F, M criteria is alternately omitted and the result is evaluated. Assessment is done through analysis node which determines the accuracy of evaluated data among partitioned data. The result shows that CART and C5.0 are the best algorithms to predict future services in this case. As for the features, depending upon output of each features, duration and transfer Bytes are the most important after RFM. An ISP may use the model discussed in this article to meet customers' demands and ensure their loyalty and satisfaction.",2013,0, 616,Data Semantics in Location-Based Services,"Conventional positioning approaches for Location-based Services (LBS) such as those provided by Google and Apple, are solely driven by geometric spatial data. Especially in proactive LBS scenarios, in which users are notified as soon as they reach a certain area, locations are mostly defined by geofences and do not incorporate any further information from the semantics of the location, such as the points of interest in the vicinity or more detailed information about the district the user is in. Leveraging LBS with the extensive pool of interconnected data in the Linking Open Data (LOD) Cloud will improve the LBS experience and will enable the development of sophisticated proactive services. In this paper, we present a Semantic Positioning Platform that enhances classic positioning methods by semantic features. This platform utilizes the OpenMobileNetwork, which is a Live Crowd sourcing Platform providing static as well as dynamic mobile network topology data based on the principles of Linked Data. It further uses the Positioning Enabler that enables persistent user background tracking and subscription to Semantic LBS Services. The Semantic Positioning approach allows LBS providers to locate users with respect to the semantics of their position instead of defining spatial geofences. 
As a proof-of-concept, a Restaurant Recommender Service is presented and its applicability is evaluated.",2013,0, 617,Dataset Issues in Object Recognition,"A vision system that automatically generates an object recognition strategy from a 3D model and recognizes the object by this strategy is presented. In this system, the appearances of an object from various view directions are described with 2D features, such as parallel lines and ellipses. These appearances are then ranked, and a tree-like strategy graph is generated. It shows an efficient feature search order when the viewer direction is unknown. The object is recognized by feature detection guided by the strategy. After the features are detected, the system compares the line representation generated from a 3D model and the image features to localize the object. Perspective projection is used in the localization process to obtain the precise position and attitude of the object, while orthographic projection is used in the strategy generation process to allow symbolic manipulation",1990,0, 618,Decision station: situating decision support systems,"The objective to develop clinical decision support system (CDSS) tools is to help physicians making faster and more reliable clinical decisions. The first step in their development is choose a machine learning classifier as the system core. Previous works reported implementation of artificial neural networks, support vector machines, genetic algorithms, etc. as core classifiers for CDSS; however, these works do not report the parameters considered or the selection process for their implemented classifier. This paper is focus on the selection of a classifier to develop a CDSS. The options were reduced by reviewing advantages and disadvantages of each classifier, comparing them and considering the project parameters. The results of the analysis that take into consideration the project parameters show that some classifiers could be affected negatively in their performance, leaving support vector machines as the more suitable classifier to develop the CDSS by concluding that SVM disadvantages do not affect the accuracy and its advantages are not affected negatively by the project parameters.",2015,0, 619,Decision support for software project planning and enactment.,"Planning global software development (GSD) projects is a challenging task, as it involves balancing both technical and business related issues. While planning GSD projects, project managers face decision-making situations such as, choosing the right site to distribute work and finding an optimal work distribution considering both the cost and duration of the project. Finding an optimal solution for these decision-making situations is a difficult task without some kind of automated support, as there are many possible alternative work allocation solutions and each solution affects the cost and duration of project differently. To assist project managers in these situations, we propose a tool for planning GSD projects. The tool uses multi-objective genetic algorithms for finding optimal work allocation solutions in a trade off between cost and time. This article discusses the implementation of the tool and application of the tool using two scenarios.",2014,0, 620,"Decision Support System for Risk Management in Aquatic Products Export Trade, China","The fishing industry not only acts as foreign exchange earner but also plays an important role in Chinas economy. 
But with the development of technology and the liberalization of international trade, foreign countries have successively adopted trade barriers to limit China's fishery product exports, which has made China's fishery product exports disproportionate to its fishery production. Through a literature analysis, we find that practical research on the export trade of aquatic products is greatly lacking, so it is necessary to provide risk management for the export trade of aquatic products. A decision support system for risk management in aquatic products export trade has been developed by China Agricultural University. Based on questionnaires and interviews, we analyze the decision problems, user needs and the difficulties involved in developing the aquatic products risk management system. The system architecture and its components, such as the database, knowledge base and model base, are described. Finally, we discuss the problems we encountered during development and promotion.",2008,0, 621,Decision Support Systems: A Historical Overview,"The objective to develop clinical decision support system (CDSS) tools is to help physicians making faster and more reliable clinical decisions. The first step in their development is choose a machine learning classifier as the system core. Previous works reported implementation of artificial neural networks, support vector machines, genetic algorithms, etc. as core classifiers for CDSS; however, these works do not report the parameters considered or the selection process for their implemented classifier. This paper is focus on the selection of a classifier to develop a CDSS. The options were reduced by reviewing advantages and disadvantages of each classifier, comparing them and considering the project parameters. The results of the analysis that take into consideration the project parameters show that some classifiers could be affected negatively in their performance, leaving support vector machines as the more suitable classifier to develop the CDSS by concluding that SVM disadvantages do not affect the accuracy and its advantages are not affected negatively by the project parameters.",2015,0, 622,Decomposing Composition: Service-Oriented Software Engineers,"This article deals with software development life cycles to support development in service-centric software systems. The explosion of information technology (including service-oriented architecture) and its underlying capabilities has led to the evolution of software development life cycles over the past three decades. Software engineers are continuously exploring approaches to software and system development that are domain, application, and technology independent. Early approaches included waterfall life cycles that promote creating concrete requirements before any significant design or development occurs.",2007,0, 623,Default-Mode Network Activity Identified by Group Independent Component Analysis,"

Default-mode network activity refers to some regional increase in blood oxygenation level-dependent (BOLD) signal during baseline than cognitive tasks. Recent functional imaging studies have found co-activation in a distributed network of cortical regions, including ventral anterior cingulate cortex (vACC) and posterior cingulate cortex (PPC) that characterize the default mode of human brain. In this study, general linear model and group independent component analysis (ICA) were utilized to analyze the fMRI data obtained from two language tasks. Both methods yielded similar, but not identical results and detected a resting deactivation network at some midline regions including anterior and posterior cingulate cortex and precuneus. Particularly, the group ICA method segregated functional elements into two separate maps and identified ventral cingulate component and fronto-parietal component. These results suggest that these two components might be linked to different mental function during ""resting"" baseline.

",2009,0, 624,Defect Data Analysis Based on Extended Association Rule Mining,"This paper describes an empirical study to reveal rules associated with defect correction effort. We defined defect correction effort as a quantitative (ratio scale) variable, and extended conventional (nominal scale based) association rule mining to directly handle such quantitative variables. An extended rule describes the statistical characteristic of a ratio or interval scale variable in the consequent part of the rule by its mean value and standard deviation so that conditions producing distinctive statistics can be discovered As an analysis target, we collected various attributes of about 1,200 defects found in a typical medium-scale, multi-vendor (distance development) information system development project in Japan. Our findings based on extracted rules include: (l)Defects detected in coding/unit testing were easily corrected (less than 7% of mean effort) when they are related to data output or validation of input data. (2)Nevertheless, they sometimes required much more effort (lift of standard deviation was 5.845) in case of low reproducibility, (i)Defects introduced in coding/unit testing often required large correction effort (mean was 12.596 staff-hours and standard deviation was 25.716) when they were related to data handing. From these findings, we confirmed that we need to pay attention to types of defects having large mean effort as well as those having large standard deviation of effort since such defects sometimes cause excess effort.",2007,0, 625,Defining a Data Quality Model for Web Portals,"Open Government concept is experiencing an upswing. Open Government is based on three concepts (transparency, participation and collaboration) that require accessing data. To provide this access, Open Data Portals are being implemented around the world by every kind of organizations, mainly in the public sector. The aim of an Open Data Portal is exposing data in such a way that reusing is facilitated. Therefore, it is necessary to define a quality and maturity model to evaluate the characteristics of an Open Data Portal, considering different factors that can contribute to reusing potential, such like visualization, usability, granularity, data integration, reputation, relevancy, availability and reutilization. Also, effectively promoting data reusing implies setting specific norms to promote standardization among institutions, ministries and central governments' offices into the same country. This paper presents a formal proposal to evaluate - based in expert criteria - the quality and the maturity of an open data portal.",2015,0, 626,Defining an Integrated Agile Governance for Large Agile Software Development Environments,"

This paper highlights the important aspect of IT governance, with the objective of defining an unaddressed aspect of agile governance, by the application of an iterative, inductive, instantaneous analysis and emergent interpretation of appropriate data-grounded conceptual categories of IT governance. An effective agile governance approach will facilitate the achievement of desired discipline, rationale, business value, improved performance, monitoring, as well as control of large agile software development environments by aligning business goals and agile software development goals.

",2007,0, 627,Delegation in Virtual Team: the Moderating Effects of Team Maturity and Team Distance,"Virtual teams are becoming an important work structure in software development projects. However, a number of issues arise due to the complexity and newness of the virtual team context. One such issue relates to when and how team leaders should delegate authority and responsibility to the team. Previous studies have yielded conflicting results. This work aims to answer this question about delegation by investigating the moderating effects of team maturity and team distance on the relationship between leader delegation and team outcomes. A research model and specific propositions are presented. This paper provides useful insights for future virtual team leadership research and for organizations interested in developing virtual team leadership",2006,0, 628,Deriving a Valid Process Simulation from Real World Experiences,"

This paper presents a systematic approach to develop and configure a process simulation model that relates process capabilities to business parameters in order to support process improvement projects within Siemens. The research work focuses on the systematic set up of a validated and acknowledged model that matches the company's process improvement needs by involving experts to adapt an existing mathematical framework and simulation application. The methodology consists of three complementary steps: An approved conceptual model is used as structural skeleton, quantitative parameters are derived by a prospective expert survey, and final adaptation and customization is facilitated in order to be useable for process experts themselves (instead of model developers).

",2007,0, 629,Description of Anatomic Coordinate Systems and Rationale for Use in an Image-Guided Total Hip Replacement System,"AbstractLowering the risks of a surgical procedure is extremely important, especially for high-volume procedures such as total hip replacement. Significant work has been done to study total hip replacement procedures and provide the surgeon with techniques and tools to achieve better patient outcomes. Computer-assisted intervention allows surgeons to close the loop in medical research, allowing the surgeon to preoperatively plan, interoperatively navigate, and postoperatively analyze medical procedures, then use the results to repeat or improve the quality of future procedures. In order to expedite the cycle of planning, execution, and analysis amoung multiple research groups, standards for description, measurement, and procedure are necessary. In this work, the authors preset the coordinate systems used in their suite of computer-based tools for planning, executing, and evaluating the total hip replacement procedure. Rationales for the choices of each system are given along with experimental data which support the definitions.",2000,0, 630,DESCRY: A Method for Evaluating Decision-Supporting Capabilities of Requirements Engineering Tools,"

Complex decision-making is a prominent aspect of requirements engineering (RE), and the need for improved decision support for RE decision-makers has been identified by a number of authors in the research literature. Decision-supporting features and qualities can be integrated into RE tools. Thus, there is a need to evaluate the decision-supporting capabilities of RE tools. In this paper, we introduce a summative, criteria-based evaluation method termed DESCRY, whose purpose is to investigate to what extent RE tools have decision-supporting capabilities. The criteria and their related questions are empirically as well as theoretically grounded.

",2008,0, 631,Design and Evaluation of a Handled Trackball as a Robust Interface in Motion,"

In this study, a handled trackball was developed, aiming at future use in a vibration environment within cockpits, ships, or other carriers. The study aimed to determine an optimal handle posture for the handled device from combinations of three forward slopes (0°, 15°, and 30°) and three lateral slopes (0°, 15°, and 30°). The device was also compared with a table trackball for basic operation properties. An experimental cursor movement task was used to measure the response time of each design, accompanied by subjective fatigue and usability evaluations. The results showed that the forward 30° and lateral 30° combination achieved the best cursor movement performance without imposing undue fatigue on the operator. The study suggests using the forward 30° and lateral 30° handled trackball as the optimal design solution to maintain performance when the trackball is operated in a severe vibration environment.

",2007,0, 632,Design And Evaluation Of Emergent Behaviour For Artificial Haemostasis,"This report describes an investigation into the specification, design and simulation of artificial haemostasis. Natural haemostasis (blood clotting) in the human body is reviewed, and used as a guide in creating the specification. A swarm of software agents are used to implement a set of emergent primitives (behavioural rules), designed after a discussion of development issues include distribution, parallelism, communication, common pitfalls and suitable development methodologies. The Breve environment is chosen as a simulator, based on its support for 3D visualisation and Newtonian mechanics, excellent documentation and intuitive programming language and modelling paradigm. Seven design candidates are simulated, combining mobility, negative feedback mechanisms and artificial chemical fields. The simulation is limited by the Breve physics engine and computational inefficiency, resulting in each design candidate being simulated on two contrasting wound models with small numbers of artificial platelets for 25 real-world milliseconds. The primitives performed well, especially on initialising the haemostatic response. They fared less well on wound coverage and building suitable clot shapes. Recommendations for further work to improve wound coverage based on providing geometric knowledge and using different areas of effect for different primitives are discussed.",2006,0, 633,Design Implications of Simultaneous Contrast Effects Under Different Viewing Conditions,"

This paper proposed that the viewing conditions for printed matters and projected images are quite different for three major reasons. Therefore, the brightness perception phenomenon and brightness perception theory generated from the printed matters should be revised and modified when applied to the projected images. The purposes of the present research were to examine the effects of brightness illusions while viewing the projected images, to understand the brightness perception process in projection environment, and thus to generate design implications for better usage of the projectors.

",2007,0, 634,Design Knowledge as a Learning Resource,"The organization way of learning resources is one of the core problems of resources learning system research. This paper analyzes existing problems of the traditional learning resources organization way and finds out hot spot of the current resources organization research and puts forward the ""knowledge point"" label way to describe and organize digital learning resources in combination with the learning resources organization characteristics of specific subject. In this paper it takes primary school English, the specific subject as an example to describe these three service frameworks: the intelligent learning resources labeling, personalized resources aggregation and learning partner group based on learning resources. It organizes learning resources of specific subject into an order, efficient, easy intelligent English resources learning system.",2013,0, 635,Design of a Modelling Language for Information System Security Risk Management,"With the rapid development of computer, insecurity factors(such as computer virus, hacker) influences the development of MIS, safety risk evaluation has been an important task. This paper describes a risk evaluation method for MIS based on fuzzy set theory which uses linguistic variables and respective fuzzy numbers to evaluate the factors. The primary weights of factors and evaluation of alternatives are determined by applying linguistic variables and fuzzy numbers. The notion of Shapely value is used to determine the global value of each factor in accomplishing the overall objective of the risk evaluation process, so the primary weights are revised, thus the importance of factors can be reflected more precisely. A major advantage of the method is that it allows experts and engineers to express their opinions on safety risk evaluation in linguistic variables rather than crisp values. An illustration is presented to demonstrate the application of the method in risk evaluation. The results are consistent with the results calculated by conventional risk evaluation method. The research demonstrates that the method is objective and accurate, and is an application value in the risk evaluation for MIS.",2010,0, 636,Design of VY: A Mini Visual IDE for the Development of GUI in Embedded Devices,"Graphical user interface (GUI) for applications of embedded devices (ED) has been increasingly in demand. Rapid GUI development tools are being required more than ever. However, traditional way of GUI development for ED applications has some limitation in support of visual environment and blocks rapid development. In this paper, we introduce a new mini visual IDE, called VY, which supports rapid development utilizing control library, and simulation of GUI. VY can realize the development of GUI in ED efficiently by means of ""what you see is what you get""(WYSIWYG). It consists of five modules: simulator, control library, script complier, script virtual machine (SVM) and visual window. By utilizing the hardware-independent control library of graphical interfaces, an attractive GUI can be produced with much less script effort on the visual window. Then, the script file is compiled into script data (including instructions) which will be executed by the SVM with visiting system calls in ED, and the Simulator will display the resulting GUI. After all these steps, the final GUI can be downloaded into the ED system. 
In this way, VY supports rapid development of hardware-independent GUIs for ED applications using a visual metaphor.",2007,0, 637,Design Pattern Detection in Eiffel Systems,"The use of design patterns in a software system can provide strong indications about the rationale behind the system's design. As a result, automating the detection of design pattern instances could be of significant help to the process of reverse engineering large software systems. In this paper, we introduce DPVK (Design Pattern Verification toolKit), the first reverse engineering tool to detect pattern instances in Eiffel systems. DPVK is able to detect several different design patterns by examining both the static structure and the dynamic behaviour of a system written in Eiffel. We present three case studies that were performed to assess DPVK's effectiveness.",2005,0, 638,Design Patterns for Agent-Based Service Composition in the Web,"Increasingly, distributed systems are being constructed by composing a number of discrete components. This practice, termed composition, is particularly prevalent within the Web service domain. Here, enterprise systems are built from many existing discrete applications, often legacy applications exposed using Web service interfaces. There are a number of architectural configurations or distribution patterns, which express how a composed system is to be deployed. However, the amount of code required to realise these distribution patterns is considerable. In this paper, we propose a novel Model Driven Architecture using UML 2.0, which takes existing Web service interfaces as its input and generates an executable Web service composition, based on a distribution pattern chosen by the software architect.",2006,0, 639,Design steps for developing software measurement standard etalons for ISO 19761 (COSMIC-FFP),"

Material measurement standard etalons are widely recognized as critical for accurate measurements in any field. The absence of standard etalons in software measurement is having a negative impact on software engineers when they come to use measurement results in decision-making. To verify measurement results and ensure unambiguous comparability across contexts, researchers in software measurement should design standard etalons and incorporate them into the design of every measure proposed. Since the design process for establishing standard etalons for software measures has not yet been investigated, this paper tackles this issue and illustrates the application of this process using ISO 19761: COSMIC-FFP.

",2007,0, 640,"Design, development and evaluation of online interactive simulation software for learning human genetics","SummaryOBJECTIVE: In this paper, the authors describe the design, development and evaluation of specific simulation software for Cytogenetics training in order to demonstrate the usefulness of computer simulations for both teaching and learning of complex educational content. BACKGROUND: Simulations have a long tradition in medicine and can be very helpful for learning complex content, for example Cytogenetics, which is an integral part of diagnostics in dysmorphology, syndromology, prenatal and developmental diagnosis, reproductive medicine, neuropediatrics, hematology and oncology. METHODS AND MATERIALS: The simulation software was developed as an Interactive Learning Object (ILO) in Java2, following a user-centered approach. The simulation was tested on various platforms (Windows, Linux, Mac-OSX, HP-UX) without any limitations; the evaluation was based on questionnaires and interviews amongst 600 students in 15 groups. CONCLUSION: This simulation has proved its worth in daily teaching since 2002 and further demonstrates that computer simulations can be helpful for both teaching and learning of complex content in Cytogenetics.ZusammenfassungZIELSETZUNG: In dieser Arbeit beschreiben die Autoren Design, Entwicklung und Evaluierung einer internetfhigen Lernsoftware zur Karyotypisierung fr den Einsatz im Medizinstudium. Dabei wird auch der Frage nachgegangen, ob Computersimulationen den hohen Anforderungen bei der Vermittlung komplexer Inhalte gerecht werden knnen. Es wird gezeigt, dass dieser Ansatz sowohl fr Lernende als auch fr Lehrende Mehrwerte bringt und den traditionellen Methoden in diesem Bereich berlegen ist, wenn die Simulation didaktisch richtig eingesetzt wird. HINTERGRUND: Simulationen haben in der Medizin eine lange Tradition und erweisen sich insbesondere dann als hilfreich, wenn es darum geht, hochkomplexe Zusammenhnge verstndlicher darzustellen, wie dieses Beispiel aus der Zytogenetik, einem Spezialgebiet der Humangenetik, zeigt. Die zytogenetische Diagnostik beschftigt sich mit reproduzierbaren strukturellen und numerischen Vernderungen der menschlichen Chromosomen und kommt in der Dysmorphologie, Syndromologie, Prnatal- und Entwicklungsdiagnostik, Reproduktionsmedizin, Neuropdiatrie, Hmatologie und Onkologie zum Einsatz. MATERIAL UND METHODEN: Die Simulationssoftware wurde als interaktives Lern-Objekt (ILO) entwickelt. Als Entwicklungsumgebung wurde die Java2-Plattform gewhlt; die Entwicklung selbst erfolgte nach den Grundstzen des User-Centered-Design. Die Software wurde Plattform-unabhngig ausgelegt und konnte auf verschiedenen Systemarchitekturen (Windows, Linux, MacOSX, HP-UX) erfolgreich und ohne Beschrnkungen getestet werden. Die durchgefhrte Evaluierung basierte auf Fragebgen und Interviews mit rund 600 Studenten in 15 Gruppen. SCHLUSSFOLGERUNGEN: Die Simulationssoftware hat ihre Alltagstauglichkeit seit 2002 im Lehrbetrieb unter Beweis gestellt. Anhand der Evaluierung konnte an diesem Beispiel gezeigt werden, dass diese Lernsoftware sehr gut geeignet ist, den diagnostisch-analytischen Prozess in der Zytogenetik anschaulicher zu vermitteln als die traditionelle papierbasierte Methode. Insbesondere wirkt sich die Einbettung in ein integratives Unterrichtskonzept positiv, sowohl auf Lernende als auch auf Lehrende, aus.",2008,0, 641,Designing a cooperative education program to support an IT strategic plan,"

This paper is a qualitative field report describing how undergraduate computing majors in a cooperative education (co-op) program are supporting a large company's efforts to acquire Java application development expertise and experience. The program has been in place for two years. The major finding to date is that talented undergraduate computing majors can make significant contributions to a corporate IT division undergoing a transition from legacy to contemporary software development platforms. A second finding is that common information systems problems present good intellectual challenges to students majoring in computer science, in information technology, and in software engineering. Finally, student teams that include representation from two or more computing disciplines can effectively combine their differing skill sets to solve software problems. More generally, these and related findings suggest that undergraduate computing majors represent an underutilized technical and economic asset.

",2004,0, 642,Designing an application?specific programming language for mobile robots,"The process of programming mobile robots is improved by this work. The tools used for programming robot systems have not advanced significantly, while robots themselves are rapidly becoming more capable because of advances in computing power and sensor technology. Industrial robotics relies on simple programming tools usable by non-expert programmers, while robotics researchers tend to use general purpose languages designed for programming in other domains. The task of developing a robot cannot be assumed to be identical to developing other software-based systems. The nature of robot programming is that there are different and additional challenges when programming a robot than when programming in other domains. A robot has many complex interfaces, must deal with regular and irregular events, real-time issues, large quantities of data, and the dangers of unknown conditions. Mobile robots move around and are capable of affecting everything in the environment. They are found in cluttered environments, rather than the carefully-controlled work spaces of industrial robots, increasing the risk to life and property and the complexity of the software. An analysis of the process of developing robots provides insight into how robot programming environments can be improved to make the task of robot development easier. Three analyses have been performed: a task analysis, to determine the important components of the robot development process, a use case analysis, to determine what robot developers must do, and a requirements analysis, to determine the requirements of a robot programming environment. From these analyses, the important features of a robot development environment were found. They include features such as data types for data commonly found in robotics, semantics for managing reactivity, and debugging facilities such as simulators. The analyses also found that the language is an important component of the programming environment. An application-specific language designed for robot programming is proposed as a solution for providing this component. Application-specific languages are designed for a particular domain of programming, allowing them to overcome the difficulties in that domain without concern for their usefulness in other domains. To test the hypothesis that such a language would improve robot development a set of language extensions has been created. These extensions, named RADAR, provide explicit support for robotics. The prototype implementation uses the Python programming language as the base language. RADAR provides support for two of the necessary features found in the analyses. The first is support for dimensioned data via a new primitive data type, ensuring all dimensioned data is consistent throughout a program. Dimensional analysis support is provided, allowing the safe mixing of data with compatible units, the creation of more complex units from simple single-dimension units, and built-in checking for errors in dimensioned data such as performing operations involving incompatible dimensioned data. For example, the data type will prevent the addition of distance and speed values. Several dimensional analysis systems have been developed for general purpose languages in the past. However, this is the first application of the concept specifically to robotics. The second feature is semantics for managing reactivity. In RADAR, the principle of ease-of-use through simplicity is followed. 
Special objects represent events and responses, and a special syntax is used for both specifying these objects and managing the connections between them in response to the changing state of the program. This reduces programming complexity and time. There are many other languages for managing reactivity, both the more general languages, such as Esterel, and languages for robotics, such as TDL and Colbert. RADAR is simpler than the general languages, as it is aimed solely at the needs of robot developers. However, it takes a different approach to its design than other languages for reactivity in robotics. These are designed to provide support for a specific architecture or architecture style; RADAR is designed based on the needs of robot developers and so is architecture-independent. RADAR's design philosophy is to provide robot-specific features with simple semantics. RADAR is designed to support what robot developers need to do with the language, rather than providing a special syntax for supporting a particular robot, architecture or other system. RADAR has been shown to provide an improvement in dimensioned data management and reactivity management for mobile robot programming. It increases the readability, writability and reliability of robot software, and can reduce programming and maintenance costs. RADAR shows that an application-specific approach to developing a robot programming language can improve the process of robot development.",1992,0, 643,Designing an Aural Comprehension Aid for Interlingual Communication,"

This study presents an aural comprehension aid to help Japanese travelers hear a counter clerk's questions at fast food restaurants in the US. The prototype of the aid employed a speech recognition method in which a user assists the speech recognizer of the mobile device. The user presses the device's button as promptly as possible when missed words were spoken so that the recognizer perceives the moment, which is utilized for improving recognition accuracy. More than a hundred dialogs between a Japanese traveler and fast-food clerks were recorded and used to evaluate the prototype. The evaluation showed that the proposed method could improve recognition accuracy, though the improvement was not sufficient for practical use.

",2007,0, 644,Designing and Developing Medical Device Software Systems Using the Model Driven Architecture (MDA),"On the surface, model-driven architecture (MDA) appears to be a fundamentally new paradigm compared to traditional software development. Upon closer examination, however, MDA mainly shifts the focus of iterative development to a higher level of abstraction. The traditional waterfall software development process (and its variations) dictates that the system development be driven by low-level design and coding. This can introduce many productivity, maintenance and documentation issues into the process. Using the MDA pushes development to a higher level, where platform-independent analysis and detailed platform-specific design modeling make it easier to trace back to the requirements, thereby introducing a more stringent governance over the project. Also, it introduces a technology and platform independent standardized development process, system interoperability internally as well as the ability to provide communication bridges with external systems. The systems can be portable, which allows for what the creators of the MDA, the object management group (OMG), refer to as ""future proofing"" of software systems. This is the ability to have long-lived models that can be applied to any new implementation technologies that will ultimately be created and introduced to the software development world. This paper seeks to introduce and demystify MDA concepts and features, and show how their application can be used to develop highly interoperable and robust medical device software systems. In particular, if medical devices are designed using the MDA approach, they can quickly be adapted to utilize any interoperability (or ""plug and play"") standard that evolves in the future.",2007,0, 645,Designing Graphical Elements for Cognitively Demanding Activities: An Account on Fine-Tuning for Colors,"

Interactive systems evolve: during their lifetime, new functions are added, and hardware or software parts are changed, which can impact graphical rendering. Tools and methods to design, justify, and validate user interfaces at the level of graphical rendering are still lacking. This not only hinders the design process, but can also lead to misinterpretation from users. This article is an account of our work as designers of colors for graphical elements. Though a number of tools support such design activities, we found that they were not suited for designing the subtle but important details of an interface used in cognitively demanding activities. We report the problems we encountered and solved during three design tasks. We then infer implications for designing tools and methods suitable to such graphical design activities.

",2008,0, 646,Designing Secure and Usable Systems,"In order to improve Liquid fertilizer efficiency, Liquid variable fertilizing control system was designed with two working modes: a manual control and an automatic control mode. Taking the S3C44B0X microprocessor of ARM7 series as the core device, according to the fertilizing amount of the current location, the system was able to combined together for the machine speed and access to data, and the fertilizing amount from digital quantity to the flow of Liquid output was transformed. Flow Sensor was taken, a closed-loop feedback regulation to control the opening of electrical actuators was formed, adjusted valve opening to compete the variable fertilization. The test results in the field showed that the relative error of the variable fertilizing amount was less than 5%, When the fertilizing amount was 245-294kg/hm2, the system can realize the requirements of variable fertilizing.",2011,0, 647,Designing Web Sites for Customer Loyalty Across Business Domains: A Multilevel Analysis,"

Web sites are important components of Internet strategy for organizations. This paper develops a theoretical model for understanding the effect of Web site design elements on customer loyalty to a Web site. We show the relevance of the business domain of a Web site to gain a contextual understanding of relative importance of Web site design elements. We use a hierarchical linear modeling approach to model multilevel and cross-level interactions that have not been explicitly considered in previous research. By analyzing data on more than 12,000 online customer surveys for 43 Web sites in several business domains, we find that the relative importance of different Web site features (e.g., content, functionality) in affecting customer loyalty to a Web site varies depending on the Web site's domain. For example, we find that the relationship between Web site content and customer loyalty is stronger for information-oriented Web sites than for transaction-oriented Web sites. However, the relationship between functionality and customer loyalty is stronger for transaction-oriented Web sites than for information-oriented Web sites. We also find that government Web sites enjoy greater word-of-mouth effect than commercial Web sites. Finally, transaction-oriented Web sites tend to score higher on mean customer loyalty than do information-oriented Web sites.

",2007,0, 648,Desktop Search Engine Visualisation and Evaluation,"In the dynamic and interactive world of the Internet, we as a technology community have learned that we can make good Web-based applications better by adding rich visualization and analysis capabilities to control and navigate the applications' user interface. Noteworthy examples include Google Earth which, through visualization, turns hundreds of terabytes of raw satellite images and aero-photographs into actionable information shared with and enjoyed by millions every day. But for every problem the community addresses, plenty more go unrecognized or unexplored. In this article, the author examines a longstanding Web-application problem.",2008,0, 649,Detecting Disease-Specific Dysregulated Pathways Via Analysis of Clinical Expression Profiles,"

We present a method for identifying connected gene subnetworks significantly enriched for genes that are dysregulated in specimens of a disease. These subnetworks provide a signature of the disease potentially useful for diagnosis, pinpoint possible pathways affected by the disease, and suggest targets for drug intervention. Our method uses microarray gene expression profiles derived in clinical case-control studies to identify genes significantly dysregulated in disease specimens, combined with protein interaction data to identify connected sets of genes. Our core algorithm searches for minimal connected subnetworks in which the number of dysregulated genes in each diseased sample exceeds a given threshold. We have applied the method in a study of Huntington's disease caudate nucleus expression profiles and in a meta-analysis of breast cancer studies. In both cases the results were statistically significant and appeared to home in on compact pathways enriched with hallmarks of the diseases.

",2008,0, 650,Detecting Intrusions in Agent System by Means of Exception Handling,"

We present a formal approach to conception of a dedicated security infrastructure based on the exception handling in the protected agents. Security-related exceptions are identified and handled by a dedicated reflective layer of the protected agent, or delegated to specialized intrusion management agents in the system if the local reflective layer fails to address the problem. Incidents are handled either directly, if a known remedy exists or indirectly, when an appropriate solution must be identified before response execution. The cooperation between the intrusion management agents and aggregation of their observations can make the system more resilient to misclassification than a solution based purely on signature matching.

",2007,0, 651,Detection and Analysis of Off-Task Gaming Behavior in Intelligent Tutoring Systems,"Identifying off-task behaviors in intelligent tutoring systems is a practical and challenging research topic. This paper proposes a machine learning model that can automatically detect students' off-task behaviors. The proposed model only utilizes the data available from the log files that record students' actions within the system. The model utilizes a set of time features, performance features, and mouse movement features, and is compared to 1) a model that only utilizes time features and 2) a model that uses time and performance features. Different students have different types of behaviors; therefore, personalized version of the proposed model is constructed and compared to the corresponding nonpersonalized version. In order to address data sparseness problem, a robust Ridge Regression algorithm is utilized to estimate model parameters. An extensive set of experiment results demonstrates the power of using multiple types of evidence, the personalized model, and the robust Ridge Regression algorithm.",2010,0, 652,Detection of Breast Lesions in Medical Digital Imaging Using Neural Networks,AbstractThe purpose of this article is to present an experimental application for the detection of possible breast lesions by means of neural networks in medical digital imaging. This application broadens the scope of research into the creation of different types of topologies with the aim of improving existing networks and creating new architectures which allow for improved detection.,2006,0, 653,Determining Inspection Cost-Effectiveness by Combining Project Data and Expert Opinion,"There is a general agreement among software engineering practitioners that software inspections are an important technique to achieve high software quality at a reasonable cost. However, there are many ways to perform such inspections and many factors that affect their cost-effectiveness. It is therefore important to be able to estimate this cost-effectiveness in order to monitor it, improve it, and convince developers and management that the technology and related investments are worth while. This work proposes a rigorous but practical way to do so. In particular, a meaningful model to measure cost-effectiveness is proposed and a method to determine cost-effectiveness by combining project data and expert opinion is described. To demonstrate the feasibility of the proposed approach, the results of a large-scale industrial case study are presented and an initial validation is performed.",2005,0, 654,Determining Practice Achievement in Project Management using a Two-Phase Questionnaire on Small and Medium Enterprises,"This paper aims to obtain a baseline snapshot of Project Management practices using a two-phase questionnaire to identify both performed and non- performed practices. The proposed questionnaire is based on the Level 2 process areas of the Capability Maturity Model Integration for Development vl.2. 
It is expected that the application of the questionnaire to the processes will help small and medium software enterprises to identify those practices which are performed but not documented, which practices need more attention, and which are not implemented due to bad management or unawareness.",2007,0, 655,Determining the Cost-Quality Trade-Off for Automated Software Traceability,"Major software development standards mandate the establishment of trace links among software artifacts such as requirements, architectural elements, or source code without explicitly stating the required level of detail of these links. However, the level of detail vastly affects the cost and quality of trace link generation and important applications of trace analysis such as conflict analysis, consistency checking, or change impact analysis. In this paper, we explore these cost-quality trade-offs with three case study systems from different contexts - the open-source ArgoUML modeling tool, an industrial route-planning system, and a movie player. We report the cost-quality trade-off of automated trace generation with the Trace Analyzer approach and discuss its expected impact onto several applications that consume its trace information. In the study we explore simple techniques to predict and manipulate the cost-benefit trade-off with threshold-based filtering. We found that (a) 80% of the benefit comes from only 20% of the cost and (b) weak trace links are predominantly false trace links and can be efficiently eliminated through thresholds.",2005,0, 656,"DEVELOPING A PATIENT DATA MINING SYSTEM FOR THE UNIVERSITY OF GHANA HOSPITAL","Patient medical data are collected at the University of Ghana Hospital but they are not processed electronically. In the current medical record management system, the majority of out-patients do not have a full medical record. Thus, physician's time is wasted by having to collect all the information again. In addition, it becomes very difficult to keep track of the patients and to review a patient's medical record. A Data Mart was designed and built using Microsoft Access 2000 Database Management Systems (DBMS) to collect, store, organize and retrieve medical information of patients at the Medical Records Unit of the University of Ghana Hospital. The Data Mart was mined to provide timely, accurate and reliable reports adequate for clinical research and improving health care continuity. The Data Mart interfaces are intuitive and easy to use. The queries are flexible and the reports well labeled.",2005,0, 657,Developing Active Learning Experiences for Adaptive Personalised eLearning,"Developing adaptive, rich-media, eLearning courses tends to be a complex, highly-expensive and time-consuming task. A typical adaptive eLearning course will involve a multi-skilled development team of technologists, instructional developers, subject matter experts and integrators. Even where the adaptive course attempts to reuse existing digital resources, considerable effort is still required in the integration of the adaptive techniques and curriculum. This paper tackles the fundamental challenges of extending adaptivity across not only content (based on prior knowledge, goals, learning styles, connectivity etc.) but also across adaptive pedagogic approaches, communication tools and a range of e-activity types which are required for effective, deeper learning.
This paper identifies key activities and requirements for adaptive course construction and presents the design of a tool to allow the rapid construction of such courses. The paper outlines the usage of this tool in the form of a case study and presents its research findings.",2004,0, 658,Developing an IS/ICT management capability maturity framework,"This study investigates the use of Through Life Capability Management perspective for refinement of the Conceptual Framework for Assessing and Measuring System Maturity, System Readiness and Capability Readiness using Architecture Frameworks. Metrics and measurement frameworks have no meaning if they are not used to make decisions. The importance of decision making at the architectural level is therefore discussed, which is particularly pertinent for System Maturity.",2010,0, 659,Developing Decision Support Systems in Clinical Bioinformatics,"The objective to develop clinical decision support system (CDSS) tools is to help physicians make faster and more reliable clinical decisions. The first step in their development is to choose a machine learning classifier as the system core. Previous works reported implementation of artificial neural networks, support vector machines, genetic algorithms, etc. as core classifiers for CDSS; however, these works do not report the parameters considered or the selection process for their implemented classifier. This paper focuses on the selection of a classifier to develop a CDSS. The options were reduced by reviewing advantages and disadvantages of each classifier, comparing them and considering the project parameters. The results of the analysis that take into consideration the project parameters show that some classifiers could be affected negatively in their performance, leaving support vector machines as the more suitable classifier to develop the CDSS by concluding that SVM disadvantages do not affect the accuracy and its advantages are not affected negatively by the project parameters.",2015,0, 660,Developing Portable Software,"ISO/IEC/IEEE 26515:2012 specifies the way in which user documentation can be developed in agile development projects. It is intended for use in all organizations that are using agile development, or are considering implementing their projects using these techniques. It applies to people or organizations producing suites of documentation, to those undertaking a single documentation project, and to documentation produced internally, as well as to documentation contracted to outside service organizations. ISO/IEC/IEEE 26515:2012 addresses the relationship between the user documentation process and the life cycle documentation process in agile development. It describes how the information developer or project manager may plan and manage the user documentation development in an agile environment. It is intended neither to encourage nor to discourage the use of any particular agile development tools or methods.",2012,0, 661,Developing Search Strategies for Detecting Relevant Experiments for Systematic Reviews,"Information retrieval is an important problem in any evidence-based discipline. Although evidence-based software engineering (EBSE) is not immune to this fact, this question has not been examined at length. The goal of this paper is to analyse the optimality of search strategies for use in systematic reviews. We tried out 29 search strategies using different terms and combinations of terms. We evaluated their sensitivity and precision with a view to finding an optimum strategy.
From this study of search strategies we were able to analyse trends and weaknesses in terminology use in articles reporting experiments.",2007,0, 662,Developing Software Products for Mobile Markets: Need for Rethinking Development Models and Practices,"In the mobile domain, successful software product development requires the incorporation of market elements to the development process in order to gain a wide customer-base for the product. However, the focus of current IS process approaches is on contextual elements and users of the particular customer organization. The existing IS approaches tend to overlook the various views of stakeholders in the market, who have an active role in building, influencing, buying, or using the product. We aim to demonstrate this gap in the IS development processes, especially in the gathering and managing of information concerning the various parties contributing to the market success of a product. Further, we review the market-related New Product Development (NPD) discussion and show that this perspective could offer valuable insights for refining the knowledge and information management of the development process for mobile products.",2005,0, 663,Developing the technical in technical communication: advancing in a nonmanagement career path,"All companies provide promotion opportunities to talented employees demonstrating management skills. However, for those not desiring to pursue a management track, yet desiring increasing responsibilities technically, the opportunities to progress may seem limited. Although technical communicators already bring technical skills to the table, increasing domain expertise in a technical area outside of communication (e.g., engineering, programming) may provide continued challenges for one's career. As an example, technical communicators may look to engineering programs for technical fellows as examples of how a nonmanagement career path with domain expertise may provide advancement. This paper (1) outlines the technical nature of technical communication, (2) discusses the technical nature of technical communication, (3) examines the characteristics of an existing technical excellence program as an example of a nonmanagement career path, and (4) briefly discusses how one might build technical expertise, which might result in being a potential candidate for a technical excellence career path.",2005,0, 664,Developing your Project Proposal,A bidirectional process for agreeing on product requirements proves effective in overcoming misunderstandings that arise in the traditional handoff of requirements specifications to development teams.,2010,0, 665,Development of a Generic Design Framework for Intelligent Adaptive Systems,"The Intelligent adaptive tunnel lighting system is a design approach in which the tunnel interior lighting system adapts to the real time roadway environment conditions. The level of tunnel lighting can be reduced or dimmed when the intensity of tunnel exterior daylight is decreased. Moreover, it can also be reduced when traffic on access road to the tunnel is absent. Dimming the luminaries to meet the minimum requirements would save on power consumption as well as maintenance costs. An adaptive tunnel lighting system is developed with the purpose of providing sufficient tunnel interior luminance so that motorist can access and exit the tunnel conveniently. Three levels were developed with integration of motion and light sensors. 
The results indicate that an average of 22.1% of power consumption can be reduced by using this system. Furthermore, LED is highly recommended to be used in the system due to its luminous efficiency and long lifecycle.",2015,0, 666,Development of a PC-Based Object-Oriented Real-Time Robotics Controller,"Real-time controllers have traditionally used computer hardware systems that require expertise in hardware knowledge before the controller can be effectively implemented. This paper introduces a single-processor PC-based real-time motion controller developed using the QNX 6.0 Neutrino operating system. Using the advantages of a distributed software system and an object-oriented architecture, the developed controller can be easily modified to suit any application. Common real-time software development issues such as timing, data logging and hardware management are discussed in detail, along with straightforward solutions to address these problems in QNX. The entire system is implemented on a 2D cable-based high-speed pick-and-place robot, and the controller performance is compared to Delta Tau PMAC, a commercial controller. Developing a PC-based modular real-time controller allows researchers to easily implement their control algorithms at a reasonable cost",2005,0, 667,Development of a System to Measure Visual Functions of the Brain for Assessment of Entertainment,"

The unique event related brain potential (ERP) called the eye fixation related potential (EFRP) is obtained by averaging EEGs at terminations of saccadic eye movements. Firstly, the authors reviewed some studies on EFRP in games and in ergonomics and, secondly, introduced a new system for assessment of visual entertainments by using EFRP. The distinctive feature of the system is that we can measure the ERP under conditions where a subject moves the eyes. This system can analyze EEG data from many sites on the head and can display in real time the topographical maps related to the brain activities. EFRP is classified into several components at latent periods. We developed a new system to display topographical maps at three latent regions in order to analyze in more detail psychological and neural activities in the brain. This system will be useful for assessment of visual entertainment.

",2005,0, 668,Development of Flexible and Adaptable Fault Detection and Diagnosis Algorithm for Induction Motors Based on Self-organization of Feature Extraction,"

In this study, a data mining application was developed for fault detection and diagnosis of induction motors based on wavelet transform and classification models with current signals. Energy values were calculated from the wavelet-transformed signals, and the distribution of the energy values for each detail was used in comparing similarity. The appropriate details could be selected by the fuzzy similarity measure. Through the similarity measure, features of faults could be extracted for fault detection and diagnosis. For fault diagnosis, neural network models were applied, because in this study it was considered which details are suitable for fault detection and diagnosis.

",2005,0, 669,Dialog-based protocol: an empirical research method for cognitive activities in software engineering,"This paper proposes dialog-based protocol for the study of the cognitive activities during software development and evolution. The dialog-based protocol, derived from the idea of pair programming, is a significant alternative to the common think-aloud protocol, because it lessens the Hawthorne and placebo effects. Using screen-capturing and voice recording instead of videotaping further reduces the Hawthorne effect. The self-directed learning theory provides an encoding scheme and can be used in analyzing the data. A case study illustrates this new approach.",2005,0, 670,Dialogue Modes in Expert Tutoring,"We investigate the automatic classification of student emotional states in a corpus of human-human spoken tutoring dialogues. We first annotated student turns in this corpus for negative, neutral and positive emotions. We then automatically extracted acoustic and prosodic features from the student speech, and compared the results of a variety of machine learning algorithms that use 8 different feature sets to predict the annotated emotions. Our best results have an accuracy of 80.53 % and show 26.28 % relative improvement over a baseline. These results suggest that the intelligent tutoring spoken dialogue system we are developing can be enhanced to automatically predict and adapt to student emotional states.",2003,0, 671,DiCoT: A Methodology for Applying Distributed Cognition to the Design of Teamworking Systems,"

Distributed Cognition is growing in popularity as a way of reasoning about group working and the design of artefacts within work systems. DiCoT (Distributed Cognition for Teamwork) is a methodology and representational system we are developing to support distributed cognition analysis of small team working. It draws on ideas from Contextual Design, but re-orients them towards the principles that are central to Distributed Cognition. When used to reason about possible changes to the design of a system, it also draws on Claims Analysis to reason about the likely effects of changes from a Distributed Cognition perspective. The approach has been developed and tested within a large, busy ambulance control centre. It supports reasoning about both existing system design and possible future designs.

",2005,0, 672,Differences in gene expression between B-cell chronic lymphocytic leukemia and normal B cells: A meta-analysis of three microarray studies,"

Motivation: A major focus of current cancer research is to identify genes that can be used as markers for prognosis and diagnosis, and as targets for therapy. Microarray technology has been applied extensively for this purpose, even though it has been reported that the agreement between microarray platforms is poor. A critical question is: how can we best combine the measurements of matched genes across microarray platforms to develop diagnostic and prognostic tools related to the underlying biology?

Results: We introduce a statistical approach within a Bayesian framework to combine the microarray data on matched genes from three investigations of gene expression profiling of B-cell chronic lymphocytic leukemia (CLL) and normal B cells (NBC) using three different microarray platforms, oligonucleotide arrays, cDNA arrays printed on glass slides and cDNA arrays printed on nylon membranes. Using this approach, we identified a number of genes that were consistently differentially expressed between CLL and NBC samples.

Availability: Glass slide cDNA array data are available through the public archives of Stanford University at http://cmgm.stanford.edu/pbrown/. Oligonucleotide array data are available in the supplemental material from Klein et al. Nylon membrane cDNA microarray data are freely available at our website, http://bioinformatics.mdanderson.org/pubdata.html. Software to batch-process gene annotations (GeneLink) is available at http://bioinformatics.mdanderson.org/GeneLink.html. Image quantification software (ScanAlyze) is available at http://rana.lbl.gov/downloads/ScanAlyze.zip. Statistical software (S-Plus 2000) is available commercially from Insightful Corp., Seattle, WA. Software to determine the co-occurrence of terms in PubMed abstracts (PDQ_MED) is commercially available from Inpharmix Inc., Greenwood, IN.

",2004,0, 673,Diffusion of E-Government Innovations in the Dutch Public Sector: The Case of Digital Community Policing,"

This article examines the diffusion of an e-government innovation - called SMS-alert - among Dutch police forces. A conceptual framework for the diffusion of e-government innovations in the public sector is developed which combines a functional and a constructivist (or cultural) approach of diffusion. The framework focuses on diffusion as a mutual process of communication, learning and sense making. Based on this framework and by using data from documentation, websites and interviews, the process of diffusion and adoption of SMS-alert is reconstructed and the factors and mechanisms explaining this process are identified. The case study demonstrates that although SMS-alert has diffused rather rapidly until now, the diffusion process is currently facing some difficulties, mainly due to the existence of competing innovations. By demonstrating the importance of both the functional, political and institutional meaning of the innovation, the article confirms the value of combining different approaches in studying the diffusion of e-government innovations.

",2007,0, 674,Digital Therapy: The Role of Digital Positive Psychotherapy in Successful Self-regulation,"

We are currently developing a digital positive psycho-therapy intervention. The intervention will be presented at the 3rd International Conference on Persuasive Technology 2008. By means of installing positive emotions, digital positive psycho-therapy may help prevent ego-depletion and hence increase the chances for successful self-regulation. This may turn out to be an important component in many health behaviour interventions. The current paper discusses some basic insights regarding how digital psychotherapy interventions can be designed and why they hold the potential to make a valuable contribution.

",2008,0, 675,Discovering and Visualizing Network Communities,"There are several large-scale entities that are related with each other. Web hyperlink networks, social networks and metabolic networks are the examples of such networks. Discovering dense subnetworks (communities) from given networks is important for detecting macroscopic and microscopic structures. Although many discovery methods are proposed, qualitative and quantitative differences among them are not fully discussed. As the first step for interactive analysis of network structures, the authors are developing a system for discovering and visualizing network communities. The system has abilities for divisive and agglomerative discovery of communities from given networks based on modularity.",2007,0, 676,Discovering Relations Among GO-Annotated Clusters by Graph Kernel Methods,"

The biological interpretation of large-scale gene expression data is one of the challenges in current bioinformatics. The state-of-the-art approach is to perform clustering and then compute a functional characterization via enrichments by Gene Ontology terms [1]. To better assist the interpretation of results, it may be useful to establish connections among different clusters. This machine learning step is sometimes termed cluster meta-analysis, and several approaches have already been proposed; in particular, they usually rely on enrichments based on flat lists of GO terms. However, GO terms are organized in taxonomical graphs, whose structure should be taken into account when performing enrichment studies. To tackle this problem, we propose a kernel approach that can exploit such structured graphical nature. Finally, we compare our approach against a specific flat list method by analyzing the cdc15-subset of the well known Spellman's Yeast Cell Cycle dataset [2].

",2007,0, 677,"Discrete Continuity of Information System, Knowledge System, and e-Business System","Since information systems are pervasive in the business and non-business areas, the issue of extending researches on information systems to knowledge systems and e-business systems is one of the most profitable topics of researches. We propose a historical, discontinuous changes introducing ambiguity in explaining and interpreting innovative nature of three paradigms of systems: information systems, knowledge systems, and e-business systems. Resorting to the historical perspective in developing ideas into meaningful themes, we proposed a discrete continuity in interpreting changes of paradigms of systems. Discrete continuity may be explained by ambiguously-shared meaningful perspectives applied to different paradigms of systems and interpretive elements of each system. The discrete continuity has been adopted to make ambiguity utilized may have instrumental contribution in researches. The engrafted ambiguity in systems design, development, and use could have enduring instrumental value in interpreting the types or variants of systems in each paradigm of systems.",2007,0, 678,Distributed Regression for Heterogeneous Data Sets,The present paper compares the bias and scatter of the Weibull shape parameter as estimated with the Maximum Likelihood method (ML) and four Linear Regression techniques (LR). For small data sets the bias and scatter can be very significant. It is found that ML and weighted LR give similar results. LR requires to select a plotting position. The effect of the plotting position is shown. Two changes may be used for the revised IEEE guide: plots with expected plotting positions and re-evaluation of ML and LR,1999,0, 679,"Distributed, Integrated, and Collaborative Research Environment (DiCore): An Architectural Design","In this paper, we propose to develop a Distributed, Integrated, and Collaborative Research Environment (DiCore) that may enable a new generation of scientific discovery, learning, and innovation in all scientific areas. DiCore seamlessly integrates a set of carefully selected high quality web-based software tools with ajax-style rich user interfaces. These tools collectively satisfy the daily research needs of researchers. Underneath DiCore is a set of intelligent machine learning and data mining algorithms that are designed to serve researchers, educators, and students in different aspects, such as recommending reading materials based on researchers' interests, and suggesting working research groups. DiCore is a potential enabling research and educational environment that may satisfy our next generation collaborative research and educational needs. A wide range of new collaborative research and educational projects in all areas of computing become possible under DiCore.",2007,0, 680,Distribution Dimensions in Software Development Projects: A Taxonomy,"For many economic and technological reasons, companies are increasingly conducting projects on a global level. Global projects are highly distributed, with experts from different companies, countries, and continents working together. Such distribution requires new techniques for project coordination, document management, and communication. Distribution complexities include various project types - such as global, interorganizational, or open source software projects - that is distributed in different ways and face particular challenges. 
A literature-based taxonomy identifies four distribution dimensions in distributed software development. A case study illustrates their application in a real-world development project",2006,0, 681,Distributive Medical Management System,"It is becoming more vital than ever before for business to manage customer relationship and build customer loyalty. Information systems and data mining techniques have significant contributions in the customer relationship management process. In the dynamic business environment, information systems need to evolve to adapt to the change in requirements, which is driven by customer relationship management. An evolving information system is proposed in this paper and discussed by focusing on a specific type of knowledge, namely association rule. With new classes and attributes created based on new knowledge and user requirements, the evolving information system is capable of collecting, processing and providing more valuable information on customers, to support customer relationship management.",2008,0, 682,Do computer science students know what they know?: a calibration study of data structure knowledge,"This paper describes an empirical study that investigates the knowledge that Computer Science students have about the extent of their own previous learning. The study compares self-generated estimates of performance with actual performance on a data structures quiz taken by undergraduate students in courses requiring data structures as a pre-requisite. The study is contextualized and grounded within a research paradigm in Psychology called calibration of knowledge that suggests that self-knowledge across a range of disciplines is highly unreliable. Such self-knowledge is important because of its role in meta-cognition, particularly in cognitive self-regulation and monitoring. It is also important because of the credence that faculty give to student self-reports. Our results indicate that Computer Science student self-estimates correlate moderately with their performance on a quiz, more so for estimates provided after they have taken the quiz than before. The pedagogical implications are that students should be provided with regular opportunities for empirical validation of their knowledge as well as being taught the metacognitive skills of regular self-testing in order to overcome validation bias.",2005,0, 683,Do personality traits affect the acceptance of e-finance?,"This paper presents a method for inferring the Positive and Negative Affect Schedule (PANAS) and the BigFive personality traits of 35 participants through the analysis of their implicit responses to 16 emotional videos. The employed modalities to record the implicit responses are (i) EEG, (ii) peripheral physiological signals (ECG, GSR), and (iii) facial landmark trajectories. The predictions of personality traits/PANAS are done using linear regression models that are trained independently on each modality. 
The main findings of this study are that: (i) PANAS and personality traits of individuals can be predicted based on the users' implicit responses to affective video content, (ii) ECG+GSR signals yield 70%±8% F1-score on the distinction between extroverts/introverts, (iii) EEG signals yield 69%±6% F1-score on the distinction between creative/non creative people, and finally (iv) for the prediction of agreeableness, emotional stability, and baseline affective states we achieved significantly higher than chance-level results.",2015,0, 684,Documenting Theories Working Group Results,"In this paper, we present a preliminary work within the CEG to evaluate the LTE-advanced proposal as a candidate for further improvements of LTE earlier releases. We consider an open-source LTE simulator that implements most of the LTE features. We focus on channel state variations and related link adaptation. We study the performance of channel quality report to the transmitter to track the downlink time-frequency channel variations among users. This process aims at performing link adaptation. To insure this procedure, we implement a CQI feedback scheme to an LTE open-source simulator. SINRs per subcarriers are computed supposing a perfect channel knowledge then mapped into a single SINR value for the whole bandwidth. This SINR is represented by a CQI value that allows for dynamic MCS allocation. CQI mapping is evaluated through different SINR mapping schemes.",2010,0, 685,Document-Oriented Views of Guideline Knowledge Bases,"The development of multimodal user interfaces is a complex process that needs useful guidelines and hints. This paper summarizes the overall results from a previous experimental program and then suggests useful guidelines for creating usable and acceptable knowledge-based system (KBS) interfaces. The summary of the overall experimental results ranks three experimental conditions of multimodal interaction, according to five evaluation domains. These domains were: ability to recall, ability to use knowledge effectively, ability to use knowledge efficiently, extended usability attitudes and user acceptance. The proposed guidelines offer a set of useful hints while also describing the sources of the variance between the experimental conditions. The empirically derived and proposed guidelines were: knowledge communication from a single point of contact, audio-visual metaphors to tackle information overload, task complexity and knowledge-intensity as key factors, and socially rich presence which allowed for the first impression to last longer. These guidelines have presented a roadmap for both researchers and practitioners working within the field of knowledge-based software engineering (KBSE).",2014,0, 686,Does Information Content Influence Perceived Informativeness? An Experiment in the Hypermedia,"

This paper reviews research in both information content and perceived informativeness in the literature, and examines the causal effect of two information content factors on perceived informativeness. A 2×2 factorial design was adopted in an experiment involving a hypothetical online retailer. Results from 120 surveys collected show strong support of the two hypotheses in the expected direction, i.e., both price and quality information had a significantly positive effect on perceived informativeness. Data also indicate that perceived informativeness is a significant predictor of visitor attitude toward the site and visitor intention to revisit.

",2007,0, 687,Does task training really affect group performance?,The purpose of this paper is to examine the important relationships between task training experience and software review performance. A total of 192 subjects voluntarily participated in a laboratory experimental research and were randomly assigned into 48 four-member groups. Subjects were required to find defects from a design document. The main finding shows that task training experience has no significant effect on software review performance.,2004,0, 688,Does Test-Driven Development Improve the Program Code? Alarming Results from a Comparative Case Study,"

It is suggested that test-driven development (TDD) is one of the most fundamental practices in agile software development, which produces loosely coupled and highly cohesive code. However, how TDD impacts the structure of the program code has not been widely studied. This paper presents the results from a comparative case study of five small-scale software development projects where the effect of TDD on program design was studied using both traditional and package-level metrics. The empirical results reveal that an unwanted side effect can be that some parts of the code may deteriorate. In addition, the differences in the program code, between TDD and the iterative test-last development, were not as clear as expected. This raises the question as to whether the possible benefits of TDD are greater than the possible downsides. Moreover, it questions whether the same benefits could be achieved just by emphasizing unit-level testing activities.

",2008,0, 689,Does Use of Development Model Affect Estimation Accuracy and Bias?,"AbstractObjective. To investigate how the use of incremental and evolutionary development models affects the accuracy and bias of effort and schedule estimates of software projects. Rationale. Advocates of incremental and evolutionary development models often claim that use of these models results in improved estimation accuracy. Design of study. We conducted an in-depth survey, where information was collected through structured interviews with 22 software project managers in 10 different companies. We collected and analyzed information about estimation approach, effort estimation accuracy and bias, schedule estimation accuracy and bias, completeness of delivered functionality and other estimation related information. Results. We found no impact from the development model on the estimation approach. However, we found that incremental and evolutionary projects were less prone to effort overruns. The degree of delivered functionality and schedule estimation accuracy, on the other hand, were seemingly independent of development model. Conclusion. The use of incremental and evolutionary development models may reduce the chance of effort overruns.",2004,0, 690,Dwell-Based Pointing in Applications of Human Computer Interaction,"As contribution the research study as outcome developed a software application and programmed several new recognition gestures and presented a set of tools supporting the spectrum of software engineering natural user interfaces to be used in University promotion and marketing. This research study tries to foster several contributions. Firstly, this system is built on basis of separate library used for the gesture recognition part of the application, which can also work separately from the system and can be used in different contexts. It is very extendable and allows the users to implement new gestures on top of the library, thus enriching its context and provides easy start for the user allowing him to concentrate to the business logic instead of the low level programming of the gesture recognition algorithms. Secondly, in addition to the basic gestures, created are several other gestures that add additional value to the library itself. New gestures that are added to the library are DownSwipe, UpSwipe, ClockwiseCircle, CounterClockwiseCircle, Push gestures, ManualDoorOpen gesture, AutomaticDoorOpen gesture and WalkingGesture. Thirdly, this research also contributes to researchers and professionals who are dedicated to a research in the area of natural user interfaces and their efforts in finding an effective algorithm for human gesture recognition. This research investigates different approached to gesture recognition, such as Record and recognize approach, Neural Networks approach and Gesture composition approach, presented which are their advantages and weaknesses and presented why chosen is the Gesture Composition approach.",2014,0, 691,Dynamic Analysis Techniques for the Reconstruction of Architectural Views,"Gaining an understanding of software systems is an important discipline in many software engineering contexts. It is essential that sofiware engineers are assisted as much as possible during this task, e.g., by using tools and techniques that provide architectural views on the software at hand. This Ph.D. research addresses this issue by employing dynamic analysis for the reconstruction of such views from running systems. 
The aim is to devise new abstraction techniques and novel visualizations, to combine them, and to assess the benefits through substantial case studies and controlled experiments. This paper describes our approach, reports on the results thus far, and outlines our future steps.",2007,0, 692,Dynamic User Modeling in Health Promotion Dialogs,This paper describes the motivation and methodology for representing a dynamic model of a participant user in a health coaching intervention.,2014,0, 693,E- Health System for Coagulation Function Management by Elderly People,"

E-Health is a developing area of major social, medical and economic importance, especially for the elderly population and citizens of remote areas. Our objectives were to identify visualization methods for a patient-oriented system of collection, storage, and retrieval of coagulation function data. The research group included 25 elderly (72.2±5.5 years) and 25 young participants (30.4±4.9 years). The participants completed tasks based on different visualization models for data entry and follow-up of clinical information, in three experimental websites equipped with hidden tracking programs. We followed functional parameters (time, acuity), subjective parameters (preference, satisfaction) and physiological parameters (pulse, skin temperature, sweating, respiratory rate, and muscle tension). Time for task completion was significantly longer in elderly compared to younger participants in all experimental websites, without significant differences in accuracy. Yet, in specific tasks the elderly performed better than young participants. Specific suggestions for data entry and data visualization are presented.

",2007,0, 694,"E-commerce security threats: awareness, trust and practice","

Electronic commerce (e-commerce) has always been accompanied by security concerns. Despite numerous studies in the areas of security and trust, to date there is a dearth of research that addresses the impact of security trust and security awareness on the prevalence of online activities. This study investigates the relationships among awareness of security threats, security trust, frequency of e-commerce activities and security practices. The results presented herein suggest that security awareness level is positively correlated to security practices. The security trust level of frequent e-commerce users is higher than that of infrequent e-commerce users. The study also found that there was no significant correlation between trust level and awareness.

",2008,0, 695,"eDiab: A System for Monitoring, Assisting and Educating People with Diabetes","

In this paper, a system developed for monitoring, assisting and educating people with diabetes, named eDiab, is described. A central node (PDA or mobile phone) is used at the patient's side for the transmission of medical information, health advices, alarms, reminders, etc. The software is adapted to blind users by using a screen reader called Mobile Speak Pocket/Phone. The glucose sensor is connected to the central node through wireless links (Zigbee/Bluetooth) and the communication between the central node and the server is established with a GPRS/GSM connection. Finally, a subsystem for health education (which sends medical information and advice like treatment reminder), still under development, is briefly described

",2006,0, 696,Editorial: Introduction to special section on Evaluation and Assessment in Software Engineering EASE06,"EASE06 was the 10th International Conference on Evaluation and Assessment in Software Engineering (EASE). EASE provides a forum for empirical researchers to present their latest research, as well as to discuss issues related to evaluation and empirical studies. The conference welcomes theoretical papers discussing empirical methods and practical papers reporting the results of empirical evaluations.",2007,0, 697,Editorial: Single- versus double-blind reviewing,"

This editorial analyzes from a variety of perspectives the controversial issue of single-blind versus double-blind reviewing. In single-blind reviewing, the reviewer is unknown to the author, but the identity of the author is known to the reviewer. Double-blind reviewing is more symmetric: The identity of the author and the reviewer are not revealed to each other. We first examine the significant scholarly literature regarding blind reviewing. We then list six benefits claimed for double-blind reviewing and 21 possible costs. To compare these benefits and costs, we propose a double-blind policy for TODS that attempts to minimize the costs while retaining the core benefit of fairness that double-blind reviewing provides, and evaluate that policy against each of the listed benefits and costs. Following that is a general discussion considering several questions: What does this have to do with TODS, does bias exist in computer science, and what is the appropriate decision procedure? We explore the “knobs” a policy design can manipulate to fine-tune a double-blind review policy. This editorial ends with a specific decision.

",2007,0, 698,Editor's corner: An assessment of systems and software engineering scholars and institutions (2001-2005),"This paper presents the findings of a five-year study of the top scholars and institutions in the systems and software engineering field, as measured by the quantity of papers published in the journals of the field in 2001-2005. The top scholar is Magne Jorgensen of Simula Research Laboratory, Norway, and the top institution is Korea Advanced Institute of Science and Technology, Korea. This paper is part of an ongoing study, conducted annually, that identifies the top 15 scholars and institutions in the most recent five-year period.",2008,0, 699,EDOC to EJB transformations within MDA,"The Model Driven Architecture is a proposition of the framework of software development process where main accent is put on system models, i.e. platform independent and platform dependent models, and transformations between them. Applying the MDA is related with preparation of these two types of assets. In the thesis, the EDOC and the EJB platform are considered as an examples of platform-independent and platform dependent models. In order to complete this picture, additionally transformations between these two models are required. The authors focus in the thesis on transformations. In particular, the authors present the transformation specification description, and specify set of transformations between the EDOC and EJB models. The transformations are narrowed to the subset of structural aspects of the EDOC models.",2005,0, 700,Educational Objectives for Empirical Methods,"New educational pedagogies are emerging in an effort to increase the number of new engineers available to enter the workforce in the coming years. One of the re-occurring themes in these pedagogies is some form of the flipped classroom. Often the additional classroom time gained from flipping is used to reinforce learning objectives. This paper suggests that it might be more beneficial to students if some of that time is used to address common non-cognitive barriers that prevent students from succeeding in the major. This experiment was conducted on a freshman Introductory Computer Science course with students whom are less traditionally prepared. Three different pedagogies were compared: a hybrid lecture-active learning pedagogy, a fully flipped classroom pedagogy, and a fully flipped classroom with added barrier interventions pedagogy. All three groups were in SCALE-UP classrooms. While fully flipping the classroom showed a slight increase to student progression over the hybrid classroom, it was not significant. When barrier interventions were added to address motivation and interest, opportunity, psychosocial skills, cognitive skills, and academic preparedness a significant increase in student progression occurred. This suggests that students might benefit from some classroom time being spent on non-technical skills.",2016,0, 701,Effective identifier names for comprehension and memory,"AbstractReaders of programs have two main sources of domain information: identifier names and comments. When functions are uncommented, as many are, comprehension is almost exclusively dependent on the identifier names. Assuming that writers of programs want to create quality identifiers (e.g., identifiers that include relevant domain knowledge), one must ask how should they go about it. For example, do the initials of a concept name provide enough information to represent the concept? 
If not, and a longer identifier is needed, is an abbreviation satisfactory or does the concept need to be captured in an identifier that includes full words? What is the effect of longer identifiers on limited short term memory capacity? Results from a study designed to investigate these questions are reported. The study involved over 100 programmers who were asked to describe 12 different functions and then recall identifiers that appeared in each function. The functions used three different levels of identifiers: single letters, abbreviations, and full words. Responses allow the extent of comprehension associated with the different levels to be studied along with their impact on memory. The functions used in the study include standard computer science textbook algorithms and functions extracted from production code. The results show that full-word identifiers lead to the best comprehension; however, in many cases, there is no statistical difference between using full words and abbreviations. When considered in the light of limited human short-term memory, well-chosen abbreviations may be preferable in some situations since identifiers with fewer syllables are easier to remember.",2007,0, 702,Effective preparation for design review: using UML arrow checklist leveraged on the Gurus' knowledge,"

The process of design construction and design review is notorious in its endless debates and discussions. Sometimes, these ""gurus' style"" debates are triggered by simple mistakes that could have been easily avoided. Precious time is lost, both because of the designer's lack of experience and the reviewer's overconfidence. If both sides were effectively prepared and armed with the same ammunition of knowledge, the ""balance of terror"" would reduce the will to argue and focus would be maintained. Reinforcing the abilities to face the intimidating guru's knowledge can be by formal encapsulation of that knowledge and systematic gurus' guidance to the design steps.

This paper presents a methodology that captures the recommended selection of entities' relationships using UML notation. The UML arrow methodology is comprised of a checklist, which provides immediate yes/no feedback to simple guiding questions and a governing iterative process. The consolidated questions encompass best practice design methodologies, composed, edited and corroborated by the company's experienced designers. It contains a common process based on capturing the gurus' recommendations according to the company's specific needs.

The methodology was evaluated in practitioners' workshops as well as practical design sessions, based on data received through questionnaires. The results obtained from 62 participants reveal that (1) usage of this methodology led to effective identification of inappropriate entities' relationships; (2) shortened the design review duration; (3) removed redundant technical remarks; (4) improved the preparation stages as well as brainstorming sessions, and (5) overall maintained focused discussions regarding the problem.

",2007,0, 703,Effective software management: where do we falter?,"A tool that supports rapid software development and effective process management is presented in this paper. The tool is designed in accordance with an integrated method that asks less design work for speeding software development and also, for effective management, directs the development of system components by imposing a layered specification and construction of these components through its process of development activities where Petri net techniques and Scrum features are imposed to support such management issues as progress monitoring and analysis. Since the tool supports a layered development of system components and the management of its development activities is featured in an integrated manner, team productivities can be greatly enhanced by intimate collaborations between development and management staff. For illustration, an example application is presented that directs the development of a software system with business-oriented services.",2013,0, 704,Effective Software Merging in the Presence of Object-Oriented Refactorings,"Current text based Software Configuration Management (SCM) systems have trouble with refactorings. Refactorings result in global changes which lead to merge conflicts. A refactoring-aware SCM system reduces merge conflicts. This paper describes MolhadoRef, a refactoring-aware SCM system and the merge algorithm at its core. MolhadoRef records change operations (refactorings and edits) used to produce one version, and replays them when merging versions. Since refactorings are change operations with well defined semantics, MolhadoRef treats them intelligently. A case study and a controlled experiment show that MolhadoRef automatically solves more merge conflicts than CVS while resulting in fewer merge errors.",2008,0, 705,Effective Software Project Management Education through Simulation Models: An Externally Replicated Experiment,"AbstractIt is an undeniable fact that software project managers need reliable techniques and robust tool support to be able to exercise a fine control over the development process so that products can be delivered in time and within budget. Therefore, managers need to be trained so that they could learn and use new techniques and be aware of their possible impacts. In this context, effective learning is an issue. A small number of empirical studies have been carried out to study the impact of software engineering education. One such study is by Pfahl et al [11] in which they have performed a controlled experiment to evaluate the learning effectiveness of using a process simulation model for educating computer science students in software project management. The experimental group applied a Systems Dynamics simulation model while the control group used the COCOMO model as a predictive tool for project planning. The results indicated that students using the simulation model gain a better understanding about typical behaviour patterns of software development projects. Experiments need to be externally replicated to both verify and generalise original results. In this paper, we will discuss an externally replicated experiment in which we keep the design and the goal of the above experiment intact. We then analyse our results in relation to the original experiment and another externally replicated experiment, discussed in [12].",2004,0, 706,Effectiveness of Content Preparation in Information Technology Operations: Synopsis of a Working Paper,"

Content preparation is essential for web design [25]. The objective of this paper is to establish a theoretical foundation for the development of methods to evaluate the effectiveness of content preparation in information technology operations. Past studies identify information as the dominant concern of users, and delivery mechanism as a secondary concern [20]. The best presentation of the wrong information results in a design with major usability problems and does not aid the user in accomplishing his task. This paper shifts the focus of existing usability evaluation methods. It attempts to fill the void in usability literature by addressing the information aspect of usability evaluation. Combining the strengths of content preparation and usability evaluation yields major implications for a broad range of IT uses.

",2007,0, 707,Effectiveness of end-user debugging software features: are there gender issues?,"Although gender differences in a technological world are receiving significant research attention, much of the research and practice has aimed at how society and education can impact the successes and retention of female computer science professionals-but the possibility of gender issues within software has received almost no attention. If gender issues exist with some types of software features, it is possible that accommodating them by changing these features can increase effectiveness, but only if we know what these issues are. In this paper, we empirically investigate gender differences for end users in the context of debugging spreadsheets. Our results uncover significant gender differences in self-efficacy and feature acceptance, with females exhibiting lower self-efficacy and lower feature acceptance. The results also show that these differences can significantly reduce females' effectiveness.",2005,0, 708,Effects of Refactoring Legacy Protocol Implementations: A Case Study,"We report on our experience of applying collaboration-based protocol design in combination with software refactoring as enabling technologies for re-engineering legacy protocol implementations. We have re-engineered a subsystem of a large enterprise communications product. The subsystem implements a standards-based communication protocol with numerous proprietary extensions. Due to many enhancements which the code has undergone, it showed clear signs of design degradation. The business purpose of the re-engineering project was to improve intelligibility and changeability of the code without changing or breaking existing functionality and without imposing a significant performance penalty. We used the re-engineering effort as experimental context for evaluating the enabling technologies. This article reports on our findings and discusses why collaboration-based protocol design in combination with software refactoring worked well in achieving success with our re-engineering effort.",2004,0, 709,Effects of the Office Environment on Health and Productivity,"Nanotechnology presents opportunities to create new and better products. It has the potential to improve assessment and prevention of environmental risks. However, there are unanswered questions about the impacts of nanomaterials and nanoproducts on human health, safety and the environment. This paper describes the issues that should be considered to ensure that society benefits from advances in environmental protection that nanotechnology may offer, and to understand and address any potential risks from environmental exposure to nanomaterials. Attempts have been made in this work to answer 1) How nanoparticles might change over time once present in the environment, 2) What effect they might have on organisms and 3) What effect they might have on human health.",2011,0, 710,Efficiency evaluation of data warehouse operations,"Many real world problems have fuzzy nature. Efficiency measurement and ranking various decision making units (DMUs) with different fuzzy inputs and outputs using fuzzy Data Envelopment Analysis (FDEA) is an applicable tool for managers. Efficiency evaluation of manufacturing designs (MD) while effective criteria on them have fuzzy essence, particularly manufacturing designs with unmanageable (UM) criteria is not possible with the current available FDEA models. 
We utilize a new FDEA model in this paper, where the inputs and outputs are fuzzy (including UM inputs). There are 7 different MDs that are required to perform a part of the manufacturing process in nature, where each one has three inputs (workers, equipment and weather condition from the view of profession/experiences, development level of equipment and problematic respectively) and two outputs (cycle time ratio and work in progress respectively) which have been considered as fuzzy data. Each MD introduces the inputs and the outputs for the production plan and they were used to evaluate the efficiency and ranking of the designs.

The goal of this paper is to open discussion about industrial creativity as a potential application field for Embodied Conversational Agents. We introduce the domain of creativity and especially focus on a collective creativity tool, the brainstorming: we present the related research in Psychology which has identified several key cognitive and social mechanisms that influence brainstorming process and outcome. However, some dimensions remain unexplored, such as the influence of the partners' personality or the facilitator's personality on idea generation. We propose to explore these issues, among others, using Embodied Conversational Agents. The idea seems original given that Embodied Agents were never included into brainstorming computer tools. We draw some hypotheses and a research program, and conclude on the potential benefits for the knowledge on creativity process on the one hand, and for the field of Embodied Conversational Agents on the other hand.

",2007,0, 713,Emergent Rhythms through Multi-agency in Max/MSP,"

This paper presents a multi-agents architecture created in Max/MSP that generates polyphonic rhythmic patterns which continuously evolve and develop in a musically intelligent manner. Agent-based software offers a new method for real-time composition that allows for complex interactions between individual voices while requiring very little user interaction or supervision. The system described, Kinetic Engine, is an environment in which networked computers, using individual software agents, emulate drummers improvising within a percussion ensemble. Player agents assume roles and personalities within the ensemble, and communicate with one another to create complex rhythmic interactions. The software has been premiered in a recent work, Drum Circle, which is briefly described.

",2008,0, 714,Emotion Classification of Audio Signals Using Ensemble of Support Vector Machines,"The purpose of speech emotion recognition system is to differentiate the speaker's utterances into four emotional states namely happy, sad, anger and neutral. Automatic speech emotion recognition is an active research area in the field of human computer interaction (HCI) with wide range of applications. Extracted features of our project work are mainly related to statistics of pitch and energy as well as spectral features. Selected features are fed as input to Support Vector Machine (SVM) classifier. Two kernels linear and Gaussian radial basis function are tested with binary tree, one against one and one versus the rest classification strategies. The proposed speaker-independent experimental protocol is tested on the Berlin emotional speech database for each gender separately and combining both, SAVEE database as well as with a self made database in Malayalam language containing samples of female only. Finally results for different combination of the features and on different databases are compared and explained. The highest accuracy is obtained with the feature combination of MFCC +Pitch+ Energy on both Malayalam emotional database (95.83%) and Berlin emotional database (75%) tested with binary tree using linear kernel.",2015,0, 715,Emotions in Speech: Juristic Implications,"Automatic recognition of emotional states via speech signal has attracted increasing attention in recent years. A number of techniques have been proposed which are capable of providing reasonably high accuracy for controlled studio settings. However, their performance is considerably degraded when the speech signal is contaminated by noise. In this paper, we present a framework with adaptive noise cancellation as front end to speech emotion recognizer. We also introduce a new feature set based on cepstral analysis of pitch and energy contours. Experimental analysis shows promising results.",2010,0, 716,Empirical Analysis of High Maturity Quality Management Practices in a Globally Outsourced Software Development Environment,"The growth of information technology outsourcing and distributed software development have posed new challenges in managing software projects. In this paper we develop and test empirical models of project performance in the context of high maturity distributed software development. By analyzing data collected on more than forty large distributed commercial projects from a leading software vendor we study the effect of resource dispersion between remote development centers on software quality and productivity. Audited and fine grained data collected from these distributed projects operating at very high process maturity level (CMM- 5) enables us to examine the effect of prevention, appraisal and failure-based quality management approaches on software project performance. Our results indicate that dispersion in software tasks between remote development centers negatively affect development productivity. However, we also find that the effect of dispersion on software quality can be mitigated using disciplined and highly mature development processes. Further in contrast to findings from the manufacturing quality literature, we find that failure-based quality management practices do help to improve software project performance. 
These empirical findings could explain the increased adoption of high process maturity frameworks such as CMM-5 among software service firms embarking on distributed development.",2004,0, 717,Empirical analysis on the correlation between GCC compiler warnings and revision numbers of source files in five industrial software projects,"

This article discusses whether using warnings generated by the GNU C++ compiler can be used effectively to identify source code files that are likely to be error prone. We analyze five industrial projects written in C++ and belonging to the telecommunication domain. We find a significant positive correlation between the number of compiler warnings and the number of source files changes. We use such correlation to conclude that compiler warnings may be used as an indicator for the presence of software defects in source code. The result of this research is useful for finding defect-prone modules in newer projects, which lack change history.

",2007,0, 718,Empirical evaluation and review of a metrics-based approach for use case verification.,"In this article, an empirical evaluation and review of some metrics?based verification heuristics for use cases are presented. This evaluation is based on empirical data collected from requirements documents developed by Software Engineering students at the University of Seville using REM, a free XML?based requirements management tool developed by one of the authors. The analysis of the empirical data has not only confirmed the validity of the intuitions that gave rise to the verification heuristics but has also made possible to review and adjust some of their parameters, consequently enhancing their accuracy in predicting defects in use cases. One of the most interesting results derived from the analysis of empirical data is a number of possible enhancements that could be applied to the underlying metamodel of use cases implemented in REM, in which the heuristics are based on, thus providing an important feedback to our current research in Requirements Engineering.",2004,0, 719,"Empirical Evaluation in Software Engineering: Role, Strategy, and Limitations","AbstractThough there is a wide agreement that software technologies should be empirically investigated and assessed, software engineering faces a number of specific challenges and we have reached a point where it is time to step back and reflect on them. Technologies evolve fast, there is a wide variety of conditions (including human factors) under which they can possibly be used, and their assessment can be made with respect to a large number of criteria. Furthermore, only limited resources can be dedicated to the evaluation of software technologies as compared to their development. If we take an example, the development and evaluation of the Unified Modeling Language (UML) as an analysis and design representation, major revisions of the standard are proposed every few years, many specialized profiles of UML are being developed (e.g., for performance and real-time) and evolved, it can be used within the context of a variety of development methodologies which use different subsets of the standard in various ways, and it can be assessed with respect to its impact on system comprehension, the design decision process, but also code generation, test automation, and many other criteria. Given the above statement and example, important questions logically follow: (1) What can be a realistic role for empirical investigation in software engineering? (2) What strategies should be adopted to get the most out of available resources for empirical research? (3) What does constitute a useful body of empirical evidence?",2007,0, 720,Empirical Evaluation of Agile Software Development: The Controlled Case Study Approach,"AbstractAgile software development, despite its novelty, is an important domain of research within software engineering discipline. Agile proponents have put forward a great deal of anecdotal evidence to support the application of agile methods in various application domains and industry sectors. Scientifically grounded empirical evidence is, however, still very limited. Most scientific research to date has been conducted on focused practices performed in university settings. In order to generate impact on both the scientific and practical software engineering community, new approaches are needed for performing empirically validated agile software development studies. 
To meet these needs, this paper presents a controlled case study approach, which has been applied in a study of extreme programming methodology performed in close-to-industry settings. The approach considers the generation of both quantitative and qualitative data. Quantitative data is grounded on three data points (time, size, and defect) and qualitative data on developers research diaries and post-mortem sessions.",2004,0, 721,Empirical evaluation of optimization algorithms when used in goal-oriented automated test data generation techniques,"

Software testing is an essential process in software development. Software testing is very costly, often consuming half the financial resources assigned to a project. The most laborious part of software testing is the generation of test-data. Currently, this process is principally a manual process. Hence, the automation of test-data generation can significantly cut the total cost of software testing and the software development cycle in general. A number of automated test-data generation approaches have already been explored. This paper highlights the goal-oriented approach as a promising approach to devise automated test-data generators. A range of optimization techniques can be used within these goal-oriented test-data generators, and their respective characteristics, when applied to these situations remain relatively unexplored. Therefore, in this paper, a comparative study about the effectiveness of the most commonly used optimization techniques is conducted.

",2007,0, 722,Empirical Paradigm ? The Role of Experiments,"This study analyzes the role of collaborative information technology (CIT) on team performance. We propose that the role of CIT can be more clearly understood by examining how it functions in the particular context of departure of an individual from a work team. Using an analytical model and laboratory experiments, we show that CIT serves to moderate the negative impact of a team memberâs departure, and it can play an indirect but potentially significant role in enhancing group performance. We explain why and how team performance benefits from CIT when departure occurs. Moreover, we employ transactive memory theory to explain how individuals develop and exchange knowledge in a group and how skills and knowledge can be lost due to departure.",2013,0, 723,Empirical Software Engineering: Teaching Methods and Conducting Studies,"While empirical studies in software engineering are beginning to gain recognition in the research community, this subarea is also entering a new level of maturity by beginning to address the human aspects of software development. This added focus has added a new layer of complexity to an already challenging area of research. Along with new research questions, new research methods are needed to study nontechnical aspects of software engineering. In many other disciplines, qualitative research methods have been developed and are commonly used to handle the complexity of issues involving human behaviour. The paper presents several qualitative methods for data collection and analysis and describes them in terms of how they might be incorporated into empirical studies of software engineering, in particular how they might be combined with quantitative methods. To illustrate this use of qualitative methods, examples from real software engineering studies are used throughout",1999,0, 724,Empirical studies in reverse engineering: state of the art and future trends,"

Starting with the aim of modernizing legacy systems, often written in old programming languages, reverse engineering has extended its applicability to virtually every kind of software system. Moreover, the methods originally designed to recover a diagrammatic, high-level view of the target system have been extended to address several other problems faced by programmers when they need to understand and modify existing software. The authors' position is that the next stage of development for this discipline will necessarily be based on empirical evaluation of methods. In fact, this evaluation is required to gain knowledge about the actual effects of applying a given approach, as well as to convince the end users of the positive cost---benefit trade offs. The contribution of this paper to the state of the art is a roadmap for the future research in the field, which includes: clarifying the scope of investigation, defining a reference taxonomy, and adopting a common framework for the execution of the experiments.

",2007,0, 725,Empirical study of industrial decision making for software modernizations,"This paper describes the results of an empirical study focusing on software modernization decision making in software industry. 29 decision making experts were interviewed. The main aim was to gather versatile information about their views by posing 26 questions concerning decision making. Topics of interest of these questions included: decision makers, decision making process, used and needed methods and tools, confirmation of decisions, and decision criteria. Six important themes were identified and discussed: role of intuition, economical evaluation, confirmation of the decisions, group decision making, tool support, and success and limitations of the conducted empirical study. The most important findings include the following: use of intuition in decision making is polarized, economical evaluation is important and pursued but reliable estimation of benefits is hard fo achieve, decisions are seldom confirmed, group decision support aspects appear to be important, and tool support for expert judgment needs to be improved.",2005,0, 726,Employee Security Perception in Cultivating Information Security Culture,"This paper discusses employee security perception perspective. Perception is important as employee behaviour can be influenced by it. The intention is not to attempt an exhaustive literature review, but to understand the perception concept that can be used to cultivate an information security culture within an organisation. The first part highlights some of the concepts of perception. The second part interprets the employee security perception in the case study. Finally, a synthesized perspective on this perception is presented.",2005,0, 727,Employer satisfaction with ICT graduates,"Unlike the fancies of spring, a successful job search must be an affair of the head, not of the heart. Every expectant graduate hopes for a job/career romance that will last well past the early honeymoon in the first job as an engineering professional. This is the time for clear thinking, careful planning and deliberate decision-making to avoid the job market seductions of cleverly orchestrated interviews, well-rehearsed interviewers, exciting locations, and enticing job titles and salaries. The key to job selection consists of putting the passion of the job search moment aside and concentrating on the “morning after” realities of the engineering positions being offered.",1985,0, 728,Enabling an Online Community for Sharing Oral Medicine Cases Using Semantic Web Technologies,"

This paper describes how Semantic Web technologies have been used in an online community for knowledge sharing between clinicians in oral medicine in Sweden. The main purpose of this community is to serve as repository of interesting and difficult cases, and as a support for monthly teleconferences. All information regarding users, meetings, news, and cases is stored in RDF. The community was built using the Struts framework and Jena was used for interacting with RDF.

",2006,0, 729,Enabling the Evolution of Service-Oriented Solutions Using an UML2 Profile and a Reference Petri Nets Execution Platform,"The activities developed by a company (business processes) have to change frequently to adapt to the environment. The implementation of business processes should support these changes without any receding. In this work, we provide with an approach for modelling and executing agile and adaptable business processes. Our approach is based on UML2 separating choreography (stable interaction patterns) and orchestration (implementation of the evolving business process, also called workflows), allowing the transformation and execution of the models by means of a flexible SOA-based dynamic platform based on reference Petri nets.",2008,0, 730,Enacting Proactive Workflows Engine in e-Science,"Composition represents today one of the most challenging approach to design complex software systems, especially in distributed environments. While two different views (in time and in space) are considered by researchers to compose applications, we aim at applying these two views in an integrated approach. In particular, we believe that large-scale composition in an open world is simplified by using composition in time, whereas, in the closed environment of a cluster, composition in space is more effective. In the paper, we propose the integration of a workflow engine with ProActive objects to support together the two kinds of composition at different scale-level. The paper shows the first results of this integration and highlights the plan for achieving a stronger integration with the use of GCM components.",2008,0, 731,Encapsulating Real-Life Experience,"Thermal runaway continues to be a problem in many VRLA battery applications. A test method to verify the propensity for thermal runaway has been published for some time by Bellcore (TR-NWT-000766) as also by another specification (ANSI TI 330-1997). In both, the battery is subjected to progressively increasing voltages and the ensuing temperature and current response analyzed to comment on the propensity for thermal runaway. However, these test methods do not always represent or explain real life incidents. Thus, a combination of exposure to elevated voltage, together with sustained high temperature ambient condition, will increase the danger of thermal runaway. Also, charging (from deep discharge) at high rates and elevated temperature, is yet another prime situation that promotes thermal runaway. Data from special tests to simulate these situations are discussed for possible inclusion in the test procedures",1998,0, 732,Energy-Efficient Wireless Packet Scheduling with Quality of Service Control,"In this paper, we study the problem of packet scheduling in a wireless environment with the objective of minimizing the average transmission energy expenditure under individual packet delay constraints. Most past studies assumed that the input arrivals followed a Poisson process or were statistically independent. However, traffic from a real source typically has strong time correlation. We model a packet scheduling and queuing system for a general input process in linear time-invariant systems. We propose an energy-efficient packet scheduling policy that takes the correlation into account. Meanwhile, a slower transmission rate implies that packets stay in the transmitter for a longer time, which may result in unexpected transmitter overload and buffer overflow. 
We derive the upper bounds of the maximum transmission rate under an overload probability and the upper bounds of the required buffer size under a packet drop rate. Simulation results show that the proposed scheduler improves up to 15 percent in energy savings compared with the policies that assume statistically independent input. Evaluation of the bounds in providing QoS control shows that both deadline misses and packet drops can be effectively bounded by a predefined constraint.",2007,0, 733,Engaging Students in Distributed Software Engineering Courses,"In this paper we describe distributed Scrum augmented with best practices in global software engineering (GSE) as an important paradigm for teaching critical competencies in GSE. We report on a globally distributed project course between the University of Victoria, Canada and Aalto University, Finland. The project-driven course involved 16 students in Canada and 9 students in Finland, divided into three cross-site Scrum teams working on a single large project. To assess learning of GSE competencies we employed a mixed-method approach including 13 post-course interviews, pre-, post-course and iteration questionnaires, observations, recordings of Daily Scrums as well as collection of project asynchronous communication data. Our analysis indicates that the Scrum method, along with supporting collaboration practices and tools, supports the learning of important GSE competencies, such as distributed communication and teamwork, building and maintaining trust, using appropriate collaboration tools, and inter-cultural collaboration.",2013,0, 734,Engaging undergraduates in computer security research,The rapid advances in computer graphics and multimedia security technologies have heralded a new age of explosive growth in multimedia applications such as real-time 3D game and secure multimedia content. The past decade has witnessed a proliferation of powerful multimedia systems and an increasing demand for practice of computer graphics and multimedia security (CGMS). CGMS has moved into the mainstream of multimedia and has become a key technology in determining future research and development activities in many academic and industrial branches.,2011,0, 735,Engaging Youths Via E-Participation Initiatives: An Investigation into the Context of Online Policy Discussion Forums,"AbstractAdvances in information and communication technologies (ICTs) have offered governments new opportunities to enhance citizen participation in democratic processes. The participation opportunities afforded by ICT may be particularly pertinent for youths, who are more likely to be ICT-savvy and yet are reported to show declining participation in politics. The currently increasing exclusion of youths from democratic processes has been attributed to their apathy toward politics and a lack of participation channels for them. ICT as a familiar tool for this specific age group may present an opportunity to elicit youths participation in democratic processes. In this study we examine an e-participation initiative targeted at youths and seek to investigate the factors contributing to their participation in an online discussion forum employed for policy deliberation. We build upon theoretical bases from the political science and information systems literature to construct a research model of participation in online policy discussion forums. 
As an initial study of youths e-participation, our survey indicates that collective and selective incentives may positively impact youths participation intention. In addition, civic skills and political efficacy of individuals may also contribute to their participation. Connectivity with an online policy discussion forum can enhance youths perceptions of selective process incentives while communality negatively impacts their intention to participate. Overall, our study aims to inform theory by showing that existing participation theories may be applicable to youths participation in the electronic context. Further, ICT features (connectivity and communality) are found to have both positive and negative effects on participation. The findings may provide insights to practitioners for promoting inclusion of youths in democratic processes via e-participation initiatives.",2006,0, 736,Engineering approaches toward biological information integration at the systems level.,"Our understanding of biological systems has improved dramatically due to decades of exploration. This process has been accelerated even further during the past ten years, mainly due to the genome projects, new technologies such as microarray, and developments in proteomics. These advances have generated huge amounts of data describing biological systems from different aspects. Still, integrating this knowledge to reconstruct a biological system in silico has been a significant challenge for biologists, computer scientists, engineers, mathematicians and statisticians. Engineering approaches toward integrating biological information can provide many advantages and capture both the static and dynamic information of a biological system. Methodologies, documentation and project management from the engineering field can be applied. This paper discusses the process, knowledge representation and project management involved in engineering approaches used for biological information integration, mainly using software engineering as an example. Developing efficient courses to educate students to meet the demands of this interdisciplinary approach will also be discussed.?",2006,0, 737,Engineering education and the design of intelligent mobile robots for real use.,"Designing mobile robots requires the integration of physical components, sensors, actuators, energy sources, embedded computing and decision algorithms into one system. Additional expertise is also beneficial when the desired robotic platform must fulfill a specific purpose in a real application. This paper describes three initiatives involving robot design projects following different educational approaches. Because mobile robotics is still an emerging technology with important challenges and opportunities for discoveries and applications, designing these systems as part of initiatives in engineering education allows developing proof-of-concept prototypes while providing a stimulating and motivating learning environrnent for engineering students.",2007,0, 738,Engineering Emergence,"The need of the development of computer support techniques for engineering design requires a thorough understanding of the design activity. In this paper, design problem solving methodologies are investigated, and a model of the design process is proposed, based on design synthesis-analysis transformations. The model enables formalizing the problem solving process and promotes a better understanding of design synthesis, analysis, and the phenomenon of emergence in manufacturing. 
A methodological framework for computer-assisted design synthesis is described, and some aspects concerning its practical realization are discussed",1999,0, 739,"Engineering Safety Requirements, Safety Constraints, and Safety-Critical Requirements","Many software failures stem from inadequate requirements engineering. This view has been supported both by detailed accident investigations and by a number of empirical studies; however, such investigations can be misleading. It is often difficult to distinguish between failures in requirements engineering and problems elsewhere in the software development lifecycle. Further pitfalls arise from the assumption that inadequate requirements engineering is a cause of all software related accidents for which the system fails to meet its requirements. This paper identifies some of the problems that have arisen from an undue focus on the role of requirements engineering in the causes of major accidents. The intention is to provoke further debate within the emerging field of forensic software engineering.",2006,0, 740,Enhancing Structured Review with Model-Based Verification,"We propose a development framework that extends the scope of structured review by supplementing the structured review with model-based verification. The proposed approach uses the Unified Modeling Language (UML) as a modeling notation. We discuss a set of correctness arguments that can be used in conjunction with formal verification and validation (V&V) in order to improve the quality and dependability of systems in a cost-effective way. Formal methods can be esoteric; consequently, their large scale application is hindered. We propose a framework based on the integration of lightweight formal methods and structured reviews. Moreover, we show that structured reviews enable us to handle aspects of V&V that cannot be fully automated. To demonstrate the feasibility of our approach, we have conducted a study on a security-critical system - a patient document service (PDS) system.",2004,0, 741,Enhancing Student Learning across Disciplines: A Case Example using a Systems Analysis and Design Course for MIS and ACS Majors,"This paper illustrates an approach used to enhance student learning outcomes in a combined cross-listed Systems Analysis and Design (SA&D) course and examines benefits perceived by students through analysis of assessment and students' feedback. The SA&D course is a required course in both the Management Information Systems (MIS) major and the Applied Computer Science (ACS) major. The SA&D course was taught to a combined cross-listed class of MIS and ACS students over a period of two years. Two strategies were adopted to make the course a worthwhile learning experience for students in both majors. The first was to extend the scope of the course within the System Development Life Cycle spectrum to include planning before analysis and implementation (prototype) after the design. The second strategy was to have a running group project as the main assessment (accounting for 50% of the course grade) where each group had at least one student from each of the two majors. These groups carried out a system development project with four phased deliverables: system proposal, requirements specifications, design specifications and a working prototype with emphasis on user interfaces. This paper provides a comprehensive overview of how the combined cross-listed course was designed, delivered and refined for future offerings. It also examines the value of teamwork using students'
feedback. The students' experiences were studied over a two-year period. Two different instruments were used to gather feedback and to analyze the effectiveness of the combined cross-listed strategy: a qualitative study and a quantitative study concerning student perception on learning enhancement. Initially a qualitative study with open-ended questions was carried out to identify areas for improvement and to examine how well these strategies had worked. The three problems identified were lack of sufficient time for the last phase (working prototype), lack of time for team meetings, and lack of a comprehensive example case. These problems were addressed in subsequent course offerings. The study also revealed that about 80 percent of the students liked working in a mixed group setting on the extended course project, and 75 percent of the students indicated that working on the mixed group project offered them real-world experience. Encouraged by such positive observations, a quantitative study was conducted on students' perceptions concerning specific learning outcomes for carrying out the various system development tasks and the development of skills (including soft skills) among and between the two majors. The results indicated that the students from both majors perceived more than average learning outcomes and skills development. It also indicated that while the ACS students claimed to have learned relatively more on feasibility analysis and information gathering, the MIS students claimed to have learned relatively more on user interface design and architectural design. However, the results indicated that the perceived differences in the learning outcomes between the two majors were not significant. The analysis confirms an enhanced learning outcome for both the ACS and the MIS majors due to knowledge sharing made available through teamwork. ",2005,0, 742,Enjoyment or Engagement? Role of Social Interaction in Playing Massively Mulitplayer Online Role-Playing Games (MMORPGS),"

Based on data collected through 40 in-depth interviews, it is found that (a) the balance between perceived challenges and skills, and (b) the types of in-game social interactions can both facilitate and impede the enjoyment of game playing. Through these two factors, a conclusive link was also found between game enjoyments and a gamer's engagement level. Engaged gamers experience optimal enjoyment more frequently and value the importance of social interactions more than non-engaged gamers. In addition, game enjoyment can be enhanced through game design and it can also be adversely affected by real world contextual factors and technical difficulties. More importantly, the study underlines the importance of social interaction. Social interaction is the key factor that determines the level of engagement of gamers. For engaged gamers, social interaction is essential in this gaming experience. For non-engaged gamers, social interaction is not important and they have little tolerance of negative social interaction within the game.

",2006,0, 743,Ensuring reliable datasets for environmental models and forecasts.,"At the dawn of the 21st century, environmental scientists are collecting more data more rapidly than at any time in the past. Nowhere is this change more evident than in the advent of sensor networks able to collect and process (in real time) simultaneous measurements over broad areas and at high sampling rates. At the same time there has been great progress in the development of standards, methods, and tools for data analysis and synthesis, including a new standard for descriptive metadata for ecological datasets (Ecological Metadata Language) and new workflow tools that help scientists to assemble datasets and to diagram, record, and execute analyses. However these developments (important as they are) are not yet sufficient to guarantee the reliability of datasets created by a scientific process ? the complex activity that scientists carry out in order to create a dataset. We define a dataset to be reliable when the scientific process used to create it is (1) reproducible and (2) analyzable for potential defects. To address this problem we propose the use of an analytic web, a formal representation of a scientific process that consists of three coordinated graphs (a data-flow graph, a dataset-derivation graph, and a process-derivation graph) originally developed for use in software engineering. An analytic web meets the two key requirements for ensuring dataset reliability: (1) a complete audit trail of all artifacts (e.g., datasets, code, models) used or created in the execution of the scientific process that created the dataset, and (2) detailed process metadata that precisely describe all sub-processes of the scientific process. Construction of such metadata requires the semantic features of a high-level process definition language. In this paper we illustrate the use of an analytic web to represent the scientific process of constructing estimates of ecosystem water flux from data gathered by a complex, real-time multi-sensor network. We use Little-JIL, a high-level process definition language, to precisely and accurately capture the analytical processes involved. We believe that incorporation of this approach into existing tools and evolving metadata specifications (such as EML) will yield significant benefits to science. These benefits include: complete and accurate representations of scientific processes; support for rigorous evaluation of such processes for logical and statistical errors and for propagation of measurement error; and assurance of dataset reliability for developing sound models and forecasts of environmental change.",2007,0, 744,Enterprise Architecture and IT Governance: A Risk-Based Approach,"The USCP had enormous challenges with its IT program and support to the internal and external stakeholders of the department, because of a fragile IT infrastructure. The IT program was not able to provide the basic assistance to the end-user, adequate reporting to middle and senior management, and lacked training of IT and end-user staff to venture into the rapidly changing technologies in network management, operating systems, data security, risk management, and systems integration, as well as, the need for innovative data management. 
The need for these services were exacerbated by increased demands on the IT services group and budgetary pressures restricting the resources available to accomplish the mission until an IT governance structure was adopted and the development and implementation of an enterprise architecture with corresponding risk management planning was undertaken. In order to overcome the inadequacies in the IT program, USCP established several ambitious goals for updating its strategic planning process, developing and implementing an enterprise architecture and risk management plan, setting up an IT governance structure to provide the necessary standards and guidance, as well as the relevance, accessibility, and timeliness of its information technology support. The office of information systems, set out to transforming itself into a performance-based organization. The envisioned ""to be"" system architecture helped USCP focus scarce assets on prioritized application and infrastructure projects to directly support USCP mission requirements, both operational and administrative. Additionally an IT security program was implemented to include compliance with FISMA; established a configuration and change management board; instituted earned value management techniques into project management activities, during the system acquisition process",2007,0, 745,Enterprise Resource Planning Model for Connecting People and Organization in Educational Settings,"In this age of overwhelming technological innovation, organizations expect better prospectus and strategy for improving business in their own specific areas of interest. Organizations want to open new horizons by developing new communication techniques for communicating with their clients. With this step, they want to reduce the communication barrier between their clients and the organization itself. In future, organizations will use improved techniques for getting connected with their clients. When we talk about an educational organization it becomes very important for an educational institution to communicate with their students. The educational organizations have improved a lot in providing services to their students. But, there is lots of work to be done in the field of connecting the students with the educational institutions. If there is a good means of communication between students and the organization then there would be more scope for the aspiring students to join the institution. If an educational organization develops a medium for better communication with their students, that will help the university in finding new relations with aspiring students. In my thesis, I will explore the frameworks used in an educational setting using Enterprise Resource Planning model in developing Student community portal. Enterprise Resource Planning is used to integrate all the modules of the community portal, to maintain integrity of the system. I will also find the requirements that are necessary for developing such interactive system. This investigation will be helpful for developing a Student community portal; it helps to know about the services provided by the educational organization and for mutual communication in the student community. By this approach, it becomes very easy for a two way communication between the university and Student community. 
This will help to develop a framework and creates Design Pattern Architecture for the development of interactive system for student community in educational organizations.",2007,0, 746,"Environments, Methodologies and Languages for Supporting Users in building a Chemical Ontology","Ontologies are widely used by different communities for several purposes. With the advent of the Semantic Web, ontologies are becoming increasingly popular amongst members of the scientific community. This is because they provide a powerful way to formally express the nature of a domain or subject area. By defining shared and common domain theories, ontologies help both people and machines to communicate concisely, which promotes knowledge reuse and integration. During the process of building an ontology several questions arise related to the methodologies, tools and languages that should be used in the process of development. The Council for the Central Laboratory of the Research Councils (CCLRC) is developing a Data Portal to store and retrieve experimental data from across the spectrum of the sciences. Ontologies are a key part of this effort as they are used to provide a common indexing mechanism for these data. Therefore this dissertation aims to review how simple tools and techniques can be used to gather ontological information from a community as a form of consensus building. This project is part of a wider effort by the Council for the Central Laboratory of the Research Councils CCLRC to develop a data portal to store and retrieve experimental data from across the spectrum of the sciences. This dissertation discusses how simple tools and techniques can be used to gather ontological information from a community as a form of consensus building. After looking at a number of tools, Protégé and the web ontology language (OWL) were chosen as the best combination for building both heavy and lightweight ontologies. Using these tools, a Topic Map of Chemistry was converted into an ontology. Due to their lack of formal semantics SKOS (Simple knowledge Organisation systems) and topic maps proved unsuitable for building heavy weight ontologies. ",2005,0, 747,EPC markup language (EPML): an XML-based interchange format for event-driven process chains (EPC),"This article presents an XML-based interchange format for event-driven process chains (EPC) that is called EPC markup language (EPML). EPML builds on EPC syntax related work and is tailored to be a serialization format for EPC modelling tools. Design principles inspired by other standardization efforts and XML design guidelines have governed the specification of EPML. After giving an overview of EPML concepts we present examples to illustrate its features including flat and hierarchical EPCs, business views, graphical information, and syntactical correctness.",2006,0, 748,Epistemological and Ontological Representation in Software Engineering,"The AT&T Advanced Software Construction Center (ASCC) has developed a system that provides a powerful implementation of software reliability engineering (SRE) measurement. The Data Analysis and Representation Engine (DARE) supports quantitative analysis from data definition and collection through to metric calculation, visualization and use. Data input is automated from several vendor software development tools, and a form-based interface assists in data collection from developers.
A number of metrics related to SRE are calculated and a metric visualization interface provides access to the graphs and reports via World Wide Web (WWW) browsers. DARE also provides a facility for further data analysis and is integrated with the AT&T Silver Bullet software development processes. This paper describes the design and features of DARE and how it supports SRE",1995,0, 749,EQ-Mine: Predicting Short-Term Defects for Software Evolution,"

We use 63 features extracted from sources such as versioning and issue tracking systems to predict defects in short time frames of two months. Our multivariate approach covers aspects of software projects such as size, team structure, process orientation, complexity of existing solution, difficulty of problem, coupling aspects, time constrains, and testing data. We investigate the predictability of several severities of defects in software projects. Are defects with high severity difficult to predict? Are prediction models for defects that are discovered by internal staff similar to models for defects reported from the field?

We present both an exact numerical prediction of future defect numbers based on regression models as well as a classification of software components as defect-prone based on the C4.5 decision tree. We create models to accurately predict short-term defects in a study of 5 applications composed of more than 8.000 classes and 700.000 lines of code. The model quality is assessed based on 10-fold cross validation.

",2007,0, 750,Ergonomists and Usability Engineers Encounter Test Method Dilemmas with Virtual Work Environments,"

Today's ergonomists and usability engineers need a broad understanding of the characteristics and demands of complex sociotechnical systems in order to develop virtual work systems and mobile communication tools for workers. Familiarity with appropriate ergonomics tests and evaluation methods is a prerequisite of this understanding. The literature review about ergonomics methods was performed. Applicable, potential and inapplicable ergonomics test methods for virtual work systems have been identified, based on the validity analysis and case example. The large number of available methods is confusing for ergonomists and therefore a hierarchical top-down approach is needed for method selection. The issues highlighted in this paper may be useful for ergonomists and usability practitioners who are participating design processes in complex virtual work environments.

",2007,0, 751,Error resilient video over multimedia broadcast multicast services (MBMS,"With data throughput for mobile devices constantly increasing, services such as video broadcast and multicast are becoming feasible. The 3GPP (3rd Generation Partnership Project) committee is currently working on a standard for mobile broadcast and multicast services (MBMS). MBMS is expected to enable easier deployment of video and multimedia services on 3G networks. We present an overview of the standard including the proposed architecture and requirements focusing on radio aspects. We discuss the issue of video error resilience in such services that is critical to maintain consistent quality for terminals. The error resilience techniques currently used in video streaming services are not suitable for MBMS services. We analyze the error resilience techniques that are applicable within the context of MBMS standard and present our early research in this area.",2004,0, 752,Establishing Evidence for Safety Cases in Automotive Systems ? A Case Study,"

The upcoming safety standard ISO/WD 26262 that has been derived from the more general IEC 61508 and adapted for the automotive industry, introduces the concept of a safety case, a scheme that has already been successfully applied in other sectors of industry such as nuclear, defense, aerospace, and railway. A safety case communicates a clear, comprehensive and defensible argument that a system is acceptably safe in its operating context. Although, the standard prescribes that there should be a safety argument, it does not establish detailed guidelines on how such an argument should be organized and implemented, or which artifacts should be provided.

In this paper, we introduce a methodology and a tool chain for establishing a safety argument, plus the evidence to prove the argument, as a concrete reference realization of the ISO/WD 26262 for automotive systems. We use the Goal-Structuring-Notation to decompose and refine safety claims of an emergency braking system (EBS) for trucks into sub-claims until they can be proven by evidence. The evidence comes from tracing the safety requirements of the system into their respective development artifacts in which they are realized.

",2007,0, 753,"Estimating Effort by Use Case Points: Method, Tool and Case Study","Use case point (UCP) method has been proposed to estimate software development effort in early phase of software project and used in a lot of software organizations. Intuitively, UCP is measured by counting the number of actors and transactions included in use case models. Several tools to support calculating UCP have been developed. However, they only extract actors and use cases and the complexity classification of them are conducted manually. We have been introducing UCP method to software projects in Hitachi Systems & Services, Ltd. To effective introduction of UCP method, we have developed an automatic use case measurement tool, called U-EST. This paper describes the idea to automatically classify the complexity of actors and use cases from use case model. We have also applied the U-EST to actual use case models and examined the difference between the value by the tool and one by the specialist. As the results, UCPs measured by the U-EST are similar to ones by the specialist.",2004,0, 754,Estimating software maintenance effort: a neural network approach,"Software Maintenance is an important phase of software development lifecycle, which starts once the software has been deployed at the customer's end. A lot of maintenance effort is required to change the software after it is in operation. Therefore, predicting the effort and cost associated with the maintenance activities such as correcting and fixing the defects has become one of the key issues that need to be analyzed for effective resource allocation and decision-making. In view of this issue, we have developed a model based on text mining techniques using machine learning method namely, Radial Basis Function of neural network. We apply text mining techniques to identify the relevant attributes from defect reports and relate these relevant attributes to software maintenance effort prediction. The proposed model is validated using `Browser' application package of Android Operating System. Receiver Operating Characteristics (ROC) analysis is done to interpret the results obtained from model prediction by using the value of Area Under the Curve (AUC), sensitivity and a suitable threshold criterion known as the cut-off point. It is evident from the results that the performance of the model is dependent on the number of words considered for classification and therefore shows the best results with respect to top-100 words. The performance is irrespective of the type of effort category.",2015,0, 755,Estimating web services reliability: a semantic approach,"Semantic web services have received a significant amount of attention in the last years and many frameworks, algorithms and tools leveraging them have been proposed. Nevertheless surprisingly little effort has been put into the evaluation of the approaches so far. The main blocker of thorough evaluations is the lack of large and diverse test collections of semantic web services. In this paper we analyze requirements on such collections and shortcomings of the state of the art in this respect. Our contribution to overcoming those shortcomings is OPOSSum, a portal to support the community to build the necessary standard semantic web service test collections in a collaborative way.",2008,0, 756,Estimation Practices Efficiencies: A Case Study,"Software Project Estimation has been one of the hot topics of research in the software engineering industry for a long time. 
Solutions for estimation are in great demand. By knowing the estimates early in the software project life cycle, project managers can manage resources efficiently. The objective of this paper is to investigate the estimation practices within an individual software company and to assess their reliability. We perform a methodical review of predictions from a within-company model, based on our analysis of their historical project data. We analyze their estimation practices and compute prediction accuracies and thereby, suggest improvements or modifications. The data analysis revealed that the company used expert judgment in the early years but gradually switched to parametric approaches (calibrated COCOMO II hybrid model). We describe our systematic review of the estimation process, perform experiments and analyze the results. Our findings suggest that these methods should be employed.",2007,0, 757,Ethical Problems Inherent in Psychological Research based on Internet Communication as Stored Information,"AbstractThis paper deals with certain ethical problems inherent in psychological research based on internet communication as stored information. Section 1 contains an analysis of research on Internet debates. In particular, it takes into account a famous example of deception for psychology research purposes. In section 2, the focus is on research on personal data in texts published on the Internet. Section 3 includes an attempt to formulate some ethical principles and guidelines, which should be regarded as fundamental in research on stored information.",2007,0, 758,Ethnographically-informed empirical studies of software practice,"As the build system, i.e. the infrastructure that constructs executable deliverables out of source code and other resources, tries to catch up with the ever-evolving source code base, its size and already significant complexity keep on growing. Recently, this has forced some major software projects to migrate their build systems towards more powerful build system technologies. Since at all times software developers, testers and QA personnel rely on a functional build system to do their job, a build system migration is a risky and possibly costly undertaking, yet no methodology, nor best practices have been devised for it. In order to understand the build system migration process, we empirically studied two failed and two successful attempts of build system migration in two major open source projects, i.e. Linux and KDE, by mining source code repositories and tens of thousands of developer mailing list messages. The major contributions of this paper are: (a) isolating the phases of a common methodology for build system migrations, which is similar to the spiral model for source code development (multiple iterations of a waterfall process); (b) identifying four of the major challenges associated with this methodology: requirements gathering, communication issues, performance vs. complexity of build system code, and effective evaluation of build system prototypes; (c) detailed analysis of the first challenge, i.e., requirements gathering for the new build system, which revealed that the failed migrations did not gather requirements rigorously. Based on our findings, practitioners will be able to make more informed decisions about migrating their build system, potentially saving them time and money.",2012,0, 759,Evaluating ERP projects using DEA and regression analysis,"

Enterprise Resource Planning (ERP) projects appear to be a dream come true. This study evaluates ten ERP projects based on their productivity using Data Envelopment Analysis (DEA). The results of the DEA for preeminent ERP projects are uploaded into a project database. Regression analysis is then applied to the data in the project database to predict the efforts required for new ERP projects to acquire high productivity. Extensive literature survey shows that the DEA is the preeminent slant to evaluate ERP projects. The upshots of the study are: (1) Function Points (FP) and project efforts are the performance indicators of the ERP projects from the viewpoint of software engineering; (2) Lines of Code (LOC) have considerable influence over the efficiency of ERP projects; and (3) DEA, in combination with regression analysis, produces fruitful results for the ERP projects. Future directions in the performance enhancement of ERP projects are also indicated.

",2008,0, 760,Evaluating Graph Kernel Methods for Relation Discovery in GO-Annotated Clusters,"

The application of various clustering techniques for large-scale gene-expression measurement experiments is an established method in bioinformatics. Clustering is also usually accompanied by functional characterization of gene sets by assessing statistical enrichments of structured vocabularies, such as the Gene Ontology (GO) [1]. If different cluster sets are generated for correlated experiments, a machine learning step termed cluster meta-analysis may be performed, in order to discover relations among the components of such sets. Several approaches have been proposed for this step: in particular, kernel methods may be used to exploit the graphical structure of typical ontologies such as GO. Following up the formulation of such approach [2], in this paper we present and discuss further results about its applicability and its performance, always in the context of the well known Spellman's Yeast Cell Cycle dataset [3].

",2007,0, 761,Evaluating guidelines for empirical software engineering studies,"This paper describes an industrial experience of requirements engineering for the business process improvement in one of the biggest electronics companies in Korea. To improve the definition of the TV department business process requirements, we have applied a few methods to facilitate communications amongst stakeholders, based on their requirements. Our methods include iterative review processes, shared templates, and professional technical writing training. After applying our methods in a pilot project, the project stakeholders have confirmed that the approach provides a better requirements understanding and has improved requirements elicitation for the given examples. While implementing the project, we were also able to learn about both technical and nontechnical obstacles. Nontechnical obstacles were created by the organizational culture, including issues such as reduced empowerment, low levels of communication with other stakeholders, and a non-uniformly defined and not clearly understood mission statement. Most of the developers are very good at accomplishing their goals. They are very quick to respond to their management's requests, without excuses. However, often times, stakeholders have usually emphasized the importance of the results, rather than focusing on developing a strong quality process. The quality of work was highly dependent on the product development process. Therefore, in this study, the authors have analyzed the impact on the requirements gathering created by the stakeholders having a manufacturing process background and evaluating most decisions from a manufacturing perspective. The impact of the national cultural work style on the requirements engineering processes was also examined. In the future, we will continue to apply and expand the mentioned findings to further improve the requirements business process management.",2014,0, 762,Evaluating guidelines for reporting empirical software engineering studies.,"Background: Some scientific fields, such as automobile, drugs discovery or engineer have used simulation-based studies (SBS) to faster the observation of phenomena and evolve knowledge. All of them organize their working structure to perform computerized experiments based on explicit research protocols and evidence. The benefits have been many and great advancements are continuously obtained for the society. However, could the same approach be observed in Software Engineering (SE)? Are there research protocols and evidence based models available in SE for supporting SBS? Are the studies reports good enough to support their understanding and replication? AIM: To characterize SBS in SE and organize a set of reporting guidelines aiming at improving SBS' understandability, replicability, generalization and validity. METHOD: To undertake a secondary study to characterize SBS. Besides, to assess the quality of reports to understand the usually reported information regarding SBS. RESULTS: From 108 selected papers, it has been observed several relevant initiatives regarding SBS in software engineering. However, most of the reports lack information concerned with the research protocol, simulation model building and evaluation, used data, among others. SBS results are usually specific, making their generalization and comparison hard. No reporting standard has been observed. CONCLUSIONS: Advancements can be observed in SBS in Software Engineering. 
However, the lack of reporting consistency can reduce understandability, replicability, generalization and compromise their validity. Therefore, an initial set of guidelines is proposed aiming at improving SBS report quality. Further evaluation must be accomplished to assess the guidelines feasibility when used to report SBS in Software Engineering.",2012,0, 763,Evaluating Measurement Models for Web Purchasing Intention,"To predict the intention of the user on the Internet is more important for the e-business. This paper is the first one applying the hidden Markov model, the stochastic tool used in information extraction, in predicting the behavior of the users on the Web. We collect the log of Web servers, clean the data and patch the paths that the users pass by. Based on the HMM, we construct a specific model for the Web browsing that can predict whether the users have the intention to purchase in real time. The related measures, such as speeding up the operation, kindly guide and other comfortable operations, can take effects when a user is in a purchasing mode. The simulation shows that our model can predict the purchase intention of uses with a high accuracy.",2005,0, 764,Evaluating Object-Oriented Designs with Link Analysis,"The hyperlink induced topic search algorithm, which is a method of link analysis, primarily developed for retrieving information from the Web, is extended in this paper, in order to evaluate one aspect of quality in an object-oriented model. Considering the number of discrete messages exchanged between classes, it is possible to identify ""God"" classes in the system, elements which imply a poorly designed model. The principal eigenvectors of matrices derived from the adjacency matrix of a modified class diagram, are used to identify and quantify heavily loaded portions of an object-oriented design that deviate from the principle of distributed responsibilities. The non-principal eigenvectors are also employed in order to identify possible reusable components in the system. The methodology can be easily automated as illustrated by a Java program that has been developed for this purpose.",2004,0, 765,Evaluating Pair Programming with Respect to System Complexity and Programmer Expertise,"A total of 295 junior, intermediate, and senior professional Java consultants (99 individuals and 98 pairs) from 29 international consultancy companies in Norway, Sweden, and the UK were hired for one day to participate in a controlled experiment on pair programming. The subjects used professional Java tools to perform several change tasks on two alternative Java systems with different degrees of complexity. The results of this experiment do not support the hypotheses that pair programming in general reduces the time required to solve the tasks correctly or increases the proportion of correct solutions. On the other hand, there is a significant 84 percent increase in effort to perform the tasks correctly. However, on the more complex system, the pair programmers had a 48 percent increase in the proportion of correct solutions but no significant differences in the time taken to solve the tasks correctly. For the simpler system, there was a 20 percent decrease in time taken but no significant differences in correctness. However, the moderating effect of system complexity depends on the programmer expertise of the subjects. 
The observed benefits of pair programming in terms of correctness on the complex system apply mainly to juniors, whereas the reductions in duration to perform the tasks correctly on the simple system apply mainly to intermediates and seniors. It is possible that the benefits of pair programming will exceed the results obtained in this experiment for larger, more complex tasks and if the pair programmers have a chance to work together over a longer period of time",2007,0, 766,Evaluating performances of pair designing in industry,"This study uses data envelopment analysis (DEA) to explore the efficiency of the computer communication equipment industry in United States. The financial data of this study are obtained from the COMPUSTAT database, and the patent data are collected from the United States Patent and Trademark Office (USPTO) database from 2002 to 2004. Moreover, the input variables of this study are total assets, R&D expenditures, and employee productivity, and the output variables are patent counts and patent citations. The average efficiency score of the CCR model and that of the BCC model are 17.21% and 24.56%, and there are three efficient firms in the CCR model while there are five efficient firms in the BCC model. Besides, this study finds out that there is the advantage of firm size for patent performance, and demonstrates that R&D expenditures and employee productivity have positive effects for patent performance in this industry. Results of this study don't only provide a valuable reference for managers of computer communication equipment companies in reviewing their patent performance and efficiency, but also find out there is the advantage of firm size and suggest them to enhance their employee productivity and R&D expenditures.",2007,0, 767,Evaluating Quality of AI-Based Systems,"The Brazilian energy sector's recently adopted deverticalized model imposes new rules that must be established in such a way as to satisfy the consumer market. In this context, the power quality itself has become one of the most important issues to be attended. During the last two years the Brazilian Transmission ISO (ONS-Operador Nacional do Sistema Eletrico) has established indices and standards in order to accomplish the adequate power quality (PQ) performance in the basic transmission grid. However, the establishment of indices and standards corresponds the first stage of the management process. As a second stage, the power quality performance throughout the transmission grid should be followed, including the aspects related with measurement strategies, measurement devices, data acquisition philosophy, communication protocols, data base requirements, analysis software to support action decisions, and so forth. In order to support the decisions related with the mentioned first stage, the ONS, in cooperation with the players and their representative agencies, the national electric energy agency (ANEEL), universities and research centers, has implementing measurement campaigns in order to know the actual basic grid performance concerning some power quality indices as well as, in some cases, to evaluate the measurement confidence itself. We present some news and conclusions related with recent measurement campaign (flicker, harmonic distortion and voltage sag) developed by ONS. In order to follow the basic grid power quality performance-second stage - the ONS is developing basic strategies concerning a power quality management system (PQMS) to be implemented in the near future. 
We present the conceptual issues of this system for some power quality performance indices concerning issues like how, when and where to measure.",2002,0, 768,Evaluating the Effectiveness of Tutorial Dialogue Instruction in an Exploratory Learning Context,"

In this paper we evaluate the instructional effectiveness of tutorial dialogue agents in an exploratory learning setting. We hypothesize that the creative nature of an exploratory learning environment creates an opportunity for the benefits of tutorial dialogue to be more clearly evidenced than in previously published studies. In a previous study we showed an advantage for tutorial dialogue support in an exploratory learning environment where that support was administered by human tutors [9]. Here, using a similar experimental setup and materials, we evaluate the effectiveness of tutorial dialogue agents modeled after the human tutors from that study. The results from this study provide evidence of a significant learning benefit of the dialogue agents.

",2006,0, 769,Evaluating the efficacy of test-driven development: industrial case studies,"Test driven development (TDD) is a software engineering technique to promote fast feedback, task-oriented development, improved quality assurance and more comprehensible low-level software design. Benefits have been shown for non-reusable software development in terms of improved quality (e.g. lower defect density). We have carried out an empirical study of a framework of reusable components, to see whether these benefits can be shown for reusable components. The framework is used in building new applications and provides services to these applications during runtime. The three first versions of this framework were developed using traditional test-last development, while for the two latest versions TDD was used. Our results show benefits in terms of reduced mean defect density (35.86%), when using TDD, over two releases. Mean change density was 76.19% lower for TDD than for test-last development. Finally, the change distribution for the TDD approach was 33.3% perfective, 5.6% adaptive and 61.1% preventive.",2008,0, 770,Evaluating the learning effectiveness of using simulations in software project management education: Results from a twice replicated experiment,"Due to increasing demand for software project managers in industry, efforts are needed to develop the management-related knowledge and skills of the current and future software workforce. In particular, university education needs to provide to their computer science students not only with technology-related skills but, in addition, a basic understanding of typical phenomena occurring in industrial (and academic) software projects. The paper presents a controlled experiment that evaluates the effectiveness of using a process simulation model for university education in software project management. The experiment uses a pre-test-post-test control group design with random assignment of computer science students. The treatment of the experimental group involves a system dynamics simulation model. The treatment of the control group involves a conventional predictive model for project planning, i.e. the well-known COCOMO model. In addition to the presentation of the results of the empirical study, the paper discusses limitations and threats to validity. Proposals for modifications of the experimental design and the treatments are made for future replications",2001,0, 771,Evaluating The PLUSS Domain Modeling Approach by Modeling the Arcade Game Maker Product Line,"Most published approaches for software product line engineering only address the software problems but not the systems problems. To tackle that problem the PLUSS Domain Modeling approach has been introduced at system level for requirements reuse within the systems engineering process. The PLUSS approach (Product Line Use case modeling for Systems and Software engineering) is a domain modeling method that utilizes Features, use cases and Use case realizations. An Arcade Game Maker Product Line example is used to evaluate the PLUSS approach. In this evolution the PLUSS notations for Feature Modeling and Use Case modeling are used to identify the similarities and variations between the three game products of an Arcade Game Maker Product Line. In this evaluation process some evolution criteria were defined and graded according to them. 
The results show that the PLUSS approach provides good overview of the domain with easily understandable documentation when compared with some standard notations of domain modeling. Hence the PLUSS approach is a good domain modeling approach and can be applied on any domain which is in the software product line strategy.",2005,0, 772,Evaluating user interactions with clinical information systems: a model based on human-computer interaction models,"Objectives: This article proposes a model for dimensions involved in user evaluation of clinical information systems (CIS). The model links the dimensions in traditional CIS evaluation and the dimensions from the human-computer interaction (HCI) perspective.Proposed method: In this article, variables are defined as the properties measured in an evaluation, and dimensions are defined as the factors contributing to the values of the measured variables. The proposed model is based on a two-step methodology with: (1) a general review of information systems (IS) evaluations to highlight studied variables, existing models and frameworks, and (2) a review of HCI literature to provide the theoretical basis to key dimensions of user evaluation.Results: The review of literature led to the identification of eight key variables, among which satisfaction, acceptance, and success were found to be the most referenced.Discussion: Among those variables, IS acceptance is a relevant candidate to reflect user evaluation of CIS. While their goals are similar, the fields of traditional CIS evaluation, and HCI are not closely connected. Combining those two fields allows for the development of an integrated model which provides a model for summative and comprehensive user evaluation of CIS. All dimensions identified in existing studies can be linked to this model and such an integrated model could provide a new perspective to compare investigations of different CIS systems.",2005,0, 773,Evaluating Web Services: Towards a framework for emergent contexts.,"The paper develops an evaluation framework with specific reference to Web Services. It is argued that the essential characteristics for such an approach, noted as qualitative, are captured in these constructs through an augmentation of theoretical considerations and empirical findings. A review of the innovation and diffusion literature indicates a considerable amount of research where attention is given to a range of features which may support Web Service adoption. It is argued that the framework proposed in this paper is of value in highlighting the specific situations for an effective evaluation in this respect.",2005,0, 774,Evaluation of commercial web engineering processes.,"Our time is branded by the orientation towards e-learning in several education fields. Yet, the remote teaching means satisfying the needs of specific engineering studies remain limited to simulated applications or virtual laboratories which involve a reduced assimilation of the taught material. This paper introduces a real laboratory control based on a Web embedded system and an interactive Web application. The laboratory is designed for process engineering education. In order to conceive a Web-based process management for learning features, five essential design issues have been investigated: requirement specification, architecture selection, system implementation, design of a Web-based human-computer interface, and an access control system for the interactive learning environment and work validation. 
A didactic lift is used as an engineering educational process to demonstrate our design methodology. A set of software modules is embedded in the local control system in order to be shared by multiple communicating users. The time delay due to Internet traffic has been overcome by using a miniature Web server dedicated to the Web-based laboratory supervision and control. The experimental results have shown that the Internet-based real laboratory offers similar behaviour to a local laboratory.",2006,0, 775,Evaluation of Hospital Portals Using Knowledge Management Mechanisms,"

Hospital portals are becoming increasingly popular since they play an important role in providing, acquiring and exchanging information. Knowledge management (KM) mechanisms will be useful to hospitals that need to manage health-related information, and to exchange and share information with their patients and visitors. This paper presents a comprehensive analysis of knowledge management mechanisms used by 20 hospital portals from North America and Asia to access, create and transfer knowledge. We developed a systematic and structured approach to evaluate how well the portals captured and delivered information to patients and visitors about the hospitals' business processes, products, services, and customers from the perspective of three KM mechanisms (i.e. knowledge access, knowledge creation and knowledge transfer). Our results show that our selected hospital portals provided varying degrees of support for these KM mechanisms.

",2007,0, 776,Evaluation of integrated software development environments: Challenges and results from three empirical studies,"Evidence shows that integrated development environments (IDEs) are too often functionality-oriented and difficult to use, learn, and master. This article describes challenges in the design of usable IDEs and in the evaluation of the usability of such tools. It also presents the results of three different empirical studies of IDE usability. Different methods are sequentially applied across the empirical studies in order to identify increasingly specific kinds of usability problems that developers face in their use of IDEs. The results of these studies suggest several problems in IDE user interfaces with the representation of functionalities and artifacts, such as reusable program components. We conclude by making recommendations for the design of IDE user interfaces with better affordances, which may ameliorate some of most serious usability problems and help to create more human-centric software development environments.",2005,0, 777,"EVALUATION OF JUPITER: A LIGHTWEIGHT CODE REVIEW FRAMEWORK","As an important component in NASA's new frontiers program, the Jupiter polar orbiter (Juno) mission is designed to investigate in-depth physical properties of Jupiter. It will include the giant planet's ice-rock core and atmospheric studies as well as exploration of its polar magnetosphere. It will also provide the opportunity to understand the origin of the Jovian magnetic field. Due to severe radiation environment of the Jovian system, this mission inherently presents a significant technical challenge to attitude control system (ACS) design since the ACS sensors must survive and function properly to reliably maneuver the spacecraft throughout the mission. Different gyro technologies and their critical performance characteristics are discussed, compared and evaluated to facilitate a choice of appropriate gyro-based inertial measurement unit to operate in a harsh Jovian environment to assure mission success.",2007,0, 778,Evaluation of object-oriented design patterns in game development,"In this study, we developed a game-based learning system on a social network platform to instruct customs and cultures in the English speaking countries in order to investigate the increase of learning willingness and motivation by using the digital game-based learning(DBGL) model. To determine the learning behaviors and results of the learning design which combines game-based and social network theories, this study involved students from an elementary school to implement a multi-player, real-time quiz game, known as 'Challenger', on the Facebook platform. As players, the learners can ""Call Out"" for help or assistance; however, online friends hold the decision of ""Replying"" or ""Not to Reply"" the players' ""Call Outs"". This mechanism is so called ""Peer Feedback"". Results of this study showed that the DGBL incorporated into a social network website is a feasible and sound model for teaching. By the model, English learning will become more interesting which makes students more enjoyable in learning auxiliary materials after school if they cannot fully comprehend the course in the classroom. 
By the outcome, we found peer influence seems not only motivates learner's participation in the game but also catalyze learning effectiveness.",2012,0, 779,Evaluation of selected data mining algorithms implemented in Medical Decision Support Systems,"The goal of this master?s thesis is to identify and evaluate data mining algorithms which are commonly implemented in modern Medical Decision Support Systems (MDSS). They are used in various healthcare units all over the world. These institutions store large amounts of medical data. This data may contain relevant medical information hidden in various patterns buried among the records. Within the research several popular MDSS?s are analyzed in order to determine the most common data mining algorithms utilized by them. Three algorithms have been identified: Na?ve Bayes, Multilayer Perceptron and C4.5. Prior to the very analyses the algorithms are calibrated. Several testing configurations are tested in order to determine the best setting for the algorithms. Afterwards, an ultimate comparison of the algorithms orders them with respect to their performance. The evaluation is based on a set of performance metrics. The analyses are conducted in WEKA on five UCI medical datasets: breast cancer, hepatitis, heart disease, dermatology disease, diabetes. The analyses have shown that it is very difficult to name a single data mining algorithm to be the most suitable for the medical data. The results gained for the algorithms were very similar. However, the final evaluation of the outcomes allowed singling out the Na?ve Bayes to be the best classifier for the given domain. It was followed by the Multilayer Perceptron and the C4.5.",2007,0, 780,Evaluation of techniques for manufacturing process analysis,"In response to the PhysioNet/CinC Challenge 2013: Noninvasive Fetal ECG [1] we developed an algorithm for fetal QRS (fQRS) positions estimation based on a set of classic filters, which enhances the fetal ECG, combined with a robust QRS detection technique based on Christov's beat detection algorithm. These steps provides necessary information for the maternal ECG (mECG) cancellation, which is based on the technique provided by the Challenge organizers. Our work extends the provided algorithm with mECG reduction quality check and in case of insufficient reduction the mECG reduction algorithm is applied again until the criteria for sufficient reduction based on energy around the maternal QRS complex are satisfied. After noise reduction two techniques for fQRS were applied - one provided by the organizers and second based on entropy estimation. Results from both detectors are then corrected creating another set of fQRS positions estimates and from all sets of fQRS estimates there is selected one with the smallest standard deviation of fetal R-R distances. Our method results are 249.784 for Event 1/4 and 21.989 for Event 2/5 respectively. We did not participate in Event 3 - QT interval estimation.",2013,0, 781,Evaluation of the Quality of Ultrasound Image Compression by Fusion of Criteria with a Genetic Algorithm,"In the framework of a robotized tele-echography, ultrasound images are compressed and sent from a patient station to an expert one. An important task concerns the evaluation of the quality of the compressed images. Indeed, transmitted images are the only feedback information available to the medical expert to remotely control the distant robotized system and to propose a diagnosis. 
Our objective is to measure the image quality with a statistical criterion and with the same reliability as the medical assessment. We propose in this work a new method for the comparison of compression results. The proposed approach combines different statistical criteria and uses the medical assessment in a training phase with a support vector machine. We show the benefit of this methodology through some experimental results.",2005,0, 782,Evaluation of Visual Aid Suite for Desktop Searching,"

The task of searching for documents is becoming more challenging as the volumes of data stored continue to increase, and retrieval systems produce longer result lists. Graphical visualisations can assist users to more efficiently and effectively understand large volumes of information. This work investigates the use of multiple visualisations in a desktop search tool. These visualisations include a List View, Tree View, Map View, Bubble View, Tile View and Cloud View. A preliminary evaluation was undertaken by 94 participants to gauge its potential usefulness and to detect usability issues with its interface and graphical presentations. The evaluation results show that these visualisations made it easier and quicker for them to find relevant documents. All of the evaluators found at least one of the visualisations useful and over half of them found at least three of the visualisations to be useful. The evaluation results support the research premise that a combination of integrated visualisations will result in a more effective search tool. The next stage of work is to improve the current views in light of the evaluation findings in preparation for the scalability and longitudinal tests for a series of increasingly larger result sets of documents.

",2007,0, 783,Evaluation: An Imperative to Do No Harm,"The focus of the paper is on the comparison of results obtained using group screening versus not using group screening in an experimental design methodology applied to a semiconductor manufacturing simulation model. The experiments were performed on the cycle time for the main product in the fabrication, which takes about 250 steps before completion. High utilization and large queue sizes were the basis for determining the five most critical workstations in the fabrication. Three parameters for each workstation were set as factors for investigation plus another more general important factor making a total of 16 input factors. A 2-stage group-screening experiment and a 2k-p factional factorial were performed to identify the significant factors affecting the cycle time for the product. The results showed that the two methods could be very similar or very different depending on the choice of significance level for group screening, particularly at the early stages of eliminating group factors",2000,0, 784,Evaluations of an Information and Communication Technology (ICT) Training Programme for Persons with Intellectual Disabilities,"Abstract106 persons with intellectual disabilities were recruited for the evaluation of an information and communication technology (ICT) training programme (77 in the experimental and 29 in the control group). The main features of the programme were a specially designed training curriculum with software designed in appropriate language and appropriate levels for people with intellectual disabilities. In the training programme, participants were taught about the operations of mouse and keyboard and browsing the Internet using Internet Explorer (IE). Participants in the control group underwent equal number of hours of ICT training by the staff working in their centers. All participants were assessed on ICT competence at pre- and post-training and one month follow up using a skill-based checklist. Results from repeated measure ANOVA and t-tests showed that participants acquired a higher level of computer competence after training and retained skills within one-month follow-up period, [F (75) = 70.06, p=.000]. For the control group, there was no statistically significant difference in the score on sub-tasks of use of mouse and keyboard [t(28) = 1.51, p > .05], the sub-task of internet browsing [t(28) = 1.00, p > .05] and the overall score [t(28) = .90, p > .05]. Results indicated that persons with intellectual disabilities have the capacity to learn ICT skills in a structured group with appropriate learning assistance and appropriate training tools.",2004,0, 785,EVEDIN: A system for automatic evaluation of educational influence,"These days' people who are learning or getting an education need significantly less time to reach the information they need. It is mostly caused by and increased use of ICT which is present in almost all segments of human activities. The increased use of ICT in education and learning is manifested in increased demands in using e-learning systems, which nowadays know no cultural, national or language barriers. 
Regardless of the community in which a system is applied in education or learning, in most cases the principal purpose of such systems is informing or applying some form of lifelong education so further development of such systems undoubtedly depends on constant evaluation of the existing solutions and on systematic creation of new ones.",2007,0, 786,Evidence-Based Cost Estimation for Better-Quality Software,"Our work on COSEEKMO is hardly enough to change experimental methods in the cost estimation community. So, we're also running a new workshop series called PROMISE devoted to repeatable software engineering experiments. Evidence-based reasoning is becoming common in many fields. Evidence-based approaches demand that, among other things, practitioners systematically track down the best evidence relating to some practice; critically relating to some practice; critically appraise that evidence for validity, impact, and applicability; and carefully document it. The software community can bring to bear many methods that could further improve evidence-based cost estimation",2006,0, 787,Evidence-based practice in human-computer interaction and evidence maps,"At the onset of evidence-based practice in software engineering, prospective disciples of this approach should inspect and learn from similar attempts in other disciplines. Having participated in the National Cancer Institute's multi-year effort compiling evidence-based guidelines for information-rich web-site design, I bring my personal experiences as a member of that group to the discussions at the workshop. From my experience doing other empirical research, I propose using an evidence map to communicate research questions, the available evidence to answer those questions, the relationship between the questions, and the meaning of different paths through the evidence map. I have used this device for several empirical studies, both in HCI and in software engineering, and have found it to be a useful organization tool that could help in pursuing evidence-based software engineering.",2005,0, 788,Evidence-Based Software Engineering,"Evidence-based research has been matured and established in many other disciplines such as in Medicine and Psychology. One of the methods that has been widely used to support evidence-based practices is the Systematic Literature Review (SLR) method. The SLR is a review method that aims to provide unbiased or fair evaluation to existing research evidence. The aim of this study is to gather the trends of evidence-based software engineering (SE) research in Malaysia in particular to identify the usage of SLR method among researchers, academics or practitioners. Based on our tertiary study, we found only 19 published work utilizing evidence-based practices in Malaysia within SE and Computer Science related domains. We have also conducted a survey during SLR workshops for the purpose of gathering perceptions on using SLR. The survey was participated by 78 academics and researchers from five universities in Malaysia. Our findings show that researchers in this country are still at preliminary stage in practicing evidence-based approach. 
We believe that knowledge and skill on using SLR should be promoted to encourage more researchers to apply it in their research.",2014,0, 789,Evidence-Based Software Engineering and Systematic Literature Reviews,"Software outsourcing partnership (SOP) is mutually trusted inter-organisational software development relationship between client and vendor organisations based on shared risks and benefits. SOP is different to conventional software development outsourcing relationship, SOP could be considered as a long term relation with mutual adjustment and renegotiations of tasks and commitment that exceed mere contractual obligations stated in an initial phase of the collaboration. The objective of this research is to identify various factors that are significant for vendors in conversion of their existing outsourcing contractual relationship to partnership. We have performed a systematic literature review for identification of the factors. We have identified a list of factors such as 'mutual interdependence and shared values', 'mutual trust', 'effective and timely communication', 'organisational proximity' and 'quality production' that play vital role in conversion of the existing outsourcing relationship to a partnership.",2014,0, 790,Evidence-Based Software Engineering for Practitioners,"Software managers and practitioners often must make decisions about what technologies to employ on their projects. They might be aware of problems with their current development practices (for example, production bottlenecks or numerous defect reports from customers) and want to resolve them. Or, they might have read about a new technology and want to take advantage of its promised benefits. However, practitioners can have difficulty making informed decisions about whether to adopt a new technology because there's little objective evidence to confirm its suitability, limits, qualities, costs, and inherent risks. This can lead to poor decisions about technology adoption. Software engineers might make incorrect decisions about adopting new techniques it they don't consider scientific evidence about the techniques' efficacy. They should consider using procedures similar to ones developed for evidence-based medicine. Software companies are often under pressure to adopt immature technologies because of market and management pressures. We suggest that practitioners consider evidence-based software engineering as a mechanism to support and improve their technology adoption decisions.",2005,0, 791,Evolution of a Parallel Performance System,The possibility of a routing accelerator for VLSI routing based on general-purpose processors and a general-purpose architecture which can be used to speed up other phases of the VLSI design cycle is investigated. Research into a novel technique for exploiting parallelism in the automatic routing process for hierarchical VLSI circuit design is presented. The routing tools are incorporated into a routing system which can be used to exploit parallelism in the routing process whilst making the fullest use of the structural hierarchy of the VLSI layout. In VLSI design routing is performed in two stages: loose routing followed by detailed routing. 
The model clearly identifies where parallelism can be best exploited in each of these stages,1989,0, 792,Evolutionary Dilemmas in a Social Network,"A culturally diverse group of people are now participating in military multinational coalition operations (e.g., combined air operations center, training exercises such as Red Flag at Nellis AFB, NATO AWACS), as well as in extreme environments. Human biases and routines, capabilities, and limitations strongly influence overall system performance; whether during operations or simulations using models of humans. Many missions and environments challenge human capabilities (e.g., combat stress, waiting, fatigue from long duty hours or tour of duty). This paper presents a team selection algorithm based on an evolutionary algorithm. The main difference between this and the standard EA is that a new form of objective function is used that incorporates the beliefs and uncertainties of the data. Preliminary results show that this selection algorithm will be very beneficial for very large data sets with multiple constraints and uncertainties. This algorithm will be utilized in a military unit selection tool",2007,0, 793,Evolutionary Discovery of Arbitrary Self-replicating Structures,"Previous computational models of self-replication using cellular automata (CA) have been manually designed, a difficult and time-consuming process. We show here how genetic algorithms can be applied to automatically discover rules governing self-replicating structures. The main difficulty in this problem lies in the choice of the fitness evaluation technique. The solution we present is based on a multiobjective fitness function consisting of three independent measures: growth in number of components, relative positioning of components, and the multiplicity of replicants. We introduce a new paradigm for CA models with weak rotational symmetry, called orientation-insensitive input, and hypothesize that it facilitates discovery of self-replicating structures by reducing search-space sizes. Experimental yields of self-replicating structures discovered using our technique are shown to be statistically significant. The discovered self-replicating structures compare favorably in terms of simplicity with those generated manually in the past, but differ in unexpected ways. These results suggest that further exploration in the space of possible self-replicating structures will yield additional new structures. Furthermore, this research sheds light on the process of creating self-replicating structures, opening the door to future studies on the discovery of novel self-replicating molecules and self-replicating assemblers in nanotechnology",1997,0, 794,"Evolutionary software engineering, a review","The author describes a software package, running under MSDOS, developed to assist lecturers in the assessment of software assignments. The package itself does not make value judgments upon the work, except when it can do so absolutely, but displays the students' work for assessment by qualified staff members. The algorithms for the package are presented, and the functionality of the components is described. 
The package can be used for the assessment of software at three stages in the development process: (1) algorithm logic and structure, using Warnier-Orr diagrams; (2) source code structure and syntax in Modula-2; and (3) runtime performance of executable code",1992,0,795 795,"Evolutionary software engineering, a review.","The author describes a software package, running under MSDOS, developed to assist lecturers in the assessment of software assignments. The package itself does not make value judgments upon the work, except when it can do so absolutely, but displays the students' work for assessment by qualified staff members. The algorithms for the package are presented, and the functionality of the components is described. The package can be used for the assessment of software at three stages in the development process: (1) algorithm logic and structure, using Warnier-Orr diagrams; (2) source code structure and syntax in Modula-2; and (3) runtime performance of executable code",1992,0, 796,Evolving an experience base for software process research,"Software has gained a critical role in the automotive domain that is becoming more and more complex. The ever-growing complexity in automotive software development is due to its high variability. In this scenario, automotive companies need to adopt cost-effective development processes in order to manage the variability of the produced software. A well-known solution for dealing with this problem is the adoption of Software Product Lines (SPL). In this paper we report an experience we performed in collaboration with the Fiat Chrysler Automobiles (FCA) company for the application of the SPL in one of its Model-Based Design (MBD) processes. SPL were supported by AutoMative, a software infrastructure we implemented for the semi-automatic generation of Product Architectures from specification documents.",2016,0, 797,Evolving Case-Based Reasoning with Genetic Algorithm in Wholesaler?s Returning Book Forecasting,"AbstractIn this paper, a hybrid system is developed by evolving Case-Based Reasoning (CBR) with Genetic Algorithm (GA) for reverse sales forecasting of returning books. CBR systems have been successfully applied in several domains of artificial intelligence. However, in conventional CBR method each factor has the same weight which means each one has the same influence on the output data that does not reflect the practical situation. In order to enhance the efficiency and capability of forecasting in CBR systems, we applied the GAs method to adjust the weights of factors in CBR systems, GA/CBR for short. The case base of this research is acquired from a book wholesaler in Taiwan, and it is applied by GA/CBR to forecast returning books. The result of the prediction of GA/CBR was compared with other traditional methods.",2005,0, 798,Examining IT professionals' adaptation to technological change: the influence of gender and personal attributes,"This paper examines the challenge of adapting to technological changes in IS departments. It develops a set of hypotheses about how two personal attributes (tolerance of ambiguity and openness to experience) will be associated with IT professionals' ability to adapt to a technological innovation. It also examines the literature on gender in the IT profession, positing that women IT employees will exhibit some differences in job performance (relative to men), but no differences in terms of job satisfaction or turnover intentions. 
Based on a mixed-method study of two firms that were adopting client/server development, the paper first describes the different implementation strategies employed by each firm, and then analyzes employees' responses to the change. In combining the insights from both case studies and surveys, the results showed that four out of eight hypotheses were fully supported and two received partial support. Women reported lower job satisfaction on a dimension that captures job stress, and this effect was exacerbated in the firm that expected its IT employees to demonstrate considerable initiative to master the innovation. In contrast, the women at the second firm, while showing no differences in job stress (relative to their male peers), nevertheless exhibited a very different pattern of job skills and performance than the men. Finally, the personal attribute that was strongly associated with employees' job satisfaction (openness to experience) was negatively correlated with one aspect of job performance - directly opposite to what was hypothesized. The paper concludes with insights for IS researchers and managers interested in IS personnel and technology implementation.",2004,0, 799,Examining The Antecedents To Innovation In Electronic Networks Of Practice,"AbstractThe way in which firms innovate ideas and bring them to market is undergoing a fundamental change. Useful knowledge is increasingly dispersed outside the firms boundaries and the exceptionally fast time to market for many products and services suggest that some very different organising principles for innovation are needed. These developments have led to an increased interest in the electronic network of practice concept to facilitate innovation. This paper argues that innovative behaviour in electronic networks of practice is determined by three interacting systems individual motivations, network communication structure, and the social context of the network. The theoretical position of the interactive process theoiy of innovation is used to support this claim.",2007,0, 800,Examining the relationship between gender and the research productivity of IS faculty,"In this study, we examine whether there exist gender differences between the rates of scholarly publications by IS researchers. Triggered, in part, by a recent study of so-called ""top"" IS researchers that featured just two women out of the leading 30 IS scholars [24], we sought to determine whether women IS scholars publish at rates similar to their male counterparts in the leading, scholarly IS journals. Using a different ""basket"" of 12 IS journals, our results showed that, of IS researchers who had published at least three papers in these journals, approximately 17% were women - a figure that is slightly less than the 21% of women IS faculty that we estimated. We also found that women comprised 13 of the Top 76 IS researchers for the period 1999-2003 (17%), and 42 of the top 251 IS scholars with three of more publications in these journals (16.7% women). Our study raises several implications for how to assess whether women have achieved equity in the IS academic field.",2006,0, 801,"Examining the Relationship Between Individual Characteristics, Product Characteristics, and Media Richness Fit on Consumer Channel","

This study examines the relationship between individual characteristics, product characteristics and media richness fit to explain the consumer channel preference. Based on prior research, hypotheses were tested with a sample of 749 consumers. The results show that the media richness fit moderates the degree of the relationship between the individual/product characteristics and the consumer channel preference. Thus, depending on the level of the media richness fit, the level of confidence in the channel, the attitude towards the channel, the experience level with the channel, the perceived risk vis-à-vis the channel, the perceived product complexity, the perceived product intangibility, and the consumer's product involvement correlate differently with the consumer channel preference. Theoretical and managerial implications of the findings and avenues for future research are discussed.

",2007,0, 802,"Examining the Relationship Between Individual Characteristics, Product Characteristics, and Media Richness Fit on Consumer Channel Preference","

This study examines the relationship between individual characteristics, product characteristics and media richness fit to explain the consumer channel preference. Based on prior research, hypotheses were tested with a sample of 749 consumers. The results show that the media richness fit moderates the degree of the relationship between the individual/product characteristics and the consumer channel preference. Thus, depending on the level of the media richness fit, the level of confidence in the channel, the attitude towards the channel, the experience level with the channel, the perceived risk vis-à-vis the channel, the perceived product complexity, the perceived product intangibility, and the consumer's product involvement correlate differently with the consumer channel preference. Theoretical and managerial implications of the findings and avenues for future research are discussed.

",2007,0, 803,Examining the role of general and firm-specific human capital in predicting IT professionals' turnover behaviors,"This study examines the effects of general and firm-specific human capital on IT professionals' turnover behaviors. In doing so, we make two contributions to IT research. First, we examine actual turnover behaviors rather than turnover intentions. Second, we go beyond prior IT turnover research to hypothesize and test a curvilinear relationship between human capital predictors and turnover behavior. Using survival analysis, we analyze archival work history data and find that the likelihood of turnover is reduced when IT professionals accumulate firm specific human capital. However, the likelihood of turnover increases with higher levels of general IT human capital. We conclude by discussing the results, suggesting possible areas for future research and noting the implications for practice.",2006,0, 804,Execution Engine of Meta-learning System for KDD in Multi-agent Environment,"

A meta-learning system for KDD is an open and evolving platform for efficient testing and intelligent recommendation of data mining processes. Meta-learning is adopted to automate the selection and arrangement of algorithms in the mining process of a given application. The execution engine is the kernel of the system, providing mining strategies and services. An extensible architecture is presented for this engine based on a mature multi-agent environment, which connects different computing hosts to support intensive computing and complex process control in a distributed manner. Reuse of existing KDD algorithms is achieved by encapsulating them into agents. We also define a data mining workflow as the input of our engine and detail the coordination process of various agents to process it. To take full advantage of the distributed computing resources, an execution tree and a load balance model are also designed.

",2005,0, 805,Experience of using a lightweight formal specification method for a commercial embedded system product line,"A simple specification method is introduced and the results of its application to a series of projects in Philips are reported. The method is principally designed to ensure that that every unusual scenario is considered in a systematic way. In practice, this has led to high-quality specifications and accelerated product development. While the straightforward tabular notation used has proved readily understandable to non-technical personnel, it is also a formal method, producing a model of system behaviour as a finite state machine. In this respect, the notation is unusual in being designed to preserve as far as possible a view of the overall system state and how this changes. The notation also features a constraint table which may be described as a kind of spreadsheet for invariants to help define the states of the system.",2005,0, 806,Experience Research,"Programs for Research Experience for Undergraduates are now offered in a variety of topics. At Utah State University, a unique program was offered in the summer of 2008 with the topic area of Coding and Communications. Eight students from around the country were selected to participate. The program began with three weeks of intensive introduction to a variety of technical topics. This background set the stage for presentation of a large number of technical problems from which the student could choose, spending the remainder of the ten weeks engaged in focused research. To keep the energy level up, a variety of social activities were also introduced, including daily ""tea"" and weekly outdoor outings. As a result of this experience, several conference papers were produced, laying foundations for ongoing research.",2009,0, 807,"Experience, gender composition, social presence, decision process satisfaction and group performance","The aim of this paper is to examine the important relationships among social presence, decision process satisfaction, group member's relevant experience, and group performance. The effects of gender composition on social presence and decision process satisfaction were also examined. Seventy-two voluntarily university students which were randomly assigned into 24 three-member groups were asked to work on a decision making task. The main findings include that (1) there is a positive relationship between groups' perceived degree of social presence and their decision process satisfaction, (2) there is a positive relationship between groups' decision process satisfaction and group performance, (3) there is a positive relationship between relevant experience gained in the same organizational environment and group performance, and (4) social presence of mixed-gender groups is higher than that of same-gender groups. Also, relevant experience is a moderator of the relationship between decision process satisfaction and group performance.",2004,0, 808,Experiences and Methods from Integrating Evidence-Based Software Engineering into Education,"

In today's software development organizations, methods and tools are employed that frequently lack sufficient evidence regarding their suitability, limits, qualities, costs, and associated risks. For example, in Communications of the ACM (May 2004, Vol. 47, No. 5), Robert L. Glass, taking the standpoint of practitioners, asks for help from research: “Here's a message from software practitioners to software researchers: We (practitioners) need your help. We need some better advice on how and when to use methodologies”. Therefore, he demands:

– a taxonomy of available methodologies, based upon their strengths and weaknesses;

– a taxonomy of the spectrum of problem domains, in terms of what practitioners need;

– a mapping of the first taxonomy to the second (or the second to the first).

The evidence-based Software Engineering Paradigm promises to solve parts of these issues by providing a framework for goal-oriented research leading to a common body of knowledge and, based on that, comprehensive problem-oriented decision support regarding SE technology selection.

One issue that is becoming more and more important in the context of the evidence-based SE Paradigm is the teaching of evidence-based Software Engineering. A major discussion with regard to this issue revolves around the question of how to “grow the seeds”; that is, how can we teach evidence-based SE in a way that encourages students to practice the paradigm in their professional life.

The goal of this workshop is to discuss issues related to fostering the evidence-based paradigm. The results from the workshop and especially from the working groups will be published in the “Workshop Series on Empirical Software Engineering”, Vol.3.

The workshop itself is the fourth one in the workshop series on Empirical Software Engineering. The first one was held in conjunction with PROFES 2002 in Rovaniemi, the second one was held in conjunction with the Empirical Software Engineering International Week 2003 in Rome, and the third one was held in conjunction with PROFES 2005 in Oulu.

",2006,0, 809,"Experiences from large embedded systems development projects in education, involving industry and research","We present experiences from a final year M.Sc. course. The overall aim of the course is to provide knowledge and skills to develop products in small or large development teams. The course is implemented in terms of large projects in cooperation with external partners, in which the students, based on a product specification, apply and integrate their accumulated knowledge in the development of a prototype. This course, which has been running and further elaborated for 20 years, has been proven successful in terms of being appreciated by the students and by the external partners. The course has during the recent years more frequently been carried out in close connection to research groups. Our experiences indicate benefits by carrying out these types of large projects in an educational setting, with external partners as project providers, and in close cooperation with research groups.Having external partners as project providers feeds the course, students and faculty with many industrially relevant problems that are useful for motivational purposes, and in other courses for exemplification and for case studies in research. Carrying out the projects in close connection to research groups provides synergy between research and education, and can improve the academic level of the projects. A further interesting dimension is accomplished when the projects run in iterations, requiring new groups of students to take over an already partly developed complex system, and work incrementally on this system. The students are then faced with a very typical industrial situation. We advocate that students should be exposed to a mixture of ""build from scratch"" and ""incremental"" projects during the education.",2007,0, 810,Experiences using systematic review guidelines,"In a business world where competitive pressure is constantly increasing, firms are continuously trying to differentiate themselves. The advent of Web 2.0 technologies such as social media allowed firms to communicate and interact with consumers and online users in order to collect information and to perform R&D, marketing and sales tasks. This study uses a systematic literature review in order to identify which social media tools can be used in the product life cycle phases. The results show that most studies focus on the earlier phases of the product life cycle, for innovation purposes. This study offers a systematic overview of literature and suggests many insights to help future researchers and managers in their use of social media in a product life cycle context, which also includes innovation process.",2016,0, 811,Experiences with Extreme Programming in Telehealth: Developing and Implementing a Biosecurity Health Care Application,"There has been limited research on how non-conventional system development methodologies such as, Agile modeling methods could improve the successful development and implementation of telehealth services. The goal of this research was to increase the understanding of the impact of using the Extreme Programming process, an Agile modeling approach, to the development effort of a biosecurity telehealth project. Overall, the research indicates that Extreme Programming is an effective methodology to develop health care applications. The rapid prototyping enabled IT developers and health care users to clarify system requirements, communicate openly, and quickly build rapport. 
Further, the research found that where the technology was new or foreign, the Extreme Programming process was flexible enough to support several iterations of technology and produce prototypes in a timely manner.",2005,0, 812,Experimental Software Engineering: A New Conference,"An interesting issue facing software engineering relates to the evidence for adopting new techniques, tools, languages, methodologies, and so on. We shouldn't always reject new models based on pure argument and logic, but ideally, we should subject such developments to some form of validation. The software engineering community has addressed this issue in part by the establishment of specialist conferences. Two of these are merging, and the Technical Council on Software Engineering thought you would like to know why.",2006,0, 813,Expertise Management in a Distributed Context,"A resource-management mechanism is presented for a multiprocessor system consisting of a pool of homogeneous processing elements interconnected by multistage networks. The mechanism aims at making effective use of hardware resources of the multiprocessor system in support of high-performance parallel computations. It can create many physically independent subsystems simultaneously without incurring internal fragmentation. Each subsystem can configure itself to form a desired topology for matching the structure of the parallel computation. The mechanism is distributed in nature; it is divided into three functionally disjoint procedures that can reside in different loci for handling various resource-management tasks concurrently. Simulation results show that, by eliminating internal fragmentation, the mechanism achieves better resource utilization than a reference machine",1988,0, 814,Explaining Recommendations,"

This thesis investigates the properties of a good explanation in a movie recommender system. Beginning with a summarized literature review, we suggest seven criteria for evaluation of explanations in recommender systems. This is followed by an attempt to define the properties of a useful explanation, using a movie review corpus and focus groups. We conclude with planned experiments and evaluation.

",2007,0, 815,Exploiting parallelism in the design of peer-to-peer overlays,"Structured peer-to-peer overlays provide a natural infrastructure for resilient routing via efficient fault detection and precomputation of backup paths. These overlays can respond to faults in a few hundred milliseconds by rapidly shifting between alternate routes. In this paper, we present two adaptive mechanisms for structured overlays and illustrate their operation in the context of Tapestry, a fault-resilient overlay from Berkeley. We also describe a transparent, protocol-independent traffic redirection mechanism that tunnels legacy application traffic through overlays. Our measurements of a Tapestry prototype show it to be a highly responsive routing service, effective at circumventing a range of failures while incurring reasonable cost in maintenance bandwidth and additional routing latency.",2003,0, 816,Exploring consumer adoption of mobile payments - A qualitative study,"This paper presents a qualitative study on consumer adoption of mobile payments. The findings suggest that the relative advantage of mobile payments is different from that specified in adoption theories and include independence of time and place, availability, possibilities for remote payments, and queue avoidance. Furthermore, the adoption of mobile payments was found to be dynamic, depending on certain situational factors such as a lack of other payment methods or urgency. Several other barriers to adoption were also identified, including premium pricing, complexity, a lack of critical mass, and perceived risks. The findings provide foundation for an enhanced theory on mobile payment adoption and for the practical development of mobile payment services.",2007,0, 817,Exploring Knowledge Management with a Social Semantic Desktop Architecture,"

The motivation of this paper is to research the individual and the team levels of knowledge management, in order to unveil prominent knowledge needs, interactions and processes, and to develop a software architecture which tackles these issues. We derive user requirements, using ethnographic methods, based on user studies obtained at TMI, an international management consultancy. We build the IKOS software architecture which follows the p2p model and relies on the use of Social Semantic Desktop for seamless management of personal information and information shared within groups. Finally, we examine the way our approach matches the requirements that we derived.

",2007,0, 818,Exploring Motivational Differences between Software Developers and Project Managers,"

In this paper, we describe our investigation of the motivational differences between project managers and developers. Motivation has been found to be a central factor in successful software projects. However the motivation of software engineers is generally poorly understood and previous work done in the area is thought to be largely out-of-date. We present data collected from 6 software developers and 4 project managers at a workshop we organized at the XP2006 international conference.

",2007,0, 819,Exploring the Effects of Interactivity in Television Drama,"This paper presents some contributions towards Interactive Digital Television, focusing on interactivity for citizenship in the context of Brazilian Digital Television System Terrestrial (SBTVD-T/ISDB-TB). The paper is focused on the necessary infrastructure, applications and services that can help with the challenges of promoting digital inclusion in Brazil. In the infrastructure issue, the BluTV (Bringing All Users to the Television) is discussed, specially in terms of its components to develop interactive applications and to explore the back channel (interactivity channel). As a product of this investigation, a prototype of Interactive TV Application Guide to promote citizenship through digital inclusion is presented.",2012,0, 820,Exploring the influence of perceptual factors in the success of web-based spatial DSS,"Increasing reliance on the web for decision-making combined with higher demand on technologies that can efficiently deal with large volumes of data make visualization an important decision-making tool. Spatial decision support systems (SDSS) using the latest advances in geographic information systems (GIS) could be the appropriate approach in making DSS available to mass web-users for making decisions that have spatial components. Hence, it is important to explore the factors that impact perceived successful use of web-based SDSS. In this paper, we synthesize task-technology, goal setting, and self-efficacy theories in developing a conceptual model and the subsequent empirical study for exploring the perceptual factors impacting the perceived performance of web-based SDSS.",2007,0, 821,Exploring the origins of new transaction costs in connected societies,"There is a considerable amount of literature in management science, which claims that the digital economy is a frictionless economy, where hierarchies and institutions disappear replaced by dynamic and self-organized webs of companies and consumers. This vision may influence the way managers build market strategies and manage organizations, but also the way policy-makers address relevant issues concerned with the so-called digital divide in the knowledge society. In this chapter we have addressed the frictionless vision, challenging the communication symmetry fallacy, on which is based the idea that the network economy is automatically eliminating the information and institutional hierarchies (even though we still believe that the Internet introduces radical changes in the way economic institutions are built and the way businesses are conducted). We provide primary and secondary empirical evidence that does not support the frictionless hypothesis. The complexity of our interconnected world, the evolutionary nature of trust and learning dynamics, and the economics of mediation (the economics of relationships plus the economics of information infrastructure), play a major role in both the creation and reduction of these new hierarchies and transaction costs in digital society. The result is complex and not deterministically driven by network technology.",2004,0, 822,Extracting Useful Information from Security Assessment Interviews,"We conducted N=68 interviews with managers, employees, and information technologists in the course of conducting security assessments of 15 small- and medium-sized organizations. 
Assessment interviews provide a rich source of information about the security culture and norms of an organization; this information can complement and contextualize the traditional sources of security assessment data, which generally focus on the technical infrastructure of the organization. In this paper we began the process of systematizing audit interview data through the development of a closed vocabulary pertaining to security beliefs. We used a ground-up approach to develop a list of subjects, verbs, objects, and relationships among them that emerged from the audit interviews. We discuss implications for improving the processes and outcomes of security auditing.",2006,0, 823,Extraction of Index Components Based on Contents Analysis of Journal's Scanned Cover Page,"In this paper, a method for automatically indexing the contents to reduce the effort that used to be required for inputting paper information and constructing an index is sought. Various contents formats for journals, which have different features from those for general documents, are described. The principal elements that we want to represent are titles, authors, and pages for each paper. Thus, the three principal elements are modeled according to the order of their arrangement, and then their features are generalized. The content analysis system is then implemented based on the suggested modeling method. The content analysis system, implemented for verifying the suggested method, gets its input in the form containing more than 300 dpi gray scale image and analyzes structural features of the contents. It classifies titles, authors and pages using an efficient projection method. The definition of each item is classified according to regions, and then is extracted automatically as index information. It also helps to recognize characters region by region. The experimental result is obtained by applying to some of the suggested 6 models, and the system shows a 97.3% success rate for various journals.",2006,0, 824,Extraction of Informative Genes from Integrated Microarray Data,"Hepatocellular Carcinoma (HCC) is one of the leading causes of cancer related deaths worldwide. In most cases, the patients are first infected with Hepatitis C virus (HCV) which then progresses to HCC. HCC is usually diagnosed in its advanced stages and is more difficult to treat or cure at this stage. Early diagnosis increases survival rate as treatment options are available for early stages. Therefore, accurate biomarkers of early HCC diagnosis are needed. DNA microarray technology has been widely used in cancer research. Scientists study DNA microarray gene expression data to identify cancer gene signatures which helps in early cancer diagnosis and prognosis. Most studies are done on single data sets and the biomarkers are only fit to work with these data sets. When tested on any other data sets, classification is poor. In this paper, we combined four different data sets of liver tissue samples (100 HCV-cirrhotic tissues and 61 HCV-cirrhotic tissues from patients with HCC). Differentially expressed genes were studied by use of high-density oligonucleotide arrays. By analyzing the data, an ensemble feature extraction-classifier was constructed. The classifier was used to distinguish HCV samples from HCV-HCC related samples. 
We identified a generic gene signature that would predict whether an HCV tissue is also infected with HCC or not.",2012,0, 825,Face for Ambient Interface,"This paper proposes a multidimensional recursive ambient modal analysis algorithm called recursive frequency domain decomposition (recursive FDD or RFDD). The method enables simultaneous processing of a large number of synchrophasor measurements for real-time ambient modal estimation. The method combines a previously proposed multidimensional block processing algorithm FDD with a single input recursive least square (RLS) algorithm into developing a new frequency domain multidimensional recursive algorithm. First, an auto-regressive model is fitted onto the sampled data of each signal using the time-domain RLS approach. Subsequent modal analysis is carried out in frequency domain in the spirit of FDD. The conventional FDD method uses non-parametric methods for power spectrum density (PSD) estimation. The proposed method in this paper by estimating PSD with a parametric method provides smoother PSD estimation which results in less standard deviation in RFDD estimates compared to FDD. The algorithm is tested on archived synchrophasor data from a real power system.",2017,0, 826,Face Recognition by Spatiotemporal ICA Using Facial Database Collected by AcSys FRS Discover System,"In this paper, we proposed a joint spatial and temporal ICA method for face recognition, and compared the performances of different ICA approaches (spatiotemporal ICA and spatial ICA). In our study, two face datasets collected by AcSys FRS discovery system were used. One face dataset involves less variation in terms of face expression and head movement, while the other encompasses much more change. The experimental results led to the following conclusions: 1) the number of features affects the recognition rate; 2) spatiotemporal ICA outperforms spatial ICA in every scenario; 3) the type of classifier is also a factor that affects the recognition rate. These findings justify the promise of spatiotemporal ICA for face recognition",2006,0, 827,Facilitating cross-organisational workflows with a workflow view approach,"Diverse requirements of the participants involved in a business process bring forth the need for a flexible process model that is capable of providing appropriate process information for the various participants. However, the current activity-based approach is inadequate for providing the different participants with varied process information. This paper describes a novel process-view model for workflow management. A process view is an abstracted process derived from a base process to provide abstracted process information. The underlying concepts and a formal model of a process view are presented. Moreover, a novel ordering-preserved approach is proposed to derive a process view from a base process. The proposed approach enhances the flexibility and functionality of conventional activity-based workflow management systems.",2001,0, 828,Facilitating experience reuse among software project managers,"Organizations have lost billions of dollars due to poor software project implementations. In an effort to enable software project managers to repeat prior successes and avoid previous mistakes, this research seeks to improve the reuse of a specific type of knowledge among software project managers, experiences in the form of narratives. 
To meet this goal, we identify a set of design principles for facilitating experience reuse based on the knowledge management literature. Guided by these principles we develop a model called Experience Exchange for facilitating the reuse of experiences in the form of narratives. We also provide a proof-of-concept instantiation of a critical component of the Experience Exchange model, the Experience Exchange Library. We evaluate the Experience Exchange model theoretically and empirically. We conduct a theoretical evaluation by ensuring that our model complies with the design principles identified from the literature. We also perform an experiment, using the developed instantiation of the Experience Exchange Library, to evaluate if technology can serve as a medium for transferring experiences across software projects.",2008,0, 829,Factors affecting duration and effort estimation errors in software development projects,"The purpose of this research was to fill a gap in the literature pertaining to the influence of project uncertainty and managerial factors on duration and effort estimation errors. Four dimensions were considered: project uncertainty, use of estimation development processes, use of estimation management processes, and the estimator's experience. Correlation analysis and linear regression models were used to test the model and the hypotheses on the relations between the four dimensions and estimation errors, using a sample of 43 internal software development projects executed during the year 2002 in the IT division of a large government organization in Israel. Our findings indicate that, in general, a high level of uncertainty is associated with higher effort estimation errors while increased use of estimation development processes and estimation management processes, as well as greater estimator experience, are correlated with lower duration estimation errors. From a practical perspective, the specific findings of this study can be used as guidelines for better duration and effort estimation. Accounting for project uncertainty while managing expectations regarding estimate accuracy; investing more in detailed planning and selecting estimators based on the number of projects they have managed rather than their cumulative experience in project management, may reduce estimation errors.",2007,0, 830,Factors affecting the implementation success of Internet-based information systems,"There has been rapid growth in the number of implementations of executive information systems (EIS). However the success rate of these systems has not been great. To minimize the risk of failed implementations, studies of theories and success factors for EIS implementation are recommended. This paper focuses on testing a model of successful EIS implementation and identifies success factors. An empirical study of mature EIS in large organizations in Australia was conducted. The path analyses of this model revealed that both EIS team communication skills and user attitude towards the EIS directly influenced EIS implementation. Both user computer experience and user involvement indirectly influenced EIS implementation. Findings from this study provide a better understanding for practitioners in developing effective information systems and provide a basis for future research in IS implementation.",1999,0, 831,Factors Affecting Web Page Similarity,"In this study We investigate the learners in School of Network Education of Beijing University of Posts and Telecommunications (ab. 
BUPTNU) and Tianjin Radio and Television University (ab. TJRTVU). We make use of methods of questionnaire investigation, interviewing, observing and literature research to reveal the extent to which learners take part in online tutorials and the factors affecting them. Through one year's investigation and analysis, we reach many conclusions, such as that the lag in network infrastructure is still the bottleneck of online tutorials in China, etc.",2011,0, 832,Feature Diagrams: A Survey and a Formal Semantics,"Feature diagrams (FD) are a family of popular modelling languages used for engineering requirements in software product lines. FD were first introduced by Kang as part of the FODA (feature oriented domain analysis) method back in 1990. Since then, various extensions of FODA FD were devised to compensate for a purported ambiguity and lack of precision and expressiveness. However, they never received a proper formal semantics, which is the hallmark of precision and unambiguity as well as a prerequisite for efficient and safe tool automation. In this paper, we first survey FD variants. Subsequently, we generalize the various syntaxes through a generic construction called free feature diagrams (FFD). Formal semantics is defined at the FFD level, which provides unambiguous definition for all the surveyed FD variants in one shot. All formalisation choices found a clear answer in the original FODA FD definition, which proved that although informal and scattered throughout many pages, it suffered no ambiguity problem. Our definition has several additional advantages: it is formal, concise and generic. We thus argue that it contributes to improve the definition, understanding, comparison and reliable implementation of FD languages",2006,0, 833,"Feedback of workplace data to individual workers, workgroups or supervisors as a way to stimulate working environment activity: a cluster randomized controlled study","Objective: To test whether feedback and discussion of ergonomic and psychosocial working-environment data during one short session with individual, groups or supervisors of white-collar computer workers had an effect on activity to modify workplace design, working technique and psychosocial aspects of work. Methods: A total of 36 workgroups from nine organizations representing different trades was randomized (stratified for organization) to three feedback conditions or control with no feedback. Data were collected 1 month before and 6 months after feedback sessions. The effects studied were: (1) change in the proportion of workgroup members who reported any modification regarding workplace design or working technique; (2) change in the proportion of workgroup members who reported any modification regarding psychosocial aspects; (3) average number of modification types regarding workplace design or working technique per individual in a workgroup; (4) average number of modification types regarding psychosocial aspects per individual in a workgroup. Results: All feedback conditions differed positively from controls regarding change in the proportion of workgroup members who reported any modification in workplace design or working technique. No such effect was found for psychosocial aspects. For change in average number of psychosocial modification types per individual in a workgroup an effect was observed for feedback to supervisors. 
No intervention effect was observed for the average number of modifications in workplace design or working technique per individual in a workgroup. Conclusion: Feedback and discussion of ergonomic and psychosocial working-environment data during one short session with individual, groups or supervisors of white-collar computer workers may have a positive effect on how many people in a workgroup modify (or have modifications done regarding) workplace design and working technique. Feedback to supervisors may have an effect on the average number of psychosocial modification types per individual in a workgroup. Feedback to group supervisors appeared to be the most cost-effective variant",2004,0, 834,Finding a history for software engineering.,"Historians and software engineers are both looking for a history for software engineering. For historians, it is a matter of finding a point of perspective from which to view an enterprise that is still in the process of defining itself. For software engineers, it is the question of finding a usable past, as they have sought to ground their vision of the enterprise on historical models taken from science, engineering, industry, and the professions. The article examines some of those models and their application to software engineering.",2004,0, 835,Finding evidence of community from blogging co-citations: a social network analytic approach,"In this paper, we examine the problem of evaluating communities in blogs. We describe the construction and instrumentation of a research blog (on Canadian independent music) designed as a tool for measuring community effects in blogging. We then identify a number of measures concerning strength and type of communities. Using research results from sociology and psychology concerning how communities grow and function, as well as clustering algorithms from the physical and applied sciences, we demonstrate how these measures can be used in a case study based on the research blog that we developed. In addition to providing these results, the paper also introduces a computational framework based on social network analysis, which can be used to measure and evaluate community in blogs",2006,0, 836,First Elements on Knowledge Discovery Guided by Domain Knowledge (KDDK),"

In this paper, we present research trends carried out in the Orpailleur team at Loria, showing how knowledge discovery and knowledge processing may be combined. The knowledge discovery in databases process (KDD) consists in processing a huge volume of data for extracting significant and reusable knowledge units. From a knowledge representation perspective, the kdd process may take advantage of domain knowledge embedded in ontologies relative to the domain of data, leading to the notion of ""knowledge discovery guided by domain knowledge"" or kddk. The kddk process is based on the classification process (and its multiple forms), e.g. for modeling, representing, reasoning, and discovering. Some applications are detailed, showing how kddk can be instantiated in an application domain. Finally, an architecture of an integrated KDDK system is proposed and discussed.

",2006,0, 837,First Experiences with Group Projects in CSE Education,"In an effort to improve the way in which computational science and engineering is taught, the authors worked on two project-based software-focused modules in two different studyprograms at two German universities. This article describes both their expectations andoutcomes and addresses the question of whether-and how-software engineering practices should be taught in CSE courses.",2006,0, 838,First Impressions with Websites: The Effect of the Familiarity and Credibility of Corporate Logos on Perceived Consumer Swift Trust of Websites,"

The current study extends theory related to the truth effect and mere-exposure effect by detailing how increased familiarity with third-party vendor logos will increase consumer short-term trust in unfamiliar websites, based on short-term impressions. The study uses a controlled 254-participant experiment. The results indicate that familiarity with a third-party logo positively impacts the credibility and short-term (swift) trust of an unfamiliar website. Additionally, the study finds that credibility of a third-party logo positively impacts the swift trust a visitor has in a website. Overall, the study concludes that both familiarity and credibility of third-party logos positively impacts swift trust in consumer websites, and familiarity has a positive impact on increasing credibility.

",2007,0, 839,First-principles computation of the electronic and dynamical properties of solids and nanostructures with ABINIT.,"The field of first-principles simulation of materials and nanosystems has seen an amazing development in the past twenty years. Using the density-functional theory and complementary approaches, like many-body perturbation theory, it is now possible to compute both basic quantities, like crystalline lattice parameters, and more involved properties, like optical or phonon spectra. New and improved concepts, approximations, algorithms, and numerical techniques appear every year. In order to cope with the increasing software complexity, it became apparent over a decade ago that software engineering techniques and a group collaborative effort would be major ingredients of a successful first-principles project. The open source ABINIT project was launched in 1997; as of now, there are more than 1000 members on the main mailing list, over 40 active contributors, and more than 200 articles have been published in international journals using ABINIT. I will present the overall structure of the package, and its main and most exciting capabilities. Specific recent examples of projects will demonstrate some of the full palette of features in ABINIT. These projects are often carried out in close collaboration with experimentalists, and ABINIT is also used by or for a number of industrial partners.",2008,0, 840,Fish?n?Steps: Encouraging Physical Activity with an Interactive Computer Game,"A sedentary lifestyle is a contributing factor to chronic diseases, and it is often correlated with obesity. To promote an increase in physical activity, we created a social computer game, Fish?n?Steps, which links a player?s daily foot step count to the growth and activity of an animated virtual character, a fish in a fish tank. As further encouragement, some of the players? fish tanks included other players? fish, thereby creating an environment of both cooperation and competition. In a fourteen-week study with nineteen participants, the game served as a catalyst for promoting exercise and for improving game players? attitudes towards physical activity. Furthermore, although most player?s enthusiasm in the game decreased after the game?s first two weeks, analyzing the results using Prochaska?s Transtheoretical Model of Behavioral Change suggests that individuals had, by that time, established new routines that led to healthier patterns of physical activity in their daily lives. Lessons learned from this study underscore the value of such games to encourage rather than provide negative reinforcement, especially when individuals are not meeting their own expectations, to foster long-term behavioral change.",2006,0, 841,Flexible and efficient IR using array databases,"Recently scientific data are increasingly managed by array database technologies such as SciDB and Rasdaman. Some work has shown that array database technologies are particularly useful to support multi-dimensional data management and analysis. In the geospatial domain, remote sensing images, as a kind of multi-dimensional scientific data, could certainly take advantage of array database technologies. This paper investigates existing array database technologies and evaluates the pros and cons of several typical solutions for managing remote sensing images. It compares the running environments, data structures, APIs, and query languages of three solutions: SciDB, Rasdaman, and Oracle GeoRaster. 
A benchmark experiment is also designed to test the performance of these three solutions. The results show that Array Database technologies provide more flexibility in managing and analyzing remote sensing images than the traditional relational database solutions. Finally, an study case on VCI derivation for crop vegetation condition monitoring is presented based on Rasdaman database to demonstrate the powerful and flexible capability of in-situ image data processing.",2016,0, 842,Flexible Support and Management of Adaptive Workflow Processes,"As a technology that can improve the efficiency of the business process, workflow is drawing more and more attentions of researchers and software product vendors. But there are still many problems waiting to be solved for workflow. One of these problems is the poor adaptability of workflow. This paper intends to enhance the adaptability from the view of supporting dynamic process change. In the paper, a workflow model that is fit for dynamic change is defined and the verification rules are firstly set up for this model. Specifically, our method is based on the concept of executable path of workflow. Based on the executable path, the concepts and algorithms of valid process and complete subprocess are introduced to solve the problems brought by process model dynamic change.",2004,0, 843,"Flexible surrogate marker evaluation from several randomized clinical trials with continuous endpoints, using R and SAS","The evaluation of surrogate endpoints is thought to be first studied by Prentice, who presented a definition of a surrogate as well as a set of criteria. These criteria were later supplemented with the so-called proportion explained after notifying some drawbacks in Prentice's approach. Subsequently, the evaluation exercise was framed within a meta-analytic setting, thereby overcoming difficulties that necessarily surround evaluation efforts based on a single trial. The meta-analytic approach for continuous outcomes is briefly reviewed. Advantages and problems are highlighted by means of two case studies, one in schizophrenia and one in ophthalmology, and a simulation study. One of the critical issues for the broad adoption of methodology like the one presented here is the availability of flexible implementations in standard statistical software. Generically applicable SAS macros and R functions are developed and made available to the reader.",2007,0, 844,Flexing digital library systems.,"Conceptual design of the upgrade to NSTX, explored designs sized to accept the worst loads that power supplies could produce. This produced excessive structures that would have been difficult to install and were much more costly than needed to meet the scenarios required for the upgrade mission. Instead, the project decided to rely on a digital coil protection system (DCPS). Initial sizing was then based on the 96 scenarios in the project design point with some headroom to accommodate operational flexibility and uncertainty. This has allowed coil support concepts that minimize alterations to the existing hardware. The digital coil protection system theory, hardware and software are described in another paper at this conference. 
The intention of this paper is to describe the generation of stress multipliers, and algorithms that are used to characterize the stresses at key areas in the tokamak, as a function of either loads calculated by the influence coefficients computed in the DCPS software, or directly from the coil currents.",2011,0, 845,Flow Experience of MUD Players: Investigating Multi-User Dimension Gamers from the USA,"

Playing MUDs (Multi-User Dimensions or Multi-User Dungeons, or Multi-User Domain), text-only online gaming environments, may initiate flow experience. Online survey research was administered within the sample population of 13,662 MUD players from the United States of America, using the specially designed questionnaire with four categories of questions related to: flow experience, experience in playing MUDs, interaction patterns, and demographics. Replies of respondents (N = 287) fit a five factor model. All the correlations between the factors are significant (p < 0.05). Since players experienced flow while MUDding, it was proposed that flow is one of the sources of the long-time attractiveness for MUD players.

",2007,0, 846,Forecasting the Volatility of Stock Price Index,"In this paper, we have proposed artificial neural network for the prediction of Saudi stock market. The proposed predictions model, with its high degree of accuracy, could be used as investment advisor for the investors and traders in the Saudi stock market. The proposed model is based mainly on Saudi Stock market historical data covering a large span of time. Achieving reasonable accuracy rate of predication models will surely facilitate an increased confidence in the investment in the Saudi stock market. We have only used the closing price of the stock as the stock variable considered for input to the system. The number of windows gap to determine the numbers of previous data to be used in predicting the next day closing price data has been choosing based on heuristics. Our results indicated that the proposed ANN model predicts the next day closing price stock market value with a low RMSE and high correlation coefficient of up to 99.9% for the test set, which is an indication that the model adequately mimics the trend of the market in its prediction. This performance is really encouraging and thus the proposed system will impact positively the analysis and prediction of Saudi stock market in general.",2011,0, 847,Formalism Challenges of the Cougaar Model Driven Architecture,"The model driven architecture (MDA) is an approach to software engineering in which models are systematically developed and transformed into code. This paper discusses some of the issues which would need to be overcome when attempting to certify a safety critical design or software developed with the MDA approach, partially based on our experience with an avionics software case study. We particularly focus on the need to certify MDA artefacts and produce a compelling system safety case",2007,0, 848,Formally analyzing software architectural specifications using SAM.,"In the past decade, software architecture has emerged as a major research area in software engineering. Many architecture description languages have been proposed and some analysis techniques have also been explored. In this paper, we present a graphical formal software architecture description model called software architecture model (SAM). SAM is a general software architecture development framework based on two complementary formalisms--Petri nets and temporal logic. Petri nets are used to visualize the structure and model the behavior of software architectures while temporal logic is used to specify the required properties of software architectures. These two formal methods are nicely integrated through the SAM software architecture framework. Furthermore, SAM provides the flexibility to choose different compatible Petri net and temporal logic models according to the nature of system under study. Most importantly, SAM supports formal analysis of software architecture properties in a variety of well-established techniques--simulation, reachability analysis, model checking, and interactive proving, In this paper, we show how to formally analyze SAM software architecture specifications using two well-known techniques--symbolic model checking with tool Symbolic Model Verifier, and theorem proving with tool STeP.",2004,0, 849,Formation of pseudorandom sequences with improved autocorrelation properties,"A new method to improve the statistical properties of number sequences generated by maximum-period nonlinear congruential generators derived from the Renyi chaotic map is proposed. 
The characteristic feature of the method is the simultaneous usage of numbers generated by the Renyi map implemented in a finite-state machine, and symbols generated by the same map. The period of sequences generated can be significantly longer than the period of sequences produced by previously defined generators using the same map. It is shown that output sequences obtained with the proposed method can pass all, or almost all, statistical tests from the standard NIST 800-22 statistical test suite for many integer or non-integer values of the parameter.",2009,0, 850,Forming effective worker teams with multi-functional skill requirements,"Throughout much of the past century, manufacturing efficiencies were gained by constructing systems from independently designed and optimized tasks. Recent theories and practice have extolled the virtues of team-based practices that rely on human flexibility and empowerment to improve integrated system performance. The formation of teams requires consideration of innate tendencies and interpersonal skills as well as technical skills. In this project we develop and test mathematical models for formation of effective human teams. Team membership is selected to ensure sufficient breadth and depth of technical skills. In addition, measures of worker conative tendencies are used along with empirical results on desirable team mix to form maximally effective teams. A mathematical programming formulation for the team selection problem is presented. A heuristic solution is proposed and evaluated.",2005,0, 851,Form-Semantics-Function ? A Framework for Designing Visual Data Representations for Visual Data Mining,"

Visual data mining, as an art and science of teasing meaningful insights out of large quantities of data that are incomprehensible in another way, requires consistent visual data representations (information visualisation models). The frequently used expression ""the art of information visualisation"" appropriately describes the situation. Though substantial work has been done in the area of information visualisation, it is still a challenging activity to find out the methods, techniques and corresponding tools that support visual data mining of a particular type of information. The comparison of visualisation techniques across different designs is not a trivial problem either. This chapter presents an attempt for a consistent approach to formal development, evaluation and comparison of visualisation methods. The application of the approach is illustrated with examples of visualisation models for data from the area of team collaboration in virtual environments and from the results of text analysis.

",2008,0, 852,Foundations for Security Aware Software Development Education,"Most instances of software exploitation are really software failure. Even though we cannot eliminate vulnerability from modern information systems, we can reduce exploitable code long term with sound, robust development practices. We argue that the current hot topic of so-called ""secure coding"" represents commonly taught coding techniques that ensure robustness, rather than ensuring any commonly understood concept of security. Weaving the practice of rigorous coding techniques into curriculum is essential — coding for security is useless apart from fault-tolerant foundations. However, security-specific coding techniques need to be integrated pedagogically alongside robustness so that students can differentiate the two. We propose in this paper a shift in instructional methods based on this distinction to help future programmers, developers, and software engineers produce ""security-aware"" software.",2006,0, 853,Foundations of Human Computing: Facial Expression and Emotion,"We have established an emotional model to enhance a virtual worker simulation, which could be also used to support robots in a joined human-robot work-task inside an industrial setting. The robot is able to understand people's individual and specific knowledge as well as capabilities, which are ultimately linked to an emotional consequence. As a result, the emotional model outputs the emotional valence calculated as positive or negative values, respective to reward and punishment. This output is applied as value function for a reinforcement learning agent. There we use an actor critic algorithm extended by eligibility traces and task specific conditions to learn the optimal action sequences. We show the influence of emotional reward leads to differences in the learned action sequences in comparison to a simple task performance evaluation reward. Therefore the robot is able to calculate emotional feelings of a human during a given working task, is able to decide if there is a better, more emotional stable path to doing this working task and moreover the robot is able to decide when the human is needed help or even not.",2014,0, 854,Foundations of Model (driven) (Reverse) Engineering - Episode I: Story of the Fidus Papyrus and the Solarus,"Model Driven Engineering (MDE) received a lot of attention in the last years, both from academia and industry. However, there is still a debate on which basic concepts form the foundation of MDE. The Model Driven Architecture (MDA) from the OMG does not provided clear answers to this question. This standard instead provides a complex set of interdependent technologies. This paper is the first of a series aiming at defining the foundations of MDE independently from a particular technology. A megamodel is introduced in this paper and incrementally refined in further papers from the series. This paper is devoted to a single concept, the concept of model, and to a single relation, the RepresentationOf relation. The lack of strong foundations for the MDA? 4-layers meta-pyramid leads to a common mockery: ""So, MDA is just about Egyptology?!"". This paper is the pilot of the series called ""From Ancient Egypt to Model Driven Engineering"". 
The various episodes of this series show that Egyptology is actually a good model to study Model Driven Engineering.",2004,0, 855,Fragment Class Analysis for Testing of Polymorphism in Java Software,"Testing of polymorphism in object-oriented software may require coverage of all possible bindings of receiver classes and target methods at call sites. Tools that measure this coverage need to use class analysis to compute the coverage requirements. However, traditional whole-program class analysis cannot be used when testing incomplete programs. To solve this problem, we present a general approach for adapting whole-program class analyses to operate on program fragments. Furthermore, since analysis precision is critical for coverage tools, we provide precision measurements for several analyses by determining which of the computed coverage requirements are actually feasible for a set of subject components. Our work enables the use of whole-program class analyses for testing of polymorphism in partial programs, and identifies analyses that potentially are good candidates for use in coverage tools.",2004,0, 856,Framework for Virtual Community Business Success: The Case of the Internet Chess Club,"Prior work has identified, in piecemeal fashion, desirable characteristics of virtual community businesses (VCBs) such as inimitable information assets, persistent handles fomenting trust, and an economic infrastructure. The present work develops a framework for the success of a subscription-based VCB by taking into account the above elements and considering as well an interplay of the membership (both regular members and volunteers), technical features of the interface, and an evolutionary business model that supports member subgroups as they form. Our framework is applied by an in-depth survey of use and attitude of regular members and volunteers in the Internet Chess Club (ICC), a popular subscription-based VCB. The survey results reveal that key features of the model are supported in the ICC case: member subgroups follow customized communication pathways; a corps of volunteers is supported and recognized, and the custom interface presents clear navigation pathways to the ICCs key large-scale information asset, a multi-million game database contributed by real-world chess Grandmasters who enjoy complimentary ICC membership. We conclude by discussing VCBs in general and how the framework might apply to other domains.",2004,0, 857,Framework to Evaluate Software Process Improvement in Small Organizations,"Software Process Improvement is an important mechanism to boost competitiveness and efficiency in software companies. Maturity models such as CMMI and MR MPS can support development organizations and contribute to the attainment of their quality goals when used as guides for software process improvement. However, implementing process improvements based on models implies long-term and large investments projects. This is particularly critical in small and mid-sized enterprises (SMEs) which usually have the largest financial constraints. To reduce some of these influences the MPS.BR Programme encourages the assembling of company group formation as a facilitating element in the implementing of software process improvements in SMEs. 
This work aims at discussing an approach for initiatives to implement software process improvement in groups of small and mid sized Brazilian companies and also a formal process set to minimize the influence of elements such as risks and critical factors in their initiatives.",2010,0, 858,From a 2D Image to a 3D Form,"Most of the TV manufacturers have released 3DTVs in the summer of 2010 using shutter-glasses technology. 3D video applications are becoming popular in our daily life, especially at home entertainment. Although more and more 3D movies are being made, 3D video contents are still not rich enough to satisfy the future 3D video market. There is a rising demand on new techniques for automatically converting 2D video content to stereoscopic 3D video displays. In this paper, an automatic monoscopic video to stereoscopic 3D video conversion scheme is presented using block-based depth from motion estimation and color segmentation for depth map enhancement. The color based region segmentation provides good region boundary information, which is used to fuse with block-based depth map for eliminating the staircase effect and assigning good depth value in each segmented region. The experimental results show that this scheme can achieve relatively high quality 3D stereoscopic video output.",2010,0, 859,From conference to journal publication: How conference papers in software engineering are extended for publication in journals.,"

In software engineering (SE) and in the computing disciplines, papers presented at conferences are considered as formal papers and counted when evaluating research productivity of academic staff. In spite of this, conference papers may still be extended for publication in academic journals. In this research, we have studied the process of extension from conference to journal publication, and tried to explain the different purposes these two forms of publication serve in the field. Twenty-two editors in chief and associate editors in chief of major publications in SE and related fields were interviewed, and 122 authors of extended versions of conference papers answered a Web questionnaire regarding the extension of their papers. As a result, the process of extending conference papers for journal publication in SE is recorded. In the conclusion, we comment on the following: (a) the role of the conference in the development of the research work; (b) the review process at the conference and at the journal stage; and (c) the different purposes conference and journal publication fulfill in SE. © 2008 Wiley Periodicals, Inc.

",2008,0, 860,From e-learning to games-based e-learning: using interactive technologies in teaching an IS course,"

The e-phenomenon has profoundly changed many aspects of society and, inevitably, has a commensurate impact on higher education. E-learning has now evolved from a marginal form of education to a commonly accepted alternative to traditional face-to-face education. The term can cover different delivery models ranging from courses that are delivered fully online to courses that provide some face-to-face interaction and some online provision. Within this continuum, interactive technologies can play a significant role in engaging the learner and providing a rich learning experience. This paper examines the e-phenomenon as it relates to e-learning and how different interactive technologies, such as visualisations and simulation games, can be used to enrich the learning experiences of students with different learning styles. The theory is related to the teaching of Information Systems (IS) in a postgraduate MSc Management of eBusiness course that uses a range of interactive technologies.

",2007,0, 861,From Human to Automatic Summary Evaluation,"The measurement and evaluation method of human exposure to mechanical vibration is summarized firstly. A special testing system for measuring and evaluating human vibration is designed. The hardware platform is developed by assembling the sensors, the amplifier and the data acquisition etc. Based on Virtual Instrument, the signal obtained by hardware systems is analyzed by computer program. According to the evaluation method of human exposure to mechanical vibration, the analysis software for the evaluation of human vibration is programmed from the time-domain and the frequency-domain with MATLAB and IMC FAMOS. Through the time-domain method, the crest factor of vibration could be calculated in order to choose the suitable evaluation method. Through the frequency-domain method, the frequency character could be analyzed in order to improve the vibration. The testing system could be satisfied for the measurement and evaluation of human vibration. The testing system is used to measure and evaluate the vibration in motorcycles, which indicates the testing system is convenience and reliable.",2009,0, 862,From object orientation to goal orientation: A paradigm shift for requirements engineering.,"Requirements engineering (RE) is concerned with the elicitation of the objectives to be achieved by the system envisioned, the operationalization of such objectives into specifications of services and constraints, the assignment of responsibilities for the resulting requirements to agents such as humans, devices and software, and the evolution of such requirements over time and across system families. Getting high-quality requirements is difficult and critical. Recent surveys have confirmed the growing recognition of RE as an area of primary concern in software engineering research and practice. The paper reviews the important limitations of OO modeling and formal specification technology when applied to this early phase of the software lifecycle. It argues that goals are an essential abstraction for eliciting, elaborating, modeling, specifying, analyzing, verifying, negotiating and documenting robust and conflict-free requirements. A safety injection system for a nuclear power plant is used as a running example to illustrate the key role of goals while engineering requirements for high assurance systems.",2004,0, 863,From Silver Bullets to Philosophers? Stones: Who Wants to Be Just an Empiricist?,"AbstractFor a long time scientists have been committed to describe and organize information acquired by observations from the field. To improve the comprehension and testability of the observed information, Bacons works proposed to organize the way that the experiences should be structured and somehow formalized, starting with the experimental method idea. From that point in time, the ideas regarding experimentation have been explored and evolved into different scientific areas, including physics, agriculture, medicine, engineering and social sciences among others. It has not been different in Software Engineering. By applying the scientific method to organize their experimental studies, software engineers have intensively worked to understand the application and evolution of software processes and technologies. Acquiring knowledge through different categories of experimental studies has supported researchers and practitioners to build a Software Engineering body of knowledge. 
Families of studies start to be planed and shared among the research community, composing a common research agenda to enlarge such body of knowledge. Based on this, evidence based software engineering is becoming a reality. Nowadays, besides the experimental studies, the experimentation approach represents an important tool to allow the transfer of software technology to the industry and to improve software processes.",2007,0, 864,Functional size measurement revisited,"A crucial factor in obtaining accurate efficiency characterization by way of the Wheeler cap is the post-processing method (PPM) that is employed to extract efficiency values from raw measured data. The complementary Q-factor method (CQFM) is the fastest amongst the six PPMs that are currently available. CQFM is an end-to-end methodology for broadband efficiency measurements, which exploits inherently wideband Q-calculation formulas that are based on frequency derivatives of antenna input impedance and on the concept of matched VSWR bandwidth. CQFM is useful both for narrow- and wide-band antennas, regardless of whether the latter exhibit closely or widely spaced multiple (anti-)resonances. Experimental data indicated that the CQFM systematically over-estimates the radiation efficiency of the antenna-under-test (AUT). Two possible reasons theoretically exist: high losses in the materials, and inaccurate calculation of the AUT Q-factor inside the Wheeler cavity. This paper proves that even a mild distortion of the near field of the AUT by the cavity produces an artificially larger capped Q-factor.",2014,0, 865,"Fusion of ICA Spatial, Temporal and Localized Features for Face Recognition","Independent component analysis (ICA) has found its application in face recognition successfully. In practice several ICA representations can be derived. Particularly they include spatial ICA, spatiotemporal ICA, and localized spatiotemporal ICA, which respectively extract features of face images in terms of space domain, time-space domain, and local region. Our work has shown that while spatiotemporal ICA outperforms other ICA representations, further improvement can be made by a fusion of variety of ICA features. However, simply combining all features will not work as well as expected. For this reason an optimization method for feature selection and combination is proposed in this paper. We present here an optimizing process of feature selection about which features and how many features from each individual ICA feature set are selected. The experimental results show that feature fusion method can improve face recognition rate up to 94.62% compared with that of 86.43% by using spatiotemporal ICA alone.",2007,0, 866,Fusion: A System For Business Users To Manage Program Variability,"In order to make software components more flexible and reusable, it is desirable to provide business users with facilities to assemble and control them without their needing programming knowledge. This paper describes a fully functional prototype middleware system where variability is externalized so that core applications need not be altered for anticipated changes. In this system, application behavior modification is fast and easy, making this middleware suitable for frequently changing programs.",2005,0, 867,Fuzzy Patterns in Multi-level of Satisfaction for MCDM Model Using Modified Smooth S-Curve MF,"

Present research work relates to a methodology using modified smooth logistic membership function (MF) in finding out fuzzy patterns in multi-level of satisfaction (LOS) for Multiple Criteria Decision-Making (MCDM) problem. Flexibility of this MF in applying to real world problem has been validated through a detailed analysis. An example elucidating an MCDM model applied in an industrial engineering problem is considered to demonstrate the veracity of the proposed methodology. The key objective of this paper is to guide decision makers (DM) in finding out the best candidate-alternative with higher degree of satisfaction with lesser degree of vagueness under tripartite fuzzy environment. The approach presented here provides feedback to the decision maker, implementer and analyst.

",2005,0, 868,Fuzzy Semantic Action and Color Characterization of Animation Movies in the Video Indexing Task Context,"

This paper presents a fuzzy statistical approach for the semantic content characterization of the animation movies. The movie action content and color properties play an important role in the understanding of the movie content, being related to the artistic signature of the author. That is why the proposed approach is carried out by analyzing several statistical parameters which are computed both from the movie shot distribution and the global color distribution. The first category of parameters represents the movie mean shot change speed, the transition ratio and the action ratio while the second category represents the color properties in terms of color intensity, warmth, saturation and color relationships. The semantic content characterizations are achieved from the low-level parameters using a fuzzy representation approach. Hence, the movie content is described in terms of action, mystery, explosivity, predominant hues, color contrasts and the color harmony schemes. Several experimental tests were performed on an animation movie database. Moreover, a classification test was conducted to prove the discriminating power of the proposed semantic descriptions for their prospective use as semantic indexes in a content-based video retrieval system.

",2006,0, 869,Fuzzy set theory applications in production management research: a literature survey,"AbstractFuzzy set theory has been used to model systems that are hard to define precisely. As a methodology, fuzzy set theory incorporates imprecision and subjectivity into the model formulation and solution process. Fuzzy set theory represents an attractive tool to aid research in production management when the dynamics of the production environment limit the specification of model objectives, constraints and the precise measurement of model parameters. This paper provides a survey of the application of fuzzy set theory in production management research. The literature review that we compiled consists of 73 journal articles and nine books. A classification scheme for fuzzy applications in production management research is defined. We also identify selected bibliographies on fuzzy sets and applications.",1998,0, 870,Gaining customer knowledge through analytical CRM,"Targeting at customers, CRM system is definitely of strategic importance for enterprises. However, its huge potential, the analytical function, is not fully exploited. After introducing the customer knowledge and analyzing the current situation of analytical CRM, this paper proposed a model for analytic CRM, and illustrated its application in customer grouping and corresponding strategy.",2010,0, 871,Game theory perspectives on client: vendor relationships in offshore software outsourcing,"The objective of this paper is to provide the initial literature based insights into the game theory specifically with the viewpoint of client - vendor relationships in offshore software outsourcing. Game theory has been used for long in understanding various contexts in economics and other disciplines. Offshore software outsourcing relates to the situation in which client and vendor are operating from different countries. Subsequently, in this paper, the initial understanding of game theory focusing on software engineering community is developed. Particularly risk, rationality, payoffs, and other elements of game theory are explored in terms of how they affect offshore software outsourcing. The paper is structured as follows. Section one provides introduction to game theory concept. Section two explores the history, representation and types of games. Section three compares offshore software outsourcing with types and elements of game theory. Section four discusses one of the most famous game theory examples - 'prisoners-dilemma' and relates it to software outsourcing context. Finally, section five concludes this paper with the intended future work.",2006,0, 872,Gapped Local Similarity Search with Provable Guarantees,"AbstractWe present a program qhash, based on q-gram filtration and high-dimensional search, to find gapped local similarities between two sequences. Our approach differs from past q-gram-based approaches in two main aspects. Our filtration step uses algorithms for a sparse all-pairs problem, while past studies use suffix-tree-like structures and counters. Our program works in sequence-sequence mode, while most past ones (except QUASAR) work in pattern-database mode.We leverage existing research in high-dimensional proximity search to discuss sparse all-pairs algorithms, and show them to be subquadratic under certain reasonable input assumptions. Our qhash program has provable sensitivity (even on worst-case inputs) and average-case performance guarantees. 
It is significantly faster than a fully sensitive dynamic-programming-based program for strong similarity search on long sequences.",2004,0, 873,Gender in end-user software engineering,"Despite the considerable HCI research relevant to end-user problem solving and end-user software engineering, researchers have not focused on potential gender HCI issues. In particular we focus on IT workers, a majority of whom do not have background in computer/information science. (In this document, we contrast these IT workers with females who are professional programmers, whom we term computer science females.) We believe that research on end-user problem-solving environments must delve into how these systems can support both genders for the following critical reasons: Women working in IT-dependent fields face a glass ceiling and ignorance of gender HCI issues is risky. If gender HCI issues in end-user software engineering continue to be neglected, software supporting the growing millions of end-user programmers may be making the same mistakes as did our academic ancestors: excluding women from full participation.",2003,0, 874,Gender Talk: Differences in Interaction Style in CMC,"

Qualitative analysis was used to investigate the nature of the interactions of different gender pairings doing a negotiation task via computer-mediated communication (CMC). Preliminary results indicate that female pairs used more language of fairness, saving face, and acknowledgement in their conversation than did male pairs. Male pairs made more procedural statements about meeting management and actions than female pairs. The study provides a preliminary understanding of how gender interactions may affect performance in CMC tasks.

",2007,0, 875,Generalized ?Stigma?: Evidence for Devaluation-by-Inhibition Hypothesis from Implicit Learning,"

Recently, a new fundamental discovery has been made of the relationship between the attentional system and the affective system of the human brain, giving rise to the devaluation-by-inhibition hypothesis. It is shown that selective attention has an affective impact on an otherwise emotionally bland stimulus. In particular, if a neutral stimulus was inhibited by selective attention in a prior task, it would be valued less in a subsequent affective evaluation task than it would otherwise have been. In the present study, we extend this line of research on the affective consequences of attention and demonstrate that prior attentional states (attended or inhibited) associated with a group of neutral stimuli (character strings) can even influence subsequent preference judgments about previously unseen stimuli if these new stimuli share certain basic features (e.g., follow the same rule) with those encountered in a previous stage.

",2007,0, 876,General-Purpose Framework for Efficient High-Fidelity Collision Detection between Complex Deformable Models for the HLA,"Collision detection is fundamental to many kinds of simulation. Any simulation that needs to model interactions between solid objects needs some form of collision detection. However, despite this need, a general-purpose collision detection framework has not been developed for the High Level Architecture (HLA). This research paper proposes a framework which facilitates this need. The framework differs from previous solutions by conforming to the principles of low coupling and high cohesion, which are cornerstones of the HLA ideology, which promotes reuse of simulation components. To this end, the framework does not bind itself to the existing Object Model of the simulation it supports. The HLA Data Distribution Management (DDM) services are used to increase the network and processing efficiency of the solution. By incorporation of advanced spatial partitioning and collision detection algorithms, the solution provides an accurate, fast collision detection service to HLA federates.",2005,0, 877,Generating Fast Feedback in Requirements Elicitation,"

Getting feedback fast is essential during early requirements activities. Requirements analysts need to capture, interpret, and validate raw requirements and information. In larger projects, a series of interviews and workshops is conducted. Stakeholder feedback for validation purposes is often collected in a second series of interviews, which may take weeks to complete. However, this may (1) delay the entire project, (2) cause stakeholders to lose interest and commitment, and (3) result in outdated, invalid requirements. Based on our ""By Product-Approach"", we developed the ""Fast Feedback"" technique to collect additional information during initial interviews. User interface mock-ups are sketched during the first interview and animated using the use case steps as guidance. This shortcut saves one or two interview cycles. A large administrative software project was the trigger for this work.

",2007,0, 878,Generating Test Data for Specification-Based Tests Via Quasirandom Sequences,"This paper presents work on generation of specification-driven test cases based on quasirandom (low-discrepancy) sequences instead of pseudorandom numbers. This approach is novel in software testing. This enhanced uniformity of quasirandom sequences leads to faster generation of test cases covering all possibilities. We demonstrate by examples that quasirandom sequences can be a viable alternative to pseudorandom numbers in generating test cases. In this paper, we present a method that can generate test cases from a decision table specification more effectively via quasirandom numbers. Analysis of a simple problem in this paper shows that quasirandom sequences achieve better data than pseudorandom numbers, and have the potential to converge faster and so reduce the computational burden. The use of different quasirandom sequences for generating test cases is presented in this paper",2006,0, 879,Genetic algorithms to support software engineering experimentation,"Empirical software engineering is concerned with running experimental studies in order to establish a broad knowledge base to assist software developers in evaluating models, methods and techniques. Running multiple experimental studies is mandatory, but complex and the cost is high. Besides, replications may impose constraints difficult to meet in real contexts. Researchers face additional problems and cost restrictions when conducting meta-analysis on combined data from multiple experiments. In this paper we are concerned with both issues, of assisting users in carrying out meta-analysis tasks and gathering a meaningful body of data from experimental studies. We show how the genetic algorithms optimization model can effectively handle a specific meta-analysis problem that is not amenable to standard statistical approaches. We also introduce an approach to expand the universe of data by mapping the experimental design and known results into a suitable genetic algorithm model that simulates new results. The simulation allows researchers to prospect how the variation of different experimental parameters affects the results, without incurring in the cost of actually running additional experiments. We show that it is possible to simulate statistically valid data, expanding the universe of data for analysis and opening up some interesting possibilities for replicators.",2005,0, 880,"Geneways: a system for extracting, analyzing, visualizing, and integrating molecular pathway data","The immense growth in the volume of research literature and experimental data in the field of molecular biology calls for efficient automatic methods to capture and store information. In recent years, several groups have worked on specific problems in this area, such as automated selection of articles pertinent to molecular biology, or automated extraction of information using natural-language processing, information visualization, and generation of specialized knowledge bases for molecular biology. GeneWays is an integrated system that combines several such subtasks. It analyzes interactions between molecular substances, drawing on multiple sources of information to infer a consensus view of molecular networks. GeneWays is designed as an open platform, allowing researchers to query, review, and critique stored information.",2004,0, 881,GENSIM 2.0: A Customizable Process Simulation Model for Software Process Evaluation,"

Software process analysis and improvement relies heavily on empirical research. Empirical research requires measurement, experimentation, and modeling. Moreover, whatever evidence is gained via empirical research is strongly context dependent. Thus, it is hard to combine results and capitalize upon them in order to improve software development processes in evolving development environments. The process simulation model GENSIM 2.0 addresses these challenges. Compared to existing process simulation models in the literature, the novelty of GENSIM 2.0 is twofold: (1) The model structure is customizable to organization-specific processes. This is achieved by using a limited set of macro-patterns. (2) Model parameters can be easily calibrated to available empirical data and expert knowledge. This is achieved by making the internal model structures explicit and by providing guidance on how to calibrate model parameters. This paper outlines the structure of GENSIM 2.0, shows examples of how to calibrate the simulator to available empirical data, and demonstrates its usefulness through two application scenarios. In those scenarios, GENSIM 2.0 is used to rank feasible combinations of verification and validation (V&V) techniques with regards to their impact on project duration, product quality and resource consumption. Though results confirm the expectation that doing more V&V earlier is generally beneficial to all project performance dimensions, the exact rankings are sensitive to project context.

",2008,0, 882,Getting the Best out of Software Process Simulation and Empirical Research in Software Engineering,"This position paper sets out our views on the need to use simulation and quantitative experiments in combination in order to maximise the benefit of both to software engineering research. Each approach should be used to overcome weaknesses in the other in attempting to predict the behaviour of software processes when new or modified processes, tools or techniques are employed. We also express our concern at the frequently-encountered use of the term 'experiment' to describe quantitative simulation-based investigations.",2007,0, 883,Global Sensitivity Analysis of Predictor Models in Software Engineering,"Predictor models are an important tool in software projects for quality and cost control as well as management. There are various models available that can help the software engineer in decision-making. However, such models are often difficult to apply in practice because of the amount of data needed. Sensitivity analysis offers provides means to rank the input factors w.r.t. their importance and thereby reduce and optimise the measurement effort necessary. This paper presents an example application of global sensitivity analysis on a software reliability model used in practice. It describes the approach and the possibilities offered.",2007,0, 884,Global value numbering using random interpretation,"Here the optimized gate level area problem in digit serial MCM designs, and design architectures is being introduced. Digit serial design offers less complexity MCM operation by increasing the delay of operation. Several efficient and accurate algorithms have been introduced to design lower complexity bit-parallel multiplication operation for multiple constant multiplication. In this project, direct form, transposed form as well as proposed MCM has been implemented. Using MCM technique, area and power is reduced in compromise with delay constraint. In order to reduce delay even more we can use Global Valued Numbering (GVN). Using variable assignment method, reduced delay as well as better area and power minimization can be obtained.",2014,0, 885,Graph-Based Acquisition of Expressive Knowledge,"It is shown how knowledge acquisition can improve the design, production, and testing of complex electronic, mechanical, and hydraulic systems. A method for automating knowledge acquisition and integrating it into the design process is discussed. The author presents NASA's CAD/CAE Knowledge Base Development Tool (KBDT) as a prototype for demonstrating the concept of automated knowledge acquisition. The basic structure of KBDT, and its operation and integration with NASA'S Knowledge-based Autonomous Test Engineer (KATE), are described. The combination of voice synthesis, voice recognition, natural language processing, and knowledge acquisition processing components, integrated using a blackboard architecture, is discussed",1992,0, 886,Graphic Symbol Recognition of Engineering Drawings Based on Multi-Scale Autoconvolution Transform,"

In this paper, a novel method for graphic symbol recognition in scanned engineering drawings, based on the multi-scale autoconvolution transform and a radial basis probabilistic neural network (RBPNN), is proposed. Firstly, the recently proposed affine invariant image transform called Multi-Scale Autoconvolution (MSA) is adopted to extract invariant features. Then, the orthogonal least square algorithm (OLSA) is used to train the RBPNN and the recursive OLSA is adopted to optimize the structure of the RBPNN. The experimental result shows that, compared with another affine invariant technique, this new method provides a good basis for the scanned engineering drawing recognition task, where the disturbances of a graphic symbol can be approximated with a spatial affine transformation.

",2007,0, 887,Graphical Data Displays and Database Queries: Helping Users Select the Right Display for the Task,"

This paper describes the process by which we have constructed an adaptive system for external representation (ER) selection support, designed to enhance users' ER reasoning performance. We describe how our user model has been constructed – it is a Bayesian network with values seeded from data derived from experimental studies. The studies examined the effects of users' background knowledge-of-external representations (KER) upon performance and their preferences for particular information display forms across a range of database query types.

",2005,0, 888,Grid-Enabled Non-Invasive Blood Glucose Measurement,"In this paper, a novel non-invasive sensor for the measurement of the glucose concentrations in blood is presented. By using a microstrip band pass filter, a wireless sensor is achieved. In the introduced design, the thumb is placed on the structure of the filter as a superstrate. The response of the filter is dependent on the permittivity of the superstrate. A compact size, linearity and cost effectiveness are the most important advantages of the proposed sensor. The linear behaviour of the filter in terms of the frequency is investigated and for a linear behaviour, a certain frequency for operation is selected. The introduced sensor can be used by diabetics for continuous self-monitoring of the glucose level. The structure of the proposed sensor is designed on the low-cost substrate, FR4, by compact dimensions of 50 mm × 40 mm × 1.6 mm. A prototype of the proposed filter was fabricated and the performance of the filter was investigated, experimentally.",2015,0, 889,GridFoRCE: A comprehensive resource kit for teaching grid computing.,"A comprehensive suite of pedagogical resources is presented that will enable an instructor to embed grid computing concepts in a traditional distributed system course. Rapidly advancing Internet technologies and ever expanding application domains have created excitement in teaching distributed systems. Many fundamental concepts developed decades earlier, such as remote procedure calls and multithreading, have come to play key roles in modern distributed systems. Standards such as eXtensible Markup Language (XML) and Simple Object Access Protocol (SOAP) have been developed to enable interoperability among heterogeneous distributed systems. However, a plethora of new paradigms, a wide variety of technological choices, and short cycles of technological obsolescence challenge the introduction of these important concepts into a distributed systems course. This paper describes how the author addressed these challenges in teaching grid computing. The paper also provides details of the resources developed during this process. The pedagogical resource kit developed includes course curriculum, lecture notes, a set of laboratory assignments, a Globus Toolkit-based experimental grid adapted to classroom assignments, and valuable lessons learned from the course offerings during the past two years. The material provided in this paper is expected to help to ""jumpstart"" educators considering the introduction of grid computing into their curricula",2007,0, 890,Group Processes in Software Effort Estimation,"Software effort estimation requires high accuracy, but accurate estimations are difficult to achieve. Increasingly, datamining is used to improve an organization's software process quality, e.g. the accuracy of effort estimations. There are a large number of different method combination exists for software effort estimation, selecting the most suitable combination becomes the subject of research in this paper. In this study data preprocessing is implemented and effort is calculated using COCOMO Model. Then data mining techniques OLS Regression and K Means Clustering are implemented on preprocessed data and results obtained are compared and data mining techniques when implemented on preprocessed data proves to be more accurate then OLS Regression Technique.",2014,0, 891,GSMA: Software implementation of the genome search meta-analysis method,"

Meta-analysis can be used to pool results of genome-wide linkage scans. This is of great value in complex diseases, where replication of linked regions occurs infrequently. The genome search meta-analysis (GSMA) method is widely used for this analysis, and a computer program is now available to implement the GSMA.

Availability: http://www.kcl.ac.uk/depsta/memoge/gsma/

Contact: Cathryn.lewis@genetics.kcl.ac.uk

",2005,0, 892,Guest Editors' Introduction: Realizing Service-Centric Software Systems,"Service-centric software system is a multidisciplinary paradigm concerned with software systems that are constructed as compositions of autonomous services. These systems extend the service-oriented architecture paradigm by focusing on the design, development, and maintenance of software built under SOAs. In this special issue, we present five articles that tackle service-centric software systems.",2007,0, 893,GUI for model checkers,"In order to develop highly secure database systems to meet the requirements for class B2, an extended formal security policy model based on the BLP model is presented in this paper. A method for verifying security model for database systems is proposed. According to this method, the development of a formal specification and verification to ensure the security of the extended model is introduced. During the process of the verification, a number of mistakes have been identified and corrections have been made. Both the specification and verification are developed in Coq proof assistant. Our formal security model was improved and has been verified secure. This work demonstrates that our verification method is effective and sufficient and illustrates the necessity for formal verification of the extended model by using tools.",2008,0, 894,Hardware support of JPEG,"Image formats specified by the joint photographic expert group (JPEG) are preferred for images with high colour content. Along with graphic interchange format (GIF) images, JPEG is the most commonly used image format on the Internet and JPEG is the de facto still image format for digital cameras. All major digital camera manufacturers use JPEG as its exclusive or primary still image format. Most JPEG processing occurs in software programs like Microsoft Paint and Adobe Photoshop. Relatively few hardware solutions exist for processing JPEG images. This paper presents a comprehensive literature survey of hardware solutions for JPEG images. Wherever possible, different JPEG formats and accompanying hardware are presented and compared to illustrate various advantages/disadvantages. The performance of hardware and software for JPEG is compared and an overview of commercial hardware for JPEG is also provided",2005,0, 895,Harnessing Digital Evolution,"In digital evolution, self-replicating computer programs-digital organisms-experience mutations and selective pressures, potentially producing computational systems that, like natural organisms, adapt to their environment and protect themselves from threats. Such organisms can help guide the design of computer software.",2008,0, 896,HCI and the Face: Towards an Art of the Soluble,"We propose a computer vision-based pipeline that enables altering the appearance of faces in videos. Assuming a surveillance scenario, we combine GMM-based background subtraction with an improved version of the GrabCut algorithm to find and segment pedestrians. Independently, we detect faces using a standard face detector. We apply the neural art algorithm, utilizing the responses of a deep neural network to obfuscate the detected faces through style mixing with reference images. The altered faces are combined with the original frames using the extracted pedestrian silhouettes as a guideline. 
Experimental evaluation indicates that our method has potential in producing de-identified versions of the input frames while preserving the utility of the de-identified data.",2016,0, 897,HCI reality?an ?Unreal Tournament??,"A common solution to multi-agent decision problems is to commit a team of collaborating agents to a joint plan. Once committed, any deviation from the plan by an agent, is hazardous. Hence, in many such systems, agents ignore potential beneficial actions not in the plan, (even if such ""opportunistic"" actions increase the expected utility of the team). We model the (stochastic) tradeoff of such opportunistic actions vs. continued commitment to the joint plan, and address this issue as a formal decision problem under uncertainty. We use a modified version of the AWOL model, presented in an earlier paper, showing how it works in the context of a simplified version of the unreal tournament (TM) computer game. Empirical results suggest that it is faster to compute the solution to the AWOL abstraction than to solve the problem in the original domain.",2006,0, 898,Healthcare Knowledge Management: The Art of the Possible,"Healthcare organizations are increasingly adopting knowledge management systems (KMS) for clinical use, which have been established in technical support organizations for several years. While a technical support organization can utilize its KMS to directly address customer needs using its infrastructure and established processes, the effectiveness and success of KMS in a healthcare organization relies on the collective practice of healthcare professionals. In this research, the knowledge management processes and infrastructure in the two industries is compared. Seven hypotheses are developed and tested using a survey in two organizations, one in each industry to measure the contributions of different components of knowledge management infrastructure and processes towards organizational effectiveness. The results indicate that culture plays a larger role than structure in healthcare. Knowledge acquisition processes are more important in healthcare, compared with conversion and application in technical support. These results have implications for the selection and implementation of KMS in healthcare.",2008,0, 899,HEGESMA: Genome search meta-analysis and heterogeneity testing,"

Summary: Heterogeneity and genome search meta-analysis (HEGESMA) is a comprehensive software package for performing genome scan meta-analysis, a quantitative method to identify genetic regions (bins) with consistently increased linkage scores across multiple genome scans, and for testing the heterogeneity of the results of each bin across scans. The program provides as output the average of ranks and three heterogeneity statistics, as well as corresponding significance levels. Statistical inferences are based on Monte Carlo permutation tests. The program allows both unweighted and weighted analysis, with the weights for each study as specified by the user. Furthermore, the program performs heterogeneity analyses restricted to the bins with similar average ranks.

Availability: http://biomath.med.uth.gr

Contact: zintza@med.uth.gr

",2005,0, 900,Helps and Hints for Learning with Web Based Learning Systems: The Role of Instructions,"AbstractThis study investigated the role of specific and unspecific tasks for learning declarative knowledge and skills with a web based learning system. Results show that learners with specific tasks where better for both types of learning. Nevertheless, not all kinds of learning outcomes were equally influenced by instruction. Therefore, instructions should be selected carefully in correspondence with desired learning goals.",2004,0, 901,Hemodynamic Analysis of Cerebral Aneurysm and Stenosed Carotid Bifurcation Using Computational Fluid Dynamics Technique,"

Cerebrovascular diseases, such as the rupture of cerebral aneurysms and cerebral infarction caused by carotid stenosis, are among the three leading causes of death in Japan. The growth mechanisms of cerebral aneurysms and carotid stenosis are not clearly understood. In this research, we introduce a numerical simulation tool, the Computational Fluid Dynamics (CFD) technique, to simulate and predict the hemodynamics of blood passing through cerebral aneurysms and stenosed carotid arteries. The results for a ruptured and an unruptured cerebral aneurysm were compared. Energy losses were calculated in the ruptured and unruptured cerebral aneurysms; the results were 167 Pa and 6.3 Pa, respectively. The results also indicated that the blood flow had a longer residence time inside the bleb of the ruptured aneurysm. The maximum wall shear stress was observed at 70% stenosis in the simulation results of the stenosed carotid bifurcation. The result qualitatively agrees with classical treatments in carotid bifurcation therapy.

",2007,0, 902,Heuristic Expert Review Model and Tool,"Surgical simulators present a safe and potentially effective method for surgical training, and can also be used in robot-assisted surgery for pre- and intra-operative planning. Accurate modeling of the interaction between surgical instruments and organs has been recognized as a key requirement in the development of high-fidelity surgical simulators. Researchers have attempted to model tool-tissue interactions in a wide variety of ways, which can be broadly classified as (1) linear elasticity-based, (2) nonlinear (hyperelastic) elasticity-based finite element (FE) methods, and (3) other techniques not based on FE methods or continuum mechanics. Realistic modeling of organ deformation requires populating the model with real tissue data (which are difficult to acquire in vivo) and simulating organ response in real time (which is computationally expensive). Further, it is challenging to account for connective tissue supporting the organ, friction, and topological changes resulting from tool-tissue interactions during invasive surgical procedures. Overcoming such obstacles will not only help us to model tool-tissue interactions in real time, but also enable realistic force feedback to the user during surgical simulation. This review paper classifies the existing research on tool-tissue interactions for surgical simulators specifically based on the modeling techniques employed and the kind of surgical operation being simulated, in order to inform and motivate future research on improved tool-tissue interaction models.",2008,0, 903,Heuristics for information visualization evaluation,"This paper presents a review of heuristic evaluation and recommendations for how to apply the method for information visualization evaluation. Heuristic evaluation is a widely known and popular method within the area of human-computer interaction and the information visualization community now also recognizes its usefulness. However, in this area it is not applied to the same extent. In its original form the method has limitations that need to be considered in order for it to be optimal for information visualization. The aim with this paper is to provide the reader with knowledge about the method and awareness of what issues that call for refined or supplemental actions and resources in order for it to generate as valid and useful results as possible. The paper also discusses the research challenges for future work in how to further improve the method.",2012,0, 904,Hierarchy and Centralization in Free and Open Source software team communications,"AbstractFree/Libre Open Source Software (FLOSS) development teams provide an interesting and convenient setting for studying distributed work. We begin by answering perhaps the most basic question: what is the social structure of these teams? We conducted social network analyses of bug-fixing interactions from three repositories: Sourceforge, GNU Savannah and Apache Bugzilla. We find that some OSS teams are highly centralized, but contrary to expectation, others are not. Projects are mostly quite hierarchical on four measures of hierarchy, consistent with past research but contrary to the naive image of these projects. Furthermore, we find that the level of centralization is negatively correlated with project size, suggesting that larger projects become more modular, or possibly that becoming more modular is a key to growth. 
The paper makes a further methodological contribution by identifying appropriate analysis approaches for interaction data. We conclude by sketching directions for future research.",2006,0, 905,Higher Order Color Mechanisms for Image Segmentation,"In this paper, we propose an efficient approach to tackle the multi-label interactive image segmentation issue by applying a higher-order Conditional Random Fields model that associates superpixels with higher-order energy terms. CRF models have been used for unsupervised segmentation for years, but they require a training set to provide the necessary information. Therefore, the unsupervised strategy is fairly restrictive given the variety of image contexts and categorizations. For this reason, user interaction seems indispensable for addressing the multi-label segmentation problem while exploiting the CRF perspective. Promising experiments are conducted on the MSRC and Berkeley datasets, comparing with the original Conditional Random Fields framework.",2012,0, 906,High-level design for user and component interfaces.,"Future industrial products will incorporate embedded microcomputers that will require advanced graphical user interfaces (GUIs). These GUIs will incorporate innovative input and display technologies, such as gestural input, multimedia, three dimensional displays, as well as new metaphors and agents. These technology advances present challenges and opportunities for designers of human-computer communication and interaction.",1992,0, 907,How Do Adults Solve Digital Tangram Problems? Analyzing Cognitive Strategies Through Eye Tracking Approach,"

The purpose of the study is to investigate how adults solve tangram-based geometry problems on a computer screen. Two problems with different difficulty levels were presented to 20 participants. The participants tried to solve the problems by placing seven geometric objects into correct locations. In order to analyze the process, the participants and their eye movements were recorded by a Tobii Eye Tracking device while solving the problems. The results showed that the participants employed different strategies while solving problems with different difficulty levels.

",2007,0, 908,How Does Collaborative Group Technology Influence Social Network Structure?,"The relationship between technology and elements of the formal organization structure has long been of interest to information systems and organization researchers. A less-studied issue is how technology may also influence the informal social network structure. This research examines how various types of technological expertise relate to an individual's network centrality in the project teams of 99 MBA, MISM, and MAIS students at a large public university. To further understand this relationship, the project task was varied in terms of ""uncertainty "" and the formal group structures in terms of departmentation. Results indicate that individuals who are proficient with various types of technologies tend to be more central in their class advice network. However, this relationship depends on both the level of task uncertainty and group departmentation. Implications are drawn for practice and research.",2008,0, 909,How personality type influences decision paths in the unfolding model of voluntary job turnover: an application to IS professionals,"

A new model for understanding job turnover was introduced into the management literature a decade ago [26], analyzing the process by which employees decide to leave their jobs. This ""unfolding model of voluntary turnover"" is a radical departure from traditional models of job turnover, positing that turnover is not necessarily triggered by job dissatisfaction. In addition to empirical testing with nurses, accountants, and other knowledge workers, the unfolding model has also been applied to study IS personnel. Based on a study of IS graduates from two American universities, Niederman and Sumner [34] concluded that IS employees appear not to follow the common decision paths identified by Lee and Mitchell in their initial conceptualization of the unfolding model; instead, a vast majority of respondents followed turnover decisions path not specified in the model. Although other modifications to the model have since been made [12], it is still not clear why the study of IS professionals diverged so much from prior studies of other types of knowledge workers. We first explore and identify the divergence of results between IS employees and other occupations that have been studied with the model, and then propose that an individual's personality type can affect the likelihood that he or she will follow specific decision paths in the model -- such as leaving without having a new job arranged in advance. We contribute to the IS personnel literature by offering a novel explanation for the divergence in prior empirical results. In addition, by examining personality type, we seek to open a new area of study, in terms of examining the relationship between personality type and employees' preferences for following certain paths leading to job turnover.

",2007,0, 910,"How to Perform Credible Verification,Validation, and Accreditation for Modeling and Simulation","Many large-scale system development efforts use modeling and simulation (M&S) to lower their life-cycle costs and reduce risks. Unfortunately, these M&S tools can introduce new risks associated with potential errors in creating the model (programming errors) and inadequate fidelity (errors in accuracy when compared to real-world results). To ensure that a valid model and a credible simulation exist, verification and validation (V&V) of the model and the resulting simulation must be completed. This article discusses conducting effective and credible V&V on M&S",2005,0, 911,How to steer an embedded software project: tactics for selecting the software process model,"Modern large new product developments (NPD) are typically characterized by many uncertainties and frequent changes. Often the embedded software development projects working on such products face many problems compared to traditional, placid project environments. One of the major project management decisions is then the selection of the project's software process model. An appropriate process model helps coping with the challenges, and prevents many potential project problems. On the other hand, an unsuitable process choice causes additional problems. This paper investigates the software process model selection in the context of large market-driven embedded software product development for new telecommunications equipment. Based on a quasi-formal comparison of publicly known software process models including modern agile methodologies, we propose a process model selection frame, which the project manager can use as a systematic guide for (re)choosing the project's process model. A novel feature of this comparative selection model is that we make the comparison against typical software project problem issues. Some real-life project case examples are examined against this model. The selection matrix expresses how different process models answer to different questions, and indeed there is not a single process model that would answer all the questions. On the contrary, some of the seeds to the project problems are in the process models themselves. However, being conscious of these problems and pitfalls when steering a project enables the project manager to master the situation.",2005,0, 912,Human and social factors of software engineering: workshop summary,"Software is developed for people and by people. Human and social factors have a very strong impact on the success of software development endeavours and the resulting system. Surprisingly, much of software engineering research in the last decade is technical, quantitative and deemphasizes the people aspect. The workshop on Human and Social Factors in Software Engineering has been picking up on the some of the soft aspects in software development that was highlighted in the early days of software engineering. It also follows a recent trend in the software industry, namely the introduction of agile methods, and provides a scientific perspective on these. 
Including and combining approaches of software engineering with social science, the workshop looked at software engineering from a number of perspectives, including those of agile methods and communication theory, in order to point out solutions and conditions for human-centred software engineering.",2005,0, 913,Human Computing and Machine Understanding of Human Behavior: A Survey,"Understanding human behaviors is a challenging problem in computer vision that has recently seen important advances. Human behavior understanding combines image and signal processing, feature extraction, machine learning, and 3-D geometry. Application scenarios range from surveillance to indexing and retrieval, from patient care to industrial safety and sports analysis. Given the broad set of techniques used in video-based behavior understanding and the fast progress in this area, in this paper we organize and survey the corresponding literature, define unambiguous key terms, and discuss links among fundamental building blocks ranging from human detection to action and interaction recognition. The advantages and the drawbacks of the methods are critically discussed, providing a comprehensive coverage of key aspects of video-based human behavior understanding, available datasets for experimentation and comparisons, and important open research issues.",2013,0, 914,Human-Aware Computer System Design,"In order to improve Liquid fertilizer efficiency, Liquid variable fertilizing control system was designed with two working modes: a manual control and an automatic control mode. Taking the S3C44B0X microprocessor of ARM7 series as the core device, according to the fertilizing amount of the current location, the system was able to combined together for the machine speed and access to data, and the fertilizing amount from digital quantity to the flow of Liquid output was transformed. Flow Sensor was taken, a closed-loop feedback regulation to control the opening of electrical actuators was formed, adjusted valve opening to compete the variable fertilization. The test results in the field showed that the relative error of the variable fertilizing amount was less than 5%, When the fertilizing amount was 245-294kg/hm2, the system can realize the requirements of variable fertilizing.",2011,0, 915,Hybrid Modeling of Test-and-Fix Processes in Incremental Development,"V-model and its variants have become the most common process models adopted in automotive industry guiding the development of systems on a variety of refinement levels. Along with the exponentially growing complexity of modern vehicle systems, however, the late verification and validation in the conventional V-model expand in uncontrollable ways that result in higher cost of development and higher risk of failure than ever. This paper describes an inc-V development process for automotive industry that improves the conventional V-model and variants by introducing and institutionalizing early and continuous integrated verification enabled by simulation-based development. We developed a continuous simulation model of the inc-V process, and the initial version is used to investigate the characteristics of the inc-V compared to V. The preliminary finding from the simulations of an example project is that the inc-V process is able to improve the traditional V process by saving effort, shortening duration, and increasing product quality. 
The findings also show how advances in development technology impact the systems engineering processes.",2016,0, 916,I See What You See: Eye Movements in Real-World Scenes Are Affected by Perceived Direction of Gaze,"

In this chapter, we report an investigation of the influence of the saliency of another person's direction of gaze on an observer's eye movements through real-world scenes. Participants' eye movements were recorded while they viewed a sequence of scene photographs that told a story. A subset of the scenes contained an actor. The actor's face was highly likely to be fixated, and when it was, the observer's next saccade was more likely to be toward the object that was the focus of the actor's gaze than in any other direction. Furthermore, when eye movement patterns did not show an immediate saccade to the focused object, observers were nonetheless more likely to fixate the focused object than a control object within close temporal proximity of fixation on the face. We conclude that during real-world scene perception, observers are sensitive to another's direction of gaze and use it to help guide their own eye movements.

",2008,0, 917,ICT assessment: Moving beyond journal outputs,"AbstractThere are increasing moves to deploy quantitative indicators in the assessment of research, particularly in the university sector. In Australia, discussions surrounding their use have long acknowledged the unsuitability of many standard quantitative measures for most humanities, arts, social science, and applied science disciplines. To fill this void, several projects are running concurrently. This paper details the methodology and initial results for one of the projects that aims to rank conferences into prestige tiers, and which is fast gaining a reputation for best practice in such exercises. The study involves a five-stage process: identifying conferences; constructing a preliminary ranking of these; engaging in extensive consultation; testing performance measures based on the rankings on live data; and assessing the measures.In the past, many similar attempts to develop a ranking classification for publication outlets have faltered due to the inability of researchers to agree on a hierarchy. However the Australian experience suggests that when researchers are faced with the imposition of alternative metrics that are far less palatable, consensus is more readily achieved.",2008,0, 918,ICT for Patient Safety: Towards a European Research Roadmap,"

This paper analyses key issues towards a research roadmap for eHealth-supported patient safety. The raison d'etre for research in this area is the high number of adverse patient events and deaths that could be avoided if better safety and risk management mechanisms were in place. The benefits that ICT applications can bring for increased patient safety are briefly reviewed, complemented by an analysis of key ICT tools in this domain. The paper outlines the impact of decision support tools, CPOE, as well as incident reporting systems. Some key research trends and foci like data mining, ontologies, modelling and simulation, virtual clinical trials, preparedness for large-scale events are touched upon. Finally, the synthesis points to the fact that only a multilevel analysis of ICT in patient safety will be able to address this complex issue adequately. The eHealth for Safety study will give insights into the structure of such an analysis in its lifetime and arrive at a vision and roadmap for more detailed research on increasing patient safety through ICT.

",2006,0, 919,ICT-Mediated Synchronous Communication in Creative Teamwork: From Cognitive Dust to Semantics,"

A substantial amount of research has focused on small group meetings and how technology can support the meetings. Many applications and tools have resulted from such work but very few are used regularly because, we believe, they are not flexible enough to accommodate the naturally ill-structured processes of ""creative teams"". Our research aims at developing effective ICT-based support for such teams by understanding what is happening during creative teamwork - at both human-human and human-technology levels-through multimodal observational channels and providing appropriate and timely intervention. This paper describes the infrastructure for capturing ICT-mediated interactions (cognitive dust) and the approach for transforming these low-level data into meaningful and useful information (semantics), and presents the initial result of our work on transforming cognitive dust into semantics.

",2007,0, 920,Identification and Prediction of Economic Regimes to Guide Decision Making in Multi-Agent Marketplaces,"Supply chain management is commonly employed by businesses to improve organizational processes by optimizing the transfer of goods, information, and services between buyers and suppliers. Traditionally, supply chains have been created and maintained through the interactions of human representatives of the various companies involved. However, the recent advent of autonomous software agents opens new possibilities for automating and coordinating the decision making processes between the various parties involved.

Autonomous agents participating in supply chain management must typically make their decisions in environments of high complexity, high variability, and high uncertainty since only limited information is visible.

We present an approach whereby an autonomous agent is able to make tactical decisions, such as product pricing, as well as strategic decisions, such as product mix and production planning, in order to maximize its profit despite the uncertainties in the market. The agent predicts future market conditions and adapts its decisions on procurement, production, and sales accordingly.

Using a combination of machine learning and optimization techniques, the agent first characterizes the microeconomic conditions, such as over-supply or scarcity, of the market. These conditions are distinguishable statistical patterns that we call economic regimes. They are learned from historical data by using a Gaussian Mixture Model to model the price density of the different products and by clustering price distributions that recur across days.

In real-time the agent identifies the current dominant market condition and forecasts market changes over a planning horizon. Methods for the identification of regimes are explored in detail, and three different algorithms are presented. One is based on exponential smoothing, the second on a Markov prediction process, and the third on a Markov correction-prediction process. We examine a wide range of tuning options for these algorithms, and show how they can be used to predict prices, price trends, and the probability of receiving a customer order.

We validate our methods by presenting experimental results from the Trading Agent Competition for Supply Chain Management, an international competition of software agents that has provided inspiration for this work. We also show how the same approach can be applied to the stock market.",2007,0, 921,Identifying and Acquiring the Ideal Business/Technology Teaching In English (BTTIE) Educators for Asian International Schools,"International educational institutions teaching all subjects in English, from elementary to graduate schools, are proliferating across Asia. Many Asian school personnel, both local and foreign, have amassed a plethora of expertise and experience re: teaching english to speakers of other languages (TESOL) and even Business English Teaching (BET), but the relatively new area of business and technology teaching in english (BTTIE) remains terra incognita. Typically, at the average Asian educational institution: the foreign Native English Speaking Teachers (NESTs) possess neither extensive business/technology backgrounds nor advanced degrees in related subjects; the local and foreign business/technology teachers either aren't proficient in English or are capable but not considered NESTs. Newcomers might be both content and teaching experts, but it's unlikely they'll know how to teach effectively in Asia. This paper: describes the characteristics and qualifications re: the ideal BTTIE educators for Asian international schools; suggests alternative sources and processes for obtaining teachers due to the current/future shortage of such professionals. Educational technology utilization is touted as a necessary factor for both teaching and teachers. The interstate new teacher assessment and support consortium (INTASC) Standards are cited and adapted to the BTTIE-specific educator/education discussion.",2007,0, 922,Identifying Data Transfer Objects in EJB Applications,"Data transfer object (DTO) is a design pattern that is commonly used in Enterprise Java applications. Identification of DTOs has a range of uses for program comprehension, optimization, and evolution. We propose a dynamic analysis for identifying DTOs in Enterprise Java applications. The analysis tracks the reads and writes of object fields, and maintains information about the application tier that initiates the field access. The lifecycle of a DTO is represented by a finite state automaton that captures the relevant run-time events and the location of the code that triggers these events. We implemented the proposed approach using the JVMTI infrastructure in Java 6, and performed a study on a real-world Enterprise Java application which is deployed for commercial use. Our results indicate that the dynamic analysis achieves high precision and has acceptable overhead.",2007,0, 923,Identifying Failure Causes in Java Programs: An Application of Change Impact Analysis,"During program maintenance, a programmer may make changes that enhance program functionality or fix bugs in code. Then, the programmer usually will run unit/regression tests to prevent invalidation of previously tested functionality. If a test fails unexpectedly, the programmer needs to explore the edit to find the failure-inducing changes for that test. Crisp uses results from Chianti, a tool that performs semantic change impact analysis, to allow the programmer to examine those parts of the edit that affect the failing test. 
Crisp then builds a compilable intermediate version of the program by adding a programmer-selected partial edit to the original code, augmenting the selection as necessary to ensure compilation. The programmer can reexecute the test on the intermediate version in order to locate the exact reasons for the failure by concentrating on the specific changes that were applied. In nine initial case studies on pairs of versions from two real Java programs, Daikon and Eclipse jdt compiler, we were able to use Crisp to identify the failure-inducing changes for all but 1 of 68 failing tests. On average, 33 changes were found to affect each failing test (of the 67), but only 1-4 of these changes were found to be actually failure-inducing",2006,0, 924,Identifying Rare Events for Forensic Purposes,"Detection of outliers and anomalous behavior is a well-known problem in the data mining and statistics fields. Although the problem of identifying single outliers has been extensively studied in the literature, little effort has been devoted to detecting small groups of outliers that are similar to each other but markedly different from the entire population. Many real-world scenarios have small groups of outliers--for example, a group of students who excel in a classroom or a group of spammers in an online social network. In this article, the authors propose a novel method to solve this challenging problem that lies at the frontiers of outlier detection and clustering of similar groups. The method transforms a multidimensional dataset into a graph, applies a network metric to detect clusters, and renders a representation for visual assessment to find rare events. The authors tested the proposed method to detect pathologic cells in the biomedical science domain. The results are promising and confirm the available ground truth provided by the domain experts.",2015,0, 925,Identifying the characteristics of successful expert systems: an empirical evaluation,"

The effective use of Information Technology (IT) to help modern companies improve service quality, financial performance, customer satisfaction and productivity is a very crucial issue nowadays. Intelligent solutions, based on Expert Systems (ES), to solve complicated practical problems in various sectors are becoming more and more widespread. However, the real success of applied expert systems in the improvement of companies' performance is being investigated by the research community. In this framework, the primary objective of this paper is to present what is important for a successful ES application in terms of quality and what kind of mistakes can be made. Our analysis is based on an empirical evaluation of three ES applications that have successfully been in industrial use for a long time and in the development of which we have been personally involved. The key findings are expressed as 14 hypotheses for a successful ES. The support of each application to the hypotheses is discussed.

",2006,0, 926,IGSTK: An Open Source Software Toolkit for Image-Guided Surgery,"Image-guided surgery applies leading-edge technology and clinical practices to provide better quality of life to patients who can benefit from minimally invasive procedures. Reliable software is a critical component of image-guided surgical applications, yet costly expertise and technology infrastructure barriers hamper current research and commercialization efforts in this area. IGSTK applies the open source development and delivery model to this problem. Agile and component-based software engineering principles reduce the costs and risks associated with adopting this new technology, resulting in a safe, inexpensive, robust, shareable, and reusable software infrastructure.",2006,0, 927,Image is everything: Advancing HCI knowledge and interface design using the system image,"As the field of human-computer interaction matures, the need for proven, dependable engineering processes for interface development becomes apparent. Our continuing work in developing LINK-UP, an integrated design and reuse environment, suggests that a better understanding of the system image is key to the successful evaluation of design prototypes, and an aide in applying knowledge from the repository. This paper describes our ongoing work to enhance LINK-UP by developing and augmenting the system image to make it the central communication point between different stages of design and between different stakeholders. We report on a study of the new task flow that demonstrated the value of the system image within a broader design context. Overall, our findings indicate that the effective creation and use of knowledge repositories by novice HCI designers hinges on successful application of existing HCI design concepts within a practical integrated design environment.",2005,0, 928,Impact of organizational structure on distributed requirements engineering processes: lessons learned,"The requirements engineering program at Siemens Corporate Research has been involved with process improvement, training and project execution across many of the Siemens operating companies. We have been able to observe and assist with process improvement in mainly global software development efforts. Other researchers have reported extensively on various aspects of distributed requirements engineering, but issues specific to organizational structure have not been well categorized. Our experience has been that organizational and other management issues can overshadow technical problems caused by globalization. This paper describes some of the different organizational structures we have encountered, the problems introduced into requirements engineering processes by these structures, and techniques that were effective in mitigating some of the negative effects of global software development.",2006,0, 929,Impact of software engineering research on the practice of Software Configuration Management.,"The past ten years have seen a radical shift in business application software development. Rather than developing software from scratch using a conventional programming language, the majority of commercial software is now developed through reuse - the adaptation and configuration of existing software systems to meet specific organizational requirements. The most widespread form of reuse is through the use of generic systems, such as ERP and COTS systems, that are configured to meet specific organizational requirements. 
In this paper, I discuss the implications of software construction by configuration (CbC) for software engineering. Based on our experience with systems for medical records and university administration, I highlight some of the issues and problems that can arise in 'construction by configuration'. I discuss problems that arise in CbC projects and identify a number of challenges for research and practice to improve this approach to software engineering.",2008,0, 930,Implementation difficulties of hospital information systems,"Most current virtual maintenance systems are independent of CAD systems and the results in modification, maintainability analysis and evaluation can not be shared by CAD systems automatically and directly, reducing design efficiency and making information silo. The approach integrating virtual reality function into CAD system is adopted to build product information model of virtual maintenance integrated into CAD System (VM-CADPIM) by extending CAD product information model and making it virtual maintenance system a part of CAD system. A virtual maintenance system integrated into CATIA seamlessly is developed based on the VM-CADPIM to implement the bidirectional information transmission between virtual maintenance system and CAD system. The correctness and effectiveness of the information model in eliminating the inherent defects of virtual maintenance system independent of CAD and making virtual maintenance system more practical and significant in engineering applications are shown.",2009,0, 931,Implementing Industrial Multi-agent Systems Using JACK,"Industrial control systems have been globally connected to the open computer networks for decentralized management and control purposes. Most of these networked control systems that are not designed with security protection can be vulnerable to network attacks nowadays, so there is a growing demand of efficient and scalable intrusion detection systems (IDS) in the network infrastructure of industrial plants. In this paper, we present a multi-agent IDS architecture that is designed for decentralized intrusion detection and prevention control in large switched networks. An efficient and biologically inspired learning model is proposed for anomaly intrusion detection in the multi-agent IDS. The proposed model called ant colony clustering model (ACCM) improves the existing ant-based clustering approach in searching for near-optimal clustering heuristically, in which meta-heuristics engages the optimization principles in swarm intelligence. In order to alleviate the curse of dimensionality, four unsupervised feature extraction algorithms are applied and evaluated on their effectiveness to enhance the clustering solution. The experimental results on KDD-Cup99 IDS benchmark data demonstrate that applying ACCM with one of the feature extraction algorithms is effective to detect known or unseen intrusion attacks with high detection rate and recognize normal network traffic with low false positive rate",2005,0, 932,Implementing logic spreadsheets in less,"We developed a technique that detects superficial ocular pathologies based on the measurement of electrical impedance spectra. The sensor used is a small microelectrode made of platinum insulated from a cylindrical counterelectrode built of surgical stainless steel. The sensor has the shape of a truncated cone made of acrylic with dimensions identical to that of a standard Goldman prism. 
The sensor is applied to normal and pathological subject eyes with a constant force provided by a commercial tonometer. The circuit is closed through the lacrimal layer and the epithelial and endothelial cells. We measure the electrical impedance with a programmable logic device in which we implemented all the significant functions. These are the synthesis of the seventeen sines for the excitation, one lock-in, and delta-sigma modulators for the digital-to-analog converter and analog-to-digital converter requirements. A simple analog circuit filters the output, implements a voltage divider, and acts as current limiter in order not to damage the cells. We convert the measurements to resistance and capacitance as a function of frequencies. Consistent results are obtained for left and right eyes of the normal subjects. Significant differences are detected between the results for normal eyes and pathological eyes.",2012,0, 933,Implementing the IT fundamentals knowledge area,"Climate change is on the policy agenda at the global level, with the aim of understanding and reducing its causes and to mitigate its consequences. In most of the countries and international organisms UNO, OECD, EC, etc … the efforts and debates have been directed to know the possible causes, to predict the future evolution of some variable conditioners, and trying to make studies to fight against the effects or to delay the negative evolution of such. Nevertheless, the elaboration of a global model was not boarded that can help to choose the best alternative between the feasible ones, to elaborate the strategies and to evaluate the costs. As in all natural, technological and social changes, the best-prepared countries will have the best bear and the more rapid recover. In all the geographic areas the alternative will not be the same one, but the model should help us to make the appropriated decision. It is essential to know those areas that are more sensitive to the negative effects of climate change, the parameters to take into account for its evaluation, and comprehensive plans to deal with it. The objective of this paper is to elaborate a mathematical model support of decisions, that will allow to develop and to evaluate alternatives of adaptation to the climatic change of different communities in Europe and Latin-America, mainly, in vulnerable areas to the climatic change, considering in them all the intervening factors. The models will take into consideration criteria of physical type (meteorological, edaphic, water resources), of use of the ground (agriculturist, forest, mining, industrial, urban, tourist, cattle dealer), economic (income, costs, benefits, infrastructures), social (population), politician (implementation, legislation), educative (Educational programs, diffusion), sanitary and environmental, at the present moment and the future.",2012,0, 934,Implementing the jigsaw model in CS1 closed labs,"We apply the Jigsaw cooperative learning model to our CS1 closed labs. The Jigsaw cooperative learning model assigns students into main groups in which each group member is responsible for a unique subtask, gathers all students responsible for the same subtask into a same focus group for focused exploration, returns all students to their original main groups for reporting and reshaping, and then each group integrates the solutions for the subtasks from its members. For our study, we used the Jigsaw model in three CS1 closed labs. 
For each, there were three sections: (1) students worked individually, (2) students worked in groups using Jigsaw, and (3) students worked in groups using a computer-supported Jigsaw environment. The post-test scores of the three sections are compared to study the impact of Jigsaw and the feasibility of using a computer-supported Jigsaw design. Further, we investigate how the three lab topics (debugging, unified modeling language (UML), and recursion) affected impact of Jigsaw model on student performance.",2006,0, 935,Implementing tutoring strategies into a patient simulator for clinical reasoning learning.,"Objective: This paper describes an approach for developing intelligent tutoring systems (ITS) for teaching clinical reasoning. Materials and methods: Our approach to ITS for clinical reasoning uses a novel hybrid knowledge representation for the pedagogic model, combining finite state machines to model different phases in the diagnostic process, production rules to model triggering conditions for feedback in different phases, temporal logic to express triggering conditions based upon past states of the student's problem solving trace, and finite state machines to model feedback dialogues between the student and TeachMed. The expert model is represented by an influence diagram capturing the relationship between evidence and hypotheses related to a clinical case. Results: This approach is implemented into TeachMed, a patient simulator we are developing to support clinical reasoning learning for a problem-based learning medical curriculum at our institution; we demonstrate some scenarios of tutoring feedback generated using this approach. Conclusion: Each of the knowledge representation formalisms that we use has already been proven successful in different applications of artificial intelligence and software engineering, but their integration into a coherent pedagogic model as we propose is unique. The examples we discuss illustrate the effectiveness of this approach, making it promising for the development of complex ITS, not only for clinical reasoning learning, but potentially for other domains as well.",2006,0, 936,Implementing Virtual Practices using an Alternative Methodology to Develop Educational Software,"Educational Software is one of the pillars of distance learning and educational systems and has become the basic tool for future generations of students. However, recent methodologies used in this development have too many problems: a lack of common theoretical frameworks which can be used by anyone in the project, and excessive formality in both technical and pedagogical factors. Activities employed in the development of educational software are complex because the process depends on the developer's expertise, aspects of software engineering, and the acquisition and implementation of pedagogical knowledge. We propose the introduction of ""effective practices "" within an alternative methodology to develop this kind of software. The identification of effective practices is focused internally to ensure the effective realization of the development process, and externally to guide the pedagogical monitoring of a project. Our effective practices provide the basis of an alternative methodology for the development of educational software under rigorous conditions that enable us to achieve a highly successful and repeatable process in the field of electronic instrumentation.",2007,0, 937,Implications of an ethic of privacy for human-centred systems engineering,"

Privacy remains an intractable ethical issue for the information society, and one that is exacerbated by modern applications of artificial intelligence. Given its complicity, there is a moral obligation to redress privacy issues in systems engineering practice itself. This paper investigates the role the concept of privacy plays in contemporary systems engineering practice. Ontologically a nominalist human concept, privacy is considered from an appropriate engineering perspective: human-centred design. Two human-centred design standards are selected as exemplars of best practice, and are analysed using an existing multi-dimensional privacy model. The findings indicate that the human-centred standards are currently inadequate in dealing with privacy issues. Some implications for future practice are subsequently highlighted.

",2008,0, 938,Implications of researcher assumptions about perceived relative advantage and compatibility,"

Although scale reuse is an important and efficient research practice, it may not always be the most appropriate practice. Mechanistically reusing scales developed for a particular context may lead to a variety of undesirable effects. One of the risks is that frequently reused scales can inadvertently begin to alter the definitions of related constructs. When this occurs, a full understanding of the constructs can be lost. Innovation diffusion is one area in which evidence suggests that this has occurred, specifically for relative advantage and compatibility.

This article seeks to better understand the risks of mechanistic scale reuse within the information systems field, with a specific focus on the relative advantage and compatibility constructs. We review the information systems literature focusing on IT adoption and diffusion, examining the theoretical and empirical relationships between relative advantage and compatibility. Evidence from this review indicates that there may be both conceptual and empirical overlap between the two, which has led to inconsistent empirical and theoretical treatment of the constructs across studies. We also report an empirical examination of the domain coverage of the scales, which provides evidence that the scales a) exhibit a high degree of conceptual and empirical overlap and b) only represent a subset of their full conceptualization. We offer recommendations for researchers who wish to use these constructs in future work.

",2008,0, 939,Improving availability with recursive microreboots: a soft-state system case study.,"Even after decades of software engineering research, complex computer systems still fail. This paper makes the case for increasing research emphasis on dependability and, specifically, on improving availability by reducing time-to-recover.All software fails at some point, so systems must be able to recover from failures. Recovery itself can fail too, so systems must know how to intelligently retry their recovery. We present here a recursive approach, in which a minimal subset of components is recovered first; if that does not work, progressively larger subsets are recovered. Our domain of interest is Internet services; these systems experience primarily transient or intermittent failures, that can typically be resolved by rebooting. Conceding that failure-free software will continue eluding us for years to come, we undertake a systematic investigation of fine grain component-level restarts, microreboots, as high availability medicine. Building and maintaining an accurate model of large Internet systems is nearly impossible, due to their scale and constantly evolving nature, so we take an application-generic approach, that relies on empirical observations to manage recovery.We apply recursive microreboots to Mercury, a commercial off-the-shelf (COTS)-based satellite ground station that is based on an lnternet service platform. Mercury has been in successful operation for over 3 years. From our experience with Mercury, we draw design guidelines and lessons for the application of recursive microreboots to other software systems. We also present a set of guidelines for building systems amenable to recursive reboots, known as ""crash-only software systems.""",2004,0, 940,Improving courseware quality through life-cycle encompassing quality assurance,"The quality of courseware development is affected by four factors: content and instructional issues; managment; technical and graphical issues; and concerns of the customer. In this paper we describe IntView, a courseware development method that integrates these four factors throughout the whole development life-cycle. By combining existing courseware quality assurance methodologies with software engineering techniques such as inspections and tests the interests of the participating roles are balanced. Both the IntView methodology and the quality assurance techniques are described and the results of some preliminary case studies are reported.",2004,0, 941,Improving Design Artifact Reviews with Group Support Systems and an Extension of Heuristic Evaluation Techniques,"This paper proposes a new systems development methodology entitled heuristic dataflow diagrams (HDFDs). HDFD leverages group support systems (GSS) and heuristics to improve the creation of DFDs, which are frequently used design artifacts in systems development. We propose theory-based hypotheses to predict likely outcomes of using HDFD in non-GSS groups, GSS groups, and distributed GSS groups. These hypotheses were tested in a laboratory experiment using 123 subjects. The results indicate that HDFD performed with GSS provides process gains over HDFD performed without GSS. Further, distributed groups using GSS and HDFD are able to be as effective as face-to-face (FtF) GSS groups. 
Our GSS-based methodology of HDFD can be embraced by practitioners to improve the creation of DFDs in FtF and distributed teams and it can likely also be extended to other systems design methodologies, such as structured walkthroughs, entity-relationship diagrams, use cases, and sequence diagrams.",2005,0, 942,Improving Dynamic Calibration through Statistical Process Control,"Dynamic calibration (DC), presented by the authors in previous works has proved to be a flexible approach for massive maintenance software project estimation, able to recalibrate an estimation model in use according to relevant process performance changes pointed out by the project manager. Nevertheless, it results quite subjective in its application and tightly based on manager experience. In this work the authors present an improvement of the approach based on the use of statistical process control (SPC) technique. SPC is a statistically based method able to quickly highlight shift in process performances. It is well known in manufacturing contexts and it has recently emerged in the software engineering community. In this work, authors have integrated SPC in DC as decision support tool for identifying when recalibration of the estimation model must be carried out. This extension makes DC less ""person-based"", more deterministic and transferable in its use than the previous version. The extended approach has been experimented on industrial data related to a renewal project and the results compared with both, a concurrent approach such as analogy based estimation and its previous version. The results are encouraging and stimulate further investigation.",2005,0, 943,Improving graphical information system model use with elision and connecting lines,"Graphical information system (IS) models are used to specify and design IS from several perspectives. Due to the growing size and complexity of modern information systems, critical design information is often distributed via multiple diagrams. This slows search performance and results in reading errors that later cause omissions and inconsistencies in the final designs. We study the impact of large screens and the two promising visual integration techniques of elision and connecting lines that can decrease the designers' cognitive efforts to read diagrams. We conduct a laboratory experiment using 84 computer science students to investigate the impact of these techniques on the accuracy of the subjects' search and recall with entity-relationship diagrams and data flow diagrams. The search tasks involve both vertical and horizontal searches on a moderately complex IS model that consists of multiple diagrams. We also examine the subjects' spatial visualization abilities as a possible covariant for observed search performance. These visual integration techniques significantly reduced errors in both the search and the recall of diagrams, especially with respect to individuals with low spatial visualization ability.",2004,0, 944,Improving Hierarchical Taxonomy Integration with Semantic Feature Expansion on Category-Specific Terms,"

In recent years, the taxonomy integration problem has obtained much attention in many research studies. Many sorts of implicit information embedded in the source taxonomy are explored to improve the integration performance. However, the semantic information embedded in the source taxonomy has not been discussed in the past research. In this paper, an enhanced integration approach called SFE (Semantic Feature Expansion) is proposed to exploit the semantic information of the category-specific terms. From our experiments on two hierarchical Web taxonomies, the results are positive to show that the integration performance can be further improved with the SFE scheme.

",2008,0, 945,Improving modularity of reflective middleware with aspect-oriented programming,"Reflective middleware has been proposed as an effective way to enhance adaptability of component-oriented middleware architectures. To be effectively adaptable, the implementation of reflective middleware needs to be modular. However, some recently emerged applications such as mobile, pervasive, and embedded applications have imposed more stringent modularity requirements to the middleware design. They require support for the conception of a minimal middleware while promoting finegrained modularity of reflective middleware features. The key problem is that fundamental mechanisms for decomposing reflective middleware implementations, such as object-oriented ones, suffer from not providing the proper means to achieve the required level of localizing reflection-specific concerns. This paper presents a systematic investigation on how aspect-oriented programming scales up to improve modularity of typical reflection-specific crosscutting concerns. We have quantitatively compared Java and AspectJ implementations of an OpenORB-compliant reflective middleware using separation of concerns metrics.",2006,0, 946,Improving Object-Oriented Micro Architectural Design Through Knowledge Systematization,"

Designers have accumulated much knowledge referring to OO systems design and construction, but this large body of knowledge is neither organized nor unified yet. In order to improve OO micro architectures, using the accumulated knowledge in a more systematic and effective way, we have defined a rules catalog (that unifies knowledge such as heuristics, principles, bad smells, etc.), the relationships between rules and patterns and an improvement method based on these subjects. We have carried out a controlled experiment which shows us that the usage of a rules catalog and its relationship with patterns really improves OO micro architectures.

",2005,0, 947,IMPROVING SOFTWARE PRODUCT INTEGRATION,"The paper discusses Holmes tool designed to support the Sherlock (Predonzani et al., 2000) software product line methodology. Holmes attempts to provide comprehensive support for product line development, from market and product strategy analysis to modeling, designing, and developing the resulting system. The paper shows the overall architecture of Holmes. It centres on the use of JavaSpaces as a distributed blackboard of objects.",2000,0, 948,Improving the quality of UML models in practice,"Internal evaluation is an important part of the service quality evaluation of power supply. This paper proposes an index system of service quality internal evaluation which covers all kinds of the electricity service activities. Also, a model of how to weigh the indexes based on the Analytic Hierarchy Process and Triangular Fuzzy Number Principle is established, to solve the uncertainty of empowering the indexes. Finally, a technical support system framework that features of systemization, efficiency, operation and assistant decision, is designed, according to the proposed evaluation model.",2008,0, 949,"Inclusion Through the Ages? Gender, ICT Workplaces, and Life Stage Experiences in England","AbstractThis exploratory paper examines the various challenges that women working in information and communications technology (ICT) in England face in relation to their age, their life stage, and their career stage, with these three aspects being at least partially related. We first examine the literature currently available in relation to women, age and ICT work, arguing that age tends to be the forgotten variable in research on women in ICT. Using eight case studies of individual female ICT professionals in their twenties, thirties, forties, and fifties, we explore the nuances of experience these women have in relation to their career and their caring responsibilities. We consider the possibility that women in ICT may have heterogeneous experiences of working in what are often masculinized environments related to, but not determined by, their age. Based on our interpretations of our empirical data, we adapt Supers career-stage theory to better frame our subsequent theoretical assertions. To conclude, we suggest that exploring age, life stage, and/or career stage in relation to female ICT professionals circumstances and experiences means that we can better theorize gender in the field of information systems, and hence develop more relevant gender inclusion strategies.",2006,0, 950,Incorporating the Cultural Dimensions into the Theoretical Framework of Website Information Architecture,"

Information Architecture (IA) has emerged as a discipline that is concerned with the development of systematic approaches to the presentation and organization of online information. The IA discipline has commanded significant attention from professional practitioners but lacks in the theoretical perspective. In our effort to formalize the knowledge of the discipline, we report on the extension of our initial work of formalizing the architectural framework for understanding website IA. Since the web is not a culturally neutral medium, we sought to delineate the cultural dimensions within our formed framework of website IA with the incorporation of the cultural dimensions of Hofstede and Hofstede's (2005), Hall's (1966), Hall and Hall's (1990) and Trompenaar's (1997). This attempt contributes towards the progress of putting a sense of cultural localization to the IA augmentation for local and international website design. In addition, to avoid theoretical aloofness and arbitrariness, practical design presumptions are also reflected.

",2007,0, 951,Indicative Markers of Leadership provided by ICT Professional Bodies in the Promotion and Support of Ethical Conduct,"Most countries with a mature Information and Communications Technology (ICT industry have at least one professional body (PB) that claims to represent its members working with such technology. Other ICT PBs operate in the international arena. These PBs may differ in membership criteria, jurisdiction and even objectives but all profess to promote high ethical and professional standards. This study seeks to determine the common indicative markers that demonstrate that an ICT PB is offering leadership in identifying, promoting and supporting ethical conduct amongst a variety of constituencies including its own members and beyond. An extensive literature review identified over 200 prospective markers covering a broad range of potential activities of an ICT PB. These were grouped into nine major areas: ethical professional practice; continuous professional development; research and publication; education of future professionals; members? career development; social obligations; professional engagement; preserving professional dignity/ reputation and regulation of the profession. These markers were arranged hierarchically in a word processing document referred to as a ?marker template?. An analysis of selected ICT PBs websites was undertaken to confirm and refine the template. It will be used in the future for a comparative study of how professional bodies offer leadership to their various constituencies in the area of ethical conduct.",2007,0, 952,Inducing Sequentiality Using Grammatical Genetic Codes,"AbstractThis paper studies the inducement of sequentiality in genetic algorithms (GAs) for uniformly-scaled problems. Sequentiality is a phenomenon in which sub-solutions converge sequentially in time in contrast to uniform convergence observed for uniformly-scaled problems. This study uses three different grammatical genetic codes to induce sequentiality. Genotypic genes in the grammatical codes are interpreted as phenotypes according to the grammar, and the grammar induces sequential interactions among phenotypic genes. The experimental results show that the grammatical codes can indeed induce sequentiality, but the GAs using them need exponential population sizes for a reliable search.",2004,0, 953,Induction and Evaluation of Affects for Facial Motion Capture,"

In this study, we are interested in capturing the facial configuration of Affects in order to use them for Embodied Conversational Agents. In order to create a believable ECA, it is necessary to capture natural Affects that can be learnt and replayed. However, until now, animation data are extracted from videos and their description is far from being sufficient to generate realistic facial expressions. It seems that believable results cannot be obtained without using 3D motion capture. This is why in this study we tried to set up a protocol for Affects induction in a motion capture situation with manipulated subjects who are unaware of the real goals. Similarly from [1], we induce natural Affects in order to capture the related facial expressions.

",2007,0, 954,Industrial Experience with Fact Based Modeling at a Large Bank,"

In this article the author describes his experience in introducing fact orientation including high level business process descriptions, structured concept definitions, and operational level business process descriptions from which to generate the web application, in a project that is set up to develop a new contract administration for consumer credit in an international environment. This was done by dividing the TO-BE processes into a number of iterations.

",2007,0, 955,Inferring Piecewise Ancestral History from Haploid Sequences,"AbstractThe determination of complete human genome sequences, subsequent work on mapping human genetic variations, and advances in laboratory technology for screening for these variations in large populations are together generating tremendous interest in genetic association studies as a means for characterizing the genetic basis of common human diseases. Considerable recent work has focused on using haplotypes to reduce redundancy in the datasets, improving our ability to detect significant correlations between genotype and phenotype while simultaneously reducing the cost of performing assays. A key step in applying haplotypes to human association studies is determining regions of the human genome that have been inherited intact by large portions of the human population from ancient ancestors. This talk describes computational methods for the problem of predicting segments of shared ancestry within a genetic region among a set of individuals. Our approach is based on what we call the haplotype coloring problem: coloring segments of a set of sequences such that like colors indicate likely descent from a common ancestor. I will present two methods for this problem. The first uses the notion of ?haplotype blocks to develop a two-stage coloring algorithm. The second is based on a block-free probabilistic model of sequence generation that can be optimized to yield a likely coloring. I will describe both methods and illustrate their performance using real and contrived data sets.",2004,0, 956,Information dissemination in modern banking applications.,"The goal of this paper is to promote application of logic synthesis methods and tools in different tasks of modern digital designing. The paper discusses functional decomposition methods, which are currently being investigated, with special attention to balanced decomposition. Since technological and computer experiments with application of these methods produce promising results, this kind of logic synthesis probably dominate the development of digital circuits for FPGA structures. Many examples confirming effectiveness of decomposition method in technology mapping in digital circuits design for cryptography and DSP applications are presented.",2005,0, 957,Information Management in Distributed Healthcare Networks,"In mobile ad hoc networks (MANETs), identity (ID)-based cryptography with threshold secret sharing is a popular approach for the key management design. Most previous work for key management in MANETs concentrates on the protocols and structures. How to optimally conduct node selection in ID-based cryptography with threshold secret sharing merits further investigation. In this paper, we propose a distributed scheme to dynamically select nodes with master key shares to provide the private key generation service. The proposed scheme considers the node security and energy states in the process of selecting best nodes to construct a private key generator (PKG). Intrusion detection systems are modeled as noisy sensors to monitor the system security situations. The node selection process is formulated as a stochastic optimization problem. 
Simulation results are presented to illustrate the effectiveness of the proposed scheme.",2009,0, 958,Information Quality in Healthcare: Coherence of Data Compared between Organization's Electronic Patient Records,"In this paper we present a case-based analysis of health care data quality problems in a situation where data on diabetes patients are combined from different information systems. Nationally uniform, integrated health care information systems will become more important in meeting the demands of patient-centered care in the future. During the development of several electronic health records it has become clear that the integration of the data is still challenging. Data collected in various systems can have quality faults; they can, for instance, be non-coherent or include contradictory information, or the desired data can be completely missing, as proved to be the case in our study as well. The quality of the content of patient information and the process of data production constitute a central part of good patient care, and more attention should be paid to them.",2008,0, 959,Information Security Management System for SMB in Ubiquitous Computing,"

In this study, an information security management system is developed through a theoretical and literature-based approach aiming at efficient and systematic information security for Korean small and medium size businesses, considering the restrictions of the literature review on information security management systems and the inherent characteristics of small and medium size businesses. The management system was divided into the 3 areas of the supporting environment of information security, establishment of the information security infrastructure, and management of information security. Through verification by statistical methods (reliability analysis, feasibility study) based on the questionnaire for the specialists, the overall information security management system is structured with the 3 areas, 8 management items, and 18 detailed items of the management system. On the basis of this study, it is expected that small and medium size businesses will be able to establish information security management systems in accordance with the information security policy incorporating the existing informatization strategies and management strategies, information security systems which will enhance existing information management, and concrete plans for follow-up management.

",2006,0, 960,"Information Security Standards: Adoption Drivers (Invited Paper) What drives organisations to seek accreditation? The case of BS 7799-2:2002","AbstractISO/IEC 17799 is a standard governing Information Security Management. Formalised in the 1990s, it has not seen the take up of accreditations that could be expected from looking at accreditation figures for other standards such as the ISO 9000 series. This paper examines why this may be the case by investigating what has driven the accreditation under the standard in 18 UK companies, representing a fifth of companies accredited at the time of the research. An initial literature review suggests that adoption could be driven by external pressures, or simply an objective of improving operational performance and competitive performance. It points to the need to investigate the influence of Regulators and Legislators, Competitors, Trading Partners and Internal Stakeholders on the decision to seek accreditation.An inductive analysis of the reasons behind adoption of accreditation and its subsequent benefits suggests that competitive advantage is the primary driver of adoption for many of the companies we interviewed. We also find that an important driver of adoption is that the standard enabled organisations to access best practice in Information Security Management thereby facilitating external relationships and communication with internal stakeholders. Contrary to the accepted orthodoxy and what could be expected from the literature, increased regulation and the need to comply with codes of practice are not seen as significant drivers for companies in our sample.",2006,0, 961,Information System in Atomic Collision Physics,Corrigendum to IEEE Std 802.3-2005. This corrigendum will correct Equation 55-55. It has been incorporated into the second printing of IEEE Std 802.3an-2006.,2007,0, 962,Information Systems as a Design Science,"In order to solve the problem of actively adapt to the police informatization and the development trend of cloud technical, a Cloud Based Forensic Science Information System Model is proposed in this paper. Forensic Science Information Management System can largely benefit from cloud based models in terms of cost reduction and resource utilization. The purpose of system design is accounting forensic science management from the heavy manual work extricate themselves, thereby improving work efficiency. Compared with conventional design schemes, this system can easy and ubiquitous access to forensic science data in the cloud, and opportunities to utilize the services of forensic science experts who maybe in remote areas. Therefore, develop a cloud based forensic science information system play an important role.",2013,0, 963,Information systems outsourcing: a survey and analysis of the literature,"Notice of Retraction

After careful and considered review of the content of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles.

We hereby retract the content of this paper. Reasonable effort should be made to remove all past references to this paper.

The presenting author of this paper has the option to appeal this decision by contacting TPII@ieee.org.

On the basis of extant literature, we built a conceptual model of the relationship between network structure, IT capability, organizational relationship management capability, trust and Information Systems Outsourcing (ISO) success, and proposed some hypotheses about them. To test the hypotheses, we designed a questionnaire, and conducted a survey in Xi'an Software Park. We used AMOS 16.0 to analyze the collected data. The empirical results show that network structure, organizational relationship management capability, and trust have significant impacts on ISO success while IT capability appears to have no impact on ISO success.",2010,0, 964,Information Technology Outsourcing (ITO) Governance: An Examination of the Outsourcing Management Maturity Model,"Given a lack of experience in outsourcing contractual management, firms involved in IT outsourcing (ITO) can encounter unexpectedly poor service quality improvement. As a guidance for better ITO activities, Raffoul (2002) introduced the outsourcing management maturity (OMM) model, a framework established to create effective vendor management structure, create measurable and enforceable service-level agreements (SLAs), implement formal processes, and drive vendors to improve service quality. This model was developed to enable outsourcing to be managed as an investment portfolio whereby cost is reduced, risks are mitigated, IT organization credibility is established, and outsourcing benefits are materialized in a timely manner. This paper questions if one or more aspects of this OMM model may be considered necessary success factors when applied in practice, particularly in the financial sector. This research, as part of a study on drivers for IT cost optimization and outsourcing, examines the elements of the OMM model in the context of several recent IT outsourcing contracts in the banking sector to see which of its elements are actively employed in successful contracts.",2004,0, 965,Information theoretic evaluation of change prediction models for large-scale software,"In this paper, we analyze the data extracted from several open source software repositories. We observe that the change data follows a Zipf distribution. Based on the extracted data, we then develop three probabilistic models to predict which files will have changes or bugs. The first model is Maximum Likelihood Estimation (MLE), which simply counts the number of events, i.e., changes or bugs, that happen to each file and normalizes the counts to compute a probability distribution. The second model is Reflexive Exponential Decay (RED) in which we postulate that the predictive rate of modification in a file is incremented by any modification to that file and decays exponentially. The third model is called RED-Co-Change. With each modification to a given file, the RED-Co-Change model not only increments its predictive rate, but also increments the rate for other files that are related to the given file through previous co-changes. We then present an information-theoretic approach to evaluate the performance of different prediction models. In this approach, the closeness of model distribution to the actual unknown probability distribution of the system is measured using cross entropy. We evaluate our prediction models empirically using the proposed information-theoretic approach for six large open source systems. 
Based on this evaluation, we observe that of our three prediction models, the RED-Co-Change model predicts the distribution that is closest to the actual distribution for all the studied systems.",2006,0, 966,In-group/out-group effects in distributed teams: an experimental simulation,"

Modern workplaces often bring together virtual teams where some members are collocated, and some participate remotely. We are using a simulation game to study collaborations of 10-person groups, with five collocated members and five isolates (simulated 'telecommuters'). Individual players in this game buy and sell 'shapes' from each other in order to form strings of shapes, where strings represent joint projects, and each individual players' shapes represent their unique skills. We found that the collocated people formed an in-group, excluding the isolates. But, surprisingly, the isolates also formed an in-group, mainly because the collocated people ignored them and they responded to each other.

",2004,0, 967,Innovation Processes in the Public Sector ? New Vistas for an Interdisciplinary Perspective on E-Government Research?,"

Public sector innovations have been comprehensively studied from a managerial (New Public Management, NPM) as well as technological (Electronic Government, eGovernment) perspective. Here, much research work took a single-organisational managerial stance while little was investigated into corresponding public-sectoral innovation and diffusion processes. At this point, a political science view understands the embeddedness of public-sectoral innovation processes in the surrounding politico-administrative system. Therefore, we seek to investigate public sector innovations in terms of identifying politico-administrative system dynamics which shape the process of their emergence and diffusion. In order to provide empirical evidence, we analyse the Japanese case by means of a series of qualitative-empirical expert interviews. We demonstrate how decentralisation reforms open up innovation potential for local governments, by which means the central government still holds strong influence on innovation and diffusion processes, and which possible paths of eGovernment and NPM innovation manifest as a result.

",2007,0, 968,INSCAPE: Emotion Expression and Experience in an Authoring Environment,"

Human emotions are known to play an important role in the users' engagement, namely by activating their attention, perception and memory skills, which in turn will help to understand the story – and hopefully perceive, or rather “feel” it as an entertaining experience. Despite the more and more realistic and immersive use of 3D computer graphics, multi-channel sound and sophisticated input devices – mainly forced by game applications – the emotional participation of users still seems a weak point in most interactive games and narrative systems. This paper describes methods and concepts on how to bring emotional experiencing and emotional expression into interactive storytelling systems. In particular, the Emotional Wizard is introduced, as an emerging module for authoring emotional expression and experiencing. Within the INSCAPE framework, this module is meant to improve elicited emotions as elements of style, which are used deliberately by an author within an integrated storytelling environment.

",2006,0, 969,Institutionalization of software product line: An empirical investigation of key organizational factors,"A good fit between the person and the organization is essential in a better organizational performance. This is even more crucial in case of institutionalization of a software product line practice within an organization. Employees' participation, organizational behavior and management contemplation play a vital role in successfully institutionalizing software product lines in a company. Organizational dimension has been weighted as one of the critical dimensions in software product line theory and practice. A comprehensive empirical investigation to study the impact of some organizational factors on the performance of software product line practice is presented in this work. This is the first study to empirically investigate and demonstrate the relationships between some of the key organizational factors and software product line performance of an organization. The results of this investigation provide empirical evidence and further support the theoretical foundations that in order to institutionalize software product lines within an organization, organizational factors play an important role.",2007,0, 970,Instructional design of a programming course: a learning theoretic approach,"Phenomenography is a well-known empirical research approach that is often used to investigate students' ways of learning programming. Phenomenographic pedagogy is an instructional approach to plan learning and teaching activities. This theoretical paper gives an overview of prior research in phenomenographic studies of programming and shows how the results from these research studies can be applied to course design. Pedagogic principles grounded in the phenomenographic perspective on teaching and learning are then presented that consider how to tie students' experiences to the course goals (relevance structure) and how to apply variation theory to focus on the desired critical aspects of learning. Building on this, an introductory object-oriented programming course is described as an example of research-based course design. The insights gained from the experience of running the course are shared with the community of computer science educators, as also the benefits and responsibilities for those who wish to adopt the phenomenographic perspective on learning to plan their teaching. The development of an increased awareness of the variation in students' ways of experiencing programming and the need to broaden the context of the programming course are discussed.",2014,0, 971,Instrumenting Contracts with Aspect-Oriented Programming to Increase Observability and Support Debugging,"In this paper we report on how aspect-oriented programming (AOP), using AspectJ, can be employed to automatically and efficiently instrument contracts and invariants in Java. The paper focuses on the templates to instrument preconditions, postconditions, and class invariants, and the necessary instrumentation for compliance-checking to the Liskov substitution principle.",2005,0, 972,"Integrated scoring for spelling error correction, abbreviation expansion and case restoration in dirty text","

An increasing number of language and speech applications are gearing towards the use of texts from online sources as input. Despite this rise, little work can be found on integrated approaches for cleaning dirty texts from online sources. This paper presents a mechanism of Integrated Scoring for Spelling error correction, Abbreviation expansion and Case restoration (ISSAC). The idea of ISSAC was first conceived as part of the text preprocessing phase in an ontology engineering project. Evaluations of ISSAC using 400 chat records reveal an improved accuracy of 96.5%, compared with the 74.4% achieved using Aspell alone.

",2006,0, 973,Integrated Software Process and Product Lines,"In order to realize the industrialization production of software, people have carried out research on and analysis the software product line architecture of the growing maturity, component technology and development methods for product line. In this paper, a novel software engineering process model is proposed based on the modern industrial production systems and automated production method: that is ????N-life-cycle model????. Based on this new model, not only integrated software engineering environment model and framework have been proposed, which are based on the product line development process model, but also study systematically on theirs implementation. ""N-life-cycle model"" and ""integrated software engineering environment model based on the product line"" which are set up in the article are brand-new open models possessing modern manufacturing production characteristic. The models can impel the research development quickly of product line engineering and product line software engineering environment towards the industrialisation and automatization of the software industry.",2008,0, 974,Integrated strategy of industrial product suppliers: Working with B2B intermediaries,"Purpose ? The primary purpose was to learn about different variables of an integrated strategy associated with choosing to supply through business?to?business (B2B) intermediaries and apply the variables to a series of cases. Design/methodology/approach ? A literature review served as a basis to develop an integrated model. A combination of primary and secondary research was conducted to apply the concepts of the model to different internet trading exchanges. Findings ? Each trade exchange offers a different set of customers and suppliers vying for business opportunities. There are no common platforms for software and hardware. If a small company is interested in trading through an internet exchange, they want to select based on the variables identified that best meet their needs and integrate with their business strategy. Research limitations/implications ? The focus was on industrial products and may not be applicable to consumer products. Practical implications ? Suppliers must carefully operate in the future by evaluating each customer and determining which trade exchanges will provide them with the greatest benefit at the lowest cost. The infrastructure investment is an unavoidable cost that cannot be forgone unless the supplier wants to discontinue providing to most of its customers. The supplier needs to look at all aspects identified in the integrated business model and the foundation and facilitation for success lie in the information management of the entire entity. Originality/value ? This paper takes the existing body of knowledge and applies it to the development of an integrated e?business model for industrial suppliers used to compare different internet trading exchanges. ",2005,0, 975,Integrated Support for Scientific Creativity,"Neonatal Intensive Care Unit maintain and support life during the critical period of premature development. This research presents the challenges, trends and opportunities for integrated real time neonatal clinical decision support. We demonstrated this potential using environment known as Artemis, a clinical decision support system. 
A review of current devices in the intensive care unit and of neonatal practice shows the current environment and our perspective on the future of neonatal clinical decision support. The study demonstrates that Artemis will be able to incorporate new data streams from infusion pumps, EEG monitors and cerebral oxygenation monitors, innovating practice and improving clinical support.",2012,0, 976,Integrating a model of analytical quality assurance into the V-Modell XT,"Economic models of quality assurance can be an important tool for decision-makers in software development projects. They enable quality assurance planning to be based on economic factors of the product and the defect-detection techniques used. A variety of such models has been proposed, but many are too abstract to be used in practice. Furthermore, even the more concrete models lack an integration with existing software development process models to increase their applicability. This paper describes an integration of a thorough stochastic model of the economics of analytical quality assurance with the systems development process model V-Modell XT. The integration is done in a modular way by providing a new process module - a concept directly available in the V-Modell XT for extension purposes - related to analytical quality assurance. In particular, we describe the work products, roles, and activities defined in our new process module and their effects on existing V-Modell XT elements.",2006,0, 977,Integrating Data Sources and Network Analysis Tools to Support the Fight Against Organized Crime,"

We discuss how methods from social network analysis could be combined with methodologies from database mediator technology and information fusion in order to give police and other civil security decision-makers the ability to achieve predictive situation awareness. Techniques based on these ideas have been demonstrated in the EU PASR project HiTS/ISAC.

",2008,0, 978,Integrating Gene Expression Data from Microarrays Using the Self-Organising Map and the Gene Ontology,"

The self-organizing map (SOM) is useful within bioinformatics research because of its clustering and visualization capabilities. The SOM is a vector quantization method that reduces the dimensionality of the original measurements and visualizes individual tumor samples in a SOM component plane. The data are taken from cDNA microarray experiments on the Diffuse Large B-Cell Lymphoma (DLBCL) data set of Alizadeh. The objective is to get the SOM to discover biologically meaningful clusters of genes that are active in this particular form of cancer. Despite their powers of visualization, SOMs cannot provide a full explanation of their structure and composition without further detailed analysis. The only method to have gone some way towards filling this gap is the unified distance matrix or U-matrix technique. This method will be used to provide a better understanding of the nature of the discovered gene clusters. We enhance the work of previous researchers by integrating the clustering results with the Gene Ontology for deeper analysis of biological meaning, identification of diversity in gene expression of the DLBCL tumors, and reflection of the variations in tumor growth rate.

",2007,0, 979,Integrating Service Registries with OWL-S Ontologies,"With the advance of cloud computing, cloud service providers (CSP) provide increasingly diversified services to users. To utilize general search engine, such as Google, is not an effective and efficient if the services are similar but with different attributes. Therefore, an intelligent service discovery platform is necessary for seeking suitable services accurately and quickly. In the paper1, we propose a framework that integrates intelligent agent and ontology for service discovery in cloud environment. The framework contains some agents and mainly assists users to discovering suitable service according to user demand. User can submit his flat-text based request for discovering required service. We implement a cloud service discovery environment to demonstrate the concept and its application. We also utilize the Recall and Precision to evaluate the accuracy of the system.",2012,0, 980,Integrating tools for practical software analysis,"The paper describes software for active antenna radar simulation. Cost and feasibility considerations impose dispersion tolerance between the numerous transmit and receive channels of an active antenna, so evaluation of global performance of the instrument becomes delicate. This tool allows an accurate calculation of transmitted and received signals through the antenna taking into account phase and amplitude distortions and dispersions withstood along the RF network distribution. Radiation pattern and impulse response express active antenna performance. First results are presented",1993,0, 981,Integrating visual goal models into the Rational Unified Process.,"The Rational Unified Process is a comprehensive process model that is tailorable, provides templates for the software engineering products, and integrates the use of the Unified Modeling Language (UML); it is rapidly becoming a de facto standard for developing software. The process supports the definition of requirements at multiple levels. Currently, the early requirements, or goals, are captured in a textual document called the Vision Document, as the UML does not include a goal modeling diagram. The goals are subsequently refined into software requirements, captured in UML Use Case Diagrams. Given the well documented advantages of visual modeling techniques in requirements engineering, including the efficient communication and understanding of complex information among numerous diverse stakeholders, the need for an enhanced version of the Vision Document template which supports the visual modeling of goals is identified. Here, an Enhanced Vision Document is proposed which integrates two existing visual goal models: AND/OR Graph for functional goals and Softgoal Interdependency Graph for non-functional goals. A specific approach to establishing traceability relationships from the goals to the Use Cases is presented. Tool support has been developed for the Enhanced Vision Document template; the approach is illustrated using an example system called the Quality Assurance Review Assistant Tool.",2006,0, 982,Integration of ASP offerings: the perspective of SMES,"

Since 2001 Application Service Providers (ASPs) have been aware of the need to integrate their offerings with the existing systems of their customers. At present, options for integrating ASP offerings require considerable expense, time and a skilled workforce. Small and medium-sized enterprises (SMEs) were originally attracted to ASPs because of their low start-up costs and quick time to market. Therefore, for SMEs in particular, the problems with present options for ASP integration can introduce barriers to its adoption. What is not known is how the integration of an ASP offering is perceived by its potential customers and whether concerns held by SMEs reflect those of the more general population. This paper attempts to identify whether integrating ASP offerings is perceived as a worthwhile undertaking. In understanding the perceptions of SMEs and larger organizations, the paper also tries to identify if and how the perception of an integrated ASP differs with company size. The results of a survey suggested that ASP integration is perceived positively by both SMEs and larger organizations. While there were many general concerns held by both, there were also issues that appeared to be of concern only to SMEs and not larger organizations, and vice versa.

",2007,0, 983,Integration of Structured Review and Modelbased Verification: a Case Study,"In this report, we discuss how structured reviews and formal verification and validation (V&V) can be integrated into a single development framework to exploit the synergy between them. The integrated approach uses graphical modeling techniques and the supporting tools as a front-end to formal V&V in order to improve feasibility of the framework. This in turns increases acceptability of formal V&V techniques among software developers by hiding their esoteric features behind the graphical modeling techniques, which are popular among the software developers.",2004,0, 984,"Intelligent Approaches to Mining the Primary Research Literature: Techniques, Systems, and Examples","In this chapter, we describe how creating knowledge bases from the primary biomedical literature is formally equivalent to the process of performing a literature review or a research synthesis. We describe a principled approach to partitioning the research literature according to the different types of experiments performed by researchers and how knowledge engineering approaches must be carefully employed to model knowledge from different types of experiment. The main body of the chapter is concerned with the use of text mining approaches to populate knowledge representations for different types of experiment. We provide a detailed example from neuroscience (based on anatomical tract-tracing experiments) and provide a detailed description of the methodology used to perform the text mining itself (based on the Conditional Random Fields model). Finally, we present data from textmining experiments that illustrate the use of these methods in a real example. This chapter is designed to act as an introduction to the field of biomedical text-mining for computer scientists who are unfamiliar with the way that biomedical research uses the literature.",2008,0, 985,Intelligent Consumer Purchase Intention Prediction System for Green Products,"

In this paper the authors model green behaviour by predicting consumers' purchase intention using Kohonen's LVQ technique. It is envisaged that such a model may facilitate a better understanding of green consumers' market segments. The model employs cognitive, affective, and situational attributes of consumers to predict their purchase intention. The model can, potentially, provide a more direct method for companies to gauge consumers' intention to purchase green products. The results indicate that consumers are more strongly resistant to lower quality than to higher prices of green products in comparison with alternative non-green products.

",2005,0, 986,Interactive Views to Improve the Comprehension of UML Models - An Experimental Validation,"Software development is becoming more and more model-centric. As a result models are used for a large variety of purposes, such as quality analysis, understanding, and maintenance. We argue that the UML and related existing tooling does not offer sufficient support to the developer to understand the models and evaluate their quality. We have proposed and implemented a collection of views to increase model understanding: MetaView, ContextView, MetricView, and UML-City-View. The purpose of this experiment is to validate whether there is a difference between the proposed views and the existing views with respect to comprehension correctness and comprehension effort. The comprehension task performed by the subjects was to answer a questionnaire about a model. 100 MSc students with relevant background knowledge have participated in the experiment. The results are statistically significant and show that the correctness is improved by 4.5% and that the time needed is reduced by 20%.",2007,0, 987,Interest-based Negotiation in Multi-Agent Systems,"Distance education that transmits information on the global Internet has become the trend of educational development in the coming years; however, it still has some drawbacks and shortcomings. This article focuses on how to apply Multi-Agent technology in distance learning systems. The systems are supposed to teach students individualized according to their personality characteristics and cognitive abilities by establishing Student Agent and Teacher Agent, thus, to improve the intelligence and personalization of distance education system, in order to fully tap the potential of learners and improve teaching effectiveness and learning efficiency.",2010,0, 988,Internalisation of Information Security Culture amongst Employees through Basic Security Knowledge,"AbstractThis paper discusses the concept of basic security knowledge. This concept is about organisational members possessing basic security knowledge that can be applied to perform security tasks in their daily work routine. The intention of this paper is not to attempt an exhaustive literature review, but to understand the concept of basic security knowledge that can be used to cultivate a culture of information security in an organisation. The first part highlights some of the basic ideas on knowledge. The second part interprets the concept of basic security knowledge in the case study. Finally, a synthesised perspective of this concept is presented.",2006,0, 989,International workshop on realising evidence-based software engineering,The following topics are dealt with: evidence-based software engineering; search engine; software process simulation.,2007,0, 990,Inter-Organizational Knowledge Management. The Importance of Organizational and Environmental Context,"AbstractThis paper analyzes knowledge management in an inter-organizational level. First, a brief literature review is carried out. After, knowledge management facilitators are studied, analyzing two key factors; the organizational and the environmental context. Last point studies the particular case of SMEs, studying the importance of the cluster concept in order to improve the knowledge management process in an inter-organizational level.",2004,0, 991,Inter-Package Dependency Networks in Open-Source Software,"To date, numerous open source projects are hosted on many online repositories. 
While some of these projects are active and thriving, some projects are either languishing or showing no development activities at all. This phenomenon thus raises the important question of which factors influence the success of open source projects. In a quest to deepen our understanding of the evolution of open source projects, this research aims to analyze the success of open source projects by using the theoretical lens of social network analysis. Based on extensive analyses of data collected from online repositories, we study the impact of the communication patterns of software development teams on the demand and supply outcomes of these projects, while accounting for project-specific characteristics. Using panel data analysis of data over 13 months, we find significant impacts of communication patterns on project outcomes over the long term.",2009,0, 992,"Interpretation, interaction and reality construction in software engineering: An explanatory model.","The incorporation of social issues in software engineering is limited. Still, during the last 20 years the social element inherent in software development has been addressed in a number of publications that identified a lack of common concepts, models, and theories for discussing software development from this point of view. It has been suggested that we need to take interpretative and constructive views more seriously if we are to incorporate the social element in software engineering. Up till now we have lacked papers presenting 'simple' models explaining why. This article presents a model that helps us better to understand interpretation, interaction and reality construction from a natural language perspective. The concepts and categories accompanying the model provide a new frame of reference useful in software engineering research, teaching, and methods development.",2007,0, 993,Interprocess Communication in the Process Specification Language,"In this paper, we suggest a formal framework as a basis for a generic combination of formal languages. This makes it possible for the developer to specify the dynamic part of a system with a process algebra, and the static part with an algebraic specification language. The framework is based on a formal kernel composed of an abstract grammar describing the general form of the combination, and a global operational semantics giving the meaning of each language which can be built with our framework.",2001,0, 994,Investigating Adoption of Agile Software Development Methodologies in Organisations,"Agile software development methodologies have recently gained widespread popularity. The Agile Manifesto states valuing ""individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan"" by M. Fowler (2005). Different organizations are transforming their traditional software development practices into agile ones. There have been several disparate pieces of anecdotal evidence in support of the changes required and the challenges involved. In this paper, we provide a consolidated picture of the important changes required and the challenges involved in such transformation projects.
We also provide a conceptual framework that would help managers focus on the important changes required and the challenges involved in agile software development projects.",2007,0, 995,Investigating formal representations of PIN block attacks,"Financial security APIs control the use of tamper-proof hardware security modules (HSMs) that are used in cash machine networks. The idea is that the API keeps the system secure even from corrupt insiders. Recently, several attacks have been found on these APIs, attracting the attention of formal methods researchers to the area. One family of attacks involves cracking PIN values by tweaking inputs to API functions away from their usual values and watching for errors. These so-called ""PIN block attacks"" affect many APIs. A framework has been proposed for modelling them as Markov Decision Processes, and analysing the resulting models using probabilistic model checking, in order to assess how vulnerable an API configuration is. One problem of this framework is that the models produced are very large, and thus it often takes considerable time to analyse them. The objective of this thesis is to investigate and implement alternative ways of representing the models of PIN block attacks, aiming at increasing their compactness, and consequently making their analysis more efficient in terms of time and memory requirements. The great amount of symmetry inherent in the model is one of the main characteristics that will draw our attention, since it is responsible for a lot of redundant operations. We experiment with our approaches on a number of different security API configurations, and evaluate the results. We argue that the efficiency of the probabilistic model checker depends on a number of issues that should be taken into account during the modelling of real-world systems, in order to achieve faster performance and avoid memory overloads.",2007,0, 996,Investigating pair-programming in a 2nd-year software development and design computer science course,This paper presents the results of a pair programming experiment conducted at the University of Auckland (NZ) during the first semester of 2004. It involved 300 second year Computer Science students attending a software design and construction course. We investigated similar issues to those reported in [26] and employed a subset of the questionnaires used by Laurie Williams et al. on the experiments presented in [26]. Our results support the use of pair programming as an effective programming/design learning technique.,2005,0, 997,Investigating the applicability of the evidence-based paradigm to software engineering,"Context: The success of the evidence-based paradigm in other domains, especially medicine, has raised the question of how this might be employed in software engineering. Objectives: To report the research we are doing to evaluate problems associated with adopting the evidence-based paradigm in software engineering and to identify strategies to address these problems. Method: Currently the experimental paradigms used in a selected set of domains are being examined along with the experimental protocols that they employ. Our aim is to identify those domains that have generally similar characteristics to software engineering and to study the strategies that they employ to overcome the lack of rigorous empirical protocols.
We are also undertaking a series of systematic literature reviews to identify the factors that may limit their applicability in the software engineering domain. Conclusions: We have identified two domains that experience problems with experimental protocols that are similar to those occurring for software engineering, and will investigate these further to assess whether the approaches used to aggregate evidence in these domains can be adapted for use in software engineering. Our experiences from performing systematic literature reviews are positive, but reveal infrastructure problems caused by poor indexing of the literature.",2006,0, 998,Investigating the Relationship between Spatial Ability and Feedback Style in ITSs,"

The rapid and widespread development of computerised learning tools has proven the need for further exploration of learners' personal characteristics in order to maximise the use of the current technology. In particular, this paper looks at the potential of accounting for spatial ability in ERM-Tutor, a constraint-based tutor that teaches logical database design. Our evaluation study shows no conclusive results to support a difference in effectiveness of the textual versus multimedia feedback presentation modes with respect to the students' spatial ability. However, we observed a number of trends indicating that matching the instruction presentation mode to the students' spatial ability influences their perception of the system and motivation to use it, more than their learning gain.

",2008,0, 999,Investigating training effects on software reviews: a controlled experiment,"The software review/inspection task is a labour and time intensive activity. Naturally, any activity aimed at improving the performance of inspectors would be deemed favourable to both practitioners as well as to researchers. This study is motivated by previous work by Chowdhury and Land, on the effects of inspector training on inspection performance. There have been few studies in this area. One classic controlled experiment, consisting of 86 subjects was conducted. We manipulated one independent variable (training). The control group undertook no training, the other 3 treatments were process training, process training with practice and process training with worked examples. The results show practice and worked examples proceeding process training, were both very promising training approaches. They did not affect false positive identification. However, their relative benefits were less clear. We also pinpointed a few possible areas for future research from this work.",2005,0, 1000,Investigating Web size metrics for early Web cost estimation,"This paper's aim is to bring light to this issue by identifying size metrics and cost drivers for early Web cost estimation based on current practices of several Web Companies worldwide. This is achieved using two surveys and a case study. The first survey (S1) used a search engine to obtain Web project quote forms employed by Web companies worldwide to provide initial quotes on Web development projects. The 133 Web project quote forms gathered data on size metrics, cost factors, contingency and possibly profit metrics. These metrics were organised into categories and ranked. Results indicated that the two most common size metrics used for Web cost estimation were ''total number of Web pages'' (70%) and ''which features/functionality to be provided by the application'' (66%). The results of S1 were then validated by a mature Web company that has more than 12years of experience in Web development and a portfolio of more than 50 Web applications. The analysis was conducted using an interview. Finally, once the case study was finished, a second validation was conducted using a survey (S2) involving local New Zealand Web companies. The results of both validations were used to prepare Web project data entry forms to gather data on Web projects worldwide. After gathering data on 67 real Web projects worldwide, multivariate regression applied to the data confirmed that the number of Web pages and features/functionality provided by the application to be developed were the two most influential effort predictors.",2005,0, 1001,Investigation into XP Techniques applied to the Software Hut,"In the 2003 Software Hut (SH) module, the three winning teams developed software using a variant of the traditional software engineering methodology. Despite the fact that half the groups involved developed software using the agile methodology known as eXtreme Programming (XP), none managed to develop the best piece of software. This paper describes the motivation, development and construction of the project's investigation into the difficulties in understanding XP. It begins by examining the components that form the building blocks of an XP project through analysing past projects and papers, which in turn motivates several project hypotheses regarding the most problematic areas. 
The resulting chapters detail the creation of mechanisms whose aims are to validate these hypotheses and ultimately help improve undergraduate students' understanding of the XP methodology. The project delivers a case study and on-line support tool to address these issues and goes on to analyse these solutions from the perspectives of requirements, testing and user evaluation. The conclusions provide directions for further investigation of the area.",2004,0, 1002,Investigation of IS professionals' intention to practise secure development of applications,"It is well known that software errors may lead to information security vulnerabilities, the breach of which can have considerable negative impacts for organizations. Studies have found that a large percentage of security defects in e-business applications are due to design-related flaws, which could be detected and corrected during applications development. Traditional methods of managing software application vulnerabilities have often been ad hoc and inadequate. A recent approach that promises to be more effective is to incorporate security requirements as part of the application development cycle. However, there is limited practice of secure development of applications (SDA) and lack of research investigating the phenomenon. Motivated by such concerns, the goal of this research is to investigate the factors that may influence the intention of information systems (IS) professionals to practise SDA, i.e., incorporate security as part of the application development lifecycle. This study develops two models based on the widely used theory of planned behaviour (TPB) and theory of reasoned action (TRA) to explain the phenomenon. Following model operationalization, a field survey of 184 IS professionals was conducted to empirically compare the explanatory power of the TPB-based model versus the TRA-based model. Consistent with TPB and TRA predictions, attitude and subjective norm were found to significantly impact intention to practise SDA for the overall survey sample. Attitude was in turn determined by product usefulness and career usefulness of SDA, while subjective norm was determined by interpersonal influence, but not by external influence. Contrary to TPB predictions, perceived behavioural controls, conceptualized in terms of self-efficacy and facilitating conditions, had no significant effect on intention to practise SDA. Thus, a modified TRA-based model was found to offer the best explanation of behavioural intention to practise SDA. Implications for research and information security practice are suggested.",2007,0, 1003,Invisible Safety of Distributed Protocols,"Video surveillance systems have become an indispensable tool for the security and organization of public and private areas. Most of the current commercial video surveillance systems rely on a classical client/server architecture to perform person and object recognition. In order to support the more complex and advanced video surveillance systems proposed in recent years, companies are required to invest resources in order to maintain the servers dedicated to the recognition tasks. In this work we propose a novel distributed protocol that exploits the computational capabilities of the surveillance devices (i.e. cameras) to perform the recognition of the person. The cameras fall back to a centralized server if their hardware capabilities are not enough to perform the recognition.
By means of simulations, we show that our algorithm is able to reduce the load of the server by up to 50% with no negative impact on the quality of the surveillance service.",2015,0, 1004,IS Knowledge Gaps: An Industrial Perspective,"This study empirically investigated what knowledge topics are important to an information system (IS) professional from an industrial perspective. More than 200 IS professionals participated in this study to provide their views on 43 IS-related professional courses. For each course, the respondents were asked to rate the knowledge level they had gained in their formal education, how familiar they are with it now, as well as how practical the topic will be in their career. The findings might be helpful to IS training institutes, licensing bodies, departments and curriculum designers in universities or colleges. The results of this study can provide useful suggestions to help IS professionals choose suitable learning courses as well as act as practicable guidelines for IS professional curriculum planning and development.",2006,0, 1005,Is OO the Systems Development Technology for Your Organization?,"Object-oriented technology, which emerged in response to the growing need to develop and maintain complex software systems, has aroused a great deal of interest in both academia and industry. After more than a decade, there is still no clear result describing the extent to which the technology is used. A recent survey found that the adoption rate of object-oriented technology in systems analysis and design is much lower than expected. This study applies the theory of diffusion of innovations to investigate the extent to which the technology is used and the reasons as to why it is being adopted or not adopted. Based on the survey findings, an instrument is proposed for organizations to assess whether they are ready to adopt this new paradigm for their systems development. The purpose of this study is to help management make informed decisions on the adoption of object-oriented systems development methodologies in their organizations",2006,0, 1006,Is there a future for empirical software engineering?,"Software engineering research can be done in many ways; in particular, it can be done in different ways when it comes to working with industry. This paper presents a list of the top 10 challenges of working with industry, based on our experience of working with industry in very close collaboration with a continuous exchange of knowledge and information. The top 10 list is based on a large number of research projects and empirical studies conducted with industrial research partners since 1983. It is concluded that close collaboration is a long-term undertaking and a large investment. The importance of addressing the top 10 challenges is stressed, since they form the basis for a long-term sustainable and successful collaboration between industry and academia.",2013,0, 1007,IT Application Assessment Model for Global Software Development,"In the technology-enabled globalizing world with shrinking margins, IT offshore outsourcing is a well-established practice in global software development. This is in line with the strategy of focusing only on the core businesses of the enterprises. Despite this increasingly popular trend, the initial expectations of cost reduction of offshore outsourcing are not realized.
Typically many hidden costs, risks of transition, learning needs, communications overheads, setup times, ramping-up durations, scope creep, government regulations, etc., are not taken into account in the initial estimation of the relationships. This leads to ineffective value realization of offshore outsourcing, which can be avoided by precision in cost estimates that are sustainable in execution cycles. It is thus imperative to develop a structured, objective and consistent approach to determine the cost and productivity of application offshore outsourcing engagements. The Wipro Offshore Outsourcing Methodology (WQOM), described in this paper, specifies such an approach, taking a holistic view of the end-to-end process. The driver for the methodology has been the focus on building quality into the complete process, starting from pre-sales and continuing through the life cycle of the relationship. This methodology has been developed by incorporating the experience, judgment, intuition and expertise of multiple experts who managed and were part of many successful long-term offshore-outsourcing engagements. It provides guidelines to practitioners and decision makers to estimate the cost of IT application offshore outsourcing, which includes application assessment, and to plan transition and steady-state productivity achievement in a predictable and systematic manner. The core of the methodology is the Wipro Application Assessment Model (WAAM), which leads to a consistent, robust and experience-based estimate of the time required to transition the application offshore and the offshore-onsite resource mix to maintain the application during its life. It facilitates practitioners in recognizing risks and optimal bidding",2006,0, 1008,IT Service Management Case Based Simulation Analysis & Design: Systems Dynamics Approach,"IT service management (ITSM) systems have been widely deployed within firms in the context of firms' attempts to standardize firm IT activities and processes. Within newly developed or established IT systems, ITSM aims to improve systems' responsiveness to unforeseen disturbances. This paper proposes using the study of system dynamics to analyze IT management processes. We use the ITSM/ITIL (IT infrastructure library) model as a key reference in developing a simulation model in terms of a commercial simulation tool, VENSIM. The current research demonstrates the feasibility of applying methods drawn from the study of system dynamics to IT process management. Through analyzing the dynamical characteristics of ITSM processes in a real case, we propose a simulation-based case study concerned with evaluating rational policy under an uncertain environment.",2007,0, 1009,It's more than just use: An exploration of telemedicine use quality,"Many information systems (IS) studies portray use as an indicator of system success. However, ""simply saying that more use will yield more benefits without considering the nature of this use, is clearly insufficient"" [1 p. 16]. Researchers are also urged to address the study context in defining components of IS success. Our research specifies the use quality construct in the context of a mission critical system deployment, namely, the use of medical video conferencing for patient examinations. This type of telemedicine encounter provides an interesting context of system use as people in various roles interact with each other and with technology. We use a multi-method field study to collect and interpret a rich set of data on telemedicine encounters.
We analyze the data from the perspectives of both patients and providers in the encounter. The result of this field study is a socio-technical framework of use quality for telemedicine service encounters in which the individual use quality attributes are identified, discussed, and compared via provider and patient perspectives.",2005,0, 1010,IT-Supported Visualization of Knowledge Community Structures,"In the first part of this article, Communities of Practice are conceptually positioned as a very important and successful element of corporate Knowledge Management. By utilizing IT platforms they enable a direct connection of knowledge workers and the transfer and reuse of tacit expertise to geographically remote business problems. Although current Community Software provides its members with many sophisticated features, facilitators or moderators still lack functionalities to monitor, evaluate and communicate the development of their expert networks. After discussing the requirements of this special target group, this contribution concentrates on electronic discussions and proposes a software system for automatically analyzing the structure and value of Knowledge Communities by extracting available electronic data about their communication network. This includes the entities employee, topic, and document and their many relationships. Insightful structural visualizations based on theories of Network Analysis are introduced. They can be accessed and manipulated in a Management Cockpit to improve the transparency in communities.",2005,0, 1011,IUMELA: A Lightweight Multi-Agent Systems Based Mobile Learning Assistant Using the ABITS Messaging Service,"

University College Dublin has made an unprecedented transition from its once traditional educational metaphor to a modularised education framework, the first of its kind in Ireland. Queries have been raised regarding whether students who are unfamiliar with the concepts of modularisation are capable of making informed decisions that ensure success from specifically tailored module combinations. IUMELA is an intelligent modular-education learning assistant designed using multi-agent systems (MAS) to assist students in their decision-making process. This paper introduces an alternative IUMELA MAS architecture that uses a significantly more lightweight mobile assistant.

",2007,0, 1012,JADE: A software framework for developing multi-agent applications. Lessons learned,"Since a number of years agent technology is considered one of the most innovative technologies for the development of distributed software systems. While not yet a mainstream approach in software engineering at large, a lot of work on agent technology has been done, many research results and applications have been presented, and some software products exists which have moved from the research community to the industrial community. One of these is JADE, a software framework that facilitates development of interoperable intelligent multi-agent systems and that is distributed under an Open Source License. JADE is a very mature product, used by a heterogeneous community of users both in research activities and in industrial applications. This paper presents JADE and its technological components together with a discussion of the possible reasons for its success and lessons learned from the somewhat detached perspective possible nine years after its inception.",2008,0, 1013,Joint Reference Modeling: Collaboration Support through Version Management,"The derivation of specific models from reference models corresponds with the creation of reference model variants. Research on the design of such variant constructions generally assumes an unchangeable stock of reference models. The potential inherent in the management of these variant constructions which reflect the changes in jointly designed reference models through time and, in doing so, their evolutionary development, has not yet been tapped into. The article at hand analyzes this problem and presents a concept for the version management of jointly designed reference models as a solution. The task to be mastered with the proposed approach can be concretized using data structures and system architecture and then prototypically implemented",2007,0, 1014,Joint Registration and Segmentation of Serial Lung CT Images in Microendoscopy Molecular Image-Guided Therapy,"

In lung cancer image-guided therapy, a real-time electromagnetically tracked microendoscopic optical imaging probe is guided to the small lung lesion of interest. The alignment of the pre-operative lung CT images as well as the intra-operative serial images is often an important step to accurately guide and monitor the interventional procedure in the diagnosis and treatment of these small lung lesions. Registering the serial images often relies on correct segmentation of the images; on the other hand, the segmentation results can be further improved by temporal alignment of the serial images. This paper presents a joint serial image registration and segmentation algorithm. In this algorithm, serial images are segmented based on the current deformations, and the deformations among the serial images are iteratively refined based on the updated segmentation results. No temporal smoothness of the deformation fields is enforced, so that the algorithm can tolerate larger or discontinuous temporal changes that often appear during image-guided therapy. Physical procedure models could also be incorporated into our algorithm to better handle the temporal changes of the serial images during intervention. In experiments, we apply the proposed algorithm to align serial lung CT images. Results using both simulated and clinical images show that the new algorithm is more robust compared to the method that only uses deformable registration.

",2008,0, 1015,Justifying the Use of COTS Components within Safety Critical Applications,"The use of COTS software components within safety-critical systems has been suggested as potentially bringing substantial benefits in terms of cost and time savings. However, the success of a COTS-based safety-critical system development depends largely upon systematic COTS selection, evaluation and integration that take into account application specific safety concerns. Due to the lack of such a systematic approach, current practices often make early decisions on the use of COTS software products without adequate consideration of safety, which makes it extremely difficult, or impossible in some cases, to certify the final COTS-based safety-critical system (i.e. inability to establish an acceptable safety case). This thesis defines and demonstrates a coherent approach to COTS selection, evaluation and integration, which works towards final system certification. Within the approach, application specific safety requirements derived for the expected COTS functionality are used as evaluation and selection criteria. Where these requirements cannot be met directly by a candidate COTS component the approach encourages the targeted application of suitably matched mitigation strategies. By addressing safety considerations early and explicitly in the COTS based system development lifecycle this approach facilitates the development of a structured safety case to support system certification. Evaluation of the approach is described through a number of distinct case studies and the results of peer review activity.",2005,0, 1016,Key Research Issues in Grid Workflow Verification and Validation,"In the grid architecture, a grid workflow system is a type of high-level grid middleware which is supposed to support modelling, redesign and execution of large-scale sophisticated e-science and e-business processes in many complex scientific and business applications. To ensure the correctness of grid workflow specification and execution, grid workflow verification and validation must be conducted so that we can identify any violations and consequently take proper action to remove them in time. However, current research on the grid workflow verification and validation is just at the early stage and very few projects focus on them. Therefore, a systematic identification of key research issues in the grid workflow verification and validation field is no doubt helpful and should be presented so that we can be on the right track and reduce unnecessary work as far as possible. Hence, in this paper, we systematically analyse the grid workflow verification and validation and investigate their key research issues. Especially, we identify some important open research points which are not discussed by the current research and hence need further investigation. All these analyses form a big picture for the grid workflow verification and validation.",2006,0, 1017,Kinematic tracking and activity recognition using motion primitives,"We present a method for 3D monocular kinematic pose estimation and activity recognition through the use of dynamical human motion vocabularies. A motion vocabulary is comprised as a set of primitives that each describe the movement dynamics of an activity in a low-dimensional space. Given image observations over time, each primitive is used to infer the pose independently using its expected dynamics in the context of a particle filter. 
Pose estimates from a set of primitives are inferred in parallel and arbitrated to estimate the activity being performed. The approach presented is evaluated through tracking and activity recognition over extended motion trials. The results suggest robustness with respect to multi-activity movement, movement speed, and camera viewpoint.",2006,0, 1018,Knowledge Acquisition in Software Engineering Requires Sharing of Data and Artifacts,"

An important goal of empirical software engineering research is to cumulatively build up knowledge on the basis of our empirical studies, for example, in the form of theories and models (conceptual frameworks). Building useful bodies of knowledge will in general require the combined effort by several research groups over time. To achieve this goal, data, testbeds and artifacts should be shared in the community in an efficient way. There are basically two challenges: (1) How do we encourage researchers to use material provided by others? (2) How do we encourage researchers to make material available to others in an appropriate form? Making material accessible to others may require substantial effort by the creator. How should he or she benefit from such an effort, and how should the likelihood of misuse be reduced to a minimum? At the least, the requester should officially request permission to use the material, credit the original developer with the work involved, and provide feedback on the results of use as well as problems with using the material. There are also issues concerning the protection of data, maintenance of artifacts and collaboration among creators and requestors, etc. A template for a data sharing agreement between the creator and requestor that addresses these issues has been proposed.

",2006,0, 1019,Knowledge Artifacts as Bridges between Theory and Practice: The Clinical Pathway Case,"This paper discusses how Clinical Pathways (CPs) are defined, used and maintained in two hospital settings. A literature review and observational study are combined to illustrate the composite nature of CPs and the different roles they play in different phases of their life-cycle, with respect to the theme of bridging medical knowledge with the related practices by which physicians deal with a specific care problem. We take the case of the CP as a paradigmatic case to stress the urgent need for an integrated approach with the computer-based support of information and knowledge management in rapidly evolving cooperative work settings.",2008,0, 1020,Knowledge Integration in Information Systems Education Through an (Inter)active Platform of Analysis and Modelling Case Studies,"

In this paper we discuss how knowledge integration throughout system analysis, modelling and development courses can be stimulated by giving an overview of our MIRO-project at K.U.Leuven. This includes offering an online knowledge base of all-embracing case studies, structured according to the Zachman framework. Supported by collaborative groupware, students not only get the opportunity to consult and compare solutions for the case studies, but also actively discuss and contribute to alternative solutions. In this Problem Based Learning (PBL)-context, students are able to influence and understand the development of a certain process through interactive computerized animations and demos.

",2006,0, 1021,Knowledge Management in Software Process Improvement,"Software process improvement (SPI) is a long-term journey, which is made comfortable by many means. The most dominant and preferred plan is a knowledge driven methodology with which software development organisations are experimenting. To have a look and feel of knowledge and its management, it has become essential to have a standardised knowledge management tool (KMT) that comprises specifications like-acquisition, representation, sharing and deploying. Although several tools and techniques are available for managing knowledge to solve domain problems, it is felt in the knowledge society that no standard KM tools exist that would facilitate SPI. In this piece of implementation work, the authors outline the features that are deemed significant to implement a KMT that drives the journey of SPI. Four process areas are chosen and four subsystems are identified in covering these process areas. A series of studies conducted among organisations requiring the support of a KMT in making a decisive SPI initiative are also discussed with elaborate and significant results. Implications of this work demands the cooperation of software development companies with the research community in finding a better approach to their improvement program.",2008,0, 1022,Knowledge networking to support medical new product development,"New product development (NPD) in the pharmaceutical industry is very knowledge intensive. Knowledge generated and used during medical NPD processes is fragmented and distributed across various phases and artifacts. Many challenges in medical NPD can be addressed by the integration of this fragmented knowledge. We propose the creation and use of knowledge networks to address these challenges. Based on a case study conducted in a leading pharmaceutical company, we have developed a knowledge framework that represents knowledge fragments that need to be integrated to support medical NPD. We have also developed a prototype system that supports knowledge integration using knowledge networks. We illustrate the capabilities of the system through scenarios drawn from the case study. Qualitative validation of our approach is also presented.",2007,0, 1023,Knowledge sources of innovation studies in Korea: A citation analysis,"AbstractThis paper is an investigation of the knowledge sources of Korean innovation studies using citation analysis, based on a Korean database during 19932004. About two thirds of knowledge has come from foreign sources and 94% of them are from English materials. Research Policy is the most frequently cited journal followed by Harvard Business Review, R&D Management and American Economic Review. An analysis of who cites the most highly cited journal is also included. Neo-Schumpeterians in Korea cite more papers from Research Policy than general researchers, and there is no difference between groups in the year of citation.",2008,0, 1024,Knowledge Support in Software Process Tailoring,"A software process is a set of activities needed to transform a user's requirements into a software system. Using a well-defined process is a widely recognized approach to increasing quality and productivity in software development. Building software processes from scratch each time would create high risks and overhead. Therefore, they are often created by tailoring existing processes and standards. 
Reusing software processes and knowledge embedded in the processes can significantly improve the effectiveness and efficiency of software development. In this research, I investigate whether knowledge can improve the effectiveness and efficiency of process tailoring and what kind of knowledge can help most in process tailoring. Two types of knowledge are examined, i.e., generalized and contextualized knowledge.",2005,0, 1025,Knowledge-Sharing Issues in Experimental Software Engineering,"The issues associated with licensing and certification of software engineers are difficult. At present, there is no agreed-to body of knowledge on which to base certification. Some state legislatures are attempting to regulate the practice of software engineering without adequate understanding of the field. As a result of safety-critical software disasters, some professionals believe that licensing or certification is inevitable, so the software community had better figure out how to do it before someone else does it for them. In this paper, we survey the state of the practice of licensing and certification in other professions, identify the issues that might be encountered in attempting to license and certify software engineers, and suggest possible actions that could be taken by the profession. We discuss the implications of licensing or certification for education",1997,0, 1026,KOntoR: An Ontology-enabled Approach to Software Reuse,"Design with reuse has been accepted as a cost-effective approach to software development. Software reuse covers the process of identification, representation, retrieval, adaptation, and integration of reusable software components. In this paper, we propose a semi-formal approach to software reuse. The approach consists of the following major steps: (1) software components are annotated with formal information, (2) the software components are then translated into predicate transition nets, and (3) consistency checking of the reusable and new components is carried out using the reachability analysis technique of predicate transition (PrT) nets. The approach is demonstrated through an example",1999,0, 1027,Language Patterns in the Learning of Strategies from Negotiation Texts,"Following Kiernan and Aizawa [1], and Thornton and Houser [2] among others, I have in a separate paper [3] explored the use of cell phones and SMS in the classroom as a means of exploiting what really constitutes an immediately available form of ubiquitous computing, to facilitate second language acquisition. In order to gain information about Korean college students prior to conducting research specifically addressing their language learning strategies used in accessing online resources [4], [5], I conducted a preliminary survey, the results of which are presented here. I surveyed their use of cell phones, electronic dictionaries, SMS, email, computers and the Internet, investigating their use of their target second language of English (L2), and questioning whether they used such resources for L2 learning, and to what extent they did so in the target L2 language of English. It is intended to refine and repeat this survey in forthcoming semesters.",2007,0, 1028,Leadership by example: A perspective on the influence of Barry Boehm.,"Over the course of the past 10 years, working with Dr.
Boehm on various projects has both influenced the software engineering program at Mississippi State University as well as provided growth opportunities and expansion of the MSU ABET accredited software engineering undergraduate degree program. Looking back over the key interactions with him, it is apparent that he leads (and influences) by his example, his work ethic, and his intellect in the software engineering field. This paper provides insights into his specific influences through collaborative work with another university.",2007,0, 1029,Learner's Tailoring E-Learning System on the Item Revision Difficulty Using PetriNet,"E-learning models are attempts to develop frameworks to address the concerns of the learner and the challenges presented by the technology so that online learning can take place effectively. So it usually used the item difficulty of the item analysis method. But the item guessing factor in learning results has to be considered to apply the relative item difficulty more precisely. So, for an e-Learning system to support the learner considering learning grade, it needs an item revision difficulty which considers the item guessing factor. In this paper, I designed and embodied the learner's tailoring e-learning system on the item revision difficulty. For an efficient design, I use PetriNet and UML modeling. In building this system, I am able to support a variety of learning step choices to learners so that the learner can work in a flexible learning environment.",2006,0, 1030,Learner-centered web-based instruction in software engineering.,"During the past several years, there has been a growing demand for skilled software engineers, a demand that, unfortunately, has not been satisfied, resulting in a variety of problems for the discipline. This work presents a new approach called learner-centered Web-based instruction in software engineering that can be used to educate skilled engineers. The approach is based on three ideas. First, software engineering education must become more realistic. Second, software engineering education has to move closer to the learner. Finally, it must take advantage of the Web since this technology has the power of being a unique tool for implementing change in education. This work reports and discusses the results from the evaluation of the approach.",2005,0, 1031,Learning and Recognizing the Places We Go,"This paper investigates the state of programmed informal learning (e.g., team competitions, internships) in engineering education, the relevant research and available assessment instruments. Our purpose is to synthesize the existing informal learning research in engineering education for the engineering community, which should subsequently lead to the development of improved programs and learning experiences for engineering students. We also draw on the research performed in science education to identify potential outcomes for engineering education, including: improved student attitudes towards engineering, development of an engineering identity, knowledge of engineering practices, and broadened participation in engineering. Last, we provide future direction for informal learning research in engineering education.",2011,0, 1032,Learning to Integrate Web Catalogs with Conceptual Relationships in Hierarchical Thesaurus,"

Web catalog integration has been addressed as an important issue in current digital content management. Past studies have shown that exploiting a flattened structure with auxiliary information extracted from the source catalog can improve the integration results. Although earlier studies have also shown that exploiting a hierarchical structure in classification may bring better advantages, the effectiveness has not been verified in catalog integration. In this paper, we propose an enhanced catalog integration (ECI) approach to extract the conceptual relationships from the hierarchical Web thesaurus and further improve the accuracy of Web catalog integration. We have conducted experiments of real-world catalog integration with both a flattened structure and a hierarchical structure in the destination catalog. The results show that our ECI scheme effectively boosts the integration accuracy of both the flattened scheme and the hierarchical scheme with the advanced Support Vector Machine (SVM) classifiers.

",2006,0, 1033,Lessons from applying the systematic literature review process within the software engineering domain,"A consequence of the growing number of empirical studies in software engineering is the need to adopt systematic approaches to assessing and aggregating research outcomes in order to provide a balanced and objective summary of research evidence for a particular topic. The paper reports experiences with applying one such approach, the practice of systematic literature review, to the published studies relevant to topics within the software engineering domain. The systematic literature review process is summarised, a number of reviews being undertaken by the authors and others are described and some lessons about the applicability of this practice to software engineering are extracted.The basic systematic literature review process seems appropriate to software engineering and the preparation and validation of a review protocol in advance of a review activity is especially valuable. The paper highlights areas where some adaptation of the process to accommodate the domain-specific characteristics of software engineering is needed as well as areas where improvements to current software engineering infrastructure and practices would enhance its applicability. In particular, infrastructure support provided by software engineering indexing databases is inadequate. Also, the quality of abstracts is poor; it is usually not possible to judge the relevance of a study from a review of the abstract alone.",2007,0, 1034,Lessons Learned from Developing a Dynamic OCL Constraint Enforcement Tool for Java,"

Analysis and design by contract allows the definition of a formal agreement between a class and its clients, expressing each party's rights and obligations. Contracts written in the Object Constraint Language (OCL) are known to be a useful technique to specify the precondition and postcondition of operations and class invariants in a UML context, making the definition of object-oriented analysis or design elements more precise while also helping in testing and debugging. In this article, we report on the experiences with the development of ocl2j, a tool that automatically instruments OCL constraints in Java programs using aspect-oriented programming (AOP). The approach strives for automatic and efficient generation of contract code, and a non-intrusive instrumentation technique. A summary of our approach is given along with the results of an initial case study, the discussion of encountered problems, and the necessary future work to resolve the encountered issues.

",2005,0, 1035,Let?s Get Ready to Rumble: Crossover Versus Mutation Head to Head,"AbstractThis paper analyzes the relative advantages between crossover and mutation on a class of deterministic and stochastic additively separable problems. This study assumes that the recombination and mutation operators have the knowledge of the building blocks (BBs) and effectively exchange or search among competing BBs. Facetwise models of convergence time and population sizing have been used to determine the scalability of each algorithm. The analysis shows that for additively separable deterministic problems, the BB-wise mutation is more efficient than crossover, while the crossover outperforms the mutation on additively separable problems perturbed with additive Gaussian noise. The results show that the speed-up of using BB-wise mutation on deterministic problems is ${\mathcal{O}}(\sqrt{k}\log m)$, where k is the BB size, and m is the number of BBs. Likewise, the speed-up of using crossover on stochastic problems with fixed noise variance is ${\mathcal{O}}(m\sqrt{k}/\log m)$.",2004,0, 1036,Leveraging IS theory by exploiting the isomorphism between different research areas,"

The discipline of Information Systems is sometimes accused of being heavy on practical technology but light on conceptual theory. Identifying 'isomorphisms' between specialist research areas in other disciplines (especially mathematics) has produced spectacular results. This paper suggests that isomorphic thinking could also benefit IS research, in particular by leveraging existing frameworks and applying them outside their original context in isomorphic IS research areas.

The paper briefly defines the concept of isomorphism and illustrates the principle of isomorphic mapping using some well-known IS frameworks and theories which originate from other disciplines. This is followed by a practical case study on how a suggested framework for evaluating models could be applied almost literally to seemingly unrelated research areas such as website analysis. This case study exposes the underlying similarities ('isomorphism') between these research fields. The article concludes with some additional suggestions on how isomorphic thinking could advance research in other IS areas.

",2004,0, 1037,Leveraging lessons learned for distributed projects through Communities of Practice,"Stakeholders in the technology market understand that active management of past project lessons learned is the basis for promoting improvements to organization processes assets. However implementing and deploying an effective and easy manner to collect and share tacit knowledge throughout organizations is not trivial, especially for remote distributed ones. In order to make this process easier, community of practice (CoP) appears as one way to manage tacit knowledge in distributed organizations. It empowers collaborators to resolve technical issues through collaboration and participation in virtual communities. These communities would be responsible to review lessons, allow discussions regarding the subject or problem to find out the root causes and after analyze it and share tacit knowledge across collaborators. This industry report will describe an experience in the software industry that is following CoP definitions to share tacit lessons across global units",2006,0, 1038,"Libraries, social software and distance learners: Blog it, tag it, share it!","This paper describes a recent project funded by the University of London to explore how social software or Web 2.0 technologies can enhance the use of libraries by distance learners. LASSIE (Libraries And Social Software In Education) involves a team of librarians, learning technologists and archivists. The project first conducted an extensive literature review, which is available online. The literature review provides an overview of key social software and explores the current implementation of these tools by libraries. It also considers the key issues in supporting distance learners? use of libraries and whether social software might provide solutions. The literature review was followed by several case studies to explore specific types of social software in practice. These included the use of social bookmarking for sharing resources, social software and online reading lists, blogging in the library community, the use of social networking sites and podcasting for information literacy support. LASSIE will be completed in December 2007 and a final report with results from the case studies and an updated literature review will be made available from the project website. One of the successes of the project has been to establish a project blog, which provides the project team with an opportunity to reflect on progress, but also to gather opinions from others in the field.",2007,0, 1039,Lightweight reference affinity analysis,"This note presents a method for analysis of random reference tracking in feedback systems with saturating actuators. The development is motivated by the frequency domain approach to linear systems, where the bandwidth and resonance peak of the sensitivity function are used to predict the quality of step reference tracking. Similarly, based on the so-called saturating random sensitivity function, we introduce tracking quality indicators and show that they can be used to determine both the quality of random reference tracking and the nature of track loss under actuator saturation. The shortcomings of the method are also discussed.",2005,0, 1040,Links for a Human-Centered Science of Design: Integrated Design Knowledge Environments for a Software Development Process,"Based on extensive empirical observation of design activities that might be supported by a knowledge repository, we report conclusions from three case studies. 
Seeking to improve research infrastructure necessary to cultivate a ""science of design"" within human-computer interaction, we focus on identifying essential activities that help proceduralize the key requirements of knowledge management within a software development effort. From related literature, we selected five focus points for our analyses, which in turn, guided development of our repository in terms of how design knowledge is used, reused, and harvested through system tools. The case studies successively validate potential activities, while exposing breakdowns in process or practice that show promise of being resolved with additional tool features highlighted in other cases. Emerging largely from our case studies, we present general guidelines and tradeoffs for developing a design knowledge repository, as well as directions for further empirical study.",2005,0, 1041,Local Flow Betweenness Centrality for Clustering Community Graphs,"

The problem of information flow is studied to identify de facto communities of practice from tacit knowledge sources that reflect the underlying community structure, using a collection of instant message logs. We characterize and model the community detection problem using a combination of graph theory and ideas of centrality from social network analysis. We propose, validate, and develop a novel algorithm to detect communities based on computation of the Local Flow Betweenness Centrality. Using LFBC, we model the weights on the edges in the graph so we can extract communities. We also present how to compute efficiently LFBC on relevant edges without having to recalculate the measure for each edge in the graph during the process. We validate our algorithms on a corpus of instant messages that we call MLog. Our results demonstrate that MLogs are a useful source for community detection that can augment the study of collaborative behavior.

",2005,0, 1042,Local Information and Communication Infrastructures: An Introduction,"This standard defines a PHY and MAC layer for short-range optical wireless communications using visible light in optically transparent media. The visible light spectrum extends from 380 to 780 nm in wavelength. The standard is capable of delivering data rates sufficient to support audio and video multimedia services and also considers mobility of the visible link, compatibility with visible-light infrastructures, impairments due to noise and interference from sources like ambient light and a MAC layer that accommodates visible links. The standard will adhere to any applicable eye safety regulations",2010,0, 1043,Locality phase prediction,"A number of improved algorithms for phase prediction and frame interpolation in the context of sinusoidal speech coding are presented. A minimum-variance sinusoidal phase estimation scheme is proposed. It is shown that reasonably accurate estimates for short-time sinusoidal phases corresponding to voiced frames can be obtained. In addition, improved algorithms for interpolation of sine wave parameters are presented which result in further reduction in bit rate while preserving the subjective equality of the reproduced speech at low bit rates. The performance of the proposed algorithms were evaluated on a large speech database and the results of statistical analysis are provided. The proposed algorithms were successfully integrated into a 2.4 kbps sinusoidal coder, where speech of good quality intelligibility, and naturalness was obtained",2000,0, 1044,Locality-Based Server Profiling for Intrusion Detection,"In recent years, web applications have become tremendously popular. However, vulnerabilities are pervasive resulting in exposure of organizations and firms to a wide array of risks. SQL injection attacks, which has been ranked at the top in web application attack mechanisms used by hackers can potentially result in unauthorized access to confidential information stored in a backend database and the hackers can take advantages due to flawed design, improper coding practices, improper validations of user input, configuration errors, or other weaknesses in the infrastructure. Whereas using cross-site scripting techniques, miscreants can hijack Web sessions, and craft credible phishing sites. In this paper we have made a survey on different techniques to prevent SQLi and XSS attacks and we proposed a solution to detect and prevent against the malicious attacks over the developer's Web Application written in programming languages like PHP, ASP.NET and JSP also we have created an API (Application Programming Interface) in native language through which transactions and interactions are sent to IDS Server through Inter Server Communication Mechanism. This IDS Server which is developed from PHPIDS, a purely PHP based intrusion detection system and has a system architecture meant only for PHP application detects and prevents attacks like SQLi (SQL Injection) and XSS(Cross-site scripting), LFI(Local File Inclusion), and RFE(Remote File Execution) and returns back the result to the Web Application and logs the intrusions. In addition to this behavioural pattern of Web Logs is analysed using WAPT algorithm (Web Access Pattern Tree), which helps in recording the activity of the web application and examines any suspicious behaviour, uncommon patterns of behaviour over a period of time, and it also monitors the increased activity and known attack variants. 
Based on this, a report is generated dynamically using P-Chart which can help the Website owner to increase the security measures, and also used to improve the quality of the Web Application.",2011,0, 1045,Localized Flooding Backbone Construction for Location Privacy in Sensor Networks,"Source and destination location privacy is a challenging and important problem in sensor networks. Nevertheless, privacy preserving communication in sensor networks is still a virgin land. In this paper, we propose to protect location privacy via a flooding backbone, which is modeled by a minimum connected dominating set (MCDS) in unit-disk graphs. We design an efficient and localized algorithm to compute an approximate MCDS. Theoretical analysis indicates that our algorithm generates a connected dominating set (CDS) with a size at most 148 · opt + 37, where opt is the cardinality of a MCDS. To our best knowledge, this algorithm is the first localized algorithm with a constant performance ratio for CDS construction in unit-disk graphs.",2007,0, 1046,Longitudinal Studies in Evidence-Based Software Engineering,"Philips Laboratories has developed HVDEV, a procedural language layout generator for compiling high voltage MOS device layouts from behavioral specifications. HVDEV is analyzed as a case study in silicon compilation software engineering. The paper formulates a comparative analysis to conventional layout design accounting for software development and maintenance. Critical factors in planning silicon compilation software development are identified.",1987,0, 1047,Looking at human-computer interface design: Effects of ethnicity in computer agents,This paper presents empirical research findings that identify demonstrated attitude changes in computer users associated with their receiving advice from personified computer agents of two different ethnicities: African American and European American. Our findings indicate that computer users are more likely to change their actions (demonstrating underlying attitudes) based on input from a computer agent whose ethnicity is similar to theirs. These findings directly impact computer agent design in many fields.,2007,0, 1048,"Lost in translation: a critical analysis of actors, artifacts, agendas, and arenas in participatory design","As computer technologies start to permeate the everyday activities of a continuously growing population, social and technical as well as political and legal issues will surface. Participatory design is asked to take a more critical view of participation, design, technology, and the arenas in which the network of actors and artifacts dialectically construct the social orders. This paper has a much more modest aim: to contribute to the discussion of participation and design in part by a more in-depth understanding of the translation problem among different actors who directly participate in participatory design activities. This problem takes place when different actors come to participate in the design activities and when they are to decide whether to adopt and use a designed artifact. By analyzing a multi-year-long effort to understand and provide social and technical means for the use of educational computer technologies in special education, this paper aims to shed new light on the understanding of this problem. The arenas of participation framework is employed to frame the different social orders in which actors act, carry out their work practices, participate in design processes, and ultimately make use of this artifact. 
While fundamental to the democratization of the design of sociotechnical solutions, participatory design may not be sufficient to reveal all sociopolitical issues of work practices that surface in its adoption and use. It is necessary to take into account the different arenas in which their design and use are carried out.",2004,0, 1049,M(in)BASE: An upward-tailorable process wrapper framework for identifying and avoiding model clashes.,"

MBASE (Model-Based [System] Architecting & Software Engineering) is a framework that can be wrapped around any software development process to deal with project failures caused by “model clashes.” Existing MBASE guidelines have all been designed to cover large classes of projects, and are intended to be tailored down, based on risk considerations, to the project at hand. Experience has shown that tailoring down is quite hard to learn and apply; based upon this observation, we are developing M(in)BASE, a minimal version of MBASE intended to be tailored up. In this paper, we review the fundamentals of MBASE, discuss, in detail, the reasons for creating M(in)BASE, and describe M(in)BASE.

",2005,0, 1050,Maintaining Industrial Competence,"Abstract:In knowledge economy time, the underlying reason of economy development falling behind of old northeast industrial zones is its low knowledge competence. Based on knowledge power analysis of three northeast provinces and relational research achievements review of regional knowledge competence, knowledge competence appraisal index system is constructed from derivation and ontology, and empirical analysis of three northeast provinces' knowledge competence from 2004 to 2006 is done by fuzzy integral method. Conclusion is draw that integral knowledge competence of three northeast provinces is comparative low and knowledge competence of Liaoning province is a little higher.",2011,0, 1051,Maize Production Emulation System Based on Cooperative Models,"Based on the maize ecophysiological characteristics, the maize developping process-based cooperative models including growth model, developmental phase models, water balance model and nitrogen balance model etc. was built combined with the basic data such as variety characteristics, weather data, soil level and cultivation management with the technology support of system engineering method, crop simulation and computer. On the basis of cooperative models, this paper further constructed Maize Production Emulation System (MPES) with several additional functions such as determining variety characteristic parameters, deciding the planting design, simulating maize phenology stages and production features, warning of the nitrogen leaching in advance, simulating the water and nitrogen deficit degree and maize growth three-dimensional display. The system reproduces the maize production process in digital form. MPES was test through actual experiment, and the results verified its strong mechanism and prediction performance as well as its universal adaptation.",2008,0, 1052,Making Arithmetic Accessible,"We present annotations needed for handwritten archive document retrieval by content. We propose two complementary ways of producing those annotations: automatically by using document image analysis and collectively by using Internet and a manual input by users. A platform for managing those annotations is presented as well as examples of automatic annotations on civil status registers, military forms (tested on 60000 pages) and naturalization decrees, using a generic document recognition method. Examples of collective annotations built on automatic annotations are also given. This platform will be officially open to public on Internet and inside the new building of the Archives departementales des Yvelines in December 2003. 1200000 images of civil status registers will be available for collective annotation as well as 35000 pages of military forms with automatic annotation of handwritten names.",2004,0, 1053,Making Manufacturing Changes Less Disruptive: Agent-Driven Integration,"AbstractThis paper presents some results of our recent investigation on how to address changes in manufacturing environments. The results presented here are based on our recent visits to manufacturing plants (in order to understand current industria practice) and a comprehensive literature review. It has become apparent that agents and multi-agents based technologies have the power and flexibility to deal with the shop dynamics. 
This paper also discusses research opportunities and challenges in this area, presents our recent research work in developing agent-based technologies to streamline and coordinate design and production activities within a manufacturing enterprise, and between the enterprise and its suppliers.",2006,0, 1054,MAKO-PM: Just-In-Time Process Model,"Artifact is the key business entity in the evolution of business process. Artifact-centric business process management is a typical representative of the data-centric business process management. There are many artifacts during the execution of business process system. In a real world application, such as restaurant process, we should check every artifact's correctness. In this paper, we explore the model of artifact-centric business process system from the perspective of knowledge popularization through introducing description logics to modeling, analysis and prove the bisimilar relation between two different system models. Then, we do verification of artifact through finding a pruning of raw system. At last, we apply such system model and verification to restaurant process.",2014,0, 1055,Malware Phylogeny Generation using Permutations of Code,"AbstractMalicious programs, such as viruses and worms, are frequently related to previous programs through evolutionary relationships. Discovering those relationships and constructing a phylogeny model is expected to be helpful for analyzing new malware and for establishing a principled naming scheme. Matching permutations of code may help build better models in cases where malware evolution does not keep things in the same order. We describe methods for constructing phylogeny models that uses features called n-perms to match possibly permuted codes. An experiment was performed to compare the relative effectiveness of vector similarity measures using n-perms and n-grams when comparing permuted variants of programs. The similarity measures using n-perms maintained a greater separation between the similarity scores of permuted families of specimens versus unrelated specimens. A subsequent study using a tree generated through n-perms suggests that phylogeny models based on n-perms may help forensic analysts investigate new specimens, and assist in reconciling malware naming inconsistenciesAbstraktkodliv programy, jako viry a ervy (malware), jsou zdka psny narychlo, jen tak. Obvykle jsou vsledkem svch evolunch vztah. Zjitnm tchto vztah a tvorby v pesn fylogenezi se pedpokld uiten pomoc v analze novho malware a ve vytvoen zsad pojmenovacho schmatu. Porovnvn permutac kdu uvnit malware m e nabdnout vhody pro fylogenn generovn, protoe evolun kroky implementovan autory malware nemohou uchovat posloupnosti ve sdlenm kdu. Popisujeme rodinu fylogennch genertor, kter provdj clustering pomoc PQ stromov zaloench extraknch vlastnost. Byl vykonn experiment v nm vstup stromu z tchto genertor byl vyhodnocen vzhledem k fylogenezm generovanm pomoc vench n-gram. Vsledky ukazuj vhody pstupu zaloenho na permutacch ve fylogennm generovn malware.RsumLes codes malveillants, tels que les virus et les vers, sont rarement crits de zro; en consquence, il existe des relations de nature volutive entre ces diffrents codes. Etablir ces relations et construire une phylognie prcise permet desprer une meilleure capacit danalyse de nouveaux codes malveillants et de disposer dune mthode de fait de nommage de ces codes. 
La concordance de permutations de code avec des parties de codes malveillants sont susceptibles dtre trs intressante dans ltablissement dune phylognie, dans la mesure o les tapes volutives ralises par les auteurs de codes malveillants ne conservent gnralement pas lordre des instructions prsentes dans le code commun. Nous dcrivons ici une famille de gnrateurs phylogntiques ralisant des regroupements laide de caractristiques extraites darbres PQ. Une exprience a t ralise, dans laquelle larbre produit par ces gnrateurs est valu dune part en le comparant avec les classificiations de rfrences utilises par les antivirus par scannage, et dautre part en le comparant aux phylognies produites laide de polygrammes de taille n (n-grammes), pondrs. Les rsultats dmontrent lintrt de lapproche utilisant les permutations dans la gnration phylogntique des codes malveillants.AbstraktiHaitalliset ohjelmat, kuten tietokonevirukset ja -madot, kirjoitetaan harvoin alusta alkaen. Tmn seurauksena niist on lydettviss evoluution kaltaista samankaltaisuutta. Samankaltaisuuksien lytmisell sek rakentamalla tarkka evoluutioon perustuva malli voidaan helpottaa uusien haitallisten ohjelmien analysointia sek toteuttaa nimemiskytntj. Permutaatioiden etsiminen koodista saattaa antaa etuja evoluutiomallin muodostamiseen, koska haitallisten ohjelmien kirjoittajien evolutionriset askeleet eivt vlttmtt silyt jaksoittaisuutta ohjelmakoodissa. Kuvaamme joukon evoluutiomallin muodostajia, jotka toteuttavat klusterionnin kyttmll PQ-puuhun perustuvia ominaisuuksia. Teimme mys kokeen, jossa puun tulosjoukkoa verrattiin virustentorjuntaohjelman muodostamaan viitejoukkoon sek evoluutiomalleihin, jotka oli muodostettu painotetuilla n-grammeilla. Tulokset viittaavat siihen, ett permutaatioon perustuvaa lhestymistapaa voidaan menestyksekksti kytt evoluutiomallien muodostamineen.ZusammenfassungMalizise Programme, wie z.B. Viren und Wrmer, werden nur in den seltensten Fllen komplett neu geschrieben; als Ergebnis knnen zwischen verschiedenen malizisen Codes Abhngigkeiten gefunden werden.Im Hinblick auf Klassifizierung und wissenschaftlichen Aufarbeitung neuer maliziser Codes kann es sehr hilfreich erweisen, Abhngigkeiten zu bestehenden malizisen Codes darzulegen und somit einen Stammbaum zu erstellen.In dem Artikel wird u.a. auf moderne Anstze innerhalb der Staumbaumgenerierung anhand ausgewhlter Win32 Viren eingegangen.AstrattoI programmi maligni, quali virus e worm, sono raramente scritti da zero; questo significa che vi sono delle relazioni di evoluzione tra di loro. Scoprire queste relazioni e costruire una filogenia accurata puoaiutare sia nellanalisi di nuovi programmi di questo tipo, sia per stabilire una nomenclatura avente una base solida. Cercare permutazioni di codice tra vari programmi puo dare un vantaggio per la generazione delle filogenie, dal momento che i passaggi evolutivi implementati dagli autori possono non aver preservato la sequenzialita del codice originario. In questo articolo descriviamo una famiglia di generatori di filogenie che effettuano clustering usando feature basate su alberi PQ. In un esperimento lalbero di output dei generatori viene confrontato con una classificazione di rifetimento ottenuta da un programma anti-virus, e con delle filogenie generate usando n-grammi pesati. 
I risultati indicano i risultati positivi dellapproccio basato su permutazioni nella generazione delle filogenie del malware.",2005,0, 1056,"Management competences, not tools and techniques: A grounded examination of software project management at WM-data","Traditional software project management theory often focuses on desk-based development of software and algorithms, much in line with the traditions of the classical project management and software engineering. This can be described as a tools and techniques perspective, which assumes that software project management success is dependent on having the right instruments available, rather than on the individual qualities of the project manager or the cumulative qualities and skills of the software organisation. Surprisingly, little is known about how (or whether) these tools techniques are used in practice. This study, in contrast, uses a qualitative grounded theory approach to develop the basis for an alternative theoretical perspective: that of competence. A competence approach to understanding software project management places the responsibility for success firmly on the shoulders of the people involved, project members, project leaders, managers. The competence approach is developed through an investigation of the experiences of project managers in a medium sized software development company (WM-data) in Denmark. Starting with a simple model relating project conditions, project management competences and desired project outcomes, we collected data through interviews, focus groups and one large plenary meeting with most of the company's project managers. Data analysis employed content analysis for concept (variable) development and causal mapping to trace relationships between variables. In this way we were able to build up a picture of the competences project managers use in their daily work at WM-data, which we argue is also partly generalisable to theory. The discrepancy between the two perspectives is discussed, particularly in regard to the current orientation of the software engineering field. The study provides many methodological and theoretical starting points for researchers wishing to develop a more detailed competence perspective of software project managers' work.",2007,0, 1057,Management of Globally Distributed Component-based Software Development,"With the global distribution of scientific and software engineering skills and with the need to foster multidisciplinary research collaboration across organisations result in teams dispersed separated by time and distance. However to attain the potential benefits of such collaboration, there is a critical need for a better management of communication, knowledge and co-ordination across distributed teams. The importance of these factors is becoming increasingly known to organisations requiring them to develop methods and enabling mechanisms in need for more successful and efficient collaboration outcomes. This paper discusses and emphasises the importance of managing these factors in distributed software engineering projects based on experiences drawn from an international scientific research and software engineering project (ePCRN). It presents their impact on the collaborative process and how they may hinder the progress of the software development process. 
It also presents the methods and mechanisms used in the project to address some of these factors.",2009,0, 1058,Managing a New Computer Device Development in a Creative ISO 9001 Certified Company: A Case Study,"This paper describes the findings of a case study that explores the micro level factors surrounding the processes of creativity and process management in a creative organization. The paper adopts an interpretive approach, which involves the collection and analysis of qualitative data in an ISO certified organization. The paper argues that structured process management provides a framework for organizations. However, we also observed that the attitude which surrounds process management is diverse inside an organization. Additionally, this framework results in positive effects as well as certain constraints for organizations. These constraints in turn affect its processes and innovations. Both intended and ingenuous actions occur as a response to these constraints. The creative potential of an organization helps to overcome the given constraints of structured process management. Therefore the paper claims that within a creative business environment such creative potential is essential",2007,0, 1059,Managing Knowledge Assets for NPD Performance Improvement: Results of an Action Research Project,"AbstractThis paper explores the fundamental issue of how Knowledge Management (KM) initiatives impact on business performance. Reflecting on the management literature enabled the definition of a conceptual background which has been tested and developed by an action research project. Drawing on the results of this project the paper proposes a framework to support managers in defining, planning, implementing and evaluating KM initiatives for performance improvement.",2004,0, 1060,Managing Large Repositories of Natural Language Requirements,"AbstractAn increasing number of market and technology driven software development companies face the challenge of managing an enormous amount of requirements written in natural language. As requirements arrive at high pace, the requirements repository easily deteriorates, impeding customer feedback and well-founded decisions for future product releases. In this chapter we introduce a linguistic engineering approach in support of large-scale requirements management. We present three case studies, encompassing different requirements management processes, where our approach has been evaluated. We also discuss the role of natural language requirements and present a survey of research aimed at giving support in the engineering and management of natural language requirements.",2005,0, 1061,Managing Non-Technical Requirements in COTS Components Selection,"The selection of COTS components is made not only by an analysis of their technical quality but also (and sometimes mostly) by considering how they fulfill those non-technical requirements considered relevant, which refer to licensing, reputation, and similar issues. In this paper we present an approach for managing nontechnical requirements during COTS selection. 
The proposal is based on extending the ISO/IEC 9126-1 catalogue of quality factors by adding factors related to non-technical issues, obtaining a cohesive and comprehensive framework for managing requirements during selection",2006,0, 1062,Managing Software Performance in the Globally Distributed Software Development Paradigm,"The information technology (IT) industry continues to lose close to GBP 45 billion each year as a result of underperforming applications. Our observations, while troubleshooting a number of projects on performance related issues, have been that the root cause for most of these problems lies in shortcomings at the requirements engineering, architecture and design or system integration testing phases of the software development lifecycle (SDLC). We attribute this to a lack of awareness on the basic principles of performance engineering in terms of the activities that need to be performed in this context and when and how in the SDLC should these be done. This problem is particularly accentuated in projects executed using the globally distributed software development model owing to the geographic dispersion of the development teams. This paper proposes an experience based methodology on how to manage the performance of an application that is developed under this radically new development paradigm",2006,0, 1063,Managing the business of software product line: An empirical investigation of key business factors,"Business has been highlighted as one of the critical dimensions of software product line engineering. This paper's main contribution is to increase the understanding of the influence of key business factors by showing empirically that they play an imperative role in managing a successful software product line. A quantitative survey of software organizations currently involved in the business of developing software product lines over a wide range of operations, including consumer electronics, telecommunications, avionics, and information technology, was designed to test the conceptual model and hypotheses of the study. This is the first study to demonstrate the relationships between the key business factors and software product lines. The results provide evidence that organizations in the business of software product line development have to cope with multiple key business factors to improve the overall performance of the business, in addition to their efforts in software development. The conclusions of this investigation reinforce current perceptions of the significance of key business factors in successful software product line business.",2007,0, 1064,Mastering Dual-Shore Development – The Tools and Materials Approach Adapted to Agile Offshoring,"

Software development in offshoring settings with distributed teams presents particular challenges for all participants. Process models that work well for conventional projects may have to be adapted. In this paper we present case-study-reinforced advice on how to extend the Tools & Materials approach - a well-established communication-centered agile design and development approach - to the field of dual-shore development in offshoring projects. We show how communication challenges can be tackled with common guiding and design metaphors, architecture-centric development, task assignments with component tasks and extensive quality assurance measures.

",2007,0, 1065,Maximising the information gained from a study of static analysis technologies for concurrent software,"

The results of empirical studies in Software Engineering are limited to particular contexts, difficult to generalise and the studies themselves are expensive to perform. Despite these problems, empirical studies can be made effective and they are important to both researchers and practitioners. The key to their effectiveness lies in the maximisation of the information that can be gained by examining and replicating existing studies and using power analyses for an accurate minimum sample size. This approach was applied in a controlled experiment examining the combination of automated static analysis tools and code inspection in the context of the verification and validation (V&V) of concurrent Java components. The paper presents the results of this controlled experiment and shows that the combination of automated static analysis and code inspection is cost-effective. Throughout the experiment a strategy to maximise the information gained from the experiment was used. As a result, despite the size of the study, conclusive results were obtained, contributing to the research on V&V technology evaluation.

",2007,0, 1066,Maximising the information gained from an experimental analysis of code inspection and static analysis for concurrent java components,"The results of empirical studies are limited to particular contexts, difficult to generalise and the studies themselves are expensive to perform. Despite these problems, empirical studies in software engineering can be made effective and they are important to both researchers and practitioners. The key to their effectiveness lies in the maximisation of the information that can be gained by examining existing studies, conducting power analyses for an accurate minimum sample size and benefiting from previous studies through replication. This approach was applied in a controlled experiment examining the combination of automated static analysis tools and code inspection in the context of verification and validation (V&V) of concurrent Java components. The combination of these V&V technologies was shown to be cost-effective despite the size of the study, which thus contributes to research in V&V technology evaluation.",2006,0, 1067,MDA Approach in Real-Time Systems Development with Ada 2005,"Over the years, number of design methodologies were developed. One of the state-of-the-art modeling approaches is Model Driven Architecture. This thesis is an attempt to utilize the MDA in a specific and complex domain ? real-time systems development. In MDA framework there are three levels of abstraction: computation independent, platform independent and platform specific. The target environment of the method presented in the thesis is Ada 2005 programming language which extended the old version of the language with several new object-oriented features making it suitable for using with the MDA. Application of the MDA in real-time systems domain targeted towards Ada 2005 implementation constitutes a new design method which benefits from the MDA, UML and Ada 2005 advantages. The thesis starts with presentation of the complexity of the real-time systems domain. A few real-time domain aspects are chosen as a main area for elaborating the design method. The utilizes UML Profile for Schedulability, Performance and Time for defining platform independent model. Additionally it provides its extension ? the Ada UML profile ? which constitutes the platform specific model. This is followed by specification of transformations between platform independent and specific model. The specification is used as a base for implementation of the transformations. Guidelines for code generation form the Ada UML profile are also provided. Finally, the thesis describes how the transformations can be implemented in Telelogic TAU tool.",2007,0, 1068,Measures to Detect Word Substitution in Intercepted Communication,"

Those who want to conceal the content of their communications can do so by replacing words that might trigger attention by other words or locutions that seem more ordinary. We address the problem of discovering such substitutions when the original and substitute words have the same natural frequency. We construct a number of measures, all of which search for local discontinuities in properties such as string and bag-of-words frequency. Each of these measures individually is a weak detector. However, we show that combining them produces a detector that is reasonably effective.

",2006,0, 1069,Measuring and Comparing the Adoption of Software Process Practices in the Software Product Industry,"

Compatibility of agile methods and CMMI has been of interest for the software engineering community, but empirical evidence beyond case studies is scarce, which may be attributed to the lack of validated measurement scales for survey studies. In this study, we construct and validate a set of Rasch scales for measuring process maturity and use of agile methods. Using survey data from 86 small and medium-sized software product firms, we find that the use of agile methods and the maturity level of the firm are complementary in this sample. In addition to providing initial survey evidence of the compatibility of agile methods and process maturity, our study provides a set of validated scales that can be further refined and used in later survey studies.

",2008,0, 1070,Measuring Cohesion and Coupling of Object-Oriented Systems - Derivation and Mutual Study of Cohesion and Coupling,"Cohesion and coupling are considered amongst the most important properties to evaluate the quality of a design. In the context of OO software development, cohesion means relatedness of the public functionality of a class whereas coupling stands for the degree of dependence of a class on other classes in OO system. In this thesis, a new metric has been proposed that measures the class cohesion on the basis of relative relatedness of the public methods to the overall public functionality of a class. The proposed metric for class cohesion uses a new concept of subset tree to determine relative relatedness of the public methods to the overall public functionality of a class. A set of metrics has been proposed for measuring class coupling based on three types of UML relationships, namely association, inheritance and dependency. The reasonable metrics to measure cohesion and coupling are supposed to share the same set of input data. Sharing of input data by the metrics encourages the idea for the existence of mutual relationships between them. Based on potential relationships research questions have been formed. An attempt is made to find answers of these questions with the help of an experiment on OO system FileZilla. Mutual relationships between class cohesion and class coupling have been analyzed statistically while considering OO metrics for size and reuse. Relationships among the pairs of metrics have been discussed and results are drawn in accordance with observed correlation coefficients. A study on Software evolution with the help of class cohesion and class coupling metrics has also been performed and observed trends have been analyzed.",2004,0, 1071,MEMENTO: a digital-physical scrapbook for memory sharing,"

The act of reminiscence is an important element of many interpersonal activities, especially for elders where the therapeutic benefits are well understood. Individuals typically use various objects as memory aids in the act of recalling, sharing and reviewing their memories of life experiences. Through a preliminary user study with elders using a cultural probe, we identified that a common memory aid is a photo album or scrapbook in which items are collected and preserved. In this article, we present and discuss a novel interface to our memento system that can support the creation of scrapbooks that are both digital and physical in form. We then provide an overview of the user's view of memento and a brief description of its multi-agent architecture. We report on a series of exploratory user studies in which we evaluate the effect and performance of memento and its suitability in supporting memory sharing and dissemination with physical---digital scrapbooks. Taking account of the current technical limitations of memento, our results show a general approval and suitability of our system as an appropriate interaction scheme for the creation of physical---digital items such as scrapbooks.

",2007,0, 1072,Merging six emergency departments into one: a simulation approach,"Simulation of existing systems can reinforce a Subject Matter Expert's gut feelings. However, it is more difficult to develop intuition for proposed systems, particularly when considering the consolidation of multiple systems. This paper discusses the use of simulation to determine the operational ramifications of combining six Emergency Departments into one of the largest in the country. Each of these six existing Emergency Departments serve a different type of patient population and each maintains their own independent processes. This hospital required all Emergency Departments to effectively function using the same floor space, processes and ancillary services, such as testing facilities, waiting rooms, and registration. Healthcare planners need to understand the ramifications of sharing resources among multiple departments and the operational impact of high volume systems. This project explored these challenges to find key bottlenecks and mitigation strategies using simulation.",2007,0, 1073,Meta-analysis and Reflection as System Development Strategies,"We propose a coevolutionary system which reciprocally develops player's strategies in two player games. The game environment is the seven stud poker that is a complex real- world game of imperfect information. In our system, the players decide their actions based on a self-learning by Classifier Systems and then make the strategies more complex and excellent. We analyze dynamics of the evolution of the player's strategies and show the learning process of reciprocating skills of players.",2007,0, 1074,Meta-analysis of correlations among usability measures,"We have developed a classifier decision fusion measure which is used as framework for combining multiple classifier decisions. The combination of different sources of information about a face, in the form of different feature sets and classification methods, provides an opportunity to develop an improved level of verification compared to the use of a single set of classifiers. Recently, the face recognition method based on principal component analysis (PCA) and directional filter bank (DFB) responses is integrated with voting algorithm. We look at the possibility of using cross correlation as a measure to compare the outputs of various classifiers. In our system recognition ability of the PCA is enhanced by providing directional images as inputs and then using the normalized cross correlation as a decision fusion measure. The proposed method fuses the decisions of DFB-PCA on the basis of maximum cross correlation of each directional test image with mean of its respective directional class. The experiment results showed the remarkable recognition rate of 97% in Olivetti data set",2005,0, 1075,Metabolic Visualization and Intelligent Shape Analysis of the Hippocampus,"

This paper suggests a prototype system for visualization and analysis of anatomic shape and functional features of the hippocampus. Based on the result of MR-SPECT multi-modality image registration, anatomical and functional features of hippocampus are extracted from MR and registered SPECT images, respectively. The hippocampus is visualized in 3D by applying volume rendering to hippocampus volume data extracted from the MR image with color coded by registered SPECT image. In order to offer the objective and quantitative data concerning to the anatomic shape and functional features of the hippocampus, the geometric volume and the SPECT intensity histogram of hippocampus regions are automatically measured based on the MR and the registered SPECT image, respectively. We also propose a new method for the analysis of hippocampal shape using an integrated Octree-based representation, consisting of meshes, voxels, and skeletons.

",2005,0, 1076,Methodology of Integrated Knowledge Management in Lifecycle of Product Development Process and Its Implementation,"This work first provides a literature review of product development and knowledge management including product development process, design history and domain knowledge. It then presents a method for the knowledge-based multi-view process modeling including process implementation, process monitoring, and knowledge management. The integrated framework and hierarchical model of the design history is built. The relationship among process history, design intent and domain knowledge is analyzed, and the method for acquisition and management of process history, design intent and domain knowledge is presented. The functional modules of knowledge-based PDPM system are described, and the architecture of integrated knowledge management (IKM) system based on PDPM is set up and developed with a B/C/S structure. The system is used successfully during the life cycle of a new type of railway rolling stock development in a Chinese enterprise.",2004,0, 1077,Methods for justifying arithmetic hypotheses and computer algebra,"

Computer algebra methods for justifying hypotheses in arithmetic geometry are considered. The specific feature of these methods is that they are designed for parametric problems. The methods developed can be used for solving computational problems and justifying hypotheses on equidistribution, as well as for the theory of algebraic curves over finite fields.

",2006,0, 1078,Metrics are fitness functions too,"Metrics, whether collected statically or dynamically, and whether constructed from source code, systems or processes, are largely regarded as a means of evaluating some property of interest. This viewpoint has been very successful in developing a body of knowledge, theory and experience in the application of metrics to estimation, predication, assessment, diagnosis, analysis and improvement. This paper shows that there is an alternative, complementary, view of a metric: as a fitness function, used to guide a search for optimal or near optimal individuals in a search space of possible solutions. This 'Metrics as Fitness Functions' (MAFF) approach offers a number of additional benefits to metrics research and practice because it allows metrics to be used to improve software as well as to assess it and because it provides an additional mechanism of metric analysis and validation. This paper presents a brief survey of search-based approaches and shows how metrics have been combined with the search based techniques to improve software systems. It describes the properties of a metric which make it a good fitness function and explains the benefits for metric analysis and validation which accrue from the MAFF approach.",2004,0, 1079,Metrics in Software Test Planning and Test Design Processes,"This paper explains the necessity of integrating the design of testing systems into the process of assembly system design. In this way, the authors propose a method to include testing in the method to design assembly systems from the Laboratory of Automation in Besancon. The different aspects of the testing problem are explained and formalized with three kinds of variables: aim variable, operative variable and action variable. These variables enable one to design the testing system. A five step process is also presented which is used to generate assembly plans including testing operations and to select the plans according to a testing strategy: 1) establishment of initial data on the product, leading to an operative model of the product; 2) the assembly plan generation including the testing features; 3) the establishment of some knowledge of the operative testing by using a multicriteria analysis; 4) a first selection among the assembly plans generated; and 5) a second selection from among the plans remaining after the first selection",1999,0, 1080,Metrics-Based Management of Software Product Portfolios,"Commmercial software product vendors such as Microsoft, IBM, and Oracle develop and manage a large portfolio of software products, which might include operating systems, middleware, firmware, and applications. Many institutions (such as banks, universities, and hospitals) also create and manage their own custom applications. Managers at these companies face an important problem: How can you manage investment, revenue, quality, and customer expectations across such a large portfolio? A heuristics-based product maturity framework can help companies effectively manage the development and maintenance of a portfolio of software products",2007,0, 1081,Microphase: an approach to proactively invoking garbage collection for improved performance,"

To date, the most commonly used criterion for invoking garbage collection (GC) is based on heap usage; that is, garbage collection is invoked when the heap or an area inside the heap is full. This approach can suffer from two performance shortcomings: untimely garbage collection invocations and large volumes of surviving objects. In this work, we explore a new GC triggering approach called MicroPhase that exploits two observations: (i) allocation requests occur in phases and (ii) phase boundaries coincide with times when most objects also die. Thus, proactively invoking garbage collection at these phase boundaries can yield high efficiency. We extended the HotSpot virtual machine from Sun Microsystems to support MicroPhase and conducted experiments using 20 benchmarks. The experimental results indicate that our technique can reduce the GC times in 19 applications. The differences in GC overhead range from an increase of 1% to a decrease of 26% when the heap is set to twice the maximum live-size. As a result, MicroPhase can improve the overall performance of 13 benchmarks. The performance differences range from a degradation of 2.5% to an improvement of 14%.

",2007,0, 1082,Milestone Markets: Software Cost Estimation through Market Trading,"Software cost estimation remains a difficult challenge despite decades of attention by both researchers and practitioners. Predictions are often inaccurate and characterized by very wide confidence intervals. Direct approaches base ""expert"" estimates on detailed requirements, along with the experience and intuition of the estimator. The Delphi method seeks a consensus estimate among a group of expert estimators. Still other approaches use historical project data to fit estimation models, such as COCOMO. While hybrid techniques like COBRA combine aspects of several methods. This paper proposes a very different approach, the use of information markets to continually aggregate the individual estimates of diverse software project stakeholders. Information markets have been applied successfully in several areas with the market consensus often outperforming individual experts. The paper describes a market mechanism for software cost estimation, explores the characteristics that make such an approach possible, and presents initial experiments based on a simple estimation task.",2006,0, 1083,Mining Discriminative Distance Context of Transcription Factor Binding Sites on ChIP Enriched Regions,"

Genome-wide identification of transcription factor binding sites (TFBSs) is critical for understanding transcriptional regulation of the gene expression network. ChIP-chip experiments accelerate the procedure of mapping target TFBSs for diverse cellular conditions. We address the problem of discriminating potential TFBSs in ChIP-enriched regions from those of non ChIP-enriched regions using ensemble rule algorithms and a variety of predictive variables, including those based on sequence and chromosomal context. In addition, we developed an input variable based on a scoring scheme that reflects the distance context of surrounding putative TFBSs. Focusing on hepatocyte regulators, this novel feature improved the performance of identifying potential TFBSs, and the measured importance of the predictive variables was consistent with biological meanings. In summary, we found that distance-based features are better discriminators of ChIP-enriched TFBS over other features based on sequence or chromosomal context.

",2007,0, 1084,Mining metrics to predict component failures,"What is it that makes software fail? In an empirical study of the post-release defect history of five Microsoft software systems, we found that failure-prone software entities are statistically correlated with code complexity measures. However, there is no single set of complexity metrics that could act as a universally best defect predictor. Using principal component analysis on the code metrics, we built regression models that accurately predict the likelihood of post-release defects for new entities. The approach can easily be generalized to arbitrary projects; in particular, predictors obtained from one project can also be significant for new, similar projects.",1996,0, 1085,Mining Process Execution and Outcomes ? Position Paper,"

Organizational processes in general and patient-care processes in particular, change over time. This may be in response to situations unpredicted by a predefined business process model (or clinical guideline), or as a result of new knowledge which has not yet been incorporated into the model. Process mining techniques enable capturing process changes, evaluating the gaps between the predefined model and the practiced process, and modifying the model accordingly. This position paper motivates the extension of process mining in order to capture not only deviations from the process model, but also the outcomes associated with them (e.g., patient improving or deteriorating). These should be taken into account when modifications to the process are made.

",2007,0, 1086,Mining Unexpected Associations for Signalling Potential Adverse Drug Reactions from Administrative Health Databases,"

Adverse reactions to drugs are a leading cause of hospitalisation and death worldwide. Most post-marketing Adverse Drug Reaction (ADR) detection techniques analyse spontaneous ADR reports which underestimate ADRs significantly. This paper aims to signal ADRs from administrative health databases in which data are collected routinely and are readily available. We introduce a new knowledge representation, Unexpected Temporal Association Rules (UTARs), to describe patterns characteristic of ADRs. Due to their unexpectedness and infrequency, existing techniques cannot perform effectively. To handle this unexpectedness we introduce a new interestingness measure, unexpected-leverage, and give a user-based exclusion technique for its calculation. Combining it with an event-oriented data preparation technique to handle infrequency, we develop a new algorithm, MUTARA, for mining simple UTARs. MUTARA effectively short-lists some known ADRs such as the disease esophagitis unexpectedly associated with the drug alendronate. Similarly, MUTARA signals atorvastatin followed by nizatidine or dicloxacillin which may be prescribed to treat its side effects stomach ulcer or urinary tract infection, respectively. Compared with association mining techniques, MUTARA signals potential ADRs more effectively.

",2006,0, 1087,Missing requirements and relationship discovery through proxy viewpoints model,"This paper addresses the problem of ""missing requirements"" in software requirements specification (SRS) expressed in natural language. Due to rapid changes in technology and business frequently witnessed over time, the original SRS documents often experience the problems of missing, not available, and hard-to-locate requirements. One of the flaws in earlier solutions to this problem has no consideration for missing requirements from multiple viewpoints. Furthermore, since such SRS documents represent an incomplete domain model, mannual discovery (identification and incorporation) of missing requirements and relationships is highly labor intensive and error-prone. Consequently, deriving and improving an efficient adaptation of SRS changes remain a complex problem. In this paper, we present a new methodology entitled ""Proxy Viewpoints Model-based Requirements Discovery (PVRD)"". The PVRD methodology provides an integrated framework to construct proxy viewpoints model from legacy status requirements and supports requirements discovery process as well as efficient management.",2004,0, 1088,Mixture Random Effect Model Based Meta-analysis for Medical Data Mining,"

As a powerful tool for summarizing the distributed medical information, Meta-analysis has played an important role in medical research in the past decades. In this paper, a more general statistical model for meta-analysis is proposed to integrate heterogeneous medical researches efficiently. The novel model, named mixture random effect model (MREM), is constructed by Gaussian Mixture Model (GMM) and unifies the existing fixed effect model and random effect model. The parameters of the proposed model are estimated by Markov Chain Monte Carlo (MCMC) method. Not only can MREM discover underlying structure and intrinsic heterogeneity of meta datasets, but also can imply reasonable subgroup division. These merits embody the significance of our methods for heterogeneity assessment. Both simulation results and experiments on real medical datasets demonstrate the performance of the proposed model.

",2005,0, 1089,"MML, inverse learning, and medical data-sets","Bayesian Networks (BNs) model data to infer the probability of a certain outcome. Conditional Probability Distributions (CPDs) specify the frequency distributions for every possible value an attribute can take. Large dimensionality of data results in very complex CPDs which makes it difficult to state the CPDs and to infer some property of the model once proposed. This project will extend the work of Comley and Dowe (2003, 2004) based on ideas from Dowe and Wallace (1998) on improving BNs. Techniques for simplifying CPDs will be investigated. Better models and therefore better inference for data-sets with high dimensionality (e.g. medical datasets) are expected.",2004,0, 1090,Mobile Personalization at Large Sports Events User Experience and Mobile Device Personalization,"

Mobile personalization is frequently discussed, and has been shown in relation to a number of usage scenarios. However, this research has focused mainly on technology development. There have been few studies of mobile user experience, and personalization in sports. This paper is devoted to the new field of studying the user experience related to mobile personalization at large sports events (LSE). In order to support and enrich the user experience at LSE with mobile personalization, this study investigates the current audience experience at stadiums and derives the usage patterns that device personalization could usefully support in this context.

",2007,0, 1091,Mobile Phone Based User Interface Concept for Health Data Acquisition at Home,"AbstractThe availability of mobile information and communication technologies is increasing rapidly and provides huge opportunities for home monitoring applications. This paper presents a new human-computer interface concept which is based on digital camera enabled mobile phones. Only two keystrokes are necessary to take a photo of a medical measurement device, for example a blood pressure meter, and to send the photo to a remote monitoring centre where specifically designed algorithms extract the numeric values from the photo and store them to a database for further processing. The results of a feasibility study indicates the potential of this new method to give people access to mobile phone based, autonomous recording and documentation of health parameters at home.",2004,0, 1092,Mobilizing software expertise in personal knowledge exchanges,"Personal knowledge exchanges (PKEs) are Web-based markets that match seekers and providers of knowledge and facilitate the pricing and transfer of knowledge assets. They show significant potential to function as infrastructure for the ''elance-economy.'' This study examines transactions for software expertimse in a personal knowledge exchange. It evaluates the mobilization of knowledge in terms of the speed and the number of knowledge providers that are matched to a knowledge request. Hypotheses are proposed based on transaction costs imposed by the characteristics of knowledge. We also study the impact of safeguarding and coordination mechanisms that may help overcome challenges to PKE use. We find that knowledge mobilization in the PKE is adversely impacted by knowledge transfer costs due to the tacitness, situatedness, and complexity of knowledge that is sought. To a lesser extent, knowledge mobilization is adversely affected by the likelihood of opportunistic behavior as indicated by the reputation ratings of the individual requesting the knowledge. The study enables a better understanding of the factors impacting the effectiveness of personal knowledge exchanges and provides important managerial implications for shaping their development.",2007,0, 1093,Model for defining and reporting reference-based validation protocols in medical image processing,"Objectives?Image processing tools are often embedded in larger systems. Validation of image processing methods is important because the performance of such methods can have an impact on the performance of the larger systems and consequently on decisions and actions based on the use of these systems. Most validation studies compare the direct or indirect results of a method with a reference that is assumed to be very close or equal to the correct solution. In this paper, we propose a model for defining and reporting reference-based validation protocols in medical image processing.",2006,0, 1094,MODEL TRANSFORMATION SUPPORT FOR THE ANALYSIS OF LARGE?SCALE SYSTEMS,"The basal economic factors were important to urban disaster-carrying capacity in the economic sub-system. Based on the collaboration, the concept model was set up while the two ""disaster-mitigation expenditure with economic output"" relationship couples of expenditure of disaster-defense and gross domestic product (GDP), insurance density and available income per person were considered as the compound of basal economic factors in the urban disaster-carrying capacity. 
The grey systems academic model was used to do coordinative analysis about the situation lately decade in Dalian aiming at the defection of existent methods. The result was that the coordinative relationship of expenditure of disaster-defend and GDP and insurance density and available income per person were both not very well which due to the actuality that economic development was in high-speed and the urban area hasn't been damaged by disasters recently. It was suggested that the construction of disaster prevention and reduction and insurance should be expedited while urban economy should develop ceaselessly.",2007,0, 1095,Model-Based Performance Prediction in Software Development: A Survey,"Over the last decade, a lot of research has been directed toward integrating performance analysis into the software development process. Traditional software development methods focus on software correctness, introducing performance issues later in the development process. This approach does not take into account the fact that performance problems may require considerable changes in design, for example, at the software architecture level, or even worse at the requirement analysis level. Several approaches were proposed in order to address early software performance analysis. Although some of them have been successfully applied, we are still far from seeing performance analysis integrated into ordinary software development. In this paper, we present a comprehensive review of recent research in the field of model-based performance prediction at software development time in order to assess the maturity of the field and point out promising research directions.",2004,0, 1096,Model-based Technology Integration with the Technical Space Concept,"In this paper we introduce the concept of Technical Space (TS) to refer to technologies at a higher level of abstraction. Some technical spaces can be easily identified, e.g. the XML TS, the DBMS TS, the programming languages TS, the OMG/MDA TS, etc. As the spectrum of such available technologies is rapidly broadening, the necessity to offer clear guidelines when choosing practical solutions to engineering problems is becoming a must. The purpose of our work is to figure out how to work more efficiently by using the best possibilities of each technology. To do so, we need a basic understanding of the similarities and differences between various TSs, and also of the possible operational bridges that will allow transferring the artifacts obtained in one TS to other TS. The analysis of several technical spaces reveals that they can be perceived in a broader context as model management frameworks, that is, every space is populated with models. An important commonality is that these frameworks are organized according to a three-level architecture based on models, metamodels, and metametamodels. The unified model-based view brings a conceptual foundation to study the possible bridges between spaces. Bridging technical spaces is especially useful when it brings new capabilities not available in a given space. We hope that the presented vision may help us putting forward the idea that there could be more cooperation than competition among alternative technologies",2005,0, 1097,Model-Based Verification in the Development of Dependable Systems,"In this paper, we present a framework that integrates the semi-formal modeling language, namely UML, with the formal method, namely PVS, to exploit their synergy in the development of dependable systems. 
System descriptions are given in UML notations, which are translated into semantic models in PVS based on formal semantic definitions for UML notations. The translation is automated and the resulting semantic models are rigorously analyzed using the PVS toolkit. We demonstrate, by example, how the framework contributes to improved use of formal methods in the development of dependable systems in the industrial settings, and to underpinning semi-formal modeling languages with rigorous semantic foundation.",2005,0, 1098,Model-driven architecture for cancer research,"It is a common phenomenon for research projects to collect and analyse valuable data using ad-hoc information systems. These costly-to-build systems are often composed of incompatible variants of the same modules, and record data in ways that prevent any meaningful result analysis across similar projects. We present a framework that uses a combination of formal methods, model-driven development and service-oriented architecture (SOA) technologies to automate the generation of data management systems for cancer clinical trial research, an area particularly affected by these problems. The SOA solution generated by the framework is based on an information model of a cancer clinical trial, and comprises components for both the collection and analysis of cancer research data, within and across clinical trial boundaries. While primarily targeted at cancer research, our approach is readily applicable to other areas for which a similar information model is available.",2007,0, 1099,Model-driven design of web applications with client-side adaptation,"The paper describes the design of a framework for enterprise Web applications that adapts their contents to various types of Web-enabled terminals, such as wearable devices, PDAs, and automobile PCs. Such terminals have different capabilities as regards their processing units, user interaction, and communication. Thus, applications must dynamically adapt their contents to each type of device when they provide service sessions. On the other hand, applications that serve various dynamic contents from databases and transactions need to be connected to back-end systems, namely, business objects designed independently of the Web applications. For reuse and easy development of such adaptive and enterprise systems, the framework should separate three concerns: (1) design of business objects, (2) design of logical Web contents, and (3) design of the content adaptation. The paper reports the author's experience in designing, implementing, and applying a framework to a banking system using small display devices, and discusses the design",1999,0, 1100,Model-Driven Engineering,"In the field of access control, many security breaches occur because of a lack of early means to evaluate if access control policies are adequate to satisfy privileges requested by subjects which try to perform actions on objects. This paper proposes an approach based on UMLsec, to tackle this problem. We propose to extend UMLsec, and to add OrBAC elements. In particular, we add the notions of context, inheritance and separation. We also propose a methodology for modeling a security policy and assessing the security policy modeled, based on the use of MotOrBAC. This assessment is proposed in order to guarantee security policies are well-formed, to analyse potential conflicts, and to simulate a real situation.",2013,0, 1101,Model-driven safety evaluation with state-event-based component failure annotations.,"

Over the past years, the paradigm of component-based software engineering has been established in the construction of complex mission-critical systems. Due to this trend, there is a practical need for techniques that evaluate critical properties (such as safety, reliability, availability or performance) of these systems. In this paper, we review several high-level techniques for the evaluation of safety properties for component-based systems and we propose a new evaluation model (State Event Fault Trees) that extends safety analysis towards a lower abstraction level. This model possesses a state-event semantics and strong encapsulation, which is especially useful for the evaluation of component-based software systems. Finally, we compare the techniques and give suggestions for their combined usage.

",2005,0, 1102,Model-Driven Software Development for Pervasive Information Systems Implementation,"Model-driven development (MDD) conceptions and techniques essentially centre the focus of development on models. They are subject of current research as they allow enhanced productivity, technological platform independence and longevity of software artifacts. Another area of current research is the ubiquitous/pervasive computing area. This field of computing research focuses on the widespread adoption of embedded or mobile heterogeneous computing devices, which, when properly orchestrated, globally compose pervasive information systems (PIS). This work intends to clarify how should be MDD concepts and techniques structurally consolidated into an approach to software development for PIS. It involves two projects as case studies. From these case studies, it will be proposed methodological insights to design approaches for software development of PIS. While clarifying several issues pertaining to MDD for PIS, it shall promote other research works based on issues needing further study.",2007,0, 1103,Model-driven software evolution: A research agenda,"The design, evolution and reuse of software system architectures are always important research areas in software engineering. In this paper, we first propose the concept of a multi-level orthogonal software system architecture, then describe a methodology of evolving and reusing this software architecture. Furthermore, we apply these concepts and methods to practical work, and are confident that they work well in reducing the complexity of software evolution (especially for large-scale software) and in enhancing the reuse rate",1999,0, 1104,"Model-Driven, Network-Context Sensitive Intrusion Detection","Network security has become an important issue in today's extensively interconnected computer world. The industry, academic institutions, small and large businesses and even residences have never been more risk from the increasing onslaught of computer attacks than more recently. Such malicious efforts cause damage ranging from mere violation of confidentiality and issues of privacy up to actual financial losses if business operations are compromised. Intrusion detection systems (IDS) have been used along with data mining and machine learning efforts to detect intruders. However, with the limitation of organizational resources, it is unreasonable to inspect every network alarm raised by the ids. Towards resource-and cost-sensitive IDS models we investigate the Modified Expected Cost of Misclassification as a model selection measure for building goal oriented intrusion detection classifier. The case study presented is that of the DARPA 1998 offline intrusion detection project. The empirical results show promise for building a resource-based intrusion detection model.",2004,0, 1105,Modeling Designers? Color Decision Processes Through Emotive Choice Mapping,"

Color selection support systems require a quantitative model of the color design decision-making process in order to support color selection strategies that further the specified goals of the designer without obstructing the unspecified goals. The system described in this paper models the color selection decision process based on the current state of the design, the desired state of the design, which is based on specified and unspecified designer goals. The specified goals are quantified as subjective responses to completed designs. In the main study discussed, seven novice designers independently designed 20 web pages and, in the process, every color selection was recorded. Adjective pairs selected from monologues provided semantic differential for evaluation of the resulting designs. A neural network-based inference system models designer selection based on the eventual results and the current state of the design to form designer strategies for color design support. This research is relevant in a variety of interactive applications but is of special interest for systems that work in close conjunction with human creativity.

",2005,0, 1106,Modeling Enablers for Successful KM Implementation,"Knowledge is recognized as a critical resource to gain and sustain competitive advantage in business. While many organizations are employing knowledge management (KM) initiatives, research studies suggest that it is difficult to establish return on investment of such efforts; however, desired results can be obtained through successful implementation. In this research study, using literature review, we identified a set of enablers and barriers of successful KM implementation. Using this set of factors, we developed a questionnaire by applying interpretive structural modeling (ISM) methodology to determine underlying relations among these factors and develop strategies for successful implementation of KM initiatives. Contributions from this research effort should also support organizations in making decisions about improving organizational performance using KM initiatives, and understanding the directional relations among KM factors. Because of the number of participants in our study, applicability of our research results may have certain limitations. To address this inadequacy, as a future research effort, we intend to increase the number of respondents and participant organizations",2007,0, 1107,Modeling Genetic Networks: Comparison of Static and Dynamic Models,"Sensor networks are sensing, computing and communication infrastructure that are able to observe and respond to phenomena in the natural environment and in our physical and cyber infrastructure. In this paper, we present a comparison evaluation for mobile and static sensor nodes in Wireless Sensor Networks (WNSs) considering TwoRayGround and Shadowing propagation models. The simulation results have shown that for the multi mobile sensors with TwoRayGround, the good put is stable. Also, the good put of Shadowing of mobile sensors is better than TwoRayGround. In case of consumed energy, the consumed energy of mobile sensors using Shadowing is better than TwoRaygound. Also, the RE of mobile sensors using TwoRayGound is better than Shadowing.",2011,0, 1108,Modeling Genome Evolution with a DSEL for Probabilistic Programming,"

Many scientific applications benefit from simulation. However, programming languages used in simulation, such as C++ or Matlab, approach problems from a deterministic procedural view, which seems to differ, in general, from many scientists’ mental representation. We apply a domain-specific language for probabilistic programming to the biological field of gene modeling, showing how the mental-model gap may be bridged. Our system assisted biologists in developing a model for genome evolution by separating the concerns of model and simulation and providing implicit probabilistic non-determinism.

",2006,0, 1109,Modeling Student Knowledge: Cognitive Tutors in High School and College,"

This paper examines the role of adaptive student modeling in cognitive tutor research and dissemination. Cognitive tutors™ are problem solving environments constructed around cognitive models of the knowledge students are acquiring. Over the past decade we in the Pittsburgh Advanced Cognitive Tutor (PACT) Center at Carnegie Mellon have been employing a cognitive programming tutor in university-based teaching and research, while simultaneously developing cognitive mathematics tutors that are currently in use in about 150 schools in 14 states. This paper examines adaptive student modeling issues in these two contexts. We examine the role of student modeling in making the transition from the research lab to widespread classroom use, describe our university-based efforts to empirically validate student modeling in the ACT Programming Tutor, and conclude with a description of the key role that student modeling plays in formative evaluations of the Cognitive Algebra II Tutor.

",2000,0, 1110,Modeling Successful Performance in Web Search,"Over the last few years, cloud computing has become quite popular. It offers Web-based companies the advantage of scalability. However, this scalability adds complexity which makes analysis and predictable performance difficult. There is a growing body of research on load balancing in cloud data centres which studies the problem from the perspective of the cloud provider. Nevertheless, the load balancing of scalable web servers deployed on the cloud has been subjected to less research. This paper introduces a simple queueing model to analyse the performance metrics of web server under varying traffic loads. This assists web server managers to manage their clusters and understand the trade-off between QoS and cost. In this proposed model two thresholds are used to control the scaling process. A discrete-event simulation (DES) is presented and validated via an analytical solution.",2013,0, 1111,Modeling the Behavior of TCP in Web Traffic,"In this paper we propose a novel methodology for analyzing web user behavior based on session simulation by using an Ant Colony Optimization algorithm which incorporates usage, structure and content data originating from a real web site. In the first place, artificial ants learn from a clustered web user session set through the modification of a text preference vector. Then, trained ants are released through a web graph and the generated artificial sessions are compared with real usage. The main result is that the proposed model explains approximately 80% of real usage in terms of a predefined similarity measure.",2011,0, 1112,Modeling the evolution of operating systems: An empirical study.,"Process-driven service-oriented architectures (SOA) need to cope with constant changing requirements of various compliance requirements, such as quality of service (QoS) constraints within service level agreements (SLA). To the best of our knowledge, only little evidence is available if and in how far process-driven SOAs deal with the evolution of the requirements. In this work, we evaluate an incremental and model-driven development approach on the evolution of the requirements and the domain model in the context of an industrial case study. The case study focuses on advanced telecom services that need to be compliant to QoS constraints. This paper answers questions about the applicability of the incremental development approach, the impact of requirement changes, possible drawbacks of using a non-incremental development approach, and general recommendations based on the findings. Our results provide guidelines for dealing with the evolution of model-driven service-oriented systems.",2010,0, 1113,Modeling the Experimental Software Engineering Process,"Reviews on software engineering literature have shown an insufficient experimental validation of claims, when compared to the standard practice in other well-established sciences. Poor validation of software engineering claims increases the risks of introducing changes in the software process of an organization, as the potential benefits assessment is based on hype, rather than on facts. The community lacks highly disseminated experimental best practices. We contribute with a model of the experimental software engineering process that is aligned with recent proposals for best practices in experimental data dissemination. 
The model can be used in the definition of software engineering experiments and in comparisons among experimental results.",2007,0, 1114,"Modeling the Non-functional Requirements in the Context of Usability, Performance, Safety and Security","Requirement engineering is the most significant part of the software development life cycle. Until now great emphasis has been put on the maturity of the functional requirements. But with the passage of time it reveals that the success of software development does not only pertain to the functional requirements rather non-functional requirements should also be taken into consideration. Among the non-functional requirements usability, performance, safety and security are considered important. Further it reveals that there exist so many modeling and testing techniques for functional requirements but the area of non-functional requirements is still deprived of. This is mainly due to difficulty, diversity in nature and hard to express for being domain-specific. Hence emphasis is put to the development of these models or testing techniques. While developing these models or testing techniques it is found that all the four areas of usability, performance, safety and security are not only closely related but rather depend on one another up to some extent. This meant that they all should be tackled while keeping into consideration of the related from among them. For this purpose it seemed necessary to collect in one artefact all the available modeling and testing techniques related to the four core areas of non-functional requirements. This work at first provides an understanding of the problem domain while describing aspects of the non-functional requirements. Then possibly the available related models or testing techniques are collected and discussed. Finally, they are compared with respect to diversified aspects.",2007,0, 1115,Modeling the Static/Data Aspects of the System,"In order to develop highly secure database systems to meet the requirements for class B2, an extended formal security policy model based on the BLP model is presented in this paper. A method for verifying security model for database systems is proposed. According to this method, the development of a formal specification and verification to ensure the security of the extended model is introduced. During the process of the verification, a number of mistakes have been identified and corrections have been made. Both the specification and verification are developed in Coq proof assistant. Our formal security model was improved and has been verified secure. This work demonstrates that our verification method is effective and sufficient and illustrates the necessity for formal verification of the extended model by using tools.",2008,0, 1116,Modelling a Receiver's Position to Persuasive Arguments,"Social psychology shows that the effect of a persuasive argument depends on characteristics of the person to be persuaded, including the person's involvement with the topic and the discrepancy between the person's current position on the topic and the argument's position.
Via a series of experiments, this paper provides insight into how the receivers position can be modelled computationally, as a function of the strength, feature importance, and position of arguments in a set.",2007,0, 1117,"Modular Network SOM: Theory, Algorithm and Applications","The well-known Routh's criterion uses a very efficient computational method, or algorithm, that has been found to reduce greatly calculational labor and chances of error in a number of other important applications to circuit theory. Among these applications are finding common factors of polynomials, computing Sturm's functions, synthesizing RC, RL, or LC ladder networks by means of continued-fraction expansions, determining RC, RL, or LC realizability of a given immittance function, and analysis of ladder networks. Methods of handling the first two problems, both in normal and special cases, are given and illustrated.",1959,0, 1118,Modular Security: Design and Analysis,"This paper presents the novel rectenna design for Wireless Power Transmission. The design would receive and convert microwave of 2.45GHz to DC. Proposed rectenna is a combination of Microstrip patch antenna, followed by stepped impedance filter and zero biased rectifier. Performance of rectenna is analyzed using Harmonic Balance Analysis. Good agreement between simulated and measured results is observed.",2012,0, 1119,Monitoring Requirements Coverage using Reconstructed Views: An Industrial Case Study,"Requirements views, such as coverage and status views, are an important asset for monitoring and managing software development. We have developed a method that automates the process for reconstructing these views, and built a tool, ReqAnalyst, to support this method. In this paper, we investigate to what extent we can automatically generate requirements views to monitor requirements in test categories and test cases. The technique used for retrieving the necessary data is an information retrieval technique called latent semantic indexing (LSI). We applied our method in a case study at LogicaCMG. We defined a number of requirements views and experimented with different reconstruction settings to generate these views",2006,0, 1120,Monitoring Research Collaborations Using Semantic Web Technologies,"The new movement to personalize treatment plans and improve prediction capabilities is greatly facilitated by intelligent remote patient monitoring and risk prevention. This paper focuses on patients suffering from bipolar disorder, a mental illness characterized by severe mood swings. We exploit the advantages of Semantic Web and Electronic Health Record Technologies to develop a patient monitoring platform to support clinicians. Relying on intelligently filtering of clinical evidence-based information and individual-specific knowledge, we aim to provide recommendations for treatment and monitoring at appropriate time or concluding into alerts for serious shifts in mood and patients' non response to treatment.",2015,0, 1121,Motivating open source software developers: influence of transformational and transactional leaderships,"Open Source Software (OSS) is developed by geographically distributed unpaid programmers. The success of such a seemingly chaotic OSS project will largely depend on how the project leader organizes and motivates the developers to contribute. Grounded on leadership and motivation theories, we proposed and tested a research model that seeks to explain the behavioral effects of a leader on the developers' motivation to contribute. 
Survey data collected from 118 OSS developers on Sourceforge.net was used to test the research model. The results indicate that leaders' transformational leadership is positively related to developers' intrinsic motivation and leaders' active management by exception, a form of transactional leadership, is positively related to developers' extrinsic motivation.",2006,0, 1122,Multi-level model-based self-diagnosis of distributed object-oriented systems,"

Self-healing relies on correct diagnosis of system malfunctioning. This paper presents a use-case based approach to self-diagnosis. Both a static and a dynamic model of a managed-system are distinguished with explicit functional, implementational, and operational knowledge of specific use-cases. This knowledge is used to define sensors to detect and localise anomalies at the same three levels, providing the input needed to perform informed diagnosis. The models presented can be used to automatically instrument existing distributed legacy systems.

",2006,0, 1123,Multimodal Human Computer Interaction: A Survey,"On-line help from a human actor will be exploited to facilitate computer perception. This paper proposes an innovative real-time algorithm - running on an active vision head - to build 3D scene descriptions from human cues. The theory is supported by experimental results both for figure/ground segregation of typical heavy objects in a scene (such as furniture), and for 3D object/scene reconstruction.",2004,0, 1124,Multimodal Interaction in a Ubiquitous Environment,"This paper presents the autonomous mobile robot interactive behavior in ubiquitous computing environment. We have constructed multimodal network-based interfaces for human-robot interactions (HRI) and present examples in providing services via the direct or indirect interaction in responding to the user's demands. The mobile robot consists of seven systems including vision, speech, remote supervisory and sensory systems, locomotion, robotic arms and power management systems. Various combination schemes have been developed towards network-based interfaces in ubiquitous computing environments. The goal of this paper is to provide a service scheme of an autonomous mobile robot in the loop to be accepted by people as natural assistant. In this paper we have demonstrated three kinds of scenario and successfully tested on security warrior, an autonomous mobile robot developed in our lab.",2009,0, 1125,Multi-objective optimization of generalized reliability design problems using feature models - A concept for early design stages.,"Reliability optimization problems such as the redundancy allocation problem (RAP) have been of considerable interest in the past. However, due to the restrictions of the design space formulation, they may not be applicable in all practical design problems. A method with high modelling freedom for rapid design screening is desirable, especially in early design stages. This work presents a novel approach to reliability optimization. Feature modelling, a specification method originating from software engineering, is applied for the fast specification and enumeration of complex design spaces. It is shown how feature models can not only describe arbitrary RAPs but also much more complex design problems. The design screening is accomplished by a multi-objective evolutionary algorithm for probabilistic objectives. Comparing averages or medians may hide the true characteristics of this distributions. Therefore the algorithm uses solely the probability of a system dominating another to achieve the Pareto optimal set. We illustrate the approach by specifying a RAP and a more complex design space and screening them with the evolutionary algorithm.",2007,0, 1126,Multiprocessor Implementation of Transitive Closure,"In this paper, traditional design of profibus node is based on single processor architecture. To enhance node performance and process more external events, a kind of profibus bridge node with asymmetric architecture design is introduced in this paper, which consists of two heterogeneous multiprocessors and two kinds of communication mode, i.e. profibus and Ethernet ways. The design is primarily based on function modules. Implementation of data flow control and optimized external events are also proposed. In addition, it Compares and analyzes this architecture with communication modes of multiprocessors.",2007,0, 1127,Multisensors Information Fusion with Neural Networks for Noninvasive Blood Glucose Detection,"

A multisensors information fusion model (MIFM) based on the Mixture of Experts (ME) neural networks was designed to fuse the multi-sensor signals for infrared noninvasive blood glucose detection. The ME algorithm greatly improved the precision of noninvasive blood glucose measurement with multisensors. The principle of ME, design and implementation of MIFM were described in detail. The standard deviation of the error of prediction (SO) was 0.88 mmol/l from blood and 0.65 mmol/l from water-glucose. The correlation coefficient (CC) to training data from blood analysis was 0.9.

",2005,0, 1128,Multiview framework for goal oriented measurement plan design.,"Understanding the results of measurements is a primary issue for continuous software process improvement. Models provide support for better understanding measures. One of the problems often encountered in defining a measurement plan is its dimensions in terms of goals and metrics. This inevitably impacts on the usability of a measurement plan in terms of effort needed for interpreting the measurement results and accuracy of interpretation itself. The authors validate an approach (multiview framework) for designing a measurement plan, according to the GQM model, and structured in order to improve usability. For this reason an experiment was executed to validate the approach and provide evidence that a GQM designed according to the multiview framework is more usable, and that interpretation depends on the collected measures and is independent of who interprets them. In the experiment the authors verify that a measurement plan designed according to the proposed model doesn't negatively impact on efficiency of interpretation. The experimental results are positive and encourage further replications and studies.",2003,0, 1129,MuNDDoS: A Research Group on Global Software Development,"The purpose of this paper is to present the MuNDDoS Research Group, located at PUCRS University, in Porto Alegre, Brazil. This group was created in 2001, and since then is conducting several research projects on Global Software Development in Brazil, collaborating with research groups from other universities in Brazil and other countries. We present the research group goals, some of the projects developed, and the current projects",2006,0, 1130,"Music, Heart Rate, and Emotions in the Context of Stimulating Technologies","

The present aim was to explore heart rate responses when stimulating participants with technology primarily aimed at the rehabilitation of older adults. Heart rate responses were measured from 31 participants while they listened to emotionally provoking negative, neutral, and positive musical clips. Ratings of emotional experiences were also collected. The results showed that heart rate responses to negative musical stimuli differed significantly from responses to neutral stimuli. The use of emotion-related physiological responses evoked by stimulating devices offers a possibility to enhance, for example, emotionally stimulating or otherwise therapeutic sessions.

",2007,0, 1131,My world(s)': A tabletop environment to support fantasy play for kindergarten children,"

This research aims to design My World(s), a tabletop application for kindergarten children (aged 3 to 5 years). My World(s) will provide an interactive tabletop environment to support individual or peer-to-peer fantasy play and offer young children the possibility to create and enact their fantasies in a digital context. The research will be based on a literature review, field studies (observations of young children's activities in ecological settings) and interviews with nursery teachers and parents. A prototype of the My World(s) tabletop application will be developed based on the data gathered and it will be evaluated empirically.

",2007,0, 1132,myGrid: personalised bioinformatics on the information grid.,"In this paper, we analyzed the drawbacks existing in the current personalized information recommendation. Then we analyzed the limitations of the application of the Internet, the semantic Web and the grid technology in the field of personalized information recommendation. Ground on the analysis above, we put forward a personalized information recommendation model based on semantic grid, and preliminary advanced the personalized information recommendation solution of large-scale, high accuracy, strong timeliness, which is geared to the distributed, heterogeneous, massive information environment. Specifically speaking, the solution makes use of the high-performance computing and information service capability of semantic grid to resolve the problem of recommendation scale and timeliness; it makes use of the semantic processing ability of semantic grid to resolve the problem of recommendation intelligence; In addition, it makes use of the grid monitoring technology to resolve the problem of the real time information acquisition of the candidate grid nodes.",2009,0, 1133,National reconstruction information management system,"It is becoming more vital than ever before for business to manage customer relationship and build customer loyalty. Information systems and data mining techniques have significant contributions in the customer relationship management process. In the dynamic business environment, information systems need to evolve to adapt to the change in requirements, which is driven by customer relationship management. An evolving information system is proposed in this paper and discussed by focusing on a specific type of knowledge, namely association rule. With new classes and attributes created based on new knowledge and user requirements, the evolving information system is capable of collecting, processing and providing more valuable information on customers, to support customer relationship management.",2008,0, 1134,Nearest neighbor sampling for better defect prediction,"The finite-sample risk of the k-nearest-neighbor classifier is analyzed for a family of two-class problems in which patterns are randomly generated from smooth probability distributions in an n-dimensional Euclidean feature space. First, an exact integral expression for the m-sample risk is obtained for a k-nearest-neighbor classifier that uses a reference sample of m labeled feature vectors. Using a multidimensional application of Laplace's method of integration, this integral can be represented as an asymptotic expansion in negative rational powers of m. The leading terms of this asymptotic expansion elucidate the curse of dimensionality and other properties of the finite-sample risk",1994,0, 1135,Negotiating Models,"Automated negotiation has become the core of the intelligent e-commerce. Traditional research in automated negotiation is focused on negotiation protocol and strategy. However, current research is lack of unified technology standard, which causes the system's practical application difficult. This paper designs a negotiating agent architecture, which is based on the agent's ability of communication, and can support both goal-directed reasoning and reactive response. In order to construct a general interaction mechanism among negotiating agents, a communication model is proposed, in which the negotiation language used by agents is defined. 
Design of the communication model and the language has been attempted in such a way so as to provide general support for a wide variety of commercial negotiation circumstances, and therefore to be particularly suitable for electronic commerce. Finally, the design and expression of the negotiation ontology is discussed.",2009,0, 1136,Negotiation of software requirements in an asynchronous collaborative environment,"The effect of task structure and negotiation sequence on collaborative software requirements negotiation is investigated. This work began with an extensive literature review that focused on current research in collaborative software engineering and, in particular, on the negotiation of software requirements and the requisite collaboration for the development of such requirements. A formal detailed experiment was then conducted to evaluate the effects of negotiation sequence and task structure in an asynchronous group meeting environment. The experiment tested the impact of these structures on groups negotiating the requirements for an emergency response information system. The results reported here show that these structures can have a positive impact on solution quality but a negative impact on solution satisfaction, although following a negotiation sequence can help asynchronous groups come to agreement faster. Details of the experimental procedures, statistical analysis, and discussion of the results of the experiment are also presented, as are suggestions for improving this work and a plan for future research.",2005,0, 1137,Network Legos: Building Blocks of Cellular Wiring Diagrams,"

Publicly-available data sets provide detailed and large-scale information on multiple types of molecular interaction networks in a number of model organisms. These multi-modal universal networks capture a static view of cellular state. An important challenge in systems biology is obtaining a dynamic perspective on these networks by integrating them with gene expression measurements taken under multiple conditions.

We present a top-down computational approach to identify building blocks of molecular interaction networks by (i) integrating gene expression measurements for a particular disease state (e.g., leukaemia) or experimental condition (e.g., treatment with growth serum) with molecular interactions to reveal an active network, which is the network of interactions active in the cell in that disease state or condition and (ii) systematically combining active networks computed for different experimental conditions using set-theoretic formulae to reveal network legos, which are modules of coherently interacting genes and gene products in the wiring diagram.

We propose efficient methods to compute active networks, systematically mine candidate legos, assess the statistical significance of these candidates, arrange them in a directed acyclic graph (DAG), and exploit the structure of the DAG to identify true network legos. We describe methods to assess the stability of our computations to changes in the input and to recover active networks by composing network legos.

We analyse two human datasets using our method. A comparison of three leukaemias demonstrates how a biologist can use our system to identify specific differences between these diseases. A larger-scale analysis of 13 distinct stresses illustrates our ability to compute the building blocks of the interaction networks activated in response to these stresses.

",2007,0, 1138,Neural Networks and Other Machine Learning Methods in Cancer Research,"We have utilized neural networks in different applications of bioinformatics such as discrimination of beta-barrel membrane proteins, mesophilic and thermophilic proteins, different folding types of globular proteins, different classes of transporter proteins and predicting the secondary structures of beta-barrel membrane proteins. In these methods, we have used the information about amino acid composition, neighboring residue information, inter-residue contacts and amino acid properties as features. We observed that the performance with neural networks is comparable to or better than other widely used machine learning techniques.",2008,0, 1139,Neuro-Imaging Platform for Neuroinformatics,"As one of the subject in modern educational technology, E-Learning has not been widely applied yet. The accumulation of knowledge is a tree growing process. From the person's cognitive processes, we propose a methodology for E-Learning design based on tree structure. Knowledge point is stored in a database as a record, and then is bound to a TreeView control, in order to show the form of a tree. Finally, a very visual and friendly user interface E-Learning platform with clear context between knowledge points was constructed by using the latest Microsoft development technologies. And the trial platform is supported by teachers and students praise.",2010,0, 1140,"Nieuwland, ?Integrated development and maintenance of software products to support efficient updating of customer configurations: A case study in mass market erp software","The maintenance of enterprise application software at a customer site is a potentially complex task for software vendors. This complexity can unfortunately result in a significant amount of work and risk. This paper presents a case study of a product software vendor that tries to reduce this complexity by integrating product data management (PDM), software configuration management (SCM), and customer relationship management (CRM) into one system. The case study shows that by combining these management areas in a single software itnowledge base, software maintenance processes can be automated and improved, thereby enabling a software vendor of enterprise software to serve a large number of customers with many different product configurations.",2005,0, 1141,NLP-Based Curation of Bacterial Regulatory,"

Manual curation of biological databases is an expensive and labor-intensive process in Genomics and Systems Biology. We report the implementation of a state-of-the-art, rule-based Natural Language Processing system that creates computer-readable networks of regulatory interactions directly from abstracts and full-text papers. We evaluate its output against a manually-curated standard database, and test the possibilities and limitations of automatic and semi-automatic curation of the so-called biobibliome. We also propose a novel Regulatory Interaction Mining Markup Language suited for representing this data, useful both for biologists and for text-mining specialists.

",2009,0, 1142,Note-Taking Support for Nurses Using Digital Pen Character Recognition System,"

This study presents a novel system which supports nurses in note-taking by providing a digital pen and character-recognition system with an emphasis on the user interface. The system applies the characteristics of a digital pen to improve the efficiency of tasks related to nursing records. The system aims at improving the efficiency of nursing activities and reducing the time spent on nursing-record tasks. In our system, first, notes are written on a check sheet using a digital pen while voice is recorded on a voice recorder; the pen and voice data are transferred to a PC. The pen data are then recognized automatically as characters, which can be viewed and manipulated with the application. We conducted an evaluation experiment to improve the efficiency and operation of the system, and its interface. The evaluation and test operations used 10 test subjects. Based on the test operation and the evaluation experiment of the system, it turned out that improvement for urgent situations, enhancement of portability, and further use of character recognition are required.

",2006,0, 1143,Numerical Time-Series Pattern Extraction Based on Irregular Piecewise Aggregate Approximation and Gradient Specification,"

This paper proposes and evaluates a method for extracting interesting patterns from numerical time-series data which takes account of user subjectivity. The proposed method conducts irregular sampling on the data preserving the subjectively noteworthy features using a user specified gradient. It also conducts irregular quantization, preserving the intrinsically objective characteristics of the data using statistical distributions. It then extracts representative patterns from the discretized data using group average clustering. Experimental results using benchmark datasets indicate that the proposed method does not destroy the intrinsically objective features, since it has the same performance as the basic subsequence clustering using K-Means algorithm. Results using a dataset from a clinical hepatitis study indicate that it extracts interesting patterns for a medical expert.

",2007,0, 1144,Nurses? Working Practices: What Can We Learn for Designing Computerised Patient Record Systems?,"

As demonstrated by several studies, nurses are reluctant to use poorly designed computerised patient records (CPR). So far, little is known about the nurses' interaction with paper-based patient records. However, these practices should guide the design of a CPR system. Hence, we investigated the nurses' work with the patient records by means of observations and structured interviews on wards in internal medicine, geriatrics and surgery. Depending on the working context and the nursing tasks and activities to be performed, characteristic access preferences and patterns were identified when nurses interacted with patient records. In particular, we found typical interaction patterns when nurses performed tasks that included all assigned patients. Another important finding concerns worksheets. Nurses use them during their whole shift to manage all relevant information in a concise way. Based on our findings, we suggest a CPR design which reflects the identified practices and should improve the acceptance of CPR systems in the demanding hospital environment.

",2007,0, 1145,Object-oriented cohesion subjectivity amongst experienced and novice developers: an empirical study,"The concept of software cohesion in both the procedural and object-oriented paradigm is well known and documented. What is not so well known or documented is the perception of what empirically constitutes a cohesive 'unit' by software engineers. In this paper, we describe an empirical investigation using object-oriented (OO) classes as a basis. Twenty-four subjects (drawn from IT experienced and novice groups) were asked to rate ten classes sampled from two industrial systems in terms of their overall cohesiveness; a class environment was used to carry out the study. Three hypotheses were investigated as part of the study, relating to class size, the role of comment lines and the differences between the two groups in terms of how they rated cohesion. Several key results were observed. Firstly, class size (when expressed in terms of number of methods) only influenced the perception of cohesion by novice subjects. Secondly, well-commented classes were rated more highly amongst IT experienced than novice subjects. Thirdly, results suggest strongly that cohesion comprises a combination of various class factors including low coupling, small numbers of attributes and well-commented methods, rather than any single, individual class feature per se. Finally, if the research supports the view that cohesion is a subjective concept reflecting a cognitive combination of class features then cohesion is also a surrogate for class comprehension.",2006,0, 1146,Object-oriented concept analysis for software imodularisation.,"The object oriented programming paradigm often claimed to allow a faster development pace and higher quality of software. Within the design model, it is necessary for design classes to collaborate with one another. However, collaboration should be kept to an acceptable minimum i.e. better designing practice will introduce low coupling. If a design model is highly coupled, the system is difficult to implement, to test and to maintain overtime. In case of enhancing software, we need to introduce or remove module and in that case coupling is the most important factor to be considered because unnecessary coupling may make the system unstable and may cause reduction in the system's performance. So coupling is thought to be a desirable goal in software construction, leading to better values for external software qualities such as maintainability, reusability and so on. To test this hypothesis, a good measure of class coupling is needed. In this paper, the major issue of coupling measures have been analyzed with the objective of determining the most significant coupling measure.",2010,0, 1147,On Comparison of Mechanisms of Economic and Social Exchanges: The Times Model,"AbstractAn e-market system is a concrete implementation of a market institution; it embeds one or more exchange mechanisms. E-market systems are also information systems which are information and communication technologies artifacts. This work puts forward an argument that the study of e-markets must incorporate both the behavioral economics as well as the information systems perspectives. To this end the paper proposes a conceptual framework that integrates the two. This framework is used to formulate a model, which incorporates the essential features of exchange mechanisms, as well as their implementations as is artefacts. The focus of attention is on two classes of mechanisms, namely auctions and negotiations. 
They both may serve the same purpose and their various types have been embedded in many e-market systems.",2008,0, 1148,On establishing the essential components of a technology-dependent framework: a strawman framework for industrial case study-based research,"A goal of evidence-based software engineering is to provide a means by which industry practitioners can make rational decisions about technology adoption. When a technology is mature enough for potential widespread use, practitioners find empirical evidence most compelling when the study has taken place in a live, industrial situation in an environment comparable to their own. However, empirical software engineering is in need of guidelines and standards to direct industrial case studies so that the results of this research are valuable and can be combined into an evidentiary base. In this paper, we present a high-level view of a measurement framework that has been used with multiple agile software development industrial case studies. We propose that this technology-dependent framework can be used as a strawman for a guideline of data collection, analysis, and reporting of industrial case studies. Our goal in offering the framework as a strawman is to solicit input from the community on a guideline for the essential components of a technology-dependent framework for industrial case study research.",2005,0, 1149,On Extended Finite Element Method (XFEM) for Modelling of Organ Deformations Associated with Surgical Cuts,"AbstractThe Extended Finite Element Method (XFEM) is a technique used in fracture mechanics to predict how objects deform as cracks form and propagate through them. Here, we propose the use of XFEM to model the deformations resulting from cutting through organ tissues. We show that XFEM has the potential for being the technique of choice for modelling tissue retraction and resection during surgery. Candidates applications are surgical simulators and image-guided surgery. A key feature of XFEM is that material discontinuities through FEM meshes can be handled without mesh adaptation or remeshing, as would be required in regular FEM. As a preliminary illustration, we show the result of XFEM calculation for a simple 2D shape in which a linear cut was made.",2004,0, 1150,On System Scalability,"The storage and bandwidth requirements of digital video and audio exceed those that can be supported by conventional file servers. Despite the emergence of new compression algorithms capable of providing extremely high compression ratios, there is still a challenge to provide optimised storage services capable of storing 1000s of hours of multimedia data and providing simultaneous access to hundreds and potentially thousands of clients. This paper describes a scalable multimedia storage architecture (SMSA) that supports wide area storage, storage server scalability allowing the addition of extra storage nodes, and maximised available data streams through the use of the load balancing techniques of network striping/file replication. It also allows for the storage of multi-resolution data produced by scalable compression techniques to match the Quality of Service requirements of heterogeneous clients",1996,0, 1151,"On the ""selling"" of academic research to industry","Pharmaceutical companies are one of the fastest growing companies in the market. ERP as a business solution can help pharmaceutical companies to ease the work flow in the organization and to enhance its performance. 
This paper is based on the survey carried out at Leben Laboratory Pvt. Ltd., a pharmaceutical company involved in manufacturing various kinds of medicines. The objectives of this paper are to study the need for automation software/ERP in pharmaceutical companies and to identify the problems faced in these companies while using the software, and their failures. It also aims at documenting the current software used by Leben and developing customized software as per their requirements, thereby overcoming the drawbacks of the current software. It reports how the developed software can fulfil the needs of many SMEs and effectively provide better solutions to manage their work.",2012,0, 1152,On the Comparability of Reliability Measures: Bifurcation Analysis of Two Measures in the Case of Dichotomous Ratings,"The problem of analysing interrater agreement and reliability is known both in human decision making and in machine interaction. Several measures have been developed in the last 100 years for this purpose, with Cohen's Kappa coefficient being the most popular one. Due to methodological considerations, the validity of kappa-type measures for interrater agreement has been discussed in a variety of papers. However, a global comparison of properties of these measures is currently still deficient. In our approach, we constructed an integral measure to evaluate the differences between two reliability measures for dichotomous ratings. Additionally, we studied bifurcation properties of the difference of these measures to quantify areas of minimal differences. From the methodological point of view, our integral measure can also be used to construct other measures for interrater agreement.",2006,0, 1153,On the Creation of a Reference Framework for Software Product Management: Validation and Tool Support,"Software product management does not get as much attention in scientific research as it should have, compared to the high value that product software companies ascribe to it. In this paper, we give a status overview of the current software product management domain by performing a literature study and field studies with product managers. Based on these, we are able to present a reference framework for software product management, in which the key process areas, stakeholders and their relations are modeled. To validate the reference framework, we perform a case study in which we analyze the stakeholder communication concerning the conception, development and launching of a new product at a major software vendor. Finally, we propose the Software Product Management Workbench for operational support for product managers in product software companies.",2006,0, 1154,On the difficulty of replicating human subjects studies in software engineering,"Replications play an important role in verifying empirical results. In this paper, we discuss our experiences performing a literal replication of a human subjects experiment that examined the relationship between a simple test for consistent use of mental models, and success in an introductory programming course. We encountered many difficulties in achieving comparability with the original experiment, due to a series of apparently minor differences in context.
Based on this experience, we discuss the relative merits of replication, and suggest that, for some human subjects studies, literal replication may not be the most effective strategy for validating the results of previous studies.",2008,0, 1155,On the Implementation of a Knowledge Management Tool for SPI,"Software Process Improvement is a long term journey which is made comfortable by many means. The most dominant and preferred plan is the knowledge driven methodology which the software development organizations are experimenting with. To have a look and feel of the knowledge and its management, it has become essential to have a standardized knowledge management tool (KMT) that comprises specifications like acquisition, representation, sharing and deploying. While several tools and techniques are available in managing knowledge for solving domain problems, it is felt in the knowledge society that no standard KM tools exist that would facilitate SPI. In this piece of implementation work, we outline the features that are deemed significant for the implementation of a KMT that drives the journey of SPI. Four process areas are chosen and four subsystems are identified in covering these process areas. A survey is also conducted among the organizations requiring the support of a KMT in making a decisive SPI initiative. Implications of this work demand the cooperation of the software development companies with the research community in finding a better approach in their improvement program.",2007,0, 1156,On the Problem of Identifying the Quality of Geographic Metadata,"This chapter contains sections titled: Case Study: Feelwell Health Systems, Identifying Problems, Building a Cross-Functional Team, Adopting a Framework: Building and Testing Hypotheses, Key Information, Uncovering the Causes of Data Quality Problems, Concluding Remarks",2006,0, 1157,On the use of error propagation for statistical validation of computer vision software.,"Computer vision software is complex, involving many tens of thousands of lines of code. Coding mistakes are not uncommon. When the vision algorithms are run on controlled data which meet all the algorithm assumptions, the results are often statistically predictable. This renders it possible to statistically validate the computer vision software and its associated theoretical derivations. In this paper, we review the general theory for some relevant kinds of statistical testing and then illustrate this experimental methodology to validate our building parameter estimation software. This software estimates the 3D positions of building vertices based on the input data obtained from multi-image photogrammetric resection calculations and 3D geometric information relating some of the points, lines and planes of the buildings to each other.",2005,0, 1158,On Transferring a Method into a Usage Situation,"This paper proposes a new effective data transfer method for IEEE 802.11 wireless LANs by integrating priority control and a multirate mechanism. The IEEE 802.11 PHY layer supports a multirate mechanism with dynamic rate switching, and an appropriate data rate is selected in transmitting a frame. However, because the multirate mechanism is used with the CSMA/CA (carrier sense multiple access with collision avoidance) protocol, low rate transmissions need much longer time than high rate transmissions to finish sending a frame. As a result, the system capacity is decreased.
The proposed method assumes the same number of priority levels as the data rates, and a data rate is associated to a priority level. The priority of a transmission goes up with the used data rate. For this purpose, we have modified the CSMA/CA protocol to support prioritized transmission. By selecting the appropriate priority depending on the data rate and giving more transmission opportunities for high rate transmission, the system capacity is increased. The effect of the proposed mechanism is confirmed by computer simulations.",2002,0, 1159,On Utilization of the Grid Computing Technology for Video Conversion and 3D Rendering,"

In this paper, we investigate the recently popular computing technique called Grid Computing, and use video conversion and 3D rendering applications to demonstrate this technology’s effectiveness and high performance. We also report on developing a resource broker called Phantom that runs on our grid computing testbed; its main function is to query nodes in grid computing environments and show their system information, helping to select the best nodes for job assignments so that jobs are executed in the least amount of time.

",2005,0, 1160,"Online Engineering Education: Learning Anywhere, Anytime","In the last couple of decades, Photovoltaic (PV) solar systems have captured a lot of interest as a clean, renewable energy option. As a result, PV engineering has become an emerging field in engineering education. In order to meet increased learning requirements, new learning resources, an effective curriculum and proper assessment are needed. Pveducation.org is one of the discipline's oldest learning resources, providing PV content for photovoltaic professionals. The purpose of this paper is (1) evaluating the effectiveness of the pveducation.org learning portal from the user's perspective, and (2) find a relationship between the effectiveness of the website and user's learning gains. This study will conduct a systematic assessment of educational technology by using statistical techniques and data collected through a survey.",2012,0, 1161,Ontological representations of software patterns,"In the software development lifecycle, security expertise is one common missing quality that needs to be addressed on a stronger footing, by taking advantage of the scaling effect of security patterns. Security patterns capture security experts' knowledge for a given security problem. Hence, they are produced by experts in security and consumed by novice security users, such as software developers. In this paper we present an ontology based approach to find an eligible set of security patterns requested by software developers. We adopt the formal description of security properties presented in the Serenity EU project for defining our ground security requirements. We distinguish between two profiles for software developers and define a corresponding ontological interface. This ontological interface contains a mapping between security requirements from one side and threat models, security bugs, security errors on another side taking into consideration their contexts of applicability. We describe the current status of this work in progress where results are quite promising.",2008,0, 1162,Ontologies to Support Learning Design Context,"This paper describes the design of an international symposium whereby design researchers and design educators from diverse disciplines form a learning partnership to advance design thinking. Concepts from three theoretical frameworks, the scholarship of integration, learning partnerships and complexity theory, were used to design interactions before, during, and after the symposium. This transformative approach provides a potentially more effective means than the traditional diffusion model (research-disseminate-adopt) to translate educational research into teaching practice.",2014,0, 1163,Ontology Database: A New Method for Semantic Modeling and an Application to Brainwave Data,"

We propose an automatic method for modeling a relational database that uses SQL triggers and foreign-keys to efficiently answer positive semantic queries about ground instances for a Semantic Web ontology. In contrast with existing knowledge-based approaches, we expend additional space in the database to reduce reasoning at query time. This implementation significantly improves query response time by allowing the system to disregard integrity constraints and other kinds of inferences at run-time. The surprising result of our approach is that load-time appears unaffected, even for medium-sized ontologies. We applied our methodology to the study of brain electroencephalographic (EEG and ERP) data. This case study demonstrates how our methodology can be used to proactively drive the design, storage and exchange of knowledge based on EEG/ERP ontologies.

",2008,0, 1164,Ontology Versioning and Evolution for Semantic Web-Based Applications,"One of the goals of computational intelligence is to coordinate various distributed data sources and services to compose more complex services that cover more than provided by a single service. The semantic grid [4] is proposed that integrates grid technologies and semanticWeb technologies by providing semantic annotations of grid services and data sources. We argue that technologies for automated composition of services are also useful for constructing statically composed service cooperating system(called ‘services integration system’). In this paper, we propose a framework that supports the development of services integration systems. It helps portal site developers to discover, evaluate, and execute service chaining paths for services integration systems. Semi-automated source code generation will reduce the cost of services integration system development.",2005,0, 1165,Ontology-Based Knowledge Extraction-A Case Study of Software Development,"It is very important to construct a common library from the software development process in a standard analysis pattern. Reusing analysis knowledge in the same domain and the common library can help to build up high-quality applications in limited developing time. System analysis patterns enable a software application to model a specific problem by representing some domain classes and their relationships as modeling components. In this paper, a conceptual model was developed that emphasized the role of analysis pattern and domain library for application with ontology including domain model and dynamic characteristics of classes in tree maps. The result shows two conclusions were derived: first, less development and requirement changes the process costs with common library in ontology and we easy to search and reuse the library components. The second, it is more highly user satisfaction due to the system flexibility and more patients were paid for requirement changes",2006,0, 1166,OPEN Process Support for Web Development,"To support commercial-strength Web development, it is as important to utilize a process as it is in regular, non-Web information systems development. We evaluate the efficacy of an established OO/CBD development process (OPEN), in Web development and propose new and amended activities and tasks that should be included in OPEN to fully, support the new demands of Website construction and the delivery of business value on the Web. Sixteen new tasks are identified together with one new activity. Four subtasks of particular relevance to the interface based on usage centered design are also advocated",2001,0, 1167,Open Source Project Categorization Based on Growth Rate Analysis and Portfolio Planning Methods,"AbstractIn this paper, we propose to arrive at an assessment and evaluation of open source projects based on an analysis of their growth rates in several aspects. These include code base, developer number, bug reports and downloads. Based on this analysis and assessment, a well-known portfolio planning method, the BCG matrix, is employed for arriving at a very broad classification of open source projects. While this approach naturally results in a loss of detailed information, a top-level categorization is in some domains necessary and of interest.",2008,0, 1168,Open Source Software Usage Implications in the Context of Software Development,"Open source software (OSS) has steadily penetrated the software industry. 
OSS can be interpreted in many ways, such as software license, as a development philosophy, and as a development process. OSS has increased in its maturity as a solution for industries in the last decade. OSS also continues to exploit its options as a business model. The author proposes a three stage model of the evolution of OSS. The author coins a new term, organizational OSS, and presents the definition. Then, the author discusses the requirements that lead to a new stage, the driving factors and obstacles toward a new direction in OSS.",2010,0, 1169,Open Source Software: A Source of Possibilities for Software Engineering Education and Empirical Software Engineering,"Open source projects are an interesting source for software engineering education and research. By participating in open source projects students can improve their programming and design capabilities. By reflecting on own participation by means of an established research method and plan, master's students can in addition contribute to increase knowledge concerning research questions. In this work we report on a concrete study in the context of the Net- beans open source project. The research method used is a modification of action research.",2007,0, 1170,Optimal Information Transmission Through Cortico-Cortical Synapses,

Neurons in visual cortex receive a large fraction of their inputs from other cortical neurons with a similar stimulus preference. Here we use models of neuronal population activity and information theoretic tools to investigate whether this arrangement of synapses allows efficient information transmission. We find that efficient information transmission requires that the tuning curve of the afferent neurons is approximately as wide as the spread of stimulus preferences of the afferent neurons reaching a target neuron. This is compatible with present neurophysiological evidence from visual cortex. We thus suggest that the organization of V1 cortico-cortical synaptic inputs allows optimal information transmission.

,2005,0, 1171,Optimization of the Alberty and Hespelt carrier frequency error detection algorithm,"Following a literature survey, the Alberty and Hespelt frequency error detection (FED) algorithm was chosen for software implementation in an all-digital demodulator. For the correct operation of this algorithm, it is desired to have symmetric bandpass filters. In this paper a simple method will be presented to ensure that the bandpass filters are fully symmetrical. Simulations show that even with symmetric bandpass filters, there is a substantial amount of tracking jitter. To overcome this problem, a smart filter has been implemented to ensure that the tracking performance is almost jitter free without increasing the error acquisition time",2005,0, 1172,OrderedList - A bioconductor package for detecting similarity in ordered gene lists,

Summary: OrderedList is a Bioconductor-compliant package for meta-analysis based on ordered gene lists like those resulting from differential gene expression analysis. Our package quantifies the similarity between gene lists. The significance of the similarity score is estimated from random scores computed on perturbed data. OrderedList illustrates list similarity in intuitive plots and determines the score-driving genes for further analysis.

Availability: http://www.bioconductor.org

Contact: claudio.lottaz@molgen.mpg.de

Supplementary information: Please visit our webpage on http://compdiag.molgen.mpg.de/software

,2006,0, 1173,Organization,"Nowadays, plugin-based applications are quite common. Eclipse is probably the most popular example of such an application. By means of plugins, end-users can add or remove functionality even at runtime. Besides the kernel, plugin-based applications can be kept very small and nearly everything can be designed as a plugin. However, if plugins are added at runtime, their ordering is difficult to organize. This can be observed for graphical user interface representations of plugins, such as menu or list items for example. In particular, the kernel may not refer to a single concrete plugin, since it has to be independent of concrete plugins - according to the plugin concept. Therefore, self-organization is proposed in the present paper as a solution to structure plugin-based applications. A pattern for linearly ordered plugins is presented. The end-user still retains the possibility to reorder the plugins manually according to his preferences. A sample application of the presented pattern in the context of graphical user interfaces is described",2006,0, 1174,Organizational Assimilation of Vertical Standards: An Integrative Model,"Vertical standards are complex networked technologies whose assimilation is subject to extensive interorganizational dependence and network effects. Classical theories of diffusion of technological innovation cannot sufficiently explain their assimilation without taking community-level effects into account. This paper introduces a two-level model of organizational assimilation of vertical standards which extends diffusion of innovations theory by including network effects. It combines the most consistently significant firm-level variables from diffusion of innovations with community-level variables subject to network effects. Results of a large-scale empirical test of the model in the insurance, reinsurance, and related financial services industries are presented. The test is the first of its kind in the growing vertical standards literature. The model is strongly supported and confirms the value of integrative approaches employing variables at both the firm and community levels",2007,0, 1175,Original Article Scenario inspections,"Today's software is often subject to attacks that exploit vulnerabilities. Since in the area of security, vulnerabilities are hard to find, quality assurance needs detailed guidance. Focusing on early quality assurance, we propose Security Inspection Scenarios as reading support for static quality assurance. They provide detailed guidance and clear and comprehensible structuring. As the vulnerabilities are partly dependent on the operating system and programming language used, we need to build generic scenarios and instantiate them. In this paper, we show how to create Security Inspection Scenarios, accompanied by a short example demonstrating their usage. After an analysis of the possible benefits of our approach, a proposal for an evaluation is presented. We assume our scenarios support practitioners in a beneficial way and are applicable in most development lifecycles which are interested in security aspects.",2009,0, 1176,OSS tools in a heterogeneous environment for embedded systems modelling: an analysis of adoptions of XMI,"The development and maintenance of UML models is an inherently distributed activity, where distribution may be geographical, temporal or both. 
It is therefore increasingly important to be able to interchange model information between tools - whether in a tool chain, for legacy reasons or because of the natural heterogeneity resulting from distributed development contexts. In this study we consider the current utility of XMI interchange for supporting OSS tool adoption to complement other tools in an embedded systems development context. We find that the current state of play is disappointing, and speculate that the problem lies both with the open standards and the way in which they are being supported and interpreted. There is a challenge here for the OSS community to take a lead as tool vendors gear up for XMI 2.0.",2005,0, 1177,"Outsourcing software parts of safety critical system A critical decision?","Assuring system integrity to a remote communication partner through attestation is a security concept which also is very important for safety-critical systems facing security threats. Most remote attestation methods are based on integrity measurement mechanisms embedded in the underlying hardware or software (e.g. operating system). Alternatively, the application software can measure itself, whereas the security of this approach relies on obscurity of the measurement mechanism. There are several tools available to introduce such obscurity through automatic code transformations, but these tools cannot be applied to safety-critical systems, because automatic code transformations are difficult to justify during safety certification. We present a software-based remote attestation concept for safety-critical systems and apply it to an automation system case study. The attestation concept utilizes the safety-related black channel principle to allow the application of code protection tools in order to protect the attestation mechanism without increasing the safety certification effort for the system.",2013,0, 1178,Overcoming Requirements Engineering Challenges: Lessons from Offshore Outsourcing,"With outsourcing on the rise, every relation between an outsourcer and a vendor calls for collaboration between multiple organizations across multiple locations. As part of a global IT-services organization with high process maturity, we have had many opportunities to understand the requirements engineering life cycle related to global software development. RE is a software project's most critical phase; the RE phase's success is essential for the project's success. Case studies from an Indian IT-services firm provide insights into the root causes of RE phase conflicts in client-vendor offshore-outsourcing relationships",2006,0, 1179,Overview of the CLEF-2005 Cross-Language Speech Retrieval Track,"

The task for the CLEF-2005 cross-language speech retrieval track was to identify topically coherent segments of English interviews in a known-boundary condition. Seven teams participated, performing both monolingual and cross-language searches of ASR transcripts, automatically generated metadata, and manually generated metadata. Results indicate that monolingual search technology is sufficiently accurate to be useful for some purposes (the best mean average precision was 0.13) and cross-language searching yielded results typical of those seen in other applications (with the best systems approximating monolingual mean average precision).

",2005,0, 1180,PACK: Profile Analysis using Clustering and Kurtosis to find molecular classifiers in cancer,"

Motivation: Elucidating the molecular taxonomy of cancers and finding biological and clinical markers from microarray experiments is problematic due to the large number of variables being measured. Feature selection methods that can identify relevant classifiers or that can remove likely false positives prior to supervised analysis are therefore desirable.

Results: We present a novel feature selection procedure based on a mixture model and a non-gaussianity measure of a gene's expression profile. The method can be used to find genes that define either small outlier subgroups or major subdivisions, depending on the sign of kurtosis. The method can also be used as a filtering step, prior to supervised analysis, in order to reduce the false discovery rate. We validate our methodology using six independent datasets by rediscovering major classifiers in ER negative and ER positive breast cancer and in prostate cancer. Furthermore, our method finds two novel subtypes within the basal subgroup of ER negative breast tumours, associated with apoptotic and immune response functions respectively, and with statistically different clinical outcome.

Availability: An R-function pack that implements the methods used here has been added to vabayelMix, available from (www.cran.r-project.org).

Contact: aet21@cam.ac.uk

Supplementary information: Supplementary information is available at Bioinformatics online.

",2006,0, 1181,Papier-Ma?che?: Toolkit support for tangible input,"Tangible user interfaces (TUIs) augment the physical world by integrating digital information with everyday physical objects. Currently, building these UIs requires ""getting down and dirty"" with input technologies such as computer vision. Consequently, only a small cadre of technology experts can currently build these UIs. Based on a literature review and structured interviews with nine TUI researchers, we created Papier-Mâché, a toolkit for building tangible interfaces using computer vision, electronic tags, and barcodes. Papier-Mache introduces a high-level event model for working with these technologies that facilitates technology portability. For example, an application can be prototyped with computer vision and deployed with RFID. We present an evaluation of our toolkit with six class projects and a user study with seven programmers, finding the input abstractions, technology portability, and monitoring window to be highly effective.",2004,0, 1182,Parallel Knowledge Base Development by Subject Matter Experts,"AbstractThis paper presents an experiment of parallel knowledge base development by subject matter experts, performed as part of the DARPAs Rapid Knowledge Formation Program. It introduces the Disciple-RKF development environment used in this experiment and proposes design guidelines for systems that support authoring of problem solving knowledge by subject matter experts. Finally, it compares Disciple-RKF with the other development environments from the same DARPA program, providing further support for the proposed guidelines.",2004,0, 1183,Parameterising Bayesian Networks,"This study presents a new structural health monitoring framework for complex degradation processes such as degradation of composites under fatigue loading. Since early detection and measurement of an observable damage marker in composite is very difficult, the proposed framework is established based on identifying and then monitoring “indirect damage indicators”. Dynamic Bayesian Network is utilized to integrate relevant damage models with any available monitoring data as well as other influential parameters. As the damage evolution process in composites is not fully explored, a technique consisting of extended Particle Filtering and Support Vector Regression is implemented to simultaneously estimate the damage model parameters as well as damage states in the presence of multiple measurements. The method is then applied to predict the time to failure of the component.",2017,0, 1184,Partition-Based Extraction of Cerebral Arteries from CT Angiography with Emphasis on Adaptive Tracking,"

In this paper a method to extract cerebral arteries from computed tomographic angiography (CTA) is proposed. Since CTA shows both bone and vessels, the examination of vessels is a difficult task. In the upper part of the brain, the arteries of main interest are not close to bone and can be well segmented out by thresholding and simple connected-component analysis. However in the lower part the separation is challenging due to the spatial closeness of bone and vessels and their overlapping intensity distributions. In this paper a CTA volume is partitioned into two sub-volumes according to the spatial relationship between bone and vessels. In the lower sub-volume, the concerning arteries are extracted by tracking the center line and detecting the border on each cross-section. The proposed tracking method can be characterized by the adaptive properties to the case of cerebral arteries in CTA. These properties improve the tracking continuity with less user-interaction.

",2005,0, 1185,Pathway Logic,"Non-small cell lung cancer (NSCLC) is a malignant tumor, and contains three major subtypes which are difficult to be distinguished at early stages of NSCLC. Many pathways work together to perform certain functions in cells. One might expect the high level of co-appearance or repression of pathways to distinguish different subtypes of NSCLC. However, it is difficult to detect coordinated regulations of pathways by existing methods. In our work, the coordinated regulations of pathways are detected using modified higher logic analysis of gene expression data. Specifically, we identify the genes whose regulation obeys a logic function by the modified higher logic analysis, which focuses on the relationships among the gene triplets that are not evident when genes are examined in a pairwise fashion. Then, the relationships among genes are mapped to pathways to predict the coordinated regulated relationships among pathways. By comparing coordinated regulations of pathways, we find that the regulation patterns of pathways which are associated with cell death are different in three subtypes of NSCLC. This method allows us to uncover co-appearance or repression of pathways in high level, and it has a potential to distinguish the subtypes for complex diseases.",2016,0, 1186,Pattern Acquisition Methods for Information Extraction Systems.,"Spectrum pooling can be considered as a first step towards a fully dynamic spectrum allocation strategy. It allows a license owner to share a sporadically used part of his licensed spectrum with a renter system, until he needs it himself. For the smooth operation of a spectrum pooling scheme, the renter system has to monitor the channel and extract the channel allocation information (CAI), i.e. it has to detect which parts of the shared spectrum the owner system accesses, in order to immediately vacate the frequency bands being required by the license owner and to gain access to the frequency bands which the license owner has stopped using. This work presents and compares two methods based on exploiting the cyclostationary properties of the spectrum owner signal for the extraction of the CAI in a specific spectrum pooling scenario, where the license owner is a GSM network and the spectrum renter is an OFDM based WLAN system.",2004,0, 1187,Patterns and Protocols for Agent-Oriented Software Development,"Previous research on collaboration posits collaboration process as a key factor for team performance. However, it is not fully understood which characteristics of a process make collaboration more efficient. In this research, we investigate the effect of collaboration process patterns on teamwork efficiency (e.g. time cost) in the software development setting. We propose a framework to identify frequent interaction structures referred to as collaboration process patterns and study their impact on the efficiency of software development. For purposes of pattern extraction, we propose an algorithm to extract sub-structures from software development processes stored in a software project tracking system. To analyze the effect of different collaboration process patterns, we conduct an empirical study to examine their correlation with issue resolution time using data from an open source software community. As a result, we identified several collaboration process patterns that are positively (or negatively) correlated with issue resolution time. 
We also found that this correlation may change with task complexity.",2012,0, 1188,Patterns of cooperative interaction: Linking ethnomethodology and design,"Patterns of Cooperative Interaction are regularities in the organisation of work, activity, and interaction among participants, and with, through, and around artifacts. These patterns are organised around a framework and are inspired by how such regularities are highlighted in ethnomethodologically-informed ethnographic studies of work and technology. They comprise a high level description and two or more comparable examples drawn from specific studies. Our contention is that these patterns form a useful resource for reusing findings from previous field studies, for enabling analysis and considering design in new settings. Previous work on the relationship between ethnomethodology and design has been concerned primarily in providing presentation frameworks and mechanisms, practical advice, schematisations of the ethnomethodologist's role, different possibilities of input at different stages in development, and various conceptualisations of the relationship between study and design. In contrast, this article seeks to first discuss the position of patterns relative to emergent major topics of interest of these studies. Subsequently it seeks to describe the case for the collection of patterns based on?findings, their comparison across studies and their general implications for design problems, rather than the concerns of?practical and methodological?interest outlined in the other work. Special attention is paid to our evaluations and to how they inform how the patterns collection may be read, used and contributed to, as well as to reflections on the composition of the collection as it has emerged. The paper finishes, first, with a discussion of how our work relates to other work on patterns, before some closing comments are made on the role of our patterns and ethnomethodology in systems design.",2004,0, 1189,Peer Reviews in Real Life - Motivators and Demotivators,"Peer reviews are an efficient quality assurance method in software development. Several reviewing methods exist to match the needs of different organizations and situations. Still, peer reviews are not practiced as commonly as one would suppose. This study aims at finding out what types of reviewing methods are in use in software companies, surveying the most important benefits of peer reviews and investigating reasons for not utilizing reviews. The study is carried out in companies locating in the Oulu region, but the results can be generalized to all small software companies. The results show that companies that use reviews have adjusted the process for their own needs. The main motivator for arranging reviews is the decreased amount of defects in products while the other aspects of reviews, such as process improvement or knowledge sharing are not considered as important. The main demotivator for reviews is lack of time and people resources.",2005,0, 1190,Peer-based computer-supported knowledge refinement: an empirical investigation,"

Nonexpert peer-based knowledge refinement, it turns out, is just as helpful as expert-centric knowledge refinement for improving the quality of results.

",2008,0, 1191,People with Disabilities: Automatic and Manual Evaluation of Web Sites,"

The quality of websites is a key factor in addressing all users. Besides functional issues, the design of each web page affects the ability to navigate and interact with web applications. Web designers are only slowly becoming aware of accessibility issues; more tools, better processes for creating high-quality websites, and a better understanding of guidelines are needed.

",2006,0, 1192,Performance and Cost Tradeoffs in Web Search,"The rise of the Internet of Things has led to an explosion of sensor computing platforms. The complexity and applications of IoT devices range from simple devices in vending machines to complex, interactive artificial intelligence in smart vehicles and drones. Developers target more aggressive objectives and protect market share through feature differentiation; they just choose between low-cost, and low-performance CPU-based systems, and high-performance custom platforms with hardware accelerators including GPUs and FPGAs. Both CPU-based and custom designs introduce a variety of design challenges: extreme pressure on time-to-market, design cost, and development risk drive a voracious demand for new CAD technologies to enable rapid, low cost design of effective IoT platforms with smaller design teams and lower risk. In this article, we present a generic IoT device design flow and discuss platform choices for IoT devices to efficiently tradeoff cost, power, performance and volume constraints: CPU-based systems and custom platforms that contain hardware accelerators including embedded GPUs and FPGAs. We demonstrate this design process through a driving application in computer vision. We also present current critical design automation needs for IoT development and demonstrate how our prior work in CAD for FPGAs and SoCs begin to address these needs.",2016,0, 1193,Performance Evaluation of Existing Approaches for Hybrid Ad Hoc Networks Across Mobility Models,"

There is an ongoing effort in the research community to efficiently interconnect Mobile Ad hoc Networks (MANETs) with fixed networks such as the Internet. Several approaches have been proposed within the MANET working group of the Internet Engineering Task Force (IETF), but there is still no clear evidence about which alternative is best suited for each mobility scenario, or about how mobility affects their performance. In this paper, we answer these questions through a simulation-based performance evaluation across mobility models. Our results show the performance trade-offs of existing proposals and the strong influence that the mobility pattern has on their behavior.

",2005,0, 1194,Performing and Reviewing Assessments of Contemporary Modularization Approaches: What Constitutes Reasonable Expectations?,The inherent difficulties in assessing contemporary modularization (CoM) approaches are considered. The motivation is provided for a model relating assessment methodologies to the maturity of the CoM approach.,2007,0, 1195,Performing systematic literature reviews in software engineering,"Software outsourcing partnership (SOP) is mutually trusted inter-organisational software development relationship between client and vendor organisations based on shared risks and benefits. SOP is different to conventional software development outsourcing relationship, SOP could be considered as a long term relation with mutual adjustment and renegotiations of tasks and commitment that exceed mere contractual obligations stated in an initial phase of the collaboration. The objective of this research is to identify various factors that are significant for vendors in conversion of their existing outsourcing contractual relationship to partnership. We have performed a systematic literature review for identification of the factors. We have identified a list of factors such as 'mutual interdependence and shared values', 'mutual trust', 'effective and timely communication', 'organisational proximity' and 'quality production' that play vital role in conversion of the existing outsourcing relationship to a partnership.",2014,0, 1196,Person identification system based on a trapezoid pyramid architecture of a gray-level image,"AbstractTo realize fully automated face recognition, there must be thorough processing from detection of the face in a scene to recognition. There have been many reports on face recognition, however, studies on detection available for recognition are very few. One of the difficulties comes from many variations of input condition such as illumination and background. As for access control systems such as security or login, input conditions can be rather fixed. Under this condition, fully automated person identification by the facial image is tried and achieved. The face in a scene is first sought by coarse-to-fine processing based on a trapezoid pyramid architecture of a gray-level image, and the result is applied to the recognition. The simple algorithm is implemented by software in a personal computer, and this realizes a series of processing within one second.",1997,0, 1197,Personal Opinion Surveys,"Although surveys are an extremely common research method, surveybased research is not an easy option. In this chapter, we use examples of three software engineering surveys to illustrate the advantages and pitfalls of using surveys. We discuss the six most important stages in survey-based research: setting the survey?s objectives; selecting the most appropriate survey design; constructing the survey instrument (concentrating on self-administered questionnaires); assessing the reliability and validity of the survey instrument; administering the instrument; and, finally, analysing the collected data. This chapter provides only an introduction to survey-based research; readers should consult the referenced literature for more detailed advice.",1977,0, 1198,Personalization for the Semantic Web,"A concept of the Semantic Web provides a frame-work for development of systems with higher levels of usability, operability and independence. 
One of the most intriguing ideas of this new paradigm is related to the vision of web systems requiring minimal involvement of humans in activities related to a pre-selection and eventually selection of most promising alternatives among all obtained options. In order to perform such a task, systems have to be able to mimic users' behaviour. To be more specific, such systems should know not only users' preferences but also users' levels of acceptance of alternatives that satisfy multiple criteria to a different degree. All this represents a challenge especially in the case when any information provided by any individuals is more likely to be vague than crisp. The approach proposed here aims at creating capability to mimic human acceptance patterns in order to make web systems more independent and human-like at the same time. To address these needs, the concepts of fuzziness and fuzzy reasoning are introduced as important aspects of systems supporting users in their selection processes. Architecture of such systems with ontologies representing fuzzy information and abilities to express vague acceptance levels of users is proposed and described. Feasibility of the presented approach is illustrated by a prototype of Semantic Web application - Hotel Reservation System.",2014,0, 1199,Person-Job Cognitive Style Fit for Software Developers: The Effect on Strain and Performance,"

Software developers face a constant barrage of innovations designed to improve the development environment. Yet stress/strain among software developers has been steadily increasing and is at an all-time high, while their productivity is often questioned. Why, if these innovations are meant to improve the environment, are developers more stressed and less productive than they should be? Using a combination of cognitive style and person-environment fit theories as the theoretical lens, this study examines one potential source of stress/strain and productivity impediment among software developers. Specifically, this paper examines the fit between the preferred cognitive style of a software developer and his or her perception of the cognitive style required by the job environment, and the effect of that fit on stress/strain and performance. Data collected from a field study of 123 (object-oriented) software developers suggest that performance decreases and stress increases as this gap between cognitive styles becomes wider. Using surface response methodology, the precise fit relationship is modeled. The interaction of the developer and the environment provides explanatory power above and beyond either of the factors separately, suggesting that studies examining strain and performance of developers should explicitly consider and measure the cognitive style fit between the software developer and the software development environment. In practice, managers can use the results to help recognize misfit, its consequences, and the appropriate interventions (such as training or person/task matching).

",2005,0, 1200,Persuasive Appliances: Goal Priming and Behavioral Response to Product-Integrated Energy Feedback,"

Previous studies have shown the embedding of feedback dialogue in electronic appliances to be a promising energy conservation tool if the correct goal-feedback match is made. The present study is the first in a series planned to explore contextual effects as moderators of both the goal and the feedback. Tentative results are reported of a study where two different levels of alternative goals (related/unrelated) are primed and compared as to theory predictions of their motivational strength. Results suggest enhanced performance when an action-related goal is primed, however, more participants must be included before final conclusions can be drawn.

",2006,0, 1201,Persuasive Online-Selling in Quality and Taste Domains,"

‘Quality & taste’ products like wine or fine cigars are one of the fastest growing product sectors in e-commerce. Online shops for these types of products require on the one side persuasive Web presentation and on the other side deep product knowledge. In that context recommender applications may help to create an enjoyable shopping experience for online users. The Advisor Suite framework is a knowledge-based conversational recommender system that aims at mediating between requirements and desires of online shoppers and technical characteristics of the product domain.

In this paper we present a conceptual scheme to classify the driving factors for creating a persuasive online shopping experience with recommender systems. We discuss these concepts on the basis of several fielded applications. Furthermore, we give qualitative results from a long-term evaluation in the domain of Cuban cigars.

",2006,0, 1202,Persuasiveness of a Mobile Lifestyle Coaching Application Using Social Facilitation,"

In a field study we compared usage and acceptance of a mobile lifestyle coaching application with a traditional web application. The participants (N=40) documented health behaviour (activity and healthy nutrition) daily, trying to reach a defined goal. In addition, health questionnaires and social facilitation features were provided to enhance motivation. Acceptance of the system was high in both groups. The mobile application was perceived as being more attractive and fun to use. Analysis of the usage patterns showed significant differences between the mobile and the web-based application. There was no significant difference between the two groups in terms of task compliance and health behaviour. The effectiveness of mobility and social facilitation was confounded by other variables, e.g. gender and age. Initial motivation for lifestyle change was related to the overall compliance and goal achievement of the participant. Implications show ways to strengthen the persuasiveness of health applications on mobile devices.

",2006,0, 1203,Pervasive challenges for software components.,"The object oriented programming paradigm often claimed to allow a faster development pace and higher quality of software. Within the design model, it is necessary for design classes to collaborate with one another. However, collaboration should be kept to an acceptable minimum i.e. better designing practice will introduce low coupling. If a design model is highly coupled, the system is difficult to implement, to test and to maintain overtime. In case of enhancing software, we need to introduce or remove module and in that case coupling is the most important factor to be considered because unnecessary coupling may make the system unstable and may cause reduction in the system's performance. So coupling is thought to be a desirable goal in software construction, leading to better values for external software qualities such as maintainability, reusability and so on. To test this hypothesis, a good measure of class coupling is needed. In this paper, the major issue of coupling measures have been analyzed with the objective of determining the most significant coupling measure.",2010,0, 1204,Petri nets and software engineering.,"Concerns the design and implementation of methodologies applicable to reverse engineering in the software process. In particular, forward engineering is normally in dark and cannot enlighten the advancement of the new process needs. At present, there exists no software engineering methodology capable of guaranteeing major achievements in reverse engineering once a software project has been started and it is to be upgraded in one particular manner. Techniques such as SADT, HIPO, the spiral methodologies, and the very waterfall model do not provide optimal results in reverse engineering because they are based on a systematic, quasi sequential approach to satisfy each specific step in a particular model. However, Petri nets offer the possibility of optimizing reverse software engineering at any stage of the software process and provide a flexible mechanism to ensure quality and guarantee continuity, and consistency in reverse engineering models. In particular, timed Petri nets (TPNs) offer a unique capability to control and optimize the time and cost involved in each individual task or milestone and similarly offer a powerful approach to interact different tasks associated with a specific place in a more flexible and effective manner",1995,0, 1205,Phylogenetic Profiling of Insertions and Deletions in Vertebrate Genomes,"

Micro-indels are small insertion or deletion events (indels) that occur during genome evolution. The study of micro-indels is important, both in order to better understand the underlying biological mechanisms, and also for improving the evolutionary models used in sequence alignment and phylogenetic analysis. The inference of micro-indels from multiple sequence alignments of related genomes poses a difficult computational problem, and is far more complicated than the related task of inferring the history of point mutations. We introduce a tree alignment based approach that is suitable for working with multiple genomes and that emphasizes the concept of indel history. By working with an appropriately restricted alignment model, we are able to propose an algorithm for inferring the optimal indel history of homologous sequences that is efficient for practical problems. Using data from the ENCODE project as well as related sequences from multiple primates, we are able to compare and contrast indel events in both coding and non-coding regions. The ability to work with multiple sequences allows us to refute a previous claim that indel rates are approximately fixed even when the mutation rate changes, and allows us to show that indel events are not neutral. In particular, we identify indel hotspots in the human genome.

",2006,0, 1206,Physiologic System Interfaces Using fNIR with Tactile Feedback for Improving Operator Effectiveness,"

This paper explores the validation of tactile mechanisms as an effective means of communications for integration into a physiologic system interface (PSI). Tactile communications can offer a channel that only minimally interferes with a primary or concurrent task. The PSI will use functional brain imaging techniques, specifically functional near-infrared imaging (fNIR), to determine cognitive workload in language and visual processing areas of the brain. The resulting closed-loop system will thus have the capability of providing the operator with necessary information by using the modality most available to the user, thus enabling effective multi-tasking and minimal task interference.

",2007,0, 1207,"Planning e-Government ? A Service-Oriented Agency Survey","This paper reviews different models in e-government in order to determine important parameters for e-government strategic planning. Lack of appropriate strategies and planning are some of the main challenges in e-government. For developing of e-government in local and national level, appropriate national planning is important. This paper tries to present a strategy based model for e-government planning. A proposed model consists of different dimensions which are strategy, finance, data and information, management, legality, logistics, human resource, technical infrastructure, security, technology, culture and marketing.",2007,0, 1208,Playing and Cheating in Ambient Entertainment,"We survey ways to extract information from users interacting in ambient intelligence entertainment environments. We speculate on the use of this information, including information obtained from physiological processes and brain-computer interfacing, in future game environments.",1997,0, 1209,Pluralistic multi-agent decision support system: a framework and an empirical test,

Recent research in decision support systems (DSSs) has focused on building active cooperative intelligent systems. Research in agent-based decision support is a promising stream in this direction. This paper proposes a framework for a pluralistic multi-agent decision support system (MADSS). The distinguishing feature of the proposed approach is its organization around human decision making process. The framework builds upon the decision support pyramid with agents organized into groups according to the phases of the problem solving model. We outline the design principles and develop architecture for MADSS. The framework is illustrated through an investment MADSS prototype. The results of the empirical test are presented.

,2004,0, 1210,Pointer analysis in the presence of dynamic class loading,"Short time load forecasting (STLF) is a pivotal concept in energy marketing, therefore, regulatory has defined penalty for load forecasting errors, which disturb energy market balance. Distributed generation (DG) has two effects on STLF models: first, in the presence of DG these models inevitably entail non-repeating data as well as load trends, and second, the share of DG in power generation is not constant. Therefore, the STLF model should be evaluated and improved continuously otherwise model accuracy will dwindle gradually. A lot of STLF models have been developed but there isn't proper tool to assess their accuracy in the presence of DG. For controlling the impact of probabilistic behaviour of distributed generators on load forecasting, West Tehran province power distribution company (WTPPDC) combined dynamic model, statistical control chart and Process capability analysis for continual evaluation and monitoring of the STLF model. In this study WTPPDC have used process capability analysis for evaluation of forecasting capability of model and control charts for detecting out of control error and accumulative bias in prediction in the presence of DG. Quality approach to load forecasting error controlling can help distribution companies to improve their model before forecasting errors reduce their profit and business confidence.",2012,0, 1211,Population Coding of Song Element Sequence in the Songbird Brain Nucleus HVC,"

Birdsong is a complex vocalization composed of various song elements organized according to sequential rules. To reveal the neural representation of song element sequence, we recorded the neural responses to all possible element pairs of stimuli in the Bengalese finch brain nucleus HVC. Our results show that each neuron has broad but differential response properties to element sequences. We calculated the time course of population activity vectors and mutual information between auditory stimuli and neural activities. The clusters of population vectors responding to second elements had a large overlap, whereas the clusters responding to first elements were clearly divided. At the same timing, confounded information also significantly increased. These results indicate that the song element sequence is encoded in a neural ensemble in HVC via population coding.

",2007,0, 1212,Population Dynamics in the Elderly: The Need for Age-Adjustment in National BioSurveillance Systems,"

With the growing threat of pandemic influenza, efforts to improve national surveillance to better predict and prevent this disease from affecting the most vulnerable populations are being undertaken. This paper examines the utility of Medicare data to obtain age-specific influenza hospitalization rates for historical analyses. We present a novel approach to describing and analyzing age-specific patterns of hospitalizations using Medicare data and show the implications of a dynamic population age distribution on hospitalization rates. We use these techniques to highlight the utility of implementing a real-time nationwide surveillance system for influenza cases and vaccination, and discuss opportunities to improve the existing system to inform policy and reduce the burden of influenza nationwide.

",2007,0, 1213,Positive Effects of Entertainment Technology on Human Behaviour,"Many envision a future in which personal service robots share our homes and take part in our daily lives. These robots should possess a certain “social intelligence”, so that people are willing, if not eager, to interact with them. In this endeavor, applied psychologists and roboticists have conducted numerous studies to identify the factors that affect social interactions between humans and robots, both positively and negatively. In order to ascertain the extent to which the social human-robot interaction might be influenced by robot behavior and a person's attitude towards technology, an experiment was conducted using the UG paradigm, in which participants (N=48) interacted with a robot, which displayed either animated or apathetic behavior. The results suggest that although the interaction with a robot displaying animated behavior is overall rated more favorably, people may nevertheless act differently towards such robots, depending on their perceived technological competence and their enthusiasm for technology.",2015,0, 1214,Potential prevention of medical errors in casualty surgery by using information technology,"Recent studies on adverse events in medicine have shown that errors in medicine are not rare and may cause severe harm. Quality problems in discharge letters may be a source of medical error. We have analyzed 150 discharge letters of an outpaitient clinic for casualty surgery in order to identify and to classify typical mistakes. A Failure Mode and Effect Analysis has been initiated in order to estimate the risk associated with different failure types. Possible IT solutions to prevent the identified problems have been assessed, focusing on expected effects and on feasibility. Our analyses have shown that there is a need to improve the quality of discharge letters, and that IT support based on the frequency and severity of certain error types has a good potential. We plan to introduce both pre-structured discharge letters and reminders in order to prevent the observed errors. They could improve both documentation quality and, if used during the patient visit, quality of treatment. Moreover, they could produce training effects on less experienced physicians. To be able to rapidly integrate such an adapted IT support into a comprehensive Healthcare Information System (HIS), it is important to establish a responsive IT infrastructure.",2004,0, 1215,"Power, Death and Love: A Trilogy for Entertainment","

In this paper we review the latest understandings about what emotions are and their roles in perception, cognition and action in the context of entertainment computing. We highlight the key influence emotions have in the perception of our surrounding world, as well as in the initiation of action. Further to this we propose a model for emotions and demonstrate how it could be used for entertainment computing. We then present a review of emotion based toys and show our own development in this area. We conclude our paper with a discussion on how entertainment systems would gain from a better and more comprehensive understanding of emotions.

",2005,0, 1216,Practical Approach to Specification and Conformance Testing of Distributed Network Applications,"

Standardization of infrastructure and services in distributed applications and frameworks requires ground methodological base. Design by Contract approach looks very promising as a candidate. It helps to obtain component-wise design, to separate concerns between developers accurately, and makes development of high quality complex systems a manageable process. Unfortunately, in its classic form it can hardly be applied to distributed network applications because of lack of adequate means to describe nondeterministic asynchronous events. We extend Design by Contract with capabilities to describe callbacks and asynchronous communication between components. The resulting method was used to specify distributed applications and to develop conformance test suites for them in automated manner. Specifications are developed in an extension of C language that makes them clear and useful for industrial developers and decreases greatly test construction effort. Practical results of numerous successful applications of the method are described. More information on the applications of the method can be found at the site of RedVerst group of ISP RAS [1].

",2005,0, 1217,Practical problems of programming in the large (PPPL).,"Practical Problems of Programming in the Large are those issues that IT industry experiences today when working on large software systems or when integrating software within entire organisations. Relevant and current topics include Software Architecture, Component Software, Middleware platforms, Model-Driven-Architecture, but also Enterprise Application Integration, and others. The workshop had practitioners and researchers concerned with technology transfer presenting their views on problems currently seen as most pressing in the above areas. In addition, the discussions were focussed by an ?Example of a problem of programming in the large? concerning the step-wise re-engineering a complex legacy information system. Participants discussed how they would approach this exemplary problem, identified key challenges and compared their solution strategies. In the afternoon, general problems of transferring academic research results into practice were discussed. The invited talk of Dave Thomas discussed current problematic trends in software engineering research, such as the concentration on general purpose programming languages for developing domain-specific enterprise software. Finally, we discussed specific needs for a software engineering education from industrial perspective.",1988,0, 1218,Practices and Supporting Structures for Mature Inquiry Culture in Distributed Software Development Projects,"As software specifications for complex systems are practically never entirely complete and consistent, the recipient of the specification needs domain knowledge in order to decide which parts of the system are specified clearly and which parts are specified ambiguously and thus need inquiry to get a more detailed specification. By analyzing the evidence gained in multiple-case study, the necessary components for achieving a mature inquiry culture in distributed software development derived from the practices at Siemens Program and System Engineering (PSE) are identified. These components are presented in three categories-pillars: project communication, requirements communication and inquiry practices",2006,0, 1219,Pragmatic Countermeasures for Implementation-related Vulnerabilities in Web Applications,"Security vulnerabilities continue to infect web applications, allowing attackers to access sensitive data and exploiting legitimate web sites as a hosting ground for malware. Consequently, researchers have focused on various approaches to detect and prevent critical classes of security vulnerabilities in web applications, including anomaly-based and misuse-based detection mechanisms, static and dynamic server-side and client-side web application security policy enforcement. This paper present a survey on web application security aspects includes critical vulnerabilities, hacking tools and also approaches to improve web application and websites security level.",2011,0, 1220,Precise Interprocedural Analysis using Random Interpretation,"We describe a unified framework for random interpretation that generalizes previous randomized intraprocedural analyses, and also extends naturally to efficient interprocedural analyses. There is no such natural extension known for deterministic algorithms. We present a general technique for extending any intraprocedural random interpreter to perform a context-sensitive interprocedural analysis with only polynomial increase in running time. 
This technique involves computing random summaries of procedures, which are complete and probabilistically sound. As an instantiation of this general technique, we obtain the first polynomial-time randomized algorithm that discovers all linear relationships interprocedurally in a linear program. We also obtain the first polynomial-time randomized algorithm for precise interprocedural value numbering over a program with unary uninterpreted functions. We present experimental evidence that quantifies the precision and relative speed of the analysis for discovering linear relationships along two dimensions: intraprocedural vs. interprocedural, and deterministic vs. randomized. We also present results that show the variation of the error probability in the randomized analysis with changes in algorithm parameters. These results suggest that the error probability is much lower than the existing conservative theoretical bounds.",2005,0, 1221,Predicting object-oriented software maintainability using multivariate adaptive regression splines,"Prediction of maintainability parameter for Object-Oriented Software using source code metrics is an area that has attracted the attention of several researchers in academia and industry. However, maintainability prediction of Service-Oriented software is a relatively unexplored area. In this work, we conduct an empirical analysis on maintainability prediction of eBay web services using several source code metrics. We consider eleven different types of source code metrics as input for developing a maintainability prediction model using Multivariate Adaptive Regression Splines (MARS) method. We compare and evaluate the performance of the maintainability prediction model with Multivariate Linear Regression (MLR) approach and Support Vector Machine (SVM). Eight different types of feature selection techniques have been implemented to reduce dimension and remove irrelevant features. The experiment results reveals that the maintainability prediction model developed using MARS method achieved better performance as compared to MLR and SVM methods. Experimental results also demonstrate that the model developed by considering a selected set of source code metrics by feature selection technique as input achieves better results as compared to the approach which considers all source code metrics.",2017,0, 1222,Predicting Software Metrics at Design Time,"This paper introduces an abstract high dependability framework for the implementation of embedded control software with hard real-time constraints. The framework specifies time-triggered sensor readings, atomic component invocations, actuator updates, and pattern switches independent of any implementation platform. In order to leverage model continuity, XML-based description of composite component informal description is required, which supports reuse of components and model information interoperability. By separating the platform-independent from the platform-dependent concerns, Consider a quality process control system in steel industry on a distributed real-time embedded environment, we implement a simplified high dependability design framework to prove the feasibility in the validation and synthesis of embedded control component execution.",2009,0, 1223,Prediction of Partners' Behaviors in Agent Negotiation under Open and Dynamic Environments,"Prediction of partners' behaviors in negotiation has been an active research direction in recent years in the area of multi-agent and agent system.
So by employing the prediction results, agents can modify their own negotiation strategies in order to achieve an agreement much quicker or to look after much higher benefits. Even though some of prediction strategies have been proposed by researchers, most of them are based on machine learning mechanisms which require a training process in advance. However, in most circumstances, the machine learning approaches might not work well for some kinds of agents whose behaviors are excluded in the training data. In order to address this issue, we propose three regression functions to predict agents' behaviors in this paper, which are linear, power and quadratic regression functions. The experimental results illustrate that the proposed functions can estimate partners' potential behaviors successfully and efficiently in different circumstances.",2007,0, 1224,Prediction of Protein Interaction with Neural Network-Based Feature Association Rule Mining,"

Prediction of protein interactions is one of the central problems in post-genomic biology. In this paper, we present an association rule-based protein interaction prediction method. We adopted neural network to cluster protein interaction data, and used information theory based feature selection method to reduce protein feature dimension. After model training, feature association rules are generated to interaction prediction by decoding a set of learned weights of trained neural network and by mining association rules. For model training, an initial network model was constructed with public Yeast protein interaction data considering their functional categories, set of features, and interaction partners. The prediction performance was compared with traditional simple association rule mining method. The experimental results show that proposed method has about 96.1% interaction prediction accuracy compared to simple association mining approach which achieved about 91.4% accuracy.

",2006,0, 1225,Prediction of Yeast Protein?Protein Interactions by Neural Feature Association Rule,"

In this paper, we present an association rule based protein interaction prediction method. We use neural network to cluster protein interaction data and feature selection method to reduce protein feature dimension. After this model training, association rules for protein interaction prediction are generated by decoding a set of learned weights of trained neural network and association rule mining. For model training, the initial network model was constructed with existing protein interaction data in terms of their functional categories and interactions. The protein interaction data of Yeast (S.cerevisiae) from MIPS and SGD are used. The prediction performance was compared with traditional simple association rule mining method. According to the experimental results, proposed method shows about 96.1% accuracy compared to simple association mining approach which achieved about 91.4%.

",2005,0, 1226,Predictive analyses for nonhomogeneous Poisson processes with power law using Bayesian approach,"Nonhomogeneous Poisson process (NHPP) also known as Weibull process with power law, has been widely used in modeling hardware reliability growth and detecting software failures. Although statistical inferences on the Weibull process have been studied extensively by various authors, relevant discussions on predictive analysis are scattered in the literature. It is well known that the predictive analysis is very useful for determining when to terminate the development testing process. This paper presents some results about predictive analyses for Weibull processes. Motivated by the demand on developing complex high-cost and high-reliability systems (e.g., weapon systems, aircraft generators, jet engines), we address several issues in single-sample and two-sample prediction associated closely with development testing program. Bayesian approaches based on noninformative prior are adopted to develop explicit solutions to these problems. We will apply our methodologies to two real examples from a radar system development and an electronics system development.",2007,0, 1227,Preliminary Results On Using Static Analysis Tools For Software Inspection,"Software inspection has been shown to be an effective defect removal practice, leading to higher quality software with lower field failures. Automated software inspection tools are emerging for identifying a subset of defects in a less labor-intensive manner than manual inspection. This paper investigates the use of automated inspection for a large-scale industrial software system at Nortel Networks. We propose and utilize a defect classification scheme for enumerating the types of defects that can be identified by automated inspections. Additionally, we demonstrate that automated code inspection faults can be used as efficient predictors of field failures and are effective for identifying fault-prone modules.",2004,0, 1228,Presentation of Arguments and Counterarguments for Tentative Scientific Knowledge,"

A key goal for a scientist is to find evidence to argue for or against universal statements (in effect first-order formulae) about the world. Building logic-based tools to support this activity could be potentially very useful for scientists to analyse new scientific findings using experimental results and established scientific knowledge. In effect, these logical tools would help scientists to present arguments and counterarguments for tentative scientific knowledge, and to share and discuss these with other scientists. To address this, in this paper, we explain how tentative and established scientific knowledge can be represented in logic, we show how first-order argumentation can be used for analysing scientific knowledge, and we extend our framework for evaluating the degree of conflict arising in scientific knowledge. We also discuss the applicability of recent developments in optimizing the impact and believability of arguments for the intended audience.

",2005,0, 1229,Presenting and Analysing your Data,"A simple method of illustrating and comparing performances of grid-connected PV (photovoltaic) power stations has been developed. The method is based on the introduction of some normalized parameters referring to system efficiency, insolation, and operation and maintenance data. It can be used for an immediate comparison of the performances of different PV power plants operating in different sites. Operating data for the 300 kW ENEA Delphos plant (Italy) have been evaluated using the proposed approach. The plant reliability and the system utilization are illustrated and discussed, together with maintenance and energy production data",1990,0, 1230,Presenting software engineering results using structured abstracts: a randomised experiment,"

When conducting a systematic literature review, researchers usually determine the relevance of primary studies on the basis of the title and abstract. However, experience indicates that the abstracts for many software engineering papers are of too poor a quality to be used for this purpose. A solution adopted in other domains is to employ structured abstracts to improve the quality of information provided. This study consists of a formal experiment to investigate whether structured abstracts are more complete and easier to understand than non-structured abstracts for papers that describe software engineering experiments. We constructed structured versions of the abstracts for a random selection of 25 papers describing software engineering experiments. The 64 participants were each presented with one abstract in its original unstructured form and one in a structured form, and for each one were asked to assess its clarity (measured on a scale of 1 to 10) and completeness (measured with a questionnaire that used 18 items). Based on a regression analysis that adjusted for participant, abstract, type of abstract seen first, knowledge of structured abstracts, software engineering role, and preference for conventional or structured abstracts, the use of structured abstracts increased the completeness score by 6.65 (SE 0.37, p < 0.001) and the clarity score by 2.98 (SE 0.23, p < 0.001). 57 participants reported their preferences regarding structured abstracts: 13 (23%) had no preference; 40 (70%) preferred structured abstracts; four preferred conventional abstracts. Many conventional software engineering abstracts omit important information. Our study is consistent with studies from other disciplines and confirms that structured abstracts can improve both information content and readability. Although care must be taken to develop appropriate structures for different types of article, we recommend that Software Engineering journals and conferences adopt structured abstracts.

",2008,0, 1231,PRESERVING PRIVACY IN ASSOCIATION RULE MINING,"Privacy-preserving data mining [Agrawal, R., et al., May 2000] has recently emerged to address one of the negative sides of data mining technology: the threat to individual privacy. For example, through data mining, one is able to infer sensitive information, including personal information or even patterns, from non-sensitive information or unclassified data. There have been two broad approaches for privacy-preserving data mining. The first approach is to alter the data before delivery to the data miner so that real values are obscured. The second approach assumes the data is distributed between two or more sites, and these sites cooperate to learn the global data mining results without revealing the data at their individual sites. Given specific rules to be hidden, many data altering techniques for hiding association, classification and clustering rules have been proposed. However, to specify hidden rules, entire data mining process needs to be executed. For some applications, we are only interested in hiding certain sensitive items. In this work, we assume that only sensitive items are given and propose two algorithms to modify data in database so that sensitive items cannot be inferred through association rules mining algorithms. Examples illustrating the proposed algorithms are given. The efficiency of the proposed approach is further compared with Dasseni etc. (2001) approach. It is shown that our approach required less number of databases scanning and prune more number of hidden rules. However, our approach must hide all rules containing the hidden items on the right hand side, where Dasseni's approach can hide specific rules.",2004,0, 1232,Privacy Considerations in Location-Based Advertising,"Location-based services (LBS) have become an immensely valuable source of real-time information and guidance. Nonetheless, the potential abuse of users' sensitive personal data by an LBS server is evolving into a serious concern. Privacy concerns in LBS exist on two fronts: location privacy and query privacy. In this paper we investigate issues related to query privacy. In particular, we aim to prevent the LBS server from correlating the service attribute, e.g., bar/tavern, in the query to the user's real-world identity. Location obfuscation using spatial generalization aided by anonymization of LBS queries is a conventional means to this end. However, effectiveness of this technique would abate in continuous LBS scenarios, i.e., where users are moving and recurrently requesting for LBS. In this paper, we present a novel query-perturbation-based scheme that protects query privacy in continuous LBS even when user-identities are revealed. Unlike most exiting works, our scheme does not require the presence of a trusted third party.",2011,0, 1233,Privacy is Linking Permission to Purpose?,"Humans have an amazing ability to bootstrap new knowledge. The concept of structural bootstrapping refers to mechanisms relying on prior knowledge, sensorimotor experience, and inference that can be implemented in robotic systems and employed to speed up learning and problem solving in new environments. In this context, the interplay between the symbolic encoding of the sensorimotor information, prior knowledge, planning, and natural language understanding plays a significant role. In this paper, we show how the symbolic descriptions of the world can be generated on the fly from the continuous robot's memory. 
We also introduce a multi-purpose natural language understanding framework that processes human spoken utterances and generates planner goals as well as symbolic descriptions of the world and human actions. Both components were tested on the humanoid robot ARMAR-III in a scenario requiring planning and plan recognition based on human-robot communication.",2015,0, 1234,Probabilistic Document Length Priors for Language Models,"In this paper we present a new language model based on an odds formula, which explicitly incorporates document length as a parameter. Furthermore, a new smoothing method called exponential smoothing is introduced, which can be combined with most language models. We present experimental results for various language models and smoothing methods on a collection with large document length variation, and show that our new methods compare favorably with the best approaches known so far.",2008,0, 1235,Probabilistic Techniques for Corporate Blog Mining,"Corporate innovation continues to be the subject of heightened interest in the business press and the increasing focus of international conferences. In addition, there has been increased attention paid to technology management in the services sector, as evidenced by a recent series of international technical conferences. In this paper, two recently published studies on innovation are analyzed to ascertain the relative involvement of services and goods companies; this analysis relied on a technique labelled data surface mining (DSM). The current findings reinforce the previously reported observation that innovation appears to be the purview of the goods sector. The issues of technology management especially relevant to the services sector are presented. These issues are of critical importance in light of the fact that the services sector represents 80% (GDP and/or employment) of the United States economy and is of increasing importance in the global economy.",2008,0, 1236,Problem Oriented Software Engineering: Solving the Package Router Control Problem,"Problem orientation is gaining interest as a way of approaching the development of software intensive systems, and yet, a significant example that explores its use is missing from the literature. In this paper, we present the basic elements of Problem Oriented Software Engineering (POSE), which aims at bringing both nonformal and formal aspects of software development together in a single framework. We provide an example of a detailed and systematic POSE development of a software problem: that of designing the controller for a package router. The problem is drawn from the literature, but the analysis presented here is new. The aim of the example is twofold: to illustrate the main aspects of POSE and how it supports software engineering design and to demonstrate how a nontrivial problem can be dealt with by the approach.",2008,0, 1237,Problem-based learning and self-efficacy: How a capstone course prepares students for a profession,"Problem-based learning (PBL) is apprenticeship for real-life problem solving, helping students acquire the knowledge and skills required in the workplace. Although the acquisition of knowledge and skills makes it possible for performance to occur, without self-efficacy the performance may not even be attempted. I examined how student self-efficacy, as it relates to being software development professionals, changed while involved in a PBL environment.
Thirty-one undergraduate university computer science students completed a 16-week capstone course in software engineering during their final semester prior to graduation. Specific instructional strategies used in PBLnamely the use of authentic problems of practice, collaboration, and reflectionare presented as the catalyst for students' improved self-efficacy. Using a self-efficacy scale as pre-and postmeasures, and guided journal entries as process data, students were observed to increase their levels of self-efficacy.",2005,0, 1238,Process Artifacts Defined as an Aspectual Service to System Models,"Process artifacts identified from a process description often implicitly bias and cross-cut the definition of generic services from various tools that assist/automate process activities. The resulting toolsupport is tightly coupled with the process definition it supports, leading to poor adaptability when the required artifacts or process activities evolve/change. This issue is of further concern while providing toolsupport for assisting knowledge-intensive process activities through an interactive exploration of related knowledge-bases. Therefore, our focus is on early separation of process related cross-cutting concerns from generic tool-support services for creating, browsing, accessing, querying, inferencing, and visualizing associated knowledge-bases. We discuss our approach in the context of designing tool support for a system security Certification and Accreditation (C&A) process automation based on service-oriented and aspect-oriented design paradigms.",2006,0, 1239,Process Ownership Challenges in IT-Enabled Transformation of Interorganizational Business Processes,"The paper investigates the challenges of process ownership in business processes crossing organizational boundaries. A literature review explores the research traditions of business process reengineering, interorganizational systems (IOS), workflow management, and system development with regard to process ownership and the changing role from intraorganizational issues to inter-organizational issues. The result is a list of relevant process owner tasks, classified by different issues in which a shift of focus is suggested. A case study of a governmental process portal serves the purpose of exemplifying the novel process ownership challenges in an interorganizational context. Analyzing the case from the process ownership perspective reveals that the proposed shift of focus is indeed applicable and how neglecting these new challenges are a barrier for successful transformation. With the categorization and shift of focus suggested in this paper, future research may investigate in more detail the dilemmas of distributed versus centralized ownership and bring out different models of interorganizational process ownership to support handling the related issues in an integrated way.",2004,0, 1240,Program comprehension as fact finding,"This study examines the effect of individual differences on the program comprehension strategies of users working with an unfamiliar programming system. Participants of varying programming expertise were given a battery of psychological tests, a brief introduction to a statistical programming environment, and a 20-minute debugging task. 
Our data show three distinct comprehension strategies that were related to programming experience, but individuals with stronger domain knowledge for specific bugs tended to succeed.",2003,0, 1241,Program Restructuring Through Clustering Techniques,"Program restructuring is a key method for improving the quality of ill-structured programs, thereby increasing the understandability and reducing the maintenance cost. It is a challenging task and a great deal of research is still ongoing. This work presents an approach to program restructuring at the function level, based on clustering techniques with cohesion as the major concern. Clustering has been widely used to group related entities together. The approach focuses on automated support for identifying ill-structured or low-cohesive functions and providing heuristic advice in both the development and evolution phases. A new similarity measure is defined and studied intensively. The approach is applied to restructure a real industrial program. The empirical observations show that the heuristic advice provided by the approach can help software designers make better decision of why and how to restructure a program. Specific source code level software metrics are presented to demonstrate the value of the approach.",2004,0, 1242,Program transformations for light-weight CPU accounting and control in the Java virtual machine,"

This article constitutes a thorough presentation of an original scheme for portable CPU accounting and control in Java, which is based on program transformation techniques at the bytecode level and can be used with every standard Java Virtual Machine. In our approach applications, middleware, and even the standard Java runtime libraries (i.e., the Java Development Kit) are modified in a fully portable way, in order to expose details regarding the execution of threads. These transformations however incur a certain overhead at runtime. Further contributions of this article are the systematic review of the origin of such overheads and the description of a new static path prediction scheme targeted at reducing them.

",2008,0, 1243,Programming for Artists: A Visual Language for Expressive Lighting Design,"Programming is a process of formalizing and codifying knowledge, and, as a result, programming languages are designed for generalists trained in this process of formalization. Artists, whose training focuses on skill and tacit knowledge, are marginalized by existing tools. By designing visual languages that take advantage of an artist's skills in visual perception and expression, we can allow that artist to take advantage of the expressive potential that modern computing offers. In particular, this paper will look at lighting design for interactive, virtual environments, and augmenting an existing programming language to allow artists to leverage their skills in the pragmatics of that medium.",2005,0, 1244,Programs are knowledge bases,"This paper discusses the current practices of assessing students' Java programming language concepts and skills by using educational multiplayer online role playing game. Some of the pitfalls of course design have been identified which need to be addressed to increase students' motivation and to reduce students' difficulties in grasping Java programming concepts. As a solution, a game based assessment and exercise design is proposed for Java programming which will keep students on the task, and provide motivation and challenges. The game is developed and built on web browsers with AJAX technology. The web-based game enables students to play it across different operating system platforms and AJAX makes it possible for the game to be able to provide students with rapid response and instant interplayer interactions.",2010,0, 1245,Progressive Multilayer Reconfiguration for Software DSM Systems in Non-Dedicated Clusters,"This paper is aimed at resolving the reconfiguration problem of software distributed shared memory (DSM) systems in non-dedicated clusters. Different to the past studies focused on dedicated clusters, the goal of this study focuses on prompting system-wide jobs' throughput rather than DSM programs' performance. We invent an approach called progressive multilayer reconfiguration (PMR) for DSM systems. As named, reconfiguration is divided into three different layers, i.e., processor, application, and node in this approach. According to the state transfer of the workload, the three different layer reconfigurations are progressively and respectively performed during the execution of DSM applications. The preliminary results show that PMR can not only utilize abundant CPU cycles available in non-dedicated clusters for DSM applications but also minimize the slowdown of local jobs caused by the disturb from DSM applications.",2005,0, 1246,Promoting Physical Activity Through Internet: A Persuasive Technology View,"

Participation in regular physical activity (PA) is critical to sustaining good health. While a few attempts have been made to use internet-based interventions to promote PA, no system review has been conducted in determining the effectiveness of the intervention. The purpose of this study was to conduct a review under the framework of persuasive technology (PT). Based on a comprehensive literature search, nine experimental studies were identified and evaluated using the PT functional triad defined by Fogg in 2003[1]. It was found that only two studies led to short-term impact in promoting PA and, furthermore, two studies have found that the intervention based traditional print materials worked better. From a perspective of PT, none of the studies designed its intervention based on the framework of captology and few took full advantages of PT functions. Designing new-generation, PT based internet intervention and examining related human factors are urgently needed.

",2007,0, 1247,Properties of two?s complement floating point notations,"This paper presents the design of a combined two's complement and IEEE 754-compliant floating-point comparator. Unlike previous designs, this comparator incorporates both operand types into one unit, while still maintaining low area and high speed. The comparator design uses a novel magnitude comparator with logarithmic delay, plus additional logic to handle both two's complement and floating point operands. The comparator fully supports 32-bit and 64-bit floating-point comparisons, as defined in the IEEE 754 standard, as well as 32-bit and 64-bit two's complement comparisons. Area and delay estimates are presented for designs implemented in AMI C5N 0.5 μm CMOS technology.",2005,0, 1248,Propositional Logic Constraint Patterns and Their Use in UML-Based Conceptual Modeling and Analysis,"An important conceptual modeling activity in the development of database, object-oriented and agent-oriented systems is the capture and expression of domain constraints governing underlying data and object states. UML is increasingly used for capturing conceptual models, as it supports conceptual modeling of arbitrary domains, and has extensible notation allowing capture of invariant constraints both in the class diagram notation and in the separately denoted OCL syntax. However, a need exists for increased formalism in constraint capture that does not sacrifice ease of use for the analyst. In this paper, we codify a set of invariant patterns formalized for capturing a rich category of propositional constraints on class diagrams. We use tools of Boolean logic to set out the distinction between these patterns, applying them in modeling by way of example. We use graph notation to systematically uncover constraints hidden in the diagrams. We present data collected from applications across different domains, supporting the importance of ""pattern-finding"" for n-variable propositional constraints using general graph theoretic methods. This approach enriches UML-based conceptual modeling for greater completeness, consistency, and correctness by formalizing the syntax and semantics of these constraint patterns, which has not been done in a comprehensive manner before now",2007,0, 1249,Proselytizing pervasive computing education: a strategy and approach influenced by human-computer interaction,"A course on pervasive computing should be structured around key functions throughout a systems development process to cover common underlying concerns throughout science and engineering disciplines - development of design rationale, prototyping, evaluation, and component reuse. However, broader considerations of usage context and appreciation for research and methodological contributions from other disciplines must be strongly factored into course planning. To achieve both goals, we suggest learning objectives, a general strategy and approach using case-based learning.",2004,0, 1250,Protection of database security via collaborative inference detection.,"Malicious users can exploit the correlation among data to infer sensitive information from a series of seemingly innocuous data accesses. Thus, we develop an inference violation detection system to protect sensitive data content. Based on data dependency, database schema and semantic knowledge, we constructed a semantic inference model (SIM) that represents the possible inference channels from any attribute to the pre-assigned sensitive attributes. 
The SIM is then instantiated to a semantic inference graph (SIG) for query-time inference violation detection. For a single user case, when a user poses a query, the detection system will examine his/her past query log and calculate the probability of inferring sensitive information. The query request will be denied if the inference probability exceeds the prespecified threshold. For multi-user cases, the users may share their query answers to increase the inference probability. Therefore, we develop a model to evaluate collaborative inference based on the query sequences of collaborators and their task-sensitive collaboration levels. Experimental studies reveal that information authoritativeness, communication fidelity and honesty in collaboration are three key factors that affect the level of achievable collaboration. An example is given to illustrate the use of the proposed technique to prevent multiple collaborative users from deriving sensitive information via inference.",2008,0, 1251,Protocol analysis: a neglected practice,"Geographic routing protocols greatly reduce the requirements of topology storage and provide flexibility in the accommodation of the dynamic behavior of ad hoc networks. This paper presents performance evaluations and comparisons of two geographic routing protocols and the popular AODV protocol. The trade-offs among the average path reliabilities, average conditional delays, average conditional number of hops, and area spectral efficiencies and the effects of various parameters are illustrated for finite ad hoc networks with randomly placed mobiles. This paper uses a dual method of closed-form analysis and simple simulation that is applicable to most routing protocols and provides a much more realistic performance evaluation than has previously been possible. Some features included in the new analysis are shadowing, exclusion and guard zones, and distance-dependent fading.",2014,0, 1252,Protocol Conformance Testing a SIP Registrar: an Industrial Application of Formal Methods,"Various research prototypes and a well-founded theory of model based testing (MBT) suggests the application of MBT to real-world problems. In this article we report on applying the well-known TGV tool for protocol conformance testing of a Session Initiation Protocol (SIP) server. Particularly, we discuss the performed abstractions along with corresponding rationales. Furthermore, we show how to use structural and fault-based techniques for test purpose design. We present first empirical results obtained from applying our test cases to a commercial implementation and to a popular open source implementation of a SIP Registrar. Notably, in both implementations our input output labeled transition system model proved successful in revealing severe violations of the protocol.",2007,0, 1253,Prototype System for Semantic Retrieval of Neurological PET Images,"

Positron Emission Tomography (PET) is used within neurology to study the underlying biochemical basis of cognitive functioning. Due to the inherent lack of anatomical information its study in conjunction with image retrieval is limited. Content based image retrieval (CBIR) relies on visual features to quantify and classify images with a degree of domain specific saliency. Numerous CBIR systems have been developed; semantic retrieval, however, has not been performed. This paper gives a detailed account of the framework of visual features and semantic information utilized within a prototype image retrieval system for PET neurological data. Images from patients diagnosed with different and known forms of Dementia are studied and compared to controls. Image characteristics with medical saliency are isolated in a top down manner, from the needs of the clinician - to the explicit visual content. These features are represented via Gabor wavelets and mean activity levels of specific anatomical regions. Preliminary results demonstrate that these representations are effective in reflecting image characteristics and subject diagnosis; consequently they are efficient indices within a semantic retrieval system.

",2008,0, 1254,Proving the concept of a data broker as an emergent alternative to supra-enterprise EPR systems.,"Electronic Patient Records systems configured into large enterprise models have become the assumed best route forward. In England, as in several other countries, this has expanded to a major meta-enterprise procurement programme. However, concerns are raised that such systems lack user ownership, and experience from other sectors shows difficulties with large enterprise systems. At a time of great change and once again shifting organizations, is this move simply building large and ponderous edifices with unstable materials? Latest software engineering research is now demonstrating the potential of an alternative model, enabling trusted information brokers to search out in real time at point of use data held in registered local and departmental systems. If successful, this could enable a new and less cumbersome paradigm. The data could move where needed whatever the service configuration. A concept demonstrator has been built set in the context of health and social care in England. It is important for all technological support to the health sector to be reviewed as new technologies emerge so as to identify and exploit new opportunities, and the results of this 3 year project show that the health record information broker route merits further investigative research.",2005,0, 1255,Pursuing Electronic Health,"Current emphasis on the computerization of healthcare makes research on physician technology acceptance increasingly important. By understanding factors in physician acceptance and use of information technology, administrators may be able to increase the value of systems in their organizations. The objective of this study was to examine the effect of physician characteristics on effort expectancy and performance expectancy in the context of forthcoming EHR. This was achieved through a survey of 256 physicians in an academic medical group practice just before the implementation of a vendor-based EHR. Openness to change in clinical practice and being a male were positively related to effort expectancy and performance expectancy. Age and computer self-efficacy had negative and positive associations with effort expectancy respectively. These results contribute to the growing literature on technology acceptance in healthcare and highlight some specific physician characteristics that administrators may be able to leverage when managing EHR implementations.",2012,0, 1256,Python,"Software developers use Application Programming Interfaces (APIs) of libraries and frameworks extensively while writing programs. In this context, the recommendations provided in code completion pop-ups help developers choose the desired methods. The candidate lists recommended by these tools, however, tend to be large, ordered alphabetically and sometimes even incomplete. A fair amount of work has been done recently to improve the relevance of these code completion results, especially for statically typed languages like Java. However, these proposed techniques rely on the static type of the object and are therefore inapplicable for a dynamically typed language like Python. In this paper, we present PyReco, an intelligent code completion system for Python which uses the mined API usages from open source repositories to order the results based on relevance rather than the conventional alphabetic order. 
To recommend suggestions that are relevant for a working context, a nearest neighbor classifier is used to identify the best matching usage among all the extracted usage patterns. To evaluate the effectiveness of our system, the code completion queries are automatically extracted from projects and tested quantitatively using a ten-fold cross validation technique. The evaluation shows that our approach outperforms the alphabetically ordered API recommendation systems in recommending APIs for standard as well as third-party libraries.",2016,0, 1257,QoS for wireless interactive multimedia streaming,"With the continuous development of microelectronic technology, optoelectronic technology and wireless network technology, computing has entered the mobile era: PDAs, portable computers and wearable devices are becoming increasingly prevalent in mobile computing systems, and it has become possible to connect mobile users seamlessly through the network. In this paper, a wireless interactive multimedia system is designed, implemented and tested. A wireless interactive platform is built over the wireless network environment, through which multimedia data transmission and information interaction take place between mobile terminals and a data service center, or between mobile terminals themselves.",2014,0, 1258,Quality Assessment of Modeling and Simulation of Network-Centric Military Systems,"Modeling and simulation (M&S) of network-centric military systems poses significant technical challenges. A network-centric military system (also known as network-centric operations, network-centric warfare or FORCEnet) is a system of systems aligning and integrating other systems such as battlefields, computers, databases, mobile devices, people (users), processes, satellites, sensors, warriors, shooters, and weapons into a globally networked distributed complex system. Characteristics of a network-centric military system are described using a layered architecture. Challenges for M&S of network-centric military systems are presented. The paper focuses on the quality assessment challenge and advocates the use of a quality model with four perspectives: product, process, project, and people. A hierarchy of quality indicators is presented for network-centric military system M&S. An approach is described for conducting collaborative assessment of M&S quality using the quality indicators.",1991,0, 1259,Quantifying identifier quality: an analysis of trends,"In this paper, spatial and temporal pattern analysis was performed to obtain a better understanding of the water quality in the Min River Basin. The water quality data of 34 sites for five indicators, Chemical Oxygen Demand (COD), Biochemical Oxygen Demand (BOD5), Total Nitrogen (TN), Total Phosphorus (TP), and Dissolved Oxygen (DO) between 2001 and 2010 were analyzed. The spatial trends of water quality for the 2000s were presented. An interpolation approach based on the distance along the river network was used to reveal the spatial variation of water quality in the year 2010. A non-parametric seasonal Mann-Kendall's test was employed to determine the significance of temporal trends for each parameter of each site.
Basically, the water quality of the Min River basin is good and most of the segments can satisfy the national environmental standards of Class III. Moreover, certain spatial and temporal patterns can be discovered through the analysis. The best water quality is in the Dadu River, the Qingyi River, and the upper parts of the Min River. The water quality of the Fu River is the worst in the basin. The middle parts of the Min River are worse than the lower parts and better than the Fu River. From temporal perspective, COD showed significant upward trend, whereas BOD5 showed significant downward trend. TN and TP showed increasing trends in a similar spatial manner, which exhibits the signs of degradation in water quality. Particularly, the concentration of TN exhibited significant upward trend. Nevertheless, DO showed some upward trends. The reasons of the spatial-temporal characteristics were also discussed and some water management strategies were recommended.",2012,0, 1260,Quantifying Software Process Improvement,"TD-SCDMA is known as the low chip-rate time division duplex (LCR TDD) mode of the third generation partnership program (3GPP). It incorporates a combination of time and code division multiple access schemes and is well suited for advanced techniques like joint detection (JD) and spatial processing. This work focuses on the performance and memory requirements of a completely software-based zero-forcing block linear (ZF-BLE) JD and spatial processing receiver for a TD-SCDMA system, which combats intersymbol interference (ISI) as well as multi access interference (MAI). The analysis is based on the TMS320C6416T, which is a 1 GHz DSP with four 16-bit MAC units and a 1 Mbyte level-2 unified on-chip memory space.",2004,0, 1261,Quantifying the Effects of Aspect-Oriented Programming: A Maintenance Study,"One of the main promises of aspect-oriented programming (AOP) is to promote improved modularization of crosscutting concerns, thereby enhancing the software stability in the presence of changes. This paper presents a quantitative study that assesses the positive and negative effects of AOP on typical maintenance activities of a Web information system. The study consists of a systematic comparison between the object-oriented and the aspect-oriented versions of the same application in order to assess to what extent each solution provides maintainable software decompositions. Our analysis was driven by fundamental modularity attributes, such as coupling, cohesion, conciseness, and separation of concerns. We have found that the aspect-oriented design has exhibited superior stability and reusability through the changes, as it has resulted in fewer lines of code, improved separation of concerns, weaker coupling, and lower intra-component complexity",2006,0, 1262,QuARS: A Tool for Analyzing Requirements,"The ability of the human mind to perceive visual information makes visualization not only useful, but a powerful tool for information discovery. Answering questions about complex relationships requires the analyst to choose a statistical analysis technique that makes relationships visually discernible. Often the proper technique is dependent on the characteristics of the dataset, such as dependency among variables, sample size, and types of data (ordinal or categorical). In this work, we propose a web based interface approach that visualizes various statistical tests and displays the distributions of data using color coding schemes. 
With our system, a user can select multiple variables interactively, and the resulting selections will be visualized to help the user understand the data and statistical formulas used to show it. This capability allows a user to quickly evaluate different subsets of a large, complex dataset for statistical correlations. To validate our approach, we performed a controlled user study to evaluate the ease of use of our system, and to test the effectiveness of our interface. We see our system as directly applicable to data analytical tasks, as well as a useful teaching tool for those learning data analytics.",2016,0, 1263,Query Planning for Searching Inter-dependent Deep-Web Databases,"

Increasingly, many data sources appear as online databases, hidden behind query forms, thus forming what is referred to as the deep web. It is desirable to have systems that can provide a high-level and simple interface for users to query such data sources, and can automate data retrieval from the deep web. However, such systems need to address the following challenges. First, in most cases, no single database can provide all desired data, and therefore, multiple different databases need to be queried for a given user query. Second, due to the dependencies present between the deep-web databases, certain databases must be queried before others. Third, some database may not be available at certain times because of network or hardware problems, and therefore, the query planning should be capable of dealing with unavailable databases and generating alternative plans when the optimal one is not feasible.

This paper considers query planning in the context of a deep-web integration system. We have developed a dynamic query planner to generate an efficient query order based on the database dependencies. Our query planner is able to select the top K query plans. We also develop cost models suitable for query planning for deep web mining. Our implementation and evaluation has been made in the context of a bioinformatics system, SNPMiner. We have compared our algorithm with a naive algorithm and the optimal algorithm. We show that for the 30 queries we used, our algorithm outperformed the naive algorithm and obtained very similar results as the optimal algorithm. Our experiments also show the scalability of our system with respect to the number of data sources involved and the number of query terms.

",2008,0, 1264,RankProd: A bioconductor package for detecting differentially expressed genes in meta-analysis,"

Summary: While meta-analysis provides a powerful tool for analyzing microarray experiments by combining data from multiple studies, it presents unique computational challenges. The Bioconductor package RankProd provides a new and intuitive tool for this purpose in detecting differentially expressed genes under two experimental conditions. The package modifies and extends the rank product method proposed by Breitling et al., [(2004) FEBS Lett., 573, 83--92] to integrate multiple microarray studies from different laboratories and/or platforms. It offers several advantages over t-test based methods and accepts pre-processed expression datasets produced from a wide variety of platforms. The significance of the detection is assessed by a non-parametric permutation test, and the associated P-value and false discovery rate (FDR) are included in the output alongside the genes that are detected by user-defined criteria. A visualization plot is provided to view actual expression levels for each gene with estimated significance measurements.

Availability: RankProd is available at Bioconductor http://www.bioconductor.org. A web-based interface will soon be available at http://cactus.salk.edu/RankProd

Contact: fhong@salk.edu

Supplementary information: Supplementary data are available at Bioinformatics online.

",2006,0, 1265,Rapid Benchmarking for Semantic Web Knowledge Base Systems,"Adaptive learning (AL) systems have long been one of the promising solutions to web-based personalized learning. This paper proposes a framework to solve the problem of integrating knowledge resources on the Web based on Semantic Web languages. As a consequence, knowledge modules of an AL system can be shared and reused on the Internet, resulting a service-based approach to developing distributed AL systems. Based on this framework, a prototype AL system was implemented to demonstrate how the knowledge modules of an AL system can be developed and integrated. Finally, a preliminary prototype evaluation result shows that the performance of the service-based approach is acceptable under light to middle traffics of requests based on current web service implementations.",2008,0, 1266,RE Approach for e-Business Advantage,"Wireless Sensor Network is a group of distributed sensor nodes that has many applications those can vary from simple such as sensing temperature, pressure, humidity from the environment to critical applications that includes traffic monitoring, target tracking, patient monitoring etc. These critical applications are real time applications those can be possible with the Mobile Wireless Sensor Network (MWSN). MWSN can support the reliability of real time applications as mentioned. The basic of MWSN is handover procedure that can help them to maintain the existence of sensor node in between or from away the mobility range. There are various types of handover procedures and techniques to handle the mobility of sensor node with the support of different handover strategies. These handover techniques are somewhat different from each other and can affect more or less the performance of the network in terms of some network parameters such as network throughput, average time taken to send packets and packet delivery ratio. IEEE 802.15.4 is one of the standards from many technologies used in WSN. It can cover limited area. Thus, in IEEE 802.15.4 Wireless Personal Area Network coordinators are used to extend the coverage area that can help to deploy the mobility of sensor nodes in WSN with the help of handover procedures. These various handover techniques focus on the different parameters according to the requirement of the application for which these are needed and used in wireless scenario. These parameters such as average delay, packet delivery ratio, network throughput, energy consumption etc. can change the performance of the network by effecting more or less network lifetime.",2015,0, 1267,Reaching Beyond the Invisible Barriers: Serving a Community of Users with Multiple Needs,"

This paper discusses a four-phase model for evaluating multimedia learning materials that emphasizes the diversity of learners and variations in instructional needs and user characteristics. The authors begin with an overview of the model, supporting evidence for its use, and key characteristics of users supported by each of the phases. They then focus on results of a current use that emphasized stage four, real-time usability, and show how they were able to document that the models under review met the needs of diverse learners and varied instructional strategies.

",2007,0, 1268,Reading and class work,"We develop a novel technique for class-based matching of object parts across large changes in viewing conditions. Given a set of images of objects from a given class under different viewing conditions, the algorithm identifies corresponding regions depicting the same object part in different images. The technique is based on using the equivalence of corresponding features in different viewing conditions. This equivalence-based matching scheme is not restricted to planar components or affine transformations. As a result, it identifies corresponding parts more accurately and under more general conditions than previous methods. The scheme is general and works for a variety of natural object classes. We demonstrate that using the proposed methods, a dense set of accurate correspondences can be obtained. Experimental comparisons to several known techniques are presented. An application to the problem of invariant object recognition is shown, and additional applications to wide-baseline stereo are discussed.",2004,0, 1269,Read-It: A Multi-modal Tangible Interface for Children Who Learn to Read,AbstractMulti-modal tabletop applications offer excellent opportunities for enriching the education of young children. Read-It is an example of an interactive game with a multi-modal tangible interface that was designed to combine the advantages of current physical games and computer exercises. It is a novel approach for supporting children who learn to read. The first experimental evaluation has demonstrated that the Read-It approach is indeed promising and meets a priori expectations.,2004,0, 1270,Read-It: five-to-seven-year-old children learn to read in a tabletop environment,"Augmented tabletops can be used to create multi-modal and collaborative environments in which natural interactions with tangible objects that represent virtual (digital) information can be performed. Such environments are considered potentially interesting for many different applications. In this paper, we address the question of whether or not it makes sense to use such environments to design learning experiences for young children. More specifically, we present the ""Read-It"" application that we have created to illustrate how augmented tabletops can support the development of reading skills. Children of five-to-seven-years old were actively involved in designing and testing this application. A pilot experiment was conducted with a prototype of the Read-It application, in order to confirm that it does indeed meet the a priori expectations. We hope that the Read-It application will inspire the development of more tabletop applications that are targeted at specific user groups and activities.",2004,0, 1271,Ready! Set! Go! An Action Research Agenda for Software Architecture Research,"Software architecture practice is highly complex. Software architects interact with business as well as technical aspects of systems, often embedded in large and changing organizations. We first make an argument that an appropriate research agenda for understanding, describing, and changing architectural practice in this context is based on an action research agenda in which researchers use ethnographic techniques to understand practice and engages directly with and in practice when proposing and designing new practices. 
Secondly, we present an overview of an ongoing project which applies action research techniques to understand and potentially change architectural practice in four Danish software companies.",2008,0, 1272,Realising evidence-based software engineering,The following topics are dealt with: evidence-based software engineering; search engine; software process simulation.,2007,0, 1273,Realising evidence-based software engineering (REBSE-2) a report from the workshop held at ICSE 2007,"

Context: The REBSE international workshops are concerned with exploring the adaptation and use of the evidence-based paradigm in software engineering research and practice, through a mix of presentations and discussion.

Objectives: These were to explore both experience with, and potential for, evidence-based software engineering (EBSE); to consider how this might affect empirical practices in software engineering; and to work towards creating a community of researchers to practice and promote EBSE.

Method: Three sessions were dedicated to a mix of presentations and interactive discussion, while the fourth was dedicated to summarising progress and identifying both issues of concern and actions to pursue.

Conclusions: While we identified a number of issues, a key need is clearly to have a central repository to both provide information and to maintain a record of activity in this area.

",2007,0, 1274,Realising evidence-based software engineering a report from the workshop held at ICSE 2005,"Context: The workshop was held to explore the potential for adapting the ideas of evidence-based practices as used in medicine and other disciplines for use in software engineering.Objectives: To devise ways of developing suitable evidence-based practices and procedures, especially the use of structured literature reviews, and introducing these into software engineering research and practice.Method: Three sessions were dedicated to a mix of presentations based on position papers and interactive discussion, while the fourth focused upon the key issues as decided by the participants.Results: An initial scoping of the major issues, identification of useful parallels, and some plans for future development of an evidence-based software engineering community.Conclusions: While there are substantial challenges to introducing evidence-based practices, there are useful experiences to be drawn from a variety of other domains.",2005,0, 1275,Realizing quality improvement through test driven development: results and experiences of four industrial teams,"

Test-driven development (TDD) is a software development practice that has been used sporadically for decades. With this practice, a software engineer cycles minute-by-minute between writing failing unit tests and writing implementation code to pass those tests. Test-driven development has recently re-emerged as a critical enabling practice of agile software development methodologies. However, little empirical evidence supports or refutes the utility of this practice in an industrial context. Case studies were conducted with three development teams at Microsoft and one at IBM that have adopted TDD. The results of the case studies indicate that the pre-release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15---35% increase in initial development time after adopting TDD.

",2008,0, 1276,Real-Time Rendering of Fur,"This paper presents a system for inferring complex mental states from video of facial expressions and head gestures in real-time. The system is based on a multi-level dynamic Bayesian network classifier which models complex mental states as a number of interacting facial and head displays, identified from component-based facial features. Experimental results for 6 mental states groups- agreement, concentrating, disagreement, interested, thinking and unsure are reported. Real-time performance, unobtrusiveness and lack of preprocessing make our system particularly suitable for user-independent human computer interaction.",2004,0, 1277,RealTourist ? A Study of Augmenting Human-Human and Human-Computer Dialogue with Eye-Gaze Overlay,"

We developed and studied an experimental system, RealTourist, which lets a user to plan a conference trip with the help of a remote tourist consultant who could view the tourist’s eye-gaze superimposed onto a shared map. Data collected from the experiment were analyzed in conjunction with literature review on speech and eye-gaze patterns. This inspective, exploratory research identified various functions of gaze-overlay on shared spatial material including: accurate and direct display of partner’s eye-gaze, implicit deictic referencing, interest detection, common focus and topic switching, increased redundancy and ambiguity reduction, and an increase of assurance, confidence, and understanding. This study serves two purposes. The first is to identify patterns that can serve as a basis for designing multimodal human-computer dialogue systems with eye-gaze locus as a contributing channel. The second is to investigate how computer-mediated communication can be supported by the display of the partner’s eye-gaze.

",2005,0, 1278,Reasoning Frameworks,"With the growing concern for driving safety, many driving-assistance systems have been developed. In this paper, we develop a reasoning-based framework for the monitoring of driving safety. The main objective is to present drivers with an intuitively understood green/yellow/red indicator of their danger level. Because the danger level may change owing to the interaction of the host vehicle and the environment, the proposed framework involves two stages of danger-level alerts. The first stage collects lane bias, the distance to the front car, longitudinal and lateral accelerations, and speed data from sensors installed in a real vehicle. All data were recorded in a normal driving environment for the training of hidden Markov models of driving events, including normal driving, acceleration, deceleration, changing to the left or right lanes, zigzag driving, and approaching the car in front. In addition to recognizing these driving events, the degree of each event is estimated according to its character. In the second stage, the danger-level indicator, which warns the driver of a dangerous situation, is inferred by fuzzy logic rules that address the recognized driving events and their degrees. A hierarchical decision strategy is also designed to reduce the number of rules that are triggered. The proposed framework was successfully implemented on a TI DM3730-based embedded platform and was fully evaluated in a real road environment. The experimental results achieved a detection ratio of 99 % for event recognition, compared with that achieved by four conventional methods.",2013,0, 1279,"Reasons for Software Effort Estimation Error: Impact of Respondent Role, Information Collection Approach, and Data Analysis Method","This study aims to improve analyses of why errors occur in software effort estimation. Within one software development company, we collected information about estimation errors through: 1) interviews with employees in different roles who are responsible for estimation, 2) estimation experience reports from 68 completed projects, and 3) statistical analysis of relations between characteristics of the 68 completed projects and estimation error. We found that the role of the respondents, the data collection approach, and the type of analysis had an important impact on the reasons given for estimation error. We found, for example, a strong tendency to perceive factors outside the respondents' own control as important reasons for inaccurate estimates. Reasons given for accurate estimates, on the other hand, typically cited factors that were within the respondents' own control and were determined by the estimators' skill or experience. This bias in types of reason means that the collection only of project managers' viewpoints will not yield balanced models of reasons for estimation error. Unfortunately, previous studies on reasons for estimation error have tended to collect information from project managers only. We recommend that software companies combine estimation error information from in-depth interviews with stakeholders in all relevant roles, estimation experience reports, and results from statistical analyses of project characteristics",2004,0, 1280,Recognizing Textual Entailment Via Atomic Propositions,"

This paper describes the contribution of Macquarie University's Centre for Language Technology to the PASCAL 2005 Recognizing Textual Entailment challenge. Our main aim was to test the practicability of a purely logical approach. For this, atomic propositions were extracted from both the text and the entailment hypothesis and they were expressed in a custom logical notation. The text entails the hypothesis if every proposition of the hypothesis is entailed by some proposition in the text. To extract the propositions and encode them into a logical notation, the system uses the output of Link Parser. To detect the independent entailment relations, the system relies on the use of Otter and WordNet.

",2005,0, 1281,Reconstructing Metabolic Pathways by Bidirectional Chemical Search,"

One of the main challenges in systems biology is the establishment of the metabolome: a catalogue of the metabolites and bio-chemical reactions present in a specific organism. Current knowledge of biochemical pathways as stored in public databases such as KEGG, is based on carefully curated genomic evidence for the presence of specific metabolites and enzymes that activate particular biochemical reactions. In this paper, we present an efficient method to build a substantial portion of the artificial chemistry defined by the metabolites and biochemical reactions in a given metabolic pathway, which is based on bidirectional chemical search. Computational results on the pathways stored in KEGG reveal novel biochemical pathways.

",2007,0, 1282,Reconstruction of Protein-Protein Interaction Pathways by Mining Subject-Verb-Objects Intermediates,"

The exponential increase in publication rate of new articles is limiting access of researchers to relevant literature. This has prompted the use of text mining tools to extract key biological information. Previous studies have reported extensive modification of existing generic text processors to process biological text. However, this requirement for modification had not been examined. In this study, we have constructed Muscorian, using MontyLingua, a generic text processor. It uses a two-layered generalization-specialization paradigm previously proposed where text was generically processed to a suitable intermediate format before domain-specific data extraction techniques are applied at the specialization layer. Evaluation using a corpus and experts indicated 86-90% precision and approximately 30% recall in extracting protein-protein interactions, which was comparable to previous studies using either specialized biological text processing tools or modified existing tools. Our study had also demonstrated the flexibility of the two-layered generalization-specialization paradigm by using the same generalization layer for two specialized information extraction tasks.

",2007,0, 1283,RED Based Congestion Control Mechanism for Internet Traffic at Routers,One of the main challenges in the internet backbone is to provide an adequate quality of service (QoS) and this can be done by either adopting a suitable buffer management method or by QoS routing. One of the popular buffer management algorithms used is random early detection (RED) which falls into the category of active queue management (AQM) algorithms. There are many variant algorithms based on RED including adaptive RED and in this paper a new AQM algorithm which depends on changing the maximum dropping probability in a way that has been shown by simulation to have a similar throughput and lower delay than either RED or adaptive RED is proposed.,2007,0, 1284,Redesigning the Intermediate Course in Software Design,"Universities are required to produce graduates with good technical knowledge and `employability skills' such as communication, team work, problem-solving, initiative and enterprise, planning, organizing and self-management. The capstone software development course described in this paper addresses this need. The course design contains three significant innovations: running the course for two cohorts of students in combination; requiring students to be team members in 3rd year and team leaders in their 4th (final) year; and providing assessment and incentives for individuals to pursue quality work in a group-work environment. The course design enables the creation of a simulated industrial context, the benefits of which go well beyond the usual, well-documented benefits of group project work. In order to deliver a successful outcome, students must combine academic theory and practical knowledge whilst overcoming the day-to-day challenges that face project teams. Course design enables the blending of university-based project work and work-integrated learning in an innovative context to better prepare students for participating in, and leading, multi-disciplinary teams on graduation. Outcomes have been compellingly positive for all stakeholders - students, faculty and industry partners.",2013,0, 1285,Reducing the Representation Complexity of Lattice-Based Taxonomies,"In this paper an improved method is proposed to convert multi-variable dependent time-varying systems into a polytopic representation with uncertain parameters for further stability analysis. Based on the Taylor expansion, the employed technique utilizes a specific arrangement of the cross-variable terms such that the obtained vertices of the polytopic system increases only polynomially with the order of the Taylor expansion. An example about stability analysis demonstrates the effectiveness and usefulness of the proposed method in robust stability studies.",2016,0, 1286,Reference Models for E-Services Integration Based on Life-Events,"Component metadata is one of the most effective methods to improve the testability of component-based software. In this paper, we firstly give a formal definition of component, and summarize the basic meanings of component metadata. Based on these, an idea of grouped-metadata object (GMO) is introduced, which is divided into two types, respectively named descriptive metadata and operative metadata. And a general framework of descriptive metadata and operative metadata is further given, which is consisted of several groups. Each group includes several attributes, and their meanings are described in detail. Furthermore, we give a formal reference model of GMO using class diagram of UML. 
Combining with the above formal model, we present change model used in GMO and introduce an idea to map all changes inside component to the changes in component interfaces, mainly referring to changes of public method and variables. Here we introduce a concept of method dependency graph (MDG) to implement the mapping. Then the changes are reflected in relevant attributes in GMO provided to component users in order to facilitate component-based software integration testing and regression testing. Finally, the case study based on previous formal model is done, and the corresponding results are given. All these show effectively that the models we presented are valid and helpful for component-based software integration testing and regression testing.",2007,0, 1287,Reflection and abstraction in learning software engineering's human aspects.,"Intertwining reflective and abstract modes of thinking into the education of software engineers, especially in a course that focuses on software engineering's human aspects, can increase students' awareness of the discipline's richness and complexity while enhancing their professional performance in the field. The complexity of software development environments includes the profession's cognitive and social aspects. A course designed to increase students' awareness of these complexities introduces them to reflective mental processes and to tasks that invite them to apply abstract thinking. For the past three years, we have taught a Human Aspects of Software Engineering course at both the Technion-Israel Institute of Technology and the School of Computer Science at Carnegie Mellon University. This course aims to increase software engineering students' awareness of the richness and complexity of various human aspects of software engineering and of the problems, dilemmas, question, and conflicts these professionals could encounter during the software development process.",2005,0, 1288,Reflection on development and delivery of a data mining unit,"

Educators developing data mining courses face a difficult task of designing curricula that are adaptable, have solid foundations, and are tailored to students from different academic fields. This task could be facilitated by debating and sharing the ideas and experiences gained from the practice of data mining as well as from teaching data mining. The shared body of knowledge would be a valuable resource which would help educators design better data mining curricula. The aim of this paper is to make a contribution to such a debate. The paper presents a reflection and evaluation of the author's experience with developing and delivering a postgraduate unit Knowledge Discovery and Data Mining.

",2007,0, 1289,Reflection: Improving Research through Knowledge Transfer,It is through our mental models of the world that we understand it. Advances in science are nothing more than improvements to the model. This paper presents the development and refinement of our model of the research process as we seek to understand and improvement the process through three generations of case studies. We conclude by introducing an approach to help manage and plan research projects.,2006,0, 1290,Reflections on CSEE&T 2006,"Eye center detection is an essential module in iris segmentation and gaze tracking. It is more challengeable to achieve this goal using a usual web camera under natural illumination. The image resolution and the eye region scale are main problems. Focusing on solving these problems, this paper proposes a robust eye center searching algorithm which can locate the iris including the center and the radius in Id searching time consuming. Experiment shows an excited result. Compared with the well known Hough and Integral Differential, this method also is robust to the reflection and occlusion.",2010,0, 1291,Reformulation and Convex Relaxation Techniques for Global Optimization,"A key challenge toward green communications is how to maximize energy efficiency by optimally allocating wireless resources in large-scale multiuser multicarrier orthogonal frequency-division multiple-access (OFDMA) systems. The quality-of-service (QoS)-constrained energy efficiency maximization problem is generally hard to solve due to the inverse transposition of the optimization operands in the optimization objective. We apply convex relaxation to make the problem quasiconcave with respect to power and concave with respect to the subcarrier indexing coefficients. The Karush-Kuhn-Tucker (KKT) optimality conditions lead to transcendental functions, where existing solutions are only numerically tractable. Different from the existing approaches, we apply the Maclaurin series expansion technique to transform the complex transcendental functions into simple polynomial expressions that allow us to obtain the global optimum in fast polynomial time, with the tractable upper bound of truncation error. With the new solution method, we propose a joint optimal allocation policy for both adaptive power and dynamic subcarrier allocations. We gain insight on the optimality, feasibility, and computational complexity of the joint optimal solution to show that the proposed scheme is theoretically and practically sound with fast convergence toward near-optimal solutions with an explicitly tractable truncation error. The simulation results confirm that the proposed scheme achieves a much higher energy efficiency performance with the guaranteed QoS and much lower complexity than existing approaches in the literature.",2016,0, 1292,Reframing Software Design: Perspectives on Advancing an Elusive Discipline,"Software engineering researchers and practitioners have long had an uncertain and uneasy relationship with design. It is acknowledged that software design is critical and major strides have been made in advancing the discipline, but we all are keenly aware that something ?is just not quite right? and that design remains one of the least-understood aspects of software engineering. In this paper, we present our novel Eyeglass framework and use it to offer a series of fresh perspectives on software design, its accomplishments, and fundamental challenges ahead. 
The Eyeglass framework is inspired by the broader discipline of design and evaluates software design in terms of seven interrelated dimensions: ideas, representation, activities, judgment, communication, domain of use, and domain of materials. The main conclusion of our examination is that we have unnecessarily limited ourselves in our explorations of software design. While there has been some success, to further advance the discipline we must step back, reframe software design to address all seven dimensions, and engage in a deep study of these dimensions, individually and as a whole",2006,0, 1293,Regression Models of Software Development Effort Estimation Accuracy and Bias,"Accurate software effort estimation is one of the key factors to a successful project by making a better software project plan. To improve the estimation accuracy of software effort, many studies usually aimed at proposing novel effort estimation methods or combining several approaches of the existing effort estimation methods. However, those researches did not consider the distribution of historical software project data which is an important part impacting to the effort estimation accuracy. In this paper, to improve effort estimation accuracy by least squares regression, we propose a data partitioning method by the accuracy measures, MRE and MER which are usually used to measure the effort estimation accuracy. Furthermore, the empirical experimentations are performed by using two industry data sets (the ISBSG Release 9 and the Bank data set which consists of the project data performed in a bank in Korea).",2009,0, 1294,Relative Color Polygons for Object Detection and Recognition,"Oceanography is the technique of analyzing the oceanic imagery in order to find the useful information about ships and objects. The technique is helpful in detecting the lost ships, boats, aeroplanes, debris, containers, etc. It may consist of the large volumes of image data, which must be further shortened to find the useful information to find the lost objects in the oceanic area. In this simulation study, the proposed model has been analysed to detect the objects in the oceanic images in order to minimize the human effort to shortlist the images containing the useful information. The simulative analysis has been designed to use the combination of the color and shape based analysis to detect the objects accurately. The three dimensional color pixel (24-bit pixel) based approach has been used along with the shape and size evaluation to achieve the higher accuracy for the target objects. The MATLAB based simulation is performed on various kinds of satellite images, and the evaluation has been performed on the basis of various performance parameters. The results have shown the effectiveness of the proposed model.",2015,0, 1295,Relaxing Feature Selection in Spam Filtering by Using Case-Based Reasoning Systems,"

This paper presents a comparison between two alternative strategies for addressing feature selection on a well known case-based reasoning spam filtering system called SPAMHUNTING. We present the usage of the k more predictive features and a percentage-based strategy for the exploitation of our amount of information measure. Finally, we confirm the idea that the percentage feature selection method is more adequate for spam filtering domain.

",2007,0, 1296,Reliability and Validity in Comparative Studies of Software Prediction Models,"Empirical studies on software prediction models do not converge with respect to the question ""which prediction model is best?"" The reason for this lack of convergence is poorly understood. In this simulation study, we have examined a frequently used research procedure comprising three main ingredients: a single data sample, an accuracy indicator, and cross validation. Typically, these empirical studies compare a machine learning model with a regression model. In our study, we use simulation and compare a machine learning and a regression model. The results suggest that it is the research procedure itself that is unreliable. This lack of reliability may strongly contribute to the lack of convergence. Our findings thus cast some doubt on the conclusions of any study of competing software prediction models that used this research procedure as a basis of model comparison. Thus, we need to develop more reliable research procedures before we can have confidence in the conclusions of comparative studies of software prediction models.",2005,0, 1297,Reliability Prediction and Assessment of Fielded Software Based on Multiple Change-Point Models,"In this paper, we investigate some techniques for reliability prediction and assessment of fielded software. We first review how several existing software reliability growth models based on non-homogeneous Poisson processes (NHPPs) can be readily derived based on a unified theory for NHPP models. Furthermore, based on the unified theory, we can incorporate the concept of multiple change-points into software reliability modeling. Some models are proposed and discussed under both ideal and imperfect debugging conditions. A numerical example by using real software failure data is presented in detail and the result shows that the proposed models can provide fairly good capability to predict software operational reliability.",2005,0, 1298,Repeatability and Accuracy of Bone Cutting and Ankle Digitization in Computer-Assisted Total Knee Replacements,"AbstractIn conventional total knee replacement (TKR) surgery, a significant fraction of implants have varus/valgus alignment errors large enough to reduce the lifespan of the implant, so we are developing a more accurate computer-assisted procedure aimed at reducing the standard deviation (SD) of the implants frontal alignment to under 1. In this study we measured the contributions to overall alignment error of two steps in our proposed procedure: ankle digitization and manual bone cutting. We introduce a new digitizing probe that quickly and robustly locates the midpoint between the ankle malleoli. Based on repeated measurements on 8 cadavers by 6 operators (318 measurements), we estimate that the new probe introduces only 0.15 (SD) of variability into the definition of the tibial mechanical axis in the frontal plane. We also measured the accuracy and repeatability with which surgeons can implement a bone cut using conventional cutting guides to see if conventional cutting techniques are sufficiently accurate. A total of 53 tibial plateau, distal and anterior/posterior (A/P) femoral cuts approximated primary and revision TKR resections. 
In 20 test cuts on cadaver bone made by 2 expert TKR surgeons, we found a SD of 0.37 (bias of 0.29) in the varus/valgus difference between the guide orientation and the implant orientation before cementing, but in 23 additional cuts performed by four less-experienced surgeons, the SD was 0.83 (bias of 0.31). Ten A/P femoral cuts showed similar trends. We conclude that, in the hands of an experienced surgeon, our current technique (based on our previously-reported non-invasive hip centre locating technique [Inkpen 1999b], robust ankle digitization and manual cutting using computer-guided cutting guides) can approach our target alignment variability goal of a SD less than 1.",2000,0, 1299,Replicating studies on cross- vs single-company effort models using the ISBSG Database,"

In 2001 the ISBSG database was used by Jeffery et al. (Using public domain metrics to estimate software development effort. Proceedings Metrics'01, London, pp 16---27, 2001; S1) to compare the effort prediction accuracy between cross- and single-company effort models. Given that more than 2,000 projects were later volunteered to this database, in 2005 Mendes et al. (A replicated comparison of cross-company and within-company effort estimation models using the ISBSG Database, in Proceedings of Metrics'05, Como, 2005; S2) replicated S1 but obtained different results. The difference in results could have occurred due to legitimate differences in data set patterns; however, they could also have occurred due to differences in experimental procedure given that S2 was unable to employ exactly the same experimental procedure used in S1 because S1's procedure was not fully documented. Recently, we applied S2's experimental procedure to the ISBSG database version used in S1 (release 6) to assess if differences in experimental procedure would have contributed towards different results (Lokan and Mendes, Cross-company and single-company effort models using the ISBSG Database: a further replicated study, Proceedings of the ISESE'06, pp 75---84, 2006; S3). Our results corroborated those from S1, suggesting that differences in the results obtained by S2 were likely caused by legitimate differences in data set patterns. We have since been able to reconstruct the experimental procedure of S1 and therefore in this paper we present both S3 and also another study (S4), which applied the experimental procedure of S1 to the data set used in S2. By applying the experimental procedure of S2 to the data set used in S1 (study S3), and the experimental procedure of S1 to the data set used in S2 (study S4), we investigate the effect of all the variations between S1 and S2. Our results for S4 support those of S3, suggesting that differences in data preparation and analysis procedures did not affect the outcome of the analysis. Thus, the different results of S1 and S2 are very likely due to fundamental differences in the data sets.

",2008,0, 1300,Replication's Role in Software Engineering,"The author describes a software package, running under MSDOS, developed to assist lecturers in the assessment of software assignments. The package itself does not make value judgments upon the work, except when it can do so absolutely, but displays the students' work for assessment by qualified staff members. The algorithms for the package are presented, and the functionality of the components is described. The package can be used for the assessment of software at three stages in the development process: (1) algorithm logic and structure, using Warnier-Orr diagrams; (2) source code structure and syntax in Modula-2; and (3) runtime performance of executable code",1992,0, 1301,Reporting guidelines for controlled experiments in software engineering,"One major problem for integrating study results into a common body of knowledge is the heterogeneity of reporting styles: (1) it is difficult to locate relevant information and (2) important information is often missing. Reporting guidelines are expected to support a systematic, standardized presentation of empirical research, thus improving reporting in order to support readers in (1) finding the information they are looking for, (2) understanding how an experiment is conducted, and (3) assessing the validity of its results. The objective of this paper is to survey the most prominent published proposals for reporting guidelines, and to derive a unified standard that which can serve as a starting point for further discussion. We provide detailed guidance on the expected content of the sections and subsections for reporting a specific type of empirical studies, i.e., controlled experiments. Before the guidelines can be evaluated, feedback from the research community is required. For this purpose, we propose to adapt guideline development processes from other disciplines.",2005,0, 1302,Representing Knowledge Gaps Effectively,"Based on the previous research foundation, this paper first defines “knowledge gap”. Then, it analyzes seven context factors of knowledge gap, including knowledge foundation and cultural background, knowledge attributes, willingness to participate, absorptive capacity, tools and platforms, trust,team culture,and how they influence intra-team knowledge diffusion. Accordingly, establishes influence mechanism model of knowledge diffusion within R&D team. Finally, the paper puts forward relevant suggestions.",2010,0, 1303,Requirements engineering and process modelling in software quality management - Towards a generic process metamodel.,"

This paper examines the concept of Quality in Software Engineering, its different contexts and its different meanings to various people. It begins with a commentary on quality issues for systems development and various stakeholders' involvement. It revisits aspects and concepts of systems development methods and highlights the relevance of quality issues to the choice of a process model. A summarised review of some families of methods is presented, where their application domain, lifecycle coverage, strengths and weaknesses are considered. Under the new development era the requirements of software development change; the role of methods and stakeholders change, too. The paper refers to the latest developments in the area of software engineering and emphasises the shift from traditional conceptual modelling to requirements engineering and process metamodelling principles. We provide support for an emerging discipline in the form of a software process metamodel to cover new issues for software quality and process improvement. The widening of the horizons of software engineering both as a ‘communication tool’ and as a ‘scientific discipline’ (and not as a ‘craft’) is needed in order to support both communicative and scientific quality systems properties. In general, we can consider such a discipline as a thinking tool for understanding the generic process and as the origin of combining intuition and quality engineering to transform requirements to adequate human-centred information systems. We conclude with a schematic representation of a Generic Process Metamodel (GPM) indicating facets contributed by Software Engineering, Computer Science, Information Systems, Mathematics, Linguistics, Sociology and Anthropology. Ongoing research and development issues have provided evidence for influence from even more diverse disciplines.

",2004,0, 1304,Requirements Engineering and the Creative Process in the Video Game Industry,"The software engineering process in video game development is not clearly understood, hindering the development of reliable practices and processes for this field. An investigation of factors leading to success or failure in video game development suggests that many failures can be traced to problems with the transition from preproduction to production. Three examples, drawn from real video games, illustrate specific problems: 1) how to transform documentation from its preproduction form to a form that can be used as a basis for production;, 2) how to identify implied information in preproduction documents; and 3) how to apply domain knowledge without hindering the creative process. We identify 3 levels of implication and show that there is a strong correlation between experience and the ability to identify issues at each level. The accumulated evidence clearly identifies the need to extend traditional requirements engineering techniques to support the creative process in video game development.",2005,0, 1305,Requirements Engineering for Cross-organizational ERP Implementation: Undocumented Assumptions and Potential Mismatches,"A key issue in requirements engineering (RE) for enterprise resource planning (ERP) in a cross-organizational context is how to find a match between the ERP application modules and requirements for business coordination. This paper proposes a conceptual framework for analyzing coordination requirements in inter-organizational ERP projects from a coordination theory perspective. It considers the undocumented assumptions for coordination that may have significant implications for ERP adopting organizations. In addition, we build a library of existing coordination mechanisms supported by modern ERP systems, and use it to make a proposal for how to improve the match between ERP implementations and supported business coordination processes. We discuss the implications of our framework for practicing requirements engineers. Our framework and library are based on a literature survey and the experience with ERP implementation of one of us (Daneva). We further validate and refine our framework.",2005,0, 1306,Requirements for DDDAS Flexible Point Support,"Dynamic data-driven application systems (DDDAS) integrate computer simulations with experimental observations to study phenomena with greater speed and accuracy than could be achieved by either experimentation or simulation alone. One of the key challenges behind DDDAS is automatically adapting simulations when experimental data indicates that a simulation must change. Coercion is a semi-automated simulation adaptation approach that can be automated further if elements of the simulation called flexible points are described in advance. In this paper, we use a number of DDDAS adaptation examples to identify the information that needs to be captured about flexible points in order to support coercion",2006,0, 1307,Requirements: Management Influences on Software Quality Requirements,"Software Requirements Specifications (SRS) or software requirements are basically an organization's understanding of a customer's system requirements and dependencies at a given point in time. This research paper focuses only on the requirements specifications phase of the software development cycle (SDC). It further narrows it down to analyzing the quality of the prepared SRS to ensure that the quality is acceptable. 
It is a known fact that companies will pay less to fix problems that are found very early in any software development cycle. The Software Quality Assurance (SQA) audit technique is applied in this study to determine whether or not the required standards and procedures within the requirements specifications phase are being followed closely. The proposed online quality analysis system ensures that software requirements among others are complete, consistent, correct, modifiable, ranked, traceable, unambiguous, and understandable. The system interacts with the developer through a series of questions and answers session, and requests the developer to go through a checklist that corresponds to the list of desirable characteristics for SRS. The Case-Based Reasoning (CBR) technique is used to evaluate the requirements quality by referring to previously stored software requirements quality analysis cases (past experiences). CBR is an AI technique that reasons by remembering previously experienced cases.",2010,0, 1308,Research article: Evaluation of PROforma as a language for implementing medical guidelines in a practical context,"Background PROforma is one of several languages that allow clinical guidelines to be expressed in a computer-interpretable manner. How these languages should be compared, and what requirements they should meet, are questions that are being actively addressed by a community of interested researchers. Methods We have developed a system to allow hypertensive patients to be monitored and assessed without visiting their GPs (except in the most urgent cases). Blood pressure measurements are performed at the patients' pharmacies and a web-based system, created using PROforma, makes recommendations for continued monitoring, and/or changes in medication. The recommendations and measurements are transmitted electronically to a practitioner with authority to issue and change prescriptions. We evaluated the use of PROforma during the knowledge acquisition, analysis, design and implementation of this system. The analysis focuses on the logical adequacy, heuristic power, notational convenience, and explanation support provided by the PROforma language. Results PROforma proved adequate as a language for the implementation of the clinical reasoning required by this project. However a lack of notational convenience led us to use UML activity diagrams, rather than PROforma process descriptions, to create the models that were used during the knowledge acquisition and analysis phases of the project. These UML diagrams were translated into PROforma during the implementation of the project. Conclusion The experience accumulated during this study highlighted the importance of structure preserving design, that is to say that the models used in the design and implementation of a knowledge-based system should be structurally similar to those created during knowledge acquisition and analysis. Ideally the same language should be used for all of these models. This means that great importance has to be attached to the notational convenience of these languages, by which we mean the ease with which they can be read, written, and understood by human beings. 
The importance of notational convenience arises from the fact that a language used during knowledge acquisition and analysis must be intelligible to the potential users of a system, and to the domain experts who provide the knowledge that will be used in its construction.",2006,0, 1309,Research ethics and computer science: an unconsummated marriage,"The ethical conduct of research is a cornerstone of modern scientific research. Computer science and the discipline's technological artifacts touch nearly every aspect of modern life, and computer scientists must conduct and report their research in an ethical manner. This paper examines a small selection of potential ethical dilemmas researchers in this discipline face, and discusses how ethical concerns may be addressed in these situations. The paper concludes with an overview of other areas of ethical concern and a look to the future development of a code for ethical computer science research",2006,0, 1310,Research Frontiers in Advanced Data Mining Technologies and Applications,Insurance systems are now in a new era of intense competition with the rapid economic development in our country. But many insurance companies have no ability to mine and analyze the information in their mass of customers' data and to make full use of it to generate profit for the companies. Data mining technology can improve the levels of management and decision making in the insurance system and achieve the highest investment-return rate for both the companies and their customers. Research on the application of data mining technology to insurance systems is scarce at present. The paper tries to analyze the problem and gives some suggestions for applying data mining technology in insurance systems.,2009,0, 1311,Research in information systems in China (1999-2005) and international comparisons,"In the process of rural information system construction, some developed countries, such as the U.S.A., Germany, France, Australia and Japan, have made advanced achievements. Comparing the process of development in those countries, the developing stages of information system construction can be divided into three stages, denoted as the initial stage, the middle stage and the advanced stage. To identify the developing stage of China in the construction of its rural information system, a discriminant analysis methodology is applied in this paper based on statistical data from 1985 to 2010. By using the historical statistical data of developed countries, we can identify the corresponding Fisher discriminant function for each stage of rural information system construction. Using the results of the discriminant analysis methodology, we can identify the range of each stage in rural information system construction in China and analyze the corresponding developing traits. Then, taking Jilin province as a specific case in the discriminant analysis, we can identify its timetable for rural information system construction.",2012,0, 1312,Research issues in software fault categorization,"Technology for actively supporting groups of collaborating users is being applied to many kinds of cooperative work activities. The following topics are addressed: (1) the authors' definition of groupware, (2) a conceptual framework in which to examine research issues, and (3) high-level groupware research issues within this framework. One potentially fertile area of application for such technologies is the problem of software process management.
Software process here refers to any of the teamwork, cooperation, coordination, and communication activities that occur within and across groups and organizations of persons throughout the life of software projects, including processes that occur under the broad categories of proposal writing, software engineering, development, and maintenance. This application domain contains a number of research issues and technology problems in such areas as communication, distribution, concurrency control, and human-computer interface design. The research areas needed for groupware to facilitate software processes are described",1991,0, 1313,Research Knowledge Management can be Murder,"Driven by its scientific research mission, a university research team, whose ultimate goal is pursuing innovation, creating new knowledge and technology, and cultivating creative talents, is a typical knowledge team. Knowledge can exist in the form of scientific research's outputs and creative talents, which are the objectives of team building. So, successful team management always equals effective knowledge management (KM). Having a core position, knowledge innovation activities happen from end to end in the operation process of a university research team. This paper first discusses the knowledge creation process of a university research team from the perspective of KM and then defines the systematic elements of KM in research teams. Then data were collected by questionnaire survey and analyzed by employing a structural equation model (SEM). The results can be used to explore the impacts that the systematic elements have on KM of a university research team and how they exert that impact.",2008,0, 1314,"Research methods in computing: what are they, and how should we teach them?","Despite a lack of consensus on the nature of Computing Research Methods (CRM), a growing number of programs are exploring models and content for CRM courses. This report is one step in a participatory design process to develop a general framework for thinking about and teaching CRM. We introduce a novel sense-making structure for teaching CRM. That structure consists of a road map to the CRM literature, a framework grounded in questions rather than answers, and two CRM skill sets: core skills and specific skills. We integrate our structure with a model for the process a learner goes through on the way to becoming an expert computing researcher and offer example learning activities that represent a growing repository of course materials meant to aid those wishing to teach research skills to computing students. Our model is designed to ground discussion of teaching CRM and to serve as a roadmap for institutions, faculty, students and research communities addressing the transition from student to fully enfranchised member of a computing research community of practice. To that end, we offer several possible scenarios for using our model. In computing, research methods have traditionally been passed from advisor to student via apprenticeship. Establishing a richer pedagogy for training researchers in computing will benefit all (see Figure 1).",2006,0, 1315,Research of Hot-Spot Selection Algorithm in Virtual Address Switch,"

SAN-level buffer cache is an important factor in improving the efficiency of the storage area network (SAN). In this paper, we analyzed the SAN-level access pattern characteristics, and designed a new hot spot selection algorithm called maximal access times and oldest access first select (MOFS) and minimal access times and oldest access first eliminate (MOFE) for SAN-level buffer cache. The line size for the hot spot is larger than the line size implemented in disk array caches. The algorithm calls in the block with the highest number of access times and oldest access to SAN-level buffer cache, and eliminates the block with the minimal access times and oldest access from the accessed block list. The algorithm uses a self-adapting mechanism to change the algorithm's parameter values dynamically. We implemented a virtual address switch in the SAN virtualization system to collect the access request information. Based on this, we implemented the hot spot selection algorithm to select a block and send it to the SAN-level buffer cache. Lastly we evaluated the MOFS and MOFE algorithm and proved that this algorithm achieves a low call-in ratio and high hit ratios in the SAN-level buffer cache and that the self-adapting mechanism makes MOFS and MOFE work efficiently with different types of workloads.

",2005,0, 1316,Research perspectives on the objects-early debate,Introduction to the Social &amp; Psychological Perspectives and Theories in Collaboration Research Minitrack,2013,0, 1317,Results of SEI Independent Research and Development Projects,"Over the past several years, a noticeable amount of the semiconductor manufacturing industry's overall R&D burden has shifted from chip manufacturer to equipment supplier. However, it is difficult for equipment suppliers to support the permanent dedicated research staff required to bear their increasing R&D burden. Likewise, their counterparts inside the chip manufacturer are urged to focus on current process development, integration, and efficiency issues. This shift in the R&D burden has been widely recognized in the supplier community, which has referred to it as the “technology gap”. This paper describes one way of dealing with that technology gap. A successful joint development project (JDP) between SpeedFam Corporation and Lucent Technologies is described and used to exemplify how the R&D burden can be properly balanced by allowing each organization to focus on their core competency. Key to the success of the JDP was the use of private, independent R&D supplied under contract by Southwest Research Institute, which also helped facilitate the balance through preliminary self-funded R&D. The paper explains how issues regarding intellectual property protection and ownership were successfully resolved and briefly describes the technology produced from the project",1998,0, 1318,"Rethinking free, libre and open source software","Nowadays, there is a huge variety of Digital Imaging and Communications in Medicine (DICOM) software tools. Some of these tools can only display DICOM images and some other offer additional features, such as volume rendering and options for further digital processing and analysis. Other published research in the area offer a detailed description of the software functions but not an evaluation of these tools as software applications. This paper evaluates four, freely available and open source DICOM tools focusing on different aspects. The software tools compared are: Eviewbox, GIMIAS, ImageJ and MITK 3M3. The scores resulted from the evaluation are illustrated in four diagrams, one developed for every software tool, which demonstrate the variation in scores. The software tool that has received the highest rating is ImageJ.",2011,0, 1319,Reusable Idioms and Patterns in Graph Transformation Languages 2004,"Software engineering tools based on Graph Transformation techniques are becoming available, but their practical applicability is somewhat reduced by the lack of idioms and design patterns. Idioms and design patterns provide prototypical solutions for recurring design problems in software engineering, but their use can be easily extended into software development using graph transformation systems. In this paper we briefly present a simple graph transformation language: GReAT, and show how typical design problems that arise in the context of model transformations can be solved using its constructs. These solutions are similar to software design patterns, and intend to serve as the starting point for a more complete collection.",2004,0, 1320,Reuse of TTCN-3 Code,"KSTAR is a fully superconducting tokamak, in operation since 2008 at the National Fusion Research Institute in Korea. All coils are wound using cable-in-conduit conductors and cooled with forced-flow supercritical helium (SHe) at 4.5 K and 5.5 bar. 
We consider here the central pair, PF1U/L, of the central solenoid coils; during operation these coils are subjected to sharp current transients, which induce AC losses in the coil. The thermal hydraulic transient following a trapezoidal current pulse, with ramp up to 10 kA at a rate of 1 kA/s and ramp down to 0 kA at a rate of 10 kA/s, is simulated here using the 4C code, and the results are compared with the measurements.",2011,0, 1321,Revealing actual documentation usage in software maintenance through war stories.,"War stories are a form of qualitative data that capture informants' specific accounts of surmounting great challenges. The rich contextual detail afforded by this approach warrants its inclusion in the methodological arsenal of empirical software engineering research. We ground this assertion in an exemplar field study that examined the use of documentation in software maintenance environments. Specific examples are unpacked to reveal a depth of insight that would not have been possible using standard interviews. This afforded a better understanding of the complex relationship between project personnel and documentation, including individuals' roles as pointers, gatekeepers, or barriers to documentation.",2007,0, 1322,Review of remotely sensed imagery classification patterns based on object-oriented image analysis,"AbstractWith the wide use of high-resolution remotely sensed imagery, the object-oriented remotely sensed information classification pattern has been intensively studied. Starting with the definition of object-oriented remotely sensed information classification pattern and a literature review of related research progress, this paper sums up 4 developing phases of object-oriented classification pattern during the past 20 years. Then, we discuss the three aspects of methodology in detail, namely remotely sensed imagery segmentation, feature analysis and feature selection, and classification rule generation, through comparing them with remotely sensed information classification method based on per-pixel. At last, this paper presents several points that need to be paid attention to in the future studies on object-oriented RS information classification pattern: 1) developing robust and highly effective image segmentation algorithm for multi-spectral RS imagery; 2) improving the feature-set including edge, spatial-adjacent and temporal characteristics; 3) discussing the classification rule generation classifier based on the decision tree; 4) presenting evaluation methods for classification result by object-oriented classification pattern.",2006,0, 1323,Reviewing and Evaluating Techniques for Modeling and Analyzing Security Requirements,"The software engineering community recognized the importance of addressing security requirements with other functional requirements from the beginning of the software development life cycle. Therefore, there are some techniques that have been developed to achieve this goal. Thus, we conducted a theoretical study that focuses on reviewing and evaluating some of the techniques that are used to model and analyze security requirements. Thus, the Abuse Cases, Misuse Cases, Data Sensitivity and Threat Analyses, Strategic Modeling, and Attack Trees techniques are investigated in detail to understand and highlight the similarities and differences between them. We found that using these techniques, in general, help requirements engineer to specify more detailed security requirements. 
Also, all of these techniques cover the concepts of security but at different levels. In addition, the existence of different techniques provides a variety of levels for modeling and analyzing security requirements. This helps the requirements engineer decide which technique to use in order to address security issues for the system under investigation. Finally, we found that using only one of these techniques will not be sufficient to satisfy the security requirements of the system under investigation. Consequently, we consider that it would be beneficial to combine the Abuse Cases or Misuse Cases techniques with the Attack Trees technique or to combine the Strategic Modeling and Attack Trees techniques together in order to model and analyze security requirements of the system under investigation. The concentration on the Attack Trees technique is due to the reusability of the produced attack trees; this technique also helps in covering a wide range of attacks, and thus covers security concepts as well as security requirements in a proper way.",2007,0, 1324,Reviewing Security and Privacy Aspects in Combined Mobile Information System (CMIS) for health care systems,"The medical area has benefited from the use of ICT (Information and Communication Technology) in recent times. CMIS (Combined Mobile Information System), our proposed model system, is such a system targeted at health care systems. IMIS (Integrated Mobile Information System), a system for diabetic healthcare, which is being developed at Blekinge Institute of Technology, will be taken as a case study for our proposed system. CMIS is a multi-role system with its core service being medical-care related and others like self-monitoring, journal-writing, communicating with fellow patients, relatives, etc. The main reason for not using CMIS could be the security and privacy of the users' information. Any system connected to the Internet is always prone to attack, and we think CMIS is no exception. Security and privacy are even more important considering the legal and ethical issues of the sensitive medical data. The CMIS system can be accessed through a PDA (Personal Digital Assistant), smart phones or a computer via the Internet using GPRS (General Packet Radio Service)/UMTS (Universal Mobile Telecommunication System) and wired communication respectively. On the other hand, this also increases the security and privacy burden related to the use of such communications. This thesis discusses various security and privacy issues arising from the use of mobile communication and wired communication in the context of CMIS, i.e., issues related to GPRS (mobile) and the web application (using wired communication). Along with the threats and vulnerabilities, possible countermeasures are also discussed. This thesis also discusses the prospect of using MP2P (Mobile Peer-to-Peer) as an approach for some services (for example, an instant messaging system between patients) in CMIS. However, our main concern is to study MP2P feasibility with respect to privacy. In this thesis, we have tried to identify various security and privacy threats and vulnerabilities CMIS could face, the security services required to be achieved, and countermeasures against those threats and vulnerabilities. In order to accomplish the goal, a literature survey was carried out to find potential vulnerabilities and threats and their solutions for our proposed system. We found that XSS (cross-site scripting), SQL injection and DoS attacks are common for a web application.
We also found that attacks against mobile communication are relatively complex and thus difficult to carry out. In short, we think that an overall planned security approach (routinely testing the system for vulnerabilities, applying patches, etc.) should be used to keep threats and attacks at bay.",2007,0, 1325,Reviewing Software Diagrams: A Cognitive Study,"Reviews and inspections of software artifacts throughout the development life cycle are effective techniques for identifying defects and improving software quality. While review methods for text-based artifacts (e.g., code) are well understood, very little guidance is available for performing reviews of software diagrams, which are rapidly becoming the dominant form of software specification and design. Drawing upon human cognitive theory, we study how 12 experienced software developers perform individual reviews on a software design containing two types of diagrams: entity-relationship diagrams and data flow diagrams. Verbal protocol methods are employed to describe and analyze defect search patterns among the software artifacts, both text and diagrams, within the design. Results indicate that search patterns that rapidly switch between the two design diagrams are the most effective. These findings support the cognitive theory thesis that how an individual processes information impacts processing success. We conclude with specific recommendations for improving the practice of reviewing software diagrams.",2004,0, 1326,Revisiting the problem of using problem reports for quality assessment,"In this paper, we describe our experience with using problem reports from industry for quality assessment. The non-uniform terminology used in problem reports and validity concerns have been the subject of earlier research but are far from settled. To distinguish between terms such as defects or errors, we propose to answer three questions on the scope of a study related to what (problem appearance or its cause), where (problems related to software; executable or not; or system), and when (problems recorded in all development life cycles or some of them). Challenges in defining research questions and metrics, collecting and analyzing data, generalizing the results and reporting them are discussed. Ambiguity in defining problem report fields and missing, inconsistent or wrong data threatens the value of collected evidence. Some of these concerns could be settled by answering some basic questions related to the problem reporting fields and improving data collection routines and tools.",2006,0, 1327,"RFID Adoption Theoretical Concepts and Their Practical Application in Fashion","This paper attempts to define a methodology for evaluating the potential benefits related to the adoption of innovative technologies such as Radio Frequency IDentification (RFID) and EPCglobal in some critical processes of a supply chain. The starting point of this work has been a quantitative and qualitative analysis carried out on a particular stakeholder of the pharmaceutical supply chain: the wholesaler. Some experimental measurements, derived by applying the Key Performance Indicator (KPI) method, are discussed.
The case study presented allowed us to derive guidelines and indications for the development of a practical unified approach able to evaluate RFID adoption from different perspectives and easily understandable also by a company's top management.",2011,0, 1328,Righting Software,"Three forms of intellectual property protection-trade secrecy, copyrights, and patents-are discussed. Deciding how to use and possibly combine these forms of protection depends on the nature of the protection needed and the ways in which the software will be distributed. It is argued that the more important the software is to a company and the more significant technologically it is to the industry, the more crucial a solid protection strategy becomes.",1993,0, 1329,Rigorously defining and analyzing medical processes: An experience report,"

This paper describes our experiences in defining the processes associated with preparing and administering chemotherapy and then using those process definitions as the basis for analyses aimed at finding and correcting defects. The work is a collaboration between medical professionals from a major regional cancer center and computer science researchers. The work uses the Little-JIL language to create precise process definitions, the Propel system to specify precise process requirements, and the FLAVERS system to verify that the process definitions adhere to the requirement specifications. The paper describes how these technologies were applied to successfully identify defects in the chemotherapy process. Although this work is still ongoing, early experiences suggest that this approach can help reduce medical errors and improve patient safety. The work has also helped us to learn about the desiderata for process definition and analysis technologies, both of which are expected to be broadly applicable to other domains.

",2008,0, 1330,Risk management in ERP project introduction: Review of the literature,"In recent years ERP systems have received much attention. However, ERP projects have often been found to be complex and risky to implement in business enterprises. The organizational relevance and risk of ERP projects make it important for organizations to focus on ways to make ERP implementation successful. We collected and analyzed a number of key articles discussing and analyzing ERP implementation. The different approaches taken in the literature were compared from a risk management point of view to highlight the key risk factors and their impact on project success. Literature was further classified in order to address and analyze each risk factor and its relevance during the stages of the ERP project life cycle.",2007,0, 1331,Risk Mitigation for Cross Site Scripting Attacks Using Signature Based Model on the Server Side,"Researchers and industry experts state that the Cross-site Scripting (XSS) is the top most vulnerability in the web applications. Attacks on web applications are increasing with the implementation of newer technologies, new html tags and new JavaScript functions. This demands an efficient approach on the server side to protect the users of the application. The proposed Signature based misuse detection approach introduces a security layer on top of the web application, so that the existing web application remain unchanged whenever a new threat is introduced that demands new security mechanisms. The web pages that are newly introduced in the web application need not be changed to incorporate the security mechanisms as the solution is implemented on top of the web application. To test the effectiveness of this approach, the vulnerable web inputs listed in research sites, black-hat hacker sites and in the black hat hacker sites are considered. The proposed security system was run on JBoss server and tested on those vulnerable inputs collected from the above sites. There are around 100 variants of XSS attacks found during the testing. It has been found that the approach is very effective as it addresses the vulnerabilities at a granular level of tags and attributes, in addition to addressing the XSS vulnerabilities.",2007,0, 1332,Roadmapping Working Group 2 Results,"In this two-part paper we describe the ongoing standardization work on designing an autonomicity-enabled mesh architecture framework. This is work in progress being carried out by the AFI (Autonomic network engineering for the self-managing Future Internet) working group of the European Telecommunications Standards Institute (ETSI). In the first part (a separate paper), we briefly described the AFI GANA (Generic Autonomic Network Architecture) Reference Model for autonomic network engineering, cognition and self-management, and discussed general instantiation issues. In this second part we describe the steps needed to accomplish an instantiation of GANA onto wireless mesh networks - thereby creating an autonomicity-enabled wireless mesh architecture. Additionally, we present an example use case showcasing autonomic cooperative networking.",2012,0, 1333,ROADNet: A network of SensorNets,"As sensor networks become denser and more widely deployed, the potential develops for interconnecting these networks to combine datasets, share technological solutions, and to conduct cross-disciplinary research and monitoring operations that rely on several signal domains simultaneously. 
To that end, the real-time observatories, applications and data management network (ROADNet) research project is connecting multiple sensor networks deployed by collaborating research projects into a single network in order to support a variety of research topics including coastal ocean observing, microclimatology and seismology. This paper gives a brief overview of the ROADNet project and discusses some of the implementation challenges we uncovered while building and maintaining the ROADNet system. We encountered challenges on several fronts including building effective programming abstractions for sensor networks, building tools for managing large-scale data in a scalable manner, and building efficient tools for deploying and managing hundreds of sensors. We discuss how these challenges were addressed and some of the lessons learned from collaborations with domain scientists using our network to conduct their research",2006,0, 1334,"Rules, Norms, and Individual Preferences for Action: An Institutional Framework to Understand the Dynamics of e-Government Evolution","AbstractRecently national, state, and local governments from many countries have been attempting to reform their administrative structure, processes, and regulatory frameworks. E-government can be seen as a powerful approach for government administrative reform. The dynamics and evolution of e-government is a complex process resulting from strategic behavior, development of rules and standards, and appropriation of those rules and standards by the international community. The purpose of this paper is to present a theoretical and analytical framework that explains how this e-government evolution has taken place. Based on a literature review about the study of rules and principles from both institutional and principal-agent theories, a dynamic feedback-rich model is developed and a number of lessons are presented and discussed.",2004,0, 1335,"Running an E-Learning Project: Technology, Expertise, Pedagogy","Abstractthis paper is focused on methodological issues of an e-learning project: components, coordination tasks, actors role, design and implementation steps",2004,0, 1336,SCALABLE AUTOMATED METHODS FOR DYNAMIC PROGRAM ANALYSIS,"Aiming at the problem of higher memory consumption and lower execution efficiency during the dynamic detecting to C/C++ programs memory vulnerabilities, this paper presents a dynamic detection method called ISC. The ISC improves the Safe-C using pointer analysis technology. Firstly, the ISC defines a simple and efficient fat pointer representation instead of the safe pointer in the Safe-C. Furthermore, the ISC uses the unification-based analysis algorithm with one level flow static pointer. This identification reduces the number of pointers that need to be converted to fat pointers. Then in the process of program running, the ISC detects memory vulnerabilities through constantly inspecting the attributes of fat pointers. Experimental results indicate that the ISC could detect memory vulnerabilities such as buffer overflows and dangling pointers. Comparing with the Safe-C, the ISC dramatically reduces the memory consumption and lightly improves the execution efficiency.",2013,0, 1337,Schedule Estimation and Uncertainty Surrounding the Cone of Uncertainty,"Software development project schedule estimation has long been a difficult problem. The Standish CHAOS Report indicates that only 20 percent of projects finish on time relative to their original plan. 
Conventional wisdom proposes that estimation gets better as a project progresses. This concept is sometimes called the cone of uncertainty, a term popularized by Steve McConnell (1996). The idea that uncertainty decreases significantly as one obtains new knowledge seems intuitive. Metrics collected from Landmark's projects show that the estimation accuracy of project duration followed a lognormal distribution, and the uncertainty range was nearly identical throughout the project, in conflict with popular interpretation of the ""cone of uncertainty""",2006,0, 1338,"Scientific Computing and the Chebfun System Case for Support","In the present work an attempt is made to develop a decision support system (DSS) using pathological attributes to predict whether fetal delivery should be normal or by surgical procedure. The pathological tests like blood sugar (BR), blood pressure (BP), resistivity index (RI) and systolic/diastolic (S/P) ratio will be recorded at the time of delivery. All attributes lie within a specific range for a normal patient. The database consists of the attributes for both cases, i.e. normal and surgical delivery. A soft computing technique, namely artificial neural networks (ANN), is used for the simulator. The attributes from the dataset are used for training & testing of the ANN models. Three models of ANN are trained using the back-propagation algorithm (BPA), a radial basis function network (RBFN), and one hybrid approach, the adaptive neuro-fuzzy inference system (ANFIS). The design factors have been changed to get the optimized model, which gives the highest recognition score. The optimized models of BPA, RBFN and ANFIS gave accuracies of 93.75, 99.00 and 99.50 % respectively. Thus ANFIS is the best network for the mentioned problem. This system will assist the doctor in making decisions at the critical time of fetal delivery.",2009,0, 1339,Scientific research ontology to support systematic review in software engineering,"The term systematic review is used to refer to a specific methodology of research, developed in order to gather and evaluate the available evidence pertaining to a focused topic. It represents a secondary study that depends on primary study results to be accomplished. Several primary studies have been conducted in the field of Software Engineering in the last years, determining an increasing improvement in methodology. However, in most cases software is built with technologies and processes for which developers have insufficient evidence to confirm their suitability, limits, qualities, costs, and inherent risks. Conducting systematic reviews in Software Engineering consists in a major methodological tool to scientifically improve the validity of assertions that can be made in the field and, as a consequence, the reliability degree of the methods that are employed for developing software technologies and supporting software processes. This paper aims at discussing the significance of experimental studies, particularly systematic reviews, and their use in supporting software processes. A template designed to support systematic reviews in Software Engineering is presented, and the development of ontologies to describe knowledge regarding such experimental studies is also introduced.",2007,0,1340
It represents a secondary study that depends on primary study results to be accomplished. Several primary studies have been conducted in the field of Software Engineering in the last years, determining an increasing improvement in methodology. However, in most cases software is built with technologies and processes for which developers have insufficient evidence to confirm their suitability, limits, qualities, costs, and inherent risks. Conducting systematic reviews in Software Engineering consists in a major methodological tool to scientifically improve the validity of assertions that can be made in the field and, as a consequence, the reliability degree of the methods that are employed for developing software technologies and supporting software processes. This paper aims at discussing the significance of experimental studies, particularly systematic reviews, and their use in supporting software processes. A template designed to support systematic reviews in Software Engineering is presented, and the development of ontologies to describe knowledge regarding such experimental studies is also introduced.",2007,0, 1341,Scorm run-time environment as a service,"Standardization efforts in e-learning are aimed at achieving interoperability among Learning Management Systems (LMSs) and Learning Object (LO) authoring tools. Some of the specifications produced have reached quite a good maturity level and have been adopted in software systems. Some others, such as SCORM Run-Time Environment (RTE), have not reached the same success, probably due to their intrinsic difficulty in being understood adequately and implemented properly. The SCORM RTE defines a set of functionalities which allow LOs to be launched in the LMS and to exchange data with it. Its adoption is crucial in the achievement of full interoperability among LMSs and LO authoring tools. In order to boost the adoption of SCORM RTE in LMSs, we propose a Service Oriented Architecture (SOA)-based reference model for offering the SCORM RTE functionalities as a service, external to the LMS. By externalizing functionalities from LMSs, our model encourages the independent development of e-learning system components, allowing e-learning software producers to gain several benefits, such as better software re-use and easier integration and complexity management, with a consequent cost reduction. The proposed model is validated through a prototype system, in which a popular LMS, developed with PHP language, is enhanced with the support of SCORM RTE functionalities, provided by an external Web service based on Java technology.",2005,0, 1342,Scrum and Team Effectiveness: Theory and Practice,"The need to evaluate Information Visualization (InfoVis) systems, just as other information systems (IS), cannot be overestimated. In its case, evaluating InfoVis has proved to be more challenging, because many of the previous evaluation studies have been on user interface of IS generally, with few attending to the peculiarities of InfoVis. The few InfoVis evaluations recorded have been mainly on its perceptual function through its interface evaluation, and others on its cognitive support through the knowledge discovery process. Evaluating InfoVis decision support effectiveness has been sparingly attended to. This experience is argued to be caused by insufficient explicit evaluation methods for InfoVis' associated abstract concepts - decision support as an example. 
This paper uses an unobtrusive research method involving thematic analysis of InfoVis-related theoretical literatures to characterize and categorize the InfoVis evaluation theories. The result presents perceptual, cognitive and decision supports as InfoVis evaluation paradigms. The theoretical characterization posited that these supports are sequential and of dependent phases. Finally, based on the theoretical argument and the findings, an evaluation framework for InfoVis' decision support effectiveness is proposed, and the process of its experimental evaluation is suggested.",2015,0, 1343,Search Engine Overlaps : Do they agree or disagree?,"Secondary studies, such as systematic literature reviews and mapping studies, are an essential element of the evidence-based paradigm. A critical part of the review process is the identification of all relevant research. As such, any researcher intending to conduct a secondary review should be aware of the strengths and weakness of the search engines available. Analyse the overlap between search engine results for software engineering studies. Three independent studies were conducted to evaluate the overlap between multiple search engines for different search areas. The findings indicate that very little overlap was found between the search engines. To complete a systematic review, researchers must use multiple search terms and search engines. The lack of overlap might also be caused by inconsistent keyword selection amongst authors.",2007,0, 1344,Second international workshop on interdisciplinary software engineering research (WISER),"WISER is a series of international workshops that focus on identifying and transferring techniques from other disciplines that might usefully be applied to software engineering research and practice.The workshops address this topic through presentations and discussions of both actual case studies and of ways in which potentially useful approaches can be identified, adapted and adopted within software engineering.The papers in the proceedings address topics ranging from a general approach to identifying domains that have similar experimental practices to software engineering to specific case studies of the application of techniques from, for example, graph theory, strategic planning, economics and social and cognitive theory.",2006,0, 1345,Second international workshop on interdisciplinary software engineering research: (WISER'06),"WISER is a series of international workshops that focus on identifying and transferring techniques from other disciplines that might usefully be applied to software engineering research and practice.The workshops address this topic through presentations and discussions of both actual case studies and of ways in which potentially useful approaches can be identified, adapted and adopted within software engineering.The papers in the proceedings address topics ranging from a general approach to identifying domains that have similar experimental practices to software engineering to specific case studies of the application of techniques from, for example, graph theory, strategic planning, economics and social and cognitive theory.",2006,0, 1346,Secure Software Development - Identification of Security Activities and Their Integration in Software Development Lifecycle,"Today?s software is more vulnerable to attacks due to increase in complexity, connectivity and extensibility. 
Securing software is usually considered a post-development activity, and not much importance is given to it during the development of software. However, the amount of loss that organizations have incurred over the years due to security flaws in software has invited researchers to find out better ways of securing software. In the light of research done by many researchers, this thesis presents how software can be secured by considering security in different phases of the software development life cycle. A number of security activities have been identified that are needed to build secure software, and it is shown how these security activities relate to the software development activities of the software development life cycle.",2007,0, 1347,Security and trust requirements engineering.,"Security requirements often have implicit assumptions about trust relationships among actors. The more actors trust each other, the less stringent the security requirements are likely to be. Trust always involves the risk of mistrust; hence, trust implies a trade-off: gaining some benefits from depending on a second party in trade for getting exposed to security and privacy risks. When trust assumptions are implicit, these trust trade-offs are made implicitly and in an ad-hoc way. By taking advantage of agent- and goal-oriented analysis, we propose a method for discovering trade-offs that trust relationships bring. This method aims to help the analyst select among alternative dependency relationships by making explicit trust trade-offs. We propose a simple algorithm for making the trade-offs in a way that reaches a balance between costs and benefits.",2009,0, 1348,Security Considerations in SCADA Communication Protocols,"It has been shown that secure communication between two partners can be achieved with devices containing components with quantum properties. While such devices are in timid stages of practical availability, the quantum properties exhibited are well defined theoretically: quantum bits (qubits) in persistent quantum states, quantum transformations or gates applicable on qubits, and quantum communication channels between devices. The present paper measures and verifies by simulation a security level of a quantum communication protocol that does not use an encryption/decryption key. The algorithm is simpler than the two-step protocols that involve the distribution of a secret key first. Our simulations measure the security level of the protocol depending on several parameters: the number of encoding bases (two or three), the length of the signature string attached to the message, the percentage of qubits the eavesdropper inspects, etc.",2015,0, 1349,Security Requirements for the Rest of Us: A Survey,"Most software developers aren't primarily interested in security. For decades, the focus has been on implementing as much functionality as possible before the deadline, and patching the inevitable bugs when it's time for the next release or hot fix. However, the software engineering community is slowly beginning to realize that information security is also important for software whose primary function isn't related to security.
Security features or mechanisms typically aren't prominent in such software's user interface.",2008,0, 1350,Seeds of Evidence: Integrating Evidence-Based Software Engineering,"With increasing interest in evidence-based software engineering (EBSE), software engineering faculty face the challenge of educating future researchers and industry practitioners regarding the generation and use of EBSE results. We propose development and population of a community-driven Web database containing summaries of EBSE studies. We present motivations for inclusion of these activities in a software engineering course, and address the particular appeal of a community-driven Web database to students who have grown up in the Internet generation. We present our experience with integrating these activities into a graduate software engineering course, and report student and industry practitioner assessments of the resulting artifacts.",2008,0, 1351,Seeing inside: Using social network analysis to understand patterns of collaboration and coordination in global software teams,"One of the pervasive challenges facing any software development team is getting the right level and timing of communication to ensure that people are able to coordinate their work effectively. Communication issues are difficult to address because important aspects of communication are largely invisible to both management and the individuals on the team. Communication challenges are further exaggerated in global software teams, because of the different time-zones, cultures, and working environments. Social Network Analysis (SNA) is an established method for revealing patterns of human communication and decision-making. This tutorial introduces students to basic concepts in SNA, illustrates how SNA can be used to understand the dynamics of and address common communication problems in global software teams, and provides structured exercises in data capture, analysis and interpretation.",2007,0, 1352,SEER: charting a roadmap for software engineering education,"This past decade has seen a number of innovative, pioneering projects related to the development of software engineering both as a profession and as an academic discipline. However, most of these projects either are complete or projected for completion by 2004, and it is unclear as to what the software engineering education community should be doing next to build on this work. This half-day workshop will bring together stakeholders in software engineering education (both academic and industry) to discuss this topic and to outline a Software Engineering Education Roadmap (SEER) which could potentially provide needed direction for this community over the next several years. A website and email list for SEER was created in order to start the discussion before the workshop, will be used both to disseminate the roadmap formulated by the participants and continue the dialog after it.",2004,0, 1353,Selecting Best Practices for Effort Estimation,"Effort estimation often requires generalizing from a small number of historical projects. Generalization from such limited experience is an inherently underconstrained problem. Hence, the learned effort models can exhibit large deviations that prevent standard statistical methods (e.g., t-tests) from distinguishing the performance of alternative effort-estimation methods. The COSEEKMO effort-modeling workbench applies a set of heuristic rejection rules to comparatively assess results from alternative models. 
Using these rules, and despite the presence of large deviations, COSEEKMO can rank alternative methods for generating effort models. Based on our experiments with COSEEKMO, we advise a new view on supposed ""best practices"" in model-based effort estimation: 1) Each such practice should be viewed as a candidate technique which may or may not be useful in a particular domain, and 2) tools like COSEEKMO should be used to help analysts explore and select the best method for a particular domain",2006,0, 1354,Selecting Empirical Methods for Software Engineering Research,"Software engineering research can be done in many ways, in particular it can be done in different ways when it comes to working with industry. This paper presents a list of top 10 challenges to work with industry based on our experience from working with industry in a very close collaboration with continuous exchange of knowledge and information. The top 10 list is based on a large number of research projects and empirical studies conducted with industrial research partners since 1983. It is concluded that close collaboration is a long-term undertaking and a large investment. The importance of addressing the top 10 challenges is stressed, since they form the basis for a long-term sustainable and successful collaboration between industry and academia.",2013,0, 1355,"Self-efficacy, Training Effectiveness, and Deception Detection: A Longitudinal Study of Lie Detection Training","AbstractStudies examining the ability to detection deception have consistently found that humans tend to be poor detectors. In this study, we examine the roles of self-efficacy and training over time. We conducted a field experiment at a military training center involving 119 service members. The subjects were given two sessions of deception detection training. Their performance history, perceived effectiveness of the training, and perceived self-efficacy were measured over time. Two significant findings were identified. First, training novelty and relativity played a noticeable role in the subjects perceptions of effectiveness. Second, influencing subject self-efficacy to detect deception requires time and multiple iterations of the task so as to allow the subjects the opportunity to calibrate their skills. We believe that continued research along this line will ultimately results in marked improvement in deception detection ability.",2004,0, 1356,Semantic Video Annotation by Mining Association Patterns from Visual and Speech Features,"To support effective multimedia information retrieval, video annotation has become an important topic in video content analysis. Existing video annotation methods put the focus on either the analysis of low-level features or simple semantic concepts, and they cannot reduce the gap between low-level features and high-level concepts. In this paper, we propose an innovative method for semantic video annotation through integrated mining of visual features, speech features, and frequent semantic patterns existing in the video. The proposed method mainly consists of two main phases: 1) Construction of four kinds of predictive annotation models, namely speech-association, visual-association, visual-sequential, and statistical models from annotated videos. 2) Fusion of these models for annotating un-annotated videos automatically. The main advantage of the proposed method lies in that all visual features, speech features, and semantic patterns are considered simultaneously. 
Moreover, the utilization of high-level rules can effectively complement the insufficiency of statistics-based methods in dealing with complex and broad keyword identification in video annotation. Through empirical evaluation on NIST TRECVID video datasets, the proposed approach is shown to enhance the performance of annotation substantially in terms of precision, recall, and F-measure.",2008,0, 1357,Separating Essentials from Incidentals: An Execution Architecture for Real-Time Control Systems,"Source code for real-time control systems often intertwines several concerns such as functionality, data flow, control flow, synchronization, timing, and architectural style. This combination of concerns makes software harder to write correctly, harder to verify, and harder to reuse. This paper proposes an execution architecture that makes such systems more analyzable, verifiable, and reusable by separating ""essential code"" (software specific to the physical platform, the physical environment, and mission goals) from ""incidental code"" (all other software, particularly architectural support software for combining together essential components). This architecture elevates two forms of processing as first-class items: individual transformations of global state, as defined in pure functions, and rules of interaction of transformations, as managed by an engine that maintains certain invariants. Importantly, the explicit specification of these two forms of processing by systems engineers reduces sources of ambiguity in requirements",2004,0, 1358,Separating the Wheat from the Chaff: Practical Anomaly Detection Schemes in Ecological Applications of Distributed Sensor Networks,"

We develop a practical, distributed algorithm to detect events, identify measurement errors, and infer missing readings in ecological applications of wireless sensor networks. To address issues of non-stationarity in environmental data streams, each sensor-processor learns statistical distributions of differences between its readings and those of its neighbors, as well as between its current and previous measurements. Scalar physical quantities such as air temperature, soil moisture, and light flux naturally display a large degree of spatiotemporal coherence, which gives a spectrum of fluctuations between adjacent or consecutive measurements with small variances. This feature permits stable estimation over a small state space. The resulting probability distributions of differences, estimated online in real time, are then used in statistical significance tests to identify rare events. Utilizing the spatio-temporal distributed nature of the measurements across the network, these events are classified as single mode failures - usually corresponding to measurement errors at a single sensor - or common mode events. The event structure also allows the network to automatically attribute potential measurement errors to specific sensors and to correct them in real time via a combination of current measurements at neighboring nodes and the statistics of differences between them. Compared to methods that use Bayesian classification of raw data streams at each sensor, this algorithm is more storage-efficient, learns faster, and is more robust in the face of non-stationary phenomena. Field results from a wireless sensor network (Sensor Web) deployed at Sevilleta National Wildlife Refuge are presented.

",2007,0, 1359,Separation of structural concerns in physical hypermedia models.,"

In this paper we propose a modeling and design approach for building physical hypermedia applications, i.e. those mobile applications in which physical and digital objects are related and explored using the hypermedia paradigm. We show that by separating the geographical and domain concerns we gain in modularity and ease of evolution. We first review the state of the art of this kind of software system, arguing for the need for a systematic modeling approach; we next present a light extension to the OOHDM design approach, incorporating physical objects and “walkable” links; next we generalize our approach and show how to improve concern separation and integration in hypermedia design models. We compare our approach with others in the field of physical and ubiquitous hypermedia and in the more generic software engineering field. Some concluding remarks and further work are finally presented.

",2005,0, 1360,Service Engineering Methodology,"In order to pursue sustainability, commercial activities between the supply and demand sides must be somehow changed. In the context of eco-design, producers must need a much bigger framework than is available in a bunch of current eco-design techniques. This calls for establishing a new discipline. The authors are carrying out a new discipline called Service Engineering (SE). Based on SE, this paper aims at proposing a novel eco-design methodology of service toward sustainable production and consumption. For the sake of this, a methodology of modeling and designing services is presented. Then, it is proved to be effective through an application. In SE, positive and negative changes of customers are modeled as value and cost, respectively. In addition, the model to describe a target customer is provided for grounding the identified value. A design methodology including the identification of value with realization structures is also provided. Furthermore, SE allows designing services in parallel with products. In the application to service redesign of an existing hotel in Italy, it was demonstrated that the presented methods and tools facilitate designers adding new value like the view of outside through a window in energy-saving structure. It was also proved to deal with both products and services through generating a solution called ""cash-back per non-wash "" system for washing towels",2005,0, 1361,Service-oriented software engineering (SOSE) framework,"The gap between business decision making and software engineering causes inefficiency and quality problems in software development. Software engineers do not understand organization's value creation objectives and their influence on software production and structure. For this reason software does not fulfill the requirements of business and software quality is inadequate too often. The objective of the authors in the service-oriented software engineering project (SOSE) is to develop methods and tools to improve quality and profitability of software development. In this paper the authors described SOSE framework and clarify with examples its phases, utility, and application in pilot projects. SOSE framework's first activity is to create a well-defined business case. Then, business processes and data concepts are identified, to meet business requirements of the business case, and modelled with informal diagrams like UML and BPML. Finally, the refinement continues with use case maps, system-level services, and business service components. It is proposed that service, process, entity, and utility components are used as design elements of the business service component. In implementation platform independent and platform specific models were utilized. This study has been carried out in cooperation with ICT companies and their customers in electricity domain in Finland.",2005,0, 1362,SESQ: A Novel System for Building Domain Specific Web Search Engines,"

Nowadays the Web represents a huge heterogeneous data source. The rapid growth of data volume and the dynamic nature of the Web make it difficult for users to find relevant information for a specific domain. To meet this demand, we have designed and implemented a novel system, called SESQ, for building domain-specific search engines. Using SESQ, the user first specifies the data schema of the domain and provides the seeds for the data of the schema, and then writes extraction rules to indicate how to obtain instance data of the schema from relevant web pages. The system extracts the instance data for the schema from the web pages and finds new web sites and web pages relevant to the schema by crawling. SESQ provides a highly efficient data storage and index structure for the collected data, and provides an interactive query interface for end users to pose structural queries on the data. In addition, the data can be further analyzed by analytical tools (such as OLAP).

",2006,0, 1363,Shakra: tracking and sharing daily activity levels with unaugmented mobile phones,"

This paper explores the potential for use of an unaugmented commodity technology, the mobile phone, as a health promotion tool. We describe a prototype application that tracks the daily exercise activities of people, using an Artificial Neural Network (ANN) to analyse GSM cell signal strength and visibility to estimate a user's movement. In a short-term study of the prototype that shared activity information amongst groups of friends, we found that awareness encouraged reflection on, and increased motivation for, daily activity. The study raised concerns regarding the reliability of ANN-facilitated activity detection in the 'real world'. We describe some of the details of the pilot study and introduce a promising new approach to activity detection that has been developed in response to some of the issues raised by the pilot study, involving Hidden Markov Models (HMM), task modelling and unsupervised calibration. We conclude with our intended plans to develop the system further in order to carry out a longer-term clinical trial.

",2007,0, 1364,Shape and pose parameter estimation of 3D multi-part objects,"This paper introduces Cubistic Representation as a novel 3D surface shape model. Cubistic representation is a set of 3D surface fragments, each fragment contains subject's 3D surface shape and its color and redundantly covers the subject surface. By laminating these fragments using a given pose parameter, the subject's appearance can be synthesized. Using cubistic representation, we propose a real-time 3D rigid object tracking approach by acquiring the 3D surface shape and its pose simultaneously. We use the particle filter scheme for both shape and pose estimation, each fragment is used as a partial shape hypothesis and is sampled and refined by a particle filter. We also use the RANSAC algorithm to remove wrong fragments as outliers to refine the shape. We also implemented an online demonstration system with GPU and a Kinect sensor and evaluated the performance of our approach in a real environment.",2013,0, 1365,Sharing Reasoning About Faults in Spreadsheets: An Empirical Study,"Although researchers have developed several ways to reason about the location of faults in spreadsheets, no single form of reasoning is without limitations. Multiple types of errors can appear in spreadsheets, and various fault localization techniques differ in the kinds of errors that they are effective in locating. In this paper, we report empirical results from an emerging system that attempts to improve fault localization for end-user programmers by sharing the results of the reasoning systems found in WYSIWYT and UCheck. By evaluating the visual feedback from each fault localization system, we shed light on where these different forms of reasoning and combinations of them complement - and contradict - one another, and which heuristics can be used to generate the best advice from a combination of these systems",2006,0, 1366,Short and Long-term Impacts of SPI in Small Software Firms,"Software process improvement for small firms is a significant challenge. The RAPID method provides a way for small firms to participate in process improvement programs without the enormous expenditures usually associated with such initiatives. There are successes but unfortunately it is not always just software processes that need improvement; business processes are also problematic. This paper reports on an Australian experience with the RAPID method and, in a retrospective, reviews the outcomes for five small firms. Their stories provide a range of experiences, and highlight the concerns in implementing improvements within this class of organisation.",2006,0, 1367,Similarities in Business and IT Professional Ethics: The Need for and Development of A Comprehensive Code of Ethics,"The study of business ethics has led to the development of various principles that are the foundation of good and ethical business practices. A corresponding study of Information Technology (IT) professionals? ethics has led to the conclusion that good ethics in the development and uses of information technology correspond to the basic business principle that good ethics is good business. Ergo, good business ethics practiced by IT professionals is good IT ethics and vice versa. IT professionals are professionals in businesses; a difficulty presented to these professionals, however, is the number and diversity of codes of ethics to which they may be held. Considering the existence of several formalized codes of ethics prepared by various IT professionals? 
associations, a more harmonized approach seems more reasonable. This paper attempts to present a review of the purpose of codes of ethics, the persons who should be covered by such codes and to organize codes of ethics for business in general and IT professionals in particular and to make the argument that, once again, good ethics is good business practice, regardless of the profession or occupation concerned",2005,0, 1368,Simplified Use Case Driven Approach (SUCADA) for Conversion of Legacy System to COTS Package,"The conversion of a legacy system to a system based on a commercial-off-the-shelf (COTS) package demands a dedicated guidance. The assumption that it is just a matter of adopting a selected package may prove disastrous and even more expensive than building the system in house and from scratch. Building a software solution based on a COTS package not only has its risks, but it is also different from a custom development effort, and it needs to follow a rigorous methodology for a successful implementation. Therefore, it is necessary to define how to solve some of the challenges that this type of project presents, and how to balance customer requirements with the features offered by the COTS package. To successfully and efficiently convert a legacy system into a new system based on COTS package, we developed and present a methodology that utilizes a general process flow chart, simplified use cases, and a mapping to the COTS package functionality. We also present the findings of a case study on the applicability and effectiveness of the proposed methodology for the conversion of a legacy laboratory information management system (LIMS).",2008,0, 1369,Simulating Families of Studies to Build Confidence in Defect Hypotheses,"While it is clear that there are many sources of variation from one development context to another, it is not clear a priori what specific variables will influence the effectiveness of a process in a given context. For this reason, we argue that knowledge about software process must be built from families of studies, in which related studies are run within similar contexts as well as very different ones. Previous papers have discussed how to design related studies so as to document as precisely as possible the values of likely context variables and be able to compare with those observed in new studies. While such a planned approach is important, we argue that an opportunistic approach is also practical. The approach would combine results from multiple individual studies after the fact, enabling recommendations to be made about process effectiveness in context. In this paper, we describe two processes with which we have been working to build empirical knowledge about software development processes: one is a manual and informal approach, which relies on identifying common beliefs or 'folklore' to identify useful hypotheses and a manual analysis of the information in papers to investigate whether there is support for those hypotheses; the other is a formal approach based around encoding the information in papers into a structured hypothesis base that can then be searched to organize hypotheses and their associated support. We test these processes by applying them to build knowledge in the area of defect folklore (i.e. commonly accepted heuristics about software defects and their behavior). We show that the formal methodology can produce useful and feasible results, especially when it is compared to the results output from the more manual, expert-based approach. 
The formalized approach, by relying on a reusable hypothesis base, is repeatable and also capable of producing a more thorough basis of support for hypotheses, including results from papers or articles that may have been overlooked or not considered by the experts.",2005,0, 1370,Simulating Fighter Pilots,"Endotracheal Intubation (ETI) is a common airway procedure used to connect the larynx and the lungs through a windpipe in patients in emergency situations. The process is carried out by a laryngoscope inserted into the mouth, used to help doctors in visualizing the glottis and inserting the tube. Currently, very few studies on objective evaluation of the biomechanics of the doctors during the procedure have been done. Additionally, these studies have been concentrated only on the overall performance analysis, without any segmentation, with a consequent loss of important information. In this paper, the authors present a preliminary study on a methodology to objectively evaluate and segment the biomechanical performance of doctors during the ETI, using surface electromyography and inertial measurement units. In particular, the validation has been performed by comparing three kinds of laryngoscopes involving an expert doctor. Finally, results are presented and commented upon.",2013,0, 1371,Simulation Coercion Applied to Multiagent DDDAS,"The unpredictable run-time configurations of dynamic, data-driven application systems require flexible simulation components that can adapt to changes in the number of interacting components, the syntactic definition of their interfaces, and their role in the semantic definition of the entire system. Simulation coercion provides one solution to this problem through a human-controlled mix of semi-automated analysis and optimization that transforms a simulation to meet a new set of requirements posed by dynamic data streams. This paper presents an example of one such coercion tool that uses off-line experimentation and similarity-based lookup functions to transform a simulation to a reusable abstract form that extends a static feedback control algorithm to a dynamic, data-driven version that capitalizes on extended run-time data to improve performance.",2004,0, 1372,Simulation-specific characteristics and software reuse,"We argue that simulations possess interesting characteristics that facilitate adaptation. Simplifying assumptions, stochastic sampling, and event generation are common features which lend themselves to adaptation for reuse. In this paper, we explore simulation-specific characteristics amenable to adaptation and the ways they can be exploited in support of reuse. Our work is of particular relevance to research in component based simulations and dynamic data driven application systems, where adaptability and reuse are essential.",2005,0, 1373,Simulation-Supported Workflow Optimization in Process Engineering,"The results of testing the OPTAN software on examples of modeling and optimization of technological processes for printed circuit board manufacturing using various techniques are presented. The software's efficiency is confirmed, and ways to improve research on these technological processes are suggested.",2012,0, 1374,Simulators for Driving Safety Study - A Literature Review,"Warning systems are being developed to improve traffic safety using visual, auditory, and/or tactile displays by informing drivers of the existence of a threat in the roadway. 
Behavioral and safety effects of driver dependence on such a warning system, especially when the warning system is unreliable, were investigated in a driving-simulator study. Warning-system accuracy was defined in terms of miss rate (MR) and positive predictive value (PPV) (PPV is the fraction of warnings that were correct detections). First, driver behavior and performance were measured across four warning-system accuracy conditions. Second, the authors estimated the probability of collision in each accuracy condition to measure the overall system effectiveness in terms of safety benefit. Combining these results, a method was proposed to evaluate the degree of driver dependence on a warning system and its effect on safety. One major result of the experiment was that the mean driving speed decreased as the missed detection rate increased, demonstrating a decrease in drivers' reliance on warnings when the system was less effective in detecting threats. Second, both the acceleration-pedal and brake-pedal reaction times increased as the PPV of the warning system decreased, demonstrating a decrease in driver compliance with warnings when the system became more prone to false alarms. A key implication of the work is that performance is not necessarily directly correlated to warning-system quality or trends in subjective ratings, highlighting the importance of objective evaluation. Practical applications of the work include design and analysis of in-vehicle warning systems",2006,0, 1375,Single Image Subspace for Face Recognition,"This study proposes a new feature descriptor, local directional mask maximum edge pattern, for image retrieval and face recognition applications. Local binary pattern (LBP) and LBP variants collect the relationship between the centre pixel and its surrounding neighbours in an image. Thus, LBP based features are very sensitive to the noise variations in an image. In contrast, the proposed method collects the maximum edge patterns (MEP) and maximum edge position patterns (MEPP) from the magnitude directional edges of the face/image. These directional edges are computed with the aid of directional masks. Once the directional edges (DE) are computed, the MEP and MEPP are coded based on the magnitude of DE and position of maximum DE. Further, the robustness of the proposed method is increased by integrating it with the multiresolution Gaussian filters. The performance of the proposed method is tested by conducting four experiments on open access series of imaging studies-magnetic resonance imaging, Brodatz, MIT VisTex and Extended Yale B databases for biomedical image retrieval, texture retrieval and face recognition applications. The results show that the proposed method achieves a significant improvement compared with LBP and LBP variant features in terms of their evaluation measures on the respective databases.",2016,0, 1376,"SIP-Based Content Development for Wireless Mobile Devices with Delay Constraints","The use of mobile phones and PDAs is increasing by the minute, and user demands and needs are also increasing. Usage of the SIP protocol is also increasing. Thus, combining the two by creating applications meets the demands and needs of mobile phone and PDA users and the industry. This paper illustrates and describes a way of developing User Agents for wireless devices, such as handsets and PDAs. We briefly describe the tools we used to develop the applications. 
We also illustrate how we used the described tools, by showing a simulation of our developed system and how it works.",2005,0, 1377,Slice-Hoisting for Array-Size Inference in MATLAB,"Multilevel converters are power electronics system models not well defined; however, there are parameter variation problems. The model is multivariable, complex, and nonlinear. To combat such problems, various adaptive control techniques have been proposed. Fuzzy control doesn't strictly need any mathematical model plant. It is based on operator experience and heuristics, and it is easy to apply. This control is basically an adaptive and nonlinear control which gives robust performance for a nonlinear plant with complex behavior. Using MATLAB fuzzy logic toolbox, practical experience, and heuristics, a control algorithm of a multilevel converter based on a fuzzy control is presented in this paper. A complete fuzzy inference process to control the converter is shown. Simulation results show that the implemented fuzzy control algorithm is a good adaptive control",2007,0, 1378,SM CMM Model to Evaluate and Improve the Quality of Software Maintenance Process: Overview of the model,"Software maintenance function suffers from a scarcity of management models that would facilitate its evaluation, management and continuous improvement. This paper is part of a series of papers that presents a software maintenance capability maturity model (SMCMM). The contributions of this specific paper are: 1) to describe the key references of software maintenance; 2) to present the model update process conducted during 2003; and 3) to present, for the first time, the updated architecture of the model.",2004,0, 1379,SMART Rehabilitation: Implementation of ICT Platform to Support Home-Based Stroke Rehabilitation,"

Stroke is the biggest cause of severe disability in the UK. The National Service Framework for Older People recommends that rehabilitation should continue until maximum recovery has been achieved. However, due to cost factors, inpatient length of stay is decreasing and outpatient rehabilitation facilities are limited. The level of therapy could be improved by providing assistive technology, in the form of tele-rehabilitation, within patients' homes. This paper presents the development of the SMART rehabilitation system, a home-based tele-rehabilitation system to augment upper limb rehabilitation, with the emphasis on the implementation of the system's ICT platform and user interface design.

",2007,0, 1380,SmartTransplantation - Allogeneic Stem Cell Transplantation as a Model for a Medical Expert System,

Public health care has to make use of the potential of IT to meet the enormous demands on patient management in the future. Embedding artificial intelligence in medicine may lead to an increase in quality and safety. One possibility in this respect is an expert system. A precondition for an expert system is structured data sources from which relevant data for the proposed decision can be extracted. Therefore, the demonstrator 'allo-tool' was designed. The concept of introducing a 'Medical decision support system based on the model of Stem Cell Transplantation' was developed afterwards. The objectives of the system are (1) to improve patient safety, (2) to support patient autonomy, and (3) to optimize the workflow of medical personnel.

,2007,0, 1381,SME Adoption of Enterprise Systems in the Northwest of England,"AbstractThe attention of software vendors has moved recently to SMEs (small- to medium-sized enterprises), offering them a vast range of enterprise systems (ES), which were formerly adopted by large firms only. From reviewing information technology innovation adoption literature, it can be argued that IT innovations are highly differentiated technologies for which there is not necessarily a single adoption model. Additionally, the question of why one SME adopts an ES while another does not is still understudied. This study intends to fill this gap by investigating the factors impacting SME adoption of ES. A qualitative approach was adopted in this study involving key decision makers in nine SMEs in the Northwest of England. The contribution of this study is twofold: it provides a framework that can be used as a theoretical basis for studying SME adoption of ES, and it empirically examines the impact of the factors within this framework on SME adoption of ES. The findings of this study confirm that factors impacting the adoption of ES are different from factors impacting SME adoption of other previously studied IT innovations. Contrary to large companies that are mainly affected by organizational factors, this study shows that SMEs are not only affected by environmental factors as previously established, but also affected by technological and organizational factors.",2007,0, 1382,SOC: A Distributed Decision Support Architecture for Clinical Diagnosis,This paper introduces a novel distributed decision support system to help radiologists in the diagnosis of soft tissue tumors (STT). Decision support systems are based on pattern recognition engines that discriminate between benign/malignant character and histological groups with a satisfactory estimated efficiency.,2004,0, 1383,Soccer players identification based on visual local features,"

Semantic detection and recognition of objects and events contained in a video stream has to be performed in order to provide content-based annotation and retrieval of videos. This annotation is done so that the video material can be reused at a later stage, e.g. to produce new TV programmes. A typical example is that of sports videos, where videos are annotated in order to reuse the video clips that show key highlights and players to produce short summaries for news and sports programmes. In order to select the most interesting actions among all the possibly detected highlights, further analysis is required; i.e. the shots that contain a key action are typically followed by close-ups of the players that take part in the action. Therefore, the automatic identification of these players would add considerable value both to the annotation and retrieval of the key highlights and key players of a sports event. The problem of detecting and recognizing faces in broadcast videos is a widely studied topic. However, in the case of soccer videos, and sports videos in general, the current techniques are not suitable for the task of face recognition, due to the high variations in pose, illumination, scale and occlusion that may happen in an uncontrolled environment. In this paper, a method is presented that copes with these problems by exploiting local features to describe a face, without requiring a precise localization of the distinguishing parts of the face, and by using a set of poses to describe a person and perform a more robust recognition. A similarity metric based on the number of matched interest points, able to cope with different face sizes, is also presented and experimentally validated.

",2007,0, 1384,Social Comparisons to Motivate Contributions to an Online Community,"Self-concept refers to an individual's conscious reflection of personal qualities constructed in a social context, and is a potent influence on the individual's social, psychological, and behavioral functioning. In this work we examine whether the use of online communities improves members' self-concepts, thereby increasing the social value they gain from, and loyalty they feel towards, these communities. Furthermore, we investigate whether the influence of self-concept improvement on perceived social value varies across two contrasting computing platforms for online communities--social networks and virtual worlds. The results of an online survey of Face book and Second Life members support the positive influence of self-concept improvement on perceived social value and member loyalty, and the moderating effect of computing platforms on these relationships.",2012,0, 1385,Social Conventions and Issues of Space for Distributed Collaboration,"

We followed the work of an international research network that holds regular meetings in technology-enhanced environments. The team is geographically distributed and, to support its collaborative work, it uses a set of technical artifacts, including audio- and videoconferencing systems and a media space. We have been studying some of the techniques and social conventions the team develops for its collaboration, and different aspects of what it means to be located in a shared but distributed workspace. Our approach has been to analyze the initiatives and responses made by the team members. Over time the group created conventions; e.g., the chair introduces team members participating only by audio and members turn off their microphones when not talking. The latter convention led to the side effect of faster decision making. We also identified two characteristics, implicit excluding and explicit including, in a situation where the majority of the team members were co-located.

",2007,0, 1386,Social Factors Relevant to Capturing Design Decisions,"We present results from a qualitative study of design decision making that used interviews, observations and participatory observations to describe inherent traits of software design decision makers. We find that designers do not always strive for optimal design solutions, that designers do not always consider alternatives when making design decisions, and that alternatives are considered more often in groups of people having a casual conversation. We highlight that tool support for capturing design rationale and intent should first recognize the way decisions are inherently made in software environments and we provide a summary of our results as an indicator of requirements for such tools.",2007,0, 1387,Software Architecture Analysis of Usability,"Studies of software engineering projects reveal that a large number of usability related change requests are made after its deployment. Fixing certain usability problems during the later stages of development has proven to be costly, since some of these changes require changes to the software architecture i.e. this often requires large parts of code to be completely rewritten. Explicit evaluation of usability during architectural design may reduce the risk of building a system that fails to meet its usability requirements and may prevent high costs incurring adaptive maintenance activities once the system has been implemented. In this paper, we demonstrate the use of a scenario based architecture analysis technique for usability we developed, at two case studies.",2005,0, 1388,Software architecture at a large financial firm,"The paper proposes a software architecture for cloud robotics which intends three subsystems in the cloud environment: Middleware Subsystem, Background Tasks Subsystem, and Control Subsystem. The architecture invokes cloud technologies such as cloud computing, cloud storage, and other networking platforms arranged on the assistances of congregated infrastructure and shared services for robotics, for instance Robot Operating System (ROS). Since the architecture is looking for reliable, scalable, and distributed system for the heterogeneous large-scale autonomous robots, Infrastructure as a Service (IaaS) is chosen among the cloud services. Three major tasks can be handled by the proposed software architecture Computing, Storage, and Networking. Hadoop-MapReduce provides the appropriate framework in the cloud environment to process and handle these tasks.",2016,0, 1389,Software Component Technologies for Heavy Vehicles,"A novel approach to evaluation of hardware and software testability, represented in the form of register transfer graph, is proposed. Instances of making of software graph models for their subsequent testing and diagnosis are shown.",2008,0, 1390,Software Cost Estimation Inhibitors - A Case Study,"Software cost estimation is one of the most challenging activities in software project management. Since the software cost estimation affects almost all activities of software project development such as: biding, planning, and budgeting, the accurate estimation is very crucial to the success of software project management. However, due to the inherent uncertainties in the estimation process and other factors, the accurate estimates are often obtained with great difficulties. Therefore, it is safer to generate interval based estimates with a certain probability over them. In the literature, many approaches have been proposed for interval estimation. 
In this study, we propose a novel method, namely Analogy Based Sampling (ABS), and compare ABS against the well-established Bootstrapped Analogy Based Estimation (BABE), which is the only existing variant of the analogy-based method with the capability to generate interval predictions. The results and comparisons show that ABS could improve the performance of BABE with much higher efficiencies and more accurate interval predictions.",2008,0, 1391,Software design patterns for information visualization.,"Despite a diversity of software architectures supporting information visualization, it is often difficult to identify, evaluate, and re-apply the design solutions implemented within such frameworks. One popular and effective approach for addressing such difficulties is to capture successful solutions in design patterns, abstract descriptions of interacting software components that can be customized to solve design problems within a particular context. Based upon a review of existing frameworks and our own experiences building visualization software, we present a series of design patterns for the domain of information visualization. We discuss the structure, context of use, and interrelations of patterns spanning data representation, graphics, and interaction. By representing design knowledge in a reusable form, these patterns can be used to facilitate software design, implementation, and evaluation, and improve developer education and communication",2006,0, 1392,Software engineering challenges for mutable agent systems.,"Spaceflight software continues to experience exponential growth as functionality migrates from hardware to software. The resulting complexity of these mission critical systems demands new approaches to software systems engineering in order to effectively manage the development efforts and ensure that reliability is not compromised. Model-based systems/software engineering (MBE) approaches present attractive solutions to address the size and complexity through abstraction and analytical models. However, there are many challenges that must be addressed before MBE approaches can be effectively adopted on a large scale across an entire system. In this position paper, we highlight some of the key challenges based on our experiences with flight software programs employing elements of MBE.",2013,0, 1393,Software engineering education (SEEd),"The author describes a software package, running under MSDOS, developed to assist lecturers in the assessment of software assignments. The package itself does not make value judgments upon the work, except when it can do so absolutely, but displays the students' work for assessment by qualified staff members. The algorithms for the package are presented, and the functionality of the components is described. The package can be used for the assessment of software at three stages in the development process: (1) algorithm logic and structure, using Warnier-Orr diagrams; (2) source code structure and syntax in Modula-2; and (3) runtime performance of executable code",1992,0, 1394,Software Engineering Education Improvement: An Assessment of a Software Engineering Programme,"Israel Aircraft Industries has developed a comprehensive educational program in software engineering. 
Goals of the program include the retraining of college graduates to become software engineers with specializations in one of three application areas (data processing, embedded computer systems, and CAD/CAM systems), and the enhancement of the knowledge of currently practicing software engineers. The program is centered around three distinct full-time courses of study having an average duration of 7 months. The training program also includes a large number of short courses and seminars. The company is currently planning an M.Sc. program in embedded computer systems and software engineering in cooperation with one of the universities in Israel.",1987,0, 1395,Software engineering practice versus evidence-based software engineering research,"Evidence-based research has matured and become established in many other disciplines such as Medicine and Psychology. One of the methods that has been widely used to support evidence-based practices is the Systematic Literature Review (SLR) method. The SLR is a review method that aims to provide an unbiased or fair evaluation of existing research evidence. The aim of this study is to gather the trends of evidence-based software engineering (SE) research in Malaysia, in particular to identify the usage of the SLR method among researchers, academics or practitioners. Based on our tertiary study, we found only 19 published works utilizing evidence-based practices in Malaysia within SE and Computer Science related domains. We have also conducted a survey during SLR workshops for the purpose of gathering perceptions on using SLR. The survey was completed by 78 academics and researchers from five universities in Malaysia. Our findings show that researchers in this country are still at a preliminary stage in practicing the evidence-based approach. We believe that knowledge and skill in using SLR should be promoted to encourage more researchers to apply it in their research.",2014,0, 1396,Software engineering research strategy: Combining experimental and explorative research (EER).,"In this paper a new Experimental and Explorative Research (EER) research strategy is proposed. It combines experimental software engineering with exploratory research of new technologies. EER is based on several years' experience of using and developing the approach in research of future mobile applications. In large international projects, explorative application research quite often includes both industrial software developers and experienced researchers. This kind of an experimental research environment resolves the subject problem found in student experiments. It also does not have the difficulties found in experimental design and control of industrial projects that are constrained by strict commercial conditions. The EER strategy provides benefits for both worlds: (1) experimental software engineering research benefits from almost industry level projects that can be used as experimentation environments, and (2) future mobile telecom application research benefits from better control and understanding of the characteristics of the applications and their development methods and processes.",2004,0, 1397,Software estimation: a fuzzy approach,"A successful project is one that is delivered on time, within budget and with the required quality. Accurate software estimation such as cost estimation, quality estimation and risk analysis is a major issue in software project management. A number of estimation models exist for effort prediction. 
However, there is a need for a novel model to obtain more accurate estimations. As Artificial Neural Networks (ANNs) are universal approximators, a Neuro-fuzzy system is able to approximate the non-linear function with more precision by formulating the relationship based on its training. In this paper we explore Neuro-fuzzy techniques to design a suitable model to obtain improved estimation of software effort for NASA software projects. A comparative analysis between the Neuro-fuzzy model and traditional software models such as the Halstead, Walston-Felix, Bailey-Basili and Doty models is provided. The evaluation criteria are based upon MMRE (Mean Magnitude of Relative Error) and RMSE (Root Mean Square Error). Integration of neural networks, fuzzy logic and algorithmic models into one scheme has resulted in providing robustness to imprecise and uncertain inputs.",2012,0, 1398,Software maintenance maturity model (smmm): the software maintenance process model,This paper presents a method for modeling and evaluating the performance of the telecommunication software maintenance process. The method can be used as a special technique within the given generic model as part of an organizational effort to upgrade process maturity and efficiency. It is based on modeling the software maintenance process as queueing networks and applies process simulation to determine its performance. The method allows efficient comparison of alternative process designs without the risks associated with experiments in real life. Its activities can be implemented in various telecommunication software maintenance processes and other software processes to be formally presented and analyzed for improvement purposes. Implementation is presented in a case study of the software maintenance process in a telecommunications company. The results show the practicability and applicability of the method in the software organization with the aim of improving the software maintenance process,2002,0, 1399,Software maintenance seen as a knowledge management issue,"Software maintenance is one of the important processes in the software development life cycle. After the development team delivers the software and users operate it, requests related to the software are continually raised and issued to the maintenance team. The expectation of customers is that the maintenance team can support them by answering questions, solving problems, or developing a new system to support their operations. For this reason, the Modification Request Management Framework (MRMF) is presented in this paper. This proposed framework focuses on the problem/modification identification, classification, and prioritization process, which is the first activity in the software maintenance process. This framework has been developed by including the principles of Knowledge Asset and Taxonomy. The benefit of applying the MRMF framework is to classify and manage requests from users and to establish supporting information for the maintenance team to proceed with the software maintenance process effectively.",2014,0, 1400,Software performance modeling using UML and Petri nets.,"Commercial servers, such as database or application servers, often attempt to improve performance via multi-threading. Improper multi-threading architectures can incur contention, limiting performance improvements. Contention occurs primarily at two levels: (1) blocking on locks shared between threads at the software level and (2) contending for physical resources (such as the cpu or disk) at the hardware level. 
Given a set of hardware resources and an application design, there is an optimal number of threads that maximizes performance. This paper describes a novel technique we developed to select the optimal number of threads of a target-tracking application using a simulation-based colored Petri nets (CPNs) model. This paper makes two contributions to the performance analysis of multi-threaded applications. First, the paper presents an approach for calibrating a simulation model using training set data to reflect actual performance parameters accurately. Second, the model predictions are validated empirically against the actual application performance and the predicted data is used to compute the optimal configuration of threads in an application to achieve the desired performance. Our results show that predicting performance of application thread characteristics is possible and can be used to optimize performance.",2008,0, 1401,Software Productivity Measurement Using Multiple Size Measures,"Productivity measures based on a simple ratio of product size to project effort assume that size can be determined as a single measure. If there are many possible size measures in a data set and no obvious model for aggregating the measures into a single measure, we propose using the expression AdjustedSize/Effort to measure productivity. AdjustedSize is defined as the most appropriate regression-based effort estimation model, where all the size measures selected for inclusion in the estimation model have a regression parameter significantly different from zero (p<0.05). This productivity measurement method ensures that each project has an expected productivity value of one. Values between zero and one indicate lower than expected productivity, values greater than one indicate higher than expected productivity. We discuss the assumptions underlying this productivity measurement method and present an example of its use for Web application projects. We also explain the relationship between effort prediction models and productivity models.",2004,0, 1402,Software reliability prediction by soft computing techniques,"Software reliability prediction is very challenging in the starting phases of software development. In the past few years many software reliability models have been proposed for assessing reliability of software but building accurate prediction models is hard due to the recurrent changes in data in the domain of software engineering. As a result, the prediction models built on one dataset show a significant decrease in their accuracy when they are used with new data. The objective of this paper is to introduce a new approach that optimizes the accuracy of software reliability predictive models when used with raw data. We propose Ant Colony Optimization Technique (ACOT) to predict software reliability based on data collected from literature. An ant colony system with an accompanying TSP algorithm has been used, which has been changed by implementing different algorithms and extra functionality, in an attempt to achieve better software reliability results with new data. The intellectual behavior of the ant colony framework by means of a colony of cooperating artificial ants are resulting in very promising results. 
The method is validated with real dataset using Normalized Root Mean Square Error (NRMSE).",2014,0, 1403,Software Systems Engineering with Model-Based Design,"Further, automotive software systems engineers must deliver features that cross multiple domains (body, chassis, powertrain, multimedia, driver assistance, personalization, and human machine interfaces) and reside on a distributed network of modules. These vehicle systems are also delivered through the cooperation of many automotive and non-automotive suppliers based in various geographic locations, which poses a significant project management challenge to the software systems engineering team. In addition to managing the initial project, the software systems engineer must also explicitly design for and support the re-use of stand-alone software features, mechatronic subsystems and entire vehicle-level, electronic-control-system electrical architectures. Finally, the traditional automotive software systems engineering lifecycle has expanded to include processes, methods, tools, and infrastructure (PMTI) that must integrate across all phases and domains of the technology innovation and product delivery- maintenance-disposal lifecycle.",2007,0, 1404,"Software Testing Research: Achievements, Challenges, Dreams","Software engineering comprehends several disciplines devoted to prevent and remedy malfunctions and to warrant adequate behaviour. Testing, the subject of this paper, is a widespread validation approach in industry, but it is still largely ad hoc, expensive, and unpredictably effective. Indeed, software testing is a broad term encompassing a variety of activities along the development cycle and beyond, aimed at different goals. Hence, software testing research faces a collection of challenges. A consistent roadmap of the most relevant challenges to be addressed is here proposed. In it, the starting point is constituted by some important past achievements, while the destination consists of four identified goals to which research ultimately tends, but which remain as unreachable as dreams. The routes from the achievements to the dreams are paved by the outstanding research challenges, which are discussed in the paper along with interesting ongoing work.",2007,0, 1405,Sojourn time asymptotics in processor sharing queues with varying service rate,"

This paper addresses the sojourn time asymptotics for a GI/GI/?? queue operating under the Processor Sharing (PS) discipline with stochastically varying service rate. Our focus is on the logarithmic estimates of the tail of the sojourn-time distribution, under the assumption that the job-size distribution has a light tail. Whereas upper bounds on the decay rate can be derived under fairly general conditions, the establishment of the corresponding lower bounds requires that the service process satisfies a sample-path large-deviation principle. We show that the class of allowed service processes includes the case where the service rate is modulated by a Markov process. Finally, we extend our results to a similar system operating under the Discriminatory Processor Sharing (DPS) discipline. Our analysis relies predominantly on large-deviations techniques.

",2007,0, 1406,Solving Large-Scale Nonlinear Programming Problems by Constraint Partitioning,"AbstractIn this paper, we present a constraint-partitioning approach for finding local optimal solutions of large-scale mixed-integer nonlinear programming problems (MINLPs). Based on our observation that MINLPs in many engineering applications have highly structured constraints, we propose to partition these MINLPs by their constraints into subproblems, solve each subproblem by an existing solver, and resolve those violated global constraints across the subproblems using our theory of extended saddle points. Constraint partitioning allows many MINLPs that cannot be solved by existing solvers to be solvable because it leads to easier subproblems that are significant relaxations of the original problem. The success of our approach relies on our ability to resolve violated global constraints efficiently, without requiring exhaustive enumerations of variable values in these constraints. We have developed an algorithm for automatically partitioning a large MINLP in order to minimize the number of global constraints, an iterative method for determining the optimal number of partitions in order to minimize the search time, and an efficient strategy for resolving violated global constraints. Our experimental results demonstrate significant improvements over the best existing solvers in terms of solution time and quality in solving a collection of mixed-integer and continuous nonlinear constrained optimization benchmarks.",2005,0, 1407,Solving Timetabling Problem Using Genetic and Heuristic Algorithms,"In this paper, we propose a hybrid algorithm that combines genetic and heuristic approach. By using this method, solving timetabling problem is converted to finding the optimal arrangement of elements on a 2D matrix. This algorithm was implemented and tested with the synthetic and real data of Nong lam University of HCM City, Vietnam. The experimental results reveal the usability and potential of the proposed algorithm in solving timetabling problems.",2007,0, 1408,Soup or Art? The Role of Evidential Force in Empirical Software Engineering,"Software project managers' decisions should be based on solid evidence, not on common wisdom or vendor hype. What distinguishes the science from the art is the way in which we as managers and practitioners make decisions, by forming rational arguments from the evidence we have - evidence that comes both from our experience and from related research. That is, we move from the part to the whole, examining the body of evidence to determine what we know about the best ways to build good software. This view isn't particular to software engineering or even to mathematical sciences; it's what characterizes good science in general. This article examines the ways in which careful understanding of argumentation and evidence can lead to more effective empirical software engineering - and ultimately to better decision making and higher-quality software products and processes.",2005,0, 1409,Source-Level Linkage: Adding Semantic Information to C++ Fact-bases,"Facts extracted from source code have been used to support a variety of software engineering activities, ranging from architectural understanding, through detection of design patterns, to program exploration. Several fact extractors have been developed and published in the literature, but most of them extract facts only from individual compilation units. Linking multiple fact-bases is largely overlooked. 
Source-level linkage is different from compilation linkage. Its goal is to assist a software engineer, not to produce an executable program. Thus a source-level linker needs to collect as many as possible facts that may be potentially helpful to a software engineer's task, many of which are not available from a compiler linker. We present the design of a source-level linker for C++. This linker has been used to analyze a dozen of Microsoft Foundation Classes (MFC) programs and over 200 C++ programs that cover an extensive subset of C++ features, including templates from the standard template library (STL). As a further validation, we design a structural constraint language, SCL, to express and machine-check a wide range of constraints on the abstract semantics graph (ASG) produced by the linker",2006,0, 1410,Spatial Complexity Metrics: An Investigation of Utility,"Software comprehension is one of the largest costs in the software lifecycle. In an attempt to control the cost of comprehension, various complexity metrics have been proposed to characterize the difficulty of understanding a program and, thus, allow accurate estimation of the cost of a change. Such metrics are not always evaluated. This paper evaluates a group of metrics recently proposed to assess the ""spatial complexity"" of a program (spatial complexity is informally defined as the distance a maintainer must move within source code to build a mental model of that code). The evaluation takes the form of a large-scale empirical study of evolving source code drawn from a commercial organization. The results of this investigation show that most of the spatial complexity metrics evaluated offer no substantially better information about program complexity than the number of lines of code. However, one metric shows more promise and is thus deemed to be a candidate for further use and investigation.",2005,0, 1411,SPECIFICATION AND AUTOMATIC GENERATION OF SIMULATION MODELS WITH APPLICATIONS IN SEMICONDUCTOR MANUFACTURING,"This article gives an overview of a framework for automatically generating large-scale simulation models from a domain specific problem definition data schema, here semiconductor manufacturing. This simulation model uses an object-oriented Petri net data structure. The Petri net based simulation uses the same enabling rules as classical Petri nets, but has extensions of time and priorities. This approach minimizes the effort of model verification. Each object identified in the problem data specification is mapped to corresponding Petri net fragments. The Petri net simulation model is synthesized from verifiable subnets. This allows ensuring the liveness of the final Petri net simulation model. The applicability of this approach is demonstrated by generating a simulation model based on the Sematech data set.",2007,0, 1412,Specification and evaluation of safety properties in a component-based software engineering process.,"

Over the past years, component-based software engineering has become an established paradigm in the area of complex software intensive systems. However, many techniques for analyzing these systems for critical properties currently do not make use of the component orientation. In particular, safety analysis of component-based systems is an open field of research. In this chapter we investigate the problems arising and define a set of requirements that apply when adapting the analysis of safety properties to a component-based software engineering process. Based on these requirements some important component-oriented safety evaluation approaches are examined and compared.

",2005,0, 1413,"Specifying Reusable Security Requirements","Application-level Web security refers to vulnerabilities inherent in the code of a Web-application itself (irrespective of the technologies in which it is implemented or the security of the Web-server/back-end database on which it is built). In the last few months, application-level vulnerabilities have been exploited with serious consequences: Hackers have tricked e-commerce sites into shipping goods for no charge, usernames and passwords have been harvested, and confidential information (such as addresses and credit-card numbers) has been leaked. We investigate new tools and techniques which address the problem of application-level Web security. We 1) describe a scalable structuring mechanism facilitating the abstraction of security policies from large Web-applications developed in heterogeneous multiplatform environments; 2) present a set of tools which assist programmers in developing secure applications which are resilient to a wide range of common attacks; and 3) report results and experience arising from our implementation of these techniques.",2003,0, 1414,Speculative optimization using hardware-monitored guarded regions for java virtual machines,"

Aggressive dynamic optimization in high-performance Java Virtual Machines can be hampered by language features like Java's exception model, which requires precise detection and handling of program-generated exceptions. Furthermore, the compile-time overhead of guaranteeing correctness of code transformations precludes many effective optimizations from consideration. This paper describes a novel approach for circumventing the optimization-crippling effects of exception semantics and streamlining the implementation of aggressive optimizations at run time. Under a hardware-software hybrid model, the runtime system delineates guarded regions of code and specifies a contract--in the simplest case, one that requires exception-free execution--that must be adhered to in order to ensure that the aggressively optimized code within that region will behave as the programmer expects. The contracted runtime condition is assumed to be true, and code within a guarded region is aggressively optimized based on this assumption. Hardware monitors for exceptions throughout the region execution, and undoes the effects of the guarded region if an exception occurs, re-executing the region with a conventionally optimized version. Since exceptions are very rare, code can be optimized as if optimization-crippling conditions did not exist, leading to compile time reduction, code quality improvement, and potential performance improvement up to 67.7% and averaging 15.9% in our limit study of a set of Java benchmarks.

",2007,0, 1415,Spiral Multi-aspect Hepatitis Data Mining,"Extraction of meaningful information from large experimental data sets is a key element in bioinformatics research. One of the challenges is to identify genomic markers in Hepatitis B Virus (HBV) that are associated with HCC (liver cancer) development by comparing the complete genomic sequences of HBV among patients with HCC and those without HCC. In this study, a data mining framework, which includes molecular evolution analysis, clustering, feature selection, classifier learning, and classification, is introduced. Our research group has collected HBV DNA sequences, either genotype B or C, from over 200 patients specifically for this project. In the molecular evolution analysis and clustering, three subgroups have been identified in genotype C and a clustering method has been developed to separate the subgroups. In the feature selection process, potential markers are selected based on Information Gain for further classifier learning. Then, meaningful rules are learned by our algorithm called the Rule Learning, which is based on Evolutionary Algorithm. Also, a new classification method by Nonlinear Integral has been developed. Good performance of this method comes from the use of the fuzzy measure and the relevant nonlinear integral. The nonadditivity of the fuzzy measure reflects the importance of the feature attributes as well as their interactions. These two classifiers give explicit information on the importance of the individual mutated sites and their interactions toward the classification (potential causes of liver cancer in our case). A thorough comparison study of these two methods with existing methods is detailed. For genotype B, genotype C subgroups C1, C2, and C3, important mutation markers (sites) have been found, respectively. These two classification methods have been applied to classify never-seen-before examples for validation. The results show that the classification methods have more than 70 percent accuracy and 80 percent sensitivity for most da- - ta sets, which are considered high as an initial scanning method for liver cancer diagnosis.",2011,0, 1416,Static Analysis of Object References in RMI-Based Java Software,"Distributed applications provide numerous advantages related to software performance, reliability, interoperability, and extensibility. This paper focuses on distributed Java programs built with the help of the remote method invocation (RMI) mechanism. We consider points-to analysis for such applications. Points-to analysis determines the objects pointed to by a reference variable or a reference object field. Such information plays a fundamental role as a prerequisite for many other static analyses. We present the first theoretical definition of points-to analysis for RMI-based Java applications, and an algorithm for implementing a flow- and context-insensitive points-to analysis for such applications. We also discuss the use of points-to information for computing call graph information, for understanding data dependencies due to remote memory locations, and for identifying opportunities for improving the performance of object serialization at remote calls. 
The work described in this paper solves one key problem for static analysis of RMI programs, and provides a starting point for future work on improving the understanding, testing, verification, and performance of RMI-based software.",2005,0, 1417,Static program analysis based on virtual register renaming,"Static single assignment form (SSA) is a popular program intermediate representation (IR) for static analysis. SSA programs differ from equivalent control flow graph (CFG) programs only in the names of virtual registers, which are systematically transformed to comply with the naming convention of SSA. Static single information form (SSI) is a recently proposed extension of SSA that enforces a greater degree of systematic virtual register renaming than SSA. This dissertation develops the principles, properties, and practice of SSI construction and data flow analysis. Further, it shows that SSA and SSI are two members of a larger family of related IRs, which are termed virtual register renaming schemes (VRRSs). SSA and SSI analyses can be generalized to operate on any VRRS family member. Analysis properties such as accuracy and efficiency depend on the underlying VRRS. This dissertation makes four significant contributions to the field of static analysis research. First, it develops the SSI representation. Although SSI was introduced five years ago, it has not yet received widespread recognition as an interesting IR in its own right. This dissertation presents a new SSI definition and an optimistic construction algorithm. It also sets SSI in context among the broad range of IRs for static analysis. Second, it demonstrates how to reformulate existing data flow analyses using new sparse SSI-based techniques. Examples include liveness analysis, sparse type inference and program slicing. It presents algorithms, together with empirical results of these algorithms when implemented within a research compiler framework. Third, it provides the only major comparative evaluation of the merits of SSI for data flow analysis. Several qualitative and quantitative studies in this dissertation compare SSI with other similar IRs. Last, it identifies the family of VRRSs, which are all CFGs with different virtual register naming conventions. Many extant IRs are classified as VRRSs. Several new IRs are presented, based on a consideration of previously unspecified members of the VRRS family. General analyses can operate on any family member. The required level of accuracy or efficiency can be selected by working in terms of the appropriate family member.",2006,0, 1418,Statistical Absolute Evaluation of Gene Ontology Terms with Gene Expression Data,"

We propose a new testing procedure for the automatic ontological analysis of gene expression data. The objective of ontological analysis is to retrieve functional annotations, e.g. Gene Ontology terms, relevant to the underlying cellular mechanisms behind the gene expression profiles, and a large number of tools have been developed for this purpose. Most existing tools implement the same approach, which exploits rank statistics of the genes ordered by the strength of statistical evidence, e.g. p-values computed by testing hypotheses at the individual gene level. However, such an approach often leads to serious false discoveries. In particular, one of the most crucial drawbacks is that rank-based approaches may wrongly judge an ontology term as statistically significant even when all of the genes annotated by that term are irrelevant to the underlying cellular mechanisms. In this paper, we first point out some drawbacks of the rank-based approaches from a statistical point of view and then propose a new testing procedure to overcome them. The proposed method has its theoretical basis in statistical meta-analysis, and the hypothesis to be tested is stated in a form suited to the problem of ontological analysis. We perform Monte Carlo experiments to highlight the disadvantages of the rank-based approach and the advantages of the proposed method. Finally, we demonstrate the applicability of the proposed method through an ontological analysis of gene expression data from human diabetes.

",2007,0, 1419,Statistical Reliability with Applications,"If a random variable can be expressed as a weighted sum of other random variables having known distributions which can be approximated piecewise by, for example, polynomials, the distribution of the random variable can be obtained, relatively easily, by the use of the algorithm described in this paper.",1959,0, 1420,Statistically rigorous java performance evaluation,"Recently, as speeds of computer processors and networks are rapidly increasing, a lot of researches are actively progressing to develop efficient and lightweight parallel computing platforms using heterogeneous and networked computers. According to this technical trend, this paper designs and implements a message passing library called JMPI(Java Message Passing Interface) which complies with MP], the MPI standard specification for Java language. This library provides some graphic user interface tools to enable parallel computing environments to be configured very simply by their administrators and JMPI applications to be executed very conveniently. Especially, it is implemented as two versions based on two typical distributed system communication mechanisms, Socket and RMI. According to these communication mechanisms, the performance of each message passing system is evaluated by measuring its processing speed with respect to the increasing number of computers by executing three well-known applications. Experimental results show that the most efficient processing speedup can be obtained by increasing the number of the computers in consideration of network traffics generated by applications.",2007,0, 1421,Strategies for Working with Digital Medical Images,"Medical images are a critical component of the healthcare system with great impact on the society’s welfare. Traditionally medical images were stored on film but the advances in modern imaging modalities made it possible to store them electronically. Thus, this research proposes a novel framework for classifying various strategies for storing, retrieving and processing digital medical images. In addition to a detailed discussion, the assessment of the classification framework includes a potential usage scenario of the framework. For researchers, this study identifies important strategies and points out future research directions while, for practitioners, the proposed framework might help medical users develop a lucid understanding of the different approaches and their advantages and disadvantages.",2006,0, 1422,STRATUM: A METHODOLOGY FOR DESIGNING HEURISTIC AGENT NEGOTIATION STRATEGIES,"

Automated negotiation is a powerful (and sometimes essential) means for allocating resources among self-interested autonomous software agents. A key problem in building negotiating agents is the design of the negotiation strategy, which is used by an agent to decide its negotiation behavior. In complex domains, there is no single, obvious optimal strategy. This has led to much work on designing heuristic strategies, where agent designers usually rely on intuition and experience. In this article, we introduce STRATUM, a methodology for designing strategies for negotiating agents. The methodology provides a disciplined approach to analyzing the negotiation environment and designing strategies in light of agent capabilities and acts as a bridge between theoretical studies of automated negotiation and the software engineering of negotiation applications. We illustrate the application of the methodology by characterizing some strategies for the Trading Agent Competition and for argumentation-based negotiation.

",2007,0, 1423,Structural Protein Interactions Predict Kinase-Inhibitor Interactions in Upregulated Pancreas Tumour Genes Expression Data,"

Micro-arrays can identify co-expressed genes at large scale. Gene expression analysis does not, however, show functional relationships between co-expressed genes. To address this problem, we link gene expression data to protein interaction data. For the gene products of co-expressed genes, we identify structural domains by sequence alignment and threading. Next, we use the protein structure interaction map PSIMAP to find structurally interacting domains. Finally, we generate structural and sequence alignments of the original gene products and the identified structures and check conservation of the relevant interaction interfaces. From this analysis, we derive potentially relevant protein interactions for the gene expression data.

We applied this method to co-expressed genes in pancreatic ductal carcinoma. Our method reveals, among others, a number of functional clusters related to the proteasome, signalling, ubiquitination, serine proteases, immunoglobulins and kinases. We investigate the kinase cluster in detail and reveal an interaction between the cell division control protein CDC2 and the cyclin-dependent kinase inhibitor CDKN3, which is also confirmed by the literature. Furthermore, our method reveals new interactions between CDKN3 and the cell division protein kinase CDK7, and between CDKN3 and the serine/threonine-protein kinase CDC2L1.

",2005,0, 1424,Structuring Software Architecture Project Memories,"Any global software development project needs to deal with distances -- geographical, cultural, time zone, etc. -- between the groups of developers engaged in the project. To successfully manage the risks caused by such distances, there is a need to explicate and present the distances in a form suitable for manual or semi-automatic analysis, the goal of which is to detect potential risks and find ways of mitigating them. The paper presents a technique of modeling a global software development project suitable for such analysis. The project is modeled as a complex socio-technical system that consists of functional components connected with each other through output-input relationships. The components do not coincide with the organizational units of the project and can be distributed through the geographical and organizational landscape of the project. The modeling technique helps to explicate and represent various kinds of distances between the functional components to determine which of them constitute risk factors. The technique was developed during two case studies, of which the second is used for presenting and demonstrating the new modeling technique in the paper.",2015,0, 1425,Study of Design Characteristics in Evolving Software Using Stability as a Criterion,"There are many ideas in software design that are considered good practice. However, research is still needed to validate their contributions to software maintenance. This paper presents a method for examining software systems that have been actively maintained and used over the long term and are potential candidates for yielding lessons about design. The method relies on a criterion of stability and a definition of distance to flag design characteristics that have potentially contributed to long-term maintainability. It is demonstrated by application to an example of long-lived scientific software. The results from this demonstration show that the method can provide insight into the relative importance of individual elements of a set of design characteristics for the long-term evolution of software",2006,0, 1426,Study on Rapid Prototyping Methodology of the Lecture Contents for the IT SoC Certificate Program,"This paper describes the development methodology of the prototype applied to create the lecture contents of the IT Soc certificate program at graduate school quickly. IT SoC certificate program (SoCCP) develops the lecture contents of the 16 major courses to educate high-level human talent in the MS and PhD courses of IT SoC design field. Specifically, the rapid prototype development methodology is able to obtain contents successfully through role definition and collaboration between the subject matter expert (SME) and the instructional designer. The effectiveness of the rapid prototyping (RP) methodology to develop lecture contents is ensured and improved the quality them continuously through the review by college and industrial professionals in terms of the appropriateness of subcontents, feasibility of lecture, and wide applicability at participating universities of the SoCCP.",2007,0, 1427,Studying Software Engineers: Data Collection Techniques for Software Field Studies,"For manufacturing firms to increase productivity and quality while reducing their inventory and operational costs, a simple and easy-to-use data collection system is needed on the shop floor. Such a system developed for an electronic card manufacturing line is discussed. 
Facility operations are described, and the requirements for an ideal software solution are outlined. An independent development effort was undertaken to satisfy the unique requirements of the production line. As work progressed, a number of factors conspired to produce schedule and cost overruns that caused management to scale back the scope considerably. What eventually emerged was a much leaner and less capable system than originally envisioned. The factors that put the project in jeopardy are discussed, and some suggestions and lessons gained from the experience, which are applicable to any software development project, are offered",1993,0, 1428,Style-based architectural analysis for migrating a webbased regional trade information system,"In this paper, we present the MIDARCH method for selecting a middleware platform in Enterprise Application Integration (EAI) and migration projects. Its specific contribution is the use of architectural styles (MINT Styles) as a vehicle for binding architectural knowledge. In addition, an ongoing case study is presented which applies the MIDARCH method to a web-based regional trade information system. The project involves the integration of three subsystems, which have been developed rather independently in the past, two of which are already web-based. The major motivation for migrating the system is to improve evolvability of the system and to make it more apt for the supply to a larger number of customers.",2006,0, 1429,Success Factors and Impacts of Mobile Business Applications: Results from a Mobile e-Procurement Study,"

Based on the concept of task/technology fit, a research framework and exploratory case study are presented that assess success factors and impacts of mobile business applications. Preliminary empirical evidence for the applicability of the framework was obtained for a mobile electronic procurement system implemented at a Fortune 100 company. For different user groups, the relationships between the characteristics of technology and tasks, usage, and organizational impacts were analyzed. The results indicate a need for simple but highly functional mobile applications that complement existing information systems. The study provides a basis for further research to improve the design and management of business applications based on emerging technologies.

",2004,0, 1430,Successful Collaborative Software Projects for Medical Devices in an FDA Regulated Environment: Myth or Reality?,"The significance of executing a successful collaborative distributed software project in a regulated medical device industry is often recognized but such projects are never perfectly achieved. There expectations of standard software being delivered 'bug-free, on time and on budget'. In addition, FDA regulations impose a high standard on such medical device software by mandating the adherence to strict processes on software development. These processes include verification and validation of the software to assure the safety and effectiveness of the medical device. In such a regulated global software development environment, questions regarding mis- communication and lack of appropriate knowledge transfer are often raised, discussed and projected as being the main reason why collaborative projects either fail or are delayed. The requirements hold not only for the software that becomes part of the product but also for software used in its development. In this paper we analyze the nature of the global software development collaborative environment and identify several challenges in such projects for internally used software. These challenges include the burden of providing to the FDA a set of traceable documentation as evidence of following a specific process from software conception to software validation. We also discuss the human motivations that, when faced with these challenges, can affect success. We present two case studies of such globally distributed software development projects with entirely different focuses and study the underlying challenges and success factors. Finally we also discuss the driving factors for successful collaborative efforts and draw upon the incentives from our individual experiences in a real-world FDA regulated global medical device industry environment.",2007,0, 1431,Supervision Based on Place Invariants: A Survey,"This paper describes a method for constructing a Petri net feedback controller for a discrete event system modeled by a Petri net. The controller enforces a set of linear constraints on the plant and consists of places and arcs. It is computed using the concept of Petri net place invariants. The size of the controller is proportional to the number of constraints which must be satisfied. The method is very attractive computationally, and it makes possible the systematic design of Petri net controllers for complex industrial systems",1994,0, 1432,Support for Task Modeling ? A ?Constructive? Exploration,"This paper describes the design and implementation of an agent based network for the support of collaborative switching tasks within the control room environment of the National Grid Company plc. This work includes aspects from several research disciplines, including operational analysis, human computer interaction, finite state modelling techniques, intelligent agents and computer supported co-operative work. Aspects of these procedures have been used in the analysis of collaborative tasks to produce distributed local models for all involved users. These models have been used as the basis for the production of local finite state automata. These automata have then been embedded within an agent network together with behavioural information extracted from the task and user analysis phase. 
The resulting support system is capable of task and communication management within the transmission despatch environment",2001,0, 1433,Supporting Change Impact Analysis for Service Oriented Business Applications,"Business applications encode various business processes within an organization. Business process specification languages such as BPEL (Business Process Execution Language) are commonly used to integrate various services in order to automate business processes within an organization. To remain competitive edge, managers frequently modify their processes. Determining the cost of modifying a business process is not trivial since the changes to the business process have to account for source code changes in various services. In this paper, we propose an approach to estimating the cost of a business process change in a service oriented business application. The approach applies change impact analysis techniques to business process specifications, and source code. The approach generates an initial change impact set from business process components. These components are then mapped to the corresponding source code entities. These code entities act as seeds for traditional source code impact analysis. Using code dependencies, such as call and inheritance relations, we derive a metric to capture the complexity of particular business process changes. Managers can then use this metric to gauge the cost and resources needed to implement changes in their business processes without having to study the code. We demonstrated the feasibility of our approach using an experiment on an open source service oriented business application.",2007,0, 1434,Supporting Personal Collections across Digital Libraries in Spatial Hypertext,"Creating, maintaining, or using a digital library requires the manipulation of digital documents. Information workspaces provide a visual representation allowing users to collect, organize, annotate, and author information. The visual knowledge builder (VKB) helps users access, collect, annotate, and combine materials from digital libraries and other sources into a personal information workspace. VKB has been enhanced to include direct search interfaces for NSDL and Google. Users create a visualization of search results while selecting and organizing materials for their current activity. Additionally, metadata applicators have been added to VKB. This interface allows the rapid addition of metadata to documents and aids the user in the extraction of existing metadata for application to other documents. A study was performed to compare the selection and organization of documents in VKB to the commonly used tools of a Web browser and a word processor. This study shows the value of visual workspaces for such effort but points to the need for subdocument level objects, ephemeral visualizations, and support for moving from visual representations to metadata.",2004,0, 1435,Supporting Salespersons Through Location Based Mobile Applications and Services,"AbstractThe paper aims at assessing how mobile location applications and services can support salespersons, for greater performance when they are operating within a mobile work environment. After briefly discussing the state of the art issues associated with mobile location technologies, the paper conceptualises key dimensions of location-based mobile support. 
The paper then suggests a categorization of salespersons tasks based on both properties of location-based mobile support and the areas of salespersons tasks that may be affected by mobile location technologies. A third section suggests potential mobile location services and applications that can support salespersons in performing effectively their everyday tasks and links such applications to the determinant of salespersons performance. The paper concludes with a discussion of a number of critical issues such as salespersons privacy, risk of information overload, autonomy and some core areas of further research.",2004,0, 1436,Supporting task-oriented modeling using interactive UML views,"The UML is a collection of 13 diagram notations to describe different views of a software system. The existing diagram types display model elements and their relations. Software engineering is becoming more and more model-centric, such that software engineers start using UML models for more tasks than just describing the system. Tasks such as analysis or prediction of system properties require additional information such as metrics of the UML model or from external sources, e.g. a version control system. In this paper we identify tasks of model-centric software engineering and information that is required to fulfill these tasks. We propose views to visualize the information to support fulfilling the tasks. This paper reports on a large-scale controlled experiment to validate the usefulness of the proposed views that are implemented in our MetricView Evolution tool. The results of the experiment with 100 participants are statistically significant and show that the correctness of comprehension is improved by 4.5% and that the time needed is reduced by 20%.",2007,0, 1437,Supporting the Developers of Context-Aware Mobile Telemedicine Applications,"From a developing world perspective, mobile phone is the primary technology for the majority of people and will be for the foreseeable future. It will be their connection to the internet, their communication tool, school book, vaccination report, photo album and many other things. Despite this reality, relatively few mobile applications exist. However, a tool to support the programming of mobile applications can significantly impact and improve programmer's productivity and software quality. In this paper, we develop a concept for a mobile tooling framework that extends the Netbeans integrated development environment (IDE) for mobile programming. Mobile Tools for Netbeans (MobiNET) design would support the development of mobile applications for various mobile phones. We evaluate Netbeans as a development took and drawing from insights gained in interviews with mobile software developers, conceptualize MobiNET. A working prototype is discussed and evaluated.",2009,0, 1438,Survey of component-based software development.,"Because of the extensive uses of components, the Component-Based Software Engineering (CBSE) process is quite different from that of the traditional waterfall approach. CBSE not only requires focus on system specification and development, but also requires additional consideration for overall system context, individual components properties and component acquisition and integration process. The term component-based software development (CBD) can be referred to as the process for building a system using components. 
CBD life cycle consists of a set of phases, namely, identifying and selecting components based on stakeholder requirements, integrating and assembling the selected components and updating the system as components evolve over time with newer versions. This work presents an indicative literature survey of techniques proposed for different phases of the CBD life cycle. The aim of this survey is to help provide a better understanding of different CBD techniques for each of these areas",2007,0, 1439,Survey of Software Inspection Research: 1991-2005,"Surveys are a popular research tool often used in empirical software engineering studies. While researchers are urged to replicate existing surveys, such replication brings with it challenges. This paper presents a concrete example of a replication of a survey used to determine the extent of adoption of software development best practice. The study replicated a European survey which was adapted and administered in a different context of Australian software development organisations. As well as discussing problems encountered, this paper presents a set of recommendations formulated to overcome identified challenges. Implementation of the recommendations would strengthen the value and contribution of surveys to the body of knowledge of empirical software engineering research.",2005,0, 1440,Survey of the Use and Documentation of Architecture Design Rationale,"Many claims have been made about the problems caused by not documenting design rationale. The general perception is that designers and architects usually do not fully understand the critical role of systematic use and capture of design rationale. However, there is to date little empirical evidence available on what design rationale mean to practitioners, how valuable they consider them, and how they use and document design rationale during the design process. This paper reports an empirical study that surveyed practitioners to probe their perception of the value of design rationale and how they use and document background knowledge related to their design decisions. Based on eighty-one valid responses, this study has discovered that practitioners recognize the importance of documenting design rationale and frequently use them to reason about their design choices. However, they have indicated barriers to the use and documentation of design rationale. Based on the findings, we conclude that much research is needed to develop methodology and tool support for design rationale capture and usage. Furthermore, we put forward some research questions that would benefit from further investigation into design rationale in order to support practice in industry.",2005,0, 1441,Symphony: View-Driven Software Architecture Reconstruction,"Authentic descriptions of a software architecture are required as a reliable foundation for any but trivial changes to a system. Far too often, architecture descriptions of existing systems are out of sync with the implementation. If they are, they must be reconstructed. There are many existing techniques for reconstructing individual architecture views, but no information about how to select views for reconstruction, or about process aspects of architecture reconstruction in general. In this paper we describe view-driven process for reconstructing software architecture that fills this gap. To describe Symphony, we present and compare different case studies, thus serving a secondary goal of sharing real-life reconstruction experience. 
The Symphony process incorporates the state of the practice, where reconstruction is problem-driven and uses a rich set of architecture views. Symphony provides a common framework for reporting reconstruction experiences and for comparing reconstruction approaches. Finally, it is a vehicle for exposing and demarcating research problems in software architecture reconstruction.",2004,0, 1442,Synthesising Research Results,"In this paper, we test the effect of using speech synthesis when interacting with a spoken dialog system (SDS). We use a user simulation to connect our speech synthesis to a real, state-of-the-art automatic speech recognition (ASR) component deployed in a working commercial SDS via a standard telephone line. In a series of experiments, we compare human-machine dialogs and their recognition scores with simulated dialogs using synthesis. Our results show that a good text-to-speech synthesis configuration rivals human speech both in recognition scores as well as variability. This makes the speech interface in user simulation quite attractive.",2012,0, 1443,"System Quality Requirements Engineering (SQUARE): Case Study on Asset Management System, Phase II",?This report describes the second phase of an application of the System Quality Requirements Engineering (SQUARE) Methodology developed by the Software Engineering Institute's Networked Systems Survivability Program on an asset management system. An overview of the SQUARE process and the vendor is presented followed by a description of the system under study. The research completed on Steps 4 through 9 of this 9-step process is then explained and feedback on its implementation is provided. The report concludes with a summary of findings and gives recommendations for future considerations of SQUARE testing. This report is one of a series of reports resulting from research conducted by the SQUARE team as part of an independent research and development project of the Software Engineering Institute.,2005,0, 1444,System Test Planning of Software: An Optimization Approach,This paper extends an exponential reliability growth model to determine the optimal number of test cases to be executed for various use case scenarios during the system testing of software. An example demonstrates a practical application of the optimization model for system test planning,2006,0, 1445,"Systematic Construction of Goal-Oriented COTS Taxonomies","Efficient estimation of power consumption is vital when designing large digital systems. The technique called power emulation can speed up estimation by implementing power models alongside a design on an FPGA. Current state-of-the-art power emulation methods construct models using various custom techniques, but there is no study on how the existing methods relate to each other nor how their differences impact the final quality of the model. We propose a methodology which describes the breadth of current approaches to automated construction of power emulation models. We also evaluate the current methods, finding that there is significant variation in accuracy and complexity. In 32.8 % of all tests, the average accuracy of the least complex method is better than that of the most advanced method at less than 0.3 % the hardware overhead. This result fuels the hope that further innovation may yield models with high accuracy at low implementation cost. 
Our software frameworks and experimental data are made available to promote continued work on the field.",2016,0, 1446,Systematic integration between requirements and architecture.,"This paper presents a systematic comparison between two different implementations of a distributed network on chip: fully asynchronous and multi-synchronous. The NoC architecture has been designed to be used in a globally asynchronous locally synchronous clusterized multi processors system on chip. The 5 relevant parameters are silicon area, network saturation threshold, communication throughput, packet latency and power consumption. Both architectures have been physically implemented and simulated by SystemC/VHDL co-simulation. The electrical parameters have also been evaluated by post layout SPICE simulation for a 90nm CMOS fabrication process, taking into account the long wire effects",2007,0, 1447,Systematization of Knowledge about Performative Verbs: Capturing Speaker?s Intention,"In order to support effective and smooth recognition of the intentions embedded within texts shared by multiple knowledge agents (e.g. humans or knowledge systems), this paper systematizes knowledge about performative verbs, which is considered as a verb to indicate speaker intentions, and proposes an intention reference system. The system enables two kinds of references; (a) a reference of the cognitive elements of intention and (b) a reference of intentions with cognitive elements.Consequently, it becomes easy to share the text addressor?s intention at least in the group even in the text-based communications on the Web. In addition to semantic web that tags the semantic content of the text, tagging the action of the text expects to realize more smoothly text communication with less misunderstanding.",2008,0, 1448,Systemic Analysis Applied to Problem Solving: The Case of the Past Participle in French,"AbstractIn segmenting a given language or languages in general in a systemic manner, we show how it is possible to effect computations which lead to reliable human language technology applications. Some problems have not yet been solved in language processing; we will show how in applying the theory systemic analysis and the calculus of SyGuLAC (Systemic Grammar using a Linguistically motivated Algebra and Calculus), we can solve difficult problems. Systems such as the ones outlined in the paper can be created whenever there is a need. The only requirements are that such systems can be manipulated, and that they be verifiable and above all traceable. A system ought to be computable and be able to be represented in its entirety; if not it cannot be verified.Keywords: human language technology, language calculability, system, systemic analysis, agreement of French participle.",2004,0, 1449,TABASCO: a taxonomy-based domain engineering method,"We discuss TABASCO, a method for constructing Domain-Specific Toolkits (DSTs). We present TABASCO in the context of domain engineering and generative programming. We discuss the steps of TABASCO in detail, with a focus on the software construction side, giving examples based on actual applications of the method. In doing so, we show that TABASCO is a domain engineering method aimed at a particular kind of software domain, and how this method is applied in practice.",1992,0, 1450,Tabletop Sharing of Digital Photographs for the Elderly,"Handoff of objects and tools occurs frequently and naturally in face-to-face work; in tabletop groupware, however, digital handoff is often awkward. 
In this paper, we investigate ways of improving support for digital handoff in tabletop systems. We first observed how handoff works at a physical table, and then compared the performance of tangible and standard transfer techniques on digital tables. Based on our observations, we developed a new technique called force-field handoff that allows objects to drift between pointers that are approaching one another. We tested force-field handoff in an experiment, and found that it is significantly faster than current digital handoff; no difference was found with tangible handoff. In addition, force-field handoff was preferred by the majority of participants.",2008,0, 1451,Tackling Offshore Communication Challenges with Agile Architecture-Centric Development,"Offshoring is not as popular as it seems. According to a recent German survey, only 1.5% of all outsourcing activities target offshore locations. This is a remarkably small figure taking into account the widely published purported benefits of offshoring. In this paper we demonstrate that communication problems are at the core of offshoring woes. This does not come as a surprise as they also play a major role in onshore projects. Based on our experience in tackling these challenges with our well established communication-centered agile design and development approach, we present case-study-reinforced advice for successful offshore projects. We show that a common view of the underlying architecture is of paramount importance for these projects.",2007,0, 1452,Tasks and scenario-based evaluation of information visualization techniques,"This paper highlights the importance of uncertainty visualization in information fusion, reviews general methods of representing uncertainty and presents perceptual and cognitive principles from Tufte, Chambers and Bertin as well as users experiments documented in the literature. Examples of uncertainty representations in information fusion are analyzed using these general theories. These principles can be used in future theoretical evaluations of existing or newly developed uncertainty visualization techniques before usability testing with actual users.",2007,0, 1453,Taxonomic Dimensions for Studying Situational Method Development,"AbstractThis paper is concerned with fragmented literature on situational method development, which is one of fundamental topics related to information systems development (ISD) methods. As the topic has attracted many scholars from various and possibly complementary schools of thought, different interpretations and understandings of key notions related to method development are present. In this paper, we regard such understandings as both challenges and opportunities for studying this topic. Upon the extensive review of relevant research, this paper shows how this literature fragmentation has resulted in and what needs to be done to make sense of the various understandings for studying situational ISD methods. For the latter, we propose the use of a number of taxonomic dimensions. We argue that these dimensions can help to ease the conduct of literature review and to position disparate research endeavors concerning situational method development properly. 
In particular, we discuss three basic studies to demonstrate how the taxonomic dimensions can be useful in studying the subject matter.",2007,0, 1454,Taxonomy of algorithm animation languages,All kinds of modified BP algorithms in the MATLAB's neural networks toolbox are discussed in the optimization techniques and compared with speed and memory. Different problem types applied to those algorithms are proposed.,2004,0, 1455,Teaching an Undergraduate AI Course with Games and Simulation,"Global software engineering is a growing field of research. The ability to develop software at remote sites provides means to utilize talents and skills in different parts of the world. Organizations and companies benefit from such diverse pool of developers. Recently, global software engineering courses started to be popular in academic settings to prepare generations of developers who can function in a professional way in such distributed setting. Courses are normally offered as part of computer science or software engineering degrees. There are different challenges pertaining to team members, environment and the interlacing factors like time zones, cultural diversity of team members, location barriers and gender issues. Simulation games have been used to teach classical software engineering courses. Simulation games can be used to illustrate and experiment with concepts like team management, performance and tool selection. SimSE is an educational simulation tool that provides graphical simulation environment to help students to practice anticipated challenges during software development. In this paper, we propose a model for distributed global software development simulation games. The model includes factors like time zones, cultural diversity of users (mainly Hofstede's culture dimensions are used), location barriers and gender issues. These factors will result in game triggers that may affect the development of the virtual project. The model is then implemented using the SimSE model builder. The game will be illustrated showing how it can be used in teaching global software engineering courses. The results will be verified using existing models.",2015,0, 1456,Teaching computer game design and construction.,"Computer game, a new field of artificial intelligence, as the name suggests, is to make the computer learn to think and play chess games like human beings. As one of the important research field of the artificial intelligence, computer game, which is considered as the touchstone of the artificial intelligence, has brought many important methods and theories to the field. Connect6, is a newly introduced game recent years, the research of game technology and algorithm on which is still remained relatively little. In this paper, I put forward a design and optimization of Connect6 computer game system, including technology of separating interface from computer kernel, move generator, improvement of search strategy and system optimization based on threat. These technologies are all new explorations to Connect6. The experiments and tests prove that our optimized program has advantages. 
And these technologies have helped us win the first prize in the 2013 National Undergraduate Computer Game Competition.",2014,0, 1457,Teaching Empirical Methods to Undergraduate Students Working Group Results,"AbstractIn this paper, we report the experiences of a working group who met, as part of the 2006 Dagstuhl Seminar on Empirical Software Engineering, to discuss the teaching of empirical methods to undergraduate students. The nature of the discussion meant that the group also indirectly considered teaching empirical methods to postgraduate students, mainly as a contrast to understand what is appropriate to teach at undergraduate and postgraduate level. The paper first provides a summary of the respective experiences of the participants in the working group. This summary is then used to informally assess the progress that has been made since the previous Dagstuhl Seminar, held in 1992. The paper then reviews some issues that arise when teaching methods to undergraduate students. Finally, some recommendations for the future development of courses are provided.",2007,0, 1458,Teaching Evidence-Based Software Engineering to University Students,"An undergraduate degree programme in software engineering was designed to include a systems analysis module, in which the teaching was based on a particular structured methodology. Experience is described of the conflicts that this caused within the curriculum of the degree, and of the way in which these were solved. This involved the development of a structure for the topics forming the subject of systems analysis, which is described along with the new structure for the systems analysis module that was derived from it. It is argued that this structure for systems analysis is also applicable to object oriented approaches and some experience is discussed of applying it within MSc courses as well as undergraduate ones",1998,0, 1459,Teaching Software Architecture Design,"Teaching software architecture design in an academic course so that it would equip the students with industrially useful capabilities is challenging. The real software architecture design problems are less clear than what the students are used to learning; the existing mass of assets of an industrial environment is hard to bring into a classroom; and so forth. We have designed a special course into an academic software engineering curriculum, taking into account the industrial needs in teaching the problem of understanding and solving demanding software architecture design problems. The course form is similar to an industrial architecture study assigned to a team of architects. In this paper, we discuss the industrial motivation for the course, the development of the course to its current form, and the lessons learned from running the course.",2008,0, 1460,Teaching Software Evolution in Open Source,"Most software engineering courses require students to develop small programs from scratch, but professional engineers typically work on the evolution of large software systems. Using open source software and a software change process model can narrow this gap without imposing excessive demands on students or instructors.",2007,0, 1461,Technology Adds New Principles to Persuasive Psychology: Evidence from Health Education,"

Computer-technology has led to the use of new principles of persuasion. These new principles constitute the unique working mechanisms of persuasion by means of computer. In the present study, three tailored messages that each contained one potential working mechanism - personalization, adaptation or feedback - were compared with a standard information condition. Two hundred and two students who smoked tobacco daily were randomly divided over four conditions. After the computer pre-test questionnaire, they read the information in their condition and filled in the immediate post-test. After 4 months, they were sent a follow-up questionnaire assessing their quitting activity. The data show that personalization (44.5%) and feedback (48.7%) but not adaptation (28.6%) led to significantly more quitting activity after 4 months than did the standard information (22.9%). Moreover, the effect of condition on quitting activity was mediated by individuals' evaluations of the extent to which the information took into account personal characteristics.

",2006,0, 1462,Temporal Processing in a Spiking Model of the Visual System,"This paper summarizes how Convolutional Neural Networks (ConvNets) can be implemented in hardware using Spiking neural network Address-Event-Representation (AER) technology, for sophisticated pattern and object recognition tasks operating at mili second delay throughputs. Although such hardware would require hundreds of individual convolutional modules and thus is presently not yet available, we discuss methods and technologies for implementing it in the near future. On the other hand, we provide precise behavioral simulations of large scale spiking AER convolutional hardware and evaluate its performance, by using performance figures of already available AER convolution chips fed with real sensory data obtained from physically available AER motion retina chips. We provide simulation results of systems trained for people recognition, showing recognition delays of a few miliseconds from stimulus onset. ConvNets show good up scaling behavior and possibilities for being implemented efficiently with new nano scale hybrid CMOS/nonCMOS technologies.",2010,0, 1463,Test Patterns with TTCN-3,"Radial Tchebichef moments as a discrete orthogonal moment in the polar coordinate have been successfully used in the field of pattern recognition. However, the scaling invariant property of these moments has not been studied due to the complexity of the problem. In this paper, we present a new method to construct a complete set of scaling and rotation invariants extract from radial Tchebichef moments, named radial Tchebichef moment invariants (RCMI). Experimental results show the efficiency and the robustness to noise of the proposed method for recognition tasks.",2010,0, 1464,Testing in Software Product Lines,"Software Product Line (SPL) test is an emergent research subject, and works found in the literature address some topics such as test process, plain and guidelines, and most recently, strategies to integrate test methods and criteria in the SPL engineering. Strategies that are SPL oriented allow reuse of test assets and were proposed as an alternative to those ones that test product by product in a separated way. However, studies conducted to evaluate the benefits of such strategies are necessary, and have not been conducted yet. Considering this fact, this paper presents results from the application of the SPL oriented strategy, named Reusable Asset Instantiation (RAI). A comparison with the strategy Product by Product is conducted, considering reuse, and test data generation based on use cases.",2011,0, 1465,Testing vs. code inspection vs. what else?: male and female end users' debugging strategies,"

Little is known about the strategies end-user programmers use in debugging their programs, and even less is known about gender differences that may exist in these strategies. Without this type of information, designers of end-user programming systems cannot know the ""target"" at which to aim, if they are to support male and female end-user programmers. We present a study investigating this issue. We asked end-user programmers to debug spreadsheets and to describe their debugging strategies. Using mixed methods, we analyzed their strategies and looked for relationships among participants' strategy choices, gender, and debugging success. Our results indicate that males and females debug in quite different ways, that opportunities for improving support for end-user debugging strategies for both genders are abundant, and that tools currently available to end-user debuggers may be especially deficient in supporting debugging strategies used by females.

",2008,0, 1466,Text Mining for Finding Functional Community of Related Genes Using TCM Knowledge,"We present a novel text mining approach to uncover the functional gene relationships, maybe, temporal and spatial functional modular interaction networks, from MEDLINE in large scale. Other than the regular approaches, which only consider the reductionistic molecular biological knowledge in MEDLINE, we use TCM knowledge(e.g. Symptom Complex) and the 50,000 TCM bibliographic records to automatically congregate the related genes. A simple but efficient bootstrapping technique is used to extract the clinical disease names from TCM literature, and term co-occurrence is used to identify the disease-gene relationships in MEDLINE abstracts and titles. The underlying hypothesis is that the relevant genes of the same Symptom Complex will have some biological interactions. It is also a probing research to study the connection of TCM with modern biomedical and post-genomics studies by text mining. The preliminary results show that Symptom Complex gives a novel top-down view of functional genomics research, and it is a promising research field while connecting TCM with modern life science using text mining.",2004,0, 1467,Texture Analysis for Classification of Endometrial Tissue in Gray Scale Transvaginal Ultrasonography,"AbstractComputer-aided classification of benign and malignant endometrial tissue, as depicted in 2D gray scale transvaginal ultrasonography (TVS), was attempted by computing texture-based features. 65 TVS endometrial images were collected (15 malignant, 50 benign) and processed with a wavelet based enhancement technique. Two regions of interest (ROIs) were identified (endometrium, endometrium margin) on each processed image. Thirty-two textural features were extracted from each ROI employing first and second order statistics texture analysis algorithms. Textural feature-based models were generated for differentiating benign from malignant endometrial tissue employing stepwise logistic regression analysis. Models performance was evaluated by means of receiver operating characteristics (ROC) analysis. The best benign versus malignant classification was obtained from the model combining three textural features from endometrium and four textural features from endometrium margin, with corresponding area under ROC curve (Az) 0.956.",2006,0, 1468,The (Practical) Importance of SE Experiments,"At present the experimental resources of practical courses at universities are waste and experimental results are not ideal, and it is very important to grasp the experimental process of the students timely and accurate and improve the level of experimental teaching. The paper designs and develops college students experiment management platform, and uses software engineering system to carry out detail research on design ideas, methods and techniques, and B/S architecture, Spring MVC and Hibernate are used to achieve framework structure, then the background uses MySQL database as development tools. The practice has indicated that the system facilitates the management of experimental task and performance statistics, greatly improving the efficiency of teaching activities.",2013,0, 1469,The Amount of Information on Emotional States Conveyed by the Verbal and Nonverbal Channels: Some Perceptual Data,"

In a face-to-face interaction, the addressee exploits both the verbal and nonverbal communication modes to infer the speaker's emotional state. Is such informational content redundant? Is the amount of information conveyed by each communication mode the same or is it different? How much information about the speaker's emotional state is conveyed by each mode, and is there a preferential communication mode for a given emotional state? This work attempts to answer these questions by evaluating the subjective perception of emotional states in the single channels (either visual or auditory) and the combined channels (visual and auditory). Results show that vocal expressions convey the same amount of information as the combined channels, and that the video alone conveys poorer emotional information than the audio alone or the audio and video together. Interpretations of these results (which seem not to support the data reported in the literature on the dominance of the visual channel in the perception of emotion) are given in terms of cognitive load, language expertise and dynamicity. Also, a mathematical model inspired by information processing theory is hypothesized to support the suggested interpretations.

",2007,0, 1470,The antecedents and impacts of information processing effectiveness in inter-organizational collaborative software development,"In parallel with business globalization and technological innovation, customer demand and the diversity of core business expertise (competency) are also substantial influences on the development of new styles of business practices. In order to effectively and strongly compete in the boundary-less modern global economy, organizations have actively explored new ways of collaboration, which resulted in institution of virtual enterprises and inter-organizational alliances.

Communication, effective information sharing and knowledge extraction have been identified as integral components of inter-organizational collaboration. However, there has been little focus on comprehensive benchmarking tools or approaches to assess the potential of organizations for collaborative communication.

This research focuses on the effectiveness of information processing (IP). Inter-organizational IP addresses information exchange, as well as information utilization, across organizational boundaries. Through a literature review, a set of factors influencing the effectiveness of inter-organizational communication has been identified. Using these factors, this research aims to assess the level of communication effectiveness for two organizations with the common goal of collaborating on software development.

These influential factors have been grouped into three categories: (a) organizational background, (b) contingency processes, and (c) information technology. A key hypothesis of this research is that a higher level of inter-organizational communication will result in a higher level of software development performance. The success of software development will be classified and measured along two dimensions: process performance and quality performance.

A multidisciplinary framework is developed to address organizational, managerial strategy and technology issues for the effective establishment of inter-organizational communication. The relative weight and effect of influential factors and intervening variables are studied and evaluated. The effect of these variables on software development performance has been studied along a number of practical dimensions.",2006,0, 1471,The Antecedents of Online Consumers' Perceived Usefulness of Website: A Protocol Analysis Approach,"

Internet-based interactive multimedia technologies enable online firms to display featured products via a variety of product information and various presentation formats. This study investigates how consumers evaluate the usefulness of online product presentations from their experience with the virtual products. Three different product displays on two products are tested in a survey. Using a written protocol analysis approach, the study has confirmed our expectations on the impact of information quality and system quality on consumers' online shopping experience.

",2007,0, 1472,The Application of ICT in Collaborative Working in Construction Projects: A Critical Review,"The changing business environment characterized with tensing competitiveness and widely global collaboration requires construction organizations establishing effective and efficient inter-organization management systems to support themselves survival. Collaborative working (CW) is emerging for improving performance and enhancing competitiveness with responding to the changing environment in construction. This research presents the definition of CW underpinned by the principle of collaboration. Through a thorough literature review on selected paper from well-known academic journals in construction management, a critical review on the application of information and communication technology (ICT) in CW in construction projects is presented with focusing on design, project management, and integrated inter-organization management. Some limits of research are concluded, and future research directions are recommended.",2007,0, 1473,The Biometrics Grid: A Solution to Biometric Technologies,"The biometrics grid can simplify resource management and access by making biometrics data as easy to access as Web pages via a Web browser. It tests user-designed modules, which are wrapped to Web services and deployed into a Web services container. Risk-resilient strategies are used for job scheduling. A case study of two biometrics recognition processes in voiceprint and face illustrates the process. The biometrics grid could benefit academic collaborations and improve biometric data sharing, as well as enhance biometric systems' efficiency in scientific research and commercial applications.",2007,0, 1474,The Cache Complexity of Multithreaded Cache Oblivious Algorithms,"Cache oblivious algorithms are designed to get the good benefit from any of the underlying hierarchy of caches without the need to know about the exact structure of the cache. These algorithms are cache oblivious i.e., no variables are dependent on hardware parameters such as cache size and cache line length. Optimal utilization of cache memory has to be done in order to get the full performance potential of the hardware. We present here the miss rate comparison of cache oblivious matrix multiplication using the sequential access recursive technique and normal multiplication program. Varying the cache size the respective miss rates in the L1 cache are taken and then comparison is done. It is found that the miss rates in the L1 cache for the cache oblivious matrix multiplication program using the sequential access recursive technique is comparatively lesser than the naive matrix multiplication program.",2013,0, 1475,The Claims Library Capability Maturity Model: Evaluating a Claims Library,"One of the problem that plagues Human-Computer Interaction (HCI) software is its development cost. Many software companies forego the usability engineering aspect of their projects due to the time required to design and test user interfaces. Unfortunately, there is no ""silver bullet"" for user interface design and implementation because they are inherently difficult tasks. As computers are moving off the desktop, the greatest challenge for designers will be integrating these systems seamlessly into our everyday lives. The potential for reuse in user interfaces lies in reducing the time and effort required for this task, without sacrificing design quality. 
In this work we begin with an iterative development cycle for a claims library based on prominent literature within the HCI and software engineering fields. We constructed the Claims Library to be a repository of potentially reusable notification system claims. We examine the library through theoretical and practical perspectives. The theoretical perspective reveals tradeoffs in the initial implementation that relate to Krueger's taxonomy of reuse. The practical perspective stems from experience in designing and conducting usability testing for an in-vehicle input device using the Claims Library. While valuable, these examinations did not provide a distinct method of improving the library. Although we expected to uncover a specific diagnosis for the problems in the library, it was unclear how they should be approached in further development efforts. With this realization, we saw that a more important and immediate contribution would not be another iteration of the Claims Library design. Rather, a clarification of the underlying theory that would better inform future systems development seemed a more urgent and worthy use of our experience. This clarification would need to have several characteristics: a staged or prioritized architecture, an ideal model grounded in the literature, and intermediate development objectives and assessment points. As a solution, we propose the Claims Library Capability Maturity Model (CL-CMM), based on the theoretical deficiencies that should guide development of a claims library, as noted in the two evaluations. This thesis delivers a five-stage model that includes process areas, goals, and practices addressing larger threads of concern. Our capability maturity model is patterned after models in software engineering and human resource management. We include a full description of each stage, a gap analysis method of appraisal, and an example of its use. Several directions for future work are noted that are necessary to continue development and validation of the model.",2004,0, 1476,The co-adaptive neural network approach to the Euclidean Travelling Salesman Problem,"Neural networks have been suggested as tools for the solution of hard combinatorial optimization problems. The traveling salesman problem (TSP) is commonly considered a benchmark for connectionist methods. Here we use the random neural network (RN) model, and apply the dynamical random neural network (DRNN) approach to approximately solve the TSP. The advantage of the RN model is that a relatively fast, purely analytical and numerical approach can be used. Furthermore, the RN model equations can be directly solved in full parallelism. We show that the DRNN yields solutions to the TSP that are close to optimal in a majority of the instances tested",1993,0, 1477,The computational complexity of component selection in simulation reuse,"Simulation composability has been much more difficult to realize than some initially imagined. We believe that success lies in explicit considerations for the adaptability of components. In this paper, we show that the complexity of optimal component selection for adaptable components is NP-complete. However, our approach allows for the efficient adaptation of components to construct a complex simulation in the most flexible manner while allowing the greatest opportunity to meet all requirements, all the while reducing time and costs. 
We demonstrate that complexity can vary from polynomial to NP and even to exponential as a function of seemingly simple decisions made about the nature of dependencies among components. We generalize these results to show that regardless of the types or reasons for dependencies in component selection, their mere existence makes this problem very difficult to solve optimally.",2005,0, 1478,The Conceptualization of a Configurable Multi-party Multi-message Request-Reply Conversation,"

Organizations, to function effectively and expand their boundaries, require a deep insight into both process orchestration and the choreography of cross-organizational business processes. The set of requirements for service interactions is significant, and has not yet been sufficiently refined. The Service Interaction Patterns studies by Barros et al. demonstrate this point. However, they overlook some important aspects of service interaction of a bilateral and multilateral nature. Furthermore, the definitions of these patterns are not precise due to the absence of a formal semantics. In this paper, we analyze and present a set of patterns formed around the subset of patterns documented by Barros et al. concerned with Request-Reply interactions, and extend these ideas to cover multiple parties and multiple messages. We concentrate on the interaction between multiple parties, and analyze issues of non-guaranteed responses and different aspects of message handling. We propose one configurable, formally defined conceptual model to describe and analyze options and variants of request-reply patterns. Furthermore, we propose a graphical notation to depict every pattern variant, and formalize the semantics by means of Coloured Petri Nets. In addition, we apply this pattern family to evaluate WS-BPEL v2.0 and check how selected pattern variants can be operationalized in Oracle BPEL PM.

",2007,0, 1479,The consistency of empirical comparisons of regression and analogy-based software project cost prediction,"The objective is to determine the consistency within and between results in empirical studies of software engineering cost estimation. We focus on regression and analogy techniques as these are commonly used. We conducted an exhaustive literature search using predefined inclusion and exclusion criteria and identified 67 journal papers and 104 conference papers. From this sample we identified 11 journal papers and 9 conference papers that used both methods. Our analysis found that about 25% of studies were internally inconclusive. We also found that there is approximately equal evidence in favour of, and against analogy-based methods. We confirm the lack of consistency in the findings and argue that this inconsistent pattern from 20 different studies comparing regression and analogy is somewhat disturbing. It suggests that we need to ask more detailed questions than just: ""What is the best prediction system?"".",2005,0, 1480,The Contract Winning Process A Guide for Small Development Companies,"In order to survive in today?s business world it is necessary to win contracts. If companies fail to do this then their existence is threatened. Therefore, the manner in which companies conduct their contract winning activities become of paramount importance. Much focus in software engineering research and academic literature centres around the post-contract winning activities, such as project planning, costing and scheduling. The emphasis on the contract winning process, though not neglected, is quite small in comparison. There exists a need for more research in this interesting area and this thesis aims to partly address this need. Consequently, the main focus of this research is the contract winning process. The approach used to investigate this area consisted of a theoretical study followed by an empirical study, where eight small development companies were interviewed. The findings show that a uniform formal process does not exist for winning and negotiating contracts. As a result of these findings, from both the theoretical and empirical studies, a contract winning process model for small development companies was formulated. The proposed model consists of five sequential stages with recommended activities for each stage. The model is intended for small software engineering development companies but because the model is generic it could also be used by non-software companies.",2004,0, 1481,The Data Abstraction Layer as Knowledge Provider for a Medical Multi-agent System,"

The care of senior patients requires a great amount of human and healthcare resources. The K4Care Project is developing a new European model to improve the home care assistance of these patients. This medical model will be supported by an intelligent platform. This platform has two main layers: a multi-agent system and a knowledge layer. In this paper, the initial design of the system is reviewed, and some improvements are presented. The main contribution is the introduction of an intermediate layer between agents and knowledge: the Data Abstraction Layer. Using this additional layer, agents can have transparent access to many different knowledge sources, which have data stored in different languages. In addition, the new layer would make it possible to treat queries intelligently in order to generate answers in a more effective and efficient way.

",2007,0, 1482,The Design of a Portable Kit of Wireless Sensors for Naturalistic Data Collection,"

In this paper, we introduce MITes, a flexible kit of wireless sensing devices for pervasive computing research in natural settings. The sensors have been optimized for ease of use, ease of installation, affordability, and robustness to environmental conditions in complex spaces such as homes. The kit includes six environmental sensors: movement, movement tuned for object-usage-detection, light, temperature, proximity, and current sensing in electric appliances. The kit also includes five wearable sensors: onbody acceleration, heart rate, ultra-violet radiation exposure, RFID reader wristband, and location beacons. The sensors can be used simultaneously with a single receiver in the same environment. This paper describes our design goals and results of the evaluation of some of the sensors and their performance characteristics. Also described is how the kit is being used for acquisition of data in non-laboratory settings where real-time multi-modal sensor information is acquired simultaneously from several sensors worn on the body and up to several hundred sensors distributed in an environment.

",2006,0, 1483,"The Design, Implementation and Use of Domain Specific Languages","We demonstrate the application of a Model-Driven Software Development (MDSD) methodology using the example of an analysis framework designed for a data logging device in the field of vehicle testing. This mobile device is capable of recording the data traffic of automotive-specific bus systems like Controller Area Network (CAN), Local Interconnect Network (LIN), FlexRay and Media Orientied Systems Transport (MOST) in real-time. In order to accelerate the subsequent analysis of the tremendous amount of data, it is advisable to pre-filter the recorded log data on device, during the test-drive. To enable the test engineer of creating data analyses we built a component-based library on top of the languages System{C}/C++. Problematic with this approach is that still substantial programming knowledge is required for implementing filter algorithms, which is usually not the domain of a vehicle test engineer. In a next step we developed a graphical modelling language on top of our library and a graphical editor. The editor is able of verifying a model as well as of generating source code which eliminates the need of manually implementing a filter algorithm. In our contribution we show the design of the graphical language and the editor using the Eclipse platform and the Graphical Modelling Framework (GMF). We describe the automatic extraction of meta-information, such as available components, their interfaces and categorization annotations by parsing the library's C++ implementation with the help of Xtext. The editor will use that information to build a dedicated tool palette providing components that the designer can instantiate and interconnect using drag-and-drop.",2010,0, 1484,THE DEVELOPMENT OF A SHARABLE CONTENT OBJECT REFERENCE MODEL (SCORM) BEST PRACTICES GUIDE FOR INSTRUCTIONAL DESIGNERS,"Learning technology standards are increasingly gaining in importance in the field of Web-based teaching. At present, two standards dominating the market are taking shape. These are the SCORM standard of the ADL initiative and the AICC standard of the AICC organization. Based on the AICC and LOM meta data standards, the SCORM standard stands the chance to become the standard dominating the market. A number of restrictions are involved with the SCORM standard, though. The article shows general deficiencies of the SCORM standard that are critical concerning the market value of SCOs, the process of producing WBTs on the basis of different SCO Within the field of instructional design which relates to the Department of Defense and other government entities, and increasingly the private sector as well, SCORM is a standard which will remain and continue to evolve for years to come. As spending for training decreases, and the need for reusable distributed learning as opposed to traditional classroom instruction increases, more instructional designers will be designing units of study conforming to the Sharable Content Object Reference Model. SCORM is even finding its way into digital gaming, enabling game based learning to be played on any platform and allowing the sharing and reusing of game based objects (Prensky, 2001). Organizations that have yet to implement SCORM into their production process will be forced to do so, and organizations already designing instruction around the SCORM standard must work toward maintaining the standard and implementing its evolutions into their processes. 
It is my belief that in most cases, designing effective computer-based distributed learning around SCORM is not difficult, but does require preparation and foresight. It is essential that instructional designers, especially if they lack experience with SCORM, receive adequate training, not only in the technical aspects of the standard that apply to them, but also in the best practices in designing courseware around the SCORM goal of reusability. As was demonstrated in my pre-instruction and post-instruction surveys, however, the attitude of instructional designers remains that educational content is more important than SCORM conformance. I agree with this position, and accept as true the ideal that at no time should SCORM interfere with the soundness of the instruction. Overall, my recommendation is that organizations which utilize the SCORM standard will benefit from extensive instructional designer training on SCORM best practices. In my particular organization, the mandate of SCORM conformance is left to the individuals who are on our team in a technical capacity, and instructional designers are given very little information about SCORM. The end result is best practice errors which, although not technical violations, limit the reusability of designed courseware. A thorough block of instruction on SCORM best practices, such as the one demonstrated in this project, will not only help instructional designers be more familiar with the SCORM standard, but will also help them comply with the mandate of courseware content being reusable in order to save fiscal resources for the organization. ",2006,0, 1485,The development of a supply chain management process maturity model using the concepts of business process orientation.,"The concept of process maturity proposes that a process has a lifecycle that is assessed by the extent to which the process is explicitly defined, managed, measured and controlled. A maturity model assumes that progress towards goal achievement comes in stages. The supply chain maturity model presented in this paper is based on concepts developed by researchers over the past two decades. The Software Engineering Institute has also applied the concept of process maturity to the software development process in the form of the capability maturity model. This paper examines the relationship between supply chain management process maturity and performance, and provides a supply chain management process maturity model for enhanced supply chain performance.",2004,0, 1486,The Development of Remote E-Voting Around the World: A Review of Roads and Directions,"

Democracy and elections have more than 2,500 years of tradition. Technology has always influenced and shaped the ways elections are held. Since the emergence of the Internet there has been the idea of conducting remote electronic elections. In this paper we reviewed 104 elections with a remote e-voting possibility, based on research articles, working papers and also press releases. We analyzed the cases with respect to the level at which they take place, the technology used, the use of multiple channels, the size of the election and the provider of the system. Our findings show that while remote e-voting has arrived at the regional level and in organizations for binding elections, at the national level it is a very rare phenomenon. Further, paper-based elections are here to stay; most binding elections used remote e-voting in addition to the paper channel. Interestingly, providers of e-voting systems usually operate only in their own territory, as out-of-country operations are very rare. In the long run, for remote e-voting to become a reality for the masses, a lot has to be done. The high number of excluded cases shows that not only is documentation scarce but also knowledge of the effects of e-voting is rare, as most cases do not follow the simple experimental designs used elsewhere.

",2007,0, 1487,The Difference is in Messaging,Presents the President's message for this issue of the publication.,2016,0, 1488,The Effect of Animation Location and Timing on Visual Search Performance and Memory,

The current study investigated the effects of animation location and timing on visual search speed and accuracy and their effects on memory for the animated strings. Visual search accuracy was measured using the sensitivity measure d' from the signal detection theory (SDT) model. Results showed that black-and-white animations had no significant effect on visual search, while color animations slowed down the search significantly but had no significant effect on search accuracy. The size of the effect that an animation had on search speed did not depend on its location or timing. Nor did the ability to recognize the animated string or the preference judgment about the animated string depend on its location and timing. Animated strings were rated as more preferable than new strings even in the absence of explicit memory for the animated strings.

",2007,0, 1489,The Effect of Communication Frequency and Channel Richness on the Convergence Between Chief Executive and Chief Information Officers,"

Convergence (i.e., mutual understanding) between an organization's CEO and CIO is critical to its efforts to successfully exploit information technology. Communication theory predicts that greater communication frequency and channel richness lead to more such convergence. A postal survey of 202 pairs of CEOs and CIOs investigated the effect of communication frequency and channel richness on CEO/ CIO convergence, as well as the effect of convergence on the financial contribution of information systems (IS) to the organization. Convergence was operationalized in terms of the current and future roles of information technology (IT) as defined by the strategic grid. Rigorous validation confirmed the current role as composed of one factor and the future role as composed of three factors (i.e., managerial support, differentiation, and enhancement). More frequent communication predicted convergence about the current role, differentiation future role, and enhancement future role. The use of richer channels predicted convergence about the differentiation future role. Convergence about the current role predicted IS financial contribution. From a research perspective, the study extended theory about communication frequency, media richness, convergence, and the role of IT in organizations. From a managerial perspective, it provided direction for CEOs and CIOs interested in increasing their mutual understanding of the role of IT.

",2005,0, 1490,The Effect of Knowledge-of-External-Representations upon Performance and Representational Choice in a Database Query Task,"AbstractThis study examined the representation selection preference patterns of participants in adatabase query task. In the database task, participants were provided with achoice of information-equivalent data representations and chose one of them to use in answering database queries. Arange of database tasks were posed to participants - some required the identification of unique entities, some required the detection of clusters of similar entities, and some involved the qualitative comparison of values, etc. Participants were divided, post hoc, into two groups on the basis of apre-experimental task (card sort) designed to assess knowledge of external representations (KER). Results showed that low and high KER groups differed most in terms of representation selection on cluster type database query tasks. Participants in the low group tended to change from more graphical representations such as scatterplots to less complex representations (like bar charts or tables) from early to late trials. In contrast, high KER participants were able to successfully use awider range of ER types. They also selected more appropriate ERs (ie. ones that the diagrammatic reasoning literature predicts to be well-matched to the task).",2004,0, 1491,The Effect of Model Granularity on Student Performance Prediction Using Bayesian Networks,"

A standing question in the field of Intelligent Tutoring Systems and User Modeling in general is what the appropriate level of model granularity (how many skills to model) is and how that granularity is derived. In this paper we explore models with varying levels of skill generality (1, 5, 39 and 106 skill models) and measure the accuracy of these models by predicting student performance within our tutoring system, called ASSISTment, as well as student performance on a state standardized test. We employ Bayes nets to model user knowledge and to predict student responses. Our results show that the finer the granularity of the skill model, the better we can predict student performance for our online data. However, for the standardized test data we received, it was the 39-skill model that performed best. We view this as support for fine-grained skill models despite the finest-grained model not predicting the state test scores the best.

",2007,0, 1492,The effect of the number of concepts on the readability of schemas: an empirical study with data models,"

The number of concepts in a model has been frequently used in the literature to measure the ease of use in creating model schemas. However, to the best of our knowledge, nobody has looked at its effect on the readability of the model schemas after they have been created. The readability of a model schema is important in situations where the schemas are created by one team of analysts and read by other analysts, system developers, or maintenance administrators. Given the recent trend of models with increasing numbers of concepts such as the unified modeling language (UML), the effect of the number of concepts (NOC) on the readability of schemas has become increasingly important. In this work, we operationalize readability along three dimensions: effectiveness, efficiency, and learnability. We draw on the Bunge Wand Weber (BWW) framework, as well as the signal detection recognition theory and the ACT theory from cognitive psychology to formulate hypotheses and conduct an experiment to study the effects of the NOC in a data model on these readability dimensions. Our work makes the following contributions: (a) it extends the operationalization of the readability construct, and (b) unlike earlier empirical work that has focused exclusively on comparing models that differ along several dimensions, this work proposes an empirical methodology that isolates the effect of a model-independent variable (the NOC) on readability. From a practical perspective, our findings have implications both for creators of new models, as well as for practitioners who use currently available models for creating schemas to communicate requirements during the entire lifecycle of a system.

",2004,0, 1493,The Effect of User Factors on Consumer Familiarity with Health Terms: Using Gender as a Proxy for Background Knowledge About Gender-Specific Illnesses,"

An algorithm estimating the vocabulary complexity of a consumer health text can help improve the readability of consumer health materials. We had previously developed and validated an algorithm predicting lay familiarity with health terms on the basis of the terms' frequency in consumer health texts and experimental data. The present study is part of a program studying the influence of reader factors on familiarity with health terms and concepts. Using gender as a proxy for background knowledge, the study evaluates male and female participants' familiarity with terms and concepts pertaining to three types of health topics: male-specific, female-specific and gender-neutral. Of the terms/concepts of equal predicted difficulty, males were more familiar with those pertaining to neutral and male-specific topics (the effect was especially pronounced for “difficult” terms); no topic effect was observed for females. The implications for tailoring health readability formulas to various target populations are discussed.

",2006,0, 1494,The Effectiveness of Educational Technology: A Preliminary Study of Learners from Small and Large Power Distance Cultures,"

The cultural background of learners has been highlighted as crucial in determining the effectiveness of educational technology. This paper focuses on the influence of power distance in determining the effectiveness of educational technology. Utilizing a multiple case study, we examined the perception of learners from small and large power distance societies in terms of satisfaction with learning, self-efficacy with educational technology and perceived learning. Our findings show that the availability of educational technology enhances the learning outcomes of both cultures. The study suggests the notion that learning outcomes differ for learners from small and large power distance cultures.

",2007,0, 1495,The Effects of Cultural Diversity in Virtual Teams Versus Face-to-Face Teams,"Globalization and advances in communication technology have fuelled the emergence of global virtual teams (GVTs), which have salient cultural diversity. Prior research has demonstrated that cultural diversity has both positive and negative effect on team performance, but the mechanism for this phenomenon is still unknown. This paper presents a theory model to explore the relationship among cultural diversity, conflict, conflict management behaviors and performance in GVTs. Compared with the traditional face-to-face (FTF) teams, culture diversity may produce more conflict and lead to different conflict management behaviors in GVTs, which in turn affects team performance. Conflict management behaviors are treated as moderating variables between conflict and team performance. An experiment design for testing the hypotheses is described.",2008,0, 1496,The effects of task complexity and time availability limitations on human performance in database query tasks,"Prior research on human ability to write database queries has concentrated on the characteristics of query interfaces and the complexity of the query tasks. This paper reports the results of a laboratory experiment that investigated the relationship between task complexity and time availability, a characteristic of the task context not investigated in earlier database research, while controlling the query interface, data model, technology, and training. Contrary to expectations, when performance measures were adjusted by the time used to perform the task, time availability did not have any effects on task performance while task complexity had a strong influence on performance at all time availability levels. Finally, task complexity was found to be the main determinant of user confidence. The implications of these results for future research and practice are discussed.",2005,0, 1497,The Effects of Visual Versus Verbal Metaphors on Novice and Expert Learners? Performance,"

Since 1990 there has been a series of studies examining the effects of diagrams versus text on computer users' performance. There have also been studies investigating the effects of metaphors on learning and information searching. Research results indicate that verbal metaphors help learners to develop more complete mental models. However, little is known about the effects of visual metaphors, which possess the features of both diagrams and metaphors. In response to the gaps in the metaphor research literature, the present study aims to compare the effects of visual versus verbal metaphors in facilitating novices and experts in the comprehension and construction of mental models.

",2007,0, 1498,The E-Government Melting Pot: Lacking New Public Management and Innovation Flavor?,"

The paper argues that e-government literature has by and large not infused New Public Management (NPM) literature or innovation studies on e-government. Rather, e-government literature has used relatively simple frameworks and observations from the NPM and innovation studies and applied them in studies of e-government implementation. Based on a literature review of 60 peer-reviewed and double-blind-reviewed scientific studies, this paper argues that the domain has only been subject to research for about half a decade and that the domain is still unexplored in many aspects. One major absence is a lack of cross-referencing of studies and a limited number of cumulative studies on whether e-government can aid NPM or fuel innovation. However, the good news is that the literature review demonstrates that researchers entering the domain mainly base their research on empirical studies.

",2006,0, 1499,The EMISQ method and its tool support-expert-based evaluation of internal software quality,"AbstractThere is empirical evidence that internal software quality, e.g., the quality of source code, has great impact on the overall quality of software. Besides well-known manual inspection and review techniques for source code, more recent approaches utilize tool-based static code analysis for the evaluation of internal software quality. Despite the high potential of code analyzers the application of tools alone cannot replace well-founded expert opinion. Knowledge, experience and fair judgment are indispensable for a valid, reliable quality assessment, which is accepted by software developers and managers. The EMISQ method (Evaluation Method for Internal Software Quality), guides the assessment process for all stakeholders of an evaluation project. The method is supported by the Software Product Quality Reporter (SPQR), a tool which assists evaluators with their analysis and rating tasks and provides support for generating code quality reports. The application of SPQR has already proved its usefulness in various code assessment projects around the world. This paper introduces the EMISQ method and describes the tool support needed for an efficient and effective evaluation of internal software quality.",2008,0, 1500,The Empirical Paradigm Discussion and Summary,"A broad range of issues related to thin oxide characterization and reliability prediction were discussed in this group in order to accommodate the varied backgrounds and interests of the participants. A key topic of discussion in both sessions was on the correlation between charge to breakdown, Qbd, and long-term reliability (TDDB). Many participants voiced that Qbd is not a good parameter to estimate the reliability. Some data taken at NIST showed that oxides with the same Qbd can have much different TDDB values. However, some participants expressed that Qbd can be a good parameter in determining variations in oxide quality due to process variations. Although, there is no consensus onwhat percentage drop in Qbd would decide that an oxide is of bad quality.",1995,0, 1501,The Empirical Paradigm Introduction,"The 42 papers in this special issue are focused on progress in solid-state, fiber, and tunable sources. The issue is dedicated to the memory of Theodore H. Maiman, who kicked off the rapid growth of solid-state laser development in 1960. The papers are organized into six subject areas: power scaling strategies; fiber lasers and amplifiers; thin-disk lasers; planar waveguide and thin slab lasers; tunable sources and nonlinear frequency conversion; and progress in laser materials.",2007,0, 1502,The Epistemology of Validation and Verification Testing,"We designed a cell-network-based full-custom test-chip for gray-scale/color image segmentation of real-time video-signals in 350nm CMOS technology. From this digital test-chip design, fully-integrated QVGA-size video-picture-segmentation chips, with 250μsec segmentation time per frame, at 10MHz are estimated to become possible at the 90nm technology node.",2004,0, 1503,"The Evaluation of Large, Complex UML Analysis and Design Models",This paper describes techniques for analyzing large UML models. The first part of the paper describes heuristics and processes for creating semantically correct UML analysis and design models. The second part of the paper briefly describes the internal DesignAdvisor research tool that was used to analyze Siemens models. 
The results are presented and some interesting conclusions are drawn.,2004,0, 1504,The Evolution of the Internal Offshore Software Development Model at Dell Inc.,"Global software development not only enables lower operational costs, but makes it possible for companies to have an organizational branch which can drive standardization across all IT segments. Dell Inc.'s global development services took off in 2001 and since then have expanded to Brazil, two centers in India, and one center in Russia. For six years these centers were to some extent independent, and performed the role of a software development consulting/factory inside Dell. Now, the global development centers have migrated to a business-centric model, in which they are no longer a separate structure, but directly bound to the IT segments of each business area. The purpose of this paper is to show the evolution of the internal offshore software development model adopted by Dell and the results achieved by this endeavor.",2007,0, 1505,"The Experimental Paradigm in Reverse Engineering: Role, Challenges, and Limitations","In many areas of software engineering, empirical studies are playing an increasingly important role. This stems from the fact that software technologies are often based on heuristics and are moreover expected to be used in processes where human intervention is paramount. As a result, not only is it important to assess their cost-effectiveness under conditions that are as realistic and representative as possible, but we must also understand the conditions under which they are more suitable and applicable. There exists a wealth of empirical methods aimed at maximizing the validity of results obtained through empirical studies. However, in the case of reverse engineering, as for other domains of investigation, researchers and practitioners are faced with specific constraints and challenges. This is the focus of this keynote address and what the current paper attempts to clarify",2006,0, 1506,The Features People Use to Recognize Human Movement Style,"Observation of human movement informs a variety of person properties. However, it is unclear how this understanding of person properties is derived from the complex visual stimulus of human movement. I address this topic by first reviewing the literature on the visual perception of human movement and then discussing work that has explored the features used to discriminate between different styles of movement. This discussion includes work on quantifying human performance at style recognition, exaggeration of human movement and finally experimental derivation of a feature space to represent human emotion.",2004,0, 1507,The Focus Group Method as an Empirical Tool in Software Engineering,"This paper reflects on three cases where the focus group method was used to obtain feedback and experiences from software engineering practitioners and application users. The focus group method and its background are presented, the method's weaknesses and strengths are discussed, and guidelines are provided for how to use the method in the software engineering context. Furthermore, the results of the three studies conducted are highlighted and the paper concludes with a discussion of the applicability of the method for this type of research. In summary, the focus group method is a cost-effective and quick empirical research approach for obtaining qualitative insights and feedback from practitioners. It can be used in several phases and types of research. 
However, a major limitation of the method is that it is useful only in studying concepts that can be understood by participants in a limited time. We also recommend that in the software engineering context, the method should be used with sufficient empirical rigor.",2004,0, 1508,The FOSSology project,"This work focuses on establishing coordination models and methods with different information in the operation process of established construction project supply chains (CPSCs). A two-level programming model for collaborative decision making is established to find optimal solutions for all stakeholders in CPSCs. An agent-based negotiation framework for CPSCs coordination in a dynamic decision environment is designed based on intelligent agent technology and multi-attribute negotiation theory. This work also presents a relative entropy method to help negotiators (stakeholders) reach an acceptable solution when negotiations fail or are terminated manually. This is a summary of the first author's Ph.D. thesis supervised by Yaowu Wang and Geoffrey Qiping Shen (Hong Kong Polytechnic University) and defended on 25 June 2006 at the Harbin Institute of Technology (China).",2009,0, 1509,The fundamental design issues of Jeliot 3,"This paper discusses the design and simulation of an 8-bit, 100 Ms/s, full Nyquist analog-to-digital converter at 3.3 V supply. This project is a preliminary step to achieve the final goal of a 10-bit ADC with the same frequency performance",1999,0, 1510,The Future of Empirical Methods in Software Engineering Researc,"While empirical studies in software engineering are beginning to gain recognition in the research community, this subarea is also entering a new level of maturity by beginning to address the human aspects of software development. This added focus has added a new layer of complexity to an already challenging area of research. Along with new research questions, new research methods are needed to study nontechnical aspects of software engineering. In many other disciplines, qualitative research methods have been developed and are commonly used to handle the complexity of issues involving human behaviour. The paper presents several qualitative methods for data collection and analysis and describes them in terms of how they might be incorporated into empirical studies of software engineering, in particular how they might be combined with quantitative methods. To illustrate this use of qualitative methods, examples from real software engineering studies are used throughout",1999,0, 1511,The Future of Empirical Methods in Software Engineering Research,"We present the vision that for all fields of software engineering (SE), empirical research methods should enable the development of scientific knowledge about how useful different SE technologies are for different kinds of actors, performing different kinds of activities, on different kinds of systems. It is part of the vision that such scientific knowledge will guide the development of new SE technology and is a major input to important SE decisions in industry. Major challenges to the pursuit of this vision are: more SE research should be based on the use of empirical methods; the quality, including relevance, of the studies using such methods should be increased; there should be more and better synthesis of empirical evidence; and more theories should be built and tested. 
Means to meet these challenges include (1) increased competence regarding how to apply and combine alternative empirical methods, (2) tighter links between academia and industry, (3) the development of common research agendas with a focus on empirical methods, and (4) more resources for empirical research.",2007,0, 1512,The Gap Between Small Group Theory and Group Support System Research,"

Small group research and the development of small group theory have flourished in recent years, yet most group support systems (GSS) research is conducted without regard to theories of small groups. Here we contrast the richness of small group theory with the theoretical poverty of most experimental group support systems research. We look first at the state of small group theory, contrasting it with the state of theory in GSS work, using 10 recently published GSS studies as examples. Looking to small group theory as the basis for GSS research would add a great deal to GSS work, the topic of the next section in the paper. Absent a reliance on small group theory, however, we propose an alternative approach: Drop the GSS term altogether and return to a term that better describes what GSS research has always been about, supporting meetings, as captured in the phrase Electronic Meeting Systems.

",2007,0, 1513,"The Identity, Dynamics, and Diffusion of MIS","Based on Delerablee's identity-based broadcast encryption scheme (IBBE), we propose a new efficient dynamic identity-based broadcast encryption scheme (DIBBE), and prove its security in the Random Oracle model. The proposed scheme need not to setup a max potential receivers set in advance, and it has constant size of the public key, private key and header of cipertext. We also compare the performance of our scheme with the Delerablee's, the cost to decrypt in our revised scheme is also a constant size. So our scheme is efficient and practical for large receivers.",2010,0, 1514,The impact of software engineering research on modern progamming languages.,"Software engineering research and programming language design have enjoyed a symbiotic relationship, with traceable impacts since the 1970s, when these areas were first distinguished from one another. This report documents this relationship by focusing on several major features of current programming languages: data and procedural abstraction, types, concurrency, exceptions, and visual programming mechanisms. The influences are determined by tracing references in publications in both fields, obtaining oral histories from language designers delineating influences on them, and tracking cotemporal research trends and ideas as demonstrated by workshop topics, special issue publications, and invited talks in the two fields. In some cases there is conclusive data supporting influence. In other cases, there are circumstantial arguments (i.e., cotemporal ideas) that indicate influence. Using this approach, this study provides evidence of the impact of software engineering research on modern programming language design and documents the close relationship between these two fields.",2005,0, 1515,The Impact of Structuring the Interface as a Decision Tree in a Treatment Decision Support Tool,"

This study examined whether interfaces in computer-based decision aids can be designed to reduce the mental effort required by people to make difficult decisions about their healthcare and allow them to make decisions that correspond with their personal values. Participants (N=180) considered a treatment scenario for a heart condition and were asked to advise a hypothetical patient whether to have an operation or not. Attributes for decision alternatives were presented via computer in one of two formats; alternative-tree or attribute-tree. Participants engaged in significantly more compensatory decision strategies (i.e., comparing attributes of each option) using an interface congruent with their natural tendency to process such information (i.e., alternative-tree condition). There was also greater correlation (p<.05) between participants' decision and personal values in the alternative-tree. Patients who are ill and making decisions about treatment often find such choices stressful. Being able to reduce some of the mental burden in such circumstances adds to the importance of interface designers taking account of the benefits derived from structuring information for the patient.

",2007,0, 1516,The impact of the Abilene Paradox on double-loop learning in an agile team,"This paper presents a qualitative investigation of learning failures associated with the introduction of a new software development methodology by a project team. This paper illustrates that learning is more than the cognitive process of acquiring a new skill; learning also involves changes in behaviour and even beliefs. Extreme Programming (XP), like other software development methodologies, provides a set of values and guidelines as to how software should be developed. As these new values and guidelines involve behavioural changes, the study investigates the introduction of XP as a new learning experience. Researchers use the concepts of single and double-loop learning to illustrate how social actors learn to perform tasks effectively and to determine the best task to perform. The concept of triple-loop learning explains how this learning process can be ineffective, accordingly it is employed to examine why the introduction of XP was ineffective in the team studied. While XP should ideally foster double-loop learning, triple-loop learning can explain why this does not necessarily occur. Research illustrates how power factors influence learning among groups of individuals; this study focuses on one specific power factor - the power inherent in the desire to conform. The Abilene Paradox describes how groups can make ineffective decisions that are contrary to that which group members personally desire or believe. Ineffective decision-making occurs due to the desire to conform among group members; this was shown as the cause of ineffective learning in the software team studied. This desire to conform originated in how the project team cohered as a group, which was, in turn, influenced by the social values embraced by XP.",2007,0, 1517,The Impact of Verbal Stimuli in Motivating Consumer Response at the Point of Purchase Situation Online,"

This paper is a response to the lack of knowledge regarding actual online purchase behavior, and introduces behavior analysis as an alternative framework for studying consumers' purchase behavior. Motivation to confirm an order online can, from the concept of the motivating operation (MO), be analyzed as those antecedents in the environmental setting (including verbal stimuli) that (1) have an effect on the consequences of responding, and (2) influence the responses (including purchase) related to those consequences. Using the functional analytic framework from behavior analysis, the MO is identified as a likely predictor of consumers' tendency to confirm their online orders.

",2007,0, 1518,The impacts of user review on software responsiveness: Moderating requirements uncertainty,"Rapidly changing business environments and evolving processes increase the uncertainties in IS development. To produce a high-quality system that responds to user needs is challenging. We attempted to determine whether user reviews during the development process could reduce uncertainties and improve the product. Technology structuration theory indicated that users, as actors participating in reviews during the development of a system, could help reduce uncertainty in the organizational requirements and thus improve the software product. A survey of system developers indicated that user requirements uncertainty had a direct, negative effect on software responsiveness but that user review, serving as a moderator, could reduce this effect.",2008,0, 1519,The influence of information presentation formats on complex task decision-making performance,"Understanding the influence of information presentation formats on decision-making effectiveness is an important component of human-computer interaction user interface design. The pervasive nature and ease of use associated with information display formats in wideiy used personal productivity software suggests that decision-makers are likely to create and/or use documents with both text-based and more visually oriented information displays. Past research has investigated the role of these displays on simple decision tasks; however, empirical research has not extended to more complex tasks, more comparable to the types of tasks decision-makers face every day. Results from the empirical analysis suggest that the relationship between information presentation format and decision performance is moderated by the complexity of the task. More specifically, spatial formats result in superior decision accuracy for simple- and complex-spatial tasks and faster decision time for all tasks except the complex-symbolic task where graphs and tables result in equivalent decision time.",2006,0, 1520,The Influence of the Level of Abstraction on the Evolvability of Conceptual Models of Information Systems,"Over the years, we have seen an increase in the level of abstraction used in building software. Academic and practitioners' literature contains numerous but vague claims that software based on abstract conceptual models (such as analysis and design patterns, frameworks and software architectures) has evolvability advantages. Our study validates these claims. We investigate evolvability at the analysis level, i.e. at the level of the conceptual models that are built of information systems (e.g. UML-models). More specifically, we focus on the influence of the level of abstraction of the conceptual model on the evolvability of the model. Hypotheses were tested with regard to whether the level of abstraction influences the time needed to apply a change, the correctness of the change and the structure degradation incurred. Two controlled experiments were conducted with 136 subjects. Correctness and structure degradation were rated by human experts. Results indicate that, for some types of change, abstract models are better evolvable than concrete ones. Our results provide insight into how the rather vague claims in literature should be interpreted.",2004,0, 1521,The Influence of Visual Angle on the Performance of Static Images Scanning,"

The present study aimed to explore the influence of visual angle on the performance of static image scanning with a 2 (scanning distance) × 2 (scanning type) × 3 (visual angle) mixed design. The results demonstrated significant effects of the three factors on participants' performance. Stimuli at 5.5° and 8.4°, rather than 2.7°, facilitated the performance of static image scanning. However, the effect of visual angle on mental image scanning was smaller than on retinal image scanning. These findings were interpreted in terms of the theory of working memory and the theory of mental imagery. Finally, the implications of these findings for human-computer interfaces are discussed.

",2007,0, 1522,The Interdisciplinary Study of Interdependencies,"Two subjective methods, scale method and z-score method, were conducted for evaluating the image quality of four glossy commercial ink-jet papers. For pair comparison, the results of scale method and z-score method were similar. For categorical judgment, the image quality difference of ink-jet paper could be calculated by z-score method. Therefore, the z-score method was more suitable for practice.",2008,0, 1523,"The internet and clinical trials: Background, online resources, examples and issues","Both the Internet and clinical trials were significant developments in the latter half of the twentieth century: the Internet revolutionized global communications and the randomized controlled trial provided a means to conduct an unbiased comparison of two or more treatments. Large multicenter trials are often burdened with an extensive development time and considerable expense, as well as significant challenges in obtaining, backing up and analyzing large amounts of data. Alongside the increasing complexities of the modern clinical trial has grown the power of the Internet to improve communications, centralize and secure data as well as to distribute information. As more and more clinical trials are required to coordinate multiple trial processes in real time, centers are turning to the Internet for the tools to manage the components of a clinical trial, either in whole or in part, to produce lower costs and faster results. This paper reviews the historical development of the Internet and the randomized controlled trial, describes the Internet resources available that can be used in a clinical trial, reviews some examples of online trials and describes the advantages and disadvantages of using the Internet to conduct a clinical trial. We also extract the characteristics of the 5 largest clinical trials conducted using the Internet to date, which together enrolled over 26000 patients.",2005,0, 1524,The Knowledge Based Software Process Improvement Program: A Rational Analysis,"Knowledge management is the key area of focus in the present information technology scenario. It forms a basis to derive standards and models and steers organizations through an enjoyable journey, an improved endeavor to reach the destination. Software process improvement program is a crucial venture for organizations functioning under a framework model and aspiring for higher maturity levels. While earlier works for software process improvement have been considering wider range of initiatives, we deem knowledge management to be a contemporary approach for refining software process improvement activities. This paper is committed to a rational analysis into the knowledge-based guidance for implementing a software process improvement program. The work is directed by four research questions that focus on the knowledge based SPI initiative. The role of knowledge components and a knowledge driven model (KDM) are assessed by a measurement model. The impact of KDM on the end-product and its real effect on SPI is measured by quantifying the productivity of the projects, eventually the organization. An implementation of the knowledge driven software process improvement (SPI) program is explained with a suitable case study, an organization working towards attaining CMM level. 
Future issues pertaining to knowledge-based process improvement form the concluding note of this work.",2007,0, 1525,The LOFAR central processing facility architecture.,"Manifa Central Processing Facility (CPF), the fifth largest oil field in the world, is connected by 25 manmade islands and 20 kilometers of causeways. The current production capacity of heavy crude oil at the CPF is 500,000 barrels per day (bpd). The full production capacity of the plant is 900,000 bpd, which the facility will meet in 2014. The power plant generation includes two combustion gas turbine (CGT) generators and two heat recovery steam generators (HRSGs) providing steam to two steam turbine generators (STGs), with a total generation capacity of about 500 MW. Two tie lines at 115 kV connect the Manifa CPF to the external utility system. A power management system controls the Manifa CPF frequency once the Manifa CPF islands from the external grid. Some of the severe external disturbances require Manifa islanding operation in less than 15 cycles to maintain system stability. This very critical CPF requires the ability to quickly identify an islanding condition correctly. This paper discusses the islanding scheme design details for local and remote signals. For significant power exchange with an external grid, local measurement-based islanding can correctly and quickly identify the islanding condition.",2014,0, 1526,The methodological soundness of requirements engineering papers: a conceptual framework and two case studies,"This paper was triggered by concerns about the methodological soundness of many RE papers. We present a conceptual framework that distinguishes design papers from research papers, and show that in this framework, what is called a research paper in RE is often a design paper. We then present and motivate two lists of evaluation criteria, one for research papers and one for design papers. We apply both of these lists to two samples drawn from the set of all submissions to the RE’03 conference. Analysis of these two samples shows that most submissions of the RE’03 conference are design papers, not research papers, and that most design papers present a solution to a problem but neither validate this solution nor investigate the problems that can be solved by this solution. We conclude with a discussion of the soundness of our results and of the possible impact on RE research and practice.",2006,0, 1527,The Moderating Effect of Leader-member Exchange on the Job Insecurity-Organizational Commitment Relationship,"Job insecurity has become an important issue for society and organizations in the last decades due to uncertain economic conditions, global competition, and the advancement of information technology. As job insecurity has detrimental consequences for employees and organizations, it is vital to identify variables that could buffer against the negative effects of job insecurity. In this study, we examined the moderating effect of Leader-member exchange on the relation between job insecurity and organizational commitment. Data collected from 314 employees indicated that the negative relationship between qualitative insecurity and affective commitment was alleviated as Leader-member exchange increased. 
Furthermore, the positive relation between quantitative insecurity and continuance commitment decreased as Leader-member exchange increased.",2007,0, 1528,The Motivation of Software Engineers: Developing a Rigorous and Usable Model,This report presents a summary of the work undertaken in the one year EPSRC ?Modelling Motivation in Software Engineering? (MoMSE) project (2005). The aim of this work is to produce a model of motivation in software engineering. We give an overview of how we developed a model of motivation in Software Engineering (SE). Our model of motivation reflects three viewpoints: 1. motivation in the SE literature; 2. classic theory of motivation to include models tailored specifically to reflect motivation in software engineering; and 3. empirical investigations into the motivation phenomenon.,2007,0, 1529,The Offshoring Stage Model: an epilogue,"Model-driven engineering (MDE) promotes automated model transformations along the entire development process. Guaranteeing the quality of early models is essential for a successful application of MDE techniques and related tool-supported model refinements. Do these models properly reflect the requirements elicited from the owners of the problem domain? Ultimately, this question needs to be asked to the domain experts. The problem is that a gap exists between the respective backgrounds of modeling experts and domain experts. MDE developers cannot show a model to the domain experts and simply ask them whether it is correct with respect to the requirements they had in mind. To facilitate their interaction and make such validation more systematic, we propose a methodology and a tool that derive a set of customizable questionnaires expressed in natural language from each model to be validated. Unexpected answers by domain experts help to identify those portions of the models requiring deeper attention. We illustrate the methodology and the current status of the developed tool MOTHIA, which can handle UML Use Case, Class, and Activity diagrams. We assess MOTHIA effectiveness in reducing the gap between domain and modeling experts, and in detecting modeling faults on the European Project CHOReOS.",2016,0, 1530,The pervasive discourse: an analysis,A Laplace technique is used to analyze the most general class E amplifier: that with finite DC-feed inductance and finite output network Q. The analysis is implemented with a computer program using PC-MATLAB software. A listing of the program is provided,1989,0, 1531,The Pudding of Trust,"Trust - ""reliance on the integrity, ability, or character of a person or thing"" - is pervasive in social systems. We constantly apply it in interactions between people, organizations, animals, and even artifacts. We use it instinctively and implicitly in closed and static systems, or consciously and explicitly in open or dynamic systems. An epitome for the former case is a small village, where everybody knows everybody, and the villagers instinctively use their knowledge or stereotypes to trust or distrust their neighbors. A big city exemplifies the latter case, where people use explicit rules of behavior in diverse trust relationships. We already use trust in computing systems extensively, although usually subconsciously. 
The challenge for exploiting trust in computing lies in extending the use of trust-based solutions, first to artificial entities such as software agents or subsystems, then to human users' subconscious choices.",2004,0, 1532,The Relationship Between Social Presence and Group Identification Within Online Communities and Its Impact on the Success of Online Communities,"

In order to encourage more girls to choose STEM study courses (Science, Technology, Engineering, and Mathematics), we created an online community and e-mentoring program for German high school girls and women engaged in STEM vocational fields. Within the online community, we brought together girls and female role models. A community platform was offered for getting to know and exchanging with other community members. Within this community, we used quantitative methods to measure the students' levels of social presence and group identity, and tested whether a correlation between those two factors exists. We further evaluated whether group identity has an impact on the girls' interest in and willingness to participate in STEM.

",2007,0, 1533,The Role of Content Representations in Hypermedia Learning: Effects of Task and Learner Variables,"

We discuss the role of content representations in hypermedia documents. The phrase “content representation” covers a broad category of knowledge visualization devices ranging from local organizers, e.g., headings, introductions and connectors, to global representations, e.g., topic lists, outlines and concept maps. Text processing research has demonstrated that the principled use of content representations can facilitate the acquisition of knowledge from texts. As regards the role of global content representations in hypermedia learning, the effects vary according to individual and situation variables. We review empirical studies investigating different types of global representations in the context of comprehension and information search tasks. The evidence suggests that networked concept maps are most effective for users with some level of prior knowledge, in nonspecific task contexts.

",2005,0, 1534,The Role of Controlled Experiments Working Group Results,"The paper describes: (1) an Internet collaboration system “UNIVERSAL CANVAS” which supports an efficient awareness by `interactive image control'even on narrow band networks; and (2) the experiments and evaluation in group work distance learning to exchange language and cross-cultural communication between Japanese and French students by using UNIVERSAL CANVAS. It explains how the unique features of this system could facilitate the effective use of shared Web pages in a multiparty environment from the view points of location, numbers of people, method of the lecture, and whether with or without interactive image control function",2000,0, 1535,"The Role of Perceived Control, Attention-Shaping, and Expertise in IT Project Risk Assessment","This study investigates how individuals assess risks in IT development projects under different conditions. We focus on three conditions: the perceived control over the IT project, the use of an attention-shaping tool, and the expertise of the individual conducting the assessment. A role-playing experiment was conducted including 102 practitioners with high expertise in IT projects and 105 university students with low expertise. Our study suggests first, that perceived control is a powerful factor influencing risk perception but not continuation behavior. Second, while the attention shaping tool proved more useful for individuals with low expertise, such tools should be used with caution because they create blind spots in risk awareness for those with less expertise. Third, individuals with more expertise perceived higher levels of risks in IT projects, as compared to those with less expertise. Implications of these findings are discussed, with potential avenues for future research and suggestions for IT project managers.",2006,0, 1536,The role of post-adoption phase trust in B2C e-service loyalty: towards a more comprehensive picture,"AbstractDespite the extensive interest in trust within information systems (IS) and e-commerce disciplines, only few studies examine trust in the post-adoption phase of the customer relationship. Not only gaining new customers by increasing adoption, but also keeping the existing ones loyal, is largely considered important for e-business success. This paper scrutinizes the role of trust in customer loyalty, focusing on B2C e-services by conducting a three-sectional literature review stemming from IS, e-commerce and marketing. The key findings of this study are: 1. Literature discussing the role of trust after the adoption phase is relatively scarce and fragmented 2. In the empirical testing trust is mostly viewed as a monolith 3. Quantitative research methods dominate the field 4. Since trust may play a role during the whole relationship, also dynamic ways to scrutinize trust would be appropriate. Implications of these findings are discussed and ideas for further research suggested.",2007,0, 1537,The Role of Process Measurement in Test-Driven Development,"Test-Driven Development (TDD) is a coding technique in which programmers write unit tests before writing or revising production code. We present a process measurement approach for TDD that relies on the analysis of fine-grained data collected during coding activities. This data is mined to produce abstractions regarding programmers? work patterns. Programmers, instructors, and coaches receive concrete feedback by visualizing these abstractions. 
Process measurement has the potential to accelerate the learning of TDD, enhance its effectiveness, aid in its empirical evaluation, and support project tracking.",2001,0, 1538,The role of replications in empirical software engineering?a word of warning,,2008,0, 1539,The role of trust in OSS communities? Case Linux Kernel community,"With the prosperity of the Open Source Software, various software communities are formed and they attract huge amounts of developers to participate in distributed software development. For such software development paradigm, how to evaluate the skills of the developers comprehensively and automatically is critical. However, most of the existing researches assess the developers based on the Implementation aspects, such as the artifacts they created or edited. They ignore the developers' contributions in Social collaboration aspects, such as answering questions, giving advices, making comments or creating social connections. In this paper, we propose a novel model which evaluate the individuals' skills from both Implementation and Social collaboration aspects. Our model defines four metrics from muti-dimensions, including collaboration index, technical skill, community influence and development contribution. We carry out experiments on a real-world online software community. The results show that our approach can make more comprehensive measurement than the previous work.",2014,0, 1540,The Role of User Involvement in Requirements Quality and Project Success,"User involvement is the key concept in the development of useful and usable systems and has positive effects on system success and user satisfaction. This paper reports the results of interviews and a survey conducted to investigate the role of user involvement in defining user requirements in development projects. The survey involved 18 software practitioners working in software related development projects in 13 companies in Finland. In addition, eight software practitioners working in three companies were interviewed. By combining qualitative and statistical analysis, we examine how users are involved in development projects and how user involvement influences projects. The analysis shows that, although it is rare in development projects, early user involvement is related to better requirements quality. The analysis also shows that involving users and customers as the source of information is related to project success.",2005,0, 1541,The Role of Value Compatibility in Information Technology Adoption,"This paper addresses the problem of specifying an evaluation methodology capable of incorporating the multi-faceted and multi-dimensional nature of the problem as it arises in relation to novel health care technology. A multi-dimensional value criterion model-based approach is proposed, and its applicability discussed in the context of telematic home haemodialysis.",2003,0, 1542,The Seduced Speaker: Modeling of Cognitive Control,"This paper represents the practical use of Fuzzy Cognitive Maps (FCMs) in order to model the control engineering educational critical success factors. FCMs are fuzzy signed digraphs with feedbacks, and they can model the events, values, goals as a collection of concepts by forging a causal link between these concepts. In this study, the concepts of the FCM model is developed by the help of the academics, then the suggested FCMs of each academic is aggregated to build the final FCM to model the control engineering educational critical success factors. 
Afterwards the model is coded in Matlab to study four scenarios via different simulations. The results of the simulations show the effectiveness of FCMs to understand the success factors of educational organizations and programs.",2013,0, 1543,The Significance of Textures for Affective Interfaces,"Laser-induced discrete bump texture results in a simplified HDI model represented by five critical parameters: the bump height Hb , bump curvature radius Rb, bump spacing Lb, the slider crown height Hw, and the lube thickness Hl . The HDI tribology performance relies upon how to select these parameters in bump texture design as well as how to measure and control them. In this paper, analysis on fundamental bump-design issues is presented. It is found that proper discrete texture design should assure bumps are in elastic-contact regime but close to the elastic-contact limit for minimum wear failure and stiction. Minimum deformation and shear stress at bump contacts are also desired for wear durability. Longitudinal bump pattern provides best glide performance among patterns with same Hb. Stiction is low within the toe-dipping lube regime where λ=Hl/Hb is below a critical value, otherwise stiction depends on all five parameters. Results from laser-texture experiments are also discussed for understanding of the discrete bump texture design",1996,0, 1544,The Structural Complexity of Software: An Experimental Test,"This research examines the structural complexity of software and, specifically, the potential interaction of the two dominant dimensions of structural complexity, coupling and cohesion. Analysis based on an information processing view of developer cognition results in a theoretically driven model with cohesion as a moderator for a main effect of coupling on effort. An empirical test of the model was devised in a software maintenance context utilizing both procedural and object-oriented tasks, with professional software engineers as participants. The results support the model in that there was a significant interaction effect between coupling and cohesion on effort, even though there was no main effect for either coupling or cohesion. The implication of this result is that, when designing, implementing, and maintaining software to control complexity, both coupling and cohesion should be considered jointly, instead of independently. By providing guidance on structuring software for software professionals and researchers, these results enable software to continue as the solution of choice for a wider range of richer, more complex problems.",2005,0, 1545,The Student TechCorps: providing experiential learning opportunities to students in Computing and Information Science,"Approximately three years ago, faculty in Lock Haven University's Business Administration, Computer Science and Information Technology Department established the Student TechCorps, a group of majors in Computing and Information Science who serve as technology consultants and tutors to faculty and staff. In this paper, we describe the process of setting up the group and the results of work done by the group to date.",2005,0, 1546,The Tao of Modeling Spaces,"In order to develop highly secure database systems to meet the requirements for class B2, an extended formal security policy model based on the BLP model is presented in this paper. A method for verifying security model for database systems is proposed. 
According to this method, the development of a formal specification and verification to ensure the security of the extended model is introduced. During the process of the verification, a number of mistakes have been identified and corrections have been made. Both the specification and verification are developed in Coq proof assistant. Our formal security model was improved and has been verified secure. This work demonstrates that our verification method is effective and sufficient and illustrates the necessity for formal verification of the extended model by using tools.",2008,0, 1547,The Test Community of Practice Experience in Brazil,"This paper relates the experience acquired by the Brazil Global Development Center (GDC) Test Team through the implementation of a global community of practice to solve problems inherent to globally distributed projects. This community consists in a group of professionals - Dell employees and alliances all over the world - which collaborates to share knowledge and experience about the test discipline and lessons learned in projects. A description of the implementation methods will be presented, as well as the results achieved by the organization so far",2006,0, 1548,The ToscanaJ Suite for Implementing Conceptual Information Systems,"Information exists in the form of multiple media objects, i.e. in the form of text, audio, image, and video objects. Textual, acoustic, and visual modalities of information associated with the multiple media objects. Existing search broadly categorized as media specific and multiple media based approaches. Mostly the multiple media search approaches not focused towards aggregated search, blended integration of the search results, and user navigation/browsing of the search space via multiple modalities of information. In this research, we will propose a novel framework to implement Next Generation of multiple media search systems. The framework supports the implementation of search systems that provides aggregated search, blended integration of the search results, and navigation within the database of media objects by exploiting multiple modalities of information associated with media objects. A Multiple Media Information Search System implemented and validated via proposed framework.",2015,0, 1549,The unseen and unacceptable face of digital libraries,"We describe a shape comparison method applicable to fast screening of large facial databases. The proposed technique derives holistic similarity measures without the explicit need of point-to-point correspondence thus delivering speed and tolerance to local non-rigid distortions. Specifically, we developed a face similarity measure derived as a variant of the Hausdorff distance by introducing the notion of a neighborhood function and associated penalties. Binary edge representation is used to provide robustness to changes in illumination. Experimental results on a large facial data set demonstrate that our approach produces excellent search results even when less than 1% of the original grey-scale face image information is stored in the face database",1998,0, 1550,The Use of Dynamic Display to Improve Reading Comprehension for the Small Screen of a Wrist Watch,"

This study explored the feasibility of displaying dynamic Chinese text on the screen of a wrist watch. Three design factors (i.e., dynamic display, presentation method, and speed) were examined to investigate their effects on users' reading comprehension under two different task-type conditions. The results of this study indicated the following: (1) There was no significant difference between leading display and RSVP in either the single- or dual-task condition; (2) Presentation method was a significant factor: participants' reading comprehension was significantly better with the word-by-word format than with the character-by-character one in both single- and dual-task conditions; (3) Participants' reading comprehension did not differ significantly among the three speed settings in the single-task condition. However, participants had significantly higher reading comprehension scores under the slower speed settings of 150 and 250 cpm than under the faster speed setting of 350 cpm in the dual-task condition.

",2007,0, 1551,The use of eBooks and interactive multimedia as alternative forms of technical documentation,"The use of eBooks and interactive multimedia in technical documentation is an emerging and important trend for delivering abstract and complex technical information that is enticing, engaging, and -most important of all- effective. With the substantial (and growing) number of documents available electronically, it is a non-trivial task for technical writers to even reach their target audience, let alone engage them. Both eBooks and interactive multimedia feature unique characteristics that serve two important functions: piquing the interest in the user, and aiding in the transmittal of complex technical information. Further, the use of eBooks and interactive multimedia in technical documentation helps to differentiate from the myriad other technical documents. At the IBM Toronto Software Laboratory, the Media Design Studio (MDS) works collaboratively with the information development community to produce graphics and diagrams for technical documentation. This paper explores alternative forms of IBM technical documentation in the form of two case studies-one an eBook and the other a Macromedia Flash-based interactive multimedia presentation. Both projects were co-developed by the writers and graphic designers, with a mandate to create a rich, graphical approach to entice and engage users to read and understand complex technical concepts.",2005,0, 1552,The use of group support systems in focus groups: Information technology meets qualitative research,"This paper explores focus groups supported by group support systems (GSS) with anonymous interaction capability in two configurations: same time/same place and same time/different place. After reviewing the literature, we compare and contrast these anonymity-featured GSS-supported focus groups with traditional focus groups and discuss their benefits and limitations. We suggest directions for future research concerning GSS-supported focus groups with respect to technological implications (typing skills and connection speeds), national culture (high and low context; power distance), and lying behavior (adaptation of model of Hancock, J. T., Thom-Santelli, J., & Ritchie, T. (2004). Deception and design: The impact of communication technology on lying behavior. Proceedings of the 2004 conference on human factors in computing systems (pp. 129-134), whereby lying is a function of three design factors: synchronicity, recordability, and distributedness).",2007,0, 1553,The Use of Mobile Phones to Support Children?s Literacy Learning,"AbstractThe goal of this study was to develop a mobile-phone based intervention that would encourage parents to engage their children in daily literacy-learning activities. The intervention content included text messages for parents, audio messages for parents and children, and Sesame Street letter videos for children. Messaging to parents suggested real-world activities that they could use to engage their children in learning letters. Pre- and post-interviews indicated a significant increase in the frequency with which parents reported engaging their children in literacy activities after participating in this study. In addition, 75% of lower-income participants and 50% of middle-income participants reported that they believed watching the Sesame Street letter videos helped their children learn letters. 
More than 75% of participants reported believing that a mobile phone used in this way can be an effective learning tool, since mobile-phone delivery made it extremely easy to incorporate literacy activities into their daily routines.",2007,0, 1554,The Use of Virtual Reality to Train Older Adults in Processing of Spatial Information,"

The present study examined the effect of virtual reality (VR) on training older adults in spatial-based performance. Navigating emergency escape routes in a local hospital served as the task domain. Fifteen older adults and 15 college students participated in an experiment in which VR, VR plus a bird-view map, and two-dimensional (2D) map presentations were manipulated as within-subject treatment levels of training media. The results indicated that the older participants were at a disadvantage in identifying the correct turns leading to the emergency exits. While the older participants were also found to have more difficulty in recalling route landmarks, the 2D and VR-plus-map presentations produced significantly stronger spatial memory than the pure VR medium for both age groups. When mental rotation was evaluated, the older participants were able to achieve comparable performance if emergency routes were trained with the VR and the VR-plus-map presentations. Detailed implications were discussed for the design of training media with age considerations.

",2007,0, 1555,The Value of Empirical Evidence for Practitioners and Researchers,"This paper studies earnings restatements' value relevence of Chinese listed companies with the price model and returns model, using earnings restatements announcement listed companies from 1999 to 2009 as the samples. The results show that earnings restatements aren't value relevent in Chinese capital market. Market's reliance on earnings restatements information is minimal, and restatements can't influence investors' value judgments. But there are negative market reactions to noncore restatements. Therefore, we should give more attention to earnings restatements, and strengthen regulation and supervision of information disclosure of listed companies.",2011,0, 1556,The W-Process for Software Product Evaluation: A Method for Goal-Oriented Implementation of the ISO 14598 Standard,"

The importance of software product evaluations will grow with the awareness of the need for better software quality. The process used to conduct such evaluations is crucial to obtaining evaluation results that can be applied and that meet customers' expectations. This paper reviews a well-known evaluation process: the ISO 14598 standard. The review focuses on the difficulties in selecting and evaluating the appropriate evaluation techniques. The review shows that the standard is difficult to apply in practice because it pays insufficient attention to goal definition and leaves the relationships between activities implicit. Also, the standard ignores the trade-off between goals and resources and pays insufficient attention to feedback. To address these deficiencies, the W-process is proposed. It extends the standard through an improved process structure and additional guidelines.

",2004,0, 1557,"Theories, tools and research methods in program comprehension: past, present and future.","Program comprehension research can be characterized by both the theories that provide rich explanations about how programmers comprehend software, as well as the tools that are used to assist in comprehension tasks. During this talk the author review some of the key cognitive theories of program comprehension that have emerged. Using these theories as a canvas, the author then explores how tools that are popular today have evolved to support program comprehension. Specifically, the author discusses how the theories and tools are related and reflect on the research methods that were used to construct the theories and evaluate the tools. The reviewed theories and tools will be further differentiated according to human characteristics, program characteristics, and the context for the various comprehension tasks. Finally, the author predicts how these characteristics will change in the future and speculate on how a number of important research directions could lead to improvements in program comprehension tools and methods.",2005,0, 1558,Theorizing about the Design of Information Infrastructures: Design Kernel Theories and Principles,"In this article we theorize about the design of information infrastructures (II). We define an information infrastructure as a shared, evolving, heterogeneous installed base of IT capabilities based on open and standardized interfaces. Such information infrastructures, when appropriated by a community of users offer a shared resource for delivering and using information services in a (set of) community. Information infrastructures include complex socio-technical ensembles like the Internet or EDI networks. Increased integration of enterprise systems like ERP or CRM systems has produced similar features for intra-organizational systems. Our theorizing addresses the following challenge in designing information infrastructures: how to tackle their inherent complexity, scale and functional uncertainty? These systems are large, complex and heterogeneous. They never die and evolve over long periods of time while they adapt to needs unknown during design time. New infrastructures are designed as extensions to or improvements of existing ones in contrast to green field design. The installed base of the existing infrastructure and its scope and complexity influence how the new infrastructure can be designed. Infrastructure design needs to focus on installed base growth and infrastructure flexibility as to avoid technological traps (lock-ins). These goals are achieved by enacting design principles of immediate usefulness, simplicity, utilization of existing installed base and modularization as shown by our analysis of the design of Internet and the information infrastructure for health care in Norway",2004,0, 1559,Thermal Management in Embedded Systems,"Multiprocessor System-on-Chip (MPSoC) is emerging as an integral component in embedded systems, such as mobile phones, PDAs, handheld medical devices, etc. Due to its immense processing power and incumbent hardware resources the users are able to execute multiple concurrent applications. The more applications the MPSoC executes the more heat it dissipates. Such eminent heat dissipation damages the device and induces faults, causing a shorter life span. Thus, thermal aware designs are of utmost important for durable and dependable embedded devices. 
The MPSoC could potentially control and manage the heat dissipation using techniques such as computation migration. However, this would be based on the operational behavior of the hardware, but not on the behavior of the applications mapped and executed on the hardware. We propose an interactive temperature management where the users are allowed to provide inputs to effectively manage the heat dissipation and at the same time are able to operate the applications according to their will. We propose and demonstrate a formal design methodology to assist the people involved during the design process.",2011,0, 1560,"Three approaches as pillars for interpretive information systems research: development research, action research and grounded theory","This paper addresses practical approaches and models, based on the paradigm of the 'interpretivist school', to operationalise research in information systems. The study overviews research paradigms and some current issues in IS research, then describes, discusses and illustrates three approaches, namely, development research, action research, and grounded theory, advocating them as proposed pillars for interpretive IS research. With the present emphasis on user-centricity and empowerment of previously technologically-disenfranchised domains, inquiry processes emanating from the social sciences and humanities are relevant to IS, particularly with relation to interactive systems to bridge the digital divide and for the design and development of emerging technology. Each of the approaches suggested has an underlying theoretical framework and reflective methods, and can serve as a model to guide the research process, offering a unifying thread, cohesion and internal consistency to a research study.",2005,0, 1561,Three Paradigms of Computer Science,"The computer industry is playing an increasingly important role in India's economy. However, for a number of reasons, many researchers are not doing high-quality work. Computer science research in India takes place at academic, government-sponsored, and industry-sponsored institutions. These major institutions can conduct effective research because they have sufficient funding, high-quality programs, good equipment, and an effective infrastructure. This is not the case at India's many other institutions. Many local observers say that Indian computer scientists make advances in existing areas of research but rarely do cutting-edge work. This occurs, in part, because many Indian computer scientists receive little direction and have few co-workers in their fields, which means they work in relative isolation",1997,0, 1562,Time Synchronization in Heterogeneous Sensor Networks,"Secure time synchronization is one of the key concerns for some sophisticated sensor network applications. Most existing time synchronization protocols are affected by almost all attacks. In this paper we consider heterogeneous sensor networks (HSNs) as a model for our proposed novel time synchronization protocol based on pairing and identity based cryptography (IBC). This is the first approach for time synchronization protocol using pairing-based cryptography in heterogeneous sensor networks. The proposed scheme reduces the key spaces of nodes as well as it prevents from all major security attacks. 
Security analysis indicated that the proposed scheme is robust against reply attacks, masquerade attacks, delay attacks, and message manipulation attacks.",2008,0, 1563,Time-bounded adaptation for automotive system software,"Software is increasingly deployed in vehicles as demand for new functionality increases and cheaper and more powerful hardware becomes available. Likewise, emerging wireless communication protocols allow the integration of new software into vehicles, thereby enabling time-bounded adaptive response to changes that occur in mobile environments. Examples of time-bounded adaptation include adaptive cruise control and the dynamic integration of location-aware services within fixed time bounds. This paper provides three contributions to the study of time-bounded adaptation for automotive system software. First, we categorise automotive systems with respect to requirements for dynamic software adaptation. Second, we define a taxonomy that captures various dimensions of dynamic adaptation in emerging automotive system software. Third, we use this taxonomy to analyse existing research projects in the automotive domain. Our analysis shows that although time-bounded synchronisation of applications and data is a key requirement for next-generation automotive systems, it is not adequately covered by existing work.",2008,0, 1564,Tool assisted identifier naming for improved software readability: an empirical study,"This paper describes an empirical study investigating whether programmers improve the readability of their source code if they have support from a source code editor that offers dynamic feedback on their identifier naming practices. An experiment, employing both students and professional software engineers, and requiring the maintenance and production of software, demonstrated a statistically significant improvement in source code readability over that of the control.",2005,0, 1565,Tool support for detecting defects in object-oriented models,"Object-oriented models are commonly used in software projects. They may be affected, however, by various defects introduced easily due to e.g. wrong understanding of modelled reality, making wrong assumptions or editorial mistakes. The defects should be identified and corrected as early as possible, preferably before the model is used as the basis for the subsequent representations of the system. To assure the effectiveness of the defect detection process we need both, better analysis methods and effective tool support. The paper introduces a new analytical method called UML-HAZOP and presents a tool supporting the application of this method.",2005,0, 1566,Toward a Unified Model for Requirements Engineering,"One of the problem areas in requirements engineering has been the integration of functional and nonfunctional requirements and use cases. Current practice is to partition functional and nonfunctional requirements such that they are often defined by different teams. Functional requirements are defined by writing text-based use cases or, less frequently, creating a business model, then walking through the use cases, and extracting (often in a haphazard fashion) detailed requirements. The problem is exacerbated when analysts are at different locations. Siemens experience with outsourcing and offshoring has demonstrated that the use of graphical languages significantly reduces cultural and communication problems when teams (e.g. analysis and design) are at different locations. 
Tracing between functional and nonfunctional requirements, use cases, hazards and features is complicated by the use of different media. Introducing new symbols and new relationships allows the creation of a unified model with intrinsic tracing. This, in turn, improves clarity and facilitates communication when teams are at different locations and/or are lacking in domain expertise.",2006,0, 1567,Toward an engineering discipline for grammarware,"Systems engineering is developing rapidly, while new standards are created and new tools are being developed. However, the theoretical understanding and the conceptual foundation of systems engineering are still in their early stages. For example, although real-world systems exhibit considerable differences, there is very little distinction in the literature between the system type and the description of its actual system engineering pursuit. We suggest here a new approach to systems engineering. It is based on the premise that the actual process of systems engineering must be adaptive to the real system type. Using this concept, we present a two-dimensional (2-D) taxonomy in which systems are classified according to four levels of technological uncertainty, and three levels of system scope. We then describe the differences found in systems engineering styles in various areas, such as system requirements, functional allocation, systems design, project organization, and management style. We also claim that adapting the wrong system and management style may cause major difficulties during the process of system creation. Two examples will be analyzed to illustrate this point: the famous Space Shuttle case and one of the system development projects we studied.",1997,0, 1568,Toward Formalizing Domain Modeling Semantics in Language Syntax,A recent paper on domain modeling had State Charts with semantic errors.,2005,0, 1569,Toward Generic Title Generation for Clustered Documents,

A cluster labeling algorithm for creating generic titles based on external resources such as WordNet is proposed. Our method first extracts category-specific terms as cluster descriptors. These descriptors are then mapped to generic terms based on a hypernym search algorithm. The proposed method has been evaluated on a patent document collection and a subset of the Reuters-21578 collection. Experimental results revealed that our method performs as anticipated. Real-case applications of these generic terms show promise in assisting humans in interpreting the clustered topics. Our method is general enough that it can be easily extended to use other hierarchical resources for adaptable label generation.

,2006,0, 1570,Toward standards for reporting research: A review of the literature on computer-supported collaborative learning,"We conducted a meta-review of the computer supported collaborative learning (CSCL) literature. This literature included a rich array of methodologies, theoretical and operational definitions, and collaborative models. However, the literature lacked an overall framework for reporting important design and research details. This paper highlights key findings from our systematic review. The paper: (a) presents the array of definitions, tools, and supports researched in the CSCL literature and (b) proposes standards for reporting collaborative models, tools, and research. These standards, which have implications for both the CSCL and computer-supported collaborative work areas, have potential to build a shared language upon which cross-disciplinary communication and collaboration may be based",2006,0, 1571,Towards a basic theory to model model driven engineering,"During the last years the increasing installation of distributed energy resources, electric vehicles bus also energy storage systems has lead to new challenges in terms of power system planning and operation. New intelligent approaches turning power distribution grids into Smart Grids are necessary. For their realization standardized automation, control and communication systems are essential. Moreover, an integrated engineering approach for the development of Smart Grid applications using these standards is equally required. Model-driven engineering methods well known from computer science together with implementation methods from automation domain have the potential to provide the basis for such a required integrated engineering concept. The main aim of this paper therefore is to facilitate the integration of legacy Smart Grid applications into a model-driven engineering approach. It introduces transformation rules for the conversion of applications implemented with textual programming languages into the IEC 61499 reference model. A brief overview of existing model-driven approaches applied in industrial environments is provided and their applicability for Smart Grid application development is discussed. The special requirements of the Smart Grid domain is taken into account by the proposed transformation approach and provided examples.",2015,0, 1572,Towards a Computerized Infrastructure for Managing Experimental Software Engineering Knowledge,"The growing interest in experimental studies in software engineering and the difficulties found in their execution had led software engineering researchers to look for ways to (semi) automate the experimental process. This paper introduces the concept of experimental Software Engineering Environment (eSEE) ? an infrastructure capable of instantiating software engineering environments to manage knowledge about the definition, planning, execution and packaging of experimental studies in software engineering.",2004,0, 1573,Towards a Dynamic Ontology Based Software Project Management Antipattern Intelligent System,"The Software Project Management Antipattern Intelligent System (PROMAISE) is proposed as a Web-enabled knowledge-base framework that uses antipattern OWL ontologies in order to provide intelligent and up to date advice to software project managers regarding the selection of appropriate antipatterns in a software project. Antipatterns provide information on commonly occurring solutions to problems that generate negative consequences. 
These mechanisms are documented using informal paper based structures that do not readily support knowledge sharing and reuse. Antipattern OWL ontologies can be used to build a dynamic antipattern knowledge base, which can update itself automatically. This will allow the accessibility and transferability of up-to-date computer-mediated software project management knowledge to software project managers by encoding antipatterns into computer understandable ontologies. PROMAISE can function with this knowledge base in order to assist software project managers in the process of selecting applicable antipatterns.",2007,0, 1574,Towards a Flow Analysis for Embedded System C Programs,"Reliable program worst-case execution time (WCET) estimates are a key component when designing and verifying real-time systems. One way to derive such estimates is by static WCET analysis methods, relying on mathematical models of the software and hardware involved. This paper describes an approach to static flow analysis for deriving information on the possible execution paths of C programs. This includes upper bounds for loops, execution dependencies between different code parts and safe determination of possible pointer values. The method builds upon abstract interpretation, a classical program analysis technique, which is adopted to calculate flow information and to handle the specific properties of the C programming language.",2005,0, 1575,Towards a Framework for Real Time Requirements Elicitation,"Eliciting complete and correct requirements is a major challenge in software engineering and incorrect requirements are a constant source of defects. It often happens that requirements are either recorded only partially or not at all. Also, commonly, the rationale behind the requirements is not recorded or may be recorded, but is not accessible for the developers who need this information to support the decision making process when requirements change or need clarification. Our proposed framework is designed to solve those problems by using video to record the requirements elicitation meetings and automatically extract important stakeholder statements. Those statements are made available to the project members as video clips by using an RE database to access the statements and/or by the integration with the Sysiphus system.",2006,0, 1576,Towards a Global Research Infrastructure for Multidisciplinary Study of Free/Open Source Software Development,"AbstractThe Free/Open Source Software (F/OSS) research community is growing across and within multiple disciplines. This community faces a new and unusual situation. The traditional difficulties of gathering enough empirical data have been replaced by issues of dealing with enormous amounts of freely available public data from many disparate sources (online discussion forums, source code directories, bug reports, OSS Web portals, etc.). Consequently, these data are being discovered, gathered, analyzed, and used to support multidisciplinary research. However at present, no means exist for assembling these data under common access points and frameworks for comparative, longitudinal, and collaborative research across disciplines. Gathering and maintaining large F/OSS data collections reliably and making them usable present several research challenges. For example, current projects usually rely on direct access to, and mining of raw data from groups that generate it, and both of these methods require unique effort for each new corpus, or even for updating existing corpora. 
In this paper, we identify several needs and critical factors in F/OSS empirical research across disciplines, and suggest recommendations for design of a global research infrastructure for multi-disciplinary research into F/OSS development.",2008,0, 1577,Towards a Hierarchical Framework for Predicting the Best Answer in a Question Answering System,"

This research aims to develop a model for identifying predictive variables for the selection of the best quality answer in a question-answering (QA) system. It was found that accuracy, completeness and relevance are strong predictors of the quality of the answer.

",2007,0, 1578,Towards a hypertext reading/comprehension model,"With the rapid development of underground metropolitan transportation, the problem of safety risks, especially the fire risks, have always been accorded great attention by researchers and practitioners. In order to reduce the uncertainty of fire risks in underground metropolitan transportation, the main objective of this paper is to devise a proactive approach to dynamically predict and control conditions leading to fire hazards. A literature review of fire risks in underground metropolitan transportation is conducted firstly. Then, a predictive model that is based on the continuous tracking of fire hazards is applied to predict the fire risks of underground metropolitan transportation. Using the model, the certain system of underground metropolitan transportation can be identified as ""under control"" or ""out of control"" based on the methods of sampling and control charts. This research would provide us with an effective and valid method to lessen the uncertainty of fire hazards in underground metropolitan transportation as much as possible.",2008,0, 1579,Towards a megamodel to model software evolution through transformations,"Model Driven Engineering is a promizing approach that could lead to the emergence of a new paradigm for software evolution, namely Model Driven Software Evolution. Models, Metamodels and Transformations are the cornerstones of this approach. Combining these concepts leads to very complex structures which revealed to be very difficult to understand especially when different technological spaces are considered such as XMLWare (the technology based on XML), Grammarware and BNF, Modelware and UML, Dataware and SQL, etc. The concepts of model, metamodel and transformation are usually ill-defined in industrial standards like the MDA or XML. This paper provides a conceptual framework, called a megamodel, that aims at modelling large-scale software evolution processes. Such processes are modeled as graphs of systems linked with well-defined set of relations such as RepresentationOf (μ), ConformsTo (χ) and IsTransformedIn (τ).",2005,0, 1580,Towards a Reference Process Model for Event Management,"This paper introduces an object oriented approach to business process modeling. The approach integrates decisions on organisational design with information systems development. Based on the understanding of business processes as a customer-supplier relationship a general process model is introduced which summarizes fundamental characteristics of business processes. These characteristics are the ground for an object oriented approach which extends the OO modeling-technique OMT with features for business process modeling. An example illustrates the principal approach, the main modeling steps and the methodical support. It is based on an outside (macro) and an inside (micro) view on business processes",1996,0, 1581,Towards a Requirements-Driven Workbench for Supporting Software Certification and Accreditation,"Security certification activities for software systems rely heavily on requirements mandated by regulatory documents and their compliance evidences to support accreditation decisions. Therefore, the design of a workbench to support these activities should be grounded in a thorough understanding of the characteristics of certification requirements and their relationships with certification activities. 
To this end, we utilize our findings from the case study of a certification process of The United States Department of Defense (DoD) to identify the design objectives of a requirements-driven workbench for supporting certification analysts. The primary contributions of this paper are: identifying key areas of automation and tool support for requirements-driven certification activities; an ontology-driven dynamic and flexible workbench architecture to address process variability; and a prototype implementation.",2007,0, 1582,Towards a Theory of Intrusion Detection,"Cooperative frameworks for intrusion detection and response exemplify a key area of today's computer research: automating defenses against malicious attacks that increasingly are taking place at grander speeds and scales to enhance the survivability of distributed systems and maintain mission critical functionality. At the individual host-level, intrusion response often includes security policy reconfiguration to reduce the risk of further penetrations. However, runtime policy changes may cause traditional software components, designed without (dynamic) security in mind, to fail in varying degrees, including termination of critical processes. This paper presents security agility, a strategy to provide software components with the security awareness and adaptability to address runtime security policy changes, describes how security agility is packaged in a prototype toolkit and illustrates how the toolkit can be integrated with intrusion detection and response frameworks to help automate flexible host-based response to intrusions",2000,0, 1583,Towards a Unified Catalogue of Non-Technical Quality Attributes to Support COTS-Based Systems Lifecycle Activities,"Several activities of the COTS-based systems lifecycle are supported not only by the analysis of their technical quality but also (and sometimes mostly) by considering how they fulfill some non-technical quality features considered relevant (licensing, reputation, costs and similar issues). Whilst many catalogues of technical quality features exist, it is not the case for non-technical ones, which are often managed in an ad-hoc form. In a recent work, we proposed a catalogue of non-technical quality features, designed to integrate smoothly into the ISO/IEC 9126-1 standard. In this paper, we detail the process used for the composition of the catalogue, which embraces the inclusion of several non-technical quality features already identified in the literature as well as others which have emerged form our own experience in industrial COTS components selection processes. We also outline some potential applications of the resulting catalogue, intended to support several activities of the COTS-based systems lifecycle. Finally, we describe a COTS selection process carried out in a telecommunications company",2007,0, 1584,Towards a web of patterns,"The benefits and importance of electronic medical record (EMR) systems have been well recognized in the health-care industry. Yet, their wide adoption still face significant barriers in providing on-demand secure medical information access while preserving patients' privacy. Understanding the usage pattern of an EMR system is the first essential step towards building such environment. This paper conducts an in-depth trace analysis of a large-scale EMR system that has been in operation for more than a decade at the Vanderbilt Medical Center. 
Our study demonstrates several important characteris- tics of EMR system usage from the perspective of user-initiated sessions. First, the workload of the EMR system is highly stable and consistent with a weekly pattern. Second, EMR behavior varies between users, but each user's behavior tends to be consistent with a slow rate of migration across sessions. Finally, the degree of access between users and medical records is sparse, echoing the limits of patient-caregiver relationships that manifest in real healthcare operations. We believe these observations can assist in the development of system security measures, such as EMR-specific anomaly detection systems, and facilitate system performance optimization.",2011,0, 1585,Towards an Approach for Managing the Development Portfolio in Small Product-Oriented Software Companies,"Managing product development activities as an explicit portfolio is crucial to the long-term success of product-oriented software companies. Portfolio management has been studied in the field of new product development for over two decades, but existing approaches transfer poorly to small software companies due to contextual differences. Based on new product development and software engineering literature and three company cases, this paper presents an approach for implementing portfolio management in small, product-oriented software companies, along with initial experiences. The approach integrates portfolio management basics such as strategic alignment, portfolio balancing and go/kill/hold decision-making with modern, time-paced software development processes for the small company context. Our findings suggest that using the proposed approach increases awareness of what projects and other development activities are underway, and how these are resourced. It also helps in making informed decisions and trade-offs when necessary.",2005,0, 1586,Towards an Automated Analysis of Biomedical Abstracts,"Recent advances in bio-molecular imaging have afforded biologists a more thorough understanding of cellular functions in complex tissue structures. For example, high resolution fluorescence images of the retina reveal details about tissue restructuring during detachment experiments. Time sequence imagery of microtubules provides insight into subcellular dynamics in response to cancer treatment drugs. However, technological progress is accompanied by a rapid proliferation of image data. Traditional analysis methods, namely manual measurements and qualitative assessments, become time consuming and are often nonreproducible. Computer vision tools can efficiently analyze these vast amounts of data with promising results. This paper provides an overview of several challenges faced in bioimage processing and our recent progress in addressing these issues",2006,0, 1587,Towards Automated Evaluation of Trust Constraints,"Despite the increased use of robotic catheter navigation systems, and the growing interest in surgical skill evaluation in the field of endovascular intervention, there is a lack of objective and quantitative metrics for performance evaluation. So far very little research has studied operator behavioral patterns using catheter kinematics, operator forces and motions, and catheter-tissue interactions. 
This paper proposes a framework for automated and objective assessment of performance by measuring catheter-tissue contact forces and operator motion patterns across different skill levels, and using language models to learn the underlying force and motion patterns that are characteristic of skill. Discrete HMMs are utilized to model operator behavior for varying skill levels performing different catheterization tasks, resulting in cross-validation classification accuracies of 94% (expert) and 98% (novice) using the force-based skill models, as well as 83% (expert) and 94% (novice) using the motion-based models. The results motivate the design of improved metrics for endovascular skill assessment with further applications towards performance evaluation of robot-assisted endovascular catheterization.",2015,0, 1588,Towards Awareness in the Large,"Management of shared artifacts is critical to ensure the correct integration and behavior of code created by multiple teams working in concert. Awareness of inter-team development activities and their effects on shared artifacts provides developers the opportunity to detect potential integration problems earlier and take proactive steps to avoid these conflicts. However, current awareness tools do not provide such kinds of awareness making them unsuitable for global software development. In this paper, we discuss their drawbacks, present three strategies to make them suitable for global settings, and illustrate these strategies through a new view for Palantir that better addresses awareness in the large",2006,0, 1589,Towards building a solid empirical body of knowledge in testing techniques,"Testing technique-related empirical studies have been performed for 25 years. We have managed to accumulate a fair number of experiments in this time, which might lead us to think that we now could have a sizeable empirically backed body of knowledge (BoK) on testing techniques. However, the experiments in this field have some flaws, and, consequently, the empirical BoK we have on testing techniques is far from solid. In this paper, we use the results of a survey that we did on empirical testing techniques studies to identify and discuss solutions that could lead to the formation of a solid empirical BoK. The solutions are related to two fundamental experimental issues: (1) the rigorousness of the experimental design and analysis, and (2) the need for a series of community-wide agreements to coordinate empirical research and assure that studies ratify and complement each other.",2004,0, 1590,Towards Evidence in Software Engineering,"The aggregation of studies is of growing interest for the empirical software engineering community, since the numbers of studies steadily grow. We discuss challenges with the aggregation of studies into a common body of knowledge, based on a quantitative and qualitative evaluation of experience from the Experimental Software Engineering Network, ESERNET. Challenges are that the number of studies available is usually low, and the studies that exist are often too scattered and diverse to allow systematic aggregation as a means for generating evidence. ESERNET therefore attempted to coordinate studies and thus create research synergies to achieve a sufficiently large number of comparable studies to allow for aggregation; however, the coordination approach of ESERNET proved to be insufficient. 
Based on some lessons learned from ESERNET, a four-step procedure for evolving Empirical Software Engineering towards the generation of evidence is proposed. This consists of (1) developing a methodology for aggregating different kinds of empirical results, (2) establishing guidelines for performing, analyzing, and reporting studies as well as for aggregating the results for every kind of empirical study, (3) extract evidence, that is, apply the methodology to different areas of software engineering, and (4) package the extracted evidence into guidelines for practice.",2004,0, 1591,Towards Generalised Accessibility of Computer Games,"Seabed resource exploitation and conservation efforts are extending to offshore areas where the distribution of benthic epifauna (animals living on the seafloor) is unknown. There is a need to survey these areas to determine how biodiversity is distributed spatially and to evaluate and monitor ecosystem states. Seafloor imagery, collected by underwater vehicles, offer a means for large-scale characterization of benthic communities. A single submersible dive can image thousands of square metres of seabed using video and digital still cameras. As manual, human-based analysis lacks large-scale feasibility, there is a need to develop efficient and rapid techniques for automatically extracting biological information from this raw imagery. To meet this need, underwater computer vision algorithms are being developed for the automatic recognition and quantification of benthic organisms. Focusing on intelligent analysis of distinct local image features, the work has the potential to overcome the unique challenges associated with visually interpreting benthic communities. The current incarnation of the system is a significant step towards generalized benthic species mapping, and its feature-based nature offers several advantages over existing technology.",2010,0, 1592,"Towards Real Time Epidemiology: Data Assimilation, Modeling and Anomaly Detection of Health Surveillance Data Streams","
An integrated quantitative approach to data assimilation, prediction and anomaly detection over real-time public health surveillance data streams is introduced. The importance of creating dynamical probabilistic models of disease dynamics capable of predicting future new cases from past and present disease incidence data is emphasized. Methods for real-time data assimilation, which rely on probabilistic formulations and on Bayes' theorem to translate between probability densities for new cases and for model parameters are developed. This formulation creates future outlook with quantified uncertainty, and leads to natural anomaly detection schemes that quantify and detect disease evolution or population structure changes. Finally, the implementation of these methods and accompanying intervention tools in real time public health situations is realized through their embedding in state of the art information technology and interactive visualization environments.
",2007,0, 1593,Towards Understanding the Rhetoric of Small Source Code Changes,"Understanding the impact of software changes has been a challenge since software systems were first developed. With the increasing size and complexity of systems, this problem has become more difficult. There are many ways to identify the impact of changes on the system from the plethora of software artifacts produced during development, maintenance, and evolution. We present the analysis of the software development process using change and defect history data. Specifically, we address the problem of small changes by focusing on the properties of the changes rather than the properties of the code itself. Our study reveals that 1) there is less than 4 percent probability that a one-line change introduces a fault in the code, 2) nearly 10 percent of all changes made during the maintenance of the software under consideration were one-line changes, 3) nearly 50 percent of the changes were small changes, 4) nearly 40 percent of changes to fix faults resulted in further faults, 5) the phenomena of change differs for additions, deletions, and modifications as well as for the number of lines affected, and 6) deletions of up to 10 lines did not cause faults.",2005,0, 1594,Traceability and Communication of Requirements in Digital I&C Systems Development,"A set of optimization goal functions designed to improve the efficiency and linearity performance of helix traveling-wave tubes (TWT) is described. These goal functions were implemented in the CHRISTINE suite of large-signal helix TWT codes along with a steepest-descent optimization algorithm to automate the process of circuit parameter variation and to facilitate the rapid exploration of alternative TWT designs. We compare the predicted power, efficiency, and linearity of four different helix TWT circuits, each developed according to a different set of optimization criteria. Out of these designs, a single design was selected to be further developed for use in C-band high-data-rate communications experiments. The detailed design of this linearized TWT with a predicted 1-dB small-signal bandwidth of 1.2 GHz, small-signal centerband gain of 35.7 dB (fc=5.5 GHz), and centerband saturated output power of 52 dBm (158.5 W) is presented.",2002,0, 1595,Trainee reactions and task performance: a study of open training in object-oriented systems development,"AbstractIn this study, we examine two trainee reactions: ease of learning and ease of use and their relationships with task performance in the context of object-oriented systems development. We conducted an experiment involving over 300 subjects. From that pool 72 trainees that met all of the criteria were selected for analysis in a carefully controlled study. We found that ease of learning was strongly correlated to task performance whereas ease of use was not. The finding was unexpected; ease of learning and ease of use are two overlapping concepts while their effects on task performance are very different. We offer a theoretical explanation to the paradoxical finding and its implications to the improvement of training evaluation.",2009,0, 1596,Trust and Cooperation in Peer-to-Peer Systems,"Summary form only given. We present a trust brokering system that operates in a peer-to-peer manner. The network of trust brokers operate by providing peer reviews in the form of recommendations regarding potential resource targets. 
One of the distinguishing features of our work is that it separately models the accuracy and honesty concepts. By separately modeling these concepts, our model is able to significantly improve the performance. We apply the trust brokering system to a resource manager to illustrate its utility in a public-resource grid environment. The simulations performed to evaluate the trust-aware resource management strategies indicate that high levels of ""robustness"" can be attained by considering trust while allocating the resources.",2004,0, 1597,Turnover of information technology professionals: the effects of internal labor market strategies,"Retaining information technology (IT) professionals is important for organizations, given the challenges in sourcing for IT talent. Prior research has largely focused on understanding employee turnover from an intra-individual perspective. In this study we examine employee turnover from a structural perspective. We investigate the impact on IT turnover of organizations' Internal Labor Market (ILM) strategies. ILM strategies include human resource rules, practices, and policies including hiring and promotion criteria, job ladders, wage systems and training procedures. We collect data on ILM strategies and turnover rates for eight major IT jobs across forty-one organizations and analyze the data using confirmatory agglomerative hierarchical clustering techniques. Our results show that organizations adopt distinct ILM strategies for different IT jobs, and that these strategies relate to differential turnover rates. Specifically, technically-oriented IT jobs cluster in craft ILM strategies that are associated with higher turnover, whereas managerially-oriented IT jobs cluster in industrial ILM strategies that are associated with lower turnover. Further, depending on their contingencies of goal orientation (not-for-profit versus for-profit), IT focus (IT producer versus user), and information intensity (IT critical versus support), organizations adopt an industrial ILM strategy for their technically-oriented IT jobs to dampen turnover. Not-for-profit and IT user organizations where IT is critical adopt industrial ILM strategies for their technically-oriented IT jobs to attenuate turnover and improve the predictability of their IT workforce. IT producers and IT users where IT plays a supporting role adopt craft ILM strategies that engender higher turnover to remain timely and flexible in IT skills acquisition.",2004,0, 1598,Two Challenges in Genomics That Can Benefit from Petascale Platforms,"
Supercomputing and new sequencing techniques have dramatically increased the number of genomic sequences now publicly available. The rate at which new data is becoming available, however, far exceeds the rate at which one can perform analysis. Examining the wealth of information contained within genomic sequences presents numerous additional computational challenges necessitating high-performance machines. While there are many challenges in genomics that can greatly benefit from the development of more expedient machines, herein we will focus on just two projects which have direct clinical applications.
",2006,0, 1599,Two controlled experiments concerning the comparison of pair programming to peer review.,"This paper reports on two controlled experiments comparing pair programming with single developers who are assisted by an additional anonymous peer code review phase. The experiments were conducted in the summer semester 2002 and 2003 at the University of Karlsruhe with 38 computer science students. Instead of comparing pair programming to solo programming this study aims at finding a technique by which a single developer produces similar program quality as programmer pairs do but with moderate cost. The study has one major finding concerning the cost of the two development methods. Single developers are as costly as programmer pairs, if both programmer pairs and single developers with an additional review phase are forced to produce programs of similar level of correctness. In conclusion, programmer pairs and single developers become interchangeable in terms of development cost. As this paper reports on the results of small development tasks the comparison could not take into account long time benefits of either technique.",2005,0, 1600,Two Phase Indexes Based Passage Retrieval in Biomedical Texts,"
The biomedical literature is growing at a double-exponential pace. Passage-level retrieval is more effective than document-level retrieval at providing the relevant section of information. This paper presents a passage retrieval method based on two-phase indexes. First, two indexes are built: a paragraph index and a sentence-level half-overlapped window index. Then, the BM25 model is used for retrieval over both indexes. Finally, the passage and paragraph retrieval results are combined to form the final passage retrieval result. The experimental results show that performance improves by 5% with the two-phase indexes compared with the paragraph index alone.
",2007,0, 1601,Two Technology-Enhanced Courses Aimed at Developing Interpersonal Attitudes and Soft Skills in Project Management,"
Recent strategies in the European Union encourage educational styles which promote the development of attitudes and skills as a basis for knowledge construction. The question is whether technology-enhanced settings have the potential to support such educational styles. The Person-Centered Approach, developed by the American psychologist Carl Rogers and adapted in several innovative educational settings holds great promise in promoting experiential, whole person learning. In this paper we illustrate technology-enhanced, person-centered education by describing two course settings and scenarios in which we emphasize, respectively, constructive, interpersonal attitudes and soft skills in the context of project management. As a result of each of the two courses students stated that they had learned significantly on the level of attitudes and soft skills. They considered exchange and discussion with colleagues and active participation during the course as the top factors from which they benefited. Furthermore, the majority of students felt that it was easier for them to work in teams and to establish social relationships in the two courses presented in this article than in traditional courses.
",2006,0, 1602,"Ubiquitous Interactive Art Displays: Are they Wanted, are they Intuitive?","The purpose of this study was to create a ubiquitous, proximity-activated interactive digital display system providing adjusted artworks as content for evaluating viewer reactions and opinions, to determine whether similar interactive ubiquitous systems are a beneficial, enjoyable and even appropriate way to display art. Multimedia used in galleries predominantly provides content following set patterns and disregards the viewer. Interactive displays using viewer location usually require the viewer's conscious participation through carrying some form of hardware or using expensive sensing equipment. We created an inexpensive, simple system that reacts to the user in a ubiquitous manner, allowing the evaluation of the usability and suitability of such systems in the context of viewing art. Results from testing show that interactive displays are generally enjoyed and wanted for displaying art; however, even simple ubiquitous displays can cause user difficulty due to the transparency of their interaction.",2006,0, 1603,Ubiquitous Interactive Video Editing Via Multimodal Annotations,"
Considering that users watching a video with someone else are used to making comments about its contents, such as a remark about someone appearing in the video, in previous work we exploited ubiquitous computing concepts to propose the watching-and-commenting authoring paradigm, in which a user's comments are automatically captured so as to generate a corresponding annotated interactive video. In this paper we revisit and extend our previous work and detail our prototype that supports the watching-and-editing paradigm, discussing how a ubiquitous computing platform may exploit digital ink and associated gestures to support the authoring of multimedia content while enhancing the social aspects of video watching.
",2008,0, 1604,Ultrasound Estimation of Fetal Weight with Fuzzy Support Vector Regression,"In this paper, we present a new support vector regression (SVR) based strategy for simultaneously extracting multiple linear structures in a training data set. As in fuzzy c-prototypes algorithms [17], [18], [10], we introduce fuzzy weights in the SVR formulation which assign to each data point a membership value according to c-structures. We propose to solve the corresponding dual problem under an iterative strategy with an initialization step. Experiments show the benefits of robustness properties of SVR in comparison with the standard fuzzy c-prototypes algorithm. Next, the motion estimation problem is used to illustrate its applicability and relevance in respect of real-world applications.",2007,0, 1605,Uncovering the reality within virtual software teams,"When implementing software development in a global environment, a popular strategy is the establishment of virtual teams. The objective of this paper is to examine the effective project management of this type of team. In the virtual team environment problems arise due to the collaborative nature of software development and the impact distance introduces. Distance specifically impacts coordination, visibility, communication and cooperation within a virtual team. In these circumstances the project management of a virtual team must be carried out in a different manner to that of a team in a single-site location. Results from this research highlighted six specific project management related areas that need to be addressed to facilitate successful virtual team operation. Organizational structure, risk management, infrastructure, process, conflict management and team structure and organization. Additional related areas are the sustained support of senior management and the provision of effective infrastructure",2006,0, 1606,Understanding and Aiding Code Evolution by Inferring Change Patterns,"Evolution continues to play an ever-increasing role in software engineering. Although changing a program is the core of software evolution, program change patterns have not been considered as a first class entity in most classic studies of software evolution. Past empirical studies of software evolution primarily relied on quantitative and statistical analyses of a program over time, but did not focus on semantic and qualitative change patterns of a program. We hypothesize that by treating change patterns as first class entities we can better understand software evolution and also aid programmers in changing software. Our goal is to infer clone evolution patterns from a set of program versions stored in a source code repository. We defined a set of common clone evolution patterns based on our insights from the copy and paste study.",2007,0, 1607,Understanding Business Strategies of Networked Value Constellations Using Goal- and Value Modeling,"In goal-oriented requirements engineering (GORE), one usually proceeds from a goal analysis to a requirements specification, usually of IT systems. In contrast, we consider the use of GORE for the design of IT-enabled value constellations, which are collections of enterprises that jointly satisfy a consumer need using information technology. 
The requirements analysis needed to do such a cross-organizational design not only consists of a goal analysis, in which the relevant strategic goals of the participating companies are aligned, but also of a value analysis, in which the commercial sustainability of the constellation is explored. In this paper we investigate the relation between strategic goal- and value modeling. We use theories about business strategy such as those by Porter to identify strategic goals of a value constellation, and operationalize these goals using value models. We show how value modeling allows us to find more detailed goals, and to analyze conflicts among goals",2006,0, 1608,Understanding knowledge sharing activities in free/open source software projects: An empirical study,"Free/Open Source Software (F/OSS) projects are people-oriented and knowledge intensive software development environments. Many researchers focused on mailing lists to study coding activities of software developers. How expert software developers interact with each other and with non-developers in the use of community products have received little attention. This paper discusses the altruistic sharing of knowledge between knowledge providers and knowledge seekers in the Developer and User mailing lists of the Debian project. We analyze the posting and replying activities of the participants by counting the number of email messages they posted to the lists and the number of replies they made to questions others posted. We found out that participants interact and share their knowledge a lot, their positing activity is fairly highly correlated with their replying activity, the characteristics of posting and replying activities are different for different kinds of lists, and the knowledge sharing activity of self-organizing Free/Open Source communities could best be explained in terms of what we called ''Fractal Cubic Distribution'' rather than the power-law distribution mostly reported in the literature. The paper also proposes what could be researched in knowledge sharing activities in F/OSS projects mailing list and for what purpose. The research findings add to our understanding of knowledge sharing activities in F/OSS projects.",2008,0, 1609,Understanding Research Field Evolving and Trend with Dynamic Bayesian Networks,"
In this paper, we propose a method to understand how research fields evolve through the statistical analysis of research publications and the number of new authors in a particular field. Using a Dynamic Bayesian Network, together with the proposed transitive closure property, a more accurate model can be constructed to better represent the temporal features of how a research field evolves. Experiments on the KDD-related conferences indicate that the proposed method can discover interesting models effectively and help researchers gain better insight into unfamiliar research areas.
",2007,0, 1610,Understanding Stakeholder Values as a Means of Dealing with Stakeholder Conflicts,"
This paper reports on a quantitative study that examines the link between software characteristics, sought-after consequences and personal values in software evaluation, whilst investigating the stakeholders' understanding of software quality. The study involved a survey of 403 subjects, whose responses were then analyzed quantitatively with bi-variate analysis and multivariate analysis of variance. The research argues that trade-offs are often experienced in software development projects because of conflicts. These conflicts involve schedules and priorities and are very much caused by the different stakeholder views of quality, the different desired consequences sought by the different stakeholders and, more importantly, the desired values of the different stakeholders. The research finds that different classes of stakeholders have different views of software quality. The research also finds that certain values sought by the stakeholder influence the sought-after consequences required in the developed software product. However, it is not just any values that affect the stakeholder; rather, it is the values affected by the evaluated software that influence the selection of characteristics and sought-after consequences. Values that are important but are not affected by software use do not influence the stakeholder. As such, these results help us gain a better understanding of what types of values influence the choice of characteristics in software evaluation and the desired consequences in a software product, and why conflicts exist during the software development life-cycle. The results provide a number of important insights and suggest several conclusions. The study showed (1) that stakeholders differ in their priorities in the sought-after consequences of the software being developed; (2) that the desired values which are perceived to be affected by the software differ between stakeholders and influence the choice of characteristic and consequence; and (3) that the consequence-value relationship as described in the Software Evaluation Framework can be valuable for understanding the conflicts and trade-offs found in software development.
",2005,0, 1611,Understanding the Impact of Assumptions on Experimental Validity,"Empirical studies are used frequently in software engineering as a method for studying and understanding software engineering techniques and methods. When conducting studies, researchers make assumptions about three objects, people, processes and products. Researchers usually focus their study on only one of those objects. But, regardless of which type of object is chosen as the focus of the study, researchers make assumptions about all three objects. The impact of those assumptions on experimental validity varies depending on the focus of the study. In this paper, we discuss the various types of assumptions that researchers make. We relate those assumptions back to some concepts from social science research. We then use the results of a people-focused study to illustrate the impact of the assumptions on the results of that study.",2004,0, 1612,Understanding the information needs of public health practitioners: A literature review to inform design of an interactive digital knowledge management system,"The need for rapid access to information to support critical decisions in public health cannot be disputed; however, development of such systems requires an understanding of the actual information needs of public health professionals. This paper reports the results of a literature review focused on the information needs of public health professionals. The authors reviewed the public health literature to answer the following questions: (1) What are the information needs of public health professionals? (2) In what ways are those needs being met? (3) What are the barriers to meeting those needs? (4) What is the role of the Internet in meeting information needs? The review was undertaken in order to develop system requirements to inform the design and development of an interactive digital knowledge management system. The goal of the system is to support the collection, management, and retrieval of public health documents, data, learning objects, and tools. Method:: The search method extended beyond traditional information resources, such as bibliographic databases, tables of contents (TOC), and bibliographies, to include information resources public health practitioners routinely use or have need to use-for example, grey literature, government reports, Internet-based publications, and meeting abstracts. Results:: Although few formal studies of information needs and information-seeking behaviors of public health professionals have been reported, the literature consistently indicated a critical need for comprehensive, coordinated, and accessible information to meet the needs of the public health workforce. Major barriers to information access include time, resource reliability, trustworthiness/credibility of information, and ''information overload''. Conclusions:: Utilizing a novel search method that included the diversity of information resources public health practitioners use, has produced a richer and more useful picture of the information needs of the public health workforce than other literature reviews. There is a critical need for public health digital knowledge management systems designed to reflect the diversity of public health activities, to enable human communications, and to provide multiple access points to critical information resources. 
Public health librarians and other information specialists can serve a significant role in helping public health professionals meet their information needs through the development of evidence-based decision support systems, human-mediated expert searching and training in the use information retrieval systems.",2007,0, 1613,Understanding the Interdependences Among Performance Indicators in the Domain of Industrial Services,"AbstractWithin the context of the EU-Project InCoCo-s, one of the key aims is to standardize integrative industrial service processes in order to facilitate transparency on service operation performance and the resulting customer benefit. Therefore the Service Performance Measurement System (SPMS) has been developed in order to quantify both the efficiency and effectiveness of industrial service operation activities and to support the measurement of customers benefit through industrial service activities. But performance indicators are only a measurable expression of the underlying system performance, a system which is ordinarily complex in nature. It follows, therefore, that it would be beneficial to understand the interdependences between performance indicators in order to better utilize them in evaluating the options for improvements in system performance and the monitoring of an often complex system. Based on a comprehensive literature review and making best use of the tools and expertise available to the InCoCo-S consortium, a process to develop an understanding of the interdependences between performance indicators was created and executed. The results provide both the service provider and the manufacturing customer with an insight into those performance indicators to be targeted for improvement actions and those better suited to monitoring.",2007,0, 1614,Understanding the Parallel Programmer,"As low-cost multiprocessing reaches a wider market, a greater number of programmers will need to be trained for parallel programming. Current studies exploring usability engineering for parallel programming focus only on experienced parallel programmers. This paper applies the card sorting method used in psychology research to understanding the software needs of the novice parallel programmer. This paper demonstrates that novices organize parallel problems by domain type whereas experts use parallel communication type.",2006,0, 1615,"Understanding the Term Reference Model in Information Systems Research: History, Literature Analysis and Explanation","
The heart of every scientific discipline is its own unique, uniform and acknowledged terminology. As an application-oriented mediator between business administration and computer science, information systems research in particular is in need of a theoretical foundation and an instrument capable of translating basic theoretical knowledge into practical applications. Its dependency on and proximity to actual practice, as well as the rapid development of information technology often get in the way of the sound, systematic and consistent formation of concepts. Reference modeling is especially in need of a theoretical foundation. Due to the strong influence of implementation-oriented thought within this field, a gap has resulted between research and practice which has often led to undesirable developments. The high expectations organization and application system developers have on the reutilization of reference models are often disappointed. Apparently, the recommendations made by reference model developers often do not meet the expectations of potential model-users. One reason for this is the non-uniform grasp of the term reference model. This article attempts to counteract this deficiency by way of a detailed analysis of the way the term reference model is used and understood.
",2005,0, 1616,Unified Use Case Statecharts: Case Studies,"Traffic lights are a form of intersection traffic controlling operations are the most commonly applied, especially in urban areas. Traffic lights at the intersection plays an important role in determining the smooth distribution of vehicles on the road, so traffic light system of regulation that would better facilitate traffic flow on a set path. Goal of Settings Logic Analysis of Traffic Lights is based on fuzzy logic: First, Knowing membership functions of input and output of vehicles coming vehicles and output of vehicles at the intersection of Mount Bawakaraeng and duration of green light output in the fuzzy set. Fourth, Knowing the duration of effective green light by using fuzzy logic at the intersection of Mount Bawakaraeng. In this paper there are three membership functions, namely: input membership functions of the arrival of vehicles, vehicle output membership function and membership function duration of green light output. Each membership function is formed by classifying the input and output fuzzy sets by using the mamdani triangle curve representation. To input the number of vehicles coming = d. To iutput the number of vehicle = V. Membership function of output duration of green lights = l. Results Discussion that the duration of effective green light at the intersection of Mount Bawakaraeng Makassar when re-employment, namely: For the arrival of the vehicle 17 and the output of the vehicle 16 so that the effective duration of green lights 50 seconds, for the arrival of vehicles 17 vehicles 18 and outputs the duration of green lights effective 51 seconds, for the arrival of 19 vehicles and 17 vehicles output then the effective duration of green lights 51 seconds, for the arrival of the vehicle 20 and the output of the vehicle 18 so that the effective duration of green lights 54 seconds, for the arrival of 21 vehicles and 19 vehicles, the duration of the output green light Effective 57 seconds. Conclusion First, Know the membership function of input and - - output number of vehicles coming vehicle at the intersection of Mount Bawakaraeng by Veterans in Makassar and the output duration. Burning green light, so the third relation the input variable arrival of vehicles, vehicle output and the output of green light duration is closely related to each other, Second The results obtained indicate the duration of effective green light at the intersection of Mount Bawakaraeng with Veterans at the road when people come home from work.",2010,0, 1617,Unified use case statecharts: Case studies (2007,"Programmable logic controllers (PLCs) are commonly used in the implementation of industrial control systems. Statecharts are a suitable tool for the specification of complex reactive control systems. Statemate produced by iLogix Inc. enables the design, validation and simulation of a statechart model. The authors (1998) have previously developed a methodology and supporting software to enable the targeting of PLCs from a Statemate statechart model. This paper describes a case study which has been undertaken to verify the correct operation of the methodology and software tool",1998,0, 1618,Unveiling the Structure: Effects of Social Feedback on Communication Activity in Online Multiplayer Videogames,"
Feedback intervention in computer-mediated situations can be interpreted as a way to augment communication. According to this idea, this study investigates the effect of providing a group with a Social Network Analysis-based feedback on communication in an on-line game where players talk to each other via textual chat. Three different situations across two different sessions were compared: an Informed Group with a correct feedback, a not-Informed Group with no feedback and a mis-Informed group with an incorrect feedback. Results show that giving correct information increases the related dimensions of communication, while the absence of feedback and the incorrect feedback were not accompanied by any significant modification.
",2007,0, 1619,Upregulation of topoisomerase IIα expression in advanced gallbladder carcinoma: a potential chemotherapeutic target,"Purpose The lack of treatment options other than surgical resection results in unfavourable prognosis of advanced gallbladder carcinoma. The aim of this study was to identify cancer-specific cellular targets that would form the basis for some therapeutic approaches for this disease. Methods Twelve advanced gallbladder carcinoma tissue samples and three samples of normal gallbladder epithelium were screened to identify differentially expressed genes by DNA microarray analysis. The results obtained were validated in an independent sample set by quantitative real-time reverse transcription-polymerase chain reaction (RT-PCR). Among the genes picked up, one molecule, topoisomerase IIα (TOPO IIα), was further assessed immunohistochemically as a potential chemotherapeutic target, and the growth inhibitory effects of etoposide, doxorubicin and idarubicin, representative TOPO IIα inhibitors, on two different gallbladder carcinoma cell lines were compared with that of gemcitabine and 5-fluorouracil. Results Five upregulated genes were identified: four cell cycle-related genes (TOPO IIα, cyclin B2, CDC28 protein kinase regulatory subunit 2, ubiquitin-conjugating enzyme E2C) and a metabolism-related gene (γ-glutamyl hydrolase). Immunohistochemically, TOPO IIα was expressed in gallbladder cancer cells, and 16 of 35 cases (46%) had strong TOPO IIα expression defined as having a labeling index of >50%. In an in vitro growth inhibition assay, etoposide, as well as doxorubicin and idarubicin, was the most effective for OCUG-1 cells that had strong TOPO IIα expression, while gemcitabine was the most effective for NOZ cells with weak TOPO IIα expression. Etoposide induced apoptosis of OCUG-1 cells. Conclusions TOPO IIα might be an effective chemotherapeutic target in advanced gallbladder carcinoma, especially when it is expressed strongly.",2008,0, 1620,UQL: A UML-based Query Language for Integrated Data,"The authors describe the basic ideas and concepts behind the Information Retrieval Query Language (IRQL) that is used as one of the back-ends in the GETESS project. The front-end provides a user interface which is embedded in a dialogue system. This dialogue system allows queries to be formulated in a user friendly (i.e. exploiting a limited range of natural language) and interactive way. Access to the analyzed data is provided by IRQL. The principal focus of IRQL development is the integration of concepts of information retrieval, database query languages, and query languages for semi-structured data. Therefore, we will be able to exploit the structure of documents, if known, and can additionally use information retrieval techniques regardless of whether the structure is known or not. Our approach develops a query language that is compatible with the recently adopted SQL99 standard and information retrieval clauses (e.g. Boolean retrieval). Our data model extends the object-relational model and additionally supports an abstraction of attributes. That is, we can use attribute-independent queries as well as attribute-dependent ones as in RDBMSs. We evaluate IRQL queries by mapping them to queries supported by existing systems such as object-relational DBMSs, full-text DBMSs, or conventional search engines, and post-processing the results supplied by these systems, if necessary",2000,0, 1621,Usability Evaluation of User Interfaces Generated with a Model-Driven Architecture Tool.
Chapter 2,"Model-Driven Architecture (MDA) has recently attracted interest of both research community and industry corporations. It specifies an automated process of developing interactive applications from high-level models to code generation. This approach can play a key role in the fields of Software Engineering (SE) and Human-Computer Interaction (HCI). However, although there are some MDA-compliant methods to develop user interfaces, none of them explicitly integrates usability engineering to user interface engineering. This chapter addresses this issue by showing how the usability of user interfaces that are automatically generated by an industrial MDA-compliant CASE tool can be assessed. The goal is to investigate if",2007,0, 1622,Usability of E-learning tools,"The correlation between the effort to develop a learning process and early size measures could be used to assess the usability of an employed tool. In particular, when the measures are obtained from the learning process specification and they are relevant effort indicators we can assert that the technical competences of instructional designers are not relevant for the tool usage. We present initial results of applying empirical analysis to confirm a previously usability study performed on the ASCIO-S (Adaptive Self consistent learning Object SET) editor, a visual language based tool for developing adaptive learning processes",2006,0, 1623,Use of Agent Prompts to Support Reflective Interaction in a Learning-by-Teaching Environment,"
A learning-by-teaching environment (Biswas, Schwarz, Bransford et al., 2001) can be used to create a context in which students play the role of tutor by teaching an agent tutee. Without meaningful feedback from the agent, there is no reason to expect students' engagement with the teaching interaction or growth in learning. This study investigates the design of student-agent reflective interaction triggered by agent prompts in a learning-by-teaching agent environment, Betty's Brain. A pilot study of using the prompts within the agent environment is undertaken. The results give preliminary evidence that agent-prompt support for reflective interaction can enhance students' learning when pursuing learning-by-teaching activities.
",2008,0, 1624,Use of Graphical Probabilistic Models to build SIL claims based on software safety standards such as IEC61508-3,"AbstractSoftware reliability assessment is different from traditional reliability techniques and requires a different process. The use of development standards is common in current good practice. Software safety standards recommend processes to design and assure the integrity of safety-related software. However the reasoning on the validity of these processes is complex and opaque. In this paper an attempt is made to use Graphical Probability Models (GPMs) to formalise the reasoning that underpins the construction of a Safety Integrity Level (SIL) claim based upon a safety standard such as IEC61508 Part 3. There are three major benefits: the reasoning becomes compact and easy to comprehend, facilitating its scrutiny, and making it easier for experts to develop a consensus using a common formal framework; the task of the regulator is supported because to some degree the subjective reasoning which underpins the expert consensus on compliance is captured in the structure of the GPM; the users will benefit from software tools that support implementation of IEC61508, such tools even have the potential to allow cost-benefit analysis of alternative safety assurance techniques.This report and the work it describes were funded by the Health and Safety Executive. The opinions or conclusions expressed are those of the authors alone and do not necessarily represent the views of the Health and Safety Executive.",2006,0, 1625,Use of relative code churn measures to predict system defect density,"Software systems evolve over time due to changes in requirements, optimization of code, fixes for security and reliability bugs etc. Code churn, which measures the changes made to a component over a period of time, quantifies the extent of this change. We present a technique for early prediction of system defect density using a set of relative code churn measures that relate the amount of churn to other variables such as component size and the temporal extent of churn. Using statistical regression models, we show that while absolute measures of code chum are poor predictors of defect density, our set of relative measures of code churn is highly predictive of defect density. A case study performed on Windows Server 2003 indicates the validity of the relative code churn measures as early indicators of system defect density. Furthermore, our code churn metric suite is able to discriminate between fault and not fault-prone binaries with an accuracy of 89.0 percent.",2005,0, 1626,User Needs Analysis and requirements engineering: Theory and practice,"Several comprehensive User Centred Design methodologies have been published in the last decade, but while they all focus on users, they disagree on exactly what activities should take place during the User Needs Analysis, what the end products of a User Needs Analysis should cover, how User Needs Analysis findings should be presented, and how these should be documented and communicated. This paper highlights issues in different stages of the User Needs Analysis that appear to cause considerable confusion among researchers and practitioners. It is our hope that the User-Centred Design community may begin to address these issues systematically. A case study is presented reporting a User Needs Analysis methodology and process as well as the user interface design of an application supporting communication among first responders in a major disaster. 
It illustrates some of the differences between the User-Centred Design and the Requirements Engineering communities and shows how and where User-Centred Design and Requirements Engineering methodologies should be integrated, or at least aligned, to avoid some of the problems practitioners face during the User Needs Analysis.",2006,0, 1627,Using 'Cited by' Information to Find the Context of Research Papers,"
This paper proposes a novel method of analyzing data to find important information about the context of research papers. The proposed CCTVA (Collecting, Cleaning, Translating, Visualizing, and Analyzing) method helps researchers find the context of papers on topics of interest. Specifically, the method provides visualization information that maps a research topic's evolution and links to other papers based on the results of Google Scholar and CiteSeer. CCTVA provides two types of information: one type shows the paper's title and the author, while the other shows the paper's title and the reference. The goal of CCTVA is to enable both novices and experts to gain insight into how a field's topics evolve over time. In addition, by using linkage analysis and visualization, we identify five special phenomena that can help researchers conduct literature reviews.
",2008,0, 1628,Using a Hybrid Method for Formalizing Informal Stakeholder Requirements Inputs,"Success of software development depends on the quality of the requirements specification. Moreover, good - sufficiently complete, consistent, traceable, and testable - requirements are a prerequisite for later activities of the development project. Without understanding what the stakeholders really want and need, and writing these requirements, projects will not develop what the stakeholders wanted. During the development of the WinWin negotiation model and the EasyWinWin requirements negotiation method, we have gained considerable experience in capturing informal requirements in over 100 projects. However, the transition from informal representations to semi-formal and formal representations is still a challenging problem. Based on our analysis of the projects to date, we have developed an integrated set of gap-bridging methods as a hybrid method to formalize informal stakeholder requirements inputs. The basic idea is that orchestrating these gap-bridging methods through the requirements engineering process can significantly eliminate requirements related problems and ease the process of formality transition.",2006,0, 1629,Using ABC Model for Software Process Improvement: A Balanced Perspective,"Recently, many related researches focus on using mathematical approaches or artificial techniques to efficiently improve software development process (SDP). This paper provides a managerial viewpoint to discuss software process improvement (SPI) and introduces an alternative orientation that can lead to asking new questions. We combine activity-based costing (ABC), balanced scorecard (BSC) and capability maturity model (CMM) into SPI and propose a new model, called the ABC Model, also called the ABCM. There are two purposes of the proposed model. The first is to reshape the effective SDP in terms of goals and strategies of organizations. The second is to evaluate the performance of SDP based on the balanced perspective. This paper has two perspectives introduced, a balanced perspective on SDP, and a process-based perspective on the ABCM. Finally, this research is a longitudinal and practical research and employs a case study to propose a feasible model for SPI.",2006,0, 1630,Using Abstraction in the Verification of Simulation Coercion,"Simulation coercion concerns the adaptation of an existing simulation to meet new requirements. Interactions among course-of-action options available during coercion can become sufficiently complex that full verification of the simulation as it is adapted becomes cost-prohibitive. To address this issue we introduce two forms of abstraction, as employed in the model-checking community, to support verification of critical features of the simulation. We extend existing abstraction methods to facilitate our goals, and propose a useful abstraction method based on partial traces. As a case study, we apply our abstraction methods to the verification of a coercion of an existing simulation.",2006,0, 1631,Using Actor Portrayals to Systematically Study Multimodal Emotion Expression: The GEMEP Corpus,"
Emotion research is intrinsically confronted with a serious difficulty to access pertinent data. For both practical and ethical reasons, genuine and intense emotions are problematic to induce in the laboratory; and sampling sufficient data to capture an adequate variety of emotional episodes requires extensive resources. For researchers interested in emotional expressivity and nonverbal communication of emotion, this situation is further complicated by the pervasiveness of expressive regulations. Given that emotional expressions are likely to be regulated in most situations of our daily lives, spontaneous emotional expressions are especially difficult to access. We argue in this paper that, in view of the needs of current research programs in this field, well-designed corpora of acted emotion portrayals can play a useful role. We present some of the arguments motivating the creation of a multimodal corpus of emotion portrayals (Geneva Multimodal Emotion Portrayal, GEMEP) and discuss its overall benefits and limitations for emotion research.
",2007,0, 1632,Using Agent-Based Modelling Approaches to Support the Development of Safety Policy for Systems of Systems,"
A safety policy defines the set of rules that governs the safe interaction of agents operating together as part of a system of systems (SoS). Agent autonomy can give rise to unpredictable, and potentially undesirable, emergent behaviour. Deriving rules of safety policy requires an understanding of the capabilities of an agent as well as how its actions affect the environment and consequently the actions of others. Methods for multi-agent system design can aid in this understanding. Such approaches mention organisational rules. However, there is little discussion about how they are derived. This paper proposes modelling systems according to three viewpoints: an agent viewpoint, a causal viewpoint and a domain viewpoint. The agent viewpoint captures system capabilities and inter-relationships. The causal viewpoint describes the effect an agent's actions has on its environment as well as inter-agent influences. The domain viewpoint models assumed properties of the operating environment.

",2006,0, 1633,Using Bayesian Belief Networks to Model Software Project Management Antipatterns,"In spite of numerous traditional and agile software project management models proposed, process and project modeling still remains an open issue. This paper proposes a Bayesian network (BN) approach for modeling software project management antipatterns. This approach provides a framework for project managers, who would like to model the cause-effect relationships that underlie an antipattern, taking into account the inherent uncertainty of a software project. The approach is exemplified through a specific BN model of an antipattern. The antipattern is modeled using the empirical results of a controlled experiment on extreme programming (XP) that investigated the impact of developer personalities and temperaments on communication, collaboration-pair viability and effectiveness in pair programming. The resulting BN model provides the precise mathematical model of a project management antipattern and can be used to measure and handle uncertainty in mathematical terms",2006,0, 1634,Using Cognitive Affective Interaction Model to Construct On-Line Game for Creativity,"

The paper constructed an online game for Creativity. The development of the game is based on the Cognitive Affective Interaction Model that was designed to help students develop the skills for divergent and creative thinking. First, we proposed a framework for designing creativity games. Then, an online game system is constructed with the strategies of teaching for creativity. We proved that the creativity of the learners can be improved by the proposed game-based learning system. Conclusively, game-based learning creates a new opportunity for creativity.

",2006,0, 1635,Using Context Distance Measurement to Analyze Results across Studies,"Providing robust decision support for software engineering (SE) requires the collection of data across multiple contexts so that one can begin to elicit the context variables that can influence the results of applying a technology. However, the task of comparing contexts is complex due to the large number of variables involved. This works extends a previous one in which we proposed a practical and rigorous process for identifying evidence and context information from SE papers. The current work proposes a specific template to collect context information from SE papers and an interactive approach to compare context information about these studies. It uses visualization and clustering algorithms to help the exploration of similarities and differences among empirical studies. This paper presents this approach and a feasibility study in which the approach is applied to cluster a set of papers that were independently grouped by experts.",2007,0, 1636,Using correlation and accuracy for identifying good estimators,"

Human-based estimation remains the predominant methodology of choice [1]. Understanding the human estimator is critical for improving the effort estimation process. Every human estimator draws upon their background in terms of domain knowledge, technical knowledge, experience, and education in formulating an estimate. This research presented at the PROMISE 2007 workshop assessed the goodness of human estimation based only on project accuracy. This research extends the goodness of human estimation to also include component correlation. Thus, a good estimator is accurate and also does a good job of ranking component effort. Using this revised definition of goodness of estimation produces an average classification rate of 93.3 percent over 1000 trials. Furthermore, the resulting decision tree is extremely intuitive.

",2008,0, 1637,Using Data Mining Technology to improve Manufacturing Quality - A Case Study of LCD Driver IC Packaging Industry,"In recent year, because of the professional teamwork, to improve the qualification percentage of products, to accelerate the acknowledgement of product defects and to find out the solution, the LCD driver IC packaging factories have to establish an analysis mode for quality problems of product for more effective and quicker acquisition of needed information and to improve the customer's satisfaction for information system. The past information system used neural network to improve the yield rate of production. In this research employs the star schema of data warehousing as the base of line analysis, and uses decision tree in data mining to establish a quality analysis system for the defects found in the production processes of package factories in order to provide an interface for problem analysis, enabling quick judgment and control over the cause of problem to shorten the time solving the quality problem. The result of research shows that the use of decision tree algorithm reducing the numbers of defected inner leads and chips has been improved, and using decision tree algorithm is more suitable than using neural network in quality problem classification and analysis of the LCD driver IC packaging industry",2006,0, 1638,Using Economics as Basis for Modelling and Evaluating Software Quality,"The economics and cost of software quality have been discussed in software engineering for decades now. There is clearly a relationship and a need to manage cost and quality in combination. Moreover, economics should be the basis of any quality analysis. However, this implies several issues that have not been addressed to an extent so that managing the economics of software quality is common practice. This paper discusses these issues, possible solutions, and research directions.",2007,0, 1639,Using Enterprise Architecture Standards in Managing Information Technology,"Enterprise Architecture (EA) defines the guidelines for the design and implementation of Information Technology (IT). Acting as the force that ensures alignment of organizational business plan(s) with IT, current EA frameworks are techno-centric in that business goals, strategies and governance are considered only from the informational aspects of automation. This means the current EA frameworks do not address the human behavior of stakeholders during EA excluding factors such as the effect of organizational change caused by EA and the new roles, duties, and responsibilities they are assigned. This paper extends previous work incorporating ideas from the Theory of Structuration to address human and organizational behavior as significant inputs to EA. This paper describes human and organizational behavior utilizing an approach to EA that includes preparing the organization, planning, educating and training staff and aspects of organizational and behavioral theory that focuses on communication and collaboration as a part of EA development.",2016,0, 1640,Using Entropy Analysis to Find Encrypted and Packed Malware,"In statically analyzing large sample collections, packed and encrypted malware pose a significant challenge to automating the identification of malware attributes and functionality. 
Entropy analysis examines the statistical variation in malware executables, enabling analysts to quickly and efficiently identify packed and encrypted samples",2007,0, 1641,Using flexible points in a developing simulation of selective dissolution in alloys,"Coercion is a semi-automated simulation adaptation technology that uses subject-matter expert insight about model abstraction alternatives, called flexible points, to change the behavior of a simulation. Coercion has been successfully applied to legacy simulations, but never before to a simulation under development. In this paper, we describe coercion of a developing simulation and compare it with our experience coercing legacy simulations. Using a simulation of selective dissolution in alloys as a case study, we observe that applying coercion early in the development process can be very beneficial, aiding subject matter experts in formalizing assumptions and discovering unexpected interactions. We also discuss the development of new coercion tools and a new language (Flex ML) for working with flexible points.",2007,0, 1642,Using Formal Specification Techniques for Advanced Counseling Systems in Health Care,"

Computer-based counseling systems in health care play an important role in the toolset available for doctors to inform, motivate and challenge their patients according to a well-defined therapeutic goal. In order to study value, use, usability and effectiveness of counseling systems for specific use cases and purposes, highly adaptable and extensible systems are required, which are - despite their flexibility and complexity - reliable, robust and provide exhaustive logging capabilities. We developed a computer-based counseling system, which has some unique features in that respect: The actual counseling system is generated out of a formal specification. Interaction behavior, logical conception of interaction dialogs and the concrete look & feel of the application are separately specified. In addition, we have begun to base the formalism on a mathematical process calculus enabling formal reasoning. As a consequence e.g. consistency and termination of a counseling session with a patient can be verified. We can precisely record and log all system and patient generated events; they are available for advanced analysis and evaluation.

",2007,0, 1643,Using Fuzzy Set Theory to Assess Country-of-Origin Effects on the Formation of Product Attitude,"

Several researchers on country-of-origin (coo) have expressed their interest in knowing how consumers' emotional reactions toward coo-cues affect product attitude formation. This paper shows how Fuzzy Set Theory might serve as a useful approach to that problem. Data was gathered by means of self-administered questionnaires. Technically, orness of OWA-operators enabled us to distinguish consumers expressing highly positive versus less positive emotions toward coo. It appeared that this variance in emotional estate goes together with a difference in aggregating product-attribute beliefs.

",2006,0, 1644,Using Historical In-Process and Product Metrics for Early Estimation of Software Failures,"The benefits that a software organization obtains from estimates of product quality are dependent upon how early in the product cycle that these estimates are available. Early estimation of software quality can help organizations make informed decisions about corrective actions. To provide such early estimates we present an empirical case study of two large scale commercial operating systems, Windows XP and Windows Server 2003. In particular, we leverage various historical in-process and product metrics from Windows XP binaries to create statistical predictors to estimate the post-release failures/failure-proneness of Windows Server 2003 binaries. These models estimate the failures and failure-proneness of Windows Server 2003 binaries at statistically significant levels. Our study is unique in showing that historical predictors for a software product line can be useful, even at the very large scale of the Windows operating system",2006,0, 1645,Using ICT to Improve the Education of Students with Learning Disabilities,"AbstractThe potential of Information and Communications Technology in all forms of education has been well demonstrated. In this paper we examine how ICT can improve the education of students with learning disabilities (LD). We will begin by examining the nature of learning disabilities and discussing the different approaches to schooling for students with LD. Learning models have evolved over recent years in response to many factors including the advent of technology in education. This is particularly important in this arena where technology can make a significant difference to educating these students, but only if it is used appropriately. The paper then looks at a case study of use of ICT in a school catering for students with LD.",2008,0, 1646,"Using Lexical, Terminological and Ontological Resources for Entity Recognition Tasks in the Medical Domain","

This paper reports on a case-study of applying various publicly available resources (lexical, terminological and ontological) for medical recognition tasks, that is, for identifying medical entities in the analysis of clinical practice guideline texts. The paper provides a methodological support that systematises the entity recognition task in the medical domain. Preliminary analysis shows that many of the medical linguistic expressions describing goals and intentions in natural language are included in the current terminological resources. So, these resources can be used as a means of disambiguating and structuring this type of expressions, with the final aim of indexing guideline repositories for efficient searching.

",2007,0, 1647,Using Linear Regression Models to Analyse the Effect of Software Process Improvement,

In this paper we publish the results of a thorough empirical evaluation of a CMM-based software process improvement program that took place at the IT department of a large Dutch financial institution. Data of 410 projects collected over a period of four years are analysed and a productivity improvement of about 20% is found. In addition to these results we explain how the use of linear regression models and hierarchical linear models greatly enhances the sensitivity of analysis of empirical data on software improvement programs.

",2006,0, 1648,Using measurements to support real-option thinking in agile software development,"

This position paper applies real-option-theory perspective to agile software development. We complement real-option thinking with the use of measurements to support midcourse decision-making from the viewpoint of the client. Our position is motivated by using empirical data gathered from secondary sources.

",2008,0, 1649,Using Mutation Analysis for Assessing and Comparing Testing Coverage Criteria,"The empirical assessment of test techniques plays an important role in software testing research. One common practice is to seed faults in subject software, either manually or by using a program that generates all possible mutants based on a set of mutation operators. The latter allows the systematic, repeatable seeding of large numbers of faults, thus facilitating the statistical analysis of fault detection effectiveness of test suites; however, we do not know whether empirical results obtained this way lead to valid, representative conclusions. Focusing on four common control and data flow criteria (block, decision, C-use, and P-use), this paper investigates this important issue based on a middle size industrial program with a comprehensive pool of test cases and known faults. Based on the data available thus far, the results are very consistent across the investigated criteria as they show that the use of mutation operators is yielding trustworthy results: generated mutants can be used to predict the detection effectiveness of real faults. Applying such a mutation analysis, we then investigate the relative cost and effectiveness of the above-mentioned criteria by revisiting fundamental questions regarding the relationships between fault detection, test suite size, and control/data flow coverage. Although such questions have been partially investigated in previous studies, we can use a large number of mutants, which helps decrease the impact of random variation in our analysis and allows us to use a different analysis approach. Our results are then compared with published studies, plausible reasons for the differences are provided, and the research leads us to suggest a way to tune the mutation analysis process to possible differences in fault detection probabilities in a specific environment",2006,0, 1650,Using Practice Outcome Areas to Understand Perceived Value of CMMI Specific Practices for SMEs,"

In this article, we present a categorization of CMMI Specific Practices, and use this to reanalyze prior work describing the perceived value of those practices for Small-to-Medium-sized Enterprises (SMEs), in order to better understand the software engineering practice needs of SMEs. Our categorization is based not on process areas, but on outcome areas (covering organizational, process, project, and product outcomes) and on the nature of activities leading to outcomes in those areas (covering planning, doing, checking, and improvement activities). Our reanalysis of the perceived value of Specific Practices for the CMMI Level 2 Process Areas shows that SMEs most value practices for working on project-related outcomes, and for planning and doing work on product-related outcomes. Our categorization of practices will serve as a framework for further study about CMMI and other SPI approaches.

",2007,0, 1651,Using Simulated Students for the Assessment of Authentic Document Retrieval,"

In the REAP system, users are automatically provided with texts to read that are targeted to their individual reading abilities and needs. To assess such a system, students with different abilities use it, and then researchers measure how well it addresses their needs. In this paper, we describe an approach using simulated students to perform this assessment. This enables researchers to determine if the system functions well enough for the students to learn the curriculum and how factors such as corpus size and retrieval criteria affect performance. We discuss how we have used simulated students to assess the REAP system and to prepare for an upcoming study, as well as future work.

",2006,0, 1652,Using Sketching to Aid the Collaborative Design of Information Visualisation Software - A Case Study,"AbstractWe present results of a case study involving the design of Information Visualisation software to support work in the field of computational biology. The software supports research among scientists with very different technical backgrounds. In the study, the design process was enhanced through the use of sketching and design patterns. The results were that the use of sketching as an integral part of a collaborative design process aided creativity, communication, and collaboration. These findings show promise for use of sketching to augment other design methodologies for Information Visualisation.",2006,0, 1653,Using social agents to visualize software scenarios,"Enabling nonexperts to understand a software system and the scenarios of usage of that system can be challenging. Visually modeling a collection of scenarios as social interactions can provide quicker and more intuitive understanding of the system described by those scenarios. This project combines a scenario language with formal structure and automated tool support (ScenarioML) and an interactive graphical game engine featuring social automomous characters and text-to-speech capabilities. We map scenarios to social interactions by assigning a character to each actor and entity in the scenarios, and animate the interactions among these as social interactions among the corresponding characters. The social interactions can help bring out these important aspects: interactions of multiple agents, pattern and timing of interactions, non-local inconsistencies within and among scenarios, and gaps and missing information in the scenario collection. An exploratory study of this modeling's effectiveness is presented.",2006,0, 1654,Using Software Development Progress Data to Understand Threats to Project Outcomes,"In this paper we describe our on-going longitudinal study of a large complex software development project. We discuss how we used project metrics data collected by the development team to identify threats to project outcomes. Identifying and addressing threats to projects early in the development process should significantly reduce the chances of project failure. We have analysed project data to pinpoint the sources of threats to the project. The data we have used is embedded in the project's fortnightly progress reports produced by the project team. The progress reports are part of the software measurement program this company operates. The company has highly mature development processes which were assessed at CMM level 5 in 2004. Our analysis shows that standard project progress data can generate rich insights into the project; insights that go beyond those anticipated when the metrics were originally specified. Our results reveal a pattern of threats to the project that the project team can focus on mitigating. The project team is already aware of some threats, for example that communication with the customer is a significant threat to the project. But there are other threats the team is not aware of for example that people issues within the software team are not a significant threat to the project",2005,0, 1655,Using Speech Act Profiling for Deception Detection,This paper presents the initial results of analysis of nonlinear spectral features for classifying truthful and deceptive speech. These features are derived on a Bark scale based on the psychoacoustic masking property of human speech perception. 
Truthful and deceptive speeches are established a posteriori by a male speaker under jeopardy. Test results using significant energy features at Bark bands and a neural network have a potential to show delicate variations between truthful and deceptive utterances.,2013,0, 1656,Using UML in the context of agent-oriented software engineering: State of the art.,"AbstractMost of the methodologies and notations for agent-oriented software engineering developed over the past few years are based on the Unified Modeling Language (UML) or proposed extensions of UML. However, at the moment an overview on the different approaches is missing. In this paper. we present a state-of-the-art survey of the different methodologies and notations that, in one way or the other, rely on the usage of UML for the specification of agent-based systems. We focus on two aspects, i.e., design methodologies for agent-oriented software engineering, and different types of notations (e.g., for interaction protocols, social structures, or ontologies) that rely on UML.1",2004,0, 1657,Using Unified Modeling Language for Conceptual Modelling of Knowledge-Based Systems,"The paper discusses how hypertext can be used to provide explanations in knowledge-based systems (KBS), from both conceptual and implementation perspectives. To this end, it proposes a generic approach to providing hypertext-based explanations, which is based on the functional match between hypertext and explanations in MPS. A simulated KBS for financial analysis (Hyper-FINALYZER) is also described to demonstrate the approach. First, deep knowledge can be linked to KBS output with referential links. Second, various concepts and procedures involved in problem solving can be linked to each other with both referential links and organizational links to reflect the interdependence among domain constructs and the complexity of the task domain.<>",1994,0, 1658,Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement,"The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding and systematic appraisal, and identify areas of improvement of a business process. Unified modelling language (UML) is a general purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier for efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage?especially from the pilot phase, parallel processing of data and correctly positioned process controls?should improve the efficiency and predictability of research projects. 
Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential maybe underutilized within health care.",2007,0, 1659,Utilization of cache area in on-chip multiprocessor,"Network-on-chip (NoC) architectures have been recently proposed as the communication framework for large-scale chips. A well-designed NoC architecture can facilitate the IP cores to communicate with each other efficiently. In this paper, we propose a systematic mapping scheme, called area utilization based mapping (AUBM), to map the IP cores from the communication core graph to the mesh network. In AUBM, the IP cores can be of various sizes. Extensive experiments have been conducted for evaluating the mapping schemes. AUBM is compared with previously proposed schemes for different applications as well as synthetic workloads. Our experiment results show that AUBM outperforms others in almost all cases in terms of the mapping cost involving traffic volume and chip area.",2012,0, 1660,Validating Mediqual Constructs,"Use case models capture and describe the functional requirements of a software system. A use case driven development process, where a use case model is the principal basis for constructing an object-oriented design, is recommended when applying UML. There are, however, some problems with use case driven development processes and alternative ways of applying a use case model have been proposed. One alternative is to apply the use case model in a responsibility-driven process as a means to validate the design model. We wish to study how a use case model best can be applied in an object-oriented development process and have conducted a pilot experiment with 26 students as subjects to compare a use case driven process against a responsibility-driven process in which a use case model is applied to validate the design model. Each subject was given detailed guidelines on one of the two processes, and used those to construct design models consisting of class and sequence diagrams. The resulting class diagrams were evaluated with regards to realism, that is, how well they satisfied the requirements, size and number of errors. The results show that the validation process produced more realistic class diagrams, but with a larger variation in the number of classes. This indicates that the use case driven process gave more, but not always more appropriate, guidance on how to construct a class diagram. The experiences from this pilot experiment were also used to improve the experimental design, and the design of a follow-up experiment is presented.",2003,0, 1661,"Validation (V&V), Model-based Verification Integration of Structured Review and Modelbased Verification: a Case Study","The author examines potential changes resulting from current research and evaluation studies of software verification and validation techniques and tools. She offers specific predictions with regard to: standards and guidelines; specification, modeling, and analysis; reviews, inspections, and walkthroughs; and tests.<>",1989,0, 1662,Validation metrics for response histories: perspectives and case studies,"

The quantified comparison of transient response results are useful to analysts as they seek to improve and evaluate numerical models. Traditionally, comparisons of time histories on a graph have been used to make subjective engineering judgments as to how well the histories agree or disagree. Recently, there has been an interest in quantifying such comparisons with the intent of minimizing the subjectivity, while still maintaining a correlation with expert opinion. This increased interest has arisen from the evolving formalism of validation assessment where experimental and computational results are compared to assess computational model accuracy. The computable measures that quantify these comparisons are usually referred to as validation metrics. In the present work, two recently developed metrics are presented, and their wave form comparative quantification is demonstrated through application to analytical wave forms, measured and computed free-field velocity histories, and comparison with Subject Matter Expert opinion.

",2007,0, 1663,Validation of design methods: lessons from medicine,"The definition of design decision helps architect with explicit design space exploration and improved trace ability from requirement, software architecture to implementation. In this paper, we proposed an automatic design decision validation method. We first extend the design decision meta-model to record the evaluation criterion, impact scope, and analysis methods for software requirement. Our evaluation algorithm analyzes change impact and prepares analysis input for each affected quality requirement. The evaluation process is automatically executed based on an analysis method integration framework. If a quality requirement cannot be fulfilled by relevant design decisions, our search algorithm will explore the design spaces to locate possible conflicted design decisions. We implement our approach by extending a design decision modeling tool ABC/DD, and use our approach to validate a web based interactive application for TV.",2013,0, 1664,Value-Based Business-IT Alignment in Networked Constellations of Enterprises,"Business-ICT alignment is the problem of matching ICTservices with the requirements of the business. In businesses of any significant size, business-ICT alignment is a hard problem, which is currently not solved completely. With the advent of networked constellations of enterprises, the problem gets a new dimension, because in such a network, there is not a single point of authority for making decisions about ICT support to solve conflicts in requirements these various enterprises may have. Network constellations exist when different businesses decide to cooperate by means of ICT networks, but they also exist in large corporations, which often consist of nearly independent business units, and thus have no single point of authority anymore. In this position paper we discuss the need for several solution techniques to address the problem of business-ICT alignment in networked constellations. Such techniques include: -RE techniques to describe networked value constellations requesting and offering ICT services as economic value. These techniques should allow reasoning about the matching of business needs with available ICT services in the constellation. - RE techniques to design a networked ICT architecture that supports ICT services required by the business, taking the value offered by those services, and the costs incurred by the architecture, into account. - Models of decision processes about ICT services and their architecture, and maturity models of those processes.The techniques and methods will be developed and validated using case studies and action research.",2005,0, 1665,Value-Based Processes for COTS-Based Applications,"Economic imperatives are changing the nature of software development processes to reflect both the opportunities and challenges of using COTS products. Processes are increasingly moving away from the time-consuming composition of custom software from lines of code (although these processes still apply for developing the COTS products themselves) toward assessment, tailoring, and integration of COTS or other reusable components. 
Two factors are driving this change: COTS or other reusable components can provide significant user capabilities within limited costs and development time, and more COTS products are becoming available to provide needed user functions.",2005,0, 1666,Value-Based Software Engineering: Overview and Agenda,"ISO/IEC 29110 offers a customized set of standards and guides for very small entities to help them improve their competitiveness in quality, cost, and schedule.",2016,0, 1667,Value-Oriented Requirements Prioritization in a Small Development Organization,"Requirements engineering, especially requirements prioritization and selection, plays a critical role in overall project development. In small companies, this often difficult process can affect not only project success but also overall company survivability. A value-oriented prioritization (VOP) framework can help this process by clarifying and quantifying the selection and prioritization issues. A case study of a small development company shows a successful VOP deployment that improved communications and saved time by focusing requirements decisions for new product releases on core company values",2007,0, 1668,Valuing computer science education research?,"The computer industry is playing an increasingly important role in India's economy. However, for a number of reasons, many researchers are not doing high-quality work. Computer science research in India takes place at academic, government-sponsored, and industry-sponsored institutions. These major institutions can conduct effective research because they have sufficient funding, high-quality programs, good equipment, and an effective infrastructure. This is not the case at India's many other institutions. Many local observers say that Indian computer scientists make advances in existing areas of research but rarely do cutting-edge work. This occurs, in part, because many Indian computer scientists receive little direction and have few co-workers in their fields, which means they work in relative isolation",1997,0, 1669,Verification and validation of a project collaboration tool.,"This paper specifically addresses the methods in use for the F-14 Digital Flight Control System (DFCS) program, however many of the methods used in this effort are currently being applied to the F-18E/F, V-22 and EA-6B programs. Incorporation of a control law design into the flight control computer's operational flight program requires the engineer to follow specific design and implementation tasks in order to prove the design. These design tasks include detailed control law development, open-loop feedback stability robustness tests, and closed-loop control law performance testing. The implementation tasks include coding the design into a full non-linear simulation, verification of the control law execution, validation of the control law performance, and certification to ensure the complete system is qualified for flight testing. Many of these tasks were accomplished using a full non-linear simulation of the F-14 combined with tools developed using the SIMULINK graphical analysis package. The paper discusses the complete process from control law design to piloted evaluation while placing emphasis on the tools that were used to complete this effort",1994,0, 1670,Verification of Clinical Guidelines by Model Checking,"Clinical guidelines systematically assist practitioners with providing appropriate health care for specific clinical circumstances. However, a significant number of guidelines are lacking in quality. 
In this paper, we use the UML modeling language to capture guidelines and model checking techniques for their verification. We have established a classification of possible properties to be verified in a guideline and we present an automated approach based on a translation from UML to PROMELA, the input language of the SPIN model checker. Our approach is illustrated with a guideline based on a guideline published by the National Guideline Clearing House (NGC).",2008,0, 1671,Verifying Access Control Policies through Model Checking,"Access control mechanisms are a widely adopted technology for information security. Since access decisions (i.e., permit or deny) on requests are dependent on access control policies, ensuring the correct modeling and implementation of access control policies is crucial for adopting access control mechanisms. To address this issue, we develop a tool, called ACPT (Access Control Policy Testing), that helps to model and implement policies correctly during policy modeling, implementation, and verification.",2010,0, 1672,Version Management for Reference Models: Design and Implementation,"IoT (Internet of Things) is a promising technology to bring a boom of reform, including the combination of wireless communication technology, embedded system, mobile application and cloud technology. Although the IoT technologies have improved a lot over these years, most of them focus on single intelligent product rather than the integration of the IoT network. In this paper, a centralized management model (CMM) is proposed, which is designed to provide IoT-based devices with communication services. In addition, we provide the method of deploying the entire system for smart home scenario, including embedded appliances, mobile devices and centralized home gateway with linkage policy. Experiments demonstrate that the proposed model can significantly make IoT system more convenient and intelligent.",2015,0, 1673,"Vertically Differentiated Information Goods: Entry Deterrence, Rivalry Clear-out or Coexistence","In this paper we develop models to analyze price, quality and versioning strategies of information goods producers to deter entry and maintain market power. We find that in a competitive environment, firms provide higher quality information goods with a better ?price quality ratio? than in a monopoly. In the high-end market an incumbent monopolist can strategically set its quality to deter entry. In the low-end market, the incumbent monopolist can implement versioning strategies to deter entry and different versions exist as a signal to prevent potential entry. A vertically differentiated market is often referred to as a ?natural oligopoly? for traditional goods, whereas it can be regarded as a ?natural monopoly? for information goods.",2006,0, 1674,Vibration Fault Diagnosis of Large Generator Sets Using Extension Neural Network-Type 1,"

This paper proposes a novel neural network called Extension Neural Network-Type 1 (ENN1) for vibration fault recognition according to generator vibration characteristic spectra. The proposed ENN1 has a very simple structure and permits fast adaptive processes for new training data. Moreover, the learning speed of the proposed ENN1 is shown to be faster than the previous approaches. The proposed method has been tested on practical diagnostic records in China with rather encouraging results.

",2006,0, 1675,View-Based Eigenspaces with Mixture of Experts for View-Independent Face Recognition,"The proposed view-independent face recognition model based on mixture of expert, ME, uses feature extraction, C1 standard model feature, C1 SMF, motivated from biology on the CMU PIE dataset. The strength of the proposed model is using fewer training data as well as attaining high recognition rate since C1 standard model feature and the combining method based on ME were jointly used.",2009,0, 1676,Virtual organization security policies: An ontology-based integration approach,"With the popularity of heterogeneous network devices and security products, pervasive network security management has been a fashion. However, a chief problem lies in how to characterize various attack scenarios from the viewpoint of both security information and security policies for automation. This paper discusses the potential of applying an integration of ontology-based and policy-based approaches to automate pervasive network security management, and then proposes a model in order to validate the feasibility of this integrated approach.",2008,0, 1677,Visual categorization of brain computer interface technologies,"We present an automated solution for the acquisition, processing and classification of electroencephalography (EEG) signals in order to remotely control a remotely located robotic hand executing communicative gestures. The Brain-Computer Interface (BCI) was implemented using the Steady State Visual Evoked Potential (SSVEP) approach, a low-latency and low-noise method for reading multiple non-time-locked states from EEG signals. As EEG sensor, the low-cost commercial Emotiv EPOC headset was used to acquire signals from the parietal and occipital lobes. The data processing chain is implemented in OpenViBE, a dedicated software platform for designing, testing and applying Brain-Computer Interfaces. Recorded commands were communicated to an external server through a Virtual Reality Peripheral Network (VRPN) interface. During the training phase, the user controlled a local simulation of a dexterous robot hand, allowing for a safe environment in which to train. After training, the user's commands were used to remotely control a real dexterous robot hand located in Bologna (Italy) from Plymouth (UK). We report on the robustness, accuracy and latency of the setup.",2014,0, 1678,Visual querying and analysis of large software repositories,These Keynotes speeches the following: Large Scale Analysis of Software Repositories in Industry: Experiences from the CodeMine Project.,2014,0, 1679,Visualization of Interactions Within a Project: The IVF Framework,"

Almost all projects can be considered as cooperative undertakings. Their strategic management as well as the daily operations causes numerous interactions to occur, either among persons or among persons and resources. These interactions have been studied from various viewpoints but few researchers have focused on their visualization. The graphical representation of the cooperation is however a powerful tool to help the project participants to get a correct understanding of the situation. This paper proposes thus a structuring framework (IVF – Interaction Visualization Framework) of the visualization techniques used to display such interactions. Three basic axes of classification are used to structure the study. Which objects are visualized? Why are they visualized? How are they visualized? For each axis, several properties have been identified and the admitted values have been specified. This work can be considered as a first step towards a structured view of the ‘visualization of cooperation’ domain.

",2005,0, 1680,Visualization Patterns: A Context-Sensitive Tool to Evaluate Visualization Techniques,"In the myriad of visualization tools/techniques available to the users, it is hard to fathom the applicability of a given tool/technique to the visualization problem in hand. The tool users/evaluators have no guidance mechanism that could describe the suitability of visualization tools/techniques to fulfill their objectives. A tool may be good in one context and bad in another. This 'context of use' has become a pandemic in almost all measures of evaluations. To deal with this complex factor of tool selection/evaluation, we propose to describe a visualization tool/technique by encapsulating a technique in a pattern format describing the applicable context of use for it. We highlight the usefulness of such visualization patterns for evaluation by describing an exemplar visualization pattern solving a problem of displaying dependencies among software objects in the context of static software structure representation.",2007,0, 1681,Visualizing the Expertise Space,"Expertise management systems are being widely adopted in organizations to manage tacit knowledge embedded in employees' heads. These systems have successfully applied many information technologies developed in fields such as information retrieval and document management to support expertise information collection, processing, and distribution. In this paper, we investigate the potentials of applying visualization techniques to support exploration of an expertise space. We implemented two widely applied dimensionality reduction visualization techniques, the self-organizing map and multidimensional scaling; to generate expert map and expertise field map visualizations based on an expertise data set. Our proposed approach is generic for automatic mapping of expertise space of an organization, research field, scientific domain, etc. Our initial analysis on the visualization results indicated that the expert map and expertise field map captured useful underlying structures of the expertise space and had the potential to support more efficient and effective expertise information searching and browsing.",2004,0, 1682,WASP: Protecting Web Applications Using Positive Tainting and Syntax-Aware Evaluation,"Many software systems have evolved to include a Web-based component that makes them available to the public via the Internet and can expose them to a variety of Web-based attacks. One of these attacks is SQL injection, which can give attackers unrestricted access to the databases that underlie Web applications and has become increasingly frequent and serious. This paper presents a new highly automated approach for protecting Web applications against SQL injection that has both conceptual and practical advantages over most existing techniques. From a conceptual standpoint, the approach is based on the novel idea of positive tainting and on the concept of syntax-aware evaluation. From a practical standpoint, our technique is precise and efficient, has minimal deployment requirements, and incurs a negligible performance overhead in most cases. We have implemented our techniques in the Web application SQL-injection preventer (WASP) tool, which we used to perform an empirical evaluation on a wide range of Web applications that we subjected to a large and varied set of attacks and legitimate accesses. 
WASP was able to stop all of the otherwise successful attacks and did not generate any false positives.",2008,0, 1683,Web Service Testing and Usability for Mobile Learning,"Based on the summary of recent renowned publications, Mobile Learning (ML) has become an emerging technology, as well as a new technique that can enhance the quality of learning. Due to the increasing importance of ML, the investigation of such impacts on the e-Science community is amongst the hot topics, which also relate to part of these research areas: Grid Infrastructure, Wireless Communication, Virtual Research Organization and Semantic Web. The above examples contribute to the demonstrations of how Mobile Learning can be applied into e-Science applications, including usability. However, there are few papers addressing testing and quality engineering issues - the core component for software engineering. Therefore, the major purpose of this paper is to present how Web Service Testing for Mobile Learning can be carried out, in addition to re-investigating the influences of the usability issue with both quantitative and qualitative research methods. Out of many mobile technologies available, the Pocket PC and Tablet PC have been chosen as the equipment; and the OMII Web Service, the 64-bit .NET e-portal and GPS-PDA are the software tools to be used forWeb Service testing.",2006,0, 1684,Web services-based security requirement elicitation.,"This paper designs a Web Services-based security model for digital watermarking; the watermark embedding and watermark detection technology components will be the part of the Web Services of the site. The entire system architecture is based on Web Services via SOAP service requester and the exchange of information between service providers, and uses digital certificates, XML encryption and digital signatures and other security technology to ensure information exchange security. The model proposes for digital watermarking and Web services combined with a certain reference value, it also works in the marketing of multimedia network environment effectively protect digital products.",2011,0, 1685,Web site evolution.,The following topics are dealt with: Web site reverse engineering and maintenance; Web site clustering and clone detection; Web technologies; and Web site architecture analysis and evolution.,2003,0, 1686,Web-Based Engineering Portal for Collaborative Product Development,"The intention of this project was to enable the usage of shared materials for software engineering courses in seven universities located in four countries: Germany, Bulgaria, Serbia and Montenegro, and the Former Yugoslav Republic of Macedonia. All participants play active roles by making contributions to the course materials and conducting courses in their home universities. This has led to novel aspects for our project: namely, its multi-lateral character and a plethora of interesting contributions from different educational environments. These unique elements impacted on both the nature of the course material and the management of the project",2005,0, 1687,"Website Credibility, Active Trust and Behavioural Intent","

This paper evaluates data from an international anti-poverty campaign to assess if common principles from e-marketing and persuasive technology apply to online social marketing. It focuses on the relationships between website credibility, users' active trust attitudes and behavioural intent. Using structural equation modelling, the evaluation found a significant relationship between these variables and suggests strategies for online behavioural change interventions.

",2008,0, 1688,What Are You Feeling? Investigating Student Affective States During Expert Human Tutoring Sessions,"

One-to-one tutoring is an extremely effective method for producing learning gains in students and for contributing to greater understanding and positive attitudes towards learning. However, learning inevitably involves failure and a host of positive and negative affective states. In an attempt to explore the link between emotions and learning this research has collected data on student affective states and engagement levels during high stakes learning in one-to-one expert tutoring sessions. Our results indicate that only the affective states of confusion, happiness, anxious, and frustration occurred at significant levels. We also investigated the extent to which expert tutors adapt their pedagogical and motivational strategies in response to learners' affective and cognitive states.

",2008,0, 1689,What help do older people need?: constructing a functional design space of electronic assistive technology applications,"In times of ageing populations and shrinking care resources, electronic assistive technology (EAT) has the potential of contributing to guaranteeing frail older people a continued high quality of life. This paper provides users and designers of EAT with an instrument for choosing and producing relevant and useful EAT applications in the form of a functional design space. We present the field study that led to the design space, and give advice on using the tool.",2005,0, 1690,What is Business Process Management: A Two Stage Literature Review of an Emerging Field,AbstractBusiness Process Management (BPM) is an emerging new field in business. However there is no academically agreed upon conceptual framework. The aim of this paper is to establish a conceptual framework grounded in the recent literature. The purpose of this work is to ensure a better foundation for future research and to discussion of the implications of BPM on Enterprise Information Systems (EIS). The starting point of this study is a focused literature review of the BPM concept. This literature review leads to the formulation of a conceptual framework for BPM which is evaluated using a quantitative lexical analysis of a broader literature sample. Finally the implication of the BPM on EIS is discussed and potential future research opportunities are outlined.,2008,0, 1691,What is embedded systems and how should it be taught?---results from a didactic analysis,"This paper provides an analysis of embedded systems education using a didactic approach. Didactics is a field of educational studies mostly referring to research aimed at investigating what's unique with a particular subject and how this subject ought to be taught. From the analysis we conclude that embedded systems has a thematic identity and a functional legitimacy. This implies that the subject would benefit from being taught with an exemplifying selection and using an interactive communication, meaning that the education should move from teaching “something of everything” toward “everything of something.” The interactive communication aims at adapting the education toward the individual student, which is feasible if using educational methods inspired by project-organized and problem-based learning. This educational setting is also advantageous as it prepares the students for a future career as embedded system engineers. The conclusions drawn from the analysis correlate with our own experiences from education in mechatronics as well as with a recently published study of 21 companies in Sweden dealing with industrial software engineering.",2005,0, 1692,What Makes Evaluators to Find More Usability Problems?: A Meta-analysis for Individual Detection Rates,"

Since many empirical results have been accumulated in usability evaluation research, it would be very useful to provide usability practitioners with generalized guidelines by analyzing the combined results. This study aims at estimating individual detection rate for user-based testing and heuristic evaluation through meta-analysis, and finding significant factors, which affect individual detection rates. Based on the results of 18 user-based testing and heuristic evaluation experiments, individual detection rates in user-based testing and heuristic evaluation were estimated as 0.36 and 0.14, respectively. Expertise and task type were found as significant factors to improve individual detection rate in heuristic evaluation.

",2007,0, 1693,What Makes Game Players Want to Play More? A Mathematical and Behavioral Understanding of Online Game Design,"

The online game industry is a rapidly growing Internet-based business that has become very competitive in recent years. Game vendors have the option of designing online games in such a manner that they can match players against other players. Therefore, the interesting question is to identify conditions in these human-computer-human interactions that can motivate players to be engrossed and play more games and for longer periods of time. We approach this issue using a novel combination of mathematically based Tournament Theory and behaviorally oriented Flow theory to propose that when players' skills are equally matched, the challenge intensity of the game is moderate and players will play more games and for long. We also propose that individual traits such as performance goal orientation will moderate these effects. We test our ideas with a laboratory research design. Our preliminary findings provide support for our ideas.

",2007,0, 1694,What Stories Inform Us About the Users?,"A user story (US) is reopened for reworking due to shortcomings from four major fronts- business analyst (BA), developer, quality analyst (QA) and environmental issues. BA is responsible for capturing requirements and documenting the requirements in the form of user stories; developer is responsible for the implementation of the user story; and the QA is responsible for testing of US. Now if any of three does not perform his job accurately then the probability of reopening of US increases. So we can reduce the probability of reopening of US by improving on shortcoming from three ends BA, developer and QA. As for as environmental issues, are concerned, they can be controlled by QA, developer and BA. The aim of the paper is to identify different areas from BA, Developer and QA's end to reduce the probability of reopening of a US and thereby reducing the user story reopen count in the Scrum development.",2015,0, 1695,When Is Assistance Helpful to Learning? Results in Combining Worked Examples and Intelligent Tutoring,"

When should instruction provide or withhold assistance? In three empirical studies, we have investigated whether worked examples, a high-assistance approach, studied in conjunction with tutored problems to be solved, a mid-level assistance approach, can lead to better learning. Contrary to prior results with untutored problem solving, a low-assistance approach, we found that worked examples alternating with isomorphic tutored problems did not produce more learning gains than tutored problems alone. However, the examples group across the three studies learned more efficiently than the tutored-alone group. Our studies, in conjunction with past studies, suggest that mid-level assistance leads to better learning than either lower or higher level assistance. However, while our results are illuminating, more work is needed to develop predictive theory for what combinations of assistance yield the most effective and efficient learning.

",2008,0, 1696,When Software Engineers Met Research Scientists: A Case Study,"Understanding motivation of software engineers has important implications for industrial practice. This complex construct seems to be affected by diverse environmental conditions and can affect multiple dimensions of work effectiveness. In this article, we present a grounded theory that describes the motivation of software engineers working in a not-for-profit private research and development organisation. We carried out a holistic case study for seven months, using structured interviews, diary studies, and documental analysis for data collection, and grounded theory techniques for data analysis and synthesis. The results point to task variety and technical challenges as the main drivers of motivation, and inequity and high workload (caused by poor estimations in the software process) as the main obstacles to motivation in the organisation.",2013,0, 1697,When Success Isn?t Everything ? Case Studies of Two Virtual Teams,"AbstractResearchers have been attempting to identify the factors that contribute to virtual team success. Two virtual teams were studied over six-months using an interpretive approach and qualitative data collection techniques. The outcomes of these teams were outwardly very poor. Yet, team members considered themselves successful in relation to the circumstances in which they found themselves. The team members identified the factors they believed contributed to the outcomes and the rationale for why they were successful despite the outward appearances. The interpretive approach allowed for an exploration of the circumstances, and how these perspectives were derived. The cases indicate that working in distributed mode can be problematic if teamwork issues are not addressed, and a technological focus adopted.",2005,0, 1698,When Technology Meets the Mind: A Comparative Study of the Technology Acceptance Model,"

Issues related to technology, including diffusion, acceptance, adoption, and adaptation, have been the focus of research for different disciplines including Information Systems (IS), System Dynamics, Psychology, and Management Science. Of all research conducted and models developed to study technology-related issues, the Technology Acceptance Model (TAM) stands out as the most prominent, particularly in the field of IS. However, technology acceptance research has been relatively limited in its application to the public sector. Therefore, there is a concurrent need to develop and gain empirical support for models of technology acceptance within the public sector, and to examine technology acceptance and utilization issues among public employees to improve the success of IS implementation in this arena. In this paper we present a more comprehensive, yet parsimonious model of technology acceptance and suggest testing it in both the public and private sectors to help understand the similarities and differences (if any) between the two sectors.

",2005,0, 1699,WildCAT: a generic framework for context-aware applications,"We present two blueprints of ad-hoc wireless applications implementable on low-cost hardware platforms. For this demo we focus on a combination of MSP430F148 microcontroller with CC1100 RF module. In addition to the actual working hardware, we demonstrate a powerful emulation engine, which allows us to study virtual deployments of wireless ad-hoc networks running our applications. Our objective is to illustrate the main components of Olsonet's framework for practical ad hoc networking.",2007,0, 1700,Wireless and Wearable Overview: Stages of Growth Theory in Medical Technology Applications,"Mobile medical applications have the capacity to provide services for patients and healthcare professionals regardless of time or place. The aim of this paper is to explore the current status of mobile, wireless and wearable technological applications within the medical environment. After conducting a literature review on the availability of mobile, wireless and wearable computing applications within medicine, a summary of their purpose, features and functions was conceptually mapped to the Gibson and Nolan (1974) Stages of Growth Framework. Findings from the literature, the mapping process and limitations for growth are discussed within each of the technology categories. Limitations and challenges of development are highlighted and suggestions are made for future research.",2005,0, 1701,Working together inside an emailbox,"The correction methods for inertial navigation system of the ""pig"" are proposed in this paper. These methods are related with such sensors as inclinometers and potentiometers and provide independent information about the angles used for improving the final data position. These methods make it possible to design independent navigation systems for a sustained use inside of pipelines.",2007,0, 1702,Working with Alternative Development Life Cycles: A Multiproject Experiment,"A variety of life cycle models for software systems development are generally available. However, it is generally difficult to compare and contrast the methods and very little literature is available to guide developers and managers in making choices. Moreover in order to make informed decisions developers require access to real data that compares the different models and the results associated with the adoption of each model. This paper describes an experiment in which fifteen software teams developed comparable software products using four different development approaches (V-model, incremental, evolutionary and extreme programming). Extensive measurements were taken to assess the time, quality, size, and development efficiency of each product. The paper presents the experimental data collected and the conclusions related to the choice of method, its impact on the project and the quality of the results as well as the general implications to the practice of systems engineering project management.",2005,0, 1703,Writing for computer science: a taxonomy of writing tasks and general advice,"Computer science graduates lack written communication skills crucial to success in the workplace. Professional and academic organizations including ACM, IEEE, ABET, CSAB, and NACE have stressed the importance of teaching computer science undergraduates to write for years, yet the writing problem persists. In this paper we provide guidance to computer science instructors who want student writing skills to improve. 
First, we organize prior work on writing for computer science into a goal-oriented taxonomy of writing tasks. Each task includes a clear, concise, and detailed model that can be used as the framework for a student writing assignment. Second, we provide general advice for incorporating writing into any computer science course. Finally, we discuss the application of our taxonomy and advice to writing tasks in several computer science courses.",2006,0, 1704,XAROP: A Midterm Report on Introducing a Decentralized Semantics based Application,"Knowledge management solutions relying on central repositories sometimes have not met expectations, since users often create knowledge ad-hoc using their individual vocabulary and their own IT infrastructure (e.g., their laptop). To improve knowledge management for such decentralized and individualized knowledge work, it is necessary, first, to provide a corresponding decentralized IT infrastructure and, second, to deal with specific problems such as security and semantic heterogeneity. In this paper, we describe the technical peer-to-peer platform that we have built and summarize some of our experiences applying the platform in a case study for coopetitioning organizations in the tourism sector.",2004,0,