2nd INTERNATIONAL SOFTWARE QUALITY WEEK EUROPE (QWE'98)
9-13 November 1998, Brussels, Belgium
PAPER AND PRESENTATION ABSTRACTS
Preliminary version, subject to change.
Updated 22 September 1998.
This paper provides an overview of risk analysis fundamentals, focusing on software testing with the key objectives of reducing the cost of the project test phase and reducing future potential production costs by optimising the test process. The phases of Risk Identification, Risk Strategy, Risk Assessment, Risk Mitigation (Reduction) and Risk Prediction are discussed. Of particular interest is the use of indicators to identify the probability and the consequences of individual risks (errors) if they occur.
The body of this paper contains a case study of the system test stage of a project to develop a very flexible retail banking application with complex test requirements. The project required a methodology that would identify functions in their system where the consequence of a fault would be most costly (either to the vendor's customers or to the vendor) and also a technique to identify those functions with the highest probability of faults.
A risk analysis was performed and the functions with the highest risk, in terms of probability and cost, were identified. A risk-based approach to testing was introduced, i.e. during testing, resources would be focused on those areas representing the highest risk. To support this approach, a well-defined but flexible test organisation was developed.
The test process was strengthened and well-defined control procedures were implemented. The level of test documentation produced prior to test execution was kept to a minimum and, as a result, more responsibility was passed to the individual tester. To support this approach, good progress tracking was essential to show the actual progress made and to calculate the resources required to complete the test activities.
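The risk-based prioritisation described above boils down to a simple calculation: each function's risk exposure is its fault probability multiplied by the cost of a failure in production, and test effort is directed at the largest exposures first. The sketch below illustrates the arithmetic in Python; the function names, probabilities and costs are invented for illustration and are not taken from the case study.

```python
# Hypothetical illustration of risk-based test prioritisation:
# exposure = fault probability x cost of a failure in production.
functions = [
    # (function, fault probability, cost of a production failure in EUR)
    ("interest calculation", 0.30, 500_000),
    ("standing order processing", 0.15, 200_000),
    ("account statement printing", 0.20, 20_000),
    ("address change", 0.05, 5_000),
]

def exposure(prob, cost):
    """Risk exposure: expected cost if the function is released untested."""
    return prob * cost

ranked = sorted(functions, key=lambda f: exposure(f[1], f[2]), reverse=True)
total = sum(exposure(p, c) for _, p, c in ranked)

print("Test priority (highest risk first):")
for name, p, c in ranked:
    e = exposure(p, c)
    print(f"  {name:30s} exposure = {e:10,.0f} EUR ({e / total:5.1%} of total)")
```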
RETURN TO QWE'98 PROGRAM
From this presentation, attendees will see the need to prepare high-quality test environments as a mandatory prerequisite for quality testing. High-quality test environments are defined as offering high coverage with low volumes of data. Attendees will also learn how they can quickly set up and use test environments that are responsive to testers' requirements.
RETURN TO QWE'98 PROGRAM
The tutorial represents an up-to-date and fresh evolution of a workshop that has been successfully presented at a number of international events over the last year. It consists of the following parts:
- Peculiarities of Internet/Intranet projects and products
- Test approach for Internet/Intranet applications
- Testing of static WWW
- Syntactic/ Security/ Service testing
- Testing of dynamic WWW
- Module/ Integration/ System Testing
- Web CAST tooling
- Case Studies with analysis of ROI
RETURN TO QWE'98 PROGRAM
This is an overview of the testing field. Its purpose is to provide you with the technical and conceptual vocabulary of testing. Testing has emerged as a field within software engineering and has acquired a big vocabulary. It has progressed, in the past 20 years, from intuition to science -- from personal heuristics to well-understood practices rooted in theory and confirmed by use and experiments.
RETURN TO QWE'98 PROGRAM
This is an overview of testing and how test methods apply to the solution of the Y2K problem.
RETURN TO QWE'98 PROGRAM
In these days of uncertainty -- Y2K, the Euro, the Japanese economy, US lawsuits against Microsoft and Intel, consolidation on an international scale, escalating privatization of European PTTs, Far-East economic tigers becoming pussycats, international terrorism, etc. -- we can turn to the prophecies of Nostradamus to unambiguously see what will evolve in the software industry, especially with respect to quality and testing, from now out to 2005. The talk will provide excerpts from Nostradamus, and internationally renowned seer and prognosticator Beizer will interpret them for the audience.
RETURN TO QWE'98 PROGRAM
Meeting increasing global competition by shortening the software product development life cycle while guaranteeing product quality is one of the main concerns of software-intensive organizations.
The aim of this paper is first to identify the most relevant factors for TTM (Time to Market) in software-intensive organizations, second to select the SPICE 98 model processes that clearly impact those factors, and finally to define an improvement plan based on the selected SPICE processes.
RETURN TO QWE'98 PROGRAM
The central problem of object-oriented testing is determining method activation sequences for the testing of a class. In the literature, this is called the "intra-class" level of testing. For object-oriented languages such as Java™, specification-based strategies are further developed than structural test strategies. Although both are essential for effective testing, structural strategies are especially well suited for the Java environment, where dynamic linking and bean technology make it virtually impossible to predict the sequences that will be invoked by client objects at run-time.
A testable model of the interaction among class methods is needed for test design, at the class interface level. An orthogonal model of intra-class interactions at the implementation level is necessary to assess the adequacy of a test suite at class scope. This paper extends the class scope implementation model by developing information-flow paths from the class flow graph.
Information flow analysis elucidates the implicit paths that result as methods access instance and class variables. This theory is capable of identifying fundamental subsequences which can be composed to produce almost all necessary method activation sequences.
A surprising result of this theory is that a test that exercises a particular sequence may not be an effective test of that sequence. The theory indicates that a special form of path coverage must be performed during testing to assess the effectiveness of tests.
The exposition of the theory will be informal, using a simple Java applet to demonstrate the basic concepts. We shall also discuss how the theory can be extended beyond intra-class testing to the integration testing of a Java applet or application.
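To give an informal feel for the information-flow idea, the toy sketch below derives candidate method activation sequences from def/use information about instance variables: a method that defines a variable followed by a method that uses it forms a fundamental two-method subsequence, and longer sequences are composed from such pairs. The class, variable names and pairing rules are illustrative assumptions, not the paper's class-flow-graph construction.

```python
# Toy illustration: derive candidate method activation sequences from
# def/use information about instance variables (hypothetical class "Account").
defs = {            # instance variables each method defines (writes)
    "open":     {"balance", "owner"},
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "close":    {"owner"},
}
uses = {            # instance variables each method uses (reads)
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "get_info": {"balance", "owner"},
    "close":    {"balance"},
}

# Fundamental subsequences: (definer, user, shared variable) triples.
def_use_pairs = [
    (d, u, v)
    for d, d_vars in defs.items()
    for u, u_vars in uses.items()
    for v in d_vars & u_vars
    if d != u
]

# Compose pairs into three-method sequences where the middle method is
# the user of one pair and the definer of the next.
sequences = {
    (d1, m, u2)
    for d1, m, _ in def_use_pairs
    for m2, u2, _ in def_use_pairs
    if m == m2
}

print("fundamental def-use pairs:", sorted(def_use_pairs))
print("composed 3-method sequences:", sorted(sequences))
```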
RETURN TO QWE'98 PROGRAM
This paper will show:
- An interdisciplinary approach will be a key factor in future market success.
- A number of key issues concerning how testing and QA actually work in their social environment are largely neglected or ignored at present.
- Plenty can be learned at the crossroads of technology and social sciences, although this place is frightening for engineers and psychologists/sociologists alike.
RETURN TO QWE'98 PROGRAM
We present a dynamic Bayesian model for predicting the expected number of failures over future tests, after some test results have already been observed. It makes no assumption about how tests are selected and can be applied in early test phases, or when reliability growth models cannot be used. Some examples are presented.
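As a rough illustration of the kind of prediction involved (this is a minimal Beta-Binomial sketch, not the authors' dynamic Bayesian model), a conjugate update of a prior on the per-test failure probability yields the expected number of failures over a given number of future tests:

```python
# Minimal Beta-Binomial illustration (not the paper's model): predict the
# expected number of failures in future tests from observed test results.
def expected_future_failures(observed_tests, observed_failures,
                             future_tests, prior_a=1.0, prior_b=1.0):
    """Posterior mean failure probability times the number of future tests.

    A Beta(prior_a, prior_b) prior on the per-test failure probability is
    updated with the observed results; no assumption is made about how the
    tests were selected.
    """
    post_a = prior_a + observed_failures
    post_b = prior_b + observed_tests - observed_failures
    p_fail = post_a / (post_a + post_b)          # posterior mean
    return future_tests * p_fail

# Example: 4 failures seen in 50 tests; expected failures in the next 100 tests.
print(expected_future_failures(50, 4, 100))      # ~9.6
```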
RETURN TO QWE'98 PROGRAM
Classes (objects) have distinctly different behavior patterns (modes). A mode must be identified to select an effective test strategy. This tutorial presents new approaches for domain/state modeling to characterize class modality and shows how to produce effective test suites from these models.
Participants in this seminar will learn how to:
- Identify the mode of the class under test.
- Develop a domain model of a class.
- Develop a state model of class behavior.
- Develop a domain-based test plan using the vertex probe strategy.
- Develop a behavior-based test plan using the FREE state model.
- Develop a test suite that achieves either vertex or N+ state coverage (a small sketch follows this list).
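To give a flavour of what a behavior-based test plan produces, the sketch below takes a small state model (states, events, transitions) and derives event sequences from the initial state so that every transition is exercised at least once. The stack example and the breadth-first construction are illustrative assumptions only; they are not the FREE state model or the exact N+ strategy taught in the tutorial.

```python
from collections import deque

# Illustrative state model of a two-slot stack (not taken from the tutorial).
INITIAL = "empty"
TRANSITIONS = {                      # (state, event) -> next state
    ("empty", "push"): "partial",
    ("partial", "push"): "full",
    ("partial", "pop"): "empty",
    ("full", "pop"): "partial",
}

def shortest_event_path(target_state):
    """Breadth-first search for a shortest event sequence reaching target_state."""
    queue = deque([(INITIAL, [])])
    seen = {INITIAL}
    while queue:
        state, path = queue.popleft()
        if state == target_state:
            return path
        for (src, event), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [event]))
    raise ValueError(f"{target_state} unreachable")

# One test sequence per transition: reach the source state, then fire the event.
test_suite = [
    shortest_event_path(src) + [event]
    for (src, event) in TRANSITIONS
]
for sequence in test_suite:
    print(" -> ".join(sequence))
```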
RETURN TO QWE'98 PROGRAM
Early software quality assurance is essential to mastering EURO conversion projects. Allianz Life Assurance sets a specific priority for it by using innovative software inspection technologies such as Perspective-based inspections and quantitative models to control defect content and assess inspection processes. This paper describes the transfer of these technologies to Allianz and their impact on software quality.
RETURN TO QWE'98 PROGRAM
This tutorial will cover the following topics:
- Growing need for test automation: an essential part of an improvement process.
- Not just a technical issue, but also an organizational issue.
- An approach to structure this automation process. A model is provided, which includes a life cycle and technical and organizational aspects.
- Using test tools wisely: Proven techniques (including "data driven" and "framework" architectures) to develop flexible and maintainable automated test suites.
- Live demonstration of such techniques.
- Cost/benefit samples from the real world.
RETURN TO QWE'98 PROGRAM
This presentation is an account of the practical experience obtained in a Belgian organization during the execution of a program for structuring the testing approach. In this young and very dynamic organization the on-going business demanded major attention from all employees. This puts a lot of restrictions on the feasibility of any change programme. This presentation will show how the dilemma was recognized and how it was dealt with in practice. It offers attendees an example of a successful phase-based implementation of structured testing. This provides attendees with a reference for their own change programmes, and the benefit of getting acquainted with major pitfalls and lessons learned in practice.
Topics of this presentation are:
- The phases of a program for introducing structured testing
- The improvement actions
- Getting commitment and involvement in all phases
- Pitfalls and lessons learned in this process
RETURN TO QWE'98 PROGRAM
The audience will get a good insight into the main principles behind the method. It will become clear how testing can be organized so that tests are developed in a manageable way. Test automation is an integral part, but it is not dominating. The focus is on the tests and their results. Although too much detail will be avoided, the information will be concrete enough to allow the audience to make a start with applying the method. Based on experiences with existing customers, ideas will be described for how the test processes can be embedded in the organization.
RETURN TO QWE'98 PROGRAM
This presentation will discuss the advantages of using a practical set of metrics throughout a software development project life cycle. It will focus on how the information provided by these metrics can be used in today's product development environment. It will examine a specific set of quality metrics and discuss the advantages and possible disadvantages of each metric. In parallel to discussing the individual metrics, I'll discuss how the metrics cross check each other, can provide data for follow-on product development projects, and can be tailored to suit the needs of each individual organization. Finally, I will discuss how organizations currently not using any metrics today can easily get started.
RETURN TO QWE'98 PROGRAM
The paper describes:
- the context (a small software house) of a process improvement initiative aimed at improving the management of customers' requirements
- the adopted guidelines, derived from: ISO/IEC 12207 and ISO/IEC PDTR 15504 (SPICE) (process model), IEEE J-STD-016-1995 (documentation), ami and GQM (improvement measurements)
- the measurement results
RETURN TO QWE'98 PROGRAM
The challenge for testers: reduce the testing interval without reducing quality. One answer: find a new way to approach test design and test generation. This paper will discuss an ongoing Lucent Technologies experiment in automated test generation from a behavioral model of the software product under test. Results indicate that our new approach can increase the effectiveness of our testing while reducing the cost of test design and generation.
RETURN TO QWE'98 PROGRAM
This presentation will show:
- Hidden dangers in Y2000 remediation projects
- Why code scanning tools can't help in determining Y2000 readiness
- How to test for Y2000 readiness
- How to use test tools to improve quality throughout the Y2000 project life-cycle
RETURN TO QWE'98 PROGRAM
This presentation addresses the testing needs for Year 2000 compliance and the data management needs resulting from the advent of the euro. Further, it discusses how Data Commander™, a testing and data management tool, can help satisfy both these requirements.
RETURN TO QWE'98 PROGRAM
This presentation will review COQ's benefits, goals, strategies, implementation process and expectations. The goal of this presentation is to assist management and professionals with starting a COQ process. It is a learning process that results in improvements no matter what level of an organization it is implemented at. This presentation will also discuss pitfalls, implementation ideas, and expected results.
Cost of Quality (COQ) provides management with a measurable way of administering and managing the quality of a process. It has been used in the manufacturing and service industries for decades, but only recently have software organizations been exposed to COQ.
COQ can provide the software industry with remarkable insight as well as improved quality and profitability. But an understanding of what COQ is and is not needs to be established, and some ways of implementing COQ are better than others. We can learn from the other industries that have been using COQ as a tool to improve organizational performance levels.
The software industry spends on average 50% of every sales dollar on quality. That is excessive: manufacturing averages 15-20% and the service industry 25-30%.
COQ is part of the overall quality improvement process. It is the measurement tool used to judge the cost effectiveness of a quality improvement process. It is not a magic bullet; there are pitfalls and wrong ways to implement COQ.
As Dr. W. Edwards Deming said, "Running a company by profit alone is like driving a car by looking in the rearview mirror. It tells you where you've been, but not where you are going." COQ is the "gap" in GAAP (Generally Accepted Accounting Principles).
RETURN TO QWE'98 PROGRAM
Many software development organizations are increasingly using the object-oriented (OO) approach as the basis for their information technology development strategies and delivered solutions. But how does one measure the quality of object-oriented software? This tutorial presentation will provide practical and useful knowledge centered on measuring object-oriented software quality using emerging OO code-level analysis and process techniques with automated tool support, as well as on defining OO quality and what it might mean. Examples will be drawn from C++ and Java.
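To make code-level OO measurement concrete (the tutorial's examples are drawn from C++ and Java; the sketch below uses Python purely for brevity), two simple class-level indicators, method count per class and number of declared base classes, can be collected with the standard ast module:

```python
import ast

SAMPLE = """
class Shape:
    def area(self): ...
    def perimeter(self): ...

class Square(Shape):
    def __init__(self, side): self.side = side
    def area(self): return self.side ** 2
"""

def class_metrics(source):
    """Return {class name: (method count, declared base count)} for a module."""
    metrics = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            methods = [n for n in node.body
                       if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
            metrics[node.name] = (len(methods), len(node.bases))
    return metrics

for name, (n_methods, n_bases) in class_metrics(SAMPLE).items():
    print(f"{name}: {n_methods} methods, {n_bases} declared base class(es)")
```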
RETURN TO QWE'98 PROGRAM
This panel will provide a moderated forum for discussing the impact, change, and reality posed by the Euro currency conversion. The cultural, political, and liability concerns posed by the conversion will be discussed as well as the real-world challenges and experiences encountered by those working the Euro problem. The intent of the panel is to provoke a lively and spirited discussion on the facts versus the myths surrounding the Euro conversion, including the technical and managerial challenges associated with the Euro issue. Audience participation is encouraged.
RETURN TO QWE'98 PROGRAM
The implications of the European Monetary Union (EMU) for financial and non-financial firms are significant. Many companies' long-term plans are affected by these challenges: prospects for growth, inflation and labor costs, exchange rate and pricing difficulties, a risk of protectionism, and market competitiveness. These factors have to be kept in mind as European companies begin the conversion process.
RETURN TO QWE'98 PROGRAM
This paper describes a new technique to solve the most critical elements of the "Year 2000 Problem". The Y2K problem has both management and technical aspects; the technical part includes testing for Y2K faults and the program and database fixing process. Our method is designed so that Y2K bugs can be found automatically. This means that we do not need to know the value of any output, or how to select input data. Our method is not only widely applicable, but fast and reliable as well. To achieve this, our method consists of three main parts. The key idea of our approach is to compare the branch (or output) functions for different input years. This method reveals the Y2K bug even if the output of the program was correct, i.e., the tester would not observe any failure. To satisfy reliability requirements we apply a very strong testing criterion. To speed up testing we apply slicing, by which only a small subset of the program has to be adequately analyzed.
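The core idea of comparing the program's branch behavior for different input years, rather than checking expected outputs, can be illustrated with a toy example. Below, a deliberately seeded two-digit-year comparison is exposed by shifting both input years by the same offset and checking whether the branch taken changes; everything here is an illustrative assumption, not the paper's algorithm, and it omits the strong coverage criterion and the slicing step.

```python
# Toy program under test with a seeded Y2K fault: it compares only the
# last two digits of the year.
def card_expired(today_year, expiry_year):
    return (today_year % 100) > (expiry_year % 100)

# Illustrative check: shift both input years by the same offset and compare
# the branch taken.  A year-independent program should take the same branch;
# a difference reveals a Y2K fault without knowing the expected output.
SHIFT = 10
suspects = []
for today in range(1985, 1996):
    for expiry in range(1985, 1996):
        if card_expired(today, expiry) != card_expired(today + SHIFT,
                                                       expiry + SHIFT):
            suspects.append((today, expiry))

print(f"{len(suspects)} input pairs where shifting the years changes the branch taken,")
print("e.g.", suspects[:3])
```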
RETURN TO QWE'98 PROGRAM
CMM was designed for a specific purpose, but it is being adopted by many organizations whose target objective is not selling software to the American DoD. The reason may be that CMM is a staged improvement model whose success is proven and whose adoption is considered feasible by many companies. But CMM also has a rigid structure, and its main orientation towards big companies can make its adoption expensive and difficult for SMEs. Is it possible to hide the model's complexity throughout the steps of an improvement plan without losing its principles? Is it possible to design evaluation tools and improvement methods that can be used by sensible managers, not only by specialized consultants? This paper summarizes the intent of the European Software Institute (ESI) in developing a tool that, using SPICE characteristics, could help small and medium-sized development organizations achieve SW-CMM level 2 benefits: BIG-CMM.
RETURN TO QWE'98 PROGRAM
This presentation will show:
- A broad definition of Risk
- A method for specification of risk level objectives
- A template specification of risk levels
- Examples of integration of risk specification in setting any objectives
- A Template Policy for Risk Management
- 10 Principles of Risk Management:
1. Frequent Feedback
2. Rigorous Requirements
3. Requirement Impact Estimation
4. Upstream Pollution Control
5. Personal Risk Responsibility
6. Design Out Risk
7. Maximum Risk Policy
8. Maximize Profit, not minimize Risk itself
9. Backups are part of the Price
10. Contract Out Risk
- A method for evaluation of risks for many objectives and many strategies: Impact Estimation
- Using Evolutionary Project Management to control project risk
- Quality Assurance as a risk controlling vehicle
- Is Testing Enough?
- What about Defect Detection Inspections, early?
- What about Defect Prevention through continuous improvement of processes?
RETURN TO QWE'98 PROGRAM
In order to reduce cost and improve the reuse rate, we propose a testing automation environment integrating several techniques:
- To create a knowledge database as the core of the testing automation environment which groups the test cases and enough information to execute them.
- To use Web technology in a manner which reduces client setup to a minimum and deploys testing as widely as possible throughout the company.
- To share testing resources and simulation software, so as to allow clients access to expensive resources (expensive due to development time or hardware/device availability).
RETURN TO QWE'98 PROGRAM
Many organizations have some form of review for documents and/or code. However, the process may not be as effective or as efficient as it should be - defects are missed which should have been found, yet the process is costly. Inspection is the most cost-effective technique for identifying major costly defects early in the life cycle. Inspection can be applied to any written documents: proposals, contracts, specifications, test documents, designs, user manuals, code, etc., and can be used by non-technical people, technical software developers, managers or users. Many organizations have achieved significant improvements in quality, productivity and time to market by using the ideas taught in this tutorial. Participants are invited to bring their own important documents to see for themselves how the inspection technique will enable them to improve their quality.
RETURN TO QWE'98 PROGRAM
(Abstract to be Supplied)
RETURN TO QWE'98 PROGRAM
This presentation will show:
- Evolutionary testing is well-suited for verifying the temporal correctness of real-time software automatically.
- Several experiments showed the good performance and practicality of evolutionary testing.
- Combination of evolutionary testing with systematic testing further improves the test quality.
RETURN TO QWE'98 PROGRAM
This presentation will show:
- Adequacy criteria provide useful test completion indicators for testers. We present techniques for adequacy measurement for objects in OO software.
- Coverage criteria for objects should address inter-method flows, both direct and indirect, in addition to the usual internal method control flows.
- Object-flow-based criteria subsume criteria such as statement or branch coverage; criteria based on method structure alone cannot capture coverage of object flows arising through object data.
RETURN TO QWE'98 PROGRAM
Any observer of human behavior will acknowledge that hindsight is almost always better than foresight. The challenge with Software Testing is to adapt the hindsight learned on one project into foresight on the next. What is often ignored is the benefit that Software Development Defect Analysis has in relation to Software Testing. Since the major purpose of Software Testing is to find bugs, examining defect patterns and trends, and then testing in those areas, will help focus the efforts of any Software Test engineer. The first part of this paper will look at Defect Analysis results from six Hewlett-Packard Software Development projects and explain how observed defect trends can be interpreted to help improve the focus of Software Testing.
In the past, Software Test organizations have often focused some amount of effort on analyzing defects from Software Development projects. There are several models actively used, and proven successful, to analyze Software Development defects. What is equally important, but sometimes overlooked, is the examination of defect trends in the Software Testing process. In contrast to the multiple models available for categorizing Software Development defects, there are only a few models available, and proven useful, for categorizing Software Testing Process defects. The second part of this paper will introduce a step-by-step process and model to help categorize Software Testing Process defects.
RETURN TO QWE'98 PROGRAM
Due to their multiple interaction modes, operational process control systems cannot be fully tested on site. Single nodes should be fully tested off-site. ABB has also tested complete networks representing typical configurations at its factories.
On-site testing and software/hardware upgrades must be planned long in advance, to fit plant maintenance outages. The scope of testing for a particular site must be defined in close cooperation with the plant's engineers. In addition to a precise configuration check, it is important to identify all external communication interfaces and their possible failure modes.
ABB has developed a Y2K concept based on four pillars. Experience to date at customer sites is very positive. This presentation will dwell upon practical aspects such as cost/schedule estimation, test methods and tools, and organizational aspects.
RETURN TO QWE'98 PROGRAM
In the rush to automate software testing, many people forget to pay attention to the basics -- i.e., ensuring they have a valid and effective software test process and prioritizing their automation needs for a greater chance of success. Edward Kit addresses these issues and concerns when he presents:
- Integrating Tools and Your Testing Process
- Fixing Three Common Pitfalls of Capture/Playback
- The Potential Synergy of Five Key Testing Tools
- The Importance of Technical Reviews
- Key Software Testing Success Factors
RETURN TO QWE'98 PROGRAM
As systematic measurement gains more and more importance for the development of systems and software, techniques are needed to interpret such measurements correctly and to reduce the measurement effort to a minimum. These demands can be fulfilled by statistical techniques that enable conclusions to be drawn from a small sample of observations about the whole set of objects, with known bounds on the error probability. Most important for the success of these methods is the fact that the underlying mathematical theory has to be implemented only once, by an expert; later on, non-experts can also apply these methods without any problems. We use metrics based on accounting data as an industrial case study. Routine observations of project accounts (working hours, budget, planned project length, ...) are used to predict the outcome of projects (delay, budget overrun, ...). Our case study includes more than 300 projects.
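As a minimal illustration of drawing a conclusion about a whole portfolio from a small sample with a known error bound (the figures and the simple normal-approximation interval are assumptions for the example, not the method used in the case study), consider estimating the proportion of projects that will overrun their budget:

```python
import math

# Hypothetical sample of 30 finished projects out of a portfolio of 300:
# 1 = budget overrun, 0 = within budget.
sample = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0,
          0, 1, 0, 0, 0, 0, 1, 0, 0, 0,
          0, 0, 1, 0, 0, 0, 0, 0, 1, 0]

n = len(sample)
p_hat = sum(sample) / n                      # observed overrun rate
z = 1.96                                     # ~95% two-sided confidence
half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)

print(f"sample overrun rate : {p_hat:.2f}")
print(f"95% interval        : {p_hat - half_width:.2f} .. {p_hat + half_width:.2f}")
print(f"projected overruns in 300 projects: "
      f"{300 * (p_hat - half_width):.0f} .. {300 * (p_hat + half_width):.0f}")
```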
RETURN TO QWE'98 PROGRAM
Commercial requirement management packages are becoming more popular. This talk addresses the current misconceptions regarding what commercial requirement management packages will do for an organization. In particular, a simple and reasonable requirement management process that utilizes a simple, freely accessible, requirement management RDBMS is presented as an alternative. The process and the tools used to implement requirement management are equally important. The simple processes and software tools utilized by the author, and presented here, are made available to all conference participants.
RETURN TO QWE'98 PROGRAM
Thirty years ago software was not considered a concrete value. Everyone agreed on its importance, but it was not considered a good or possession. Nowadays, software is part of the balance sheet of an organization. Data is slowly following the same process. The information owned by an organization is an important part of its assets, and it can be used to competitive advantage. However, data has long been underestimated by the software community. Usually, methods and techniques apply to software (including data schemata), but the data itself has often been considered an external problem. Validation and verification techniques usually assume that data is provided by an external agent and concentrate only on software.
In this work we present different issues related to data quality from a software engineering point of view. We propose three main streams that should be analyzed: data quality metrics, data testing, and data quality requirements in the software development process. We point out the main problems and opportunities in each of them.
RETURN TO QWE'98 PROGRAM
As the world moves to an Internet-based system of commerce, there is an increasing need for ways to assure E-Commerce applications. Using modern methods based on the TestWorks product line, including the new CAPBAK/Web system, it is now possible to perform end-to-end content validation of E-Commerce applications.
RETURN TO QWE'98 PROGRAM
Remote Testing Technology (RTT) represents a powerful packaging of TestWorks technology that supports detailed analysis of user interactions and early field use of an Application Under Test (AUT). This approach effectively makes TestWorks your onboard "Quality Agent" for early-phase release analysis of products.
In the RTT methodology the AUT is pre-processed by one of several TestWorks tools and the instrumented AUT is readied for deployment. Once in the hands of users, this breakthrough application of TestWorks technology makes it possible for the first time to retrieve accurate, realistic, and definitive data about how an AUT is actually used in the field.
As users work with the AUT the TestWorks-installed monitoring software silently and invisibly records GUI interactions, keyboard activity, internal structural coverage, interface coverage, and feature coverage data. This information collection activity operates at such low overhead that the user is generally unaware that data is being collected.
Data can be accumulated locally for immediate analysis with the TestWorks-based analysis tools. Alternatively, the field-collected information captured by RTT can be sent by email to a central analysis site, or can be transmitted by HTTP or other available Internet protocols directly to an analysis website.
Analysis methods include direct viewing of captured user interactions in the scripting language, playback of all or part of the script, coverage analysis of module and system interactions, and display of feature coverage values. All information can be correlated directly with source code module names, source code lines, etc.
RETURN TO QWE'98 PROGRAM
This tutorial will quickly and efficiently teach you the basics of how to apply Software Reliability Engineering (SRE) to testing and development to make software more reliable and to develop and test it faster and cheaper.
SRE is based on four simple, powerful ideas:
- Set quantitative reliability objectives that balance customer needs for reliability, timely delivery, and cost
- Track reliability during test
- Characterize quantitatively how users will employ your product
- Maximize efficiency of development and test by focusing resources on the most used and/or most critical operations, by realistically reproducing field conditions, and by delivering just enough reliability.
RETURN TO QWE'98 PROGRAM
Operational profiles are quantitative descriptions of how software-based systems are expected to be used in the field. This presentation will outline the rapidly expanding role of operational profiles in testing. It will focus on how operational profiles are developed, including identifying initiators of operations, creating operations lists, determining operation occurrence rates, and deriving operation occurrence probabilities. Then it will show how operational profiles are applied in test preparation, especially selection of test cases. Finally, it will demonstrate how operational profiles can be used to allocate time in test execution. Some of the latest results from the preceding week's ISSRE conference will be summarized.
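A small numerical illustration of the final step may help; the operations, occurrence rates and test budget below are invented for the example. Occurrence rates are normalised into occurrence probabilities, and available test cases are allocated in proportion to them:

```python
# Hypothetical operational profile: operation -> occurrence rate (uses/hour).
occurrence_rates = {
    "process withdrawal": 520.0,
    "query balance":      310.0,
    "transfer funds":     140.0,
    "update address":      25.0,
    "close account":        5.0,
}

TEST_BUDGET = 400                      # total test cases available

total_rate = sum(occurrence_rates.values())
print(f"{'operation':20s} {'probability':>11s} {'test cases':>10s}")
for operation, rate in sorted(occurrence_rates.items(), key=lambda x: -x[1]):
    probability = rate / total_rate    # occurrence probability
    allocated = round(TEST_BUDGET * probability)   # proportional allocation
    print(f"{operation:20s} {probability:11.3f} {allocated:10d}")
```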
RETURN TO QWE'98 PROGRAM
As the countries of Europe enter the 21st century, they face a challenge of historic proportions. Conversion to a common currency, the euro, will not only involve economic issues -- for example, the reduction in exchange rate risks and the increase in price transparency -- but also a number of business and technical issues. This article will explore a number of business and technical issues concerning the euro conversion and suggest ways of managing the risk.
The PDF version of the paper can be downloaded from http://www.sysmod.com/eurorisk.pdf
RETURN TO QWE'98 PROGRAM
Based upon the methodology of Structured Software Testing, ps_testware has developed a number of implementation techniques for IT conversion projects. It was a very conscious decision not to focus on Y2K only: Y2K is a particular case of IT conversion activity. Due to this long-term vision, major investments were possible to create highly developed practical techniques.
This presentation gives a sneak preview of a testing method developed by ps_testware. Where do I start with testing in Y2K projects? How do I deal with the fixed deadline of 31 December 1999? Do I need to test the Inventory, Clustering or Triage? How do I create a test plan if there is no time to waste?
Between 40% and 50% of the Y2K budget is spent on testing. How do you know whether your testing is effective and efficient? How do you measure productivity?
RETURN TO QWE'98 PROGRAM
This presentation will show:
- It is important that a program which is going to be formally verified is designed with this purpose in mind
- There exist tools which can analyze systems with a very large state space in a reasonably short amount of time, especially if the programs are well designed
- One might benefit from applying state-space reduction techniques which are not sound in all cases
RETURN TO QWE'98 PROGRAM
This presentation will show:
- The reasons why Testing is such a big and expensive part of every company's Year 2000 solution
- The opportunities for using Year 2000 testing as an investment in improving your company's ongoing testing program
- How to identify test cases that can protect your company against the most serious threats to your business posed by the Year 2000 problem
- Methodologies, tools and techniques for Year 2000 testing that help to select these high-payoff test cases and allow you to execute (and re-execute) these test cases efficiently
- How these methodologies, tools and techniques can be applied in a manner that addresses Year 2000 testing issues and simultaneously produces lasting value in your company's ongoing testing program
RETURN TO QWE'98 PROGRAM
After a short introduction, the attendees will learn about the context of testing in the real world and software and test process improvement: why testing, the aims of testing, quality management and testing, what to test, required structure, SDLC and testing, the evolution of testing, the challenges for testing, the need for improvement, the need for a dedicated test improvement model, other available models, the model requirements, etc.
The second part of the tutorial is about the Test Process Improvement Model. The TPI (tm) model will be explained in detail: the scope, the characteristics, the key areas, the requirements (checkpoints) for the different levels of maturity, the test maturity matrix, the assessment, the relation between the key areas and how to select priorities, the improvement suggestions, etc.
To be successful with any improvement activity an adequate Management of Change approach is required. The third part of the tutorial is about The Application of the Model. The main steps of changing will be taught: how to create awareness, establish goals and scope for change, the assessment process, selection and planning of improvement actions, implementation and evaluation. Delegates will learn about related subjects such as the use of metrics, the requirements for the change team, the human aspects, how to deal with resistance and finally some do's and don'ts will complete the tutorial.
RETURN TO QWE'98 PROGRAM
Use cases are graphical notations that were created to help document and test system behavior. Ivar Jacobson introduced use cases in 1992, and they are now defined in the new UML (Unified Modeling Language). However, use cases fall short of their intended purpose, because they do not provide enough information for test case or script generation. Testers must go outside use cases to find enough information to create test cases.
However, the UML is extendable, and standard use cases can be extended with additional information to become test-ready use cases. Test-ready use cases contain sufficient information for automatic test case creation. This paper describes test-ready extensions to use cases and the process of automatic test case generation.
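As a rough sketch of the idea (the data structure and field names below are invented for illustration and are not the UML extensions defined in the paper), a use case augmented with concrete input domains and an expected-result rule carries enough information for test cases to be enumerated mechanically:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class TestReadyUseCase:
    """Use case extended with the data needed to generate tests (illustrative)."""
    name: str
    preconditions: list[str]
    inputs: dict[str, list]            # parameter -> representative values
    expected: callable                 # maps a concrete input binding to a result

withdraw = TestReadyUseCase(
    name="Withdraw cash",
    preconditions=["card inserted", "PIN verified"],
    inputs={"balance": [0, 50, 500], "amount": [20, 100]},
    expected=lambda b: "dispense" if b["amount"] <= b["balance"] else "refuse",
)

def generate_test_cases(use_case):
    """Enumerate one test case per combination of representative input values."""
    names = list(use_case.inputs)
    for values in product(*(use_case.inputs[n] for n in names)):
        binding = dict(zip(names, values))
        yield use_case.preconditions, binding, use_case.expected(binding)

for preconditions, binding, result in generate_test_cases(withdraw):
    print(binding, "->", result)
```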
RETURN TO QWE'98 PROGRAM
Requiring early, rigorous understanding of the system boundary, valid inputs, potential outputs, and possible input sequencing through the development of a sequence-based specification pays dividends in the development of usage models for statistical software testing.
RETURN TO QWE'98 PROGRAM
This one-day tutorial focuses on guiding participants through the requirements definition process. The emphasis of this exercise is on making requirements testable so that they can be negotiated, communicated and traced throughout the project. A requirement is measurable if there is an unambiguous way of determining whether a given solution fits that requirement.
Requirements engineering is by its nature a multi-disciplinary activity involving a wide range of participants. Requirements engineers must interact with both customers and systems developers, and this calls for both social and technical skills. The notion of making requirements testable is therefore crucial to effective requirements definition. It is a means of feedback from requirements engineer to customer in order to validate requirements, and a means of verifying that solutions constructed by developers meet the specified requirements. In order to convey the realism (difficulties and solutions) of such a requirements engineering process, the format of this tutorial is mostly interactive and participative. Attendees are guided (in groups) through the requirements specification process of a familiar (but nevertheless complex) computer-based system. This guidance will be supported by a "requirements template" constructed and refined by the tutorial instructors through many years of teaching and application to large development projects.
RETURN TO QWE'98 PROGRAM
This presentation will:
- Provide project managers and software developers with the knowledge to institute an affordable metrics program that will evaluate the quality of their project's products and help them identify and track project risks.
- Demonstrate a model for metrics programs and a core set of metrics being applied within NASA, including metrics for object-oriented development, re-engineering and COTS applications
- Discuss metric program costs, benefits and techniques for starting a program
RETURN TO QWE'98 PROGRAM
This presentation will show:
- Requirement metric program guidelines including requirement quality attributes
- Identify metrics available in the requirements phases that assist in the verification and validation of the requirements and test plans
- Demonstrate how NASA has applied these metrics to improve their testing processes and hence product quality
RETURN TO QWE'98 PROGRAM
This presentation will show:
- How automated tools can be used to test EMU projects
- How to use automated regression testing to shorten the conversion life-cycle
- Test planning as a key factor for a successful conversion project
RETURN TO QWE'98 PROGRAM
Abstract to be supplied.
RETURN TO QWE'98 PROGRAM
When a software development engineer watches a user navigate through software the first time, the reaction is often shock and dismay. This reaction is largely due to the disconnect between how the user sees software and how the development engineer views it. Product Quality Profiling (PQP) is a model to help the system test engineer/organization bridge the gap between development engineers and software users.
RETURN TO QWE'98 PROGRAM
(Abstract to be Supplied)
RETURN TO QWE'98 PROGRAM
This presentation will show:
- Systems Integration and VV&T strategies are key to reducing costs and increasing quality.
- Systems Integration and VV&T strategies frequently are not strategies at all.
- The use of Internet-based process guidelines has been very effective.
RETURN TO QWE'98 PROGRAM
These issues are significantly different. The introduction of the Euro requires the extension of applications to support new business processes and exploit potential benefits. However, both projects require similar infrastructure, and organizations should be able to re-use a lot of their investment from Y2K. Indeed, the relatively small amount of mechanical conversion work looks like a mini-Y2K project!
I will be looking at how the characteristics of these two projects are reflected in the testing requirements.
RETURN TO QWE'98 PROGRAM
(Abstract to be supplied.)
RETURN TO QWE'98 PROGRAM
The objective of OMP/CAST is the introduction of formal procedures and software tools for software testing. This is done in the context of a software development environment in which graphical user interfaces, relational databases, algorithms, multi-language interfaces and multi-platform implementations are common. Thanks to computer-aided testing, we expect to increase the stability of the released software even more.
The OMP/CAST project runs from May 1st, 1997 to October 30th, 1998.
RETURN TO QWE'98 PROGRAM
Several investigations have shown that, in addition to functionality and reliability, usability is a very important success factor. Sometimes it is possible to test the software extensively in a usability lab environment. However, in most other situations a usability test has to be carried out with minimum resources. Expert reviews and checklists are often applied to address this problem. These techniques have the disadvantage that the real stakeholder, i.e. the user, isn't involved. Within the European ESPRIT project MuSIC, a questionnaire-based method has been developed that serves to determine the quality of a software product from a user's perspective.
The Software Usability Measurement Inventory (SUMI) is a questionnaire-based method that can be designed for cost-effective usage. It is backed by an extensive reference database embedded in an analysis and report generation tool. For each specified user group a number of individuals are asked to fill out the SUMI questionnaire. SUMI gives insight into the usability aspects of learnability, efficiency, affect (likeability), control and helpfulness. It provides a comparison between the scores of the product-under-test and the state of the practice regarding usability in the software market. Ongoing research has resulted in tailored questionnaires for multimedia and internet-based applications.
SUMI has been applied in practice in a great number of projects. This paper deals with some examples, focusing mainly on a project within a large development and manufacturing company that applied SUMI during the introduction of a new Product Data Management System (PDMS). The PDMS had more than one thousand users. The results, usability improvements, cost and benefits are described in detail. Conclusions are drawn regarding the applicability and the limitations of SUMI for usability testing.
RETURN TO QWE'98 PROGRAM
At QW'96 and '97, Brüel & Kjaer reported the experiences of a software process improvement (SPI) project where we demonstrated that the introduction of static and dynamic analysis in our software development process had a significant impact on the quality of our products.
The basis for this project was a thorough analysis of error reports from previous projects which showed the need to perform a more systematic unit test of our products. However, the analysis also showed that the major cause of bugs stemmed from requirements related issues.
We are currently conducting another SPI project where we analyze the requirements related bugs in order to find and introduce effective prevention techniques in our requirements engineering process with the objective of reducing the number of requirements related error reports by 50%.
This presentation will cover the analysis results, a set of effective prevention techniques, and also the practical experiences using some of these techniques on real-life development projects.
RETURN TO QWE'98 PROGRAM
One of the main problems with automating software testing is its complexity. Genetic algorithms are aimed at such complex problems. For example, they address the problem of test data generation without being instructed, step by step, on how to do it. Instead, their learning algorithm is inspired by the theory of evolution. This approach neatly sidesteps many of the problems encountered by other systems that attempt to automate the test process. René describes a test tool performing automatic coverage testing by means of genetic algorithms.
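A minimal sketch of the underlying idea follows; the function under test, the branch-distance fitness measure and the algorithm parameters are invented for illustration and do not describe René's tool. Candidate test inputs evolve by selection, crossover and mutation, with fitness measuring how close an input comes to executing an uncovered branch:

```python
import random

def branch_distance(x, y):
    """Fitness for covering the hard branch 'if x > 100 and y == x - 100:'
    in a toy function under test.  0 means the branch is executed;
    smaller values mean the input is closer to executing it."""
    d = 0 if x > 100 else 101 - x          # distance to make x > 100 true
    return d + abs(y - (x - 100))          # distance to make y == x - 100 true

def evolve(pop_size=30, generations=200):
    population = [(random.randint(0, 1000), random.randint(0, 1000))
                  for _ in range(pop_size)]
    for generation in range(generations):
        population.sort(key=lambda ind: branch_distance(*ind))
        if branch_distance(*population[0]) == 0:        # target branch covered
            return population[0], generation
        survivors = population[: pop_size // 2]
        offspring = []
        while len(survivors) + len(offspring) < pop_size:
            (x1, y1), (_, y2) = random.sample(survivors, 2)
            x, y = random.choice([(x1, y2), (x1, y1)])  # crossover or clone
            offspring.append((x + random.randint(-10, 10),   # mutation
                              y + random.randint(-10, 10)))
        population = survivors + offspring
    population.sort(key=lambda ind: branch_distance(*ind))
    return population[0], generations

random.seed(7)
best, generations_used = evolve()
print("best input:", best,
      "| generations:", generations_used,
      "| remaining branch distance:", branch_distance(*best))
```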
RETURN TO QWE'98 PROGRAM
This presentation will show:
- How to program so that system components are independent.
- How to use Markov models to combine independent component reliabilities and thereby calculate a system reliability (a small numerical sketch follows this list).
- How to remove artifacts from Markov models that reduce the accuracy of the reliability estimates.
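A small numerical sketch of the second point follows; the component reliabilities, the usage transition probabilities and the fixed-point solution method are invented for the example (in the spirit of Markov usage models such as Cheung's, not necessarily the authors' formulation). System reliability is the probability of reaching successful termination when each visited component may fail independently:

```python
# Component reliabilities (probability of executing without failure per visit).
reliability = {"ui": 0.999, "logic": 0.995, "db": 0.990}

# Usage (Markov) model: which component runs next, or whether the run
# terminates successfully, after each component completes.
transitions = {
    "ui":    {"logic": 0.7, "EXIT": 0.3},
    "logic": {"db": 0.6, "ui": 0.2, "EXIT": 0.2},
    "db":    {"logic": 1.0},
}

def system_reliability(start="ui", iterations=200):
    """Probability of reaching EXIT without any component failing.
    Solved by fixed-point iteration on r(s) = R_s * sum_t P(s,t) * r(t)."""
    r = {s: 0.0 for s in transitions}
    r["EXIT"] = 1.0
    for _ in range(iterations):
        r.update({
            s: reliability[s] * sum(p * r[t] for t, p in nxt.items())
            for s, nxt in transitions.items()
        })
    return r[start]

print(f"estimated system reliability: {system_reliability():.4f}")
```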
RETURN TO QWE'98 PROGRAM
(Abstract to be Supplied)
RETURN TO QWE'98 PROGRAM