
Monday, November 19, 2007

Info to protect yourself from attacks

Viruses
A virus is a small piece of software that piggybacks on real programs. For example, a virus might attach itself to a program such as a spreadsheet program. Each time the spreadsheet program runs, the virus runs, too, and it has the chance to reproduce (by attaching to other programs) or wreak havoc.

E-mail viruses
An e-mail virus moves around in e-mail messages, and usually replicates itself by automatically mailing itself to dozens of people in the victim's e-mail address book. (Note: we never send e-mails from the Barnard team.)


Worms
A worm is a small piece of software that uses computer networks and security holes to replicate itself. A copy of the worm scans the network for another machine that has a specific security hole. It copies itself to the new machine using the security hole, and then starts replicating from there, as well.

Trojan horses
A trojan horse program is a harmful piece of software that is disguised as legitimate software. Trojan horses cannot replicate themselves, in contrast to viruses or worms. A trojan horse can be deliberately attached to otherwise useful software by a programmer, or it can be spread by tricking users into believing that it is useful. To complicate matters, some trojan horses can spread or activate other malware, such as viruses.

Backdoors
A backdoor is a secret or undocumented means of getting into a computer system. Many programs have backdoors placed by the programmer to allow them to gain access to troubleshoot or change the program. Some backdoors are placed by hackers once they gain access to allow themselves an easier way in next time or in case their original entrance is discovered.

Phishing
Phishing is a kind of spam email that looks like it comes from a bank or some other trusted company or institution. The email claims to need personal information to update your account. A link in the email directs you to a legitimate-looking website that asks for your password, account number, or credit card information. There are a number of “URL spoofing” techniques, including using
an IP address (eg http://192.168.1.1/), which relies on the user ignoring the URL completely or being confused by its complexity;
a completely different domain, which relies on the user not looking at the URL at all;
a plausible-sounding but fake domain (eg https://www.paypayl-secure.com), which relies on the user not knowing the exact domain name;
a visible-to-the-eye letter substitution (eg https://www.paypa1.com), which relies on the user not looking too closely at the URL’s individual letters;
an invisible letter substitution, which is almost undetectable;
an address with a username that looks like a domain name (eg http://www.paypal.com@www.evil.com), which also relies on the user not knowing exactly what the domain should be.
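One way to see why the last trick works is to let a URL parser tell you which part of the address is really the hostname. Below is a minimal Python sketch (my own illustration, not part of the original tips) that flags a link when its parsed hostname differs from the domain the user expects; the expected domain and sample URLs are made up for the example.

from urllib.parse import urlparse

EXPECTED_DOMAIN = "www.paypal.com"  # the domain the user thinks they are visiting

def looks_spoofed(url):
    # urlparse treats everything before '@' as a username, so
    # http://www.paypal.com@www.evil.com parses to hostname 'www.evil.com'
    host = urlparse(url).hostname or ""
    return host != EXPECTED_DOMAIN

print(looks_spoofed("http://www.paypal.com@www.evil.com"))  # True
print(looks_spoofed("https://www.paypa1.com/"))             # True
print(looks_spoofed("https://www.paypal.com/"))             # False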

Browser Hijacker
A hijacker is a program that changes some settings in your browser. Hijackers can be removed with a program called HijackThis!
Browser hijackers usually change the user's start page, and most often it is hard to change that start page back to another page or a blank page: after the computer restarts, the hijacker sets its own page again. These changed start pages usually lead to pay-per-click sites, where the owners of the hijacker earn money for every click, or to porn sites where the owners also get paid for clicks;
browser hijackers change the user's search page, so all queries are passed to pay-per-click sites or porn sites where the owners earn money for every click;
browser hijackers transmit every web page the user visits to the owners of the parasite.

Common Signs that you have a virus:
(Any of these symptoms could also point to problems with the operating system, software, hardware, adware, and/or spyware.)
Your computer is noticeably slower than it used to be, or seems to be busy doing something else.
Your network connection is slower than usual, or seems really busy (this can just as often be the network or ISP).
Your computer freezes or crashes repeatedly.
Your antivirus program stops running without an error message.
You notice strange processes running on your computer (this assumes you know what the usual processes are).
Folders on your computer start sharing themselves across the network.
Files start appearing on your hard drive, perhaps in multiple locations.
Microsoft Word or Excel suddenly start warning you about macros existing in your documents.
You can’t access certain web pages like Symantec or Windows Updates, or you can’t run LiveUpdate.

Antivirus Software
Antivirus software is a type of application you install to protect your system from viruses, worms and other malicious code. Anti-virus software typically uses two different techniques to accomplish this:

Examining (scanning) files to look for known viruses matching definitions in a virus dictionary
Identifying suspicious behavior from any computer program which might indicate infection
Most commercial anti-virus software uses both of these approaches, with an emphasis on the virus dictionary approach. Remember that your Antivirus software is only as good as its definition files! If you don’t update your definitions (LiveUpdate), scans won’t catch everything.
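To make the dictionary approach concrete, here is a rough Python sketch of it, assuming a simple signature list of file hashes; real antivirus definitions are far more sophisticated, and the hash below is just a placeholder.

import hashlib
import os

KNOWN_BAD_HASHES = {
    "5d41402abc4b2a76b9719d911017c592",  # placeholder signature, not a real virus
}

def scan(directory):
    # Walk the directory tree and hash every file, flagging matches
    # against the "virus dictionary" of known signatures.
    for root, _, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            if digest in KNOWN_BAD_HASHES:
                print("Infected:", path)

This is exactly why definitions matter: a scanner can only flag hashes (or patterns) that are already in its dictionary.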

Now, for a quick SYMANTEC TUTORIAL

Scheduled Scans > New Scheduled Scan
Configure > Enable Auto-Protect
File > Schedule Updates

You have a virus: what should you do?
View > Quarantine
Shows a list of all the viruses that Symantec found but could not delete. Make a list of all threats, including their name, filename, and original location. The Symantec website has instructions on how to delete all viruses it finds. Sometimes there is a patch you can download and run that will automatically get rid of the virus.

Spyware
Spyware is a piece of software that collects and sends information (such as browsing patterns in the more benign case or credit card numbers in more serious ones) on users. They usually work and spread like Trojan horses. The category of spyware is sometimes taken to include adware of the less-forthcoming sort.

Adware
Adware or advertising-supported software is any software application in which advertisements are displayed while the program is running. These applications include additional code that displays the ads in pop-up windows or through a bar that appears on a computer screen.

To get rid of adware and spyware, download AdAware and Spybot from the resnet website. AdAware and Spybot do not run on their own; run them once a week, or whenever you notice more pop-ups.

One way to prevent adware and spyware is to use a browser with pop-up protection like Mozilla Firefox (http://www.mozilla.org). You can also download software that blocks pop-ups.

Be aware that if you choose to block pop-up windows, you will need to tell your browser to accept them from websites like eBear or else they won’t work correctly. To block unwanted pop-ups in Mozilla:

go to Edit > Preferences
select Privacy & Security/Popup Windows
check “Block unrequested popup windows”

Windows XP Service Pack 2 installs a popup blocker for Internet Explorer automatically.

Firewall
Basically, a firewall is a protective barrier between your computer, or internal network, and the outside world. Traffic into and out of the firewall is blocked or restricted as you choose. By blocking all unnecessary traffic and restricting other traffic to those protocols or individuals that need it you can greatly improve the security of your internal network.


Firewalls use one or more of three methods to control traffic flowing in and out of the network:

Packet filtering - Packets (small chunks of data) are analyzed against a set of filters. Packets that make it through the filters are sent to the requesting system and all others are discarded.
Proxy service - Information from the Internet is retrieved by the firewall and then sent to the requesting system and vice versa.
Stateful inspection - A newer method that doesn't examine the contents of each packet but instead compares certain key parts of the packet to a database of trusted information. Information traveling from inside the firewall to the outside is monitored for specific defining characteristics, then incoming information is compared to these characteristics. If the comparison yields a reasonable match, the information is allowed through. Otherwise it is discarded.
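As a toy illustration of the packet-filtering idea (my own sketch, not how any particular firewall is implemented), the Python rules below allow a few protocol/port combinations and drop everything else:

ALLOW_RULES = {
    ("tcp", 80),   # web
    ("tcp", 443),  # secure web
    ("udp", 53),   # DNS
}

def filter_packet(protocol, dest_port):
    # Packets that match an allow rule pass; all others are discarded.
    if (protocol, dest_port) in ALLOW_RULES:
        return "pass"
    return "drop"

print(filter_packet("tcp", 443))  # pass
print(filter_packet("tcp", 23))   # drop (telnet is not allowed)

Real packet filters match on many more fields (source and destination addresses, flags, connection state), but the principle is the same.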
Warning: if your firewall, antispyware, ad blocker, etc. is not configured properly, it may block all Internet traffic entirely.



Dangerous Stuff That Looks Cool

Some things look like they'd be really great add-ons for your computer. But be warned; some things can be really dangerous. My suggestion would be to research things before downloading them. You can Google them and see if anyone has written reviews or opinions about the thing you're interested in. www.cnet.com is a trusted website that publishes a lot of reviews of all kinds of products; you may want to check that out too. Here are some to look out for.

WEATHERBUG

Certainly, having the weather on your desktop is convenient. But this is not the way to do it.
Weatherbug tends to get into your system and never get out. It can also come saddled with, or make your computer more vulnerable to, spyware, which we all know is not good at all.
Instead, set www.weather.com as one of your favorites, so that you can check the weather whenever you want.
Or, if you use Firefox (a free web browser) instead of Internet Explorer, there's an add-on you can download that keeps the weather in the lower right-hand corner of your browser and updates automatically.
SCREENSAVERS
Anytime you see an ad for some cool 3-D screensaver, I know you want to download it. Because so do I.
However, screensavers are not as innocent as wallpapers. They are programs that have to be installed.
You never really know what you're getting when you install a program.
If you ever have to give any information, like an email address, DON'T DO IT. It's just an excuse for spyware, adware, and spam.
The only time I'd say “go for it” is when you are downloading a screensaver from a trusted source (for example, Mercedes-Benz has screensavers that you can download; I don't think Mercedes is going to give you spyware).


Test your firewall/the security of your connection:

ShieldsUp! http://www.grc.com/x/ne.dll?rh1dkyd2
LeakTest http://www.grc.com/lt/leaktest.htm




Friday, November 16, 2007

Software Testing Techniques

Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. For life-critical software (e.g. flight control, reactor monitoring), testing can cost 3 to 5 times as much as all other activities combined. The destructive nature of testing requires that the developer discard preconceived notions of the correctness of his/her developed software.

Software Testing Fundamentals

Testing objectives include


1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as yet undiscovered error.
3. A successful test is one that uncovers an as yet undiscovered error.
Testing should systematically uncover different classes of errors in a minimum amount of time and with a minimum amount of effort. A secondary benefit of testing is that it demonstrates that the software appears to be working as stated in the specifications. The data collected through testing can also provide an indication of the software's reliability and quality. But testing cannot show the absence of defects -- it can only show that software defects are present.

White Box Testing

White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that
1. guarantee that all independent paths within a module have been exercised at least once,
2. exercise all logical decisions on their true and false sides,
3. execute all loops at their boundaries and within their operational bounds, and
4. exercise internal data structures to ensure their validity.

The Nature of Software Defects

Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. General processing tends to be well understood while special case processing tends to be prone to errors.
We often believe that a logical path is not likely to be executed when it may be executed on a regular basis. Our unconscious assumptions about control flow and data lead to design errors that can only be detected by path testing.
Typographical errors are random.

Basis Path Testing

This method enables the designer to derive a logical complexity measure of a procedural design and use it as a guide for defining a basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once during testing.

Flow Graphs

Flow graphs can be used to represent control flow in a program and can help in the derivation of the basis set. Each flow graph node represents one or more procedural statements. The edges between nodes represent flow of control. An edge must terminate at a node, even if the node does not represent any useful procedural statements. A region in a flow graph is an area bounded by edges and nodes. Each node that contains a condition is called a predicate node. Cyclomatic complexity is a metric that provides a quantitative measure of the logical complexity of a program. It defines the number of independent paths in the basis set and thus provides an upper bound for the number of tests that must be performed.

The Basis Set

An independent path is any path through a program that introduces at least one new set of processing statements (must move along at least one new edge in the path). The basis set is not unique. Any number of different basis sets can be derived for a given procedural design. Cyclomatic complexity, V(G), for a flow graph G is equal to
1. The number of regions in the flow graph.
2. V(G) = E - N + 2 where E is the number of edges and N is the number of nodes.
3. V(G) = P + 1 where P is the number of predicate nodes.

Deriving Test Cases

1. From the design or source code, derive a flow graph.
2. Determine the cyclomatic complexity of this flow graph.
o Even without a flow graph, V(G) can be determined by counting the number of conditional statements in the code.
3. Determine a basis set of linearly independent paths.
o Predicate nodes are useful for determining the necessary paths.
4. Prepare test cases that will force execution of each path in the basis set.
o Each test case is executed and compared to the expected results.
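As a small worked example (invented here for illustration), consider the Python function below. Its flow graph has two predicate nodes, so V(G) = P + 1 = 3, and a basis set needs three linearly independent paths -- one test case per path:

def classify(x):
    if x < 0:        # predicate node 1
        return "negative"
    elif x == 0:     # predicate node 2
        return "zero"
    else:
        return "positive"

# One test case per basis path:
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"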

Automating Basis Set Derivation

The derivation of the flow graph and the set of basis paths is amenable to automation. A software tool to do this can be developed using a data structure called a graph matrix. A graph matrix is a square matrix whose size is equivalent to the number of nodes in the flow graph. Each row and column correspond to a particular node and the matrix corresponds to the connections (edges) between nodes. By adding a link weight to each matrix entry, more information about the control flow can be captured. In its simplest form, the link weight is 1 if an edge exists and 0 if it does not. But other types of link weights can be represented:
the probability that an edge will be executed,
the processing time expended during link traversal,
the memory required during link traversal, or
the resources required during link traversal.
Graph theory algorithms can be applied to these graph matrices to help in the analysis necessary to produce the basis set.
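A minimal Python sketch of such a graph matrix, assuming a made-up five-node flow graph with one decision; the link weight is simply 1 where an edge exists, and V(G) = E - N + 2 falls out of the matrix:

nodes = ["start", "decision", "then", "else", "end"]
N = len(nodes)
matrix = [[0] * N for _ in range(N)]  # N x N graph matrix

edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]
for src, dst in edges:
    matrix[src][dst] = 1  # link weight 1: an edge exists

E = sum(sum(row) for row in matrix)
print("V(G) =", E - N + 2)  # 5 - 5 + 2 = 2: one decision, two basis paths

Richer link weights (probabilities, processing times) would replace the 1s without changing the structure.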

Loop Testing

This white box technique focuses exclusively on the validity of loop constructs. Four different classes of loops can be defined:
1. simple loops,
2. nested loops,
3. concatenated loops, and
4. unstructured loops.


Simple Loops

The following tests should be applied to simple loops where n is the maximum number of allowable passes through the loop:
1. skip the loop entirely,
2. only pass once through the loop,
3. m passes through the loop where m < n,
4. n - 1, n, n + 1 passes through the loop.
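A quick sketch of what those passes look like in practice, using an invented Python function whose loop runs at most n = 10 times:

def total_of_first(items, count):
    total = 0
    for i in range(min(count, len(items))):  # the loop under test
        total += items[i]
    return total

n = 10
data = list(range(1, n + 1))
# skip, one pass, m < n, then n - 1, n, and n + 1 passes:
for passes in [0, 1, 5, n - 1, n, n + 1]:
    print(passes, "->", total_of_first(data, passes))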

Nested Loops

The testing of nested loops cannot simply extend the technique of simple loops since this would result in a geometrically increasing number of test cases. One approach for nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimums. Add tests for out-of-range or excluded values.
3. Work outward, conducting tests for the next loop while keeping all other outer loops at minimums and other nested loops to typical values.
4. Continue until all loops have been tested.
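Sketched in Python (with a hypothetical doubly nested function under test), the procedure looks like this: run the simple-loop values on the inner loop while the outer loop is held at its minimum, then vary the outer loop while the inner stays at a typical value.

def process(rows, cols):
    count = 0
    for r in range(rows):       # outer loop
        for c in range(cols):   # inner loop
            count += 1
    return count

simple_loop_values = [0, 1, 5, 9, 10, 11]  # the simple-loop tests for n = 10
for cols in simple_loop_values:
    print("outer at minimum:", process(1, cols))
for rows in simple_loop_values:
    print("inner at typical value:", process(rows, 5))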

Concatenated Loops

Concatenated loops can be tested as simple loops if each loop is independent of the others. If they are not independent (e.g. the loop counter for one is the loop counter for the other), then the nested approach can be used.


Unstructured Loops

This type of loop should be redesigned, not tested!
Other White Box Techniques
Other white box testing techniques include:
1. Condition testing
o exercises the logical conditions in a program.
2. Data flow testing
o selects test paths according to the locations of definitions and uses of variables in the program.


Black Box Testing

Introduction

Black box testing attempts to derive sets of inputs that will fully exercise all the functional requirements of a system. It is not an alternative to white box testing. This type of testing attempts to find errors in the following categories:
1. incorrect or missing functions,
2. interface errors,
3. errors in data structures or external database access,
4. performance errors, and
5. initialization and termination errors.
Tests are designed to answer the following questions:
1. How is the function's validity tested?
2. What classes of input will make good test cases?
3. Is the system particularly sensitive to certain input values?
4. How are the boundaries of a data class isolated?
5. What data rates and data volume can the system tolerate?
6. What effect will specific combinations of data have on system operation?
White box testing should be performed early in the testing process, while black box testing tends to be applied during later stages. Test cases should be derived which
1. reduce the number of additional test cases that must be designed to achieve reasonable testing, and
2. tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific test at hand.

Equivalence Partitioning

This method divides the input domain of a program into classes of data from which test cases can be derived. Equivalence partitioning strives to define a test case that uncovers classes of errors and thereby reduces the number of test cases needed. It is based on an evaluation of equivalence classes for an input condition. An equivalence class represents a set of valid or invalid states for input conditions.
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, then one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, then one valid and one invalid equivalence class are defined.
4. If an input condition is boolean, then one valid and one invalid equivalence class are defined.
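A minimal sketch of guideline 1, assuming an input field that must be an integer in the range 1..120 (an "age" field invented for this example); one representative value is drawn from each equivalence class:

VALID_RANGE = (1, 120)

equivalence_classes = {
    "valid: 1 <= age <= 120": 35,   # representative of the valid class
    "invalid: age < 1": -3,         # first invalid class, below the range
    "invalid: age > 120": 200,      # second invalid class, above the range
}

def accept(age):
    low, high = VALID_RANGE
    return low <= age <= high

for label, value in equivalence_classes.items():
    print(label, "->", accept(value))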
Boundary Value Analysis

This method leads to a selection of test cases that exercise boundary values. It complements equivalence partitioning since it selects test cases at the edges of a class. Rather than focusing on input conditions solely, BVA derives test cases from the output domain also. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b and just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be developed to exercise the minimum and maximum numbers and values just above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be designed to exercise the data structure at its boundary.
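Continuing the same invented 1..120 range from above, guideline 1 yields six boundary test values:

def boundary_values(a, b):
    # a and b themselves, plus the values just below and just above each
    return [a - 1, a, a + 1, b - 1, b, b + 1]

print(boundary_values(1, 120))  # [0, 1, 2, 119, 120, 121]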
Cause-Effect Graphing Techniques
Cause-effect graphing is a technique that provides a concise representation of logical conditions and corresponding actions. There are four steps:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.
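As a toy illustration of step 4 (a made-up login module, not from the original text), each rule in the decision table below becomes one test case:

rules = [
    # (cause 1: valid user, cause 2: valid password) -> effect
    ((True, True), "grant access"),
    ((True, False), "reject"),
    ((False, True), "reject"),
    ((False, False), "reject"),
]

def login(valid_user, valid_password):
    return "grant access" if valid_user and valid_password else "reject"

for (c1, c2), expected in rules:  # one test case per decision-table rule
    assert login(c1, c2) == expected
print("all decision-table rules pass")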

Client/Server architecture

Introduction

The first part of this essay is the introduction to Client/Server architecture, which includes three sections: What is the Client/Server Computing, Architectures for Client/Server System, and Critical Issues Involved in Client/Server System Management.

Client/Server computing is a current reality for professional system developers and for sophisticated departmental computing users. The section What is the Client/Server Computing points out the definition and major characteristics of Client/Server computing. Netcentric (or Internet) computing, as an evolution of the Client/Server model, has brought new technology to the forefront. Hence, the major characteristics of, and differences between, Netcentric and traditional Client/Server computing are also presented in this section.

Both traditional and Netcentric computing are tiered architectures. A brief introduction to three popular architectures, namely 2-tiered architecture, modified 2-tiered architecture, and 3-tiered architecture, is found in the section -- The Architecture for Client/Server Computing.

The second part of this essay is about Client/Server software testing. There are four sections in this part: Introduction to Client/Server Software Testing, Testing Plan for Client/Server Computing, Client/Server Testing in Different Layers, and Special Concerns for Internet Computing—Security Testing.

In the section Introduction to the Client/Server Software Testing, we present some basic characteristics of Client/Server software testing from different points of view.

Because of the difference between traditional and Client/Server software testing, a practical testing plan based on application functionality is attached in section 2, Testing Plan for Client/Server Software Testing. We also give detailed explanations of the different test plans, such as the system test plan, operational test plan, acceptance test plan, and regression test plan, which are parts of a Client/Server testing plan.

As mentioned in Part I, a Client/Server system has several layers, which can be viewed conceptually and physically. Viewed physically, the layers are client, server, middleware, and network. In section 3 Client/Server Testing in Different Layers, specific concerns related to client, server and network problems, testing techniques, testing tools and some activities are addressed separately in Testing on the Client Side, Testing on the Server Side, and Network Testing.

For Internet-based Client/Server systems, security is one of the major concerns. Hence, this essay also includes some security risks that need to be tested in the Part II, section 4 Special Concerns for Internet Computing—Security Testing.



Client/Server Software Testing



Introduction to Client/Server architecture:

Client/Server system development is the preferred method of constructing cost-effective department- and enterprise-level strategic corporate information systems. It allows the rapid deployment of information systems in end-user environments.

1: What is Client/Server Computing?

Client/Server computing is a style of computing involving multiple processors, one of which is typically a workstation and across which a single business transaction is completed [1].
Client/Server computing recognizes that business users, and not a mainframe, are the center of a business. Therefore, Client/Server is also called “client-centric” computing.

Today, Client/Server computing has been extended to the Internet -- netcentric computing (network-centric computing) -- and the concept of business users has expanded greatly. A Forrester Report describes netcentric computing as “Remote servers and clients cooperating over the Internet to do work” and says that Internet computing extends and improves the Client/Server model [2].

The characteristics of Client/Server computing include:
1. There are multiple processors.
2. A complete business transaction is processed across multiple servers.

Netcentric computing, as an evolution of the Client/Server model, has brought new technology to the forefront, especially in the areas of external presence and access, ease of distribution, and media capabilities. Some of the new technologies are [3]:

a. Browser, which provides a “universal client”. In the traditional Client/Server environment, distributing an application internally or externally for an enterprise requires that the application be recompiled and tested for all specific workstation platforms (operating systems). It also usually requires loading the application on each client machine. The browser-centric application style offers an alternative to this traditional problem. The web browser provides a universal client that offers users a consistent and familiar user interface. Using a browser, a user can launch many types of applications and view many types of documents. This can be accomplished on different operating systems and is independent of where the applications or documents reside.
b. Direct supplier-to-customer relationships. The external presence and access enabled by connecting a business node to the Internet has opened up a series of opportunities to reach an audience outside a company’s traditional internal users.
c. Richer documents. Netcentric technologies (such as HTML documents, plug-ins, and Java) and the standardization of media information formats enable support for complex documents, applications, and even nondiscrete data types such as audio and video.
d. Application version checking and dynamic update. The configuration management of traditional Client/Server applications, which tend to be stored on both the client and server sides, is a major issue for many corporations. Netcentric computing can check and update application versions dynamically.

2: Architectures for Client/Server System.

Both traditional Client/Server and netcentric computing are tiered architectures. In both cases, there is a distribution of presentation services, application code, and data across clients and servers. In both cases, there is a networking protocol that is used for communication between clients and servers. In both cases, they support a style of computing where processes on different machines communicate using messages. In this style, the “client” delegates business functions or other tasks (such as data manipulation logic) to one or more server processes. Server processes respond to messages from clients.

A Client/Server system has several layers, which can be visualized in either a conceptual or a physical manner. Viewed conceptually, the layers are presentation, process, and database. Viewed physically, the layers are server, client, middleware, and network.

2.1. Client/Server 2-tiered architecture:

2-tiered architecture is also known as the client-centric model, which implements a “fat” client. Nearly all of the processing happens on the client, and the client accesses the database directly rather than through any middleware. In this model, all of the presentation logic and the business logic are implemented as processes on the client.

2-tiered architecture is the simplest one to implement, and hence the simplest one to test. It is also the most stable form of Client/Server implementation, so most of the errors that testers find are independent of the implementation. Direct access to the database makes it simpler to verify the test results.

The disadvantages of this model are limited scalability and difficult maintenance. Because it doesn't partition the application logic very well, changes require reinstallation of the software on all of the client desktops.

2.2. Modified 2-tiered architecture:

Because maintenance of the 2-tiered Client/Server architecture is a nightmare, the business logic is moved to the database side and implemented using triggers and stored procedures. This kind of model is known as the modified 2-tiered architecture.

In terms of software testing, modified 2-tiered architecture is more complex than 2-tiered architecture for the following reasons:
a. It is difficult to create a direct test of the business logic. Special tools are required to implement and verify the tests.
b. It is possible to test the business logic from the GUI, but there is no way to determine the number of procedures and/or triggers that fire and create intermediate results before the end product is achieved.
c. Another complication is dynamic database queries. They are constructed by the application and exist only when the program needs them. It is very difficult to be sure that the test generates a query “correctly”, or as expected. Special utilities that show what is running in memory must be used during the tests.

2.3. 3-tiered architecture:

For 3-tiered architecture, the application is divided into a presentation tier, a middle tier, and a data tier. The middle tier is composed of one or more application servers distributed across one or more physical machines. This architecture is also termed the “thin client—fat server” approach.
This model is very complicated to test because the business and/or data objects can be invoked from many clients, and the objects can be partitioned across many servers. The characteristics that make the 3-tiered architecture desirable as a development and implementation framework at the same time make testing more complicated and tricky.

3: Critical Issues Involved in Client/Server System Management:


Hurwitz Consulting Group, Inc. has provided a framework for managing Client/Server systems that identifies eight primary management issues [4]:

a. Performance
b. Problem
c. Software distribution
d. Configuration and administration
e. Data and storage
f. Operations
g. Security
h. License

II Client/Server Software Testing:

Software testing for Client/Server systems (Desktop or Webtop) presents a new set of testing problems, but it also includes the more traditional problems testers have always faced in the mainframe world. Atre describes the special requirements of Client/Server testing [5]:
a. The client’s user interface
b. The client’s interface with the server
c. The server’s functionality
d. The network (the reliability and performance of the network)

1. Introduction to the Client/Server Software Testing:

We can view the Client/Server software testing from different perspectives:

a. From a “distributed processing” perspective: Since Client/Server is a form of distributed processing, it is necessary to consider its testing implications from that point of view. The term “distributed” implies that data and processes are dispersed across various and miscellaneous platforms. Binder states several issues that need to be considered in Client/Server environments [6].
· Client GUI considerations
· Target environment and platform diversity considerations
· Distributed database considerations (including replicated data)
· Distributed processing considerations (including replicated processes)
· Nonrobust target environment
· Nonlinear performance relationships
b. From a cross-platform perspective: The networked, cross-platform nature of Client/Server systems requires that we pay much more attention to configuration testing and compatibility testing. The purpose of configuration testing is to uncover weaknesses of the system when operated in the different known hardware and software environments. The purpose of compatibility testing is to find any functional inconsistency of the interface across hardware and software.
c. From a cross-window perspective: The current proliferation of Microsoft Windows environments has created a number of problems for Client/Server developers. For example, Windows 3.1 is a 16-bit environment, while Windows 95 and Windows NT are 32-bit environments. Mixing and matching 16-bit and 32-bit code, systems, and products causes major problems. There now exist automated tools that can generate both 16-bit and 32-bit test scripts.

2. Testing Plan for Client/Server Computing:

In many instances, testing Client/Server software cannot be planned from the perspective of traditional integrated testing activities, because this view either is not applicable at all or is too narrow, and other dimensions must be considered. The following are some specific considerations that need to be addressed in a Client/Server testing plan.
· Must include consideration of the different hardware and software platforms on which the system will be used.
· Must take into account network and database server performance issues with which mainframe systems did not have to deal.
· Has to consider the replication of data and processes across networked servers

See attached “Client/Server test plan based on application functionality” [7].

In the test plan, we may address or construct several different kinds of testing:
a. The system test plan: System test scenarios are a set of test scripts, which reflect user behaviors in a typical business situation. It’s very important to identify the business scenarios before constructing the system test plan.

See attached CASE STUDY: The business scenarios for the MFS imaging system

b. The user acceptance test plan: The user acceptance test plan is very similar to the system test plan. The major difference is direction. The user acceptance test is designed to demonstrate the major system features to the user as opposed to finding new errors.

See attached CASE STUDY: Acceptance test specification for the MFS imaging system

c. The operational test plan: It guides the single user testing of the graphical user interface and of the system function. This plan should be constructed according to subsection A and B of Section II in the testing plan template -- Client/Server test plan based on application functionality. (See attached Appendix I)

d. The regression test plan: The regression test plan occurs at two levels. In Client/Server development, regression testing happens between builds. Between system releases, regression testing also occurs postproduction. Each new build/release must be tested for three aspects:
· To uncover errors introduced by the fix into previously correct function.
· To uncover previously reported errors that remain.
· To uncover errors in the new functionality.

e. Multiuser performance test plan: This testing is necessary in order to uncover any unexpected system performance problems under load. This test plan should be constructed from Section V of the testing plan template -- Client/Server test plan based on application functionality. (See attached Appendix I)


3. Client/Server Testing in Different Layers:

3.1. Testing on the Client Side—Graphic User Interface Testing:

3.1.1 The complexity of Graphic User Interface testing is due to:
a. Cross-platform nature: The same GUI objects may be required to run transparently (providing a consistent interface across platforms, with the cross-platform nature unknown to the user) on different hardware and software platforms.
b. Event-driven nature: GUI-based applications have increased testing requirements because they operate in an event-driven environment where user actions are events that determine the application's behavior. Because the number of available user actions is very high, the number of logical paths in the supporting program code is also very high.
c. The mouse, as an alternate method of input, also raises some problems. It is necessary to assure that the application handles both mouse input and keyboard input correctly.
d. GUI testing also requires testing for the existence of files that provide supporting data/information for text objects. The application must be sensitive to their existence or nonexistence.
e. In many cases, GUI testing also involves testing the functions that allow end-users to customize GUI objects. Many GUI development tools give users the ability to define their own GUI objects. This requires the underlying application to be able to recognize and process events related to these custom objects.

3.1.2 GUI testing techniques:

Many traditional software testing techniques can be used in GUI testing.

a. Review techniques such as walkthroughs and inspections [8]. These human testing procedures have been found to be very effective in the prevention and early correction of errors. It has been documented that two-thirds of all of the errors in finished information systems are the result of logic flaws rather than poor coding [9]. Preventive testing approaches such as walkthroughs and inspections can eliminate the majority of these analysis and design errors before they reach the production system.

b. Data validation techniques: Some of the most serious errors in software systems have been the result of inadequate or missing input validation procedures. Software testing has powerful data validation procedures in the form of the Black Box techniques of Equivalence Partitioning, Boundary Analysis, and Error Guessing. These techniques are also very useful in GUI testing.

c. Scenario testing: a system-level Black Box approach that also assures good White Box logic-level coverage for Client/Server systems.

d. The decision logic table (DLT): DLT represents an external view of the functional specification that can be used to supplement scenario testing from a logic-coverage perspective. In DLTs, each logical condition in the specification becomes a control path in the finished system. Each rule in the table describes a specific instance of a pathway that must be implemented. Hence, test cases based on the rules in a DLT provide adequate coverage of the module’s logic independent of its coded implementation.

In addition to these traditional testing techniques, a number of companies have begun producing structured capture/playback testing tools that address the unique properties of GUIs. The difference between the traditional and structured capture/playback paradigms is that traditional capture/playback occurs at an external level: it records input as keystrokes or mouse actions, and output as screen images that are saved and compared against the inputs and output images of subsequent runs.

Structured capture/playback is based on an internal view of external activities. The application program's interactions with the GUI are recorded as internal “events” that can be saved as “scripts” written in a scripting language.

3.2 Testing on the Server Side---Application Testing:

Scripts can be designed to drive several kinds of server tests: load tests, volume tests, stress tests, performance tests, and data-recovery tests.

3.2.1 Client/Server loading tests:

Client/Server systems must undergo two types of testing: single-user, function-based testing and multiuser load testing.
Multiuser load testing is the best method to gauge Client/Server performance. It is necessary in order to determine the suitability of application server, database server, and web server performance. Because a multiuser load test requires emulating a situation in which multiple clients access a single server application, it is almost impossible to do without automation.

For the Client/Server load testing, some common objectives include:
· Measuring the length of time to complete an entire task
· Discovering which hardware/software configuration provides optimal performance
· Tuning database queries for optimal response
· Capturing Mean-Time-To-Failure as a measure of reliability
· Measuring system capacity to handle loads without performance degradation
· Identifying performance bottlenecks

Based on the test objectives, a set of performance measurements should be described. Typical measurements include:
· End-to-end response time
· Network response time
· GUI response time
· Server response time
· Middleware response time
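A rough sketch of the emulation idea in Python, using threads as simulated users and recording end-to-end response time; send_request here is a stand-in for a real client call, since no actual server is assumed:

import random
import threading
import time

def send_request():
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real client/server work

def client(times):
    start = time.time()
    send_request()
    times.append(time.time() - start)  # end-to-end response time

response_times = []
threads = [threading.Thread(target=client, args=(response_times,))
           for _ in range(50)]  # 50 simulated users
for t in threads:
    t.start()
for t in threads:
    t.join()
print("max response time: %.3f s" % max(response_times))

Commercial tools automate exactly this kind of emulation at much larger scale and add the other measurements listed above.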

3.2.2 Volume testing:
The purpose of volume testing is to find weaknesses in the system with respect to its handling of large amounts of data during extended time periods.

3.2.3 Stress testing:
The purpose of stress testing is to find defects in the system's capacity to handle large numbers of transactions during peak periods. For example, a script might require users to log in and proceed with their daily activities while, at the same time, a series of workstations emulating a large number of other systems run recorded scripts that add, update, or delete from the database.

3.2.4 Performance testing:
System performance is generally assessed in terms of response time and throughput rates under differing processing and configuration conditions. To attack performance problems, several questions should be asked first:
· How much application logic should be remotely executed?
· How much updating should be done to the database server over the network from the client workstation?
· How much data should be sent in each transaction?

According to Hamilton [10], the performance problems are most often the result of the client or server being configured inappropriately.

The best strategy for improving client-server performance is a three-step process [11]. First, execute controlled performance tests that collect data about volume, stress, and loading. Second, analyze the collected data. Third, examine and tune the database queries and, if necessary, provide temporary data storage on the client while the application is executing.

3.2.5 Other server side testing related to data storage:
· Data recovery testing
· Data backup and restoring testing
· Data security testing
· Replicated data integrity testing.

3.2.6 Examples for automated server testing tools:
LoadRunner/XL, offered by Mercury Interactive, is a Unix-based automated server testing tool that tests the server side of multiuser Client/Server applications. LoadRunner/PC is a similar product for Windows environments.

SQL Inspector and ODBC Inspector are tools for testing the link between the client and the server. These products monitor the database interface pipeline and collect information about all database calls or a selected subset of them.

SQL Profiler is used for tuning database calls. It stores and displays statistics about SQL commands embedded in Client/Server applications.

SQLEYE is an NT-based tool offered by Microsoft. It can track the information passed between SQL Server and its clients. Client applications connect indirectly to SQL Server through SQLEYE, which allows users to view the queries sent to SQL Server, the returned results, row counts, messages, and errors.

3.3 Networked Application Testing

Testing the network is beyond the scope of an individual Client/Server project as it may serve more than a single Client/Server project. Thus, network testing falls into the domain of the network management group. As Robert Buchanan [12] said: “If you haven’t tested a network solution, it’s hard to say if it works. It may ‘work’. It may execute all commands, but it may be too slow for your needs”.

Nemzom blames the majority of network performance problems on insufficient network capacity [13]. He views bandwidth and latency as the critical determinants of network speed and capacity. He also sees interactions among intermediate network nodes (switches, bridges, routers, and gateways) as adding to the problem.

Elements of network testing include:
· Application response time measures
· Application functionality
· Throughput and performance measurement
· Configuration and sizing
· Stress testing and performance testing
· Reliability

It is necessary to measure application response time while the application is completing a series of tasks. This kind of measure reflects the user's perception of the network and is applicable throughout the entire network life cycle. Testing application functionality involves testing shared functionality across workstations, shared data, and shared processes; this type of testing is applicable during development and evolution. Configuration and sizing tests measure the response of specific system configurations; this is done for different network configurations until the desired performance level is reached. The point of stress testing is to overload network resources such as routers or hubs. Performance testing can be used to determine how many network devices will be required to meet the network's performance requirements. Reliability testing involves running the network for 24-72 hours under a medium-to-heavy load. From a reliability point of view, it is important that the network remain functional in the event of a node failure.

4 Special Concerns for Internet Computing --- Security Testing:

For internet-based Client/Server systems, security testing for the web server is important. The web server is your LAN’s window to the world and, conversely, is the world’s window to your LAN.

The following excerpt is taken from the WWW Security FAQ [14]:

It’s a maxim in system security circles that buggy software opens up security holes. It’s a maxim in software development circles that large, complex programs contain bugs. Unfortunately, web servers are large, complex programs that can contain security holes. Furthermore, the open architecture of web server allows arbitrary CGI scripts to be executed on the server’s side of the connection in response to remote requests. Any CGI script installed at your site may contain bugs, and every such bug is a potential security hole.

Three types of security risks have been identified [15]:

1. The primary risk is errors in, or misconfiguration of, the web server that would allow remote users to:
· Steal confidential information
· Execute commands on the server host, thus allowing the users to modify the system
· Gain information about the server host that would allow them to break into the system
· Launch attacks that will bring the system down.
2. The secondary risk occurs on the Browser-side
· Active content that crashes the browser, damages your system, breaches your company’s privacy, or creates an annoyance.
· The misuse of personal information provided by the end user.
3. The tertiary risk is data interception during data transfer.

The above risks are also the focus of web server security testing. As a tester, it is your responsibility to test whether the security provided by the server meets the user's expectations for network security.


Summary:

Client/Server system development is the preferred method of constructing cost-effective department- and enterprise-level strategic corporate information systems. It allows the rapid deployment of information systems in end-user environments.

Both traditional Client/Server and netcentric computing are tiered architectures. Currently, the three dominant types of Client/Server architectures are the 2-tiered architecture, the modified 2-tiered architecture, and the 3-tiered architecture. The 2-tiered architecture is the simplest one to implement, and the simplest one to test. The characteristics of the 3-tiered architecture that make it desirable as a development and implementation framework at the same time make testing more complicated.

Testing Client/Server software cannot be planned from the perspective of traditional integrated testing activities. In a Client/Server testing plan, some specific considerations, such as different hardware and software platforms, network and database server performance issues, the replication of data and processes across networked servers, etc. need to be addressed.

The complexity of GUI (Graphic User Interface) testing is increased by some characteristics of GUIs, for instance their cross-platform nature, event-driven nature, and an additional input method -- the mouse. Many traditional software testing techniques can be used in GUI testing. Currently, a number of companies have begun producing structured capture/playback tools that address the unique properties of GUIs.

Scripts can be designed to drive several kinds of server tests: load tests, volume tests, stress tests, performance tests, and data-recovery tests. These types of testing are nearly impossible without automation. Some sophisticated testing tools used in server-side testing have already emerged in the market, such as LoadRunner/XL, SQL Inspector, SQL Profiler, and SQLEYE.

Network testing is a necessary but difficult series of tasks. Its difficulty is compounded by the fact that Client/Server development may be targeted for an existing network or for one that is yet to be installed. Proactive network management and proper capacity planning will be very helpful. In addition, performance and stress testing can ease the network testing burden.

For internet-based Client/Server systems, security testing for the web server is important. The web server is your LAN's window to the world and, conversely, is the world's window to your LAN. As a tester, it is your responsibility to find weaknesses in the system's security.


Thursday, November 15, 2007

Wayanad .....

I am from Kalpetta, the heart of Wayanad. The majority of families in the Kalpetta area migrated from other districts of Kerala. The nearest forest, Vythiri, is 20 minutes away from my home. Our family name is Palliyaallil. My grandfather (the late Achuthanathan) migrated from Baalushery to Vythiri in 1952. Agricultural commodities are a major source of income for our family. We cultivate black pepper, coffee, ginger, vanilla, and coconut.


History
Wayanad, one of the fourteen districts of Kerala (India), is situated on an elevated, picturesque mountainous plateau in the Western Ghats. It is a quiet place where scenic beauty, wildlife, and tradition matter, simplicity is a virtue, and beauty still blossoms from the mountainous horizon and the green glaze of alluring vegetation. The Wayanad hills are contiguous with Mudumala in Tamil Nadu and Bandhipur in Karnataka, forming a vast land mass for wildlife to move about in its most natural abode. It is now the land of black pepper, coffee, tea, and ginger.
In ancient times this land was ruled by the Rajas of the Veda tribe. In later times, Wayanad came under the rule of the Pazhassi Rajahs of the Kottayam royal dynasty. When Hyder Ali became the ruler of Mysore, he invaded Wayanad and brought it under his sway. In the days of Tipu, Wayanad was restored to the Kottayam royal dynasty. But Tipu handed the entire Malabar over to the British after the Sreerandapattam truce he made with them. This was followed by fierce encounters between the British and Kerala Varma Pazhassi Rajah of Kottayam. Even when the Rajah was driven into the wilderness of Wayanad, he waged several battles against the British troops with his Nair and Kurichia-Kuruma tribal soldiers, and defeated them several times through guerilla-style encounters. The British could recover only the dead body of the Rajah, who killed himself somewhere in the interior forest. Thus Wayanad fell into the hands of the British, and with it began a new turn in the history of this area. The British opened up the plateau for cultivation of tea and other cash crops. Roads were laid across the dangerous slopes of Wayanad from Calicut and Telicherry. These roads were extended to the city of Mysore and to Ooty through Gudalur. Road facilities gave people from outside Wayanad the opportunity to flow into and settle in these jungle regions.
When the state of Kerala was formed in 1956, Wayanad was part of Kannur district. Later South Wayanad was added to Kozhikode district and then on November 1, 1980 North and South Wayanad joined together to form the present Wayanad district.
Tourism
Lakkidi Ghat Pass
:- It is the gateway to Wayanad, above the Thamarassery Ghat Pass of the Western Ghats, at an elevation of 700m above mean sea level. The deep valley to the south, with winding roads through thick forest, attracts many. It is 55 kms east of Kozhikode and 5 kms south of Vythiri.
Chembra peak :- Trekking to Chembra peak is one of the riskier tourist endeavours. Chembra is the highest peak in Wayanad, at 2100m above mean sea level, 14 kms west of Kalpetta. Trekking to the top of this peak takes almost a day. Tourists can also stay one or two days at the top of the peak in temporary camps. The District Tourism Promotion Council provides guides, sleeping bags, canvas huts, and trekking implements on hire to tourists. The scenic beauty of Wayanad visible from the top of Chembra is challenging and thrilling. The blue water in the lake at the top of the hill never dries up, even at the peak of summer. All along the steep and slippery way to the top, the whispering of the flowing spring which sprouts from the top of the hill accompanies the tourist. If he is fortunate, he may come across a passing wild beast on his way, maybe a leopard, which may instantly hide behind the bushes. Camping in the night with a camp fire and sleeping bags at the top of the peak, in shivering cold, is an everlasting experience.
Pakshipathalam :- Pakshipathalam in the Bramha Giri hills at Thirunelly is a challenging tourist spot. It is 7 kms north-east of Thirunelly temple, situated 1740m above mean sea level. To reach Pakshipathalam, 17 kms have to be covered through wild forest. The deep rock caves formed among the thick blocks of rock at the northern top end of Brahmagiri are the abode of various birds and wild beasts. To go to Pakshipathalam, special permission has to be obtained from the forest department. The DTPC (District Tourism Promotion Council) arranges vehicles, guides, camping apparatus, etc. for tourists on hire.
Meenmutty Water Fall :- The water falls to a depth of more than 500m in 3 steps. It is 12 kms east of Meppadi.
Pookot Lake Tourist Resort :- This resort in Vythiri is the most sought-after tourist spot in Wayanad. Boating facilities are available on the very vast natural lake, which lies in the lap of the surrounding mountains. Thick bushes and tall trees along the path around the lake give it a calm, spiritual atmosphere. A fresh-water aquarium with a wide variety of fishes is managed by the Fisheries Department. A children's park and a shopping centre for handicrafts and spices of Wayanad are arranged by the DTPC.
Kuruva Dweep :- 950 acres of evergreen forest surrounded by the east-flowing river Kabani. Rare species of birds, orchids, and herbs are the sovereigns of this supernatural kingdom. It is 17 kms east of Mananthavady and 45 kms north-west of Sulthan Bathery.
Sentinal Rock Water Fall :- A three-step waterfall of more than 200m in height with fantastic scenery, providing for white-water rafting, swimming, bathing, etc. The tree-top huts at Soochipara give a unique view of the valleys of the Western Ghats. It is also an ideal place for rock climbing. It is at Soochipara near Meppadi, 22 kms south of Kalpetta.
Edakkal Caves :- The Edakkal Caves are at Ambukutty Mala. It is a pre-historic rock shelter formed naturally out of a strange disposition of three huge boulders, making one rest on the other two with its bottom jutting out in between and serving as the roof. Edakkal literally means a stone in between.
The discovery of the cave and its identification as a prehistoric site were quite accidental, made by F. Fawcett, the then superintendent of police. An enthusiast in pre-history, Fawcett went around exploring the Wayanad high ranges, which eventually led to the discovery of the Edakkal rock-shelter in 1894. He identified the site as a habitat of neolithic people on the basis of the representations on the cave walls, which appeared to him to be engravings made with neolithic celts. The Edakkal rock engravings stand out as distinct among the multitude of prehistoric visual archives of paintings and graphic signs all over the world. It is the world's richest pictographic gallery of its kind.
Thirunelly Temple :- It is known as the 'Thekkan Kasi' of Kerala. It is believed that a dip in the river Papanasini, running crystal clear downhill, washes away all one's sins. Thirunelly is 30 kms north-east of Mananthavady.


Some Snaps of Wayanad:-
(photographs)
Nearby Attractions

Bangalore - (The City of Gardens, 250 Km from SBT)
Mysore - (The City of Palaces, 110 Km from SBT)
Ooty - (The Nilgiris, 80 Km from SBT)
Nagarhole - (National Park, 40 Km from Mananthavady)
Muthanga - (The Elephant Training Centre, 30 Km from SBT)
Kerala Forest - (Land of Tribals, 1/2 Km from our home)
Mysore Forest - (Land of Tipu Sultan, 2 Km from our home)
Kabani River - (The East-Flowing River, 1 Km from our home)

Skills of a Tester’s Skull

Abstract

Software Testing is one of the key practices in the Software Development Life Cycle, and it requires diversified skills. Because developers find it psychologically difficult to spot defects in their own code, they cannot test their code effectively. Hence arises the need for an Independent Testing Group, which approaches the code with a different perception and can therefore test it effectively.

Given this scenario,

1) What are the unique skills required for “Independent” Testers (which may or may not be required for a developer)?
2) What are the best practices that an “Independent” Tester needs to adopt?

This paper tries to find answers to the above questions. Biologically, the skills and talents of a human being are managed by the brain, hence the title “Skills of a Tester’s Skull”. “Tester’s Skull” does not literally mean the skull of a tester; it refers to the tester’s brain inside the skull.


Understanding Skills

The first and foremost activity of Software Testing is to understand the requirements and functionalities of the software to be tested. Formal documents like Software Requirement Specifications, Software Functional Specifications and Use Case Specifications, and other documents like Minutes of Meetings, serve as the key references for planning and executing the testing tasks. Testers must have very good comprehension skills to read and understand the information written in these documents. Often, the information presented in a document admits more than one interpretation, so testers must be able to identify duplicate and ambiguous requirements. If a requirement is unclear or ambiguous, the tester must identify its source and get it clarified. In most projects, the sources of requirements are Business Analysts, Business Users, or any other competent authority identified by the Project Management team. Testers should analyze and correlate the information gathered within the perspective of the project.
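
As a simple illustration of hunting for ambiguity, here is a minimal Python sketch; the vague-term list and the sample requirements are hypothetical assumptions, not drawn from any standard:

# Minimal sketch: flag requirements containing vague wording for clarification.
# The term list and the sample requirements are illustrative assumptions only.
AMBIGUOUS_TERMS = {"fast", "user-friendly", "appropriate", "flexible", "as needed"}

def flag_ambiguous(requirements):
    """Return (requirement, matched vague terms) pairs that need clarification."""
    flagged = []
    for req in requirements:
        hits = {term for term in AMBIGUOUS_TERMS if term in req.lower()}
        if hits:
            flagged.append((req, hits))
    return flagged

requirements = [
    "The system shall respond fast under normal load.",
    "The system shall log every failed login attempt.",
]
for req, hits in flag_ambiguous(requirements):
    print(f"Clarify with the requirement's source: {req!r} (vague terms: {sorted(hits)})")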

Listening Skills

Documents are not the only source of reference for testing activities. Information required for testing may also be acquired through offline meetings, seminars, conferences, etc., and the minutes of these meetings, conferences and seminars may or may not be recorded in a formal document. Testers must have very good active listening skills in order to collate and correlate all of that information and refer to it during testing. When the requirements or functionalities of the software are discussed in a meeting, some part of the requirements is often missed. Testers should be able to identify such gaps and get them clarified before heading into the subsequent testing phases.

Test Planning Skills

All software requirements shall be testable, and the software shall be designed in such a way that this is so. The test plan shall be formulated in a way that paves the way for validating all the software requirements. In real-world scenarios, however, there may be requirements that are not directly testable. A tester with good test planning skills should be able to find a workaround to test those requirements, and if there is no way to test them, that shall be communicated clearly to the appropriate authority. There may also be requirements that are very complex to test, and the tester should be able to identify the best approach to test them.

Test Design Skills

Software Testing literature prescribes many techniques, such as Equivalence Class Partitioning, Boundary Value Analysis and Orthogonal Arrays, for effective test design. Testers shall be aware of these techniques and apply them in their test design practice. The tester shall be aware of the various formats and templates used to author and present test cases and test procedures in a neat fashion, shall be aware of the best practices and accepted standards for designing test cases, and shall know how to write test cases that are unambiguous, simple and straight to the point.
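
To make Equivalence Class Partitioning and Boundary Value Analysis concrete, here is a minimal sketch; the field accepting integers from 1 to 100 is a hypothetical requirement used only for illustration:

def boundary_values(low, high):
    """Classic boundary value analysis: values at and on either side of each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def equivalence_classes(low, high):
    """One representative value per class: below range, in range, above range."""
    return {"invalid_low": low - 10, "valid": (low + high) // 2, "invalid_high": high + 10}

# For a hypothetical field that accepts integers 1..100:
print(boundary_values(1, 100))       # [0, 1, 2, 99, 100, 101]
print(equivalence_classes(1, 100))   # {'invalid_low': -9, 'valid': 50, 'invalid_high': 110}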

A test case needs to contain a Test Case Description, Test Steps and their corresponding expected results. The tester shall know how to present the content of these three parts effectively, in such a way that they can be read without any ambiguity by all project stakeholders.

Test Execution Skills

Test Execution is nothing but executing the steps specified in the test design documents. During execution, testers shall capture the actual results and compare them against the expected results specified in the test design documents. Any deviation between the expected and actual results shall be treated as a defect. The tester shall analyze the cause of the defect; if it is found and confirmed in the application under test, it shall be communicated to the developers and fixed. If the cause of the defect lies in the test case, it shall be communicated to the test designers and the test case shall be modified accordingly. Testers who are not confident about the application functionalities and the test design documents may not be able to come to a confident conclusion about a discrepancy; this leads to defects being leaked to the next phase, and testers need to avoid this scenario. Testers shall be confident about the application functionalities, and any ambiguities need to be sorted out before executing the tests, or at the latest during test execution.
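
A minimal sketch of this expected-versus-actual comparison follows; the test steps and the run_step() stub standing in for the application under test are hypothetical:

# Minimal sketch: execute test steps, compare actual against expected results,
# and record any deviation as a candidate defect for analysis.
def run_step(action):
    # Hypothetical stub standing in for driving the application under test.
    return "Welcome page shown"

test_case = [
    ("Log in with valid credentials", "Welcome page shown"),
    ("Open the reports tab",          "Reports list shown"),
]

deviations = []
for action, expected in test_case:
    actual = run_step(action)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {action}")
    if status == "FAIL":
        deviations.append({"step": action, "expected": expected, "actual": actual})

print(f"{len(deviations)} deviation(s) to analyze before raising defect reports")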

Defect Reporting Skills

Defect Reports are among the most critical deliverables of a tester. Defect reports are viewed by the development team, business analysts, project managers, technical managers and quality assurance engineers, along with the testers. Hence, a defect report shall carry enough information about the defect. Steps to reproduce the defect, the expected result and the actual result, along with other information such as Severity, Priority, Assigned To (developer) and Test Environment details, are critical for a defect report; without them the report is incomplete. The tester shall be aware of the importance of the defect report and shall write it in a way that is unambiguous. While fixing the defect, the developers may come back to the testing team for more information, and the tester shall provide it without fail.
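
As a sketch, the critical fields listed above can be captured in a simple structure; the field names and sample values are illustrative, not a mandated schema:

# Minimal sketch of a defect report carrying the fields named above.
# Field names are illustrative; real defect trackers differ in detail.
from dataclasses import dataclass

@dataclass
class DefectReport:
    summary: str
    steps_to_reproduce: list
    expected_result: str
    actual_result: str
    severity: str      # e.g. Critical / Major / Minor
    priority: str      # e.g. P1 / P2 / P3
    assigned_to: str   # the developer the defect is routed to
    environment: str   # test environment details

report = DefectReport(
    summary="Login fails for a valid user",
    steps_to_reproduce=["Open login page", "Enter valid credentials", "Click Login"],
    expected_result="User lands on the home page",
    actual_result="HTTP 500 error page is shown",
    severity="Major",
    priority="P1",
    assigned_to="developer name",
    environment="Windows XP, IE 6, application build 1.2.3",
)
print(report.summary, "->", report.severity, "/", report.priority)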

Test Automation Skills

Test Automation is a powerful technique by which the cost of testing can be drastically reduced. Once manual test cases are automated, they can be executed by running the automated test scripts, so the manual effort to run those test cases is no longer necessary and the total test effort is reduced. Testers shall be aware of the techniques for adopting test automation into the current testing process. Identifying the test automation candidates is critical to the success of an automation project: candidates shall be identified such that the cost of manual test execution is reduced significantly, which involves a good deal of thought from the financial perspective as well, as sketched below. Testers shall understand the do's and don'ts of automation to make the automation project successful.
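
A back-of-the-envelope break-even check makes this financial angle concrete; all of the cost figures below are hypothetical assumptions:

# Minimal sketch: break-even point for automating one test case.
cost_to_automate = 8.0        # hours to script the test once (assumed)
maintenance_per_run = 0.05    # hours of script upkeep per execution (assumed)
manual_cost_per_run = 0.5     # hours to execute the test manually (assumed)

# Automation pays off once the cumulative manual cost exceeds the cumulative
# automation cost: manual * n > cost_to_automate + maintenance * n
break_even_runs = cost_to_automate / (manual_cost_per_run - maintenance_per_run)
print(f"Automation pays off after about {break_even_runs:.0f} executions")
# A test expected to run only a handful of times is a poor automation candidate.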

Conclusion

Testers shall understand, learn and be confident about the application functionalities. Test planning, test design, test execution and defect reporting are the basic and essential skills that a tester shall possess and develop over his or her day-to-day career. Professionals who have perfected these skills are called “Testing Professionals”, “Testers” or “Testing Engineers”. Hopefully, by now, you are a tester…

IEEE Software Engineering Standards

IEEE: Institute of Electrical and Electronics Engineers
A transnational organization whose origins date back to 1884, IEEE consists of dozens of specialized societies within geographical regions throughout the world. Software testing standards are developed within the technical committees of the IEEE Societies and the Standards Coordinating Committees of the IEEE Standards Board.
These standards are created through a process of obtaining the consensus of practicing professionals. This consensus process, which includes careful discussion and debate among the members of the various committees, who serve voluntarily, is one of the fundamental themes of the standards process. Another key theme is to provide standards in a timely manner: the time from project approval to standard approval is approximately three years.

To obtain the full version of the IEEE Standards, write to:
IEEE
445 Hoes Lane
PO Box 1331
Piscataway, NJ 08855-1331


The following standards are those a tester should be aware of; a short abstract of each is included:

610.12-1990:
IEEE Standard Glossary of Software Engineering Terminology
Topics covered include addressing; assembling, compiling, linking and loading; computer performance evaluation; configuration management; data types; errors, faults, and failures; evaluation techniques; instruction types; language types; libraries; microprogramming; operating systems; quality attributes; software documentation; software and system testing; software architectures; the software development process; software development techniques; and software tools. This standard promotes clarity and consistency in the vocabulary of software engineering and associated fields.

730-1998: IEEE Standard for Software Quality Assurance Plans
The purpose of this standard is to provide uniform, minimum acceptable requirements for preparation and content of SQAPs (Software Quality Assurance Plans).
The plan includes the following sections:
§ Purpose: defines the purpose and scope of the plan.
§ Reference Documents: a complete list of documents referenced within the SQAP.
§ Management: describes the organization, tasks and responsibilities.
§ Documentation: identifies the documentation governing the development, verification and validation, use, and maintenance of the software. Documents include: Software Requirements Specification (SRS), Software Design Description (SDD), Software Verification and Validation Plan (SVVP), Software Verification and Validation Report (SVVR), user documentation, Software Configuration Management Plan (SCMP), Software Development Plan, standards and procedures manual, Software Project Management Plan, and software maintenance manual.
§ Standards, practices, conventions, and metrics: identifies the standards, practices, conventions and metrics to be applied, and states how compliance with these items is to be monitored and assured.
§ Reviews and audits: defines the technical and managerial reviews and audits to be conducted, states how they are to be accomplished, and states what further actions are required and how they are to be implemented and verified.
§ Test: identifies all the tests not included in the SVVP for the software covered by the SQAP and states what methods are to be used.
§ Problem reporting and corrective action: describes the practices and procedures to be followed for reporting, tracking, and resolving problems identified both in software items and in the software development and maintenance process.
§ Tools, techniques, and methodologies: identifies the special software tools, techniques, and methodologies that support SQA, states their purposes, and describes their use.
§ Code control: defines the methods and facilities used to maintain, store, secure, and document controlled versions of the identified software during all phases of the software life cycle.
§ Media control: identifies the media for each computer product and the documentation required to store the media, including the copy and restore process, and protects the physical media of the computer program from unauthorized access or inadvertent damage or degradation during all phases of the software life cycle.
§ Supplier control: states the provisions for assuring that software provided by suppliers meets established requirements, the methods used to assure that the software supplier receives adequate and complete requirements, and the methods employed to assure that the developers comply with the requirements of this standard.
§ Records collection, maintenance, and retention: identifies the SQA documentation to be retained, states the methods and facilities to be used to assemble, safeguard, and maintain this documentation, and designates the retention period.
§ Training: identifies the training activities necessary to meet the needs of the SQAP.
§ Risk management: specifies the methods and procedures employed to identify, assess, monitor, and control areas of risk arising during the portion of the software life cycle covered by the SQAP.

828-1998: IEEE Standard for Software Configuration Management Plans
This standard establishes the minimum required contents of a Software Configuration Management (SCM) plan. It is supplemented by 1042-1987, which provides approaches to good software configuration management planning. This standard applies to the entire life cycle of critical software. The plan documents what SCM activities are to be done, how they are to be done, who is responsible for specific activities, when they are to happen, and what resources are required.

829-1998: IEEE Standard for Software Test Documentation
This standard describes a set of basic test documents that are associated with the dynamic aspects of software testing. The standard defines the purpose, outline, and content of each basic document. Documents included:
§ Test Plan
§ Test Design Specification
§ Test Case Specification
§ Test Procedure Specification
§ Test Item Transmittal Report (a document identifying test items)
§ Test Log
§ Test Incident Report
§ Test Summary Report

830-1998: IEEE Recommended Practice for Software Requirements Specifications
This standard describes the recommended approaches for the specification of software requirements. It describes the content and qualities of a good software requirements specification (SRS), and includes several sample SRS outlines.

1008-1987 (R1993): IEEE Standard for Software Unit Testing (ANSI)
This standard defines an integrated approach to systematic and documented unit testing. The approach uses unit design and unit implementation information, in addition to unit requirements, to determine the completeness of the testing. The standard describes a testing process composed of a hierarchy of phases, activities, and tasks. Further, it defines a minimum set of tasks for each activity, although additional tasks may be added to any activity.

1012-1998:
IEEE Standard for Software Verification and Validation
The purpose of this standard is to:
1. Establish a common framework for V&V processes, activities, and tasks in support of all software life cycle processes, including acquisition, supply, development, operation, and maintenance processes.
2. Define the V&V tasks, required inputs, and required outputs
3. Identify the minimum V&V tasks corresponding to software integrity levels using a four-level scheme
4. Define the content of a software V&V Plan (SVVP)

1012a-1998: IEEE Standard for Software Verification and Validation (Supplement to 1012-1998 – Content Map to IEEE 12207.1)
The two standards (1012 and 12207.1) both place requirements on plans for the verification and validation of software. The purpose of this annex is to explain the relationship between the two sets of requirements, so that users producing documents intended to comply with both standards may do so.

1016-1998: IEEE Recommended Practice for Software Design Descriptions
This is a recommended practice for describing software designs. It specifies the necessary information content, and recommended organization for a Software Design Description (SDD). An SDD is a representation of a software system that is used as a medium for communicating software design information.

1028-1997: IEEE Standard for Software Reviews
The purpose of this standard is to define systematic reviews applicable to software acquisition, supply, development, operation, and maintenance. This standard describes how to carry out a review. Other standards or local management define the context within which a review is performed, and the use made of the results of the review. Software reviews can be used in support of the objectives of project management, system engineering, verification and validation, configuration management, and QA. Different types of reviews reflect differences in the goals of each review type. Systematic reviews are described by their defined procedures, scope, and objectives.

1044-1993: IEEE Standard Classification for Software Anomalies (ANSI)
(Anomaly: any condition that departs from the expected. The expectation can come from documentation or someone’s perceptions or experiences). The methodology of this standard is based on a process (sequence of steps) that pursues a logical progression from the initial recognition of an anomaly to its final disposition. Each step interrelates with and supports the other steps.

1045-1992: IEEE Standard for Software Productivity Metrics (ANSI)
This standard describes the data collection process and calculations for measuring software productivity.

1058-1998: IEEE Standard for Software Project Management Plans
This standard prescribes the format and content of Software Project Management Plans (SPMPs). An SPMP is the controlling document for managing a software project; it defines the technical and managerial processes necessary to develop software work products that satisfy the product requirements.

1058.1-1987(R1993): IEEE Standard for Software Project Management Plans (ANSI)
Explains the relationship between the 1058 standard and the 12207.1 standard, so that users producing documents intended to comply with both standards may do so.

1061-1998: IEEE Standard for Software Quality Metrics Methodology
Scope: Provides a methodology for establishing quality requirements and identifying, implementing, analyzing, and validating process and product software quality metrics. This methodology applies to all software at all phases of any software life cycle.

Framework: Software quality is the degree to which software possesses a desired combination of quality attributes. The purpose of software metrics is to make assessments throughout the software life cycle as to whether the software quality requirements are being met. The use of software metrics reduces subjectivity in the assessment and control of software quality by providing a quantitative basis for making decisions about software quality. However, the use of software metrics does not eliminate the need for human judgment in software assessments. The use of software metrics within an organization or project is expected to have a beneficial effect by making software quality more visible.
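
For instance, a simple product metric such as defect density provides exactly this kind of quantitative basis; the figures below are hypothetical:

# Minimal sketch: defect density, a common quantitative quality metric.
defects_found = 42     # defects reported against the release (assumed)
size_kloc = 12.5       # product size in thousands of lines of code (assumed)

defect_density = defects_found / size_kloc
print(f"Defect density: {defect_density:.2f} defects/KLOC")   # 3.36 defects/KLOC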

Other External Standards

The use of standards simplifies communication, promotes consistency and uniformity, and eliminates the need to invent yet another (often different and even incompatible) solution to the same problem. Standards, whether ‘official’ or merely agreed upon, are especially important when we’re talking to customers and suppliers, but it’s easy to underestimate their importance when dealing with different departments and disciplines within an organization. They also provide vital continuity, so that we are not forever reinventing the wheel. They are a way of preserving proven practices beyond the inevitable staff changes within organizations.
Some standards are particularly important to the testing practitioner. They can provide a benchmark for writing documents like requirements, so that testers and others doing verification have a framework for what they can expect to find. More specifically, standards tell us what to put into key test documents, such as a test plan.
Standards are not only practical from the development point of view, but they are increasingly the basis for contracts and therefore also, when things go wrong, for litigation. One of the issues that arises in litigation is whether the software was developed according to the known standards prevalent in the industry today. This means we need to know not only what the standards are, but also to see that they are applied.

ISO – International Organization for Standardization
ISO 9000: addresses the quality management system of an organization.
ISO 9001: the base international standard for quality management.
ISO 9000-3: a guidebook on how ISO 9000 applies to software.
TickIT: the UK scheme for certifying organizations producing software according to ISO 9001.

SPICE – Software Process Improvement and Capability Determination
WG10: the Software Process Assessment working group of the ISO.
SPICE was created to develop a suite of related standards and guidebooks. The purpose is to create a consistent standard for software process assessment that can be used by different nations and different sectors. The SEI (Software Engineering Institute) has worked closely with this group, including providing the CMM (Capability Maturity Model) as input to the effort.

NIST – National Institute of Standards and Technology
A non-regulatory federal agency within the Commerce Department’s Technology Administration. NIST's mission is to promote economic growth by working with industry to develop and apply technology, measurements, and standards. NIST carries out its mission through four interwoven programs: NIST Laboratories, Baldrige National Quality Program, Manufacturing Extension Partnership, and Advanced Technology Program. NIST programs are helping improve the quality and capabilities of software used by businesses, research institutions, and consumers. As a result of these programs, many software packages are more efficient and can exchange data with each other. More info at: http://www.nist.gov/

DoD – Department of Defense
As the DoD Executive Agent for Information Standards, CFS (Center for Standards) influences, adopts, develops, promulgates, and maintains standards for OSD, CINCs, Services, Agencies and the international defense community. CFS leads DoD's IT standards activities, performs interoperability assessments, and facilitates the interoperability of customer IT systems.
GOALS
Identify, consolidate, and coordinate requirements for information technology standards.
Advocate DoD requirements in standards bodies to permit DoD adoption of commercial standards versus development of military standards.
Actively pursue forming partnerships with technology providers to promote timely delivery of products supporting approved standards.
Develop a framework and provide guidance for applying information technology standards through updating and publishing the Joint Technical Architecture (JTA).
Automate processes to the greatest extent possible in all standardization efforts to expedite development and coordination of standards and guidance.
Facilitate the use of these standards by program managers and engineers.

Wednesday, November 14, 2007

Software Reengineering

The Myth of Software Reengineering

What is Software Reengineering?

Software systems development is never complete. Changes, even though not mandatory, remain desirable even after the product has shipped to the market. Reengineering is the analysis of an existing software system and its modification to constitute a new form. The goal of reengineering is to understand the functionality and implementation of existing software and to re-implement it so as to improve the system's functionality and performance. Replacing a system outright is expensive, so reengineering is often the better choice.

The four general re-engineering objectives are:

· Improve maintainability
· Migration
· Improve reliability
· Functional enhancement


As systems grow, maintenance costs also increase. One objective of reengineering is to re-design the system to improve its functionality and performance. The software we use gets outdated every now and then, so software must also accommodate the need to migrate to a new hardware platform, operating system, or language.

As maintenance and change introduce new bugs, software reliability should be enhanced in the re-designed software. Reengineering is not targeted at functional enhancement, but it can be used as an opportunity to add functionality to the existing system.

Re-engineering Steps

Several models are available for re-engineering, but at the outset our ultimate goal is to analyze the system and understand its functionalities, both primary and contributory. Primary functionalities define the basic behaviour the system should exhibit. For example, changing the font size is a primary functionality for any word-processing document, but importing an Excel sheet into a Word document is a contributory functionality. While analyzing the system we identify these functionalities; the requirements documents and other documents are also used in this process. The ultimate objective of analysis is to get familiarized with the system.

We have software in different domains: banking, finance, insurance and so on, and we have several life cycle models. Some models may be conventionally applied to certain domains, so the next step is the selection of an appropriate model to re-work the code.

We will arrive at a generalized model that can be used for re-engineering any software:

· Analysis
· Redesign
· Implementation
· Testing


ANALYSIS AND REDESIGN

Analysis is concerned with identifying the functionalities. It is closely associated with reverse engineering, which seeks to retrieve information such as design information from code. Reverse engineering is the first stage of the re-engineering process: exploring the software system, identifying the interrelationships between different parts of that system, and identifying reusable components. The goals of reengineering are minimization of expenditure and software reuse.

The initial stage of analysis is a reverse engineering activity, and the later stage deals with proper analysis and thereby a proper understanding of the software. The reengineering team identifies the needs and reshapes them into requirements. As in ordinary software analysis, the results are monitored and documented. Analysis may also reveal new functionalities. A well-documented analysis contributes to a quality design, helping the development team build reliable software.

After the requirements have been specified as the output of analysis, the next activity, re-design, takes place.

Redesign

Redesign may add new design elements to add functionality to the system. An effective and better solution is the objective of redesign. Redesign has activities similar to the design phase of any software engineering project; the difference lies in how we consider the old design elements and integrate them into the new design. Redesign of an object-oriented system focuses on identifying the parts of the system and how they are related. If the old design is compatible with the new one, the time required to deliver the software will be short.

Implementation

Implementation here is a forward engineering process. Forward engineering is similar to the normal software development process, starting from the traditional process of moving from high-level abstractions and logical, implementation-independent designs to the physical implementation of a system.

Forward engineering uses the new design developed during the analysis and design phases to move from a high-level design to a low-level implementation of the desired system.

The difference between reverse engineering and forward engineering, as given by Sommerville, is illustrated in the figure.
Forward engineering moves from a high-level abstraction and design to a low-level implementation. It consists of modularization, coding and testing steps.

Modularization: the extent to which software can be divided into modules that have high internal cohesion and low coupling. The result of modularization is maintainable code and reduced program complexity.
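
As an illustrative sketch of high cohesion and low coupling, consider the following; the split into pricing and reporting concerns is hypothetical:

# Each block below would live in its own module; they are shown together here.

# pricing module: every function works on a single concern, prices (high cohesion)
def net_price(gross, tax_rate):
    return gross / (1 + tax_rate)

def apply_discount(price, percent):
    return price * (1 - percent / 100)

# reporting module: depends only on pricing's public functions, not on its
# internals (low coupling)
def price_line(gross, tax_rate, discount):
    price = apply_discount(net_price(gross, tax_rate), discount)
    return f"final price: {price:.2f}"

print(price_line(119.0, 0.19, 10))   # final price: 90.00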

Implementation: the process of converting the design into an executable software system through programming and related activities. Quality cannot be added during the test or maintenance phases; we must strive to produce quality code at this stage.

Testing: testing is not limited to finding bugs. Testing should be used to ensure quality and to improve the system through feedback, so that errors are prevented rather than corrected. We have discussed testing at length in previous issues.

So far we have discussed re-engineering, reverse engineering and forward engineering. The three are related as shown in Fig. 1.

Fig. 1 Reengineering model


So far we have discussed the reengineering of a system. A common root cause of reengineering is BPR (Business Process Reengineering). BPR may occur in order to compete better in the market, or as a result of experience or management decisions to use alternative approaches. BPR may also be the result of the growth or rearrangement of a company.

Epilogue

“Software Never Dies”. The wrong perception is that software is easy to change. This article is the tip of the iceberg of Software Reengineering. Software development requires creativity, precision, and a willingness to learn and analyze new things.

Risk management

Risk management is the process of measuring or assessing risks and then developing strategies to manage them. In ideal risk management, a prioritization process is followed whereby the risks with the greatest loss and the greatest probability of occurring are handled first, and risks with lower probability of occurrence and lower loss are handled later.
In practice the process can be very difficult, and the balance between risks with a high probability of occurrence but lower loss and risks with high loss but lower probability of occurrence can often be mishandled.
Risk management also faces a difficulty in allocating resources properly. This is the idea of opportunity cost. Resources spent on risk management could be instead spent on more profitable activities. Again, ideal risk management spends the least amount of resources in the process while reducing the effects of risks as much as possible.

Steps in the risk management process

Identification and assessment
A first step in the process of managing risk is to identify potential risks. The risks must then be assessed as to their potential severity of loss and their probability of occurrence.
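
A minimal sketch of this assessment step could rank risks by expected loss (probability multiplied by loss); the risk entries and figures are hypothetical:

# Minimal sketch: prioritize risks by expected loss = probability x loss.
risks = [
    {"name": "Key server outage", "probability": 0.10, "loss": 50000},
    {"name": "Data entry errors", "probability": 0.80, "loss": 2000},
    {"name": "Vendor bankruptcy", "probability": 0.05, "loss": 200000},
]

for risk in sorted(risks, key=lambda r: r["probability"] * r["loss"], reverse=True):
    expected = risk["probability"] * risk["loss"]
    print(f"{risk['name']}: expected loss {expected:,.0f}")
# The risk with the highest expected loss is handled first, per the text above.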

Possible actions available
Once risks have been identified and assessed, all techniques to manage the risk fall into one or more of these four major categories:
§ Avoidance
§ Reduction
§ Retention
§ Transfer

Ideal use of these strategies may not be possible: some of them may involve trade-offs that are not acceptable to the organization or person making the risk management decisions.
Risk avoidance

Includes not performing an activity that could carry risk. An example would be not buying a property or business in order not to take on the liability that comes with it. Another would be not flying in order not to take the risk that the plane might be hijacked. Avoidance may seem the answer to all risks, but avoiding risks also means losing out on the potential gain that accepting (retaining) the risk might have allowed. Not entering a business to avoid the risk of loss also avoids the possibility of earning the profits.
Risk reduction

Involves methods that reduce the severity of the loss. Examples include sprinklers designed to put out a fire to reduce the risk of loss by fire. This method may cause a greater loss through water damage, and therefore may not be suitable. Halon fire suppression systems may mitigate that risk, but the cost may be prohibitive as a strategy.
Risk retention

Involves accepting the loss when it occurs. True self insurance falls in this category. All risks that are not avoided or transferred are retained by default.

Risk transfer
Means causing another party to accept the risk, typically by contract. Insurance is one type of risk transfer. At other times it may involve contract language that transfers a risk to another party without the payment of an insurance premium. Liability among construction and other contractors is very often transferred this way. Some ways of managing risk fall into multiple categories: a risk retention pool technically retains the risk for the group, but spreading it over the whole group involves transfer among the individual members of the group. This differs from traditional insurance in that no premium is exchanged between members of the group.
