Tuesday, May 17, 2011

Advantages and disadvantages of the top-down and bottom-up implementation approaches

http://publib.boulder.ibm.com/tividd/td/ITIM/SC32-1708-00/en_US/HTML/im460_plan76.htm

The top-down and bottom-up approaches to deploying your identity management solution are provided to help you decide the best way to integrate identity management capabilities into your environment. Each approach has distinct advantages and disadvantages, as shown in Table 11.
Table 11. Pros and cons of the bottom-up and top-down implementation approaches

Summary

Bottom-up approach:

* High deployment coverage in early phases
* Earlier return on investment
* High visibility of organizational changes
* Higher impact to the organization

Top-down approach:

* Tactical, limited coverage
* Delayed return on investment
* Lower impact to the overall organization
* Higher deployment costs

Advantages

Bottom-up approach:

* User and business awareness of the product. Benefits are realized in the early phases.
* You can replace many manual processes with early automation.
* You can implement password management for a large number of users.
* You do not have to develop custom adapters in the early phases.
* Your organization broadens identity management skills and understanding during the first phase.
* Tivoli Identity Manager is introduced to your business with less intrusion to your operations.

Top-down approach:

* Your organization realizes a focused use of resources from the individual managed application.
* The first implementation becomes a showcase for the identity management solution.
* When the phases are completed for the managed application, you have implemented a deeper, more mature implementation of the identity management solution.
* Operation and maintenance resources are not initially impacted as severely as with the bottom-up approach.

Disadvantages

Bottom-up approach:

* The organizational structure you establish might have to be changed in a later roll-out phase.
* Because of the immediate changes to repository owners and the user population, the roll-out will have a higher impact earlier and require greater cooperation.
* This strategy is driven by the existing infrastructure instead of the business processes.

Top-down approach:

* The solution provides limited coverage in the first phases.
* A minimal percentage of user accounts are managed in the first phases.
* You might have to develop custom adapters at an early stage.
* The support staff and overall business will not realize the benefit of the solution as rapidly.
* The implementation cost is likely to be higher.

Wednesday, May 11, 2011

SDLC

I was asked to put together this high-level and traditional software
life cycle information as a favor for a friend of a friend, so I
thought I might as well share it with everybody.

The General Model

Software life cycle models describe phases of the software cycle and
the order in which those phases are executed. There are tons of
models, and many companies adopt their own, but all have very similar
patterns. The general, basic model is shown below:
General Life Cycle Model

GeneralModel

Each phase produces deliverables required by the next phase in the
life cycle. Requirements are translated into design. Code
is produced during implementation that is driven by the design.
Testing verifies the deliverable of the implementation phase against
requirements.

Requirements

Business requirements are gathered in this phase. This phase
is the main focus of the project managers and stakeholders.
Meetings with managers, stakeholders and users are held in order to
determine the requirements. Who is going to use the system?
How will they use the system? What data should be input into the
system? What data should be output by the system? These are
general questions that get answered during a requirements gathering
phase. This produces a nice big list of functionality that the
system should provide, which describes functions the system should
perform, business logic that processes data, what data is stored and
used by the system, and how the user interface should work. The
overall result is the system as a whole and how it performs, not how it
is actually going to do it.

Design

The software system design is produced from the results of the
requirements phase. Architects have the ball in their court
during this phase, and this is where their focus lies. This is
where the details of how the system will work are produced.
Architecture (including hardware and software), communication, and
software design (UML is produced here) are all part of the
deliverables of a design phase.

Implementation

Code is produced from the deliverables of the design phase during
implementation, and this is the longest phase of the software
development life cycle. For a developer, this is the main focus
of the life cycle because this is where the code is produced.
Implementation may overlap with both the design and testing
phases. Many tools exist (CASE tools) to actually automate the
production of code using information gathered and produced during the
design phase.

Testing

During testing, the implementation is tested against the
requirements to make sure that the product is actually solving the
needs addressed and gathered during the requirements phase. Unit
tests and system/acceptance tests are done during this phase.
Unit tests act on a specific component of the system, while system
tests act on the system as a whole.

So in a nutshell, that is a very basic overview of the general
software development life cycle model. Now let's delve into some
of the traditional and widely used variations.



Waterfall Model

This is the most common and classic of life cycle models, also
referred to as a linear-sequential life cycle model. It is very
simple to understand and use. In a waterfall model, each phase
must be completed in its entirety before the next phase can
begin. At the end of each phase, a review takes place to
determine if the project is on the right path and whether or not to
continue or discard the project. Unlike what I mentioned in the
general model, phases do not overlap in a waterfall model.
Waterfall Life Cycle Model

WaterFallModel

Advantages

* Simple and easy to use.
* Easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.
* Phases are processed and completed one at a time.
* Works well for smaller projects where requirements are very well understood.

Disadvantages

* Adjusting scope during the life cycle can kill a project.
* No working software is produced until late during the life cycle.
* High amounts of risk and uncertainty.
* Poor model for complex and object-oriented projects.
* Poor model for long and ongoing projects.
* Poor model where requirements are at a moderate to high risk of changing.



V-Shaped Model

Just like the waterfall model, the V-Shaped life cycle is a
sequential path of execution of processes. Each phase must be
completed before the next phase begins. Testing is emphasized in
this model more so than the waterfall model though. The testing
procedures are developed early in the life cycle before any coding is
done, during each of the phases preceding implementation.

Requirements begin the life cycle model just like the waterfall
model. Before development is started, a system test plan is
created. The test plan focuses on meeting the functionality
specified in the requirements gathering.

The high-level design phase focuses on system architecture and
design. An integration test plan is created in this phase as well,
in order to test the software system's pieces for their ability to work
together.

The low-level design phase is where the actual software components
are designed, and unit tests are created in this phase as well.

The implementation phase is, again, where all coding takes
place. Once coding is complete, the path of execution continues
up the right side of the V where the test plans developed earlier are
now put to use.
V-Shaped Life Cycle Model

VShapedModel

Advantages

* Simple and easy to use.
* Each phase has specific deliverables.
* Higher chance of success over the waterfall model due to the development of test plans early on during the life cycle.
* Works well for small projects where requirements are easily understood.

Disadvantages

* Very rigid, like the waterfall model.
* Little flexibility and adjusting scope is difficult and expensive.
* Software is developed during the implementation phase, so no early prototypes of the software are produced.
* Model doesn’t provide a clear path for problems found during testing phases.



Incremental Model

The incremental model is an intuitive approach to the waterfall
model. Multiple development cycles take place here, making the
life cycle a “multi-waterfall” cycle. Cycles are divided up into
smaller, more easily managed iterations. Each iteration passes
through the requirements, design, implementation and testing phases.

A working version of software is produced during the first
iteration, so you have working software early on during the software
life cycle. Subsequent iterations build on the initial software
produced during the first iteration.
Incremental Life Cycle Model

IncrementalModel

Advantages

* Generates working software quickly and early during the software life cycle.
* More flexible – less costly to change scope and requirements.
* Easier to test and debug during a smaller iteration.
* Easier to manage risk because risky pieces are identified and handled during their own iterations.
* Each iteration is an easily managed milestone.

Disadvantages

* Each phase of an iteration is rigid and does not overlap the others.
* Problems may arise pertaining to system architecture because not
all requirements are gathered up front for the entire software life
cycle.



Spiral Model

The spiral model is similar to the incremental model, with more
emphasis placed on risk analysis. The spiral model has four
phases: Planning, Risk Analysis, Engineering and Evaluation. A
software project repeatedly passes through these phases in iterations
(called spirals in this model). In the baseline spiral, starting in
the planning phase, requirements are gathered and risk is
assessed. Each subsequent spiral builds on the baseline spiral.

Requirements are gathered during the planning phase. In the
risk analysis phase, a process is undertaken to identify risk and
alternate solutions. A prototype is produced at the end of the
risk analysis phase.

Software is produced in the engineering phase, along with testing at
the end of the phase. The evaluation phase allows the customer to
evaluate the output of the project to date before the project continues
to the next spiral.

In the spiral model, the angular component represents progress, and the radius of the spiral represents cost.
Spiral Life Cycle Model

SpiralModel

Advantages

* High amount of risk analysis
* Good for large and mission-critical projects.
* Software is produced early in the software life cycle.

Disadvantages

* Can be a costly model to use.
* Risk analysis requires highly specific expertise.
* Project’s success is highly dependent on the risk analysis phase.
* Doesn’t work well for smaller projects.



And that’s it. If you have any input, especially your views on
advantages and disadvantages of any particular model, feel free to
leave them in the comments and I can add them to my copy.

Copy from: http://codebetter.com/raymondlewallen/2005/07/13/software-development-life-cycle-models/

STLC

There is a systematic cycle to software testing, although it varies from organization to organization

Software Testing Life Cycle:
Software testing life cycle or STLC refers to a comprehensive group of testing related actions, specifying the details of every action along with the best time to perform each action. There cannot be a standardized testing process across various organizations; however, every organization involved in the software development business defines & follows some sort of testing life cycle.

STLC by & large comprises the following seven sequential phases:
1) Planning of Tests

2) Analysis of Tests

3) Designing of Tests

4) Creation & Verification of Tests

5) Execution of Testing Cycles

6) Performance Testing, Documentation

7) Actions after Implementation

Every company follows its own software testing life cycle to suit its own requirements, culture & available resources. The software testing life cycle can’t be viewed in isolation; rather, it interacts with every phase of the Software Development Life Cycle (SDLC). The prime focus of the software testing life cycle is on managing & controlling all activities of software testing. Testing might be manual testing or automated testing using some tool.

1) Planning of Tests:
In this phase a senior person like the project manager plans & identifies all the areas where testing efforts need to be applied, while operating within the boundaries of constraints like resources & budget. Unless judicious planning is done in the beginning, the result can be catastrophic: a poor quality product that dissatisfies the ultimate customer. Planning is not limited just to the initial phase; rather, it is a continuous exercise extending till the end.

During the planning stage, the team of senior-level persons produces an outline of a high-level test plan. The high-level test plan comprehensively describes the following:

* Scope of testing:
Defining the areas to be tested, identification of features to be covered during testing
* Identification of approaches for testing:
Identification of approaches, including types of testing
* Defining risks:
Identification of different types of risks involved with the decided plan
* Identification of resources:
Identification of resources like people, materials & machines which need to be deployed during testing
* Time schedule:
A schedule for performing the decided testing, aimed at delivering the end product as per the commitment made to the customer.

Involvement of software testers begins in the planning phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests will work.

2) Analysis of Tests:
Based upon the High Level Test Plan Document, further details covering the following are worked out:

* Identification of Types of Testing to be performed during various stages of Software Development Life Cycle.
* Identification of extent to which automation needs to be done.
* Identification of the time at which automation is to be carried out.
* Identification of documentation required for automated testing

The software project can’t be successful unless there is frequent interaction among the various teams involved in coding & testing, with the active involvement of the project managers, business analysts or even the customer. Any deficiencies in the decided test plans come to the surface during such meetings of cross-functional teams. This provides an opportunity to rethink & refine the strategies decided for testing.

Based upon the customer requirements a detailed matrix for functional validation is prepared to cover the following areas:

* Ensure that each & every business requirement is getting covered through some test case or the other.
* Identification of the test cases best suited to automated testing
* Identification of the areas to be covered for performance testing and stress testing
* Carry out detailed review of documentation covering areas like Customer Requirements, Product Features & Specifications and Functional Design etc.

3) Designing of Tests:
This phase involves the following:

* Further polishing of various Test Cases, Test Plans
* Revision & finalization of Matrix for Functional Validation.
* Finalization of risk assessment methodologies.
* If automation is to be adopted, identification of test cases suitable for automation.
* Creation of scripts for test cases decided for automation.
* Preparation of test data.
* Establishing Unit testing Standards including defining acceptance criteria
* Revision & finalization of testing environment.

4) Construction and verification:
This phase involves the following:

* Finalization of test plans and test cases
* Completion of script creation for test cases decided for automation.
* Completion of test plans for Performance testing & Stress testing.
* Providing technical support to the code developers in their effort directed towards unit testing.
* Bug logging in bug repository & preparation of detailed bug report.
* Performing Integration testing followed by reporting of defects detected if any.


5) Execution of Testing Cycles:
This phase involves the following:

* Completion of test cycles by executing all the test cases, till a predefined stage is reached or no more errors are detected.
* This is an iterative process involving execution of test cases, detection of bugs, bug reporting, modification of test cases if necessary, fixing of bugs by the developers & finally repeating the testing cycles.

6) Performance Testing, Documentation & Actions after Implementation:
This phase involves the following:

* Execution of test cases pertaining to performance testing & stress testing.
* Revision & finalization of test documentation
* Performing Acceptance testing, load testing followed by recovery testing
* Verification of the software application by simulating conditions of actual usage.

7) Actions after Implementation:
This phase involves the following:

* Evaluation of the entire process of testing.
* Documentation of TGR (Things Gone Right) & TGW (Things Gone Wrong) reports. Identification of approaches to be followed in the event of occurrence of similar defects & problems in the future.
* Creation of comprehensive plans with a view to refine the process of Testing.
* Identification & fixing of newly discovered errors on a continuous basis.

Winding up of the test environment & restoration of all test equipment to the original baseline conditions.

Copy from: http://www.softwaretestinggenius.com/articalDetails.php?qry=159

An approach for Security Testing of Web Applications

Introduction

As more and more vital data is stored in web applications and the number of transactions on the web increases, proper security testing of web applications is becoming very important. Security testing is the process that determines that confidential data stays confidential (i.e. it is not exposed to individuals/ entities for which it is not meant) and users can perform only those tasks that they are authorized to perform (e.g. a user should not be able to deny the functionality of the web site to other users, a user should not be able to change the functionality of the web application in an unintended way etc.).

Some key terms used in security testing

Before we go further, it will be useful to be aware of a few terms that are frequently used in web application security testing:

What is “Vulnerability”?
This is a weakness in the web application. The cause of such a “weakness” can be bugs in the application, an injection (SQL/ script code) or the presence of viruses.

What is “URL manipulation”?
Some web applications communicate additional information between the client (browser) and the server in the URL. Changing some information in the URL may sometimes lead to unintended behavior by the server.

What is “SQL injection”?
This is the process of inserting SQL statements through the web application user interface into some query that is then executed by the server.

What is “XSS (Cross Site Scripting)”?
When a user inserts HTML/ client-side script in the user interface of a web application and this insertion is visible to other users, it is called XSS.

What is “Spoofing”?
The creation of hoax look-alike websites or emails is called spoofing.
Security testing approach:

In order to perform a useful security test of a web application, the security tester should have good knowledge of the HTTP protocol. It is important to have an understanding of how the client (browser) and the server communicate using HTTP. Additionally, the tester should at least know the basics of SQL injection and XSS. Hopefully, the number of security defects present in the web application will not be high. However, being able to accurately describe the security defects with all the required details to all concerned will definitely help.

1. Password cracking:

Security testing of a web application can be kicked off by “password cracking”. In order to log in to the private areas of the application, one can either guess a username/password or use a password cracker tool. Lists of common usernames and passwords are available along with open source password crackers. If the web application does not enforce a complex password (e.g. with letters, numbers and special characters, and at least a required number of characters), it may not take very long to crack the username and password.

If the username or password is stored in cookies without encryption, an attacker can use various methods to steal the cookies and then the information stored in them, such as the username and password.

For more details see article on “Website cookie testing”.
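As a quick illustration, the complexity policy mentioned above can be sketched as a small check. The exact rules here (minimum length, letters, digits and special characters) are an assumed sample policy, not any particular product's requirement:

```python
import re

def is_complex_password(password, min_length=8):
    """Sample complexity policy: minimum length plus letters,
    digits and special characters (assumed rules for illustration)."""
    if len(password) < min_length:
        return False
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"\d", password) is not None
    has_special = re.search(r"[^A-Za-z0-9]", password) is not None
    return has_letter and has_digit and has_special

# Entries from a common-password wordlist should fail such a policy.
print(is_complex_password("password"))   # no digits or special characters
print(is_complex_password("P@ssw0rd!"))  # satisfies this sample policy
```

A real deployment would combine a policy like this with a check against known-breached password lists, since complexity alone does not stop dictionary attacks.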

2. URL manipulation through HTTP GET methods:

The tester should check if the application passes important information in the query string. This happens when the application uses the HTTP GET method to pass information between the client and the server. The information is passed in parameters in the query string. The tester can modify a parameter value in the query string to check if the server accepts it.

In an HTTP GET request, user information is passed to the server for authentication or for fetching data. An attacker can manipulate every input variable passed in this GET request to the server in order to obtain the required information or to corrupt the data. In such conditions, any unusual behavior by the application or web server is a doorway for the attacker to get into the application.
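A tester can generate these tampered URLs systematically. Here is a minimal sketch using Python's standard library; the URL and parameter names are made up for the example:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def tamper_query_param(url, param, new_value):
    """Return a copy of `url` with one query-string parameter changed,
    the way a tester would probe the server's input handling."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query[param] = [new_value]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

# Hypothetical stats URL: swap the site ID to one the user doesn't own
# and check whether the server still serves the page.
original = "http://www.examplesite.com/stats?siteid=123&period=30"
print(tamper_query_param(original, "siteid", "456"))
```

Each tampered request should be rejected by the server with an authorization error rather than served.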

3. SQL Injection:

The next thing that should be checked is SQL injection. Entering a single quote (‘) in any textbox should be rejected by the application. If instead the tester encounters a database error, it means that the user input is inserted into some query which is then executed by the application. In such a case, the application is vulnerable to SQL injection.

SQL injection attacks are very critical, as an attacker can obtain vital information from the server database. To check SQL injection entry points into your web application, find the places in your code base where direct SQL queries are executed on the database using user input.

If user input data is placed directly into SQL queries, an attacker can inject SQL statements or fragments of SQL statements as user input to extract vital information from the database. Even if the attacker only succeeds in crashing the application, the SQL query error shown in the browser can give the attacker the information they are looking for. Special characters in user input should be handled/escaped properly in such cases.
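To see why the single-quote test above works, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for the application's database; the table and payload are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "x' OR '1'='1"  # classic single-quote injection payload

# Vulnerable: user input is pasted straight into the query string,
# so the OR clause becomes part of the SQL and matches every row.
vulnerable = "SELECT secret FROM users WHERE name = '%s'" % user_input
leaked = conn.execute(vulnerable).fetchall()
print(leaked)  # the secret leaks

# Safe: a parameterized query treats the payload as a literal value.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # empty: no user is literally named "x' OR '1'='1"
```

Parameterized queries (or a well-tested ORM) are the standard fix; escaping by hand is error-prone.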

4. Cross Site Scripting (XSS):

The tester should additionally check the web application for XSS (cross-site scripting). No HTML (e.g. an <HTML> tag) or script (e.g. a <SCRIPT> tag) should be accepted by the application. If it is, the application can be prone to attack by cross-site scripting.

An attacker can use this method to execute a malicious script or URL in the victim’s browser. Using cross-site scripting, an attacker can use scripts such as JavaScript to steal user cookies and the information stored in them.

Many web applications get some user information and pass this information in some variables from different pages.

E.g.: http://www.examplesite.com/index.php?userid=123&query=xyz

An attacker can easily pass some malicious input or a script in the ‘query’ parameter, which can expose important user/server data in the browser.
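The standard defense is to escape user-supplied values before echoing them back into a page. A minimal sketch with Python's standard library:

```python
from html import escape

# A script payload such as a tester might submit in the 'query' field.
user_input = "<SCRIPT>alert(document.cookie)</SCRIPT>"

# Rendered verbatim, this would execute in the victim's browser.
# Escaped, the markup is displayed as inert text instead.
print(escape(user_input))
```

Server-side frameworks and template engines usually provide this escaping automatically; the test is to confirm no page echoes raw input.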

Important: During security testing, the tester should be very careful not to modify any of the following:

* Configuration of the application or the server
* Services running on the server
* Existing user or customer data hosted by the application

Additionally, a security test should be avoided on a production system.

The purpose of the security test is to discover the vulnerabilities of the web application so that the developers can then remove these vulnerabilities from the application and make the web application and data safe from unauthorized actions.

Copy from: http://www.softwaretestinghelp.com/security-testing-of-web-applications/

Web Testing: Complete guide on testing web applications

In my previous post I outlined points to be considered while testing web applications. Here we will see some more details on web application testing, with web testing test cases. Let me tell you one thing: I always like to share practical knowledge that can be useful to readers in their careers. This is quite a long article, so sit back and relax to get the most out of it.

Let’s start with the web testing checklist.
1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing

1) Functionality Testing:

Test for – all the links in web pages, database connection, forms used in the web pages for submitting or getting information from user, Cookie testing.

Check all the links:

* Test the outgoing links from all the pages of the specific domain under test.
* Test all internal links.
* Test links jumping within the same pages.
* Test links used to send email to the admin or other users from web pages.
* Test to check if there are any orphan pages.
* Lastly in link checking, check for broken links in all the above-mentioned links.
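As a starting point for automating this checklist, link extraction can be sketched with Python's standard library. The page snippet and domain are invented for the example; a real checker would then fetch each collected URL and flag 4xx/5xx responses as broken links:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collects href targets from a page, split into internal and
    external links, so each can later be fetched and status-checked."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.internal, self.external = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href or href.startswith("mailto:"):
            return
        absolute = urljoin(self.base_url, href)
        if urlparse(absolute).netloc == urlparse(self.base_url).netloc:
            self.internal.append(absolute)
        else:
            self.external.append(absolute)

page = '<a href="/about">About</a> <a href="http://other.example/x">Out</a>'
collector = LinkCollector("http://www.examplesite.com/")
collector.feed(page)
print(collector.internal, collector.external)
```

Orphan pages can be found the other way around: crawl the site map, then diff against the set of pages actually reachable through collected links.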

Test forms in all pages:
Forms are an integral part of any web site. Forms are used to get information from users and to interact with them. So what should be checked on these forms?

* First check all the validations on each field.
* Check the default values of the fields.
* Check wrong inputs to the fields in the forms.
* Check options to create forms (if any), and to delete, view or modify the forms.

Let’s take the example of the search engine project I am currently working on. In this project we have advertiser and affiliate signup steps. Each signup step is different but dependent on the other steps, so the signup flow should execute correctly. There are different field validations, like email IDs and user financial info validations. All these validations should be checked in manual or automated web testing.
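A field-validation check like the ones described above can be sketched as follows. The form fields and the (deliberately simple) email pattern are assumptions for illustration, not the project's actual rules:

```python
import re

# Simplified email pattern for illustration; real validation is looser.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")

def validate_signup(form):
    """Return a list of field errors for a hypothetical signup form."""
    errors = []
    if not form.get("email") or not EMAIL_RE.match(form["email"]):
        errors.append("email: invalid address")
    if not form.get("name", "").strip():
        errors.append("name: required")
    return errors

print(validate_signup({"email": "not-an-email", "name": ""}))
print(validate_signup({"email": "user@example.com", "name": "Alice"}))
```

In testing, each validation rule gets both a passing and a failing input, plus boundary cases (empty fields, whitespace-only values, overlong strings).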

Cookies testing:
Cookies are small files stored on the user's machine. They are mainly used to maintain sessions, especially login sessions. Test the application by enabling or disabling the cookies in your browser options. Test if the cookies are encrypted before being written to the user's machine. If you are testing session cookies (i.e. cookies that expire after the session ends), check login sessions and user stats after the session ends. Check the effect on application security of deleting the cookies. (I will soon write a separate article on cookie testing.)
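One concrete check is inspecting the Set-Cookie headers the application sends. A session cookie should carry the Secure and HttpOnly flags, and its value should be an opaque token rather than plain credentials. A sketch with Python's standard library (the header value here is a made-up sample):

```python
from http.cookies import SimpleCookie

# A Set-Cookie header value captured during testing (sample data).
header = "sessionid=abc123; HttpOnly; Secure; Path=/"

cookie = SimpleCookie()
cookie.load(header)
morsel = cookie["sessionid"]

# Secure keeps the cookie off plain HTTP; HttpOnly hides it from
# client-side scripts (limiting XSS cookie theft described later).
print(morsel.value, bool(morsel["secure"]), bool(morsel["httponly"]))
```

If either flag is missing on a login-session cookie, that is worth reporting as a security finding.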

Validate your HTML/CSS:
If you are optimizing your site for search engines, then HTML/CSS validation is very important. Mainly validate the site for HTML syntax errors. Check if the site is crawlable by different search engines.

Database testing:
Data consistency is very important in a web application. Check for data integrity and errors while you edit, delete or modify the forms, or do any DB-related functionality.
Check if all the database queries are executing correctly, and whether data is retrieved and updated correctly. Database testing can also cover load on the DB; we will address this in web load/performance testing below.

2) Usability Testing:

Test for navigation:
Navigation means how the user surfs the web pages: different controls like buttons and boxes, and how the user uses the links on the pages to surf different pages.
Usability testing includes:
The web site should be easy to use. Instructions should be provided clearly. Check if the provided instructions are correct, i.e. whether they serve their purpose.
The main menu should be provided on each page, and it should be consistent.

Content checking:
Content should be logical and easy to understand. Check for spelling errors. Dark colors annoy users and should not be used in the site theme. You can follow the commonly accepted standards for web page and content building, like those mentioned above about annoying colors, fonts, frames etc.
Content should be meaningful. All the anchor text links should work properly. Images should be placed properly, with proper sizes.
These are some basic standards that should be followed in web development. Your task is to validate all of this in UI testing.

Other user information for user help:
Like the search option, sitemap, help files etc. The sitemap should be present with all the links in the web site, with a proper tree view of navigation. Check all links on the sitemap.
A “search in the site” option will help users find the content pages they are looking for easily and quickly. These are all optional items, and if present they should be validated.

3) Interface Testing:
The main interfaces are:
Web server and application server interface
Application server and Database server interface.

Check if all the interactions between these servers are executed properly and errors are handled properly. If the database or web server returns an error message for any query from the application server, then the application server should catch and display these error messages appropriately to users. Check what happens if the user interrupts a transaction in between. Check what happens if the connection to the web server is reset in between.

4) Compatibility Testing:
The compatibility of your web site is a very important testing aspect. The following compatibility tests should be executed:

* Browser compatibility
* Operating system compatibility
* Mobile browsing
* Printing options

Browser compatibility:
In my web-testing career I have found this to be the most influential part of web site testing.
Some applications are very dependent on browsers. Different browsers have different configurations and settings that your web page should be compatible with. Your web site coding should be cross-browser compatible. If you are using JavaScript or AJAX calls for UI functionality, or performing security checks or validations, then put more stress on browser compatibility testing of your web application.
Test the web application on different browsers like Internet Explorer, Firefox, Netscape Navigator, AOL, Safari and Opera, with different versions.

OS compatibility:
Some functionality in your web application may not be compatible with all operating systems. New technologies used in web development, like graphics designs and interface calls such as different APIs, may not be available in all operating systems.
Test your web application on different operating systems like Windows, Unix, Mac OS, Linux and Solaris, with different OS flavors.

Mobile browsing:
This is a new technology age, and mobile browsing will only grow. Test your web pages on mobile browsers; compatibility issues may appear there.

Printing options:
If you provide page-printing options, then make sure fonts, page alignment and page graphics are printed properly. Pages should fit the paper size, or the size mentioned in the printing option.

5) Performance testing:
The web application should sustain heavy load. Web performance testing should include:
Web Load Testing
Web Stress Testing

Test application performance on different internet connection speeds.
In web load testing, test what happens when many users access or request the same page. Can the system sustain peak load times? The site should handle many simultaneous user requests, large input data from users, simultaneous connections to the DB, heavy load on specific pages, etc.
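A minimal load-test harness can be sketched with Python's thread pool. The request function here is a stub standing in for real HTTP calls to the page under test; a real harness would issue actual requests and record status codes and response times:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(page):
    """Stand-in for an HTTP request to the page under test; a real
    load test would call the site and record the response status."""
    time.sleep(0.01)  # simulated server response time
    return 200

def load_test(page, concurrent_users=50):
    """Fire `concurrent_users` simultaneous requests at one page."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        statuses = list(pool.map(fake_request, [page] * concurrent_users))
    elapsed = time.monotonic() - start
    failures = [s for s in statuses if s >= 400]
    return len(statuses), len(failures), elapsed

total, failed, elapsed = load_test("/login")
print(total, failed)
```

In practice, dedicated tools (e.g. JMeter) handle ramp-up schedules, think time and reporting; a sketch like this is only useful for quick smoke-level checks.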

Stress testing: stress generally means stretching the system beyond its specified limits. Web stress testing is performed to try to break the site by applying stress, then checking how the system reacts and how it recovers from crashes.
Stress is usually applied to input fields and to login and sign-up areas.
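For the input fields mentioned above, stress inputs typically include empty values, oversized strings, control characters, and markup. A hedged sketch, where `validate_username` is an illustrative server-side validator; the point is that bad input should be rejected cleanly, never crash the system:

```python
# Stress inputs for login/sign-up fields. validate_username is an
# illustrative example of a server-side check, not a real API.

def validate_username(value):
    """Reject malformed input instead of crashing on it."""
    if not isinstance(value, str) or not (3 <= len(value) <= 32):
        return False
    return value.isalnum()

stress_inputs = [
    "",                           # empty
    "a" * 100_000,                # far beyond any sane length limit
    "<script>alert(1)</script>",  # markup injection attempt
    "\x00\x01\x02",               # control characters
    "user name with spaces",
]

rejected_all = all(validate_username(v) is False for v in stress_inputs)
```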

Web performance testing also checks the site's functionality on different operating systems and hardware platforms, looking for software and hardware memory-leak errors.

6) Security Testing:

Following are some test cases for web security testing:

* Test by pasting an internal URL directly into the browser address bar without logging in. Internal pages should not open.
* After logging in with a username and password and browsing internal pages, try changing URL parameters directly. For example, if you are viewing statistics for publisher site ID=123, try changing the site ID in the URL to one that does not belong to the logged-in user. Access to other users' statistics should be denied.
* Try invalid inputs in input fields such as the login username, password, and other text boxes. Check how the system reacts to each invalid input.
* Web directories and files should not be directly accessible unless a download option is provided.
* Test that the CAPTCHA blocks automated script logins.
* Test whether SSL is used as a security measure. If it is, a proper message should be displayed when the user switches from non-secure http:// pages to secure https:// pages, and vice versa.
* All transactions, error messages, and security-breach attempts should be logged in log files somewhere on the web server.
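The URL-tampering case above can be sketched as an ownership check. The session and ownership data here are illustrative stand-ins; a real test would issue HTTP requests while logged in and assert on the response status.

```python
# Sketch of the URL-tampering check: a user may only read statistics for
# site IDs they own. OWNERS and get_site_stats are hypothetical examples.

OWNERS = {"publisher_a": {123}, "publisher_b": {456}}

def get_site_stats(logged_in_user, site_id):
    """Return stats only if the site belongs to the logged-in user."""
    if site_id not in OWNERS.get(logged_in_user, set()):
        return ("403 Forbidden", None)
    return ("200 OK", {"site_id": site_id, "clicks": 42})

# Legitimate access succeeds...
status_ok, _ = get_site_stats("publisher_a", 123)
# ...but tampering with the site ID in the URL must be denied.
status_denied, _ = get_site_stats("publisher_a", 456)
```

The security test is the second call: substitute a site ID the session does not own and confirm the server denies it rather than leaking another user's data.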

I think I have addressed all the major web testing methods. I have spent around two years of my testing career on web testing, and there are experts who have spent their entire careers on it. If I have missed an important web testing aspect, let me know in the comments below; I will keep updating the article with the latest testing information.

Copied from: http://www.softwaretestinghelp.com/web-application-testing/

Example of Severity and Priority

1. High Severity, Low Priority
Suppose you try the wildest or weirdest operation in a piece of software (say, one to be released the next day) that a normal user would never perform, and it produces a run-time error in the application. The severity is high, but the priority is low, because the steps that produced the error will in all likelihood never be performed by a real user.
2. Low Severity, High Priority
An example: you find a spelling mistake in the name of the website you are testing. Say the name is supposed to be Google, but it is spelled 'Gaogle'. Although this does not affect the basic functionality of the software, it must be corrected before release.
Hence, the priority is high.
3. High Severity, High Priority
A show-stopper bug, i.e., one that prevents testing from proceeding. An example is a run-time error during normal operation of the software that causes the application to quit abruptly.
4. Low Severity, Low Priority
Cosmetic bugs.

(1) High Priority and Low Severity
A company logo is not displayed properly on the company's website.
(2) High Priority and High Severity
Suppose you are shopping online and have filled in the payment information, but after submitting the form you get a message like "Order has been cancelled."
(3) Low Priority and High Severity
A scenario in which the application crashes, but one that occurs only rarely.
(4) Low Priority and Low Severity
A message reads "You have registered success" where "successfully" should have been written.
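The four combinations above can be captured in a small triage helper. The fix ordering used here is an assumption for illustration, not a standard; teams set their own rules.

```python
# Illustrative triage helper for the four severity/priority combinations.
# The ordering is an assumed example of a team policy.

TRIAGE_ORDER = {
    ("high", "high"): 1,  # show stopper: fix first
    ("low",  "high"): 2,  # e.g. misspelled site name before release
    ("high", "low"):  3,  # crash on a path users rarely hit
    ("low",  "low"):  4,  # cosmetic issues
}

def triage(severity, priority):
    """Map a (severity, priority) pair to its fix order."""
    return TRIAGE_ORDER[(severity.lower(), priority.lower())]

first = triage("High", "High")
last = triage("Low", "Low")
```

Note how priority, not severity, drives the order between the two middle cases: the low-severity, high-priority typo outranks the rarely-hit crash.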

There are four levels of software testing: Unit >> Integration >> System >> Acceptance.

1. Unit Testing is a level of the software testing process where individual units/components of a software/system are tested. The purpose is to validate that each unit of the software performs as designed.
2. Integration Testing is a level of the software testing process where individual units are combined and tested as a group. The purpose of this level of testing is to expose faults in the interaction between integrated units.
3. System Testing is a level of the software testing process where a complete, integrated system/software is tested. The purpose of this test is to evaluate the system’s compliance with the specified requirements.
4. Acceptance Testing is a level of the software testing process where a system is tested for acceptability. The purpose of this test is to evaluate the system’s compliance with the business requirements and assess whether it is acceptable for delivery.

Note: Some tend to include Regression Testing as a separate level of software testing but that is a misconception. Regression Testing is, in fact, just a type of testing that can be performed at any of the four main levels.
copy from: http://softwaretestingfundamentals.com/software-testing-levels/
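The distinction between the first two levels can be sketched in a few lines of `unittest`. The functions here are illustrative: the unit test exercises one function in isolation, while the integration test checks two units working together.

```python
# Minimal sketch of unit vs integration testing with Python's unittest.
# apply_discount and checkout_total are made-up example units.
import unittest

def apply_discount(price, percent):
    """Unit: apply a percentage discount to one price."""
    return round(price * (1 - percent / 100), 2)

def checkout_total(prices, percent):
    """Integrates apply_discount with the cart-summing logic."""
    return sum(apply_discount(p, percent) for p in prices)

class UnitLevel(unittest.TestCase):
    def test_apply_discount_alone(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

class IntegrationLevel(unittest.TestCase):
    def test_checkout_uses_discount(self):
        self.assertEqual(checkout_total([100.0, 50.0], 10), 135.0)

loader = unittest.TestLoader()
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(UnitLevel))
suite.addTests(loader.loadTestsFromTestCase(IntegrationLevel))
suite_ok = unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
```

If the unit test fails, the fault is inside `apply_discount` itself; if only the integration test fails, the fault is in how the two units interact.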

Test Plan

TEST PLAN DEFINITION

A Software Test Plan is a document describing the testing scope and activities. It is the basis for formally testing any software/product in a project.

ISTQB Definition

* test plan: A document describing the scope, approach, resources, and schedule of intended test activities. It identifies, amongst other things, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
* master test plan: A test plan that typically addresses multiple test levels.
* phase test plan: A test plan that typically addresses one test phase.

TEST PLAN TYPES

One can have the following types of test plans:

* Master Test Plan: A single high-level test plan for a project/product that unifies all other test plans.
* Testing Level Specific Test Plans: Plans for each level of testing.
o Unit Test Plan
o Integration Test Plan
o System Test Plan
o Acceptance Test Plan
* Testing Type Specific Test Plans: Plans for major types of testing like Performance Test Plan and Security Test Plan.

TEST PLAN TEMPLATE

The format and content of a software test plan vary depending on the processes, standards, and test management tools being implemented. Nevertheless, the following format, which is based on the IEEE standard for software test documentation, provides a summary of what a test plan can/should contain.

Test Plan Identifier:

* Provide a unique identifier for the document. (Adhere to the Configuration Management System if you have one.)

Introduction:

* Provide an overview of the test plan.
* Specify the goals/objectives.
* Specify any constraints.

References:

* List the related documents, with links to them if available, including the following:
o Project Plan
o Configuration Management Plan

Test Items:

* List the test items (software/products) and their versions.

Features to be Tested:

* List the features of the software/product to be tested.
* Provide references to the Requirements and/or Design specifications of the features to be tested.

Features Not to Be Tested:

* List the features of the software/product which will not be tested.
* Specify the reasons these features won’t be tested.

Approach:

* Mention the overall approach to testing.
* Specify the testing levels, the testing types, and the testing methods.

Item Pass/Fail Criteria:

* Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.

Suspension Criteria and Resumption Requirements:

* Specify criteria to be used to suspend the testing activity.
* Specify testing activities which must be redone when testing is resumed.

Test Deliverables:

* List test deliverables, and links to them if available, including the following:
o Test Plan (this document itself)
o Test Cases
o Test Scripts
o Defect/Enhancement Logs
o Test Reports

Test Environment:

* Specify the properties of the test environment: hardware, software, communications, etc.
* List any testing or related tools.

Estimate:

* Provide a summary of test estimates (cost or effort) and/or provide a link to the detailed estimation.

Schedule:

* Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.

Staffing and Training Needs:

* Specify staffing needs by role and required skills.
* Identify training that is necessary to provide those skills, if not already acquired.

Responsibilities:

* List the responsibilities of each team/role/individual.

Risks:

* List the risks that have been identified.
* Specify the mitigation plan and the contingency plan for each risk.

Assumptions and Dependencies:

* List the assumptions that have been made during the preparation of this plan.
* List the dependencies.

Approvals:

* Specify the names and roles of all persons who must approve the plan.
* Provide space for signatures and dates. (If the document is to be printed.)

TEST PLAN GUIDELINES

* Make the plan concise. Avoid redundancy and superfluous detail. If you think you do not need a section that has been mentioned in the template above, go ahead and delete that section in your test plan.
* Be specific. For example, when you specify an operating system as a property of a test environment, mention the OS Edition/Version as well, not just the OS Name.
* Make use of lists and tables wherever possible. Avoid lengthy paragraphs.
* Have the test plan reviewed a number of times prior to baselining it or sending it for approval. The quality of your test plan speaks volumes about the quality of the testing you or your team is going to perform.
* Update the plan as and when necessary. An outdated and unused document stinks.

Copied from: http://softwaretestingfundamentals.com/test-plan/