
Quality Assurance (QA)

Software QA
Tester

Definition: QA is a method of preventing mistakes, defects or omissions during the
production of a software product.

TYPES OF TESTING

I Validation and Verification

Definition:
Validation answers the question: are we building the right product? – specifications, client
requirements (documentation testing) (black box)

Verification answers the question: are we building the product right? – testing for
functionality

II Box

Black Box testing

– a software testing method that examines the functionality of an application without taking
its internal structure into account (the application is a black box – you do not know what is
inside it, and you need no development knowledge)

White Box
– a software testing method that tests the internal structure of an application rather than its
functionality (the application is a box whose contents you can see – you have development
knowledge)

Grey Box
– a mix of the two
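
As a sketch of the distinction (the function below is hypothetical): a black-box test would
only compare inputs against expected outputs, while a white-box test is written with the
internal branches in view and tries to cover each one:

    def classify(n):
        """Hypothetical unit with two internal branches."""
        if n < 0:
            return "negative"
        return "non-negative"

    # White-box tests: one case per internal branch, chosen by reading the code.
    assert classify(-1) == "negative"      # covers the n < 0 branch
    assert classify(0) == "non-negative"   # covers the fall-through branch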

III Functional and Non-functional

Functional

– functional testing is a quality assurance process and a type of black-box testing that
bases its test cases on the specifications of the software component under test
– the application's functions are tested by providing input (an action, a command) and
examining the output (the response, the result); the internal structure of the program is
rarely taken into consideration
– functional testing describes what the application does (it answers the question: What does
the application do?)

Levels of testing

A Unit Test
– in unit testing, a unit is the smallest testable part of an application
– unit testing is a software testing method by which individual units of the source code are
tested to determine whether they are fit for use
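
A minimal sketch of a unit test, using Python's built-in unittest module (the function under
test is invented for illustration):

    import unittest

    def add(a, b):
        """The unit under test: the smallest testable piece of the application."""
        return a + b

    class TestAdd(unittest.TestCase):
        def test_adds_two_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_adds_negative_numbers(self):
            self.assertEqual(add(-1, -1), -2)

    if __name__ == "__main__":
        unittest.main()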

B Integration test
– integration testing is the software testing phase in which individual software modules are
combined and tested together, as a group.
– integration testing takes as input the modules that have passed unit testing, groups them
and tests them, and produces as output an integrated system, ready for system testing.
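
A hedged sketch of an integration test (both modules are hypothetical): two units that each
passed unit testing on their own are now exercised together, at their integration point:

    import unittest

    def parse_amount(text):
        """Parser module: turns "12.50" into a number of cents."""
        return int(round(float(text) * 100))

    def format_cents(cents):
        """Formatter module: turns 1250 cents back into a display string."""
        return "%d.%02d" % (cents // 100, cents % 100)

    class TestParseFormatIntegration(unittest.TestCase):
        def test_round_trip(self):
            # The integration point: the output of one module feeds the other.
            self.assertEqual(format_cents(parse_amount("12.50")), "12.50")

    if __name__ == "__main__":
        unittest.main()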

C System Test

– system testing takes place on a complete, already integrated system, and verifies that it
complies with its requirements.

– system testing takes as input all the integrated software components that have passed
integration testing, as well as the software system itself, integrated with any other software
or hardware systems

D Acceptance Test
– acceptance testing is a test that determines whether the requirements and specifications
of the contract/client are met.

– acceptance testing can be done both at the provider and at the client. UAT is user
acceptance testing (it takes place at the client).
– Alpha testing takes place at the developer, and is the testing of the system by internal
staff, before the release to external clients
– Beta testing takes place at the clients, and is testing performed by a group of clients who
use the system at their own location and give feedback before the product is released to all
clients.
– A smoke test can be performed before the main testing process of a build begins.
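
A smoke test is deliberately small and fast: if it fails, the build is rejected without deeper
testing. A minimal sketch (the URL is a hypothetical test environment):

    import urllib.request

    def smoke_test(base_url):
        """Quick sanity check before the main test effort starts on a build."""
        # If the home page does not even load, deeper testing is pointless.
        with urllib.request.urlopen(base_url, timeout=10) as response:
            assert response.status == 200, "build is not responding"

    smoke_test("http://test-server.example.com/")  # hypothetical test environment
    print("smoke test passed - build accepted for further testing")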

E Regression test

– a type of testing that looks for new defects, or regressions, in a module of the application
after changes have taken place. Its goals:
– 1. To make sure the changes have not introduced new defects.
– 2. To make sure the changes have not reintroduced old defects.

Common methods of regression testing include rerunning previously completed tests and
checking whether program behavior has changed and whether previously fixed faults have
re-emerged. Regression testing can be performed to test a system efficiently by
systematically selecting the appropriate minimum set of tests needed to adequately cover a
particular change.
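
As a sketch (the bug number and the function are invented), a regression test pins a
previously fixed fault in place, so that rerunning the suite after every change shows
immediately if the old defect re-emerges:

    import unittest

    def safe_divide(a, b):
        """Function that once crashed on b == 0; the fix returns None instead."""
        if b == 0:
            return None  # fix for hypothetical bug ticket #123
        return a / b

    class TestDivideRegression(unittest.TestCase):
        def test_bug_123_division_by_zero_stays_fixed(self):
            # Rerun after every change: if this fails, the old defect is back.
            self.assertIsNone(safe_divide(1, 0))

    if __name__ == "__main__":
        unittest.main()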
Non-functional
– non-functional testing is the testing of an application against its non-functional
requirements. It answers the question: How does the application do it?

A Compatibility
– testing an application to evaluate its compatibility with an environment (PC environment)
– whether a site is compatible with the browser
– hardware capacity
– the operating system

B Soak testing
– testing a system under a significant load over a very long period of time, to discover how
the application behaves when it is used intensively.
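
A minimal soak-test sketch (the workload function and the four-hour duration are
assumptions): repeat a workload for a long period and watch for degradation, such as
steadily growing memory:

    import time, tracemalloc

    def do_work():
        """Hypothetical stand-in for the operation being soaked."""
        return sum(i * i for i in range(10_000))

    tracemalloc.start()
    deadline = time.time() + 4 * 60 * 60  # soak for four hours (assumed duration)
    while time.time() < deadline:
        do_work()
        current, peak = tracemalloc.get_traced_memory()
        # Steadily rising memory over hours would suggest a leak.
        print("current=%d bytes, peak=%d bytes" % (current, peak))
        time.sleep(60)  # sample once a minute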

C Localization
– testing across the multiple languages the application supports

D Recovery testing

– tests how well an application recovers after a crash, a hardware failure or similar
problems

E Security testing
– the process of finding out whether an information system protects its data and also
maintains its functionality.
http://en.wikipedia.org/wiki/Security_testing (the six components)

F Usability testing

Usability testing is a technique used in user-centered interaction design to evaluate a product
by testing it on users. This can be seen as an irreplaceable usability practice, since it gives
direct input on how real users use the system.[1] This is in contrast with usability inspection
methods where experts use different methods to evaluate a user interface without involving
users.
Usability testing focuses on measuring a human-made product's capacity to meet its intended
purpose. Examples of products that commonly benefit from usability testing are foods,
consumer products, web sites or web applications, computer interfaces, documents, and
devices. Usability testing measures the usability, or ease of use, of a specific object or set of
objects, whereas general human-computer interaction studies attempt to formulate universal
principles.

Performance
Load - Load testing is the process of putting demand on a system or device and measuring its
response. Load testing is performed to determine a system’s behavior under both normal and
anticipated peak load conditions. It helps to identify the maximum operating capacity of an
application as well as any bottlenecks and determine which element is causing degradation.
When the load placed on the system is raised beyond normal usage patterns, in order to test
the system's response at unusually high or peak loads, it is known as stress testing. The load
is usually so great that error conditions are the expected result, although no clear boundary
exists when an activity ceases to be a load test and becomes a stress test.

Volume - Volume Testing belongs to the group of non-functional tests, which are often
misunderstood and/or used interchangeably. Volume testing refers to testing a software
application with a certain amount of data. This amount can, in generic terms, be the database
size or it could also be the size of an interface file that is the subject of volume testing. For
example, if you want to volume test your application with a specific database size, you will
expand your database to that size and then test the application's performance on it. Another
example could be when there is a requirement for your application to interact with an interface
file (could be any file such as .dat, .xml); this interaction could be reading and/or writing on
to/from the file. You will create a sample file of the size you want and then test the
application's functionality with that file in order to test the performance.

Stress - Stress testing (sometimes called torture testing) is a form of deliberately intense or
thorough testing used to determine the stability of a given system or entity. It involves testing
beyond normal operational capacity, often to a breaking point, in order to observe the results.
Reasons can include:
to determine breaking points or safe usage limits
to confirm intended specifications are being met
to determine modes of failure (how exactly a system fails)
to test stable operation of a part or system outside standard usage

Test cases

A test case, in software engineering, is a set of conditions or variables under which a tester
will determine whether an application, software system or one of its features is working as it
was originally established for it to do. The mechanism for determining whether a software
program or system has passed or failed such a test is known as a test oracle. In some
settings, an oracle could be a requirement or use case, while in others it could be a heuristic.
It may take many test cases to determine that a software program or system is considered
sufficiently scrutinized to be released. Test cases are often referred to as test scripts,
particularly when written - when they are usually collected into test suites.

In order to fully test that all the requirements of an application are met, there must be at least
two test cases for each requirement: one positive test and one negative test. If a requirement
has sub-requirements, each sub-requirement must have at least two test cases. Keeping
track of the link between the requirement and the test is frequently done using a traceability
matrix. Written test cases should include a description of the functionality to be tested, and
the preparation required to ensure that the test can be conducted.
A formal written test-case is characterized by a known input and by an expected output, which
is worked out before the test is executed. The known input should test a precondition and the
expected output should test a postcondition.
Informal test cases

For applications or systems without formal requirements, test cases can be written based on
the accepted normal operation of programs of a similar class. In some schools of testing, test
cases are not written at all but the activities and results are reported after the tests have been
run.
In scenario testing, hypothetical stories are used to help the tester think through a complex
problem or system. These scenarios are usually not written down in any detail. They can be
as simple as a diagram for a testing environment or they could be a description written in
prose. The ideal scenario test is a story that is motivating, credible, complex, and easy to
evaluate. They are usually different from test cases in that test cases are single steps while
scenarios cover a number of steps.
Typical written test case format

A test case is usually a single step, or occasionally a sequence of steps, to test the correct
behaviour/functionality or features of an application. An expected result or expected outcome
is usually given.
Additional information that may be included:
test case ID
test case description
test step or order of execution number
related requirement(s)
depth
test category
author
check boxes for whether the test can be or has been automated
pass/fail
remarks
Larger test cases may also contain prerequisite states or steps, and descriptions.
A written test case should also contain a place for the actual result.
These steps can be stored in a word processor document, spreadsheet, database or other
common repository.
In a database system, you may also be able to see past test results and who generated the
results and the system configuration used to generate those results. These past results would
usually be stored in a separate table.
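
As a sketch (all field values are invented for illustration), one stored test-case record
covering the fields listed above might look like:

    # A hypothetical test-case record, following the fields listed above.
    test_case = {
        "id": "TC-042",
        "description": "Login succeeds with valid credentials",
        "step": 1,
        "related_requirement": "REQ-7",
        "category": "functional",
        "author": "QA team",
        "automated": True,
        "preconditions": "A registered user account exists",
        "expected_result": "User lands on the dashboard page",
        "actual_result": None,   # filled in when the test is executed
        "status": None,          # pass/fail, recorded per run
    }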
Test suites often also contain
Test summary
Configuration
Besides a description of the functionality to be tested, and the preparation required to ensure
that the test can be conducted, the most time consuming part in the test case is creating the
tests and modifying them when the system changes.
Under special circumstances, there could be a need to run the test, produce results, and then
have a team of experts evaluate whether the results can be considered a pass. This often
happens when determining performance numbers for new products. The first test is taken as
the baseline for subsequent test / product release cycles.
Acceptance tests, which use a variation of a written test case, are commonly performed by a
group of end-users or clients of the system to ensure the developed system meets the
requirements specified or the contract. User acceptance tests are differentiated by the
inclusion of happy path or positive test cases to the almost complete exclusion of negative
test cases.

Agile Scrum

Scrum is an iterative and incremental Agile software development framework for managing
software projects and product or application development. It defines "a flexible, holistic
product development strategy where a development team works as a unit to reach a common
goal." It challenges assumptions of the "traditional, sequential approach" to product
development. Scrum enables teams to self-organize by encouraging physical co-location of
all team members and daily face to face communication among all team members and
disciplines in the project.
A key principle of Scrum is its recognition that during a project the customers can change their
minds about what they want and need (often called requirements churn), and that unpredicted
challenges cannot be easily addressed in a traditional predictive or planned manner. As such,
Scrum adopts an empirical approach—accepting that the problem cannot be fully understood
or defined, focusing instead on maximizing the team's ability to deliver quickly and respond to
emerging requirements.

Product Owner
The Product Owner represents the stakeholders and is the voice of the customer. He or she is
accountable for ensuring that the team delivers value to the business. The Product Owner
writes (or has the team write) customer-centric items (typically user stories), ranks and
prioritizes them, and adds them to the product backlog. Scrum teams should have one
Product Owner, and while they may also be a member of the development team, this role
should not be combined with that of the Scrum Master. In an enterprise environment, though,
the Product Owner is often combined with the role of Project Manager as they have the best
visibility regarding the scope of work (products).
Development Team
The Development Team is responsible for delivering potentially shippable increments (PSIs)
of product at the end of each Sprint (the Sprint Goal). A Team is made up of 3–9 individuals
with cross-functional skills who do the actual work (analyse, design, develop, test, technical
communication, document, etc.). The Development Team in Scrum is self-organizing, even
though there may be some level of interface with project management offices (PMOs).
Scrum Master
Scrum is facilitated by a Scrum Master, who is accountable for removing impediments to the
ability of the team to deliver the product goals and deliverables. The Scrum Master is not a
traditional team lead or project manager, but acts as a buffer between the team and any
distracting influences. The Scrum Master ensures that the Scrum process is used as
intended. The Scrum Master is the enforcer of the rules of Scrum, often chairs key meetings,
and challenges the team to improve. The role has also been referred to as a servant-leader to
reinforce these dual perspectives.
The Scrum Master differs from a project manager in that the latter may have people
management responsibilities unrelated to the role of Scrum Master. The Scrum Master role
excludes any such additional people responsibilities. In fact, there is no role of project
manager in Scrum at all, because none is needed. The traditional responsibilities of a project
manager have been divided up and reassigned among the three Scrum roles, and mostly to
the Development Team and the Product Owner, rather than to the Scrum Master. Practicing
Scrum with the addition of a project manager indicates a fundamental misunderstanding of
Scrum, and typically results in conflicting responsibilities, unclear authority, and sub-optimal
results.

Sprint

A sprint (or iteration) is the basic unit of development in Scrum. The sprint is a "timeboxed"
effort; that is, it is restricted to a specific duration. The duration is fixed in advance for each
sprint and is normally between one week and one month, although two weeks is typical.[7]
Each sprint is started by a planning meeting, where the tasks for the sprint are identified and
an estimated commitment for the sprint goal is made, and ended by a sprint
review-and-retrospective meeting,[4] where the progress is reviewed and lessons for the next
sprint are identified.

Meetings

A. Sprint planning meeting

At the beginning of the sprint cycle (every 7–30 days), a "Sprint planning meeting" is held:[11]
-Select what work is to be done
-Prepare the Sprint Backlog that details the time it will take to do that work, with the entire
team
-Identify and communicate how much of the work is likely to be done during the current sprint
Eight-hour time limit [9]
(1st four hours) Entire team: dialog for prioritizing the Product Backlog
(2nd four hours) Development Team: hashing out a plan for the Sprint, resulting in the Sprint
Backlog
B. Daily Scrum meeting

Each day during the sprint, a project team communication meeting occurs. This is called a
Daily Scrum (meeting) and has specific guidelines:
All members of the development team come prepared with the updates for the meeting.
The meeting starts precisely on time even if some development team members are missing.
The meeting should happen at the same location and same time every day.
The meeting length is set (timeboxed) to 15 minutes.
All are welcome, but normally only the core roles speak.
During the meeting, each team member answers three questions:
What have you done since yesterday?
What are you planning to do today?
Any impediments/stumbling blocks? Any impediment/stumbling block identified in this meeting
is documented by the Scrum Master and worked towards resolution outside of this meeting.
No detailed discussions shall happen in this meeting.

C. End meetings

At the end of a sprint cycle, two meetings are held: the "Sprint Review Meeting" and the
"Sprint Retrospective".
I At the Sprint Review Meeting:
Review the work that was completed and the planned work that was not completed
Present the completed work to the stakeholders (a.k.a. "the demo")
Incomplete work cannot be demonstrated
Four-hour time limit
II At the Sprint Retrospective:
All team members reflect on the past sprint
Make continuous process improvements
Two main questions are asked in the sprint retrospective: What went well during the sprint?
What could be improved in the next sprint?
Three-hour time limit
This meeting is facilitated by the Scrum Master

Types of applications
WebBased – web applications, which open in a browser (cross-browser type testing)
Desktop – applications that are installed (cross-platform, or OS, type testing)
Client Server – applications that connect to a server (e.g. a messenger)

Defects

Defect Life Cycle

Open/New (the testers find a defect and report it, or defects are found by clients) →
Assigned (a developer assigns the defect to themselves) → Resolved (the developer
announces that the defect is fixed, and QA can verify it. The resolutions for Resolved
are: INVALID, DUPLICATE, WILL NOT FIX, WORKS FOR ME or RESOLVED)

1. Verified (if the defect really is fixed) → CLOSED
2. Reopened (if it is not fixed) → Assigned
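
The cycle above is effectively a small state machine. A minimal sketch of it (the
implementation is an illustration, not any real bug tracker's API):

    # Allowed transitions in the defect life cycle described above.
    TRANSITIONS = {
        "Open/New":  {"Assigned"},
        "Assigned":  {"Resolved"},
        "Resolved":  {"Verified", "Reopened"},
        "Verified":  {"Closed"},
        "Reopened":  {"Assigned"},
        "Closed":    set(),
    }

    class Defect:
        def __init__(self, summary):
            self.summary = summary
            self.state = "Open/New"

        def move_to(self, new_state):
            if new_state not in TRANSITIONS[self.state]:
                raise ValueError("illegal transition: %s -> %s" % (self.state, new_state))
            self.state = new_state

    bug = Defect("Login button unresponsive")
    bug.move_to("Assigned")
    bug.move_to("Resolved")
    bug.move_to("Verified")   # QA confirms the fix
    bug.move_to("Closed")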

Test Plans

A test plan is a document that details the testing process for a piece of software. The plan
details the steps to be followed in the testing process.

What a Test Plan contains:

Test plan identifier (a name or a number)
Introduction (a short description of the project and of the target of the testing)
Test items (the product that will be tested)
Features to be tested (which features will be tested)
Features not to be tested (which features will not be tested)
Approach (which types of testing will be performed)
Item pass/fail criteria (when an item fails the testing and when it passes)
Suspension criteria and resumption requirements (when testing may be paused,
and when it may resume)
Test deliverables (which documents are delivered to the client when testing is
finished)
Testing tasks (testing tasks)
Environmental needs (hardware and software)
Staffing and training needs (how many people, which people, which trainings)
Schedule (timetable, deadlines)
Risks and contingencies (which risks exist)
Approvals (sign-offs)

Workflow

Performance Testing:

Performance testing checks the application: response time, processing speed, efficiency.

LOAD TEST: testing done by gradually increasing the number of users at a specific time.

STRESS TEST: testing done with the maximum number of users that the system can hold,
under low disk space, low memory, low configuration.

VOLUME TESTING: the maximum data that the system can accommodate.

VOLUME TESTING:

- Online system: Input fast, but not necessarily fastest possible, from different input
channels. This is done for some time in order to check if temporary buffers tend to
overflow or fill up, if execution time goes down. Use a blend of create, update, read and
delete operations.
- Database system: The database should be very large. Every object occurs with maximum
number of instances. Batch jobs are run with large numbers of transactions, for example
where something must be done for ALL objects in the database. Complex searches with
sorting through many tables. Many or all objects linked to other objects, and to the
maximum number of such objects. Large or largest possible numbers on sum fields.
- File exchange: Especially long files. Maximal lengths. Lengths longer than typical
maximum values in communication protocols (1, 2, 4, 8, … Mega- or Gigabytes). For
example lengths that are not supported by mail protocols. Also especially MANY files,
even in combination with large lengths (1024, 2048 etc. files). Email with maximum
number of attached files. Lengths of files that let input buffers overflow or trigger
timeouts. Large lengths in general in order to trigger timeouts in communications.
- Disk space: Try to fill disk space everywhere there are disks. Check what happens if
there is no more space left and even more data is fed into the system. Is there any kind of
reserve like “overflow-buffers”? Are there any alarm signals, graceful degradation? Will
there be reasonable warnings? Data loss? This can be tested by “tricks”, by making less
space available and testing with smaller volumes.
- File system: Maximal numbers of files for the file system and/or maximum lengths.
- Internal memory: Minimum amount of memory available (installed). Open many
programs at the same time, at least on the client platform.
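
As a sketch of the file-exchange case above (the file name and the 2 GB size are
assumptions), a volume test can start by generating an input file of deliberately extreme
size and then pointing the application under test at it:

    import os

    def make_large_file(path, size_in_mb):
        """Generate a test file of the requested size for volume testing."""
        chunk = b"x" * (1024 * 1024)  # one megabyte of filler
        with open(path, "wb") as f:
            for _ in range(size_in_mb):
                f.write(chunk)

    make_large_file("volume_input.dat", 2048)  # a 2 GB interface file (assumed size)
    print("generated", os.path.getsize("volume_input.dat"), "bytes")
    # Next step: feed volume_input.dat to the application under test and
    # measure whether it still reads/writes the file within acceptable time.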

1) Performance Testing:

Performance testing is performed to ascertain how the components of a system are
performing in a given situation. Resource usage, scalability and reliability of the product are
also validated under this testing. This testing is a subset of performance engineering, which
is focused on addressing performance issues in the design and architecture of a software
product.

Performance Testing Goal:

The primary goal of performance testing includes establishing the benchmark behaviour of
the system. There are a number of industry-defined benchmarks, which should be met during
performance testing.

Performance testing does not aim to find defects in the application; it addresses the more
critical task of testing against the benchmarks and standards set for the application. Accuracy
and close monitoring of the performance and of the results of the test are the primary
characteristics of performance testing.

Example:

For instance, you can test the application's network performance on a Connection Speed vs.
Latency chart. Latency is the time it takes data to travel from source to destination. Thus, a
70kb page should take no more than 15 seconds to load over the worst connection, a
28.8kbps modem (latency = 1000 milliseconds), while the same page should appear within 5
seconds over an average 256kbps DSL connection (latency = 100 milliseconds). For a
1.5mbps T1 connection (latency = 50 milliseconds), the performance benchmark would be
set within 1 second to achieve this target.
For example, the time difference between the generation of a request and the
acknowledgement of the response should be in the range of x ms (milliseconds) to y ms,
where x and y are standard figures. A successful performance test should surface most of
the performance issues, which could be related to the database, network, software,
hardware etc.
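
A minimal sketch of measuring that request/response window (the endpoint URL and the
x/y bounds are assumptions for illustration):

    import time
    import urllib.request

    URL = "http://test-server.example.com/api/ping"  # hypothetical endpoint
    X_MS, Y_MS = 0, 500  # assumed acceptable response-time window

    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    elapsed_ms = (time.perf_counter() - start) * 1000

    print("response time: %.1f ms" % elapsed_ms)
    assert X_MS <= elapsed_ms <= Y_MS, "response time outside the benchmark window"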

2) Load Testing:

Load testing is meant to test the system by constantly and steadily increasing the load on the
system until it reaches the threshold limit. It is the simplest form of testing, and it employs
automation tools such as LoadRunner or any other good tools that are available. Load testing
is also known by names such as volume testing and endurance testing.

The sole purpose of load testing is to assign the system the largest job it could possibly
handle, to test its endurance and monitor the results. An interesting fact is that sometimes
the system is fed an empty task to determine the behaviour of the system in a zero-load
situation.

Load Testing Goal:

The goals of load testing are to expose the defects in application related to buffer overflow,
memory leaks and mismanagement of memory. Another target of load testing is to determine
the upper limit of all the components of application like database, hardware and network etc…
so that it could manage the anticipated load in future. The issues that would eventually come
out as the result of load testing may include load balancing problems, bandwidth issues,
capacity of the existing system etc…

Example:

For example, to check the email functionality of an application, it could be flooded with 1000
users at a time. Now, 1000 users can fire the email transactions (read, send, delete, forward,
reply) in many different ways. If we take one transaction per user per hour, then it would be
1000 transactions per hour. By simulating 10 transactions/user, we could load test the email
server by occupying it with 10000 transactions/hour.
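
A hedged sketch of that idea with threads (the endpoint, the user count and the transaction
are assumptions, scaled down for illustration):

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://mail-server.example.com/send"  # hypothetical email endpoint
    USERS = 100          # scaled down from the 1000 users in the example
    TX_PER_USER = 10     # transactions fired by each simulated user

    def one_transaction(tx_id):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(URL, timeout=30) as response:
                response.read()
            return time.perf_counter() - start
        except Exception:
            return None  # count errors as the load rises

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        results = list(pool.map(one_transaction, range(USERS * TX_PER_USER)))

    ok = [r for r in results if r is not None]
    if ok:
        print("completed %d/%d, average %.2fs" % (len(ok), len(results), sum(ok) / len(ok)))
    else:
        print("all %d transactions failed" % len(results))
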
3) Stress testing

Under stress testing, various activities that overload the existing resources with excess jobs
are carried out in an attempt to break the system down. Negative testing, which includes
removing components from the system, is also done as a part of stress testing. Also known
as fatigue testing, this testing should capture the stability of the application by testing it
beyond its bandwidth capacity.

The purpose behind stress testing is to ascertain the failure of the system and to monitor how
gracefully it recovers. The challenge here is to set up a controlled environment before
launching the test so that you can precisely capture the behaviour of the system repeatedly,
under the most unpredictable scenarios.

Stress Testing Goal:

The goal of stress testing is to analyse post-crash reports to define the behaviour of the
application after failure. The biggest issue is to ensure that the system does not compromise
the security of sensitive data after the failure. In a successful stress test, the system comes
back to normality, along with all its components, even after the most terrible breakdown.

Example:

As an example, a word processor like Writer 1.1.0 by OpenOffice.org is used to produce
letters, presentations, spreadsheets etc. The purpose of our stress testing is to load it with an
excess of characters.
To do this, we will repeatedly paste a line of data until it reaches its threshold limit for
handling a large volume of text. As soon as the character count reaches 65,535 characters, it
simply refuses to accept more data. The result of stress testing Writer 1.1.0 is that it does not
crash under the stress and handles the situation gracefully, which makes sure that the
application works correctly even under rigorous stress conditions.
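
The same push-until-it-breaks idea as a minimal sketch (the processing function and its
65,535-character threshold are stand-ins modeled on the example above):

    def process_text(text):
        """Hypothetical stand-in for the component under stress."""
        if len(text) > 65_535:  # assumed threshold, as in the example above
            raise OverflowError("input limit exceeded")
        return len(text)

    # Double the input size each round until the system refuses it, then
    # observe whether the failure is graceful (a clear error, no crash).
    size = 1
    while True:
        try:
            process_text("x" * size)
            size *= 2
        except OverflowError as e:
            print("refused at %d characters: %s" % (size, e))
            break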
