
Edith Cowan University

Research Online
Australian Digital Forensics Conference

Security Research Institute Conferences

2006

Validation of Forensic Computing Software Utilizing Black Box Testing Techniques
Tom Wilsdon
University of South Australia

Jill Slay
University of South Australia

Originally published in the Proceedings of the 4th Australian Digital Forensics Conference, Edith Cowan University, Perth Western Australia,
December 4th 2006.
This Conference Proceeding is posted at Research Online.
http://ro.ecu.edu.au/adf/37

Validation of Forensic Computing Software Utilizing Black Box Testing Techniques


Tom Wilsdon & Jill Slay
University of South Australia
South Australia
tom.wilsdon@unisa.edu.au

Abstract
The process of validating the correct operation of software is difficult for a variety of reasons. The need to validate software utilised as forensic computing tools suffers the same fate, and is hampered to a greater extent because the source code of such tools is usually not accessible. Therefore a testing regime must be developed that offers a high degree of correctness and a high probability of finding software faults despite a limited ability to view source code. Software testing is a complex component of software engineering in its own right. This complexity is compounded by the infinite number of environments posed by hardware and co-hosted software; one can never determine an absolute range of tests to validate the software in each potential environment or custom setting. Therefore a finite set of tests is developed to validate the software. As the software being tested is not disclosed to the source code level, the testing is developed around the functions of the software. Using techniques categorized in the black box methodology, this regime must attempt to assign a validity status to the software's functions.
Keywords
Forensic Computing, Software, Testing

INTRODUCTION
The variety and number of digital devices available to the domestic user is growing at a phenomenal rate (Grabowski 1998; Etter 2001). Unfortunately such technology is not solely being used for the betterment of life. The misuse of devices to either conduct, or be used as a tool to conduct, crime is well publicised. The unfortunate reality is that people embrace, interpret and abuse the added ability one can inherit from the acquisition of digital devices.
To date, the significance of digital devices encountered by law enforcement agencies has yielded the creation of dedicated investigators dealing with the interrogation of said devices. A common misconception is that an investigation involving digital means is isolated to high-tech crime, when in reality quite the opposite is true (Etter 2001).
One such example of this is the uptake and usage of mobile phones. A seemingly isolated device can produce associations between individual parties by reviewing incoming and outgoing communications on the handsets. Any two or more devices offer the ability to identify relationships between devices and subsequently their respective owners. Mobile phones are only one example, and similar processes can be applied to computers, personal digital assistants (PDAs), fixed line telephones (billing), email accounts, websites, and closed circuit television cameras. All these devices, and many more, potentially offer the investigator the ability to create a series of relationships between individuals via their devices. The usual caveat of proving the individual used the device for a specific message, call, etc. still exists.
What this translates to is the fact that evidence exists on countless devices, and there must exist some way to extract it in a sound manner. Forensic computing investigators are in the front line of the analysis of these devices. With this knowledge, law enforcement agencies have equipped their digital forensic investigators with a variety of tools and training to seize, analyse and report relevant information from a truly infinite number of sources.

Most of the tools utilized to conduct such investigations are proprietary software applications developed by specialised application vendors. As the trade secret is the application itself, vendors are not willing to disclose source code for testing and evaluation. It is unrealistic to expect companies to allow the law enforcement community, or other parties, to evaluate what is under the bonnet, as the risk is too great.

NEED FOR EVALUATION


The need for the evaluation of all forensic computing software is well documented (Reust 2006; Meyers & Rogers 2004). Evaluation frameworks for computer forensic tools do exist, such as the NIST Computer Forensic Tool Testing programme and SWGDE, and their work is extremely comprehensive and scientifically sound. However, the output of such organizations is failing to satisfy the demand of a rapidly developing industry. It is therefore proposed that an alternative testing framework be developed and implemented to offer a similar, if not equivalent, level of testing but be more efficient with respect to time, output and financial requirements. Such testing alternatives have been proposed by numerous sources in the Australian context, as no such evaluation exists beyond an individual's environment (Armstrong 2003; Wilsdon 2005). This paper proposes a high level architecture for such a streamlined framework.
It is important to reiterate that this evaluation is concentrated on the accuracy and reliability of forensic computing tools, and not the processes associated with the investigation. The discipline is well documented with these processes and guidelines for investigators, but evaluation of tools is lacking in comparison. The underlying reasons for this are not documented, but the authors assume them to be a lack of skills in the computer science domain, the financial burden, experience and time.
The technical component of forensic computing can be seen as the most critical with respect to the effectiveness of an investigation. Without the correct tools a tradesman can only deliver a substandard product; this does not differ from the physical to the digital world. So what constitutes a forensic computing tool? To the authors' current knowledge there is no definition as to what a forensic computing tool is. Many may have their opinions on this, but for the purpose of this research a forensic computing tool is a software application, or a component of an application, that is utilized to perform some function on a digital device as part of a forensic computing investigation.
While this definition is broad and non-specific, so too is the real world of forensic computing tools being embraced by investigators. Mainstream tools such as Guidance Software's EnCase or AccessData's Forensic Toolkit (FTK) are well known and utilised by most agencies. Beyond these mainstream suites there are countless scripts and in-house developed applications that perform forensic tasks. Software developed for other purposes is also being adopted more and more; examples include system administration tools, scripts and even operating systems.
Such a range of tools with a variety of purposes increases the problem of testing; if we had one common tool with one clear objective, the testing requirements would be clearly defined. Unfortunately quite the opposite exists, with many tools such as EnCase and FTK providing numerous functions. Complicating this further is the ability for users to create their own functions to extend the existing application. The creation of new tools, and utilisation of existing ones, is growing at a phenomenal rate. It is naive to assume each tool and its functions can be evaluated. The current testing of tools is a mere drop in the ocean, suggesting we must optimise the process and harness all resources to address this shortcoming. Compounding this is the need to examine new devices, and their implications, regularly.
Holzmann explores the economics of software developers including as many functions/features in their software as possible, and charts a graph suggesting that the greater the features, the greater the defects. It is then the onus of the developer to employ sufficient quality assurance mechanisms within the development lifecycle to identify such defects and address them as required. According to the DFRWS 2005 Workshop Report, one vendor of software suggested that vendors cannot anticipate all uses in all situations and therefore can only test a finite set of permutations of systems and uses (Reust 2006). This raises the question of whether proprietary software is more reliable and accurate than open source software. Numerous observers actively participate in this debate, and at this time there is no clear evidence to suggest to the authors that one model is better than the other.
Holzmann also documents the benefit of testing to identify all defects, suggesting there exists some point in time within the software development lifecycle where the majority of defects have been identified. Testing beyond this point is unviable for the developer, as the main defects identified are the more likely to occur and potentially cause less impact. Unfortunately the remaining defects within the software are less likely to occur, but if they do materialize the impact will be greater (Holzmann).
So if the vendor is potentially distributing software tools with defects residing within their applications, why is it the forensic computing community's responsibility to identify such defects?
To address this question we must determine what the software, and its results, are utilised for. The word forensic can be categorized as "suitable for use in courts of judicature, or in a public discussion" (AAFS 1996). We must satisfy the most stringent standard. Therefore it is imperative that the processes and procedures undertaken to derive results from a forensic computing investigation are sound in their foundations and execution, ensuring results are admissible within the courtroom as evidence.
Software testing is a science that exists within the disciplines of software engineering and computer science. It is a major component of the software development lifecycle, and all software engineering methodologies place strong emphasis on testing in various stages of the lifecycle. Within the field of evaluation we have a limited capacity to test software, as the lifecycle has evolved from development to deployment. While the testing component is still available, the ability to conduct testing to the same rigor as is available during the development stages is significantly reduced. In addition, accessibility to the source code is dubious at best.
Within this research an assumption has been made that testing is occurring on proprietary software and therefore source code is not available. The debate between open source and proprietary software is active, and the authors do not wish to discount the added benefits of an open source testing model, but this decision has been made according to the following rationale:
- It is acknowledged that open source applications offer a greater ability to truly examine the workings of the application (due to the ability to examine the source code), but a standard or common denominator must be identified. The reason a common standard must be identified is the objective of the testing process: to validate and verify forensic computing tools correctly and efficiently with as much standardisation as possible. Separating standards between open and closed source does have benefits, but also introduces complications which will impact on the efficiency of the framework.
- Individual organizations can conduct lower level examinations to either independently validate or verify results from the validation result achieved.
- Crystal box testing has more available processes in place which may identify defects, but it is hoped software engineering methodologies will identify these. The level of familiarity is greater for the developer than for an independent party performing validation under constraints of time and resources.
- It creates a universal or standardised testing environment for both open source and proprietary solutions.

TESTING REGIME
The following sections refer to various standards to guide the testing procedure. The definitions of some testing terminology must be clarified prior to continuing; IEEE Standard 610.12-1990 provides the following definitions:

Test - An activity in which a system or component is executed under specified conditions, the results are observed or recorded, and an evaluation is made of some aspect of the system or component.

Validation - The process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

Validation and Verification (V&V) - The process of determining whether the requirements for a system or component are complete and correct, and the final system or component complies with specified requirements.
It is assumed some independent agency will undertake the role of the tester for this process. Such an agency would have to be funded by a consortium of government and potentially software companies. This structure suggests bias towards vendors and developers that contribute to the consortium, but the task of establishing such an agency will require major support from vendors and practitioners alike. Therefore their contribution may not come in the form of a monetary value but as an in-kind contribution to establish an agency and its processes.
The role of the independent agency acting as the tester is assumed to be the following: acquire software tools along with all documentation and knowledge required for the evaluation of the product. With respect to the ISO 17025-2005 standard, appropriate logging of this acquisition is mandatory and must be developed by the agency to suit their business model. Once acquisition is achieved the testing process begins. It is important that the goal of the testing is made apparent before the process evolves to any subsequent state. The goal of the testing is to evaluate a software product acting as a forensic computing tool and determine whether the tool's function(s) operate correctly and accurately within a prescribed environment. The software product is referred to in a collective form, but may consist of a set of functions, and each of these functions is required to be identified, documented and tested.
Whittaker (2000) suggests the starting point prior to conducting tests is to plan the testing environment. In summary, the four categories described are:

1. Modelling the software's environment
2. Selecting test scenarios
3. Running and evaluating the test scenarios
4. Measuring the testing process

While this process appears self-explanatory, it is important that each of these categories is explored and customized to the proposed forensic computing evaluation architecture.
1. Modeling the Software Environment
Knowledge of the software must be included when preparing for the testing of a forensic computing tool's function(s). This environment must be created with minimal possibility of contamination from any source. Such sources of contamination may include remnant data, conflicting software, ill-equipped testers, etc. ISO 17025-2005 describes clear guidelines to achieve such a state, which should be considered when undertaking the process. It is imperative that the environment can be recreated (to repeat testing if discrepancies are identified). Therefore the documentation must be precise and explicit. A process to document and achieve this requirement will be created, along with testing to ensure the environment is uncontaminated prior to the testing.
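The recreation requirement above can be sketched as a small routine that fingerprints the host so a later run can confirm the test bed was rebuilt faithfully. This is a minimal illustration in Python; the fields recorded and the `environment_fingerprint` helper are invented for the example, not prescribed by ISO 17025.

```python
import hashlib
import json
import platform


def environment_fingerprint(extra=None):
    """Capture a reproducible snapshot of the host environment.

    The fields below are illustrative; a real laboratory would record
    whatever its ISO 17025-style documentation requires (hardware,
    co-hosted software, tester identity, and so on).
    """
    snapshot = {
        "os": platform.system(),
        "os_version": platform.version(),
        "machine": platform.machine(),
        "python": platform.python_version(),
    }
    snapshot.update(extra or {})
    # A digest of the canonical JSON form lets a later run confirm the
    # environment matches the one documented for the original tests.
    canonical = json.dumps(snapshot, sort_keys=True).encode()
    snapshot["digest"] = hashlib.sha256(canonical).hexdigest()
    return snapshot
```

Two runs on an identical host and configuration yield the same digest; any drift in the recorded fields surfaces as a mismatch.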
2. Select Test Scenarios
The selection of applicable test scenarios is potentially the most important component of validating forensic computing software. A process to identify functions, requirements and test cases is examined later in this paper. The ability to select appropriate scenarios to be tested requires experience and knowledge on the part of the tester (and agency). As the assumption for this research is that all applications will be tested as closed source, the variety of test types is reduced considerably and limited to black box testing.
3. Running and Evaluating the Test Scenarios
The undertaking of the tests selected in stage 2 and executed in the environment derived from stage 1 requires careful observation and consideration in preparation, execution and evaluation. The preparation phase ensures the tester is familiar with the objectives of the test and interacts with the system in a prescribed and documented manner. It is assumed that this prescribed manner is obtained from documentation supplied with the software or determined in stage 2. Complete logging of the execution of the tests is required, and ISO 17025-2005 provides a framework for this requirement. Results must be recorded to allow evaluation against expected outcomes, and judgments must be made to determine whether the function/feature has achieved the desired outcome. Results evaluation is explored in a later section of this paper.
4. Measuring the Testing Process
Within the context of the Whittaker publication, measuring the testing process suggests that the progression of the testing is monitored and updated to reflect the actual state at any one time. Within the context of the testing of forensic computing tools, it is deemed that this measuring process will have an alternative use. As forensic computing tools have the potential to contain multiple functions, it is naive to assume that all functions should be tested collectively. The reasoning for this judgment is that some obscure functions offer no benefit in the real world, or the process of testing these functions is too complex to consider if the demand does not exist. Within the context of forensic computing tool evaluation, it is therefore proposed that the measuring process tracks the functions tested and displays identified functions which have not been tested (Whittaker 2000).
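This tracking role can be sketched as a small coverage report; the function and field names below are hypothetical, and Python is chosen only for illustration.

```python
def coverage_report(identified, tested):
    """Summarise testing progress: which identified functions have been
    exercised, which remain untested, and the fraction covered so far."""
    identified = set(identified)
    covered = identified & set(tested)
    return {
        "tested": sorted(covered),
        "untested": sorted(identified - covered),
        "coverage": len(covered) / len(identified) if identified else 1.0,
    }
```

For a suite with identified functions "imaging", "keyword_search" and "file_carving" where only "imaging" has been tested, the report lists the other two as untested, exactly the display the measuring process proposes.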
The Process of Evaluation
The process for evaluating software applications will consist of the following:

1. Acquirement of software
2. Identification of software functionalities
3. Development of test cases & reference sets
4. Development of result acceptance spectrum
5. Execute tests & evaluate results
6. Evaluation results release

It is imperative that all phases of the evaluation are transparent in the form of acquirement, testing, results reporting and evaluation. The process may consider input from a number of sources, such as practitioners, vendors and academics, to assist in each of the phases. An example of where such input will be beneficial is in determining the function(s) of a software application if no documentation is supplied, or if it is of substandard quality. The process of testing will generate documentation which may then be considered a standard operating procedure (SOP) of sorts. This is just one side effect of such a community-engaging architecture.
The phases of evaluation are:
1. Acquirement of Software
The process of acquiring the relevant software applications to be evaluated is the beginning of the evaluation lifecycle. Once an application is acquired, the documentation begins, satisfying ISO 17025-2005 and AS 4006-1992.
Once the initial documentation is acquired, a signature of the software must be developed to ensure that any subsequent versions, patches or modifications are not misinterpreted as already tested. It is expected an MD5 checksum or similar could be used to generate such a unique signature.
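The versioning signature described above takes only a few lines; this sketch uses Python's standard hashlib with MD5, as the text suggests, and any stronger digest drops in via the `algorithm` argument.

```python
import hashlib


def tool_signature(path, algorithm="md5"):
    """Hash a tool's distributed binary so that any subsequent version,
    patch or modification produces a visibly different signature."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as binary:
        # Read in chunks so large forensic suites hash without loading
        # the whole file into memory.
        for chunk in iter(lambda: binary.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Re-running `tool_signature` against a patched binary yields a different hex string, flagging the build as not yet evaluated.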
2. Identification of Software Functionalities
Once the acquirement requirements are satisfied, the identification of the set of the application's functionality is required. Only the identified functions will be tested.
If documentation is available when acquiring the software then this phase is much simpler. If no documentation is available then the tester must identify functions to be tested by whatever means are available. This may include, but is not limited to, execution and trialling of the application, community input, vendor input, or mining information about functions from other sources such as websites, discussion boards, guidelines, etc.

A function is considered some component of the software which delivers a result when executed. This function can be complex or simple in nature, but only functions identified in this phase will be tested and given an evaluation result at the completion of the subsequent phases.
If all functions are not identified within this phase, the application can be retested at a later stage. Newly identified or untested functions can be appended to the test documentation.
All functions must be clearly documented as items to be tested. This requires an experienced tester with knowledge of how the tool's functions are utilised. It is imperative that any function that acts as a dependency of another function is documented, as this will be reflected within the next phase of evaluation.
3. Development of Test Cases & Reference Sets
The development of test cases and reference sets is critical to determining the correctness of the application. When all functions to be tested are identified during the previous phase, tests must be developed to determine their correct operation. The assumption has been made that all testing will be on applications treated as proprietary, where access to the source code is not available. Therefore all tests will be based on black box testing techniques, where the data is organised to test as many functions as identified, in an image referred to as a reference set. The reference set will differ for each evaluation process, determined by the functions being evaluated. Different applications with the same functions may use the same reference set.
Utilizing the black box testing technique, the data in the test case is known and determined to be an example of a real world scenario. The reference set will be developed by the tester and must not only test the real world example in isolation but must also test the true operation of the software. Examples of this may include encryption, modification of file headers, placing keywords across clusters, and modifying data so that the expected format is not within standards; these are just a few examples of such tests that may be included.
Due to the critical nature of this testing phase, input may be sought from outside the test organization to determine what the real world case is and how to truly test the extent of the function. As with all phases, the test phase reference sets will be made available so individuals can verify the software, and its functions, within their own environment.
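One such black box test can be sketched as follows: the tool under test is exercised only through its inputs and outputs against a reference set whose contents are fully known. The function names and the simple recall metric are illustrative assumptions, not part of the proposed framework.

```python
def run_black_box_test(tool_fn, reference_set, expected):
    """Run one black box test: feed the function under test a reference
    set with known contents and score its output against expectations.

    Only inputs and outputs are inspected; the tool's source code is
    never consulted.
    """
    found = set(tool_fn(reference_set))
    expected = set(expected)
    # Recall alone is a simplification; a full regime would also log
    # false positives, timing, and any tester intervention.
    recall = len(found & expected) / len(expected) if expected else 1.0
    return {
        "recall": recall,
        "missed": expected - found,
        "unexpected": found - expected,
    }
```

A keyword-search function that misses a keyword placed across a cluster boundary, for instance, would surface in `missed`, while hits not planted by the tester appear in `unexpected`.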
It may become practice that once a reference set has been developed and documented, it is shared with the community. The community can then assess whether the tests are appropriate and comprehensive enough to cover real world usage. Requests for comment would expire prior to the penultimate phase, and all feedback would be considered before undertaking any subsequent phases of evaluation.
Test cases and reference sets are designed to test not only the functions identified but also the operation of the application as a whole. Focus is predominantly on determining the correctness of the functions identified, but it is acknowledged that if the host software is incorrect then it must be determined whether this impacts on the functions' operation. Therefore environment testing, hardware configuration, co-hosted software and processes may be considered and tested to determine their impact. Such testing must be included within the documentation, as faults or defects may directly, or indirectly, be attributed to the environment hosting the test bed.
As within the previous phase, if a function is excluded from an evaluation cycle it can be added at a later time; it is essential, once again, that any dependants on such function(s) are evaluated together.
4. Development of Result Acceptance Spectrum
The result acceptance spectrum is a metric used to assess the test results against what was expected. As the testing method is black box and the reference sets and test cases have been developed, all results of tests are known. The environment of tool use (law enforcement, military, civil, etc.) determines the level of accuracy required. It is therefore expected that numerous levels of acceptable ranges will be identified, allowing a greater number of practitioner domains to benefit from the evaluation process.
ISO 14598.1-2000 provides a methodology to document the result acceptance spectrum by dividing the potential result set into four mutually exclusive groupings: exceeds requirements, target range, minimally acceptable and unacceptable. How these groupings are determined must draw on previously undertaken phases, including phases 2 & 3. From these phases the metric of acceptance and expectation can be identified and documented; this may differ, for example, between law enforcement and military domains, and would require input from the respective communities.
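A minimal sketch of such a spectrum, assuming a single numeric accuracy score per test; the threshold values in the example are invented, since as the text notes the real bands would be set per practitioner domain.

```python
def acceptance_band(score, thresholds):
    """Place a numeric test score into one of the four mutually
    exclusive ISO 14598.1-style groupings.

    `thresholds` holds the lower bounds of the top three bands, chosen
    per practitioner domain; anything below the last bound falls into
    the unacceptable grouping.
    """
    exceeds, target, minimal = thresholds
    if score >= exceeds:
        return "exceeds requirements"
    if score >= target:
        return "target range"
    if score >= minimal:
        return "minimally acceptable"
    return "unacceptable"
```

A law enforcement profile might demand thresholds like (0.999, 0.99, 0.95) where a civil profile tolerates looser bounds; these numbers are purely illustrative.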
If a function does not meet the acceptance range then that function, and all dependant and co-dependant functions, will be rendered as incorrect. Therefore it is essential that dependencies of functions are recognized in the prior phases, as one function may achieve a pass evaluation status yet, when coupled with the incorrect function, the outcome will be rendered invalid.
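This cascading revocation amounts to a transitive closure over the documented dependency graph. A small sketch, with hypothetical function names:

```python
def invalidated_by(failed, dependencies):
    """Return every function whose pass status must be revoked because
    it depends, directly or transitively, on the failed function.

    `dependencies` maps each function to the functions it relies on,
    as documented during the function-identification phase.
    """
    affected = set()
    changed = True
    while changed:
        changed = False
        for func, deps in dependencies.items():
            if func not in affected and (failed in deps or affected & set(deps)):
                affected.add(func)
                changed = True
    return affected
```

If keyword search depends on file carving and reporting depends on keyword search, a file-carving failure revokes both dependants while an unrelated hashing function keeps its status.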
5. Execute Tests & Evaluate Results
The execution of the tests is undertaken by the tester, and each process of testing will be documented as per ISO 17025-2005 and AS 4006-1992 requirements. Any unexpected intervention required will be documented and assessed, as this may bear an impact on results.
All results from all tests will be collected and assessed against the acceptance spectrum compiled in the previous phase. All functions found to be below the acceptable range will be evaluated as failed and documented. All functions found to achieve in, or above, an acceptance rating will be evaluated as passed, with all documentation included.
It is possible for a function to pass evaluation without passing all test cases. As described in phase 3, extraneous tests will be executed with the intention of identifying limitations of the application. These will be included with any evaluation, with a significant component of the documentation exploring the limitations so all users can recognize them.
6. Evaluation Results Release
Upon conclusion of the previous phases all results must be made available to the community. Once a tool is evaluated it is not immune from future evaluation or re-evaluation.
Furthermore, if the application is modified in any capacity (patches, updates, etc.) the evaluation is not automatically inherited. Therefore software patches, usually released by vendors to address identified defects, will render their already evaluated application as unevaluated. If this evaluation framework is adopted, it will force vendors to supply a product with a greater emphasis on quality to the consumer. If patches are frequent, the use of an unevaluated tool may motivate a consumer to select a competitor's product due to more evaluated uptime.
Discussion
As highlighted in the DFRWS Final Report 2005 (Reust 2006), the ability to test tools in a variety of environments is limited due to both financial and time constraints. The current models of tool testing frameworks (NIST CFTT, etc.) do not offer a solution to this, as their methodologies test in isolation.
Whittaker (2001) documented the "invisible user" in a paper highlighting the fact that it is difficult for test environments to truly recreate all potential live environments. Furthermore, systems, particularly operating systems, are prone to react in unexpected manners, and these actions are difficult to identify and simulate. Although software is deterministic in isolation, once interaction occurs with the host and other software the results can vary.
The evaluation framework proposed offers an evaluation of a software application identified as a forensic computing tool at one point in time in one environment. Therefore, by releasing the evaluation framework phases to all, individual organizations and practitioners can undertake the same testing process within their own environment. This is referred to as verification, and their results should resemble the evaluation results, which are referred to as validation results. If they do not correspond, it must be determined whether the error lies within the validation/evaluation process, the verification process, or both. If the error is determined to exist in the validation process then the evaluation status of the software tool, and all its functions, must be revoked immediately until revalidation is completed and verified.

With this knowledge it is imperative that tools are tested in the greatest combination of environments possible. A study being conducted by the authors of digital forensic laboratories currently within Australia highlighted that no two laboratories used examination workstations of the same specification. For example, some workstations used single core processors while others used dual core within the same organisation. Write blockers varied in brand and connectivity, along with countless other hardware and software variations.
Another beneficial side effect of the proposed framework is that it offers a centralized agency the ability to register practitioners to create a community. This allows for the collection of input from numerous domains, which can be included within the validation process. Documentation may be created, or extended, if new functions are not already documented, and practitioners may be exposed to a library of tools if these are free for use.
Further work
Establishing a Testing Authority
The formation and allocation of resources to implement the described evaluation process is required immediately. The funding for a project such as this should not be solely the onus of government; a consortium of government, private industry and academia should put in place the processes to create such an authority.
Automated Reference Set Generator Tool
An automated tool to create mountable disk images with dynamic content is the concept behind this tool. When a tool is to be validated, its functionalities are identified and represented in the preceding screens of the automated tool. Once all functionalities are included, at the initial execution of the software the application generates a dynamic image, creating a custom reference set and embedding it into a disk image. Upon completion of execution, a unique identifier for the image is created along with a corresponding report documenting all embedded test cases such as keywords, partition types, file structures, deleted files, slack space, cross-cluster files, etc.
When this image is used for validation, the corresponding report is available to testers to allow for a comprehensive assessment of the tool being validated. This will significantly improve the efficiency of the evaluation process, particularly phase 3 of the previously described framework.
The automated reference set generator tool can also be used by vendors to increase their testing domain with minimal increase in time and cost.
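A heavily simplified sketch of the generator's image/report pairing: instead of a mountable disk image, each labelled artefact is concatenated into a flat file, and the report records the image's unique identifier plus where each artefact was embedded. Everything here (names, padding scheme, JSON report) is an assumption for illustration, not the proposed tool's design.

```python
import hashlib
import json


def generate_reference_set(test_cases, image_path):
    """Write a flat 'image' containing the supplied artefacts and a
    companion report keyed by a unique identifier for the image.

    `test_cases` is a list of {"label": ..., "data": bytes} entries.
    """
    padding = b"\x00" * 16
    image = padding.join(case["data"] for case in test_cases) + padding
    with open(image_path, "wb") as f:
        f.write(image)
    report = {
        # An MD5 of the image serves as its unique identifier, mirroring
        # the signature scheme used for acquired tools.
        "image_id": hashlib.md5(image).hexdigest(),
        "cases": [
            {"label": case["label"], "offset": image.index(case["data"])}
            for case in test_cases
        ],
    }
    with open(image_path + ".report.json", "w") as f:
        json.dump(report, f, indent=2)
    return report
```

The companion report is exactly what a tester would consult during validation: it states what was planted and where, so the tool's findings can be checked exhaustively.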

CONCLUSION:

The proposed evaluation framework utilizes black box testing techniques via reference sets and test cases to
determine the correctness of software applications utilized as forensic computing tools. While the testing
techniques are fully described it is important to describe the process that encompasses the tests to appreciate the
benefit this offers to the forensic computing community.
The proposed framework has numerous benefits, such as a streamlined process, community input, multiple environment testing and a community point of contact. It addresses shortcomings of other similar frameworks and extends their capabilities. It must be determined whether such a framework will function and be a truly complete solution to the needs of the forensic computing community. Whether this framework is the solution or not, the gap between tools and their evaluation is widening at an alarming rate.

REFERENCES:
AAFS (1996) What is Forensic Science? [online], American Academy of Forensic Sciences, http://www.aafs.org/
Armstrong, C. (2003) Developing A Framework for Evaluating Computer Forensic Tools, Curtin University of Technology.
AS 4006-1992 Software Test Documentation.
Etter, B. (2001) The Forensic Challenges of E-Crime. Australasian Centre for Policing Research.
Grabowski, P. (1998) Crime and technology in the global village. Internet Crime, University of Melbourne, Victoria.
Holzmann, G. Economics of Software Verification. Bell Laboratories.
IEEE Std 610.12-1990 IEEE Standard Glossary of Software Engineering Terminology.
AS/NZ ISO/IEC 14598.1-2000 Information technology - Software product evaluation - General overview.
AS/NZ ISO/IEC 17025-2005 General requirements for the competence of testing and calibration laboratories.
Meyers, M. & Rogers, M. (2004) Computer Forensics: The Need for Standardization and Certification. International Journal of Digital Evidence, Vol 3, Issue 2.
Reust, J. (2006) DFRWS 2005 Workshop Report.
Whittaker, J. (2000) What Is Software Testing? And Why Is It So Hard? IEEE Software, January/February 2000.
Whittaker, J. (2001) Software's Invisible Users. IEEE Software, May/June 2001.
Wilsdon, T. & Slay, J. (2005) Digital Forensics: Exploring Validation, Verification & Certification. First International Workshop on Systematic Approaches to Digital Forensic Engineering.

COPYRIGHT
Wilsdon, Tom & Slay, Jill 2006. The author/s assign SCISSEC & Edith Cowan University a non-exclusive license to use this document for personal use provided that the article is used in full and this copyright statement is reproduced. The authors also grant a non-exclusive license to SCISSEC & ECU to publish this document in full in the Conference Proceedings. Such documents may be published on the World Wide Web, CD-ROM, in printed form, and on mirror sites on the World Wide Web. Any other usage is prohibited without the express permission of the authors.
