
Overview of Research Methodologies: Validity and Reliability

The Research Process


Formation of the topic
Hypothesis
Conceptual definitions
Operational definitions
Gathering of data
Analysis of data
Conclusion, revising of hypothesis

Research Methodology
The specific methods one uses to collect and analyze data.

Classification of research methodologies


Quantitative research answers questions about relationships among measured variables, with the purpose of explaining, predicting, and controlling phenomena. Also called the traditional, experimental, or positivist approach.

Qualitative research is used to answer questions about the complex nature of phenomena, often with the purpose of describing and understanding the phenomena from the participants' point of view. Also referred to as the interpretative, constructivist, or postpositivist approach.

Mixed-method designs combine the two; qualitative and quantitative methods are not mutually exclusive.

Quantitative vs. Qualitative

Purpose
  Quantitative: Seeks explanations and predictions to develop generalizations.
  Qualitative: Seeks a better understanding of complex situations.

Research process
  Quantitative: Known variables, established guidelines, predetermined methods, objective.
  Qualitative: Unknown variables, flexible guidelines, emergent methods, subjective.

Data gathering
  Quantitative: Numeric data, representative large sample, standardized instruments.
  Qualitative: Textual/image data, small sample, loosely structured observations and interviews.

Data analysis
  Quantitative: Statistical analysis, objectivity, deductive reasoning.
  Qualitative: Search for themes and categories, subjective and potentially biased analysis, inductive reasoning.

Findings
  Quantitative: Numbers, statistics, aggregated data, scientific style.
  Qualitative: Words, narratives, quotes, literary style.

Most research in computer science would probably fall under the quantitative classification. Examples of Qualitative Research in IS:
Ngwenyama, O.K. and Lee, A.S. "Communication Richness in Electronic Mail: Critical Social Theory and the Contextuality of Meaning," MIS Quarterly (21:2), 1997, pp. 145-167.

Boland, R.J. Jr. "Information System Use as a *Hermeneutic Process," in Information Systems Research: Contemporary Approaches and Emergent Traditions, H-E. Nissen, H.K. Klein, R.A. Hirschheim (eds.), North-Holland, Amsterdam, 1991, pp. 439-464.

Walsham, G. Interpreting Information Systems in Organizations, Wiley, Chichester, 1993.

*Hermeneutics (hermeneutic means interpretive) is a branch of philosophy concerned with human understanding and the interpretation of texts.

The qualitative-versus-quantitative classification has its critics. Hope Olsen (1994), "Quantitative 'Versus' Qualitative Research: The Wrong Question":
This paper examines the qualitative "versus" quantitative debate by focussing on the definitions put forward in the library and information science (LIS) literature, identifying the characteristics attributed to the two, and assessing whether or not there is a fundamental difference between them. The question of fundamental difference is addressed in terms of the ontological and epistemological assumptions which writings by proponents and examples of research accept explicitly or implicitly. The method of this paper is a deconstructive reading which, first, illustrates the arbitrary and fluctuating difference between definitions of qualitative and quantitative research and, second, examines their underlying assumptions.

The Qualitative versus Quantitative Debate
"There's no such thing as qualitative data. Everything is either 1 or 0." (researcher Fred Kerlinger)
"All research ultimately has a qualitative grounding." (researcher D. T. Campbell)

They and many other researchers agree that these two research methods need each other more often than not.

Research Methodologies
Case study
Experimental research
Action research
Content analysis
Correlational research
Developmental research
Ethnography
Ex post facto research
Grounded theory research
Historical research
Observation study
Survey research
Quasi-experimental research

Case Study
A type of qualitative research in which in-depth data are gathered relative to a single individual, program, or event, for the purpose of learning more about an unknown or poorly understood situation.
WebWatcher: A Learning Apprentice for the World Wide Web (1997). Robert Armstrong, Dayne Freitag, Thorsten Joachims, Tom Mitchell. AAAI Spring Symposium on Information Gathering. Abstract: We describe an information seeking assistant for the world wide web. This agent, called WebWatcher, interactively helps users locate desired information by employing learned knowledge about which hyperlinks are likely to lead to the target information. Our primary focus to date has been on two issues: (1) organizing WebWatcher to provide interactive advice to Mosaic users while logging their successful and unsuccessful searches as training data, and (2) incorporating machine learning methods to ...

Software Versus Hardware Shared-Memory Implementation: A Case Study (1994). Alan L. Cox, Sandhya Dwarkadas, Pete Keleher, Honghui Lu, Ramakrishnan... Proc. of the 21st Annual Int'l Symp. on Computer Architecture (ISCA'94). Abstract: We compare the performance of software-supported shared memory on a general-purpose network to hardware-supported shared memory on a dedicated interconnect. Up to eight processors, our results are based on the execution of a set of application programs on a SGI 4D/480 multiprocessor and on TreadMarks, a distributed shared memory system that runs on a Fore ATM LAN of DECstation-5000/240s. Since the DECstation and the 4D/480 use the same processor, primary cache, and compiler, the shared-memory...

Software Licensing: A Classification and Case Study (2007). S. Manoharan and Jesse Wu. In Proceedings of the International Conference on the Digital Society, Guadeloupe, January 2007. IEEE Computer Society Press.

Experimental Research
A study in which participants are randomly assigned to groups that undergo various research-imposed treatments or interventions, followed by observations or measurements.
Examples include behavioral experiments in cognitive modeling and human-computer interaction. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed.

Reaction time. The time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes and something about their nature. For example, if in a search task the reaction times vary proportionally with the number of elements, then this cognitive process of searching evidently involves serial rather than parallel processing.

Psychophysical responses. Psychophysical experiments are an old psychological technique that has been adopted by cognitive psychology. They typically involve making judgments of some physical property, e.g. the loudness of a sound. Correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. Examples include sameness judgments and threshold differences for colors, tones, textures, etc.

Eye tracking. This methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. The fixation point of the eyes is linked to an individual's focus of attention, so by monitoring eye movements we can study what information is being processed at a given time, on extremely short time scales. Eye movements reflect online decision making during a task and provide some insight into the ways in which those decisions may be processed.
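The serial-search inference from reaction times can be sketched as a least-squares line fit: if mean reaction time grows roughly linearly with the number of display elements, the slope per item suggests serial processing. The sketch below (Python, with invented data) is illustrative only.

```python
# Sketch: inferring serial search from reaction times (hypothetical data).
# A slope well above zero ms per item suggests serial rather than
# parallel search of the display.

def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical search task: display set sizes and mean reaction times (ms).
set_sizes = [2, 4, 8, 16, 32]
rts       = [520, 580, 700, 930, 1410]

slope, intercept = fit_line(set_sizes, rts)
print(f"{slope:.1f} ms per extra item")  # a large positive slope -> serial search
```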

Action research
A type of applied research that focuses on finding a solution to a local problem in a local setting.

Example: computer conferencing in a learning community. Comstock and Fox (1995) have written about their experiences in integrating computer conferencing into a learning community for mid-career working adults attending a Graduate Management Program at Antioch University in Seattle. From 1992 to 1995, the researchers and their students made use of a dial-up computer conferencing system called Caucus to augment learning outside of monthly classroom weekends. Their findings relate to establishing boundaries to interaction, creating a caring community, and building collaborative learning.

Content Analysis
A detailed and systematic examination of the contents of a particular body of material for the purpose of identifying patterns, themes, or biases within that material.

Practical application: determining authorship. One technique is to compile a list of suspected authors, examine their prior writings, and correlate the frequency of nouns or function words to help build a case for the probability of each person's authorship of the material of interest. Mosteller and Wallace (1964) used Bayesian techniques based on word frequency to show that Madison was indeed the author of the disputed Federalist papers; more recently, Foster (1996) used a more holistic approach to determine the identity of the anonymous author of the 1992 book Primary Colors.
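A toy sketch (in Python) of the function-word idea described above. This is a crude squared-difference comparison of frequency profiles, not the Bayesian analysis Mosteller and Wallace actually used, and all the texts and the attribution below are invented for illustration.

```python
# Toy function-word authorship attribution: profile each candidate by the
# relative frequency of common function words, then pick the candidate
# whose profile is closest to the disputed text. All texts are made up.

FUNCTION_WORDS = ["the", "of", "to", "and", "in", "upon", "by", "while"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    return [words.count(w) / len(words) for w in FUNCTION_WORDS]

def distance(p, q):
    """Sum of squared differences between two frequency profiles."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def attribute(disputed, candidates):
    """Return the candidate author with the closest function-word profile."""
    dp = profile(disputed)
    return min(candidates, key=lambda name: distance(profile(candidates[name]), dp))

candidates = {
    "Hamilton": "upon the whole the measure rests upon the power of the union",
    "Madison": "the powers of the government are divided and balanced by the constitution",
}
disputed = "upon reflection the power resides upon the consent of the governed"
print(attribute(disputed, candidates))
```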

Correlational research
A statistical investigation of the relationship between two or more variables. Correlational research looks at surface relationships but does not necessarily probe the causal reasons underlying them. For example, an aptitude test may be used to predict success in an algebra course.
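The aptitude-test example can be made concrete with Pearson's correlation coefficient; the sketch below (Python) uses invented scores, and a value of r near +1 would suggest the test predicts course success without establishing causation.

```python
# Correlational sketch: Pearson's r between aptitude-test scores and
# final algebra grades (hypothetical numbers).
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

aptitude = [55, 60, 70, 80, 90]   # aptitude-test scores
grades   = [62, 65, 74, 78, 91]   # final algebra grades

print(round(pearson_r(aptitude, grades), 2))
```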

Developmental research
An observational, descriptive type of research that either compares people in different age groups or follows a particular group over a lengthy period of time. Such studies are appropriate for looking at developmental trends.

Ethnography
In-depth study of an intact cultural group in a natural setting.

Ex post facto research


An approach in which one looks at conditions that have already occurred and then collects data to investigate possible relationships between these conditions and subsequent characteristics or behaviors. Archival study of documents is one of the few methods suitable for investigating infrequent or impossible-to-duplicate events.

Grounded theory research


The phrase "grounded theory" refers to theory that is developed inductively from a corpus of data. Grounded theory research is a qualitative approach aimed at deriving theory through multiple stages of data collection and interpretation.

Historical research
An attempt to solve certain problems arising out of a historical context by gathering and examining relevant data. Through a detailed analysis of historical data, researchers attempt to establish cause-and-effect relationships.

Observation study
A type of quantitative research in which a particular aspect of behavior is observed.

Survey research
Survey research is one of the most important areas of measurement in applied social research, commonly used in business, sociology, and government. The broad area of survey research encompasses any measurement procedure that involves asking questions of respondents. A "survey" can be anything from a short paper-and-pencil feedback form to an intensive one-on-one in-depth interview.

Quasi experimental research


A method similar to experimental research, but without random assignment to groups; for example, matching is used instead of randomization. Someone studying the effects of a new police strategy in one town would try to find a similar town in the same geographic region, perhaps in a five-state area, whose citizen demographics are very similar to those of the experimental town. The other town is not technically a control group but a comparison group, and this matching strategy is sometimes called a nonequivalent-groups design.
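The matching step can be sketched as a nearest-neighbor search over standardized demographic features. In the Python sketch below, all town names and figures are invented; features are standardized so that population does not swamp income or age structure.

```python
# Quasi-experimental matching sketch: pick the comparison town whose
# demographics are closest to the experimental town (all numbers invented).

towns = {
    # name: (population, median_income, pct_under_30)
    "Experimental": (40_000, 52_000, 31.0),
    "Ashford":      (120_000, 61_000, 40.0),
    "Brookfield":   (42_000, 50_500, 29.5),
    "Carlton":      (8_000, 75_000, 22.0),
}

def standardize(rows):
    """Convert each feature column to z-scores so no one unit dominates."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    sds = [(sum((v - m) ** 2 for v in c) / len(c)) ** 0.5
           for c, m in zip(cols, means)]
    return [[(v - m) / s for v, m, s in zip(r, means, sds)] for r in rows]

names = list(towns)
z = dict(zip(names, standardize([towns[n] for n in names])))

target = z["Experimental"]
best = min((n for n in names if n != "Experimental"),
           key=lambda n: sum((a - b) ** 2 for a, b in zip(z[n], target)))
print(best)  # the nonequivalent comparison group
```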

Reliability
Reliability is the extent to which an experiment, test, or any measuring procedure yields the same result on repeated trials. Without the agreement of independent observers able to replicate research procedures, or the ability to use research tools and procedures that yield consistent measurements, researchers would be unable to satisfactorily draw conclusions, formulate theories, or make claims about the generalizability of their research. In addition to its important role in research, reliability is critical for many parts of our lives, including manufacturing, medicine, and sports.

Validity
Validity refers to the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure. While reliability is concerned with the consistency of the actual measuring instrument or procedure, validity is concerned with the study's success at measuring what the researchers set out to measure.

Types of Reliability
Stability reliability (test-retest reliability)
The agreement of measuring instruments over time. We estimate test-retest reliability when we administer the same test to the same sample on two different occasions.

Internal consistency
Used to assess the consistency of results across items within a test. The extent to which tests or procedures assess the same characteristic, skill or quality.
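A standard internal-consistency statistic (not named in the text above, but widely used) is Cronbach's alpha, which compares the variance of individual items with the variance of respondents' total scores. A Python sketch with hypothetical item scores:

```python
# Internal-consistency sketch: Cronbach's alpha for a short test.
# Rows are respondents, columns are items; alpha near 1 suggests the
# items measure the same underlying characteristic. Data are invented.

def variance(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(scores):
    k = len(scores[0])                      # number of items
    items = list(zip(*scores))              # per-item score columns
    totals = [sum(row) for row in scores]   # per-respondent total scores
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

scores = [
    [4, 5, 4],   # respondent 1's answers to items 1..3
    [2, 3, 3],
    [5, 5, 4],
    [1, 2, 2],
    [3, 4, 3],
]
print(round(cronbach_alpha(scores), 2))
```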

Inter-rater or Inter-observer reliability


Used to assess the degree to which different raters/observers give consistent estimates of the same phenomenon.
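A common statistic for this (an addition to the text, not mentioned in it) is Cohen's kappa, which corrects the raw agreement rate between two raters for the agreement expected by chance. A Python sketch with invented ratings:

```python
# Inter-rater sketch: Cohen's kappa for two raters assigning categorical
# labels to the same eight items. Kappa of 1.0 is perfect agreement;
# 0 is chance-level agreement. Ratings are hypothetical.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    labels = set(ca) | set(cb)
    # Chance agreement: product of each rater's marginal label rates.
    expected = sum((ca[l] / n) * (cb[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

a = ["yes", "yes", "no", "yes", "no", "no",  "yes", "no"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(round(cohen_kappa(a, b), 2))
```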

Parallel-Forms Reliability
Used to assess the consistency of the results of two tests constructed in the same way from the same content domain. In parallel forms reliability you first have to create two parallel forms. One way to accomplish this is to create a large set of questions that address the same construct and then randomly divide the questions into two sets. You administer both instruments to the same sample of people. The correlation between the two parallel forms is the estimate of reliability.
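The procedure just described (randomly split a question pool into two forms, administer both to the same sample, correlate the scores) can be sketched in Python as follows; the questions and scores are invented.

```python
# Parallel-forms sketch: split a pool of questions on one construct into
# two forms, then estimate reliability as the correlation of the same
# respondents' scores on the two forms. All data are hypothetical.
import math
import random

pool = [f"Q{i}" for i in range(1, 21)]   # 20 questions on one construct
rng = random.Random(42)                  # fixed seed for reproducibility
rng.shuffle(pool)
form_a, form_b = pool[:10], pool[10:]    # two parallel 10-question forms

# Hypothetical scores of five respondents on each form (out of 10).
scores_a = [8, 5, 9, 4, 7]
scores_b = [7, 6, 9, 3, 7]

def pearson_r(xs, ys):
    """Pearson correlation of the two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

print(f"parallel-forms reliability estimate: {pearson_r(scores_a, scores_b):.2f}")
```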

Types of Validity
Content Validity
The extent to which a measurement reflects the specific intended domain of content. For example, a history exam in which the questions use complex sentence structures may unintentionally measure students' reading comprehension skills rather than their historical knowledge.

Construct Validity
Seeks agreement between a theoretical concept and a specific measuring device or procedure. You might think of construct validity as a "labeling" issue. When you implement a program that you call a "Head Start" program, is your label an accurate one? When you measure what you term "self-esteem," is that what you were really measuring?

Criterion Related Validity (Instrumental Validity)


Used to demonstrate the accuracy of a measure or procedure by comparing it with another measure or procedure which has been demonstrated to be valid.

Face Validity
Concerned with how a measure or procedure appears. Does it seem like a reasonable way to gain the information the researchers are attempting to obtain? Does it seem well designed? Does it seem as though it will work reliably?

Reliability and Validity


The center of the target is the concept that you are trying to measure. Imagine that for each person you are measuring, you are taking a shot at the target. If you measure the concept perfectly for a person, you are hitting the center of the target. If you don't, you are missing the center. The more you are off for that person, the further you are from the center.
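The target analogy can be simulated: treat each measurement as a shot whose error has a systematic component (bias, a validity problem) and a random component (spread, a reliability problem). The Python sketch below uses illustrative parameters; the bullseye is at 0.

```python
# Simulating the target analogy: bias = validity problem,
# spread = reliability problem. All parameters are illustrative.
import random
import statistics

def shoot(n, bias, spread, seed=0):
    """n shots at a bullseye at 0, with systematic bias and random spread."""
    rng = random.Random(seed)
    return [bias + rng.gauss(0, spread) for _ in range(n)]

# Reliable but not valid: tight cluster, far from the bullseye.
shots1 = shoot(1000, bias=5.0, spread=0.5)
# Valid but not reliable: centered on the bullseye, widely scattered.
shots2 = shoot(1000, bias=0.0, spread=5.0)

print(f"shots1: mean={statistics.mean(shots1):.1f} sd={statistics.stdev(shots1):.1f}")
print(f"shots2: mean={statistics.mean(shots2):.1f} sd={statistics.stdev(shots2):.1f}")
```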
