
Master of Business Administration-MBA Semester 4 ASSIGNMENT SET 1 RESEARCH METHODOLOGY - MI0039

Question 1:
Warigon is a retail company and it wants to automate its payment system. Assume that you are the design engineer of that company. What are the factors that you would consider while designing the electronic payment system?

Answer 1: Designing Electronic Payment Systems. The following factors, largely non-technical in nature, must be considered while designing an electronic payment system to automate payments for the retail company:

Privacy. A user expects to trust in a secure system; just as the telephone is a safe and private medium free of wiretaps and hackers, electronic communication must merit equal trust.

Security. A secure system verifies the identity of the two parties to a transaction through "user authentication" and reserves the flexibility to restrict information/services through access control. Tomorrow's bank robbers will need no getaway cars - just a computer terminal, the price of a telephone call, and a little ingenuity. Millions of dollars have already been embezzled through computer fraud. No system is yet fool-proof, although designers are concentrating closely on security.

Intuitive interfaces. The payment interface must be as easy to use as a telephone. Generally speaking, users value convenience above everything else.

Database integration. With home banking, for example, a customer wants to work with all of his accounts. To date, separate accounts have been stored on separate databases. The challenge before banks is to tie these databases together and to allow customers access to any of them while keeping the data up-to-date and error-free.

Brokers. A "network banker" - someone to broker goods and services, settle conflicts, and facilitate financial transactions electronically - must be in place.

Pricing. One fundamental issue is how to price payment system services. For example, should subsidies be used to encourage users to shift from one form of payment to another - from cash to bank payments, from paper-based to e-cash? The problem with subsidies is the potential waste of resources, as money may be invested in systems that will not be used. Thus the investment in such systems might not only go unrecovered, but substantial ongoing operational subsidies may also be necessary. On the other hand, it must be recognized that without subsidies it is difficult to price all services affordably.

Standards. Without standards, the welding together of different payment users on different networks and different systems is impossible. Standards enable interoperability, giving users the ability to buy and receive information regardless of which bank is managing their money.

None of these hurdles is insurmountable. Most will be cleared within the next few years. These technical problems, experts hope, will be solved as technology improves and experience is gained. The biggest question concerns how customers will take to a paperless world.
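The security factor above hinges on user authentication. A minimal sketch of one common approach - verifying a login attempt against a salted password hash rather than a stored password - is shown below. The function names and the in-memory accounts store are illustrative assumptions, not part of any particular payment system:

```python
import hashlib
import hmac
import os

def register(store: dict, user: str, password: str) -> None:
    """Store a salted hash of the password, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    store[user] = (salt, digest)

def authenticate(store: dict, user: str, password: str) -> bool:
    """Verify a login attempt against the stored salt and hash."""
    if user not in store:
        return False
    salt, expected = store[user]
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest, expected)

accounts = {}
register(accounts, "alice", "s3cret")
print(authenticate(accounts, "alice", "s3cret"))   # True
print(authenticate(accounts, "alice", "wrong"))    # False
```

A real payment system would layer access control, transport encryption, and audit logging on top of such a check; this sketch only illustrates the authentication step itself.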

Question 2:
The four Ps in marketing are the four major ingredients of a traditional marketing mix directed at the customer or target market. List and explain the four Ps of marketing. Are the 4 Ps really applicable to Internet marketing? How?

Answer 2: The 4 Ps of marketing:
1. Product
2. Place
3. Price
4. Promotion

Explanation:

1. Price
The price is the amount of money a customer pays for the product or service. The price is very significant as it governs the company's profit and therefore its existence. Modifying the price has an intense impact on the marketing strategy and, depending on the price elasticity of the product, it will frequently affect demand and sales as well. The price that the marketer sets should balance the other elements of the marketing mix.

2. Product
The product is the simplest idea. It is an item or service that fulfills what a consumer needs or wants. Your product can literally be anything, even an idea. You must have a concrete grasp on what the product is, however, before you can successfully market it.

3. Promotion
Promotion comprises all of the methods of communication that a marketer may use to deliver information to various people about the product. Promotion is made up of elements such as advertising, public relations, personal selling, viral and word-of-mouth marketing, and sales promotion. Promotion is essentially how you get the word out about your product or service.

4. Place
Place is all about providing the product at a location which is convenient for consumers to access. Today, that is frequently online. Place also has a lot to do with distribution, or getting the product to the customer. Several approaches such as intensive distribution, selective distribution, exclusive distribution and franchising can be applied by the marketer to match the other parts of the marketing mix.

The four Ps can help you become a better marketer by understanding the simplified balance of marketing products or services. Once you have a good understanding of these four aspects, you can have more success in marketing your products or services.

Q4. Website is the most important front-end tool in online marketing. It provides an interactive mode of operation to the user. Define what a website is. Analyze the structure of a website. (defining the website - 1 mark, diagram - 2 marks, analyzing with the justification - 7 marks)

Answer: Website: A webpage is a document, typically written in plain text interspersed with formatting instructions of Hypertext Markup Language (HTML, XHTML). A webpage may incorporate elements from other websites with suitable markup anchors. A website is a collection of such interlinked webpages published under a common domain.

Analyzing the structure of a website: Browsing and page creation are two fundamental forms of Web usage. The two activities are inherently related, in large part through the complex link topology of the WWW; patterns of browsing and information discovery are based fundamentally on the ways in which pages are connected through the construction of hyperlinks. Links carry considerable meaning: a link to another page, especially on another site, encodes a valuable type of human judgement.
We have been investigating link structure as a way of understanding its relation to the process of searching for information, its role in the implicit communities that page creators define, and its implications for the understanding of social behavior on the Web.

Hubs, Authorities, and Communities. There is, clearly, no explicit global scheme that controls the construction of Web pages and hyperlinks; how then can we discern high-level forms of structure from the link topology? In our work, we have found that the notion of authority provides a valuable perspective from which to consider this issue. A topic with broad representation on the WWW contains a number of prominent, authoritative pages, and structure emerges from the way in which such authorities are implicitly "endorsed" through hyperlinks. Modeling the mechanism by which authority is conferred on the WWW is itself a challenging problem. In many cases, authoritative pages on a common topic do not endorse one another directly - Microsoft and Netscape are both good authorities for the topic of "web browsers," but they do not link to one another - and so often they can only be grouped together through an intermediate layer of relatively anonymous hub pages, which link to multiple, thematically related authorities. Thus, hubs and authorities are distinct types of pages that exhibit a natural form of symbiosis: a good hub points to many good authorities, while a good authority is pointed to by many good hubs. Note that a good hub may not even be pointed to by any page; in other words, some of the most valuable structural contributions to the Web are being made by relatively unrecognized individuals. We feel that this two-level model of hubs and authorities is appropriate to a domain as heterogeneous as the Web, where individuals, organizations, and large commercial enterprises create hyperlinked content with different (and often conflicting) objectives in a common environment. The model also provides a natural way to expose structure among both the set of hubs, who often do not know directly of one another's existence, and among the set of authorities, who often do not directly acknowledge one another's existence. We refer to a densely interconnected set of hubs and authorities as a community. Note that our use of the term "community" is not meant to imply that these structures have been constructed in a centralized or planned fashion. Rather, our experiments with the Web's link structure suggest that communities of hubs and authorities are a recurring consequence of the way in which creators of WWW pages link to one another in the context of topics of widespread interest. The notion of using link information to define measures of "importance," as we do in identifying authoritative pages, has antecedents in the study of social networks, citation analysis, and recent approaches to hypertext information retrieval.
Two such approaches related to ours are the influence weight methodology of Pinski and Narin, from the field of citation analysis, and the PageRank algorithm of Brin and Page for the WWW. The models underlying these techniques form an interesting contrast to ours. They posit frameworks in which one's importance is determined by the extent to which one is referred to by other important sources; they do not incorporate a notion of hubs. As discussed above, we feel that our model for the interaction between authorities and hubs captures some of the crucial features of the Web's social organization: authority very often "flows" between highly visible nodes only through an intervening set of hub pages.

Styles of Linking and Community Formation. One can find densely linked collections of hubs and authorities in a remarkably diverse range of settings on the WWW. Because such collections have an intrinsic definition in terms of the link structure, we can identify them even in the absence of a specific topic description. This suggests a promising approach to WWW categorization: rather than assuming an a priori collection of subjects, we can let the link-based communities themselves define the prevailing topics, niches, and user populations of interest on the WWW. It is important to bear in mind that the issue here is not simply to partition the WWW into focused groups of this sort; the full representation of any such group on the WWW is typically enormous, and our small set of related authorities must serve the critical function of providing a compact yet informative representation of a much larger underlying population. In order to fully realize these possibilities, we need to further deepen our understanding of the many styles in which users create hyperlinks. We see recurrent contrasts between the structures of communities that have primarily academic, commercial, or governmental representation on the Web; the style of linking in a structure such as a corporate intranet stretches our basic notions even further. We also see that communities on the Web often exist to extents disproportionate to their presence in the "real" world.

Inferring Global Structure through Sampling. Although our goals are to infer notions of structure that apply in a global sense on the WWW, we have developed analysis techniques that operate on carefully chosen samples of only a few thousand Web pages at a time. Indeed, we find that our techniques typically extract the greatest degree of orderly structure in the context of topics for which the overall number of relevant pages, and the density of hyperlinking, is the largest. As a means for better understanding some of these phenomena, we believe it would be extremely valuable to develop probabilistic models of page and hyperlink creation that contain enough structure to capture certain global properties of the WWW, and yet are clean enough to allow for concrete analysis. Such models could serve as a testbed for studying the effectiveness of link-based analysis on the WWW, and potentially for suggesting new methods of using links to study the structure of hypertext.

Links, Traffic, and Browsing Patterns. We began by observing that the activities of browsing and linking are tightly coupled, and that the way in which the link structure of the Web has evolved has, in large part, determined the style in which people navigate it. We believe that browsing and search can be further enhanced by an awareness of Web communities. A few search engines are beginning to use link information, but effective tools can be built into browsers too. For example, simply presenting pages pointing to the page being browsed can lead a user to good hub pages very quickly. Such a technique can also be an effective supplement to contrasting approaches based on collaborative filtering, which use browsing logs in place of the quality judgments of hubs. In general, tools that incorporate high-level information about the WWW link topology can naturally lead users to adopt more "link-aware" browsing paradigms, and can aid in developing approaches to navigation that make more effective use of global structural information.
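The mutual reinforcement between hubs and authorities described above can be sketched as a simple iterative computation (this is essentially the idea behind the HITS algorithm). The tiny web graph and the fixed iteration count below are illustrative assumptions:

```python
from math import sqrt

def hits(links, iterations=50):
    """links: dict mapping a page to the list of pages it points to.
    Returns (hub, authority) score dicts after iterative reinforcement."""
    pages = set(links) | {q for targets in links.values() for q in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # A good authority is pointed to by many good hubs.
        auth = {p: 0.0 for p in pages}
        for p, targets in links.items():
            for q in targets:
                auth[q] += hub[p]
        # A good hub points to many good authorities.
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        # Normalize so scores stay comparable across iterations.
        for scores in (hub, auth):
            norm = sqrt(sum(v * v for v in scores.values())) or 1.0
            for p in scores:
                scores[p] /= norm
    return hub, auth

# Hypothetical mini-web: three hub pages endorsing two authority pages.
web = {"hub1": ["ms", "ns"], "hub2": ["ms", "ns"], "hub3": ["ms"]}
hub, auth = hits(web)
print(max(auth, key=auth.get))  # "ms" emerges as the strongest authority
```

Note how the symbiosis described in the text appears directly in the two update steps: authorities accumulate the scores of the hubs pointing at them, and hubs accumulate the scores of the authorities they point to.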

Question 3: Mobile advertising has become a no-brainer for some brands to drive in-store traffic and online revenue. This is possible only after the growth of m-commerce. Define m-commerce. Describe the areas of its potential growth and the future of m-commerce.

Answer 3: M-commerce (mobile commerce) is the buying and selling of goods and services through wireless handheld devices such as cellular telephones and personal digital assistants (PDAs). Known as next-generation e-commerce, m-commerce enables users to access the Internet without needing to find a place to plug in. The emerging technology behind m-commerce, which is based on the Wireless Application Protocol (WAP), has made far greater strides in Europe, where mobile devices equipped with Web-ready micro-browsers are much more common than in the United States. In order to exploit the m-commerce market potential, handset manufacturers such as Nokia, Ericsson, Motorola, and Qualcomm are working with carriers such as AT&T Wireless and Sprint to develop WAP-enabled smart phones, the industry's answer to the Swiss Army Knife, and ways to reach them. Using Bluetooth technology, smart phones offer fax, e-mail, and phone capabilities all in one, paving the way for m-commerce to be accepted by an increasingly mobile workforce. As content delivery over wireless devices becomes faster, more secure, and scalable, there is wide speculation that m-commerce will surpass wireline e-commerce as the method of choice for digital commerce transactions. The industries affected by m-commerce include:

Financial services, which includes mobile banking (when customers use their handheld devices to access their accounts and pay their bills) as well as brokerage services, in which stock quotes can be displayed and trading conducted from the same handheld device.

Telecommunications, in which service changes, bill payment and account reviews can all be conducted from the same handheld device.

Service/retail, as consumers are given the ability to place and pay for orders on-the-fly.

Information services, which include the delivery of financial news, sports figures and traffic updates to a single mobile device.

IBM and other companies are experimenting with speech recognition software as a way to ensure security for m-commerce transactions.

Question 4: Write an essay on the need for research design and explain the principles of experimental designs.

Answer 4: The need for a methodologically designed research:

a) In many a research inquiry, the researcher has no idea as to how accurate the results of his study ought to be in order to be useful. Where such is the case, the researcher has to determine how much inaccuracy may be tolerated. In quite a few cases he may be in a position to know how much inaccuracy his method of research will produce. In either case he should design his research if he wants to assure himself of useful results.

b) In many research projects, the time consumed in trying to ascertain what the data mean after they have been collected is much greater than the time taken to design a research which yields data whose meaning is known as they are collected.

c) The idealized design is concerned with specifying the optimum research procedure that could be followed were there no practical restrictions.

Professor Fisher has enumerated three principles of experimental designs:

1. The principle of replication: The experiment should be repeated more than once. Thus, each treatment is applied in many experimental units instead of one. By doing so, the statistical accuracy of the experiment is increased. For example, suppose we are to examine the effect of two varieties of rice. For this purpose we may divide the field into two parts and grow one variety in one part and the other variety in the other part. We can compare the yield of the two parts and draw a conclusion on that basis. But if we are to apply the principle of replication to this experiment, then we first divide the field into several parts, grow one variety in half of these parts and the other variety in the remaining parts. We can then collect the yield data of the two varieties and draw a conclusion by comparing the two. The result so obtained will be more reliable than the conclusion we would draw without applying the principle of replication. The entire experiment can even be repeated several times for better results. Conceptually, replication does not present any difficulty, but computationally it does. However, it should be remembered that replication is introduced in order to increase the precision of a study; that is to say, to increase the accuracy with which the main effects and interactions can be estimated.

2. The principle of randomization: It provides protection, when we conduct an experiment, against the effects of extraneous factors. In other words, this principle indicates that we should design or plan the experiment in such a way that the variations caused by extraneous factors can all be combined under the general heading of chance. For instance, if we grow one variety of rice in the first half of the parts of a field and the other variety in the other half, then it is just possible that the soil fertility may be different in the first half in comparison to the other half. If this is so, our results would not be realistic. In such a situation, we may assign the variety of rice to be grown in different parts of the field on the basis of some random sampling technique, i.e., we may apply the randomization principle and protect ourselves against the effects of extraneous factors. As such, through the application of the principle of randomization, we can have a better estimate of the experimental error.

3. The principle of local control: It is another important principle of experimental designs. Under it, the extraneous factor, the known source of variability, is made to vary deliberately over as wide a range as necessary, and this needs to be done in such a way that the variability it causes can be measured and hence eliminated from the experimental error. This means that we should plan the experiment in a manner that allows us to perform a two-way analysis of variance, in which the total variability of the data is divided into three components attributed to treatments, the extraneous factor, and experimental error. In other words, according to the principle of local control, we first divide the field into several homogeneous parts, known as blocks, and then each such block is divided into parts equal to the number of treatments. The treatments are then randomly assigned to these parts of a block. In general, blocks are the levels at which we hold an extraneous factor fixed, so that we can measure its contribution to the variability of the data by means of a two-way analysis of variance. In brief, through the principle of local control we can eliminate the variability due to extraneous factors from the experimental error.
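The three principles can be illustrated with a small simulation of the rice-field example. The yield model, fertility values and plot counts below are invented for illustration only; the point is that blocking (local control) plus within-block random assignment (randomization) keeps a fertility gradient from confounding the variety comparison, while growing each variety in several plots (replication) improves precision:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

def simulate_yield(variety, fertility):
    """Hypothetical yield = variety effect + plot fertility + random noise."""
    effect = {"A": 50.0, "B": 55.0}[variety]
    return effect + fertility + random.gauss(0, 1)

# Local control: divide the field into homogeneous blocks (fertility levels).
blocks = {"block1": 0.0, "block2": 5.0, "block3": 10.0}

yields = {"A": [], "B": []}
for block, fertility in blocks.items():
    # Replication: each variety gets two plots in every block.
    varieties = ["A", "B", "A", "B"]
    # Randomization: assign varieties to plots at random within the block.
    random.shuffle(varieties)
    for v in varieties:
        yields[v].append(simulate_yield(v, fertility))

# Because each block contributes equally to both varieties, the fertility
# gradient cancels out of the comparison; the difference estimates the
# true variety effect (5.0) up to experimental error.
print(statistics.mean(yields["B"]) - statistics.mean(yields["A"]))
```

A two-way analysis of variance on such data would further separate the block (fertility) component from the experimental error, exactly as the principle of local control prescribes.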

Question 5: Distinguish between primary and secondary sources of data collection. Explain the features, uses, advantages and limitations of secondary data. Which is the best way of collecting the data for research, primary or secondary? Support your answer.

Answer 5: Primary Data: Primary sources are original sources from which the researcher directly collects data that have not been previously collected, e.g., collection of data directly by the researcher on brand awareness, brand preference, brand loyalty and other aspects of consumer behaviour from a sample of consumers by interviewing them. Primary data are first-hand information collected through various methods such as observation, interviewing, mailing etc.

Secondary Data: These are sources containing data which have been collected and compiled for another purpose. The secondary sources consist of readily available compendia and already compiled statistical statements and reports whose data may be used by researchers for their studies, e.g., census reports, annual reports and financial statements of companies, reports of Government departments, annual reports on currency and finance published by the Reserve Bank of India, statistical statements relating to co-operatives and regional banks published by NABARD, reports of the National Sample Survey Organisation, reports of trade associations, publications of international organizations such as the UNO, IMF, World Bank, ILO, WHO, etc., and trade and financial journals and newspapers. Secondary sources consist of not only published records and reports, but also unpublished records. The latter category includes various records and registers maintained by firms and organizations, e.g., accounting and financial records, personnel records, registers of members, minutes of meetings, inventory records etc.

Features of Secondary Sources: Though secondary sources are diverse and consist of all sorts of materials, they have certain common characteristics. First, they are readymade and readily available, and do not require the trouble of constructing tools and administering them. Second, they consist of data over whose collection and classification the researcher had no original control. Both the form and the content of secondary sources are shaped by others. Clearly, this is a feature which can limit the research value of secondary sources. Finally, secondary sources are not limited in time and space. That is, the researcher using them need not have been present when and where they were gathered.

Uses of Secondary Data: Secondary data may be used in three ways by a researcher. First, some specific information from secondary sources may be used for reference purposes. For example, general statistical information on the number of co-operative credit societies in the country, their coverage of villages, their capital structure, volume of business etc., may be taken from published reports and quoted as background information in a study on the evaluation of the performance of co-operative credit societies in a selected district/state. Second, secondary data may be used as benchmarks against which the findings of research may be tested; e.g., the findings of a local or regional survey may be compared with the national averages; the performance indicators of a particular bank may be tested against the corresponding indicators of the banking industry as a whole; and so on. Finally, secondary data may be used as the sole source of information for a research project. Such studies as securities market behaviour, financial analysis of companies, trade credit allocation in commercial banks, sociological studies on crimes, historical studies, and the like, depend primarily on secondary data. Year books, statistical reports of government departments, reports of public organizations such as the Bureau of Public Enterprises, census reports etc., serve as major data sources for such research studies.

Advantages of Secondary Data: Secondary sources have some advantages:

1. Secondary data, if available, can be secured quickly and cheaply. Once their sources - documents and reports - are located, collection of data is just a matter of desk work. Even the tediousness of copying the data from the source can now be avoided, thanks to photocopying facilities.

2. A wider geographical area and a longer reference period may be covered without much cost. Thus, the use of secondary data extends the researcher's reach in space and time.

3. The use of secondary data broadens the data base from which scientific generalizations can be made.

4. Environmental and cultural settings are required for the study.

5. The use of secondary data enables a researcher to verify the findings based on primary data. It readily meets the need for additional empirical support. The researcher need not wait until additional primary data can be collected.

Disadvantages of Secondary Data: The use of secondary data has its own limitations:

1. The most important limitation is that the available data may not meet our specific needs. The definitions adopted by those who collected the data may be different; units of measure may not match; and time periods may also be different.

2. The available data may not be as accurate as desired. To assess their accuracy we need to know how the data were collected.

3. Secondary data may not be up to date and may become obsolete by the time they appear in print, because of the time lag in producing them. For example, population census data are published two or three years after compilation, and no new figures will be available for another ten years.

4. Finally, information about the whereabouts of sources may not be available to all social scientists. Even if the location of the source is known, accessibility depends primarily on proximity. For example, most unpublished official records and compilations are located in the capital city, and they are not within easy reach of researchers based in far-off places.

Question 6: Describe the interview method of collecting data. State the conditions under which it is considered most suitable. You have been assigned to conduct a survey on the reading habits of housewives in middle-class families. Design a suitable questionnaire consisting of 20 questions that you propose to use in the survey.

Answer 6: 1) Structured (Directive) Interview:

This is an interview made with a detailed standardized schedule. The same questions are put to all the respondents and in the same order. Each question is asked in the same way in each interview, promoting measurement reliability. This type of interview is used for large-scale formalized surveys.

Advantages: This interview has certain advantages. First, data from one interview to the next are easily comparable. Second, recording and coding the data do not pose any problem, and greater precision is achieved. Lastly, attention is not diverted to extraneous, irrelevant and time-consuming conversation.

Limitations: However, this type of interview suffers from some limitations. First, it tends to lose the spontaneity of natural conversation. Second, the way in which the interview is structured may be such that the respondent's views are minimized and the investigator's own biases regarding the problem under study are inadvertently introduced. Lastly, the scope for exploration is limited.

2) Unstructured or Non-Directive Interview: This is the least structured one. The interviewer encourages the respondent to talk freely about a given topic with a minimum of prompting or guidance. In this type of interview, a detailed pre-planned schedule is not used; only a broad interview guide is used. The interviewer avoids channelling the interview in particular directions. Instead he develops a very permissive atmosphere. Questions are not standardized or ordered in a particular way. This kind of interviewing is more useful in case studies than in surveys. It is particularly useful in exploratory research where the lines of investigation are not clearly defined. It is also useful for gathering information on sensitive topics such as divorce, social discrimination, class conflict, the generation gap, drug addiction etc. It provides an opportunity to explore the various aspects of the problem in an unrestricted manner.

Advantages: This type of interview has certain special advantages. It can closely approximate the spontaneity of a natural conversation. It is less prone to interviewer's bias. It provides greater opportunity to explore the problem in an unrestricted manner.

Limitations: Though the unstructured interview is a potent research instrument, it is not free from limitations. One of its major limitations is that the data obtained from one interview are not comparable to the data from the next. Hence, it is not suitable for surveys. Time may be wasted in unproductive conversations. By not focusing on one or another facet of a problem, the investigator may run the risk of being led up a blind alley. As there is no particular order or sequence in this interview, the classification of responses and coding may require more time. This type of informal interviewing calls for greater skill than the formal survey interview.

3) Focused Interview: This is a semi-structured interview where the investigator attempts to focus the discussion on the actual effects of a given experience to which the respondents have been exposed. It takes place with respondents known to have been involved in a particular experience, e.g., seeing a particular film, viewing a particular programme on TV, being involved in a train/bus accident, etc. The situation is analyzed prior to the interview. An interview guide specifying the topics relating to the research hypothesis is used. The interview is focused on the subjective experiences of the respondent, i.e., his attitudes and emotional responses regarding the situation under study. The focused interview permits the interviewer to obtain details of personal reactions, specific emotions and the like.

Merits: This type of interview is free from the inflexibility of formal methods, yet gives the interview a set form and ensures adequate coverage of all the relevant topics. The respondent is asked for certain information, yet he has plenty of opportunity to present his views. The interviewer is also free to choose the sequence of questions and determine the extent of probing.

4) Clinical Interview: This is similar to the focused interview but with a subtle difference. While the focused interview is concerned with the effects of a specific experience, the clinical interview is concerned with broad underlying feelings or motivations, or with the course of the individual's life experiences. The personal history interview used in social case work, prison administration, psychiatric clinics and in individual life history research is the most common type of clinical interview. The specific aspects of the individual's life history to be covered by the interview are determined with reference to the purpose of the study, and the respondent is encouraged to talk freely about them.

5) Depth Interview: This is an intensive and searching interview aiming at studying the respondent's opinions, emotions or convictions on the basis of an interview guide. This requires much more training in inter-personal skills than the structured interview. It deliberately aims to elicit unconscious as well as extremely personal feelings and emotions. It is generally a lengthy procedure designed to encourage free expression of affectively charged information. It requires probing. The interviewer should totally avoid advising or showing disagreement. Of course, he should use encouraging expressions like "uh-huh" or "I see" to motivate the respondent to continue narration. Sometimes the interviewer has to face the problem of affections, i.e., the respondent may hide his affective feelings. The interviewer should handle such a situation with great care.

General guidelines for conducting an interview:

1. Start the interview. Carry it on in an informal and natural conversational style.

2. Ask all the applicable questions in the same order as they appear on the schedule, without any elucidation or change in the wording. Ask all the applicable questions listed in the schedule. Do not take answers for granted.

3. If an interview guide is used, the interviewer may tailor his questions to each respondent, covering, of course, the areas to be investigated.

4. Know the objectives of each question so as to make sure that the answers adequately satisfy the question's objectives.

5. If a question is not understood, repeat it slowly with proper emphasis and appropriate explanation, when necessary.

6. Take all answers naturally, never showing disapproval or surprise. When the respondent does not meet with interruptions, denials, contradictions and other harassment, he may feel free and may not try to withhold information. He will be motivated to communicate when the atmosphere is permissive and the listener's attitude is non-judgmental and genuinely absorbed in the revelations.

7. Listen quietly with patience and humility. Give not only undivided attention, but also personal warmth. At the same time, be alert and analytic to incomplete, non-specific and inconsistent answers, but avoid interrupting the flow of information. If necessary, jot down unobtrusively the points which need elaboration or verification for later and timelier probing. The appropriate technique for this probing is to ask for further clarification in a polite manner, such as "I am not sure I understood fully; is this ... what you meant?"

8. Neither argue nor dispute.

9. Show genuine concern and interest in the ideas expressed by the respondent; at the same time, maintain an impartial and objective attitude.

10. Do not reveal your own opinion or reaction. Even when you are asked for your views, laugh off the request, saying "Well, your opinions are more important than mine."

11. At times the interview runs dry and needs re-stimulation. Then use such expressions as "Uh-huh", "That's interesting" or "I see; can you tell me more about that?" and the like.

12. When the interviewee fails to supply his reactions to related past experiences, re-present the stimulus situation, introducing appropriate questions which will aid in revealing the past, e.g., "Under what circumstances did such and such a phenomenon occur?" or "How did you feel about it?" and the like.

13. At times, the conversation may go off the track. Be alert to discover the drifting, and steer the conversation back on track with some such remark as, "You know, I was very much interested in what you said a moment ago. Could you tell me more about it?"

14. When the conversation turns to some intimate subjects, and particularly when it deals with crises in the life of the individual, emotional blockage may occur. Then drop the subject for the time being and pursue another line of conversation for a while so that a less direct approach to the subject can be made later.

15. When there is a pause in the flow of information, do not hurry the interview. Take it as a matter of course with an interested look or a sympathetic half-smile. If the silence is too prolonged, introduce a stimulus, saying "You mentioned that ... What happened then?"
