
Page 22, RQ3

CRM Implementation: Customer relationship management (CRM) is an approach to managing a company's interactions with current and potential customers. Once this relationship is built properly, the organisation can easily identify customers' actual needs and requirements and offer them products and services accordingly. This not only leads to better conversion on the seller's side; the customer is also more satisfied with the service. Given the importance of CRM for an organisation, it is very important to implement it with the right strategic approach. Today, in the era of the internet and technological advances, there are a number of CRM tools whose implementation helps organisations create better value for their users.
Managing IS in an organisation is a highly challenging and complex task. One of the main reasons is that neither the organisation nor its information systems are static over time, and it is the job of management to ensure that the systems remain useful and relevant to organisational goals.
Figure 1.4
Outcomes that are direct consequences of implementing an IS are called first-order effects. Effects that are not immediately visible after implementation but emerge over time are called second-order effects.
Successful implementation of IS largely depends on organisational traits such as the competitive environment, culture, and structure.
In the early days, IS were implemented as support processes; today they are implemented to automate existing processes and to add or eliminate processes.
If an organisation plans to introduce a new CRM, it has to consider how the system will fit into its existing systems.
Steps to a Successful CRM Implementation
1. Identify why you need a CRM and what you expect from it:
   - How much to spend
   - What level of capabilities should be created with the IS
   - How fast and accurately it should serve users
   - Security levels
   - What functions should be on the cloud
2. Find a suitable CRM vendor for your organisation (a vendor-scoring sketch follows this list).
3. Develop a budget.
4. Identify which departments and staff will handle the process and train them accordingly.
5. Draw up a blueprint of how you want to progress with the CRM.
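Step 2 can be made concrete with a simple weighted scoring exercise. The sketch below is illustrative only: the criteria, weights, and vendor names are hypothetical and not taken from the source.

```python
# A minimal sketch (hypothetical criteria, weights, and vendors): scoring
# candidate CRM vendors against the requirements identified in step 1.
weights = {"capabilities": 0.35, "cost_fit": 0.25, "security": 0.25, "cloud_support": 0.15}

vendors = {
    "Vendor A": {"capabilities": 8, "cost_fit": 6, "security": 9, "cloud_support": 7},
    "Vendor B": {"capabilities": 7, "cost_fit": 9, "security": 6, "cloud_support": 8},
}

def weighted_score(scores: dict[str, int]) -> float:
    """Combine 0-10 criterion scores into one weighted figure."""
    return sum(weights[criterion] * score for criterion, score in scores.items())

# Rank vendors from best to worst weighted score.
for name, scores in sorted(vendors.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.1f} / 10")
```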
Page 36, RQ3
Rogers' Model
Figure 2.2
Innovators (2.5%) – Innovators are the first individuals to adopt an innovation. They are willing to take risks, are the youngest in age, have the highest social class, have great financial liquidity, are very social, and have the closest contact with scientific sources and interaction with other innovators. Their risk tolerance means they adopt technologies that may ultimately fail, and their financial resources help absorb these failures.
Early Adopters (13.5%) – This is the second-fastest category of individuals to adopt an innovation. These individuals have the highest degree of opinion leadership among the adopter categories. Early adopters are typically younger in age, have higher social status, more financial liquidity, and advanced education, and are more socially forward than late adopters. They are more discreet in their adoption choices than innovators, realising that a judicious choice of adoption will help them maintain a central communication position.
Early Majority (34%) – Individuals in this category adopt an innovation after a varying degree of time, significantly longer than the innovators and early adopters. The early majority tend to be slower in the adoption process, have above-average social status, have contact with early adopters, and seldom hold positions of opinion leadership in a system.
Late Majority (34%) – Individuals in this category adopt an innovation after the average member of society. They approach an innovation with a high degree of skepticism and only after the majority of society has adopted it. The late majority are typically skeptical about an innovation, have below-average social status and very little financial liquidity, are in contact with others in the late and early majority, and have very little opinion leadership.
Laggards (16%) – Individuals in this category are the last to adopt an innovation. Unlike some of the previous categories, they show little to no opinion leadership. These individuals typically have an aversion to change agents and tend to be advanced in age. Laggards tend to be focused on “traditions”, are likely to have the lowest social status and the lowest financial liquidity, are the oldest of all adopters, are in contact only with family and close friends, and have very little to no opinion leadership.
Example: UPI apps (a small bucketing sketch follows).
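As a minimal sketch of how these percentage bands could be applied in practice, the snippet below buckets users of a hypothetical UPI app into Rogers' categories by their percentile rank of adoption time; the adoption days are made up for illustration and are not real data.

```python
# A minimal sketch (hypothetical data): assigning users to Rogers' adopter
# categories based on how early they first used a hypothetical UPI app.
from bisect import bisect_left

# Cumulative population shares at the category boundaries:
# 2.5% innovators, +13.5% early adopters, +34% early majority, +34% late majority, +16% laggards.
BOUNDARIES = [0.025, 0.16, 0.50, 0.84]
LABELS = ["Innovator", "Early Adopter", "Early Majority", "Late Majority", "Laggard"]

def categorise(adoption_days: list[int]) -> dict[int, str]:
    """Map each user's adoption day to a category by percentile rank of adoption."""
    order = sorted(adoption_days)
    n = len(order)
    result = {}
    for day in adoption_days:
        percentile = order.index(day) / n          # share of users who adopted earlier
        result[day] = LABELS[bisect_left(BOUNDARIES, percentile)]
    return result

# Hypothetical adoption days (days after launch) for eight users of the app.
print(categorise([1, 3, 10, 30, 45, 90, 180, 400]))
```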
Page 80, RQ4
Using IT for competitive advantage
Competing on low cost: IS help firms reduce the costs of operations, which helps them gain a low-cost competitive position. Example: Indian cellular providers used IS for billing and for keeping track of customers, which helped them cut costs and overtake the landline model. Another example is a supermarket that keeps track of products and their selling frequency; this enables it to order the right products on time and in the proper quantity, and in return it can offer discounts (see the reorder sketch below).
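The supermarket example can be sketched as a small reorder calculation. The figures below are hypothetical, and the reorder-point formula (average daily sales × lead time + safety stock) is a standard inventory heuristic, not something the source prescribes.

```python
# A minimal sketch (hypothetical figures): using tracked selling frequency to
# decide when and how much of a product the supermarket should reorder.
avg_daily_sales = 24        # units sold per day, from the store's IS records
lead_time_days = 5          # days the supplier takes to deliver
safety_stock = 30           # buffer against demand spikes
on_hand = 120               # current stock recorded by the IS

reorder_point = avg_daily_sales * lead_time_days + safety_stock   # 150 units
if on_hand <= reorder_point:
    order_qty = avg_daily_sales * 14 - on_hand   # top up to two weeks of cover
    print(f"Reorder {order_qty} units now")
else:
    print("Stock sufficient, no order needed")
```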
Competing on differentiation: offer distinctive services with the help of IS. Example: Amazon used its IT infrastructure to sell books online; postal services used IS to let users track their parcels.
Table 4.1
IS for value chain
Figure 4.9

Page 124, RQ4


Six decision-making archetypes determine the IT governance of any organisation (a small illustrative mapping follows this list):
1. Business monarchy: decision-making power is held by the heads of the organisation (example: structural decisions in a kingdom).
2. IT monarchy: decisions are made by IT heads (example: war decisions made only by military generals).
3. Feudal: decisions are made by business-unit heads (example: administrative services).
4. Federal: decisions are made jointly by the heads of the organisation and unit heads (example: PM and CMs).
5. IT duopoly: decisions are made by IT heads together with either business heads or top management, but not both (example: PM and CMs, or PM and cabinet).
6. Anarchy: no central or high-level control; decisions are made purely based on needs (example: State and Central lists).
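As an illustration of how an organisation might record these archetypes, the sketch below maps a few decision types to the archetype that owns them; the decision types and assignments are invented for the example, not taken from the source.

```python
# A minimal sketch (hypothetical decisions and assignments): recording which
# governance archetype owns each kind of IT decision in one organisation.
from enum import Enum

class Archetype(Enum):
    BUSINESS_MONARCHY = "heads of the organisation"
    IT_MONARCHY = "IT heads"
    FEUDAL = "business-unit heads"
    FEDERAL = "organisation heads + unit heads"
    IT_DUOPOLY = "IT heads + one business group"
    ANARCHY = "no central control"

# Hypothetical allocation of decision types to archetypes.
governance = {
    "overall IT budget": Archetype.FEDERAL,
    "technology standards": Archetype.IT_MONARCHY,
    "department tooling": Archetype.FEUDAL,
    "new application priorities": Archetype.IT_DUOPOLY,
}

for decision, archetype in governance.items():
    print(f"{decision}: decided by {archetype.value} ({archetype.name})")
```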

Page 145, RQ3


Table 7.1
Managing E-Waste:
1. Managing Acquisition
2. Centralised computing
3. Use of renewable and non-renewable resources
E-waste example: Binbag

Binbag provides dependable and professional e-waste recycling, asset recovery, and data destruction services to companies across India. It is a Pollution Board approved e-waste vendor that can meet clients' compliance requirements.
What they do:
1. E-waste recycling
Whether a client is a large company that needs dependable monthly e-waste recycling or a smaller company that sends e-waste only when it is ready, Binbag tailors its services to the client's particular needs.
2. Office asset liquidation
With Binbag's turnkey office liquidation services, clients do not have to work with multiple companies; it removes virtually anything, from AC units and electrical components to office furniture, network cables, and more.
3. Data destruction
Binbag treats data security as paramount and certifies the destruction of data, whether through virtual destruction via wiping or physical destruction via mechanical crushing.

Page 167, RQ2


The carbon footprint of a computer or IT product is measured by assessing the energy required to make the product, transport it, assemble it, process it, package it, and install it, plus the energy consumed by the product while it is in use (see the sketch below).
Green practices can be monitored by analysing the value chain, i.e. tracking a product from inbound logistics through to service.
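A rough sketch of such an assessment: sum the energy consumed at each lifecycle stage and convert it with a grid emission factor. All figures below are hypothetical placeholders, not measured values.

```python
# A minimal sketch (illustrative numbers, not from the source): estimating a
# laptop's lifecycle carbon footprint by summing energy use across stages.

GRID_FACTOR_KG_PER_KWH = 0.7      # assumed grid emission factor (kg CO2e per kWh)

# Hypothetical energy consumed at each lifecycle stage, in kWh.
stage_energy_kwh = {
    "manufacture": 300,
    "transport": 40,
    "assembly_and_packaging": 25,
    "installation": 5,
    "use_phase_4_years": 250,     # e.g. a modest laptop used a few hours a day
}

footprint_kg = sum(stage_energy_kwh.values()) * GRID_FACTOR_KG_PER_KWH
print(f"Estimated footprint: {footprint_kg:.0f} kg CO2e")   # ~434 kg with these inputs
```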
Page 197, RQ1
Virtualisation is technology that lets you create useful IT services using resources that are traditionally bound to hardware. It allows you to use a physical machine's full capacity by distributing its capabilities among many users or environments. The hypervisor is the software layer placed between the operating system and the hardware.
A. Resource Distribution

At times, the way virtualization partitions systems can result in some partitions that function really well and others that do not seem to have access to enough resources to meet their needs. Resource distribution challenges often occur early in the transition to virtualization and can be worked out with the provider's help.

B. VM Sprawl

The ability to create as many virtual machines as you want can lead to more VMs than the company needs to function. VM sprawl may seem harmless, but it can exacerbate resource distribution problems by diverting resources to VMs that aren't even being used, while those that are used and needed see reduced performance (see the sketch below).
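One way to keep sprawl in check is to periodically flag VMs with very low utilisation. The sketch below uses hypothetical inventory data rather than a real monitoring API.

```python
# A minimal sketch (hypothetical data, not from the source): flagging likely
# VM sprawl by looking for machines with very low average CPU use.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    avg_cpu_percent: float   # average utilisation over the last 30 days
    allocated_gb_ram: int

# Hypothetical inventory as it might be exported from a monitoring tool.
inventory = [
    VM("web-01", 45.0, 8),
    VM("test-old-build", 0.3, 16),
    VM("analytics-tmp", 1.1, 32),
    VM("db-01", 60.2, 64),
]

IDLE_THRESHOLD = 2.0   # percent; below this a VM is a reclamation candidate

candidates = [vm for vm in inventory if vm.avg_cpu_percent < IDLE_THRESHOLD]
for vm in candidates:
    print(f"Review {vm.name}: {vm.avg_cpu_percent}% CPU, {vm.allocated_gb_ram} GB RAM allocated")
print(f"RAM that could be reclaimed: {sum(vm.allocated_gb_ram for vm in candidates)} GB")
```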

C. Backward Compatibility

Many companies use legacy systems that can cause problems with newer
virtualized software and programs. Compatibility issues can be time-consuming
and difficult to resolve, but vendors may be aware of these difficulties and be
able to suggest upgrades or workarounds to make sure everything functions
the way it should.

D. Performance Monitoring

Virtualized systems don't lend themselves to the same kind of performance monitoring as hardware such as mainframes and hard drives.

E. Backup

Since there is no dedicated physical hard drive on which data and systems are backed up, and virtualization software is updated frequently, it can at times be difficult to access backups.

F. Security

Virtual systems can be compromised when users don't keep them secure and
use best practices for passwords, downloads and other tasks. Security can be
a problem for virtualization, but the isolation of each VM by the system can limit
security problems and help keep virtual systems more secure than others might
be.
Page 244, RQ1

IPv4 vs IPv6

1. IPv6 allows a practically unlimited number of IP addresses (its address space is 128 bits).
2. Hierarchical addressing and routing infrastructure
3. Stateful and stateless address configuration
4. Support for quality of service (QoS)
5. An ideal protocol for neighbouring-node interaction
6. An IPv6 address is alphanumeric: it is written in hexadecimal, with groups of bits separated by colons (:).
7. The header does not have a checksum field.
8. Unicast, multicast, and anycast addressing
9. In IPv6, configuration is optional, depending upon the functions needed.
10. IPv6 supports autoconfiguration capabilities.
11. It allows direct addressing because of the vast address space.
12. IPv6 provides interoperability and mobility capabilities that are embedded in network devices.
13. Support for Internet Protocol Security (IPSec) for network security is mandatory.
14. An IPv6 address is represented in hexadecimal, colon-separated notation; IPv6 is better suited to mobile networks, which are the future of telecommunication (see the sketch below).
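Python's standard ipaddress module can be used to illustrate the colon-separated hexadecimal notation and the size of the IPv6 address space; the addresses below use the 2001:db8::/32 documentation prefix.

```python
# A minimal sketch: inspecting IPv6 notation and address-space size with the
# standard-library ipaddress module.
import ipaddress

addr = ipaddress.ip_address("2001:db8::ff00:42:8329")   # documentation-prefix example
print(addr.compressed)    # 2001:db8::ff00:42:8329  (runs of zero groups collapsed to '::')
print(addr.exploded)      # 2001:0db8:0000:0000:0000:ff00:0042:8329

v4 = ipaddress.ip_address("192.0.2.1")
print(v4.max_prefixlen, addr.max_prefixlen)   # 32 vs 128 bits of address space

net = ipaddress.ip_network("2001:db8::/64")
print(net.num_addresses)  # 18446744073709551616 addresses in a single /64 subnet
```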

Page 299, RQ4

The systems development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system.
Stages of the systems development life cycle
Preliminary analysis: Begin with a preliminary analysis, propose alternative solutions,
describe costs and benefits, and submit a preliminary plan with recommendations.
Systems analysis, requirements definition: Define project goals into defined functions
and operations of the intended application. This involves the process of gathering and
interpreting facts, diagnosing problems, and recommending improvements to the
system.

Systems design: At this step, desired features and operations are described in detail,
including screen layouts, business rules, process diagrams, pseudocode, and other
documentation.

Development: the actual code of the system is written at this stage.

Integration and testing: All the modules are brought together into a special testing
environment, then checked for errors, bugs, and interoperability.
Acceptance, installation, deployment: This is the final stage of initial development,
where the software is put into production and runs actual business.

Maintenance: During the maintenance stage of the SDLC, the system is assessed and evaluated to ensure it does not become obsolete. This is also where changes are made to the initial software.

Evaluation: Some companies do not view this as an official stage of the SDLC, while
others consider it to be an extension of the maintenance stage, and may be referred
to in some circles as post-implementation review.
Disposal: The purpose here is to properly move, archive, discard, or destroy
information, hardware, and software that is being replaced, in a manner that prevents
any possibility of unauthorized disclosure of sensitive data.
Example: A $2 billion air traffic control system failed due to insufficient computer
memory
On April 30, 2014, hundreds of LAX flights were delayed or cancelled because all
computers in the airport crashed due to a bug in the En Route Automation
Modernization (ERAM) system.
The ERAM system failed because it limits how much data each plane can send. Most
planes have simple flight plans, so they do not exceed that limit. However, the U-2
operating that day had a complex flight plan that brought it close to the system’s limit.
Reasons for failure
1. Poor Communication
Effective communication is valuable in the workplace for many reasons. It creates a healthy environment for employees, helping them work efficiently, and it also builds strong relationships with clients and stakeholders.
2. Resistance to change
Resistance to change is part of human nature. Being a good project manager means overcoming this resistance to change.
3. Not reviewing project progress on a regular basis
Initial plans and timelines must be regularly updated. Not measuring your progress
against your initial plan often enough can cause big and unpleasant surprises.
4. Unclear requirements
Not going through a complete planning exercise with your client before you start
building is a guarantee for failure. After you have a clear image of what the client
needs you have to see how that compares to their initial expectations so that you
can offer them realistic solutions that fit their budget and time frame.
5. Unrealistic expectations
No one wants to set unrealistic expectations, but it happens all the time. This leads
to the project eventually getting delayed and the client will be unsatisfied and angry.
Even if you are very excited about the upcoming project, don’t commit to an end
date until you understand exactly what the client needs and what you’re expected
to deliver.

Page 380, RQ1


A decision tree is a decision support tool that uses a tree-like graph or model of
decisions and their possible consequences, including chance event outcomes,
resource costs, and utility. It is one way to display an algorithm that only contains
conditional control statements.
Tree-based methods empower predictive models with high accuracy, stability, and ease of interpretation. Unlike linear models, they map non-linear relationships quite well, and they are adaptable to solving almost any kind of problem at hand.
Example
Let's say we have a sample of 30 students with three variables: Gender (boy/girl), Class (IX/X), and Height (5 to 6 ft). 15 out of these 30 play cricket in their leisure time. Now we want to create a model to predict who will play cricket during leisure time. In this problem, we need to segregate students who play cricket in their leisure time based on the most significant input variable among the three.

A decision tree identifies the most significant variable, and the value of that variable, that gives the best homogeneous sets of the population (a minimal scikit-learn sketch follows).
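A minimal sketch of the student example with scikit-learn is shown below; the 30 records are synthetic (the book's actual data is not reproduced here), so the split the tree chooses is only illustrative.

```python
# A minimal sketch (synthetic data, not the book's dataset): fitting a decision
# tree to the 30-student cricket example with scikit-learn.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical encoding: gender (0 = girl, 1 = boy), class (9 or 10), height in feet.
gender = rng.integers(0, 2, 30)
klass = rng.integers(9, 11, 30)
height = rng.uniform(5.0, 6.0, 30).round(1)
X = np.column_stack([gender, klass, height])

# Hypothetical labels: boys are assumed more likely to play cricket, roughly half the sample.
plays_cricket = ((gender == 1) | (rng.random(30) < 0.2)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, plays_cricket)
print(export_text(tree, feature_names=["gender", "class", "height"]))
# The first split shows which variable the tree found most significant.
```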
The bounded rationality model explains why limits exist to how rational a decision maker can actually be within a decision-making environment. There are four assumptions in this model:
Managers select the first alternative that is satisfactory.
Managers recognize that their conception of the world is simple.
Managers are comfortable making decisions without determining all the alternatives.
Managers make decisions by rules of thumb or heuristics.

Page 270, RQ4


Four common encryption methods
There are different encryption methods based on the type of keys used, key length,
and size of data blocks encrypted. Here we discuss some of the common encryption
methods.
1. Advanced Encryption Standard (AES)
Advanced Encryption Standard is a symmetric encryption algorithm that
encrypts fixed blocks of data (of 128 bits) at a time. The keys used to decipher
the text can be 128-, 192-, or 256-bit long. The 256-bit key encrypts the data in
14 rounds, the 192-bit key in 12 rounds, and the 128-bit key in 10 rounds. Each
round consists of several steps of substitution, transposition, mixing of plaintext,
and more. AES is the most commonly used encryption method today, both for data at rest and data in transit (a minimal usage sketch appears after this list).
2. Rivest-Shamir-Adleman (RSA)
Rivest-Shamir-Adleman is an asymmetric encryption algorithm that is based on
the factorization of the product of two large prime numbers. Only someone with
the knowledge of these numbers will be able to decode the message
successfully. RSA is often used in digital signatures but works slower when
large volumes of data need to be encrypted.
3. Triple Data Encryption Standard (TripleDES)
Triple Data Encryption Standard is a symmetric encryption and an advanced
form of the DES method that encrypts blocks of data using a 56-bit key.
TripleDES applies the DES cipher algorithm three times to each data block.
TripleDES is commonly used to encrypt ATM PINs and UNIX passwords.
4. Twofish
Twofish is a license-free encryption method that ciphers data blocks of 128 bits.
It’s considered the successor to the Blowfish encryption method that ciphered
message blocks of 64 bits. Twofish always encrypts data in 16 rounds
regardless of the key size. Though it works slower than AES, the Twofish
encryption method continues to be used by many file and folder encryption
software solutions.
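As a usage sketch for the AES item above, the snippet below encrypts and decrypts a short message with a 256-bit key using the AESGCM construction from the third-party cryptography package; the choice of library and of GCM mode is an assumption, since the notes do not name either.

```python
# A minimal sketch (assumes the third-party 'cryptography' package is installed):
# authenticated AES encryption of a short message with a 256-bit key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit symmetric key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # GCM nonce must be unique per message

plaintext = b"transfer 500 to account 42"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # None = no associated data
recovered = aesgcm.decrypt(nonce, ciphertext, None)

assert recovered == plaintext
print(len(ciphertext))   # ciphertext length plus a 16-byte authentication tag
```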
