
IPASJ International Journal of Information Technology (IIJIT)

Web Site: http://www.ipasj.org/IIJIT/IIJIT.htm


A Publisher for Research Motivation ........ Email: editoriijit@ipasj.org
Volume 6, Issue 11, November 2018 ISSN 2321-5976

THREAT PREVENTION IN SDLC USING ARTIFICIAL INTELLIGENCE
Nandita Tiwari 1, Dr. Varsha Namdeo 2
1 Computer Science and Engineering Department, SRK University Bhopal, India
2 Computer Science and Engineering Department, SRK University Bhopal, India

Abstract
The Software Development Life Cycle (SDLC) is essentially a series of steps or phases that provide a framework for developing software and managing it throughout its lifetime. A robust SDLC strategy delivers higher-quality software with fewer vulnerabilities, in less time and with fewer resources. It not only helps to develop and maintain software, but also provides an advantage when dealing with obsolete code. The SDLC gives all stakeholders a common understanding of the software build process: it includes every phase needed to ensure the development of useful and powerful software products through cost-effective and traceable processes, and it carries much of the responsibility for the success or failure of real projects. If planned well, even the preliminary phase can save a great deal of effort and money. As the field has matured, the reuse of products has become common in the research and development industry; however, reusing components of code or data does not by itself ensure the security of a project. Threat modeling and risk assessment address different principles, and the amalgamation of risk assessment with the threat modeling process lessens the risk for software-based systems. Incorporating security into each SDLC phase, however, is a tedious job. This paper focuses on the development of a framework that can prevent threats by using the SDLC process and its components. It applies the concept of Artificial Intelligence (AI) in combination with OOPS (object-oriented programming) metrics to identify whether the components of a project are safe from threat or not. The judgment parameters are precision, recall and prediction accuracy.
KEYWORDS: SDLC, Threat, Artificial Intelligence, Accuracy

I. Introduction

Security risk has a direct effect on software system development: it is not feasible to build an application without understanding the potential threats against its targets [1]. These days, as novel technologies bring more security mechanisms, new attacks also come into existence, increasing the risk for organizations. Understanding a risk is the prerequisite step for threat analysis. From both defensive and attacker perspectives, risk is examined in each SDLC phase: training, requirements, design, implementation, verification, release and response. Vulnerabilities and threats are the components of risk, and risk is the guiding factor supporting decisions in each phase of the test process [2]. This work deals with enhancing the possibilities of applying the SDLC process, from the company architecture down to the security mechanism. The platform in use is a significant barrier to threat prevention. A threat may be an unwanted activity in a code frame or a package. Regardless of the type of platform being used, the problem addressed by this research is to classify whether a presented module is secure or not [3]. Many previous works have adopted different types of prevention mechanism, but since the modern world utilizes OOPS architecture, the prevention should be based on OOPS itself. A Feed-Forward Back-Propagation method is used to train on the OOPS metrics [4].

II. Secure Software Design Phase


Currently, the main aim of organizations is not just to develop an application and sell it to the customer, but also to focus on safety issues. A number of complexities may occur during the process, as shown in the figure below [5]. The designed system should handle these issues efficiently and effectively [6].




Figure 1: Secure software architecture


It is known that many security threats arise when data is sent to or received from the system and from external systems. These security vulnerabilities allow malicious users to connect to the system, call system methods and extract data items, such as input strings, from the system [7]. Malicious users can also obtain data indirectly from the system by exploiting constant data items, or send data into the system by writing it to a file that the system later reads. In these ways, malicious users exploit system methods and data items to harm the system [8].
A number of security patterns exist that are well-accepted solutions to recurring security problems and help to build efficient software. Security patterns are categorized as defined below [9]:

Figure 2: Sending and Receiving data

i. Structural, creational and behavioural security patterns [diagrams of the relationships among entities, covering object creation and interaction].
ii. Accessible system patterns, a subtype of structural patterns [predictable, uninterrupted access to the services and resources provided to users].
iii. Protected system patterns, another structural subtype [system construction that protects valuable resources from unauthorized use, modification or disclosure].
iv. Anti-patterns [reference examples of patterns that do not work well in practice].
v. Mini-patterns, shorter and less formal distillations of security expertise [often specific to a programming language].
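As an illustrative sketch (not taken from the paper), the protected system pattern in (iii) can be expressed as a guard object that checks authorization before every access to a valuable resource. The class and names below are hypothetical:

```python
class UnauthorizedError(Exception):
    """Raised when a subject lacks the right to access a resource."""

class ProtectedResource:
    """Protected system pattern: a guard mediates every access to the
    valuable resource instead of exposing it directly."""

    def __init__(self, secret, allowed_users):
        self._secret = secret                 # the valuable resource
        self._allowed = set(allowed_users)    # access-control list

    def read(self, user):
        # The guard: every read is checked against the ACL.
        if user not in self._allowed:
            raise UnauthorizedError(f"{user} may not read this resource")
        return self._secret

vault = ProtectedResource("api-key-123", allowed_users=["alice"])
print(vault.read("alice"))  # authorized access succeeds
```

The point of the pattern is that no code path reaches the resource without passing through the guard.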

III. Related work


Joanna F. DeFranco and Philip A. Laplante investigated the type and quality of research performed on software development team communication. They analyzed 184 research papers and created a classification of software engineering team research; this taxonomy was then used to classify the background of the papers. The results show that the most active software development team communication research areas are global software development, effective teamwork and project effectiveness. Mojtaba Shahin et al. reviewed various methods of related software development, covering different tools for developing software and the challenges faced by users and programmers. Systematic literature review techniques were applied to peer-reviewed papers, and data analysis techniques were used to analyze 69 papers. Helen Sharp et al. explained how ethnography has benefitted empirical software engineering researchers. This was achieved by elucidating four roles that ethnography can perform




in advancing the aims of empirical software engineering: to strengthen investigations into the social and human aspects of software engineering; to inform the design of software engineering tools; to advance method development; and to inform research programmes. The usefulness of ethnography to software engineering was also discussed. José Luis Fernández-Alemán et al. presented the outcomes of two educational experiments carried out to determine whether the process of specifying requirements has an influence on effectiveness and productivity in distributed and co-located software development environments. The main aspect of the experiment was the comparison of a catalogue-based RE learning method with an RE learning method based on specific requirements. The overall results showed that the specification process can be improved by using learning based on Reusable Requirements Catalogues (RRC) such as I-CAT. Magne Jørgensen and Martin Shepperd reviewed the software development cost estimation research of different authors in order to improve the software estimation process. The journals for the study were carefully selected to ease the identification of estimation research results. Chris Sauer et al. applied behavioral theory to software engineers' group reviews to explain the results of software development technical reviews. An empirical research program was developed to elucidate review performance and to find ways of enhancing it based on the particular strengths of individuals and groups, recognizing individuals' task expertise as the main driver of review performance. Irena Loutchkina et al. presented a system integration technical risk assessment model based on Bayesian belief networks joined with parametric models. The proposed model provides information for decision-makers and improves the risk management of complex projects. A conceptual modeling framework was proposed to address the problem of system integration technical risks, together with its rationale and modeling objectives. Paolo Bresciani et al. introduced a methodology known as Tropos for creating agent-based software systems. It covers the very early stages of requirements analysis and thus gives a full understanding of the environment in which the system will operate and of the communication between the agents and the software. One main long-term aim was to provide detailed information about the Tropos methodology.

IV. Threat Modelling

Threat modelling is a method for identifying, explaining and recording every threat from a probable source that may attack the system [10]. It focuses on the serious threats that typically endanger the system rather than on finding every obvious weakness. After the threat model is designed, the mitigations are executed. Various threat models have been studied and a number of approaches have been presented previously. Microsoft provides a threat modelling tool for documenting an application's threat model, with assets, data entry points, threats and data flow diagrams, following the vulnerabilities in a tree view. To create a threat model for an application, its intended functions within the framework should be known; these can be obtained from the design documents below [11]:
i. Use cases and usage scenarios
ii. Data schemes
iii. Data flows
iv. Deployment diagrams

The steps to be considered when defining a threat model for an application system are [12]:
a) Identification of security objectives
b) Creation of an application overview
c) Decomposition of the application
d) Identification of threats
e) Identification of vulnerabilities

Initially, a threat model is built by describing the attributes, requirements and data dependencies together with the trust boundaries of the application. Threats are identified both from the attacker's and from the defensive perspective, covering data flow, processes, user interaction and data storage [14]. The attack measures are connected with the application, enabling the programmer to lessen the chances of an attack. The threat type for every component of the application is then computed by means of a threat tree, countermeasures and the vulnerabilities. This is an iterative procedure for evaluating threats, and the model is used to find and avoid risk before coding begins [15].
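The identification steps above can be sketched as a small pipeline. The component kinds, rules and mapping below are purely illustrative; only the STRIDE category names are standard:

```python
# Hypothetical sketch of threat-model steps: decompose, then identify threats.
STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege"]

def decompose(application):
    # Step c: break the application into its components.
    return application["components"]

def identify_threats(components, rules):
    # Steps d/e: map each component to the threat categories that apply to it.
    threats = {}
    for comp in components:
        threats[comp["name"]] = [c for c in STRIDE
                                 if c in rules.get(comp["kind"], [])]
    return threats

# Illustrative rules: which STRIDE categories a component kind attracts.
rules = {"web_form": ["Spoofing", "Tampering"],
         "database": ["Information disclosure", "Tampering"]}
app = {"components": [{"name": "login", "kind": "web_form"},
                      {"name": "users_db", "kind": "database"}]}

print(identify_threats(decompose(app), rules))
```

Each iteration of the real process would refine the rules and re-run the mapping until the countermeasures cover the threat list.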




[Figure content: threat identification; decomposition of the application (understanding the application, interaction with external entities, data flow diagram (DFD) for the application); a threat categorization methodology (STRIDE, ASF) viewed from attacker and defensive perspectives; a threat list and threat tree with potential threat targets; determination of countermeasures and mitigation, including mapping threats to countermeasures, security controls, and sorting threats by mitigation priority.]

Figure 3: Threat model to identify threats in SDLC

[Figure content: an iterative loop in which threat identification draws possible threats from the threat list, applies them to each component, evaluates them, and derives attack paths and mitigation controls before repeating.]

Figure 4: Iterative threat evaluation process

V. Proposed Architecture

The architecture of the proposed work is divided into two sections:

a) Training of the OOPS metrics using the Feed-Forward Back-Propagation method
b) Classification of new data based on the trained structure
To attain both objectives, a graphical interface unit was designed in MATLAB 2016b.
The following flow diagrams represent the work architecture: architecture 1 is for training and architecture 2 is for classification.




Structure 1 represents the training mechanism of the threat classification algorithm. There are three types of learning methods in Artificial Intelligence (AI): supervised learning, semi-supervised learning, and unsupervised learning.

Figure 5: Stage 1 of the proposed methodology


Unsupervised learning in the practical world through a machine is still almost impossible; even if truly unsupervised, human-like learning were developed in the near future, it would itself be a threat to society and the world.
5.1 Supervised Learning

The majority of practical machine learning uses supervised learning. In supervised learning a user has input variables (x) and an output variable (Y), and an algorithm is used to learn the mapping function from the input to the output.

The goal is to approximate the mapping function so well that, given new input data (x), the model can predict the output variable (Y) for that data. It is known as supervised learning because the process of the algorithm learning from the training dataset can be thought of as a teacher supervising the learning process. Learning stops when the algorithm achieves an acceptable level of performance.
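A minimal, self-contained illustration of the supervised setting (not the paper's MATLAB implementation): a nearest-centroid classifier learns a mapping from labelled (x, Y) pairs and predicts Y for unseen x. The feature values and labels are hypothetical:

```python
def train_centroids(X, y):
    # "Learning" here is simply averaging the inputs of each class.
    centroids = {}
    for label in set(y):
        pts = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(c) / len(pts) for c in zip(*pts)]
    return centroids

def predict(centroids, x):
    # Predict the class whose centroid lies closest to x.
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], x))

# Labelled training data: feature vectors -> "safe" / "threat"
X = [[1, 1], [2, 1], [8, 9], [9, 8]]
y = ["safe", "safe", "threat", "threat"]
model = train_centroids(X, y)
print(predict(model, [1.5, 1.2]))  # lies near the "safe" cluster
```

The supervision is the label list y; without it, the centroids could not be formed per class.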

5.2 Un-Supervised Learning

Unsupervised learning is where a user has only input data (X) and no corresponding output variables. The goal of unsupervised learning is to model the underlying structure or distribution of the data in order to learn from it. Unsupervised learning problems can be further grouped into clustering and association problems.
• Clustering: a clustering problem is one where a user discovers the inherent groupings in the data, such as grouping customers by purchasing behavior.
• Association: an association rule learning problem is one where a user discovers rules that describe large portions of the data, such as users who buy X also tending to buy Y. Some popular examples of




unsupervised learning algorithms include k-means for clustering problems. The learning method requires feature values, and the proposed architecture uses the following metrics for the training of the network.
• Lines of Code: the total number of written lines needed to attain a significant program.
• Number of Functions: functions are code blocks that can be reused again and again simply by calling them from the surrounding program. They make the code more meaningful and easier to use.
• Number of Classes: a group of functions kept together forms a class. It is better to keep similar kinds of functions together in code blocks; this makes the program user-friendly, and classes written in one language can often be utilized in other languages as well.

Figure 6: Second phase of implementation


• Number of Variables: variables are the basic building blocks of any code file. A large number of variables does not by itself lead to significant results; they should follow a precise distribution.
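As an illustrative aside (the paper computes its metrics in MATLAB), the four OOPS metrics above can be extracted from Python source using the standard-library ast module:

```python
import ast

def oops_metrics(source: str) -> dict:
    """Count lines of code, functions, classes and assigned variable names."""
    tree = ast.parse(source)
    loc = len([ln for ln in source.splitlines() if ln.strip()])  # non-blank lines
    funcs = sum(isinstance(n, ast.FunctionDef) for n in ast.walk(tree))
    classes = sum(isinstance(n, ast.ClassDef) for n in ast.walk(tree))
    # Distinct names appearing as assignment targets.
    variables = {t.id for n in ast.walk(tree) if isinstance(n, ast.Assign)
                 for t in n.targets if isinstance(t, ast.Name)}
    return {"loc": loc, "functions": funcs,
            "classes": classes, "variables": len(variables)}

sample = """class Greeter:
    def greet(self, name):
        message = "hello " + name
        return message
"""
print(oops_metrics(sample))
```

A vector of such counts per file is exactly the kind of feature vector the training layer described below would consume.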

The flow diagram represents the classification architecture. It signifies that if the network is trained well, it will produce accurate classification. If the classification accuracy is not satisfactory, the training structure has not achieved the desired outcome, which leads to a poor development structure and invites threats.
Algorithm 1: ANN
1. Input: Features as training data (T), target (G) and neurons (N)
2. Output: Trained NN and the threats detected in the designed software project
3. Initialize the NN with parameters:
   a. Epochs
   b. Neurons (N = 15)
   c. Performance parameters: MSE, gradient, mutation and validation points
   d. Training algorithm: Levenberg-Marquardt (trainlm)
4. For each training sample i
5.   If sample i is labelled safe
6.     Target(1, i) = threat-free software
7.   Else
8.     Target(2, i) = threat detected
9.   End
10. End
11. Initialize the NN using the training data and the target,
12. where Training data = properties of the features extracted from OOPS
13. Net = newff( )




14. Set the training parameters as per the requirements and train the system
15. Net = train( )
16. Return the trained NN and the threats detected in the designed software project
17. End
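The training loop of Algorithm 1 can be sketched in plain Python. This is a simplified stand-in, not the paper's MATLAB/Levenberg-Marquardt implementation: a single-hidden-layer feed-forward network trained by ordinary back-propagation on illustrative "safe"/"threat" feature vectors.

```python
import math
import random

random.seed(0)  # deterministic initialization for the sketch

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    """Feed-forward net (2 inputs -> 3 hidden -> 1 output) with backprop."""
    def __init__(self, n_in=2, n_hidden=3):
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.o = sigmoid(sum(w * hi for w, hi in zip(self.w2, self.h)) + self.b2)
        return self.o

    def train(self, X, y, epochs=2000, lr=0.5):
        for _ in range(epochs):
            for x, t in zip(X, y):
                o = self.forward(x)
                d_o = (o - t) * o * (1 - o)            # output-layer delta
                for j, hj in enumerate(self.h):
                    d_h = d_o * self.w2[j] * hj * (1 - hj)  # hidden delta
                    self.w2[j] -= lr * d_o * hj
                    for i, xi in enumerate(x):
                        self.w1[j][i] -= lr * d_h * xi
                    self.b1[j] -= lr * d_h
                self.b2 -= lr * d_o

# Illustrative scaled OOPS-metric features: 1 = threat, 0 = safe.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = [0, 0, 1, 1]
net = TinyNet()
net.train(X, y)
print([round(net.forward(x)) for x in X])  # ideally recovers the labels
```

Levenberg-Marquardt, as used in the paper, replaces this plain gradient step with a damped Gauss-Newton update but leaves the forward/backward structure unchanged.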

Algorithm 2: Support Vector Machine (SVM)

1. Input: Properties of the extracted OOPS features
2. Output: Detected threats
3. Initialize the SVM training data; T is the set of all OOPS features
4. Define G as the group (labels) of the training data
5. Set RBF as the kernel function
6. For each SVM
7.   Train_data = SVMTRAIN(T, G, RBF)
8. End
9. Detected threat list = SVMCLASSIFY(Train_data, properties of the test data)
10. If a threat is detected in the software project then
11.   Remove the threat
12. Else
13.   Check it again and return
14. End
15. End function
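Again as a simplified, illustrative sketch: the algorithm above uses MATLAB's svmtrain/svmclassify with an RBF kernel, whereas here a linear SVM trained by the Pegasos sub-gradient method stands in, on hypothetical feature vectors (each augmented with a constant 1 as a bias term).

```python
import random

random.seed(1)  # deterministic shuffling for the sketch

def pegasos_train(X, y, lam=0.01, epochs=500):
    """Linear SVM via the Pegasos sub-gradient method; labels are +1/-1."""
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in random.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            # Shrink w (regularization), then push on margin violations.
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def svm_classify(w, x):
    score = sum(wj * xj for wj, xj in zip(w, x))
    return 1 if score >= 0 else -1

# Hypothetical OOPS-metric vectors (last entry is the bias feature):
# +1 = threat, -1 = safe.
X = [[1.0, 1.0, 1.0], [1.5, 0.5, 1.0], [8.0, 9.0, 1.0], [9.0, 8.0, 1.0]]
y = [-1, -1, 1, 1]
w = pegasos_train(X, y)
print([svm_classify(w, x) for x in X])
```

An RBF kernel, as in the paper, would replace the inner products with kernel evaluations to allow a non-linear decision boundary.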

VI. Simulation Results

This section presents the results obtained after the evaluation of the proposed architecture. Parameters such as precision, recall and accuracy are computed to check the effectiveness of the proposed work.
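For reference (these are the standard definitions, not stated explicitly in the paper), the three judgment parameters can be computed from confusion counts as follows:

```python
def precision(tp, fp):
    # Fraction of modules flagged as threats that really are threats.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of real threats that the classifier finds.
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    # Fraction of all predictions that are correct.
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion counts for 100 classified modules.
tp, tn, fp, fn = 40, 50, 4, 6
print(precision(tp, fp), recall(tp, fn), accuracy(tp, tn, fp, fn))
```

The confusion counts above are invented purely to exercise the formulas.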

Figure 7: Training architecture of the evaluated metrics

As explained in the methodology section, there is a training mechanism and a classification mechanism. The evaluated metrics are passed to the training layer of the neural network, which produces the result shown in the figure above [21].




Figure 8: Structure of Neural Network

The structure of the neural network is shown in the figure above. The network performs a back-propagation pass as well as a feed-forward pass [22].

Figure 9: Structure of back propagation layer

If a total of 100 iterations is supplied at the input layer, it is not necessary for the forward propagation to complete all 100 iterations [23]; it depends on how early and precisely the training of the network is satisfied. There are six constraints that can satisfy the training mechanism.




Figure 10: Constraints of Satisfaction

If any of the constraints is satisfied, the training is complete. The constraints include time, performance, gradient, mutation and validation checks [24].

Figure 11: SVM Training

The figure above represents the SVM training. As SVM is a binary classifier, it can take only two classes at once; the output for two classes then goes as input to further processing [25].




Figure 12: ROC Curve

Table 1: Prediction Accuracy

No. of Files    Prediction Accuracy AI    Prediction Accuracy SVM
10              96.23                     81.25
100             92.35                     83.21
500             93.47                     82.11
1000            92.45                     83.32

Figure 13: Accuracy evaluation

Figure 13 and Table 1 present the evaluation of accuracy for Artificial Intelligence and for SVM (Support Vector Machine). The x-axis in the figure gives the number of files and the y-axis the prediction accuracy obtained for AI and SVM. The blue bars show the prediction accuracy using AI, and the red bars the prediction accuracy using SVM. The average accuracy using AI is 93.625 and the average accuracy using SVM is 82.47. It is evident from the computation that the proposed work is more effective in terms of accuracy.
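The reported averages can be reproduced directly from the Table 1 values (a quick check, not taken from the paper):

```python
ai  = [96.23, 92.35, 93.47, 92.45]   # AI accuracy per file count
svm = [81.25, 83.21, 82.11, 83.32]   # SVM accuracy per file count

avg = lambda xs: sum(xs) / len(xs)
print(round(avg(ai), 3))   # 93.625, as reported
print(round(avg(svm), 2))  # 82.47, as reported
```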




Table 2: Precision

No. of Files    Precision AI    Precision SVM
10              0.95            0.93
100             0.94            0.89
500             0.92            0.85
1000            0.90            0.83

Figure 14: Precision evaluation

Figure 14 and Table 2 present the evaluation of precision for Artificial Intelligence and SVM. The x-axis in the figure gives the number of files and the y-axis the precision for AI and SVM. The blue bars show the precision using AI, and the red bars the precision using SVM. The average precision using AI is 0.9275 and the average precision using SVM is 0.875.

Table 3: Recall

No. of Files    Recall AI    Recall SVM
10              0.94         0.91
100             0.92         0.87
500             0.90         0.83
1000            0.88         0.81

Figure 15: Recall evaluation




Figure 15 and Table 3 present the evaluation of recall for Artificial Intelligence and SVM. The x-axis in the figure gives the number of files and the y-axis the recall for AI and SVM. The blue bars show the recall using AI, and the red bars the recall using SVM. The average recall using AI is 0.91 and the average recall using SVM is 0.85.

VII. Conclusion

The aim of this research work is the detection of threats in the SDLC by using AI, with the main objective of identifying threat-prone components and removing or replacing them. OOPS metrics such as LOC, number of functions and class count are used to model the training layer of the AI, which is implemented with training and classification layers using a feed-forward back-propagation mechanism. The proposed mechanism achieves a prediction accuracy of 93.625%. The proposed solution is also compared with another machine learning mechanism, SVM, whose prediction accuracy is evaluated as 82.47% on average. The average precision using AI is 0.9275 against 0.875 for SVM, while the average recall using AI is 0.91 against 0.85 for SVM. The proposed solution opens many future directions: more OOPS metrics can be adopted to model the AI, and prediction models such as COCOMO and COCOMO II can also be used.

References

[1] Cosentino, V., Izquierdo, J. L. C., & Cabot, J. (2017). A Systematic Mapping Study of Software Development
With GitHub. IEEE Access, 5, 7173-7192.
[2] DeFranco, J. F., & Laplante, P. A. (2017). Review and Analysis of Software Development Team Communication
Research. IEEE Transactions on Professional Communication, 60(2), 165-182.
[3] Shahin, M., Babar, M. A., & Zhu, L. (2017). Continuous Integration, Delivery and Deployment: A Systematic
Review on Approaches, Tools, Challenges and Practices. IEEE Access, 5, 3909-3943.
[4] Ciccozzi, F., Seceleanu, T., Corcoran, D., & Scholle, D. (2016). UML-Based Development of Embedded Real-
Time Software on Multi-Core in Practice: Lessons Learned and Future Perspectives. IEEE Access, 4, 6528-6540.
[5] Bate, I., Burns, A., & Davis, R. I. (2017). An enhanced bailout protocol for mixed criticality embedded
software. IEEE Transactions on Software Engineering, 43(4), 298-320.
[6] Sharp, H., Dittrich, Y., & de Souza, C. R. (2016). The role of ethnographic studies in empirical software
engineering. IEEE Transactions on Software Engineering, 42(8), 786-804.
[7] Fernández-Alemán, J. L., Carrillo-de-Gea, J. M., Meca, J. V., Ros, J. N., Toval, A., & Idri, A. (2016). Effects of
using requirements catalogs on effectiveness and productivity of requirements specification in a software project
management course. IEEE Transactions on Education, 59(2), 105-118.
[8] Guillaume-Joseph, G., & Wasek, J. S. Improving software project outcomes through predictive analytics: Part
2. IEEE Engineering Management Review, 43(3), 39-49.
[9] Zhang, J., Lu, Y., Yang, S., & Xu, C. (2016). NHPP-based software reliability model considering testing effort
and multivariate fault detection rate. Journal of Systems Engineering and Electronics, 27(1), 260-270.
[10] Dybå, T., & Dingsøyr, T. (2008). Empirical studies of agile software development: A systematic
review. Information and software technology, 50(9), 833-859.
[11] Dingsøyr, T., Nerur, S., Balijepally, V., & Moe, N. B. (2012). A decade of agile methodologies: Towards
explaining agile software development.
[12] Jorgensen, M., & Shepperd, M. (2007). A systematic review of software development cost estimation
studies. IEEE Transactions on software engineering, 33(1).
[13] Sauer, C., Jeffery, D. R., Land, L., & Yetton, P. (2000). The effectiveness of software development technical
reviews: A behaviorally motivated program of research. IEEE Transactions on Software Engineering, 26(1), 1-14.
[14] Nunamaker Jr, J. F., Chen, M., & Purdin, T. D. (1990). Systems development in information systems
research. Journal of management information systems, 7(3), 89-106.
[15] Frakes, W. B., & Kang, K. (2005). Software reuse research: Status and future. IEEE transactions on Software
Engineering, 31(7), 529-536.
[16] DeFranco, J. F., & Laplante, P. A. (2017). Review and Analysis of Software Development Team Communication
Research. IEEE Transactions on Professional Communication, 60(2), 165-182.




[17] Balsamo, S., Di Marco, A., Inverardi, P., & Simeoni, M. (2004). Model-based performance prediction in software
development: A survey. IEEE Transactions on Software Engineering, 30(5), 295-310.
[18] Loutchkina, I., Jain, L. C., Nguyen, T., & Nesterov, S. (2014). Systems' Integration Technical Risks' Assessment
Model (SITRAM). IEEE Transactions on Systems, Man, and Cybernetics: Systems, 44(3), 342-352.
[19] Bresciani, P., Perini, A., Giorgini, P., Giunchiglia, F., & Mylopoulos, J. (2004). Tropos: An agent-oriented
software development methodology. Autonomous Agents and Multi-Agent Systems, 8(3), 203-236.
[20] Wittig, G., & Finnie, G. (1997). Estimating software development effort with connectionist models. Information
and Software Technology, 39(7), 469-476.
[21] Benaroch, M., & Goldstein, J. (2009). An integrative economic optimization approach to systems development
risk management. IEEE Transactions on software engineering, 35(5), 638-653.
[22] Mohan, K., Kumar, N., & Benbunan-Fich, R. (2009). Examining communication media selection and information
processing in software development traceability: An empirical investigation. IEEE Transactions on Professional
Communication, 52(1), 17-39.
[23] Gokhale, S. S., & Trivedi, K. S. (2006). Analytical models for architecture-based software reliability prediction: A
unification framework. IEEE Transactions on reliability, 55(4), 578-590.
[24] Kendall, R. P., Votta, L. G., Post, D. E., Atwood, C. A., Hariharan, N., Morton, S. A., ... & Wilson, A. J. (2016).
Risk-Based Software Development Practices for CREATE Multiphysics HPC Software Applications. Computing
in Science & Engineering, 18(6), 35-46.
[25] Salgado, E. G., Salomon, V. A. P., Mello, C. H. P., & da Silva, C. E. S. (2014). A reference model for the new
product development in medium-sized technology-based electronics enterprises. IEEE Latin America
Transactions, 12(8), 1341-1348.
[26] Lee, D. H., In, H. P., Lee, K., Park, S., & Hinchey, M. (2013). Sustainable embedded software life-cycle
planning. IEEE Software, 30(4), 72-80.
[27] Chen, T. H., Thomas, S. W., Hemmati, H., Nagappan, M., & Hassan, A. E. (2017). An Empirical Study on the
Effect of Testing on Code Quality Using Topic Models: A Case Study on Software Development Systems. IEEE
Transactions on Reliability, 66(3), 806-824.
[28] ben Othmane, L., Angin, P., Weffers, H., & Bhargava, B. (2014). Extending the agile development process to
develop acceptably secure software. IEEE Transactions on Dependable and Secure Computing, 11(6), 497-509.
[29] Whitmore, J., Türpe, S., Triller, S., Poller, A., & Carlson, C. (2014). Threat analysis in the software development
lifecycle. IBM Journal of Research and Development, 58(1), 6-1.
[30] Solinas, M., Antonelli, L., & Fernandez, E. (2013). Software secure building aspects in Computer
Engineering. IEEE Latin America Transactions, 11(1), 353-358.
[31] Hoda, R., Noble, J., & Marshall, S. (2013). Self-organizing roles on agile software development teams. IEEE
Transactions on Software Engineering, 39(3), 422-444.

