
SIEMONSTER VERSION 4 HIGH LEVEL DESIGN

Document Version 1.0 Public

Lead Designer Chris Rock / James Bycroft

Authors Chris Rock / James Bycroft

Last Change Date Sunday, 16 June 2019


Contact information
For more information on this document please contact:

Name Chris Rock

Position CEO

E-mail info@siemonster.com

The following people can also be contacted in relation to this document:

Name Position Email


Chris Rock Solution Lead chris@siemonster.com
James Bycroft Lead Architect james@siemonster.com

Glossary
The following terms and acronyms are used in this document:

Term Definition

AD Active Directory

ASA Cisco ASA IPS / Firewall

AWS Amazon Web Services

CoreOS Open source, container-focused Linux operating system

EFS Elastic File System, scalable file storage for use with Amazon EC2 instances

ELB Elastic Load Balancer

ELK Elasticsearch, Logstash and Kibana open source data analytics stack

ES ElasticSearch

Glacier Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving
and long-term backup

Grafana Grafana is an open source, feature rich metrics dashboard and graph editor for Graphite,
Elasticsearch, OpenTSDB, Prometheus and InfluxDB.

IAM Identity and Access Management; the security of information and resources, controlling access to
information and resources, and managing levels of access for different users in a system

IIS Internet Information Services

Cerebro/ES-Maintenance Cerebro is a simple web administration tool for Elasticsearch written in JavaScript, AngularJS,
jQuery and Twitter Bootstrap. It offers an easy way of performing common tasks on an Elasticsearch
cluster. Not every single API is covered by this plugin, but it does offer a REST client which allows you
to explore the full potential of the Elasticsearch API.

Kubernetes Open source container orchestration platform and its associated toolsets

NFS Network File System, a client/server system that allows users to access files across a network and
treat them as if they resided in a local file directory

OS Operating System

Prometheus Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring
system. It collects metrics from configured targets at given intervals, evaluates rule expressions,
displays the results, and can trigger alerts if some condition is observed to be true.

RDS Relational Database Service

S3 Amazon S3 provides storage through web services interfaces

SIEM Security Information and Event Management

SOC Security Operations Centre

Suricata Suricata is a free and open source, mature, fast and robust network threat detection engine. The
Suricata engine is capable of real-time intrusion detection (IDS), inline intrusion prevention (IPS)
and network security monitoring (NSM)

Terraform Infrastructure-as-code tool used for rolling out AWS infrastructure

VM Virtual Machine

VPC Virtual Private Cloud

VPN Virtual Private Network

Table of contents
1 Authors' Preface
2 Introduction
2.1 Scope
2.2 Audience
2.3 SIEMonster Starter Edition
2.4 SIEMonster Enterprise Amazon AWS Build Overview
3 Infrastructure
3.1 Operating System
3.2 Policies
3.3 Networking
3.4 Security Groups
3.5 Amazon AWS Instances
3.6 Kubernetes Rollout
3.7 Overview & Components
Kubernetes version
Container Network Interfaces
Certificates
Load Balancer
DNS
3.8 Implementation
3.9 Networking
3.10 Application Installation
4 Backup and Recovery
4.1 Overview
1 AUTHORS' PREFACE

In 2015, one of our corporate clients told us of their frustrations with the exorbitant licensing costs
of commercial Security Information and Event Management (SIEM) products. The customer light-
heartedly asked whether we could build them an open source SIEM to get rid of these annual
license fees. We thought that was a great idea and set out to develop a SIEM product for
Managed Security Service Providers (MSSPs) and Security Professionals. This product is called
SIEMonster.
SIEMonster Version 1 was released in late April 2016, followed by a commercial release in November
2016. The release has been an astounding success, with over 100,000 downloads of the
product. We have assisted individuals and companies in integrating SIEMonster into small, medium
and extra-large organisations all around the world. With the help of the community, the SIEMonster
team of developers has been working hard since the Version 1 release, incorporating what the
community wanted to see in a SIEM as well as things we wanted to see in the next release.
Along the way we have signed up MSSPs from around the world who have contributed to the
rollout of SIEMonster, and in return they have assisted us with rollout scripts, ideas and things we
hadn't even considered.
We are now proud to release the latest Version 4.0 of SIEMonster. SIEMonster comes in three
editions. These new editions now run Open Elastic on a CoreOS or Ubuntu base.

Starter Edition: A single server that runs locally or in the Cloud and is ideal for 1-100 endpoints.
SIEMonster Starter Edition is available as a 30-day trial and can be converted into an annual
subscription. This is perfect for smaller organisations.

Enterprise: A multi-server Cloud or local deployment that scales from 1 to 50,000 endpoints and can
handle from 1 to 75,000 events per second. This edition runs in AWS, Azure, ESX or on bare metal.

MSSP: A multi-tenancy edition of SIEMonster, available in AWS or as a local install for select customers
and Managed Security Service Providers. This currently runs on AWS.

2 INTRODUCTION

SIEMonster Version 4 is built on the best supportable components, together with custom development
driven by a wish list from the SIEMonster community. This document covers the architecture and the
features that make up SIEMonster, so that security professionals can run a SIEM in their organisation
with no budget.

SIEMonster is built on CoreOS and Ubuntu 18.04 LTS with Kubernetes orchestration. The product
comes in VirtualBox, VMware, bare-metal or Cloud form. SIEMonster can scale horizontally and vertically
to support any enterprise client.
Some of these features include:

• Open Source Threat Intelligence, both free and commercial feeds
• Open Elastic
• Wazuh HIDS system with Kibana plugin and OpenSCAP options & simplified agent
registration process
• Automated installation process orchestration & SIEMonster web application to give more
visibility over the install process
• All new dashboard with options for 2FA, site administration with user role-based access
and faster load times
• Built-in parsers for most proprietary devices
• Preloaded Minemeld threat intel feeds integrated with log ingest out of the box
• Open Source Incident Response. Alerts may be escalated as tickets to other operators
or to a whiteboard to show night shift analysts current issues.
• Elastalert, alerting on anomalies, spikes, or other patterns within Elasticsearch.
• Prometheus metric exporters with Prometheus AlertManager for system monitoring.
• Data Correlation UI, community rulesets and dashboards, and free community and open
source plugins that make up the SIEM.
• Incorporate your existing Vulnerability Scans into the Dashboard, (OpenVAS, Nexpose,
Metasploit, Burp, Nessus etc.)

We welcome you to try out our fully functional SIEM solution, and if you wish to purchase the
product please contact sales at https://go.siemonster.com/ContactUs

2.1 SCOPE

This document covers all the software and hardware infrastructure components for the
Security Operations Centre SIEMonster product. Build guides, standard operating procedures,
troubleshooting and maintenance are covered in other documents included in the document suite.
Training videos and how-to guides are on the SIEMonster website: https://www.siemonster.com

2.2 AUDIENCE

This document is intended for technical representatives of companies and SOC owners, as well
as security analysts and professionals. The audience of this document is expected to have
a thorough level of knowledge of security, software and server architecture.
The relevant parts are included here for convenience and may of course be subject to change.
They will be updated when notification is received from the relevant owners.

2.3 SIEMONSTER STARTER EDITION

The Starter Edition is a single OVA or AMI, available as a free 30-day trial. It allows companies
to quickly validate the SIEMonster build as a suitable proof of concept for their environment. It
is ideal for 1-200 nodes of endpoint monitoring and runs as a single Ubuntu server.
The minimum requirements for this build are:
• 8 Cores
• 48 GB RAM
• 80 GB HDD Space
• VMware, ESXi or Amazon AWS environment.

SIEMonster Enterprise is typically an 8-server rollout, so reducing this footprint to a single
instance was a huge architectural effort. Unless your laptop has 64 GB of RAM you won't be
able to run this from a laptop, but a suitably spec'd workstation, server or Amazon instance
is ideal for this POC.

2.4 SIEMONSTER ENTERPRISE AMAZON AWS BUILD OVERVIEW

Below is a high-level diagram of the AWS Infrastructure components in an Enterprise Environment.

3 INFRASTRUCTURE
This chapter covers the networking, policies and infrastructure of the Enterprise Edition in
AWS.

3.1 OPERATING SYSTEM

The latest stable CoreOS AMI is used for all VMs.

3.2 POLICIES

IAM roles and policies are created for the master and worker instances (the standard policies
required by the AWS cloud provider in Kubernetes).

3.3 NETWORKING

● VPC 10.0.0.0/16, DNS support and DNS hostnames enabled
● Public Subnet 10.0.128.0/17, for bastion, NAT gateway, ELBs
● Private Subnet 10.0.0.0/17, for masters and workers
● NAT gateway with Elastic IP for outbound traffic in Private Subnet, attached via Route Table
● Internet gateway for Public Subnet, attached via Route Table

3.4 SECURITY GROUPS

● Bastion Security Group:
○ TCP 22 inbound from anywhere (for SSH, secured by private key)
○ TCP 22 outbound to Kubernetes Security Group
○ UDP 53 outbound for DNS resolution

● Kubernetes Security Group:
○ TCP 22 inbound from Bastion Security Group
○ TCP 6443 inbound from Load Balancer Security Group
○ All outbound allowed

● Load Balancer Security Group:
○ TCP 443 inbound from anywhere
○ TCP 6443 outbound to Kubernetes Security Group

● EFS (shared file system) Security Group:
○ All inbound from Kubernetes Security Group
○ All outbound allowed
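For illustration only, the sketch below shows how two representative rules from the list above could be expressed with boto3. The region, VPC ID and group IDs are placeholders; in a real rollout the groups are created by the provisioning tooling rather than by hand.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Create the bastion group in the VPC (placeholder VPC ID).
bastion_sg = ec2.create_security_group(
    GroupName="siemonster-bastion",
    Description="SIEMonster bastion host",
    VpcId="vpc-0123456789abcdef0",
)["GroupId"]

# Bastion: TCP 22 inbound from anywhere (SSH, secured by private key).
ec2.authorize_security_group_ingress(
    GroupId=bastion_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Kubernetes: TCP 6443 inbound, allowed only from the Load Balancer group.
ec2.authorize_security_group_ingress(
    GroupId="sg-kubernetes-placeholder",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 6443, "ToPort": 6443,
        "UserIdGroupPairs": [{"GroupId": "sg-loadbalancer-placeholder"}],
    }],
)
```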

3.5 AMAZON AWS INSTANCES

There are a total of 10 AWS hosts in a default install:

• Bastion Host - 1
• Master Nodes - 3
• Worker Nodes - 6

Bastion Host - 1

Build: Cloud AWS t2.small
vCPU: 1
Memory: 2 GB RAM
Root Volume: 100 GB gp2 (default)

Master Nodes - 3

Build: Cloud AWS m3.medium
vCPU: 1
Memory: 3.75 GiB RAM
Root Volume: 100 GB gp2 (default)

Worker Nodes - 6

Build: Cloud AWS m3.large
vCPU: 2
Memory: 7.5 GiB RAM
Root Volume: 100 GB gp2 (default)

● Elastic Load Balancer attached to masters for K8S API
○ Secured with SSL and RBAC
● EFS (shared file system) mount target in Private Subnet for K8S nodes

The diagram below shows the full AWS stack including server numbers, transport, security groups and
access. When using SIEMonster Enterprise we will scale out these server numbers as required to meet
your EPS goals.

3.6 KUBERNETES ROLLOUT

This section covers both the Kubernetes rollout and the SIEMonster application installed
on top of it.

3.7 OVERVIEW & COMPONENTS

Kubernetes is deployed using Kubespray. Kubespray is an open-source project that uses Ansible
and Kubeadm under the hood to deploy a Kubernetes cluster. Kubespray is supported on multiple
clouds (AWS, Azure, Digital Ocean, etc).

Here is the GitHub link: https://github.com/kubernetes-sigs/kubespray
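As a rough sketch of what a Kubespray run looks like when driven from Python, assuming the repository has been cloned locally and an inventory for the SIEMonster cluster has already been generated (the inventory path below is a placeholder):

```python
import subprocess

# Run Kubespray's main cluster playbook against a prepared inventory.
# Assumes Ansible and the Kubespray requirements are installed.
subprocess.run(
    [
        "ansible-playbook",
        "-i", "inventory/siemonster/hosts.yaml",  # placeholder, generated from inventory/sample
        "--become",                               # escalate to root on the target nodes
        "cluster.yml",                            # Kubespray's main cluster playbook
    ],
    cwd="kubespray",  # path to the cloned kubernetes-sigs/kubespray checkout
    check=True,
)
```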

Kubernetes version

The current Kubernetes version used is: v1.13.3

Container Network Interfaces

● Container Network Interface: Calico
○ Configures IP range for K8S Services: 10.233.0.0/18
○ Configures IP range for K8S Pods: 10.233.64.0/18
○ Provides Network Policies to secure Namespace networks

Certificates

● Let's Encrypt for SSL certificates
○ Free SSL certs generated in-cluster (verified through Route53)
○ Allows each tenant to have a separate DNS zone and certificate (AWS Certificate Manager
has yearly certificate request limits, whereas Let's Encrypt has weekly limits many multiples
of what AWS allows)

Load Balancer

● SIEMonster application Load Balancer is provisioned by Kubernetes
○ Listens on 443, routes back to Kubernetes worker nodes
○ Worker nodes route traffic through Kube-proxy to Nginx Ingress Controller
○ Ingress Controller routes to specific SIEMonster services by subdomain
(e.g. kibana.tenant.siemonster.io)

DNS

● External-DNS provides DNS records for services
○ Runs in-cluster, triggered by annotations on Ingresses

3.8 IMPLEMENTATION

As described in the previous section, Kubernetes is deployed using Kubespray, which uses Ansible
and Kubeadm under the hood and is supported on multiple clouds (AWS, Azure, Digital Ocean, etc).

Once Kubernetes is deployed, the SIEMonster stack is deployed using Helm. Here is a screenshot
of the SIEMonster stack running in the SIEMonster namespace:
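As a minimal sketch, the equivalent check can be made with the Kubernetes Python client, assuming a local kubeconfig and that the stack runs in a namespace called "siemonster":

```python
from kubernetes import client, config

# List the pods of the SIEMonster stack and their current status.
config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("siemonster").items:
    print(f"{pod.metadata.name:60s} {pod.status.phase}")
```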

Additionally, ingress into the cluster is possible through the ELB via host-based routing, as shown
below (e.g. 411.new-cluster.sandbox.siemonster.io would route to siemonster-411).

External-DNS enables us to automatically define DNS records in Route53 for routing to Kubernetes
services:
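As a generic illustration of the External-DNS mechanism (not necessarily how the SIEMonster charts wire it up), the sketch below exposes a hypothetical service with a hostname annotation that External-DNS turns into a Route53 record:

```python
from kubernetes import client, config

config.load_kube_config()

# Hypothetical Kibana service; External-DNS reads the hostname annotation
# and creates the matching Route53 record for the provisioned load balancer.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="kibana",
        annotations={
            "external-dns.alpha.kubernetes.io/hostname": "kibana.tenant.siemonster.io",
        },
    ),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "kibana"},  # assumed pod label
        ports=[client.V1ServicePort(port=443, target_port=5601)],
    ),
)
client.CoreV1Api().create_namespaced_service("siemonster", svc)
```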

3.9 NETWORKING

10.0.0.0/16 was chosen for the VPC CIDR, since this is a standard space used in most K8S
deployments. That was subdivided into 4 /18 blocks. Both a private and a public subnet are
needed, which respectively use 10.0.0.0/18 and 10.0.64.0/18. This leaves 10.0.128.0/17
available if any additional subnets are needed in the future.
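The same arithmetic can be reproduced with the Python standard library:

```python
import ipaddress

# Subdivide the VPC CIDR into four /18 blocks.
vpc = ipaddress.ip_network("10.0.0.0/16")
blocks = list(vpc.subnets(new_prefix=18))
# [10.0.0.0/18, 10.0.64.0/18, 10.0.128.0/18, 10.0.192.0/18]

private, public = blocks[0], blocks[1]       # subnets actually used
spare = blocks[2].supernet(new_prefix=17)    # 10.0.128.0/17 kept free for future use
print(private, public, spare)
```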

The diagram below illustrates the technical components residing in the AWS VPC, as well as
the CIDR block ranges mapped to these components:

10.233.0.0/18 and 10.233.64.0/18 are used for K8S Services and Pods, respectively. These
ranges do not conflict with our infrastructure subnets, and a CNI must be used to make them
routable. We have chosen to use Calico for the CNI, which also provides a Kubernetes Network
Policy implementation that allows us to restrict network traffic between namespaces.
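As an illustration of the kind of policy this enables (not the exact ruleset shipped with SIEMonster), the sketch below creates a policy that only permits ingress from pods in the same namespace:

```python
from kubernetes import client, config

config.load_kube_config()

# Only pods in the same namespace may reach pods in "siemonster".
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="same-namespace-only", namespace="siemonster"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = every pod in the namespace
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                # An empty pod selector with no namespace selector restricts
                # allowed sources to this namespace only.
                _from=[client.V1NetworkPolicyPeer(pod_selector=client.V1LabelSelector())]
            )
        ],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy("siemonster", policy)
```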

3.10 APPLICATION INSTALLATION

Once Kubernetes is deployed, the SIEMonster single-tenant application stack is deployed with
Helm charts.
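A minimal sketch of such a Helm deployment, assuming Helm 3 is installed and using placeholder chart, release and values names (the real chart is distributed with the SIEMonster installer):

```python
import subprocess

# Install or upgrade the application stack into the siemonster namespace.
subprocess.run(
    [
        "helm", "upgrade", "--install",
        "siemonster",               # release name (assumed)
        "./charts/siemonster",      # placeholder chart path
        "--namespace", "siemonster",
        "-f", "values-tenant.yaml", # placeholder values file
    ],
    check=True,
)
```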

Kubernetes itself will create additional infrastructure components as requested by the application.
This includes Load Balancers, Security Groups, and Volumes. Any Service exposed publicly has
a DNS record created by External-DNS, pointing to a K8S-created Load Balancer which routes to
the Nginx Ingress Controller on the cluster, which then finally routes to the appropriate backend
service. Data-intensive components will have EBS volumes provisioned and mounted, and others will
use EFS.

4 BACKUP AND RECOVERY

4.1 OVERVIEW

Backup and disaster recovery is handled by installing Velero into the main Kubernetes cluster. Velero
gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes.
Velero is installed into the main SIEMonster cluster via Helm and, when fully installed, consists of the
following pieces:

● A Velero service, which runs as a set of Velero pods.
● A set of restic pods that handle converting file data to binary backups.
● A command line utility that runs locally.

Via Velero you can back up or restore all objects in your cluster, or you can target specific namespaces,
apps or deployments. Velero can handle both disaster recovery and regular snapshotting of your
application state, for example prior to performing system operations on the cluster.

Velero backups are enabled out of the box when deploying a Kubernetes cluster. To back up a cluster,
or restore it from a backup, use the Velero binary locally just like kubectl.

The Velero binary interacts with the remote Kubernetes cluster to issue backups, restores and manage
backup lifecycle operations.
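For example, the day-to-day operations look roughly like this; the backup name and namespace are illustrative, and the same commands can of course be run directly in a shell:

```python
import subprocess

def velero(*args: str) -> None:
    # Thin wrapper around the locally installed Velero binary.
    subprocess.run(["velero", *args], check=True)

# Snapshot the application prior to a system operation on the cluster.
velero("backup", "create", "pre-upgrade", "--include-namespaces", "siemonster")

# Inspect existing backups.
velero("backup", "get")

# Restore the namespace from that backup if the operation goes wrong.
velero("restore", "create", "--from-backup", "pre-upgrade")
```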

As of this writing, the stable version of Velero is v0.11.0, with v1.0.0 still in alpha/beta. The installed
version is currently v0.11.0.

When deploying a new cluster or synchronizing a running cluster with a new SIEMonster version, there
are two settings available that control the location where the cluster backups are stored.

The following additional settings are available:

● backup_region
● backup_bucket

backup_region will be defined to be the main AWS region, and backup_bucket will be defined as
velero-${CLUSTER_NAME}-backups.

At this time, backups are only stored within an S3 bucket in the AWS account. The two new parameters
dictate the name and region of the AWS S3 bucket that will be used for cluster backups.

The diagram below illustrates how Velero is installed into the main Kubernetes cluster. All Velero-
related deployments, pods, services and credentials are installed into the backup namespace,
which is "Velero".

In order to operate both within Kubernetes and AWS, the Velero service and binary need to be
delegated access rights. This is done by creating a new AWS user with an associated user policy and
access keys. The access keys that Velero uses are then stored as a Kubernetes credential.

The diagram below is a more detailed view of the Velero deployments and backup location. The
Kubernetes cluster block represents the Velero and Velero Namespace block in the previous illustration.
Depending on cluster size, there could be any number of Velero and restic pods running, but the
default is to have one Velero pod in the cluster and one restic pod per Kubernetes node, i.e. there would
normally be more restic pods than Velero pods.

When creating a new backup, Velero creates snapshots of volumes and cluster metadata in the
following places within AWS:

● All cluster related metadata, like namespaces, deployments etc., are stored within the backup
S3 bucket.
● All EBS backed Persistent Volumes are stored as individual EBS Volume snapshots within
the region. All EBS Volume snapshots are properly tagged with the right Persistent Volume
name and Claim name, as well as backup name and namespace.
● All EFS backed Persistent Volumes that are marked for backup (more on that later) will be
backed up as restic archives within the backup S3 bucket.

This is the layout of the S3 backup bucket.

● /backups - Each backup is listed as a folder within here, and each backup folder contains the
backup data.
● /restores - Each backup restore is listed as a folder within here, and each restore folder
contains the data and details for that restore.
● /restic - For each backup that includes an EFS volume to back up, the backed-up EFS volume
data will be in here.
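A short boto3 sketch that lists these top-level folders, using a placeholder cluster name in the velero-${CLUSTER_NAME}-backups naming convention:

```python
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(
    Bucket="velero-mycluster-backups",  # placeholder cluster name
    Delimiter="/",
)
for prefix in resp.get("CommonPrefixes", []):
    print(prefix["Prefix"])  # expected: backups/, restores/, restic/
```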

EBS volume snapshots will be available within the same AWS region as the main cluster. Each
snapshot is tagged with the backup name, namespace and PV/PVC names as part of this process. To
identify which snapshot is associated with which volume, or to see what snapshots are available for a
particular volume, look for the following tags on each snapshot:
● kubernetes.io/created-for/pv/name - This is the name of the persistent volume that the
snapshot is associated with.
● kubernetes.io/created-for/pvc/name - This is the name of the persistent volume claim that
the snapshot is associated with.
● kubernetes.io/created-for/pvc/namespace - This is the name of the Kubernetes namespace
that the snapshot is associated with.
● velero.io/backup - This is the name of the backup that the snapshot is part of.
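The tags make it straightforward to find the snapshots belonging to a particular backup with boto3, as sketched below; the backup name and region are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed main cluster region
resp = ec2.describe_snapshots(
    OwnerIds=["self"],
    Filters=[{"Name": "tag:velero.io/backup", "Values": ["pre-upgrade"]}],  # placeholder backup name
)
for snap in resp["Snapshots"]:
    tags = {t["Key"]: t["Value"] for t in snap.get("Tags", [])}
    print(
        snap["SnapshotId"],
        tags.get("kubernetes.io/created-for/pvc/namespace"),
        tags.get("kubernetes.io/created-for/pvc/name"),
        tags.get("kubernetes.io/created-for/pv/name"),
    )
```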

Any EFS backed volume that is annotated for backup will be backed up via restic and stored within
the main /restic folder as stated above. Each volume will get a folder in here, and each backup
will result in a new hash folder within the volume folder.
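The annotation itself is applied to the pods that mount the volume. A minimal sketch, assuming a deployment named "example-app" in the "siemonster" namespace with a volume called "efs-data" (all three names are illustrative):

```python
from kubernetes import client, config

config.load_kube_config()

# Velero's restic integration only backs up volumes listed in this annotation.
patch = {
    "spec": {
        "template": {
            "metadata": {
                "annotations": {"backup.velero.io/backup-volumes": "efs-data"}
            }
        }
    }
}
client.AppsV1Api().patch_namespaced_deployment(
    name="example-app", namespace="siemonster", body=patch
)
```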
