First steps with Elastic security

Amit Cohen
10 min read · Mar 15, 2022

Following my previous post on Elastic, I found that security doesn't get the attention it deserves, so I created this post to review the security features that Elastic offers. For those who are new to Elasticsearch and the Elastic Stack in general,
I’m starting with a quick overview of the core components of the Elastic Stack. Let’s start with Beats. Beats are lightweight data shippers. These are clients that you install on end-user machines, or wherever the data you want to collect lives. There’s a Beat for just about everything. Once you’ve deployed the Beats and configured them to collect the data you want, you can send it either directly to Elasticsearch or, if you want to parse and process the data further, to Logstash. Logstash is a data processing pipeline, and a quite powerful one in its own right. Even outside the Elastic Stack there are many use cases for Logstash, because it can take inputs from anywhere and, once it’s done filtering the data, send it just about anywhere. In the Elastic Stack, you’re typically going to take inputs from Beats and output to Elasticsearch. But again, you can input from anywhere, output to anywhere, and filter and mutate your data in between as much as you like. That makes Logstash’s use case extend beyond the Elastic Stack.
Next, we have Elasticsearch. This is the search and analytics engine; it does both storage and analysis of data. A lot of other data tools do only storage or only analysis, so Elasticsearch actually does both. Lastly, we have Kibana. This is our visualization layer and also a bit of a management console. Kibana has an ever-growing number of applications, or plugins, inside it. Every time Elastic acquires a company, it incorporates that company’s functionality into Kibana, which has added all kinds of things like machine learning, APM, and more advanced visualization tools. But the core concept of Kibana is to discover, visualize, and dashboard. More recently, Kibana has also become a management console for the entire Elastic Stack: there are a lot of really good tools built into it to monitor and troubleshoot your cluster and to interact with the Elasticsearch APIs.

In this section, I am going to focus on securing the Elasticsearch cluster.

PKI

The first thing we have to do before we can start encrypting our cluster's networks, since we plan to use HTTPS, is create some certificates. We need to create our own PKI, or public key infrastructure.
Now, there are a few different ways you can do this, to varying degrees of security. There are two primary methods of verifying a certificate. There is certificate-level verification, which just checks whether the certificate is valid. And then there is full verification, which checks not only whether the certificate is valid, but also whether the node trying to use it is allowed to, which means the certificate has to be signed with identifiable information about that node, typically its IP address and DNS name. That way, any given certificate can't be used by just any node; it can only be used by the node it was signed for. Furthermore, if you have a series of certificates, one for each node, all the certificates have to trust each other: they all have to be signed by the same certificate authority. We can work from /usr/share/elasticsearch, which contains a bin folder holding Elasticsearch itself along with a whole bunch of utilities. One of these utilities, certutil, creates a certificate authority. It's recommended to give the CA a passphrase, so that it can't be used to sign other certificates unless you know the passphrase. Then we have to pass in the DNS name.
Typing hostname will print the internal DNS name of the server we're currently on. Copy that DNS record into the DNS flag of the certificate.
Now we have the DNS and the IP. This will allow us to use full verification,
which means that the certificate we generate will only work on hosts that have this DNS name and this IP address. It will not work for any other host that tries to use it. This is by far the most secure way to use PKI: not only are communications encrypted using the certificate, but the use of the certificate itself is as protected as it can be.
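The steps above can be sketched with the certutil utility. This is a sketch, not the article's exact commands: the paths, node name, DNS name, IP address, and passphrase prompts are all illustrative.

```shell
# Sketch, assuming a package-based install under /usr/share/elasticsearch;
# names, paths, and addresses are illustrative.
cd /usr/share/elasticsearch

# Create a certificate authority; you are prompted for a passphrase,
# which protects the CA from being used to sign further certificates.
bin/elasticsearch-certutil ca --out /etc/elasticsearch/certs/elastic-ca.p12

# Create a node certificate signed by that CA, embedding the node's
# DNS name and IP address so that full verification can be used later.
bin/elasticsearch-certutil cert \
  --ca /etc/elasticsearch/certs/elastic-ca.p12 \
  --name node-1 \
  --dns node-1.internal.example.com \
  --ip 10.0.0.11 \
  --out /etc/elasticsearch/certs/node-1.p12
```

Because the DNS name and IP are baked into the signature, the resulting node-1.p12 can only pass full verification on that host.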

Transport Encryption

With our certificate infrastructure created, we want to go ahead and encrypt the transport network of our Elasticsearch cluster. The transport network is the node-to-node communication of the cluster. We, as clients, don't have any direct interaction with this network; however, this traffic could be intercepted and interpreted to essentially leak data about the cluster, so we want to encrypt this internode network. Before we can use the certificates, we must set file permissions so that Elasticsearch can actually read the certificate. Specifically, the elasticsearch user, who is part of the elasticsearch group, has to be able to use it, so we need to give group read access to the certificate. Now let's configure transport network encryption.
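As an aside, the file-permission step mentioned above might look like this; the certificate path and the root:elasticsearch ownership are assumptions based on a typical package install.

```shell
# Sketch: give the elasticsearch group read access to the node certificate
# (path and ownership are illustrative).
chown root:elasticsearch /etc/elasticsearch/certs/node-1.p12
chmod 640 /etc/elasticsearch/certs/node-1.p12
```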
To do that we open up the elasticsearch.yml file and we’re going to specify all
of our security configuration. First, we add xpack.security.enabled: true. This enables the X-Pack security plugin, which actually enables a few different things; we're just going to focus on transport network encryption. Second, set xpack.security.transport.ssl.enabled: true. The next one is going to be the verification mode. If you recall, we created our certificates with DNS and IP address information, which allows us to do full verification: not only is the certificate checked to make sure it is itself valid, but we also make sure that the IP and DNS of the host providing the certificate match what was used to sign it. That way, only the hosts the certificate was signed for can use it. This prevents someone from getting hold of your certificate, using it on their own host to join your cluster, and then reading all the information in your cluster. Next we've got keystore.path. This is a path, relative to the configuration directory, to the certificate that we created for this node. Because this is a PKCS#12 certificate,
it's technically a certificate package: it has the CA, the key, and the cert all inside, which means we can use this certificate as both the keystore and the truststore. By default, the certutil tool that ships with Elasticsearch generates everything as PKCS#12, and because of the anatomy of a PKCS#12 certificate, it can be used as both keystore and truststore. When we go ahead and specify the truststore,
we don't have to give it a separate certificate; we can just use the exact same one. Another thing to look at here are the configuration items in the elasticsearch.yml file: xpack.security.transport.ssl.keystore, and also truststore. If you recall, we secured these certificates with a passphrase.
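Putting the settings described so far together, the transport section of elasticsearch.yml might look like this; the certificate path is illustrative, and the same file serves as keystore and truststore.

```yaml
# Sketch of the transport-security settings in elasticsearch.yml
# (the certificate path is an assumption, relative to the config directory)
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.transport.ssl.keystore.path: certs/node-1.p12
xpack.security.transport.ssl.truststore.path: certs/node-1.p12
```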
In order for Elasticsearch to actually use this certificate, it needs to know the passphrase. For that, a configuration item on the keystore called secure_password is needed. This is the configuration item for supplying the passphrase so that we can actually use this certificate. However, we don't want to put it in the elasticsearch.yml file, because that would be a plain-text password, and that's really not good practice. So what we're going to do instead is put this passphrase into the keystore for Elasticsearch, using another Elasticsearch utility: elasticsearch-keystore. We want to add the value for this configuration item to our keystore so that its value is protected.
We do the exact same thing for the truststore: truststore secure password, and it's the same password, because we're using the same certificate for both keystore and truststore.
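Those two keystore entries can be added like this; each command prompts for the certificate's passphrase.

```shell
# Store the certificate passphrase in the Elasticsearch keystore
# instead of writing it into elasticsearch.yml in plain text.
cd /usr/share/elasticsearch
bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
```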

Users

We created a custom role in Elasticsearch using the security plugin that gives read-only access, essentially read access to the entire cluster, to a given user. So let's go ahead and create a custom user and apply this role to it. The first thing we want to specify is what the user is authorized to do.
Authorization is done through roles. First, give the user the read-only role that was created before. There is also a whole bunch of built-in roles that you may want to use as well. For instance, if I want a user to be able to log into Kibana and use the Kibana interface, the user must have the kibana_user role, a built-in role for Kibana. Another useful role that you might want to be aware of is monitoring_user. This role allows you to use the monitoring console within Kibana. It's worth mentioning that this role depends on having the kibana_user role as well; there is some role dependency involved, depending on what you're trying to do.
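Creating such a user through the security API might look like this; the username, password, host, and the read_only role name are illustrative (the role name should match whatever custom role you created).

```shell
# Sketch: create a user and assign roles via the security API.
# -k is used here because the cluster presents a self-signed certificate;
# -u elastic prompts for the elastic superuser's password.
curl -k -u elastic -X POST "https://localhost:9200/_security/user/jdoe" \
  -H 'Content-Type: application/json' -d'
{
  "password": "a-strong-password",
  "roles": ["read_only", "kibana_user", "monitoring_user"],
  "full_name": "Jane Doe"
}'
```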

Roles

Now that we have user access control enabled, protecting our cluster and Kibana instance, let's take that functionality a step further by creating our own custom roles and users. I want to cover how to create roles and users using the Elasticsearch APIs. So let's create a role, a read-only role. We call the security API, specify that we're creating a role, and then give the role a name.
There are two different types of permissions: cluster permissions and indices permissions. We can determine how we want to identify the data for which we're granting privileges. You can also specify permissions based on a query. Let's say you don't want to give someone access to an entire index, just to whatever documents are returned from a query, a subset of that index. You can limit that access even further by not allowing them to see certain fields within those documents. As you can see, the permissions and privileges in Elasticsearch custom roles can get extremely granular: you can specify the index, specific documents in that index, and specific fields in those documents. Let's assume we have a bunch of log indexes, all named logs dash something (logs-).
We could do logs-*, and that would automatically match all indexes that start with logs-, or we could specify every single index explicitly: maybe logs-1, logs-2, logs-3. Now let's go ahead and specify the privileges that we want to grant on these indices. There are a whole bunch of different privileges available; it's highly recommended that you check them out in the documentation.
We also have permissions for the cluster itself. I would encourage you to check out what these permissions are in the documentation. There are a ton of them, way too many to go through in this high-level article.
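A role like the one described above might be created like this; the role name, the query, the field list, and the monitor cluster privilege are illustrative choices, not the article's exact role.

```shell
# Sketch: a read-only role over the logs-* indices, restricted to the
# documents matching a query and to a subset of fields.
curl -k -u elastic -X POST "https://localhost:9200/_security/role/read_only" \
  -H 'Content-Type: application/json' -d'
{
  "cluster": ["monitor"],
  "indices": [
    {
      "names": ["logs-*"],
      "privileges": ["read"],
      "query": { "match": { "level": "info" } },
      "field_security": { "grant": ["@timestamp", "message", "level"] }
    }
  ]
}'
```

Note how all three levels of granularity appear together: the index pattern, a query selecting specific documents, and field_security limiting visible fields.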

Built-in user passwords

We've successfully created certificates using our own PKI, our own public key infrastructure, and we were able to encrypt the transport network. In the process we also inadvertently enabled user access control. So now comes the question: how do we access our cluster? We don't know any usernames or passwords, right? Elasticsearch actually has a whole bunch of built-in users, users that you wouldn't even know about if you never used the security plugin. When you enable the security plugin, all of these built-in users are given a randomly generated bootstrap password. That bootstrap password is stored in Elasticsearch's keystore, and the only way you can use it is with the setup-passwords utility. Elasticsearch ships with a whole bunch of utilities. Take a look in /usr/share/elasticsearch/bin,
and among them you'll find the setup-passwords utility. It goes into the keystore, grabs the bootstrap passwords for all of the built-in users, and allows you to use them one time — and one time only — to reset those passwords to something permanent.
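Running it looks like this; "interactive" prompts you for each new password, while "auto" generates random ones instead.

```shell
# Set permanent passwords for the built-in users
# (elastic, kibana, logstash_system, beats_system, and so on).
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
```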

Endpoint Encryption

So we got certs, we got transport network encrypted, and we have user access control enabled for our cluster. The last remaining piece of the puzzle here
for security is going to be enabling client network encryption.
Clients talk to the cluster over HTTPS, so we need to enable this encryption, and to do that we will reuse the same certificate we used to encrypt the transport network. In a production environment, it is perfectly acceptable to use a self-signed certificate for the transport network, because it is not user-facing; that network is just for internode communication in the cluster. For the client network, however, it is best practice to use a globally trusted certificate: not something self-signed, but something signed by a globally trusted certificate authority. That way, any client who accesses your cluster over the client network will automatically trust the certificate. In the elasticsearch.yml file we set xpack.security.http.ssl.enabled to true. That is all we need to configure here. Just like before, when we had to add the secure password to the Elasticsearch keystore for the transport network keystore and truststore, we have to do the same thing for the HTTP network keystore and truststore.
So let's use the /usr/share/elasticsearch/bin/elasticsearch-keystore utility.
We want to add xpack.security.http.ssl.keystore.secure_password. Another thing you should do, because we are using a self-signed certificate, is adjust the certificate verification mode in the Elasticsearch SSL configuration. By default full verification is used, but because we're using a self-signed certificate, we're going to set the verification mode to none. This is just one of the side effects of using a self-signed certificate on your cluster.
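The HTTP-layer settings mirror the transport-layer ones; a sketch, with the certificate path again an illustrative assumption:

```yaml
# Sketch of the HTTP (client) network settings in elasticsearch.yml,
# reusing the same self-signed PKCS#12 file as the transport layer.
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/node-1.p12
xpack.security.http.ssl.truststore.path: certs/node-1.p12
```

The matching passphrase goes into the Elasticsearch keystore under xpack.security.http.ssl.keystore.secure_password (and the truststore equivalent), just as it did for the transport network.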


Amit Cohen

A product leader with exceptional skills and strategic acumen, possessing vast expertise in cloud orchestration, cloud security, and networking.