175 changes: 124 additions & 51 deletions troubleshoot/elasticsearch/increase-capacity-data-node.md
@@ -4,65 +4,33 @@
- https://www.elastic.co/guide/en/elasticsearch/reference/current/increase-capacity-data-node.html
applies_to:
stack:
deployment:
eck:
ess:
ece:
self:
products:
- id: elasticsearch
---

# Increase the disk capacity of data nodes [increase-capacity-data-node]

:::::::{tab-set}
Disk capacity pressures can cause index failures, unassigned shards, and cluster instability.


::::::{tab-item} {{ech}}
In order to increase the disk capacity of the data nodes in your cluster:
{{es}} uses [disk-based shard allocation watermarks](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#disk-based-shard-allocation) to manage disk space on nodes, which can block allocation or indexing when nodes run low on disk space. Refer to [](/troubleshoot/elasticsearch/fix-watermark-errors.md) for additional details on how to address this situation.
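
For example, you can check whether disk pressure is currently affecting the cluster with the disk health indicator (a minimal check; the `_health_report` API is available in {{es}} 8.7 and later):

```console
GET _health_report/disk
```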

1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body).
2. On the **Hosted deployments** panel, click the gear under the `Manage deployment` column that corresponds to the name of your deployment.
3. If autoscaling is available but not enabled, enable it. You can do this by clicking the button `Enable autoscaling` on a banner like the one below:
To increase the disk capacity of the data nodes in your cluster, complete these steps:

:::{image} /troubleshoot/images/elasticsearch-reference-autoscaling_banner.png
:alt: Autoscaling banner
:screenshot:
:::
1. [Estimate how much disk capacity you need](#estimate-required-capacity).
1. [Increase the disk capacity](#increase-disk-capacity-of-data-nodes).

Or you can go to `Actions > Edit deployment`, check the checkbox `Autoscale` and click `save` at the bottom of the page.

:::{image} /troubleshoot/images/elasticsearch-reference-enable_autoscaling.png
:alt: Enabling autoscaling
:screenshot:
:::
## Estimate the amount of required disk capacity [estimate-required-capacity]

4. If autoscaling has succeeded the cluster should return to `healthy` status. If the cluster is still out of disk, check if autoscaling has reached its limits. You will be notified about this by the following banner:
The following steps explain how to retrieve the current disk watermark configuration of the cluster and how to check the current disk usage on the nodes.

:::{image} /troubleshoot/images/elasticsearch-reference-autoscaling_limits_banner.png
:alt: Autoscaling banner
:screenshot:
:::

or you can go to `Actions > Edit deployment` and look for the label `LIMIT REACHED` as shown below:

:::{image} /troubleshoot/images/elasticsearch-reference-reached_autoscaling_limits.png
:alt: Autoscaling limits reached
:screenshot:
:::

If you are seeing the banner click `Update autoscaling settings` to go to the `Edit` page. Otherwise, you are already in the `Edit` page, click `Edit settings` to increase the autoscaling limits. After you perform the change click `save` at the bottom of the page.
::::::

::::::{tab-item} Self-managed
In order to increase the data node capacity in your cluster, you will need to calculate the amount of extra disk space needed.

1. First, retrieve the relevant disk thresholds that will indicate how much space should be available. The relevant thresholds are the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) for all the tiers apart from the frozen one and the [frozen flood stage watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-flood-stage-frozen) for the frozen tier. The following example demonstrates disk shortage in the hot tier, so we will only retrieve the high watermark:
1. Retrieve the relevant disk thresholds that indicate how much space should be available. The relevant thresholds are the [high watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high) for all tiers except the frozen tier, and the [frozen flood stage watermark](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-flood-stage-frozen) for the frozen tier. The following example demonstrates a disk shortage in the hot tier, so only the high watermark is retrieved:

```console
GET _cluster/settings?include_defaults&filter_path=*.cluster.routing.allocation.disk.watermark.high*
```

The response will look like this:
The response looks like this:

```console-result
{
@@ -83,33 +51,138 @@
}
```

The above means that in order to resolve the disk shortage we need to either drop our disk usage below the 90% or have more than 150GB available, read more on how this threshold works [here](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high).
The above means that to resolve the disk shortage, the disk usage must drop below 90%, or more than 150GB of disk space must be available. Read more about how this threshold works [here](elasticsearch://reference/elasticsearch/configuration-reference/cluster-level-shard-allocation-routing-settings.md#cluster-routing-watermark-high).


2. The next step is to find out the current disk usage, this will indicate how much extra space is needed. For simplicity, our example has one node, but you can apply the same for every node over the relevant threshold.
1. Find the current disk usage, which indicates how much extra space is required. For simplicity, this example has one node, but you can apply the same steps to every node that is over the relevant threshold.

```console
GET _cat/allocation?v&s=disk.avail&h=node,disk.percent,disk.avail,disk.total,disk.used,disk.indices,shards
```

The response will look like this:
The response looks like this:

```console-result
node disk.percent disk.avail disk.total disk.used disk.indices shards
instance-0000000000 91 4.6gb 35gb 31.1gb 29.9gb 111
```

3. The high watermark configuration indicates that the disk usage needs to drop below 90%. To achieve this, 2 things are possible:
In this scenario, the high watermark configuration indicates that the disk usage needs to drop below 90%, while the current disk usage is 91%.
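
As a rough sizing sketch using the example numbers above (assumed values, not live data), you can estimate how much total capacity the node needs to land at a more comfortable usage level, for example 70%:

```sh
# Hypothetical estimate: ~31.9gb used (91% of a 35gb disk), targeting ~70% usage after expansion
used_gb=31.9
target_fraction=0.70
awk -v u="$used_gb" -v t="$target_fraction" \
  'BEGIN { printf "required total capacity: ~%.0fgb\n", u / t }'  # prints ~46gb
```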


## Increase the disk capacity of your data nodes [increase-disk-capacity-of-data-nodes]

Here are the most common ways to increase disk capacity:

* You can expand the disk space of the existing nodes. This is typically achieved by replacing your nodes with ones that have larger disks.
* You can add additional data nodes to the data tier that is short of disk space, increasing the overall capacity of that tier and potentially improving performance by distributing data and workload across more resources.

When you add another data node, the cluster doesn't recover immediately and it might take some time until shards are relocated to the new node.
You can check the progress with the following API call:

```console
GET /_cat/shards?v&h=state,node&s=state
```

If the response shows shards in the `RELOCATING` state, they are still moving. Wait until all shards turn to `STARTED`.
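
For example, a response in which some shards are still being moved might look like this (illustrative output, not from a real cluster):

```console-result
state      node
RELOCATING instance-0000000001
STARTED    instance-0000000000
STARTED    instance-0000000001
```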

:::::::{applies-switch}

::::::{applies-item} { ess:, ece: }
**@eedugon** (Contributor) commented on Jan 7, 2026:

Open question for ECE and ECH (cc: @shainaraskas and @yetanothertw). No need to address this in this PR at this moment, but maybe we want to register this somewhere else.

* The content for ECE and ECH implies the usage of autoscaling, which is OK. But there are other ways to expand disk capacity on ECE and ECH, such as:
  * Edit the deployment and change the capacity of the tier to a bigger one. That's the simplest manual method, and depending on the current size, the platform will add nodes or replace the current ones with bigger ones.
  * Configure autoscaling and increase the size to a bigger one (this is already explained).
  * Change the hardware profile / deployment template of the cluster to one that has a higher disk / memory ratio. This would replace the nodes with nodes that have different disk sizes. More info here.
  * (Only ECE) ECE admins can temporarily override the disk quota of Elasticsearch nodes in real time, as explained in /deploy-manage/deploy/cloud-enterprise/resource-overrides.md (link). This is considered a temporary measure to stabilize the cluster in case of problems before implementing a permanent solution.

Not sure if it's worth adding that info.

**Contributor Author** replied:

I definitely think this is valid information. It might make more sense to tackle it in a separate PR, as there are a few places that could benefit from this change. Would it make sense to tie that to Shaina's draft issue, or should I open a separate one?

**Contributor** replied:

I'm totally ok to tie that to Shaina's draft.

**Collaborator** replied:

please add another one

**Contributor Author** replied:

Opened #4552


:::{warning}
:applies_to: ece:
In ECE, resizing is limited by your [allocator capacity](/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md).
:::

To increase the disk capacity of the data nodes in your cluster:

* to add an extra data node to the cluster (this requires that you have more than one shard in your cluster), or
* to extend the disk space of the current node by approximately 20% to allow this node to drop to 70%. This will give enough space to this node to not run out of space soon.
1. Log in to the [{{ecloud}} console](https://cloud.elastic.co?page=docs&placement=docs-body) or ECE Cloud UI.
1. On the home page, find your deployment and select **Manage**.
1. Go to **Actions** > **Edit deployment** and check that autoscaling is enabled. Adjust the **Enable Autoscaling for** dropdown menu as needed and select **Save**.
1. If autoscaling is successful, the cluster returns to a `healthy` status.
If the cluster is still low on disk space, check whether autoscaling has reached its limits and [update your autoscaling settings](/deploy-manage/autoscaling/autoscaling-in-ece-and-ech.md#ec-autoscaling-update).
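
After the deployment has scaled, you can confirm that the cluster is healthy again with a quick status check (a minimal sketch; the same information is shown in the console):

```console
GET _cluster/health?filter_path=status
```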

4. In the case of adding another data node, the cluster will not recover immediately. It might take some time to relocate some shards to the new node. You can check the progress here:
You can also add more capacity by adding more nodes to your cluster, targeting the data tier that is short of disk space. For more information, refer to [](/troubleshoot/elasticsearch/add-tier.md).


::::::

::::::{applies-item} { self: }
To increase the data node capacity in your cluster, you can [add more nodes](/deploy-manage/maintenance/add-and-remove-elasticsearch-nodes.md) to the cluster, or increase the disk capacity of existing nodes. Disk expansion procedures depend on your operating system and storage infrastructure and are outside the scope of Elastic support. In practice, this is often achieved by [removing a node from the cluster](https://www.elastic.co/search-labs/blog/elasticsearch-remove-node) and reinstalling it with a larger disk.
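
After a new node joins, or after a node comes back with a larger disk, you might verify the per-node disk capacity with the `_cat/nodes` API (a minimal check):

```console
GET _cat/nodes?v&h=name,node.role,disk.used_percent,disk.avail,disk.total&s=disk.avail
```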

::::::

::::::{applies-item} { eck: }
To increase the disk capacity of data nodes in your {{eck}} cluster, you can either add more data nodes or increase the storage size of existing nodes.

**Option 1: Add more data nodes**

1. Update the `count` field in your data node NodeSet to add more nodes:

```yaml subs=true
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: {{version.stack}}
  nodeSets:
  - name: data-nodes
    count: 5 # Increase from previous count
    config:
      node.roles: ["data"]
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Gi
```

1. Apply the changes:

```sh
kubectl apply -f your-elasticsearch-manifest.yaml
```

ECK automatically creates the new nodes and {{es}} relocates shards to balance the load. You can monitor the progress using:


```console
GET /_cat/shards?v&h=state,node&s=state
```

If the response shows shards in the `RELOCATING` state, they are still moving. Wait until all shards turn to `STARTED` or until the disk health indicator turns `green`.
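
You can also watch the new Pods come up (a sketch, assuming the cluster is named `quickstart` as in the manifest above):

```sh
kubectl get pods -l elasticsearch.k8s.elastic.co/cluster-name=quickstart -w
```
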
::::::
**Option 2: Increase storage size of existing nodes**

1. If your storage class supports [volume expansion](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims), you can increase the storage size in the `volumeClaimTemplates`:

```yaml subs=true
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: {{version.stack}}
  nodeSets:
  - name: data-nodes
    count: 3
    config:
      node.roles: ["data"]
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 200Gi # Increased from previous size
```

1. Apply the changes. If the volume driver supports `ExpandInUsePersistentVolumes`, the filesystem is resized online without restarting {{es}}. Otherwise, you might need to manually delete the Pods after the resize so they can be recreated with the expanded filesystem.
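
If you're not sure whether your storage class supports expansion, you can check its `allowVolumeExpansion` field first (a sketch; replace `standard` with the name of your storage class):

```sh
kubectl get storageclass standard -o jsonpath='{.allowVolumeExpansion}'
```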


:::::::
For more information, refer to [](/deploy-manage/deploy/cloud-on-k8s/update-deployments.md) and [](/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates.md).

::::::
:::::::