[DOCS] Add collapsible sections to 8.0 breaking changes [Part 4] #56356

Merged (8 commits) on May 11, 2020
docs/reference/migration/migrate_8_0.asciidoc (2 changes: 1 addition & 1 deletion)
@@ -40,7 +40,7 @@ coming[8.0.0]

//tag::notable-breaking-changes[]

.Indices created in {es} 7.0 and earlier versions are not supported.
.Indices created in {es} 6.x and earlier versions are not supported.
[%collapsible]
====
*Details* +
docs/reference/migration/migrate_8_0/java.asciidoc (35 changes: 22 additions & 13 deletions)
@@ -9,27 +9,36 @@

// end::notable-breaking-changes[]

[float]
==== Changes to Fuzziness

.Changes to `Fuzziness`.
[%collapsible]
====
*Details* +
To create `Fuzziness` instances, use the `fromString` and `fromEdits` method
instead of the `build` method that used to accept both Strings and numeric
values. Several fuzziness setters on query builders (e.g.
MatchQueryBuilder#fuzziness) now accept only a `Fuzziness`instance instead of
an Object. You should preferably use the available constants (e.g.
Fuzziness.ONE, Fuzziness.AUTO) or build your own instance using the above
mentioned factory methods.
MatchQueryBuilder#fuzziness) now accept only a `Fuzziness` instance instead of
an Object.

Fuzziness used to be lenient when it comes to parsing arbitrary numeric values
while silently truncating them to one of the three allowed edit distances 0, 1
or 2. This leniency is now removed and the class will throw errors when trying
to construct an instance with another value (e.g. floats like 1.3 used to get
accepted but truncated to 1). You should use one of the allowed values.


[float]
==== Changes to Repository

accepted but truncated to 1).

*Impact* +
Use the available constants (e.g. Fuzziness.ONE, Fuzziness.AUTO) or build your
own instance using the factory methods mentioned above. Use only the allowed
`Fuzziness` values.
jrodewig (Contributor, Author) commented on May 7, 2020:

@cbuescher Do you mind confirming that this accurately reflects your original guidance?

This is mostly a formatting change. I just want to ensure I don't unintentionally remove any context here.

Feel free to ignore the other changes in this PR unless you'd like to review them.

====
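
For illustration, a minimal Java sketch of the new `Fuzziness` usage. It is not part of this diff; the `org.elasticsearch.common.unit.Fuzziness` package and the `QueryBuilders.matchQuery` helper are assumed from the 7.x Java API:

[source,java]
----
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.index.query.MatchQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class FuzzinessExample {
    public static void main(String[] args) {
        // Prefer the provided constants ...
        Fuzziness auto = Fuzziness.AUTO;
        Fuzziness one = Fuzziness.ONE;

        // ... or the factory methods; only edit distances 0, 1, and 2 are accepted.
        Fuzziness fromString = Fuzziness.fromString("AUTO");
        Fuzziness fromEdits = Fuzziness.fromEdits(2);

        // Query builders now require a Fuzziness instance rather than an Object.
        MatchQueryBuilder query = QueryBuilders.matchQuery("title", "quick brwn fox")
                .fuzziness(auto);

        // Fuzziness.fromEdits(3) or a float such as 1.3 now throws an exception
        // instead of being silently truncated to an allowed value.
    }
}
----
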

.Changes to `Repository`.
[%collapsible]
====
*Details* +
Repository has no dependency on IndexShard anymore. The contract of restoreShard
and snapshotShard has been reduced to Store and MappingService in order to improve
testability.

*Impact* +
No action needed.
====
docs/reference/migration/migrate_8_0/network.asciidoc (18 changes: 13 additions & 5 deletions)
@@ -8,9 +8,17 @@

// end::notable-breaking-changes[]

[float]
==== Removal of old network settings

.The `network.tcp.connect_timeout` setting has been removed.
[%collapsible]
====
*Details* +
The `network.tcp.connect_timeout` setting was deprecated in 7.x and has been removed in 8.0. This setting
was a fallback setting for `transport.connect_timeout`. To change the default connection timeout for client
connections `transport.connect_timeout` should be modified.
was a fallback setting for `transport.connect_timeout`.

*Impact* +
Use the `transport.connect_timeout` setting to change the default connection
timeout for client connections. Discontinue use of the
`network.tcp.connect_timeout` setting. Specifying the
`network.tcp.connect_timeout` setting in `elasticsearch.yml` will result in an
error on startup.
====
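
For example, a hypothetical `elasticsearch.yml` fragment showing the replacement; the `30s` value is only a placeholder:

[source,yaml]
----
# Removed in 8.0; leaving this in place prevents the node from starting:
# network.tcp.connect_timeout: 5s

# Configure the transport-level setting instead:
transport.connect_timeout: 30s
----
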
docs/reference/migration/migrate_8_0/node.asciidoc (43 changes: 30 additions & 13 deletions)
@@ -8,26 +8,36 @@

// end::notable-breaking-changes[]

[float]
==== Removal of `node.max_local_storage_nodes` setting

.The `node.max_local_storage_nodes` setting has been removed.
[%collapsible]
====
*Details* +
The `node.max_local_storage_nodes` setting was deprecated in 7.x and
has been removed in 8.0. Nodes should be run on separate data paths
to ensure that each node is consistently assigned to the same data path.

[float]
==== Change of data folder layout
*Impact* +
Discontinue use of the `node.max_local_storage_nodes` setting. Specifying this
setting in `elasticsearch.yml` will result in an error on startup.
====
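
As a hypothetical `elasticsearch.yml` sketch of the replacement configuration (the path is a placeholder):

[source,yaml]
----
# Removed in 8.0; specifying it now causes a startup error:
# node.max_local_storage_nodes: 2

# Run each node against its own data path instead:
path.data: /var/lib/elasticsearch/node-0
----
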

.The layout of the data folder has changed.
[%collapsible]
====
*Details* +
Each node's data is now stored directly in the data directory set by the
`path.data` setting, rather than in `${path.data}/nodes/0`, because the removal
of the `node.max_local_storage_nodes` setting means that nodes may no longer
share a data path. At startup, Elasticsearch will automatically migrate the data
path to the new layout. This automatic migration will not proceed if the data
path contains data for more than one node. You should move to a configuration in
which each node has its own data path before upgrading.
share a data path.

*Impact* +
At startup, {es} will automatically migrate the data path to the new layout.
This automatic migration will not proceed if the data path contains data for
more than one node. You should move to a configuration in which each node has
its own data path before upgrading.

If you try to upgrade a configuration in which there is data for more than one
node in a data path then the automatic migration will fail and Elasticsearch
node in a data path then the automatic migration will fail and {es}
will refuse to start. To resolve this you will need to perform the migration
manually. The data for the extra nodes are stored in folders named
`${path.data}/nodes/1`, `${path.data}/nodes/2` and so on, and you should move
@@ -36,11 +46,18 @@
corresponding node to use this location for its data path. If your nodes each
have more than one data path in their `path.data` settings then you should move
all the corresponding subfolders in parallel. Each node uses the same subfolder
(e.g. `nodes/2`) across all its data paths.
====
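
A hypothetical shell sketch of the manual migration for a second node whose data lives under a shared `path.data`; all paths are placeholders, stop the nodes first, and adjust for your own layout:

[source,sh]
----
# Data for the extra node currently lives under the shared path:
ls /var/lib/elasticsearch/nodes/1

# Give that node its own data path and move the folder contents there:
mkdir -p /var/lib/elasticsearch-node-1
mv /var/lib/elasticsearch/nodes/1/* /var/lib/elasticsearch-node-1/

# Then point the node at the new location in its elasticsearch.yml:
#   path.data: /var/lib/elasticsearch-node-1
----
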

[float]
==== Rejection of ancient closed indices

.Closed indices created in {es} 6.x and earlier versions are not supported.
[%collapsible]
====
*Details* +
In earlier versions a node would start up even if it had data from indices
created in a version before the previous major version, as long as those
indices were closed. {es} now ensures that it is compatible with every index,
open or closed, at startup time.

*Impact* +
Reindex closed indices created in {es} 6.x or before with {es} 7.x if they need
to be carried forward to {es} 8.x.
====
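
For example, a hypothetical console sketch of carrying a closed 6.x index forward on a 7.x cluster before upgrading (index names are placeholders):

[source,console]
----
# Reopen the closed index, then reindex it into an index created on 7.x:
POST /old-index-6x/_open

POST /_reindex
{
  "source": { "index": "old-index-6x" },
  "dest": { "index": "old-index-7x" }
}
----
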
docs/reference/migration/migrate_8_0/packaging.asciidoc (14 changes: 10 additions & 4 deletions)
@@ -3,9 +3,15 @@
=== Packaging changes

//tag::notable-breaking-changes[]
[float]
==== Java 11 is required

Java 11 or higher is now required to run Elasticsearch and any of its command
.Java 11 is required.
[%collapsible]
====
*Details* +
Java 11 or higher is now required to run {es} and any of its command
line tools.

*Impact* +
Use Java 11 or higher. Attempts to run {es} 8.0 using earlier Java versions will
fail.
====
//end::notable-breaking-changes[]
docs/reference/migration/migrate_8_0/reindex.asciidoc (37 changes: 28 additions & 9 deletions)
@@ -8,21 +8,35 @@
//tag::notable-breaking-changes[]
//end::notable-breaking-changes[]

Reindex from remote would previously allow URL encoded index-names and not
.Reindex from remote now re-encodes URL-encoded index names.
[%collapsible]
====
*Details* +
Reindex from remote would previously allow URL-encoded index names and not
re-encode them when generating the search request for the remote host. This
leniency has been removed such that all index-names are correctly encoded when
leniency has been removed such that all index names are correctly encoded when
reindex generates remote search requests.

Instead, please specify the index-name without any encoding.

[float]
==== Removal of types
*Impact* +
Specify unencoded index names for reindex from remote requests.
====
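
For illustration, a hypothetical reindex-from-remote request; the host and the date-math index name are placeholders, and the name is supplied unencoded rather than pre-encoded (for example, not as `%3Clogs-%7Bnow%2Fd%7D%3E`):

[source,console]
----
POST /_reindex
{
  "source": {
    "remote": { "host": "http://otherhost:9200" },
    "index": "<logs-{now/d}>"
  },
  "dest": { "index": "logs-copy" }
}
----
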

.Reindex-related REST API endpoints containing mapping types have been removed.
[%collapsible]
====
*Details* +
The `/{index}/{type}/_delete_by_query` and `/{index}/{type}/_update_by_query` REST endpoints have been removed in favour of `/{index}/_delete_by_query` and `/{index}/_update_by_query`. Since indices no longer contain types, these typed endpoints are obsolete.

[float]
==== Removal of size parameter
*Impact* +
Use the replacement REST API endpoints. Requests submitted to API endpoints
that contain a mapping type will return an error.
====
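
For example, an update-by-query call against a hypothetical index (the index name and query are placeholders); the typed path is shown only as the removed form:

[source,console]
----
# Removed in 8.0 -- returns an error:
# POST /my-index/_doc/_update_by_query

# Use the untyped endpoint instead:
POST /my-index/_update_by_query
{
  "query": { "match": { "user.id": "kimchy" } }
}
----
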


.In the reindex, delete by query, and update by query APIs, the `size` parameter has been renamed.
[%collapsible]
====
*Details* +
Previously, a `_reindex` request had two different size specifications in the body:

- Outer level, determining the maximum number of documents to process
@@ -32,4 +46,9 @@
The outer level `size` parameter has now been renamed to `max_docs` to
avoid confusion and clarify its semantics.

Similarly, the `size` parameter has been renamed to `max_docs` for
`_delete_by_query` and `_update_by_query` to keep the 3 interfaces consistent.
`_delete_by_query` and `_update_by_query` to keep the 3 interfaces consistent.

*Impact* +
Use the replacement parameters. Requests containing the `size` parameter will
return an error.
====
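
For example, a hypothetical `_reindex` request using the renamed parameter; the index names and the limit are placeholders:

[source,console]
----
POST /_reindex
{
  "max_docs": 1000,
  "source": { "index": "my-index-000001" },
  "dest": { "index": "my-new-index-000001" }
}
----
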
docs/reference/migration/migrate_8_0/rollup.asciidoc (16 changes: 12 additions & 4 deletions)
@@ -9,12 +9,20 @@

// end::notable-breaking-changes[]

[float]
==== StartRollupJob endpoint returns success if job already started

.The StartRollupJob endpoint now returns a success status if a job has already started.
[%collapsible]
====
*Details* +
Previously, attempting to start an already-started rollup job would
result in a `500 InternalServerError: Cannot start task for Rollup Job
[job] because state was [STARTED]` exception.

Now, attempting to start a job that is already started will just
return a successful `200 OK: started` response.
return a successful `200 OK: started` response.

*Impact* +
Update your workflow and applications to assume that a 200 status in response to
attempting to start a rollup job means the job is in an actively started state.
The request itself may have started the job, or it was previously running and so
the request had no effect.
====
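
For illustration, starting a hypothetical rollup job named `sensor`; whether or not the job was already running, 8.0 returns the same successful `200 OK: started` response described above:

[source,console]
----
POST /_rollup/job/sensor/_start
----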