How to move Elasticsearch data from one server to another

Jab*_*abb 78 elasticsearch

How do I move Elasticsearch data from one server to another?

I have server A running Elasticsearch 1.1.1 on one local node with multiple indices. I would like to copy that data to server B running Elasticsearch 1.3.4.

Procedure so far (sketched in shell below the list):

  1. Shut down ES on both servers
  2. scp all the data to the correct data directory on the new server (the data appears to live in /var/lib/elasticsearch/ on my Debian box)
  3. Change permissions and ownership to elasticsearch:elasticsearch
  4. Start up the new ES server
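
A rough shell version of those steps, assuming the Debian package layout and the default /var/lib/elasticsearch data path on both machines (user and serverB are placeholders):

# Stop Elasticsearch on both machines first
sudo service elasticsearch stop

# On server A: copy the whole data directory into /var/lib/ on server B
# (the target account needs write access to /var/lib/)
sudo scp -r /var/lib/elasticsearch user@serverB:/var/lib/

# On server B: restore ownership, then start Elasticsearch again
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
sudo service elasticsearch start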

When I look at the cluster with the ES head plugin, no indices appear.

The data doesn't seem to be loaded. Am I missing something?

小智 109

The selected answer makes it sound slightly more complicated than it is; the following is all you need (install npm on your system first).

npm install -g elasticdump
elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=mapping
elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=data

You can skip the first elasticdump command for subsequent copies if the mappings stay the same.

I just completed a migration from AWS to Qbox.io with this and had no problems.

More details at:

https://www.npmjs.com/package/elasticdump

The help page (as of February 2016) is included for completeness:

elasticdump: Import and export tools for elasticsearch

Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]

--input
                    Source location (required)
--input-index
                    Source index and type
                    (default: all, example: index/type)
--output
                    Destination location (required)
--output-index
                    Destination index and type
                    (default: all, example: index/type)
--limit
                    How many objects to move in bulk per operation
                    limit is approximate for file streams
                    (default: 100)
--debug
                    Display the elasticsearch commands being used
                    (default: false)
--type
                    What are we exporting?
                    (default: data, options: [data, mapping])
--delete
                    Delete documents one-by-one from the input as they are
                    moved.  Will not delete the source index
                    (default: false)
--searchBody
                    Perform a partial extract based on search results
                    (when ES is the input;
                    default: '{"query": { "match_all": {} } }')
--sourceOnly
                    Output only the json contained within the document _source
                    Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
                    sourceOnly: {SOURCE}
                    (default: false)
--all
                    Load/store documents from ALL indexes
                    (default: false)
--bulk
                    Leverage elasticsearch Bulk API when writing documents
                    (default: false)
--ignore-errors
                    Will continue the read/write loop on write error
                    (default: false)
--scrollTime
                    Time the nodes will hold the requested search in order.
                    (default: 10m)
--maxSockets
                    How many simultaneous HTTP requests can we make?
                    (default:
                      5 [node <= v0.10.x] /
                      Infinity [node >= v0.11.x] )
--bulk-mode
                    The mode can be index, delete or update.
                    'index': Add or replace documents on the destination index.
                    'delete': Delete documents on destination index.
                    'update': Use 'doc_as_upsert' option with bulk update API to do partial update.
                    (default: index)
--bulk-use-output-index-name
                    Force use of destination index name (the actual output URL)
                    as destination while bulk writing to ES. Allows
                    leveraging Bulk API copying data inside the same
                    elasticsearch instance.
                    (default: false)
--timeout
                    Integer containing the number of milliseconds to wait for
                    a request to respond before aborting the request. Passed
                    directly to the request library. If used in bulk writing,
                    it will result in the entire batch not being written.
                    Mostly used when you don't care too much if you lose some
                    data when importing but rather have speed.
--skip
                    Integer containing the number of rows you wish to skip
                    ahead from the input transport.  When importing a large
                    index, things can go wrong, be it connectivity, crashes,
                    someone forgetting to `screen`, etc.  This allows you
                    to start the dump again from the last known line written
                    (as logged by the `offset` in the output).  Please be
                    advised that since no sorting is specified when the
                    dump is initially created, there's no real way to
                    guarantee that the skipped rows have already been
                    written/parsed.  This is more of an option for when
                    you want to get most data as possible in the index
                    without concern for losing some rows in the process,
                    similar to the `timeout` option.
--inputTransport
                    Provide a custom js file to use as the input transport
--outputTransport
                    Provide a custom js file to use as the output transport
--toLog
                    When using a custom outputTransport, should log lines
                    be appended to the output stream?
                    (default: true, except for `$`)
--help
                    This page

Examples:

# Copy an index from production to staging with mappings:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=mapping
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=data

# Backup index data to a file:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index_mapping.json \
  --type=mapping
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index.json \
  --type=data

# Backup an index to a gzip using stdout:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=$ \
  | gzip > /data/my_index.json.gz

# Backup ALL indices, then use Bulk API to populate another ES cluster:
elasticdump \
  --all=true \
  --input=http://production-a.es.com:9200/ \
  --output=/data/production.json
elasticdump \
  --bulk=true \
  --input=/data/production.json \
  --output=http://production-b.es.com:9200/

# Backup the results of a query to a file
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=query.json \
  --searchBody '{"query":{"term":{"username": "admin"}}}'

------------------------------------------------------------------------------
Learn more @ https://github.com/taskrabbit/elasticsearch-dump

  • How do you apply basic authentication? (4 upvotes)
  • Basic authentication can be done like this: `--input=http://name:password@Production.es.com:9200/my_index` (3 upvotes)

Chr*_*ris 41

Use ElasticDump

1) yum install epel-release

2) yum install nodejs

3) yum install npm

4) npm install elasticdump

5) cd node_modules/elasticdump/bin

6)

./elasticdump \
  --input=http://192.168.1.1:9200/original \
  --output=http://192.168.1.2:9200/newCopy \
  --type=data

  • @tramp it is 2 different IP addresses (2 upvotes)
  • It looks like Elasticsearch 5 is supported now https://github.com/taskrabbit/elasticsearch-dump/pull/268 (2 upvotes)

Man*_*hKG 21

You can use the snapshot/restore feature available in Elasticsearch. Once you have set up a filesystem-based snapshot repository, you can move it between clusters and restore it on another cluster.
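
A minimal curl sketch of that flow (my_backup, /mnt/es_backups, snapshot_1, and newserver are placeholders; depending on the version, the location may also need to be whitelisted under path.repo in elasticsearch.yml):

# Register a filesystem snapshot repository on the source cluster
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/es_backups" }
}'

# Snapshot all indices and wait for completion
curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

# Move /mnt/es_backups to the new cluster, register the same repository
# there, then restore the snapshot:
curl -XPOST 'http://newserver:9200/_snapshot/my_backup/snapshot_1/_restore'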


Aks*_*til 6

I tried moving data from ELK 2.4.3 to ELK 5.1.1 on Ubuntu.

Here are the steps:

$ sudo apt-get update

$ sudo apt-get install -y python-software-properties python g++ make

$ sudo add-apt-repository ppa:chris-lea/node.js

$ sudo apt-get update

$ sudo apt-get install npm

$ sudo apt-get install nodejs

$ npm install colors

$ npm install nomnom

$ npm install elasticdump

From the home directory, change into the elasticdump directory: $ cd node_modules/elasticdump/

Then run the command.

If you need basic http auth, you can use it like this:

--input=http://name:password@localhost:9200/my_index

To copy an index from production:

$ ./bin/elasticdump --input="http://Source:9200/Sourceindex" --output="http://username:password@Destination:9200/Destination_index" --type=data


Mar*_*arc 6

I've always had success simply copying the index directory/folder over to the new server and restarting it. You can find the index id by running GET /_cat/indices, and the folder matching that id lives in data\nodes\0\indices (usually inside your elasticsearch folder, unless you've moved it).
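
For example, on a 5.x-or-later cluster where index folders are named by uuid (hostname and paths are placeholders):

# List each index together with the uuid that names its on-disk folder
curl 'http://localhost:9200/_cat/indices?v&h=index,uuid'

# Then copy the matching folder to the same path on the new server, e.g.:
# scp -r data/nodes/0/indices/<uuid> user@newserver:/var/lib/elasticsearch/nodes/0/indices/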


And*_*eyP 5

If you can add the second server to the cluster, you can do it like this:

  1. Add server B to the cluster with server A
  2. Increase the number of replicas for the index
  3. ES will automatically copy the indices to server B
  4. Shut down server A
  5. Decrease the number of replicas for the index

This will only work if the number of replicas equals the number of nodes. (Steps 2 and 5 are sketched below.)
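
Steps 2 and 5 are a one-line settings change; a sketch with curl (my_index is a placeholder):

# Step 2: raise the replica count so server B receives a full copy
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "index": { "number_of_replicas": 1 }
}'

# Step 5: reduce it again once server A has been shut down
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "index": { "number_of_replicas": 0 }
}'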

  • I believe this won't work when the versions are different (as is the case in the OP's question) (4 upvotes)

mid*_*ido 5

There is also the _reindex option.

From the documentation:

Through the Elasticsearch reindex API, available in version 5.x and later, you can connect your new Elasticsearch Service deployment remotely to your old Elasticsearch cluster. This pulls the data from your old cluster and indexes it into your new one. Reindexing essentially rebuilds the index from scratch, and it can be more resource-intensive to run.

POST _reindex
{
  "source": {
    "remote": {
      "host": "https://REMOTE_ELASTICSEARCH_ENDPOINT:PORT",
      "username": "USER",
      "password": "PASSWORD"
    },
    "index": "INDEX_NAME",
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "INDEX_NAME"
  }
}