[ Download from the official website ]
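If the server has direct internet access, the packages can also be fetched straight from the Elastic download site instead of downloading them in a browser and uploading them. A sketch, assuming the 6.6.1 artifact URLs are still published:
# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.1.tar.gz
# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.6.1-linux-x86_64.tar.gz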
[ Upload to Linux and extract ]
① Create an elasticsearch directory under /usr/local/
# mkdir /usr/local/elasticsearch
② Enter the elasticsearch directory
# cd /usr/local/elasticsearch/
③ Upload the Elasticsearch and Kibana packages
# rz
④ Extract the packages
# tar -xzf elasticsearch-6.6.1.tar.gz
# tar -xzf kibana-6.6.1-linux-x86_64.tar.gz
⑤ Remove the packages
# rm -rf elasticsearch-6.6.1.tar.gz
# rm -rf kibana-6.6.1-linux-x86_64.tar.gz
[ Elasticsearch ]
① Create the data and log storage directories (matching path.data and path.logs configured below)
# mkdir elasticsearch-data elasticsearch-logs
② Edit the elasticsearch.yml configuration (the settings are explained in the comments below)
# vim /usr/local/elasticsearch/elasticsearch-6.6.1/config/elasticsearch.yml
③ Set the data storage path and the log storage path
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# Cluster name
cluster.name: myProjectName
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# Node name; like the cluster name, it can be generated automatically or configured manually
node.name: myProjectName-0
# Whether this node may become a master node. By default the first machine in the cluster is the master; if it stops, a new master is elected.
node.master: true
# Allow this node to store data (enabled by default)
node.data: true
# Allow this node to run ingest pipelines (pre-processing of documents before indexing)
node.ingest: true
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# Data storage directory
path.data: /usr/local/elasticsearch/elasticsearch-data/
# Log storage directory
path.logs: /usr/local/elasticsearch/elasticsearch-logs/
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# memory_lock pins the process memory to prevent it from being swapped out, which improves performance
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# Allow access from outside the host (bind to all interfaces)
network.host: 0.0.0.0
# HTTP port
http.port: 9200
# Whether to allow cross-origin (CORS) requests; defaults to false
http.cors.enabled: true
# When CORS is enabled the default "*" allows every origin; to allow only certain sites (for example only local addresses), a regular expression can be used instead
http.cors.allow-origin: "*"
# --------------------------------- Discovery -----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes:
#
# For more information, consult the zen discovery module documentation.
#
# Cluster node settings to prevent split-brain:
# This setting ensures each node knows about at least N other master-eligible nodes. The default is 1; for larger clusters a higher value (2-4) is appropriate
#discovery.zen.minimum_master_nodes: N/2 + 1
# Ping timeout used when automatically discovering other nodes; defaults to 3s. On poor networks a higher value helps avoid discovery errors
#discovery.zen.ping.timeout: 10s
# Whether multicast discovery of nodes is enabled; defaults to true
#discovery.zen.ping.multicast.enabled: false
# Initial list of master-eligible nodes, used to discover other nodes joining the cluster
#discovery.zen.ping.unicast.hosts: ["9.115.42.89", "9.115.42.95"]
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
xpack.security.enabled: false
④ Set the maximum number of files each process may open at the same time
❶ Edit the limits.conf file
# vim /etc/security/limits.conf
hanyong hard nofile 65536
hanyong soft nofile 65536
* soft nproc 4096
* hard nproc 4096
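To confirm the new limits have taken effect, log in again as the hanyong user (the limits only apply to new sessions) and check the soft and hard open-file limits, for example:
# su - hanyong -c 'ulimit -Sn; ulimit -Hn'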
⑤ Set the JVM options; set the heap size to about half of the machine's memory:
❶ Edit the jvm.options file
# vim /usr/local/elasticsearch/elasticsearch-6.6.1/config/jvm.options
-Xms512m
-Xmx512m
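-Xms and -Xmx should be set to the same value. Once the node is running, the heap it actually uses can be double-checked through the nodes info API; a rough check:
# curl 'http://localhost:9200/_nodes/jvm?pretty' | grep heap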
⑥ Increase the maximum number of memory map areas a process may use
❶ Edit the sysctl.conf file
# vim /etc/sysctl.conf
❷ Add the following line
vm.max_map_count=655360
❸ After adding it, apply the change with the following command
# sysctl -p
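The new value can be confirmed with:
# sysctl vm.max_map_count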
⑦ Set ownership of the installation directory
# cd ..
# chown -R hanyong /usr/local/elasticsearch/
Note: Elasticsearch refuses to start under the root superuser account, so switch to an ordinary user to start it.
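The steps above assume the ordinary user hanyong already exists; if not, create it first (the user name is simply the one used throughout this article):
# useradd hanyong
# passwd hanyong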
⑧ Start
❶ Enter the elasticsearch-6.6.1/bin directory
# cd /usr/local/elasticsearch/elasticsearch-6.6.1/bin/
❷ Run the following command to start
# ./elasticsearch
⑨ Start in the background
❶ Enter the elasticsearch-6.6.1/bin directory
# cd /usr/local/elasticsearch/elasticsearch-6.6.1/bin/
❷ Run the following command to start in the background
# ./elasticsearch -d
⑩ Check whether the startup succeeded (look for port 9200)
# ss -tanl
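Another way to verify the node is up is to query it over HTTP; a request against the port configured above should return a small JSON document with the node name, cluster name, and version:
# curl http://127.0.0.1:9200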
⑪ Switch to the log directory (path.logs configured above) and view the log; the main log file is named after the cluster
# cd /usr/local/elasticsearch/elasticsearch-logs/
# more myProjectName.log
⑫ Stop Elasticsearch
❶ Find the Elasticsearch process
# jps
❷ Kill the process (using the PID reported by jps)
# kill -9 28136
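kill -9 terminates the JVM forcibly; Elasticsearch also shuts down cleanly on a plain SIGTERM, so a gentler alternative (same example PID) is:
# kill 28136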
⑬ Open access on port 9200 (run as the root user)
# /sbin/iptables -I INPUT -p tcp --dport 9200 -j ACCEPT
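On distributions that use firewalld rather than raw iptables, a rough equivalent would be:
# firewall-cmd --permanent --add-port=9200/tcp
# firewall-cmd --reload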
[ Kibana ]
① Enter the Kibana extraction directory
# cd /usr/local/elasticsearch/kibana-6.6.1-linux-x86_64/
② Edit the kibana.yml configuration
# vim config/kibana.yml
③ Set the port
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
server.port: 9100
④ Allow access from outside the host
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
server.host: "0.0.0.0"
⑤ Start
❶ Enter the bin directory
# cd /usr/local/elasticsearch/kibana-6.6.1-linux-x86_64/bin/
❷ Run the following command to start
# ./kibana
⑥ Start in the background
❶ Enter the bin directory
# cd /usr/local/elasticsearch/kibana-6.6.1-linux-x86_64/bin/
❷ Run the following command to start in the background
# nohup ./kibana &
⑦ Stop Kibana
❶ Find the process listening on port 9100
# fuser -n tcp 9100
❷ Kill the process (using the PID reported above)
# kill -9 33110
⑧ Open access on port 9100 (run as the root user)
# /sbin/iptables -I INPUT -p tcp --dport 9100 -j ACCEPT
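Once Kibana is up, it can also be checked over HTTP using its status endpoint (path as documented for Kibana 6.x), against the port configured above:
# curl http://127.0.0.1:9100/api/status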
[ Set Elasticsearch to start on boot and register it as a service ]
① Enter the following directory
# cd /etc/init.d/
② Create the elasticsearch file
# > elasticsearch
③ Edit the file with vim
# vim elasticsearch
#!/bin/bash
#
#chkconfig: 345 63 37
#description: elasticsearch
#processname: elasticsearch-6.6.1
export JAVA_HOME=/usr/local/java/jdk1.8.0_201
export JAVA_BIN=/usr/local/java/jdk1.8.0_201/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
export ES_HOME=/usr/local/elasticsearch/elasticsearch-6.6.1
case $1 in
start)
su hanyong<<!
cd $ES_HOME
./bin/elasticsearch -d -p pid
exit
!
echo "elasticsearch is started"
;;
stop)
pid=`cat $ES_HOME/pid`
kill -9 $pid
echo "elasticsearch is stopped"
;;
restart)
pid=`cat $ES_HOME/pid`
kill -9 $pid
echo "elasticsearch is stopped"
sleep 1
su hanyong<<!
cd $ES_HOME
./bin/elasticsearch -d -p pid
exit
!
echo "elasticsearch is started"
;;
*)
echo "start|stop|restart"
;;
esac
exit 0
⑤ Make the script executable
chmod 777 elasticsearch
⑥ Add or remove the service with chkconfig (see the chkconfig documentation for usage details)
chkconfig --add elasticsearch
chkconfig --del elasticsearch
⑦ Start and stop the service
service elasticsearch start
service elasticsearch stop
service elasticsearch restart
⑧ Set whether the service starts automatically on boot
chkconfig elasticsearch on
chkconfig elasticsearch off
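On systems managed by systemd, an alternative to the init.d script is a unit file; a minimal sketch, assuming the same paths and the hanyong user from above (save as /etc/systemd/system/elasticsearch.service):
[Unit]
Description=Elasticsearch 6.6.1
After=network.target

[Service]
Type=forking
User=hanyong
Environment=JAVA_HOME=/usr/local/java/jdk1.8.0_201
ExecStart=/usr/local/elasticsearch/elasticsearch-6.6.1/bin/elasticsearch -d -p /usr/local/elasticsearch/elasticsearch-6.6.1/pid
PIDFile=/usr/local/elasticsearch/elasticsearch-6.6.1/pid
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Then reload systemd and enable the service:
# systemctl daemon-reload
# systemctl enable elasticsearch
# systemctl start elasticsearch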