Environment:
System: CentOS 7.4, kernel 3.10.0-327.el7.x86_64
Docker: 18.03.0-ce
docker-compose: 1.21.0
Redis: 4.0.9
Nginx: 1.12.2
Tomcat: 8.5.30
JDK: 1.8.151
MySQL: replaced with MariaDB
Session unification options:
1. Single-node session affinity, e.g. nginx ip-hash: if that single machine goes down, all of its sessions are lost.
2. Session replication, e.g. a Tomcat cluster: the application servers share and synchronize sessions, so if one node goes down the load balancer simply finds another available node. The drawbacks: it only works within one kind of middleware (such as a Tomcat cluster), replication costs performance, and broadcasting session contents to every member can turn into a network bottleneck.
3. A session server such as memcached/redis: after handling a request, the application server saves the session into the cache DB. If an application server fails, the scheduler walks the remaining nodes; when none of them holds the session locally, it is fetched from the cache DB and copied to the local node, which gives both session sharing and high availability.
This experiment uses the session-server approach.
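For reference only — this article does not use it — approach 1 (ip-hash affinity) is just a few lines of nginx configuration; the backend addresses below are placeholders, not part of this setup:

upstream backend {
    ip_hash;                      # pin each client IP to one backend server
    server 192.168.9.222:10001;   # placeholder tomcat nodes
    server 192.168.9.222:10002;
}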
Tomcat:
Tomcat has three working modes: standalone servlet container, in-process servlet container, and out-of-process servlet container. Tomcat is a component-based server whose building blocks are all configurable; the outermost component is the Catalina servlet container, and the other components must be nested inside this top-level container in a prescribed layout. The components are configured in $CATALINA_HOME/conf/server.xml. Tomcat is a servlet container that handles HTTP requests — normally we type an HTTP address of the form http://host[":"port][abs_path] into a browser to reach a resource. Here it is used as the middleware layer.
Nginx:
For how nginx implements load balancing, and for MySQL master/slave theory, see the earlier write-ups.
3. Installation
Conventions before installing:
Each Dockerfile must live in its own directory, e.g. nginx/Dockerfile, tomcat/Dockerfile.
It helps to understand the basics first; see section 4 ("Installation") of the earlier Docker article.
Docker consists of the docker daemon, the docker client, images, registries, and containers. The client talks to the daemon over an API: a command issued locally is executed by the daemon, and the result is returned to the client. Images can be pulled straight from Docker Hub, an image can be run as a container, a container can be committed back into an image, and a registry stores images. (The original diagram, which also showed a third-party module alongside these components, is omitted here.)
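That lifecycle maps onto a handful of CLI commands; the repository name in the last line is a placeholder:

docker pull centos                  # download an image from Docker Hub
docker run -dit --name c1 centos    # instantiate the image as a container
docker commit c1 myimage:v1         # package the container back into an image
docker push myrepo/myimage:v1       # store the image in a registry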
The nginx Dockerfile:
Host# cat Dockerfile
FROM centos
# Install build dependencies
RUN yum -y install gcc gcc-c++ gd gd-devel perl-ExtUtils-Embed wget unzip tar make \
    && wget https://codeload.github.com/yaoweibin/nginx_upstream_check_module/zip/master -O /tmp/nginx_upstream_check_module.zip &>/dev/null \
    && wget -P /tmp/ http://nginx.org/download/nginx-1.12.1.tar.gz &>/dev/null \
    && wget -P /tmp https://ftp.pcre.org/pub/pcre/pcre-8.42.zip &>/dev/null \
    && wget -P /tmp https://www.openssl.org/source/openssl-1.1.0h.tar.gz &>/dev/null \
    && cd /tmp \
    && unzip nginx_upstream_check_module.zip &>/dev/null && rm -rf nginx_upstream_check_module.zip \
    && tar xf nginx-1.12.1.tar.gz &>/dev/null && rm -rf nginx-1.12.1.tar.gz \
    && tar xf openssl-1.1.0h.tar.gz &>/dev/null && rm -rf openssl-1.1.0h.tar.gz \
    && unzip pcre-8.42.zip &>/dev/null && rm -rf pcre-8.42.zip
# Configure and build nginx — kept as a separate layer so it is easier to follow later
RUN cd /tmp/nginx-1.12.1 \
    && ./configure --prefix=/usr/local/nginx --user=nginx --with-select_module --with-http_ssl_module --with-http_realip_module --with-http_image_filter_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_auth_request_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_perl_module --with-stream --with-stream_ssl_module --with-stream_realip_module --with-pcre=/tmp/pcre-8.42 --with-openssl=/tmp/openssl-1.1.0h --add-module=/tmp/nginx_upstream_check_module-master \
    && make && make install \
    && groupadd -g 1000 nginx && useradd -g 1000 -u 1000 nginx \
    && chown nginx.nginx /usr/local/nginx -R \
    && mkdir /webconf \
    && chown nginx.nginx /webconf -R
# Clear the yum cache and the build files to keep the image small
RUN rm -rf /var/lib/yum/* \
    && rm -rf /tmp/{openssl-1.1.0h,pcre-8.42,nginx-1.12.1,nginx_upstream_check_module-master}
# Set the environment variable
ENV PATH /usr/local/nginx/sbin:$PATH
# Declare the port
EXPOSE 8080
VOLUME /webconf
# Command run at container start
CMD ["nginx","-c","/webconf/nginx.conf"]
Build the image:
Host# docker build -t nginx:v2 .
[root@dockers nginxs]# docker images
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
nginx        v2       0641695749a7   4 minutes ago   669MB
centos       latest   2d194b392dd1   7 weeks ago     195MB
The current directory now holds the Dockerfile plus a conf/ directory containing nginx.conf, html/ and log/.
Check the configuration file:
Host# cat conf/nginx.conf
# Run in the foreground
daemon off;
user nginx;
worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 10240;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    # Keep the logs under the mounted directory so they are easy to inspect
    access_log /webconf/log/access.log main;
    error_log /webconf/log/error.log;

    # Tuning comes later
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 100m;
    server_tokens off;

    include /usr/local/nginx/conf/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.

    # A temporary server block just to verify that nginx works
    server {
        server_name 127.0.0.1;
        listen 8080;
        location / {
            index index.html;
            root /webconf/html;
        }
    }
}

Check the html file:
Host# cat conf/html/index.html
welcome nginx
Run the container:
Host# docker run -dit -v ${PWD}/conf:/webconf -p 8080:8080 --name container_nginx nginx:v2
922e4490dd09ced34c4e6e87bf510986f1633dc7422dc9f0b40f38ebab0c617e
Check the container:
Host# docker ps
CONTAINER ID   IMAGE      COMMAND                  CREATED         STATUS        PORTS                    NAMES
922e4490dd09   nginx:v2   "nginx -c /webconf/n…"   3 seconds ago   Up 1 second   0.0.0.0:8080->8080/tcp   container_nginx
Check that nginx is actually serving.
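A quick way to do that from the host; the response should be the index.html content shown above:

Host# curl http://127.0.0.1:8080/    # expect the "welcome nginx" page defined above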
Tomcat: first modify the Tomcat configuration locally and then pack it into a tar.gz. The steps:
# Clear the default web applications
rm -rf webapps/*
# Tune the JVM memory so that an unconfigured deployment cannot exhaust the machine
sed -i "109aJAVA_OPTS=\"-server -Xms218m -Xmx512m -XX:PermSize=256M -XX:MaxPermSize=512m\"" bin/catalina.sh
## Redis session setup (if you are not familiar with the redis configuration yet, skip ahead to the next chapter)
# Add the redis session manager configuration inside <Context> in ${CATALINA_HOME}/conf/context.xml,
# and adjust ${CATALINA_HOME}/conf/server.xml for this build (HTTP connector on 8081, web root at /usr/local/webroot/html)
# — a hedged sketch of both snippets follows this block.
# Put these three jars into ${CATALINA_HOME}/lib:
commons-pool2-2.3.jar  jedis-2.7.3.jar  tomcat-redis-session-manager-master-2.0.0.jar
Download link: https://pan.baidu.com/s/1cYfa7IwvtpQTb1qROmtwmw  password: puz9
Host# tar zcf apache-tomcat-8.5.30.tar.gz apache-tomcat-8.5.30
# Copy the whole conf directory out of the tomcat tree so the configuration can be persisted
Host# cp apache-tomcat-8.5.30/conf . -r
Host# ls    # only three items are left now
apache-tomcat-8.5.30.tar.gz  jdk-8u151-linux-x64.rpm  conf
Host# mkdir page/{tomcat1/{log,html},tomcat2/{log,html}} -pv
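Since the XML itself is not reproduced above, here is a hedged sketch of what those snippets usually look like. It assumes the jcoleman build of tomcat-redis-session-manager (package com.orangefunction.tomcat.redissessions); the redis host, port and password are placeholders borrowed from the redis setup later in this article; and the server.xml lines are only inferred from the EXPOSE 8081 and /usr/local/webroot/html paths used further down. Verify everything against the jar and layout actually deployed.

<!-- ${CATALINA_HOME}/conf/context.xml, inside <Context> — sketch, single-redis variant -->
<Valve className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve" />
<Manager className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
         host="192.168.9.223"
         port="30003"
         password="123456"
         maxInactiveInterval="1800" />

<!-- ${CATALINA_HOME}/conf/server.xml — assumed changes: HTTP connector on 8081, default web app at /usr/local/webroot/html -->
<Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
<Context path="" docBase="/usr/local/webroot/html" reloadable="true" />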
At this point the directory structure should contain page/tomcat{1,2}/{log,html} plus the conf directory copied earlier.
The index.jsp files (tomcat1 serves web1, tomcat2 serves web2 — note the layout):
Host# cat page/tomcat1/html/index.jsp
<%@ page language="java" %>
<html>
<head><title>web1</title></head>
<body>
<h1>web1</h1>
<table border="1">
  <tr>
    <td>Session ID</td>
    <td><% session.setAttribute("web1","web1"); %><%= session.getId() %></td>
  </tr>
  <tr>
    <td>Created on</td>
    <td><%= session.getCreationTime() %></td>
  </tr>
</table>
</body>
</html>
(The tomcat2 copy is identical except that web1 is replaced with web2.)
Now configure it with docker — remember the pre-install conventions.
Host# mkdir software
Host# mv * software
Host# vim Dockerfile
FROM centos
COPY software/apache-tomcat-8.5.30.tar.gz /tmp
COPY software/jdk-8u151-linux-x64.rpm /tmp
RUN rpm -ivh /tmp/jdk-8u151-linux-x64.rpm \
    && rm -rf /tmp/jdk-8u151-linux-x64.rpm \
    && tar xf /tmp/apache-tomcat-8.5.30.tar.gz -C /usr/local \
    && cd /usr/local && ln -sv apache-tomcat-8.5.30 tomcat \
    && rm -rf /tmp/apache-tomcat-8.5.30.tar.gz
ENV JAVA_HOME /usr/java/default
ENV PATH $JAVA_HOME/bin:$PATH
# Persistent mount point for the html files
VOLUME "/usr/local/webroot/html"
# Persistent mount point for the logs
VOLUME "/usr/local/tomcat/logs"
EXPOSE 8081
CMD ["/usr/local/tomcat/bin/catalina.sh","run"]

# Build the image
Host# docker build -t tomcat:v1 .
# Check its size
Host# docker images
REPOSITORY   TAG   IMAGE ID       CREATED         SIZE
tomcat       v1    fb45f6c25826   3 minutes ago   767MB

Start the containers — typing it all by hand is tedious, so use a script:
Host# cat start_tomcat_two_containers.sh
#!/bin/bash
#
# Base path on the host
Base_page=/tmp/topv1/tom/page
# html path inside the container
container_html=/usr/local/webroot/html
# log path inside the container
container_log=/usr/local/tomcat/logs
# configuration path inside the container, so it is easy to modify
container_conf=/usr/local/tomcat/conf
# per-container name/path prefix
container_name=${Base_page}/tomcat
# start two containers
for i in {1..2};do
    docker run -dit -v ${container_name}${i}/html:$container_html -v ${container_name}${i}/log:$container_log -v ${Base_page}/conf:${container_conf} -p 1000${i}:8081 --name tomcat${i} tomcat:v1
done
docker ps
Now check the effect:
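A quick check from the host (hedged: it assumes server.xml was pointed at /usr/local/webroot/html as described earlier, so the sample index.jsp pages answer on the mapped ports):

Host# curl http://127.0.0.1:10001/index.jsp    # should come back from tomcat1 (web1)
Host# curl http://127.0.0.1:10002/index.jsp    # should come back from tomcat2 (web2)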
MySQL: here MariaDB 10.2.14 is used directly.
Host# docker pull mariadb
# Create the persistence directories
Host# mkdir data/{mysql1,mysql2}/log -pv
Host# mkdir conf/{mysql1,mysql2}
# Put a copy of my.cnf into both conf/mysql1 and conf/mysql2
# my.cnf:
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
max_connections = 100
connect_timeout = 5
wait_timeout = 600
expire_logs_days = 10
max_binlog_size = 100M
default_storage_engine = InnoDB
server_id=111
binlog_format=mixed
log_bin=/var/lib/mysql/master.log
character-set-server=utf8
skip-name-resolve
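One thing to watch: master and slave must not end up with the same server_id, or the slave I/O thread will refuse to run. If both copies of my.cnf start out identical, one way to differentiate the slave copy (112 is an arbitrary value chosen here):

Host# sed -i 's/^server_id=111/server_id=112/' conf/mysql2/my.cnf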
Running and linking the containers — the approach used here:
The idea: first create a network and put the containers on it; two containers that need to talk to each other can then find one another through that network's built-in DNS (or layer-2 broadcast). Creating a network by hand would look like: docker network create net
# docker-compose configuration
Host# cat docker-compose.yml
version: "3"
services:
  mysqlmaster:
    image: mariadb
    volumes:
      - "${PWD}/data/mysql1:/var/lib/mysql"
      - "${PWD}/conf/mysql1/my.cnf:/etc/mysql/conf.d/my.cnf"
    ports:
      - "20000:3306"
    networks:
      - net
    # fixed container name, used for name resolution
    container_name: master
    environment:
      MYSQL_ROOT_PASSWORD: mysqlpass
  mysqlslave1:
    image: mariadb
    volumes:
      - "${PWD}/data/mysql2:/var/lib/mysql"
      - "${PWD}/conf/mysql2/my.cnf:/etc/mysql/conf.d/my.cnf"
    ports:
      - "20001:3306"
    networks:
      - net
    container_name: slave1
    environment:
      MYSQL_ROOT_PASSWORD: mysqlpass
networks:
  net:

# Start the containers
Host# docker-compose up -d
Creating network "mysql_net" with the default driver
Creating slave1 ... done
Creating master ... done
# Check the containers
Host# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS                     NAMES
4d7840f9f5e5   mariadb   "docker-entrypoint.s…"   5 seconds ago   Up 2 seconds   0.0.0.0:20000->3306/tcp   master
72282f9aaa64   mariadb   "docker-entrypoint.s…"   6 seconds ago   Up 2 seconds   0.0.0.0:20001->3306/tcp   slave1
# Test connectivity between the containers
Host# docker exec -it 4d7840f9f5e5 ping slave1
PING slave1 (192.168.16.2): 56 data bytes
64 bytes from 192.168.16.2: icmp_seq=0 ttl=64 time=0.085 ms
^C
--- slave1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.085/0.085/0.085/0.000 ms
A bit of MySQL background is needed here, plus some master/slave replication knowledge (see the earlier write-ups).
(SQL basics: alter = modify, insert into = insert rows, update = modify rows, create = create objects.)
Once connectivity is fine, log in to the master with docker exec -it master /bin/bash and enter mysql.
# Note: on the MASTER
MariaDB [(none)]> grant replication slave on *.* to 'repl'@'%' identified by 'repl';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> show master status;
+---------------+----------+--------------+------------------+
| File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+---------------+----------+--------------+------------------+
| master.000005 |      646 |              |                  |
+---------------+----------+--------------+------------------+
1 row in set (0.00 sec)

## Note: on the SLAVE
# Log in to the slave and point it at the master
CHANGE MASTER TO MASTER_HOST='master', MASTER_USER='repl', MASTER_PASSWORD='repl', MASTER_PORT=3306, MASTER_LOG_FILE='master.000005', MASTER_LOG_POS=646;
# Start the slave threads
MariaDB [(none)]> start slave;
Query OK, 0 rows affected (0.00 sec)
# Make sure both the IO thread and the SQL thread are running
MariaDB [(none)]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: master
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master.000005
          Read_Master_Log_Pos: 646
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 552
        Relay_Master_Log_File: master.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

# Note: back on the MASTER
# Create a database and a table on the master and insert a row
create database t1;
use t1;
CREATE TABLE `tab1` (
  `id` int(11) NOT NULL,
  `name` varchar(20) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `tab1` VALUES ('1', 'xiong');

## Note: on the SLAVE again
# Log in to the slave and check the table
MariaDB [t1]> desc tab1;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| id    | int(11)     | NO   | PRI | NULL    |       |
| name  | varchar(20) | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)
MariaDB [t1]> select * from tab1;
+----+-------+
| id | name  |
+----+-------+
|  1 | xiong |
+----+-------+
MariaDB [t1]> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: master
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: master.000005
          Read_Master_Log_Pos: 1429
               Relay_Log_File: mysqld-relay-bin.000002
                Relay_Log_Pos: 1335
        Relay_Master_Log_File: master.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
That completes master/slave replication between the containers.
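For later health checks, a one-liner from the host works too (it relies on the mysql client bundled in the official mariadb image and the root password set in the compose file):

Host# docker exec slave1 mysql -uroot -pmysqlpass -e 'show slave status\G' | grep -E 'Slave_(IO|SQL)_Running'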
Redis: this part runs on the 9.223 machine, installed outside of docker — running sentinel inside docker produced connection problems. For the underlying theory, see the earlier redis write-up.
The topology is as described before for redis master/slave + sentinel, so the diagram is not repeated: the sentinels watch one another and monitor the redis nodes; when they find the master has gone down they promote a new master, update the master's state in their own records, and notify the other sentinel nodes of the new state.
Configuration
For installing redis itself, see sections 1.5 and 1.6 of the earlier write-up.
First create five directories to hold the redis configuration files:
Host# for i in {30001..30005};do mkdir $i;done
Check the layout — five directories plus the redis source directory:
Host# ls
30001  30002  30003  30004  30005  redis-4.0.9
Copy redis.conf into the redis instances (30003-30005 are redis):
Host# for i in {30003..30005};do cp redis-4.0.9/redis.conf $i ;done
Copy the sentinel configuration (30001-30002 are sentinels):
Host# for i in {30001..30002};do cp redis-4.0.9/sentinel.conf $i ;done
There are too many options to edit by hand, so script it:
Host# cat change.sh
#!/bin/bash
#
for i in {30003..30005};do
    # password used when talking to the master
    sed -i '/# masterauth/amasterauth 123456' ${i}/redis.conf
    # set a password of your own choosing
    sed -i "/# requirepass/arequirepass 123456" ${i}/redis.conf
    # listen on all addresses
    sed -i "s/bind 127.0.0.1/#bind 127.0.0.1/gi" ${i}/redis.conf
    # run as a daemon
    sed -i "s/daemonize no/daemonize yes/gi" ${i}/redis.conf
    # change the default port: one port per instance, matching the directory name
    sed -i "s/6379/${i}/gi" ${i}/redis.conf
    # point slaveof at the master
    sed -i "/# slaveof/aslaveof 192.168.9.223 30003" ${i}/redis.conf
    # change dir, which is where the rdb files are stored
    sed -i "s@dir ./@dir ${PWD}/${i}@gi" ${i}/redis.conf
    # add a log file for troubleshooting
    sed -i "s@logfile \"\"@logfile ${PWD}/${i}/redis_${i}.log@gi" ${i}/redis.conf
done

Now adjust the master (30003) by hand — remember to delete the slaveof 192.168.9.223 30003 line that the script added:
Host# vim 30003/redis.conf
# the master skips rdb persistence to save some performance
#save 900 1
#save 300 10
#save 60 10000
# enable aof persistence
appendonly yes
# keep the default: at most one second of data can be lost, which hardly matters for sessions, and a slave can take over anyway
appendfsync everysec
# only rewrite the aof once it reaches 3 GB, again to save performance — rewriting at 64 MB makes little sense if the data never gets near 3 GB
auto-aof-rewrite-min-size 3gb

Now the sentinel configuration — again scripted rather than edited one by one:
#!/bin/bash
#
for i in {30001..30002};do
    # run as a daemon
    sed -i "2adaemonize yes" $i/sentinel.conf
    # bind the host address — it must be reachable, since clients connect to it
    sed -i "/# bind 127.0.0.1/abind 192.168.9.223" ${i}/sentinel.conf
    # change the port
    sed -i "s/port 26379/port ${i}/gi" ${i}/sentinel.conf
    # monitor the redis master
    sed -i "s/sentinel monitor mymaster 127.0.0.1 6379 2/sentinel monitor mymaster 192.168.9.223 30003 2/gi" ${i}/sentinel.conf
    # the redis password — required
    sed -i "/# sentinel auth-pass /asentinel auth-pass mymaster 123456" ${i}/sentinel.conf
    # failover timeout — set it to whatever downtime you can tolerate
    sed -i "s/sentinel failover-timeout mymaster 180000/sentinel failover-timeout mymaster 30000/gi" ${i}/sentinel.conf
    # add a log file here as well
    sed -i "10alogfile ${PWD}/${i}/sentinel_${i}.log" ${i}/sentinel.conf
done

With the configuration done, start the services. Starting and stopping each one by hand is tedious, so use a script: run bash status.sh and enter 1, 2 or 3.
Host# cat status.sh
#!/bin/bash
#
# paths to the redis binaries — change these to wherever redis is installed
Sbin=/tmp/topv1/redis/redis-4.0.9/src/redis-server
Senbin=/tmp/topv1/redis/redis-4.0.9/src/redis-sentinel
Down=/tmp/topv1/redis/redis-4.0.9/src/redis-cli
server(){
    for i in {30003..30005};do
        $Sbin ${i}/redis.conf
    done
}
sentinel(){
    for i in {30001..30002};do
        $Senbin ${i}/sentinel.conf
    done
}
shutdowns(){
    for i in {30001..30005};do
        ${Down} -h 192.168.9.223 -p ${i} -a 123456 shutdown
    done
}
read -p "Enter 1: start | 2: stop | 3: show: " enabled
case $enabled in
1|start)
    server
    sentinel
    ;;
2|stop)
    shutdowns
    ;;
3|show)
    ps -ef | grep redis
    ;;
*)
    echo "script 1: start | 2: stop | 3: show"
    ;;
esac

Host# bash status.sh
Enter 1: start | 2: stop | 3: show: (enter 1-3)
Start it up and take a look at the result.
Connect to a sentinel: redis-cli -h 192.168.9.223 -p 30001 -a 123456
192.168.9.223:30001> info
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=192.168.9.223:30003,slaves=2,sentinels=2
This shows the state is ok, the master is 223:30003, and there are 2 slaves and 2 sentinels. Now shut the master down to check that the sentinels really take over.
Shut down the master:
Host# redis-cli -h 192.168.9.223 -p 30003 -a 123456 shutdown
Watch the sentinel logs and wait for the failover; once the log shows the switch has happened, log back in to the sentinel on 30001 and run the same info command.
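Two ways to watch this without screenshots, both standard redis tooling (paths follow the layout above):

Host# tail -f 30001/sentinel_30001.log    # look for +sdown / +failover / +switch-master entries
Host# redis-cli -h 192.168.9.223 -p 30001 -a 123456 sentinel get-master-addr-by-name mymaster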
The failover succeeded. Bring the old master back up — 30005 remains the master:
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=192.168.9.223:30005,slaves=2,sentinels=2
The basic building blocks are done; next they get wired together.
Session persistence configuration: the context.xml above pointed at a single redis master/slave pair; below the sentinel-based configuration is used instead.
2. The topology:
The flow: the user hits nginx, and nginx load-balances to tomcat. On the first request, say to tomcat1, tomcat checks its own session cache and then redis; finding nothing, it creates a session, returns the response, and the session is replicated into redis. When a later request lands on tomcat2, it finds nothing in its local cache, looks the session up in redis, finds it, and returns the response without creating a new session locally.

Change page/conf/context.xml from the single-instance configuration above to the sentinel-based one (a sketch follows this block).

Here docker-compose is used to start the containers, which is easier to follow when revisiting this later:
Host# cat docker-compose.yml
version: "3"
services:
  tomcat1:
    image: tomcat:v1
    networks:
      - net
    volumes:
      - "${PWD}/page/tomcat1/log:/usr/local/tomcat/logs"
      - "${PWD}/page/conf:/usr/local/tomcat/conf"
      - "${PWD}/page/tomcat1/html:/usr/local/webroot/html"
    ports:
      - "10001:8081"
    container_name: tomcat1
  tomcat2:
    image: tomcat:v1
    networks:
      - net
    volumes:
      - "${PWD}/page/tomcat2/log:/usr/local/tomcat/logs"
      - "${PWD}/page/conf:/usr/local/tomcat/conf"
      - "${PWD}/page/tomcat2/html:/usr/local/webroot/html"
    ports:
      - "10002:8081"
    container_name: tomcat2
networks:
  net:

## Start it up. Anything repeated more than three times deserves a script — call me lazy.
Host# cat status.sh
#!/bin/bash
#
read -p "Usage 1: start, 2: del, 3: show : " Use
case $Use in
1|start)
    docker-compose up -d
    ;;
2|del)
    docker rm -f tomcat1 tomcat2
    ;;
3|show)
    docker-compose ps
    ;;
*)
    echo "Usage 1: start, 2: del, 3: show"
    ;;
esac
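A hedged sketch of the sentinel-based variant of that snippet; the attribute names (sentinelMaster, sentinels) follow the jcoleman tomcat-redis-session-manager and should be double-checked against the jar actually placed in ${CATALINA_HOME}/lib:

<!-- page/conf/context.xml, inside <Context> — sketch, sentinel variant -->
<Valve className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve" />
<Manager className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
         sentinelMaster="mymaster"
         sentinels="192.168.9.223:30001,192.168.9.223:30002"
         password="123456"
         maxInactiveInterval="1800" />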
Access test: looks right. Next, check whether the session really landed in redis.
Check redis with the command-line client (if you are not familiar with it, see section 1.6 of the earlier redis article).
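One way to do the check (KEYS is fine on a test box but not in production; connect to whichever node sentinel currently reports as the master — 30005 after the failover above). The returned key should contain the session ID shown in the browser:

Host# redis-cli -h 192.168.9.223 -p 30005 -a 123456 keys '*'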
The key matches the session ID seen in the browser, so session sharing works. The entry is only stored temporarily — check again a little later and it will be gone.
Nginx reverse-proxying the Tomcat servers
The rough principle: nginx defines a pool of tomcat servers with upstream, and a server block reverse-proxies to it with proxy_pass. When a client request arrives, the scheduler picks a backend according to the configured algorithm (round-robin, weighted, and so on); the backend processes the request, hands the result back to nginx, and nginx returns it to the client.

nginx configuration file:
Host# cat conf/nginx.conf
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
daemon off;
user nginx;
worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 10240;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /webconf/log/access.log main;
    error_log /webconf/log/error.log;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 100m;
    server_tokens off;

    include /usr/local/nginx/conf/mime.types;
    default_type application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.

    upstream mytomcat {
        server 192.168.9.222:10001;
        server 192.168.9.222:10002;
        check interval=3000 rise=2 fall=3 timeout=1000;
    }

    server {
        server_name 127.0.0.1;
        listen 8080;
        location / {
            proxy_pass http://mytomcat;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

Run one nginx instance:
Host# cat docker-compose.yml
version: "3"
services:
  nginx1:
    image: nginx:v2
    ports:
      - "8080:8080"
    networks:
      - net
    container_name: "nginx1"
    volumes:
      - "${PWD}/conf:/webconf"
networks:
  net:
Test the result: the default algorithm is round-robin (rr); least_conn can be used instead.
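A quick way to see the round-robin behaviour from the host (the grep pattern assumes the sample index.jsp pages above; with sessions shared through redis, the session ID should stay the same even as web1 and web2 alternate):

Host# for i in {1..4}; do curl -s http://127.0.0.1:8080/index.jsp | grep -o 'web[12]' | head -1; done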
Finally, container monitoring: this only shows container-level state — memory, CPU, IO and the like; getting at the logs needs extra work, which will be covered in a follow-up article.
For load testing, a simple Java program that queries MySQL ran in the background; ab benchmarking topped out at about 70 connections and roughly 12 concurrent requests.
Host hardware:
192.168.9.222: 2 GB RAM, 2-core CPU (i7-4820K) — containers: 2 tomcat, 1 nginx, 1 MySQL master + 1 slave
192.168.9.223: 2 GB RAM, 2-core CPU (i7-4820K) — 2 sentinels, 3 redis instances
Done — the next version, v2, is in progress...
The directory structures are in the attachments: redis_exp.zip, tomcat_mysql_nginx.zip.
Reposted from: https://blog.51cto.com/xiong51/2107039