
  • RUN is executed while the image is being built, while ENTRYPOINT is executed when a container starts from the built image.
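
    To see the difference, a throwaway build (the image name demo is a placeholder):

    # RUN's echo prints during the build; ENTRYPOINT's echo prints at container start
    printf 'FROM alpine\nRUN echo "runs during docker build"\nENTRYPOINT ["echo", "runs when the container starts"]\n' > Dockerfile
    docker build -t demo .
    docker run --rm demo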

reference

5 ways to move Docker container to another host

Build a Docker Image with MySQL Database

Plan A

Step 1: Create an Image from a Container

Create a new image from a container’s changes

commit command

sudo docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
  • options

    | Name, shorthand | Default | Description |
    | --------------- | ------- | ----------- |
    | --author, -a | | Author (e.g., "will brook") |
    | --change, -c | | Apply Dockerfile instruction to the created image |
    | --message, -m | | Commit message |
    | --pause, -p | true | Pause container during commit |
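
    For example (the container and image names below are placeholders):

    sudo docker commit -a "will brook" -m "add custom config" mycontainer myrepo/myimage:v1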

Step 2: Export the Image to a File

sudo docker save -o /path/to/your_image.tar your_image_name

Step 3: Load the Docker Image File

sudo docker load -i your_image.tar

Plan B

Step 1

First save the new image by finding the container ID (using docker container ls) and then committing it to a new image name. Note that only a-z0-9-_. are allowed when naming images:

# create image from container
docker container commit c16378f943fe rhel-httpd:latest

Step 2

Tag the image with the host name or IP address and the port of the registry:

# re-tag repository:tag info about image
docker image tag rhel-httpd:latest registry-host:5000/myadmin/rhel-httpd:latest
# or
docker tag 0e5574283393 registry-host:5000/myadmin/rhel-httpd:latest

Step 3

Log in from the Docker client:

docker login <harbor_address>

Step 4

Push the image to the registry using the image ID.

In this example the registry is on a host named registry-host and listening on port 5000. (Harbor defaults to port 80; see harbor.yml.)

# push repository:tag
docker image push registry-host:5000/myadmin/rhel-httpd:latest
# or
docker push registry-host:5000/myname/myimage

Pull Image from Harbor

Connecting to Harbor via HTTP

Step 1

Add the registry to the insecure-registries list in your client's Docker daemon configuration. By default, the daemon config file is located at /etc/docker/daemon.json.

{
  "insecure-registries" : ["ip:port", "0.0.0.0"]
}

If the registry listens on port 80, the port can be omitted.

Restart Docker Engine.

systemctl restart docker

Step 2

docker pull hostAddress/library/REPOSITORY:TAG

Containers

# list running containers
docker ps
# list all containers
docker ps -a
# start
docker start <container name or ID>
# stop: gracefully stop a container
docker stop [NAME]/[CONTAINER ID]
# kill: force-stop a container
docker kill [NAME]/[CONTAINER ID]

# show a container's port mappings
docker port <container name or ID>


# remove
docker rm -f <container ID>
# export
docker export <container ID> > xxx.tar
# import
docker import - test/xxx:v1
# restart
docker restart $container_id
# logs
docker logs $container_id

Check which network mode a container is using

  • List all Docker networks:

    docker network ls
  • Inspect the bridge and host networks to see which containers are attached:

    docker network inspect bridge
    docker network inspect host
  • Inspect the container directly and look for the Networks section, or filter it out with grep:

    docker inspect <container name or ID>
    docker inspect <container name or ID> | grep -i "network" # -i makes grep case-insensitive
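
A shorter way to print just the network mode, using inspect's Go-template format flag:

docker inspect --format '{{.HostConfig.NetworkMode}}' <container name or ID>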

Exit Codes

Common exit codes associated with Docker containers are:

  • Exit Code 0: Absence of an attached foreground process

  • Exit Code 1: Indicates failure due to application error

  • Exit Code 137: Indicates failure as container received SIGKILL (Manual intervention or ‘oom-killer’ [OUT-OF-MEMORY])

  • Exit Code 139: Indicates failure as container received SIGSEGV

  • Exit Code 143: Indicates failure as container received SIGTERM

  • Exit Code 126: Permission problem or command is not executable

  • Exit Code 127: Command not found, often due to a typo in a shell script
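
To read a stopped container's exit code afterwards:

docker inspect --format '{{.State.ExitCode}}' <container name or ID>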

mysql

  • Password: 123456

  • Create the container

    docker run --name mysql-server -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.7
    # Notes:
    # -d: run the container in the background
    # -P (uppercase): map the container's exposed ports to random high ports on the host
    # -p (lowercase): bind a container port to the specified host port
  • Enter the container

    docker exec -it mysql-server /bin/bash

    docker exec -it mysql-server /bin/sh
  • Access MySQL

    docker exec -it mysql-server mysql -uroot -p

  • Allow root to connect from any client

    ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
  • Access the Dockerized mysql-server from outside

    mysql -h127.0.0.1 -P3306 -uroot -p
  • Import an SQL file

    # first copy the file into the container
    docker cp **.sql <container name>:/root/
    # enter the container
    docker exec -ti <container name or ID> sh
    # log in to the database
    mysql -uroot -p
    # import the file into the target database
    mysql> USE <database name>; SOURCE /root/***.sql;
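
    Alternatively, import without entering the container by piping the file through docker exec (the database name is a placeholder):

    docker exec -i mysql-server mysql -uroot -p123456 <database name> < xxx.sql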
  • Export a database

    docker exec -it mysql-server mysqldump -uroot -p123456 <database name> > /opt/sql_bak/test_db.sql
    # the redirect runs in the host shell, so the dump file lands on the host

portainer

  • Password reset

    • Pull the helper image portainer/helper-reset-password

      docker pull portainer/helper-reset-password
    • Stop the running Portainer container

      docker stop "id-portainer-container"
    • Run the reset command

      docker run --rm -v portainer_data:/data portainer/helper-reset-password
    • Result

      2020/06/04 00:13:58 Password successfully updated for user: admin
      2020/06/04 00:13:58 Use the following password to login: &_4#\3^5V8vLTd)E"NWiJBs26G*9HPl1
    • Restart Portainer and log in with the password reset above: &_4#\3^5V8vLTd)E"NWiJBs26G*9HPl1

      docker start "id-portainer-container"
  • The credentials are now admin/admin

  • Reinstall

    sudo docker run -d -p 8000:8000 -p 9443:9443 --name portainer \
    --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    cr.portainer.io/portainer/portainer-ce:2.9.3

nacos

  • run
    docker run -d --name nacos -p 8848:8848 -e PREFER_HOST_MODE=hostname -e MODE=standalone nacos/nacos-server
    • If Linux memory is insufficient, cap the JVM heap:
      docker run -e JVM_XMS=256m -e JVM_XMX=256m --env MODE=standalone --name nacos -d -p 8848:8848 nacos/nacos-server

redis

When starting the container with docker-compose up redis and using a custom redis.conf, the config file must set the following (daemonize must stay no: if Redis forks to the background, the container's main process exits and the container stops):

bind 0.0.0.0
daemonize no

Contents of docker-compose.yml

version: "3.7"
services:
  redis:
    image: "redis:alpine"
    stdin_open: true # keep STDIN open to accept external input
    tty: true        # allocate a pseudo-terminal
    volumes:
      - /docker/projects/test/redis.conf:/data/redis.conf # host path : container path
      # - /docker/projects/test/redis/data:/data
      # - /docker/projects/test/redis/logs:/logs
    command: redis-server --include /data/redis.conf

Start with docker-compose --verbose up redis to see startup details.

Change the port mapping of an existing container

  1. Stop the container

  2. Stop the Docker service (systemctl stop docker)

  3. Edit the ports in this container's hostconfig.json (per the original thread, if config.v2.json also records the ports, update them there too)

    cd /var/lib/docker/containers/3b6ef264a040* # the CONTAINER ID
    vi hostconfig.json
    # If there was no port mapping before, you should see:
    "PortBindings":{}
    # Add a mapping like this:
    "PortBindings":{"3306/tcp":[{"HostIp":"","HostPort":"3307"}]}
    # The first number is the container port, the second the host port.
    # Changing an existing mapping is even simpler: just edit the port number.
  4. Start the Docker service (systemctl start docker)

  5. Start the container
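
An alternative that avoids editing Docker's internal files: commit the container to an image and start a new container with the desired mapping (container and image names below are placeholders):

docker stop old-container
docker commit old-container tmp-image:latest
docker run -d --name new-container -p 3307:3306 tmp-image:latest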

Configure the container's package mirror (to install vim)

mv /etc/apt/sources.list /etc/apt/sources.list.bak

echo "deb http://mirrors.163.com/debian/ jessie main non-free contrib" >/etc/apt/sources.list

echo "deb http://mirrors.163.com/debian/ jessie-proposed-updates main non-free contrib" >>/etc/apt/sources.list

echo "deb-src http://mirrors.163.com/debian/ jessie main non-free contrib" >>/etc/apt/sources.list

echo "deb-src http://mirrors.163.com/debian/ jessie-proposed-updates main non-free contrib" >>/etc/apt/sources.list
# update the package index
apt-get update
# if the download hangs at [waiting for headers], delete everything under /var/cache/apt/archives/
# install vim
apt-get install vim

Configure remote monitoring with jconsole

  • The remote JVM process needs the following options
    env.java.opts: 
    -Dcom.sun.management.jmxremote
    -Dcom.sun.management.jmxremote.port=9999
    -Dcom.sun.management.jmxremote.authenticate=false
    -Dcom.sun.management.jmxremote.ssl=false

    where 9999 is the monitoring port.
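
    Then connect from a local machine:

    jconsole <remote-host>:9999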

BinaryObjectException: Conflicting enum values

  • Cause

    Data stored in Ignite has the format
    key: String, value: Map<Enum, BigDecimal>
    where the Enum type contains
    {A,B,C}

    Later a business change required adding a new enum item D between A and B:
    {A,D,B,C}
  • Analysis

    Once data has been stored, Ignite keeps schema information about it; inserting an item between existing enum values disrupts the previously recorded indexes.
  • Solutions

    Option 1:
    Rename the enum class so the old schema information is no longer used.
    Option 2:
    When adding enum items, append them at the end so the existing schema indexes are not disturbed.
    Option 3 (unverified):
    Delete the files under $IGNITE_HOME/work/binary_meta/Nodex
  • Official notes

    • You cannot change the types of existing fields.
    • You cannot change the order of enum values or add new constants at the beginning or in the middle of the list of enum’s values. You can add new constants to the end of the list though.
  • To handle conflicting enum values, the data must be cleared

Clear binary_meta and marshaller under the $IGNITE_HOME/work/db directory.

Still to verify: whether storagePath, walPath, and walArchivePath need clearing as well.
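
A minimal cleanup sketch (stop the node first; paths follow the directory layout described above):

rm -rf $IGNITE_HOME/work/db/binary_meta $IGNITE_HOME/work/db/marshaller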

gc

A 3 s GC: (110060-52672)/1024 = 56.04 GB

A 17 s GC: (109952-52658)/1024 = 55.95 GB

IgniteCacheException

ERROR org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi  [] - Failed to send message to remote node [node=ZookeeperClusterNode [id=1c8a032d-042e-4386-9ce8-2605c0699304, addrs=[17.9.11.11], order=1, loc=false, client=false], msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8, ordered=false, timeout=0, skipOnTimeout=false, msg=GridNearLockRequest [topVer=AffinityTopologyVersion [topVer=358, minorTopVer=0], miniId=1, dhtVers=GridCacheVersion[] [null], subjId=a5dbdc1d-e76e-49c2-85d7-ed7f1c7db7bd, taskNameHash=0, createTtl=-1, accessTtl=-1, flags=3, txLbl=null, filter=null, super=GridDistributedLockRequest [nodeId=a5dbdc1d-e76e-49c2-85d7-ed7f1c7db7bd, nearXidVer=GridCacheVersion [topVer=245500806, order=1638786801426, nodeOrder=336], threadId=11960694, futId=96c1bf42d71-90702925-3ef9-4c70-b7a7-4be2fb6d75ba, timeout=0, isInTx=true, isInvalidate=false, isRead=true, isolation=REPEATABLE_READ, retVals=[true], txSize=0, flags=0, keysCnt=1, super=GridDistributedBaseMessage [ver=GridCacheVersion [topVer=245500806, order=1638786801426, nodeOrder=336], committedVers=null, rolledbackVers=null, cnt=1, super=GridCacheIdMessage [cacheId=-182240380, super=GridCacheMessage [msgId=1360862, depInfo=null, lastAffChangedTopVer=AffinityTopologyVersion [topVer=336, minorTopVer=0], err=null, skipPrepare=false]]]]]]]

org.apache.ignite.IgniteCheckedException: Failed to connect to node due to unrecoverable exception (is node still alive?). Make sure that each ComputeTask and cache Transaction has a timeout set in order to prevent parties from waiting forever in case of network issues [nodeId=d0a258e5-ec1b-4f79-89ad-80c27708f895, addrs=[x/x.x.x.x:47100], err= class org.apache.ignite.IgniteCheckedException: Remote node does not observe current node in topology : d0a258e5-ec1b-4f79-89ad-80c27708f895]

Caused by: org.apache.ignite.IgniteCheckedException: Remote node does not observe current node in topology : d0a258e5-ec1b-4f79-89ad-80c27708f895
  • GC strategy
  • Exception handling in the Ignite client

ELK

Elasticsearch is a search and analytics engine. Logstash is a server-side data-processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" such as Elasticsearch. Kibana lets users visualize the data in Elasticsearch with charts and graphs.

Elasticsearch is the living heart of what is today's most popular log analytics platform — the ELK Stack (Elasticsearch, Logstash and Kibana). Elasticsearch's role is so central that it has become synonymous with the name of the stack itself.

Elasticsearch behaves like a REST API, so you can use either the POST or the PUT method to add data to it. You use PUT when you know, or want to specify, the id of the data item, or POST if you want Elasticsearch to generate an id for the data item:
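
A minimal sketch with curl (the index name my-index and the document body are placeholders):

# PUT with an explicit id
curl -X PUT "localhost:9200/my-index/_doc/1" -H 'Content-Type: application/json' -d '{"message": "hello"}'
# POST and let Elasticsearch generate the id
curl -X POST "localhost:9200/my-index/_doc" -H 'Content-Type: application/json' -d '{"message": "hello"}'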

solution

max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

If you want to increase the limit shown by ulimit -n, you should:

  • Modify /etc/systemd/user.conf and /etc/systemd/system.conf with the following line (this takes care of graphical login):

    DefaultLimitNOFILE=65535
  • Modify /etc/security/limits.conf with the following lines (this takes care of non-GUI login):

    * hard nofile 65535
    * soft nofile 65535
  • Reboot your computer for changes to take effect.

  • check

    ulimit -Hn
    ulimit -Sn

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

  • vim /etc/sysctl.conf

    add vm.max_map_count=655360

  • sysctl -p to apply the change

the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

In short, if you are running Elasticsearch locally (single node) or with just a single node in the cloud, use the config below in your elasticsearch.yml to avoid the production check. More info about this config is in this SO answer:

discovery.type: single-node

Kubernetes

Kubernetes is pronounced coo-ber-net-ees, not coo-ber-neats. People also use the shortened version k8s a lot. Please don’t pronounce that one k-eights—it is still coo-ber-net-ees.

Difference between Docker and Kubernetes

Docker is a containerization platform, and Kubernetes is a container orchestrator for container platforms like Docker. 

Docker Container Problems:

  • How would all of these containers be coordinated and scheduled?
  • How do you seamlessly upgrade an application without any interruption of service?
  • How do you monitor the health of an application, know when something goes wrong and seamlessly restart it?

When most people talk about “Kubernetes vs. Docker,” what they really mean is “Kubernetes vs. Docker Swarm.” 

Kubernetes architecture and its components

We can break down the components into three main parts.

  1. The Control Plane - The Master.
  2. Nodes - Where pods get scheduled.
  3. Pods - Holds containers.

Docker is a platform and tool for building, distributing, and running Docker containers. It offers its own native clustering tool that can be used to orchestrate and schedule containers on machine clusters. Kubernetes is a container orchestration system for Docker containers that is more extensive than Docker Swarm and is meant to coordinate clusters of nodes at scale in production in an efficient manner. It works around the concept of pods, which are scheduling units (and can contain one or more containers) in the Kubernetes ecosystem, and they are distributed among nodes to provide high availability. One can easily run a Docker build on a Kubernetes cluster, but Kubernetes itself is not a complete solution and is meant to include custom plugins.

command

List running pods

kubectl get pod -n <namespace>

View logs

kubectl logs -f <pod ID> -n <namespace>

Enter a container

kubectl exec -it <pod ID> -n <namespace> -c entity-server-server -- sh

Copy a file from a pod to the local host

kubectl cp <namespace>/<pod ID>:/path/to/source_file ./path/to/local_file

How it works

https://juejin.cn/post/7136952484903256077

Service registration

Spring Cloud Alibaba Nacos Discovery follows the Spring Cloud Commons standard and implements three interfaces: AutoServiceRegistration, ServiceRegistry, and Registration.

During startup, a Spring Cloud application listens for the WebServerInitializedEvent. Once the web container has finished initializing and the event is received, the registration action is triggered: the register method of ServiceRegistry is called to register the service with the Nacos Server.

![Nacos client registration mechanism](/images/nacos/nacos client register mechanism.png)

Startup

  • Starting 2.x requires the -m option: ./startup.sh -m standalone

Heartbeat mechanism

Health checks

After the server receives a client's service registration request and creates an empty Service, it starts a health-check task:

  • If no client heartbeat arrives for more than 15 seconds, the instance's health status in the registry is set to false
  • If no client heartbeat arrives for 30 seconds, the instance is evicted from the registry via an HTTP DELETE call to /v1/ns/instance

open API

  • List all services
curl -X GET 'http://127.0.0.1:8848/nacos/v2/ns/service/list'
  • Send a heartbeat
curl -X PUT '127.0.0.1:8848/nacos/v2/ns/instance/beat' \
-d '{
"namespaceId": "jarvex_space",
"serviceName": "jarvex-common-group@@entity-server",
"ip": "127.0.0.1",
"port": "5333"
}'

curl -X PUT '127.0.0.1:8848/nacos/v2/ns/instance/beat' \
-d 'namespaceId=jarvex_space' \
-d 'serviceName=jarvex-common-group@@entity-server' \
-d 'ip=127.0.0.1' \
-d 'port=5333'
  • Create a namespace (the endpoint path below is assumed from the Nacos v2 console API)

    curl -d 'namespaceId=jarvex_space' \
    -d 'namespaceName=jarvex' \
    -X POST 'http://127.0.0.1:8848/nacos/v2/console/namespace'
  • Delete a persistent instance

curl -X DELETE "http://127.0.0.1:8848/nacos/v2/ns/instance?serviceName=jarvex-gateway&ip=192.168.1.148&port=8085&namespaceId=jarvex_space&groupName=jarvex-common-group&ephemeral=false"

Dynamic configuration

Option 1: fetch from the Nacos config center

  • Configure nacos config and fetch via a controller

    config:
      enabled: true
      server-addr: ${spring.cloud.nacos.server-addr}
      file-extension: yaml
      namespace: public

    Annotate the controller class with @RefreshScope to pick up refreshed configuration.

Option 2: have the Nacos client listen to the specified config file (recommended)

  • Write a listener class
  • Refresh the configuration manually

reference

Log in to PostgreSQL from the terminal

  • If you have never logged in before, set up the current user for login:

    There is no default username and password without you creating one. The simplest possible setup is to follow these steps to set up your own user as a superuser.

    At a terminal prompt, create a postgres user with your own username

    sudo -u postgres createuser --superuser $USER  # $USER does not need to be replaced

    Start the postgresql command prompt as your username but running as root since you didn’t set a password yet;

    sudo -u postgres psql

    At the postgresql prompt, set your password;

    \password $USER    # here $USER must be replaced with the current username

    After that, you should be able to log on just fine.

  • If you have already completed the steps above, you can log in directly with:

    psql postgres

Import a file

psql postgres	# login command
\c some_database # choose a database
\i /path/to/file_name.sql # execute the SQL file

Connect to a specific schema

  • If no schema is specified, the public schema is used by default

  • Specify the schema

    jdbc:postgresql://localhost:5432/mydatabase?currentSchema=myschema
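
    Equivalently, set it per session after connecting (database and schema names are placeholders):

    psql -d mydatabase -c "SET search_path TO myschema; SELECT current_schema();"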