Deploying on EKS via Travis CI fails
I am trying to build a CI/CD pipeline: GitHub -> Travis CI -> AWS EKS.
Everything works fine — the images get published to Docker Hub and all — but when Travis executes kubectl apply -f "the files", it throws this error: error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
(There is nothing wrong with the source code/deployment/service files themselves; I deployed them manually on AWS EKS and they work fine.)
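For context on what the error refers to: kubectl reads an `exec` credential plugin stanza from the kubeconfig that `aws eks update-kubeconfig` generates, and kubectl v1.24 dropped support for the `v1alpha1` client auth API that older AWS CLI releases write there. A minimal, self-contained illustration (the kubeconfig below is a trimmed sample, not the asker's real file):

```shell
# Sample user entry as written into the kubeconfig by older AWS CLI releases
cat > sample-kubeconfig <<'EOF'
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args: [eks, get-token, --cluster-name, test001]
EOF
# This is the apiVersion kubectl >= 1.24 rejects with the error above;
# you can check what your own kubeconfig requests the same way:
grep -A1 'exec:' sample-kubeconfig
```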
#-----------------travis.yml-------------
sudo: required
services:
- docker
env:
global:
- SHA=$(git rev-parse HEAD)
before_install:
# Install kubectl
- curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
- chmod +x ./kubectl
- sudo mv ./kubectl /usr/local/bin/kubectl
# Install AWS CLI
- if ! [ -x "$(command -v aws)" ]; then curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" ; unzip awscliv2.zip ; sudo ./aws/install ; fi
# export environment variables for AWS CLI (using Travis environment variables)
- export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
- export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
# Setup kubectl config to use the desired AWS EKS cluster
- aws eks update-kubeconfig --region ${AWS_DEFAULT_REGION} --name ${AWS_EKS_CLUSTER_NAME}
- echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
- docker build -t akifboi/multi-client -f ./client/Dockerfile.dev ./client
# - aws s3 ls
script:
- docker run -e CI=true akifboi/multi-client npm test
deploy:
provider: script
script: bash ./deploy.sh
on:
branch: master
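Since the answers below recommend pinning kubectl rather than fetching `stable.txt`, the `before_install` steps could install a fixed release instead; a sketch (v1.22.0 is an example version — pick one matching your EKS cluster's Kubernetes version):

```yaml
before_install:
  # Pin kubectl to a release compatible with the EKS cluster instead of latest
  - curl -LO https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl
  - chmod +x ./kubectl
  - sudo mv ./kubectl /usr/local/bin/kubectl
```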
#----deploy.sh--------
# docker build -t akifboi/multi-client:latest -t akifboi/multi-client:$SHA -f ./client/Dockerfile ./client
# docker build -t akifboi/multi-server:latest -t akifboi/multi-server:$SHA -f ./server/Dockerfile ./server
# docker build -t akifboi/multi-worker:latest -t akifboi/multi-worker:$SHA -f ./worker/Dockerfile ./worker
# docker push akifboi/multi-client:latest
# docker push akifboi/multi-server:latest
# docker push akifboi/multi-worker:latest
# docker push akifboi/multi-client:$SHA
# docker push akifboi/multi-server:$SHA
# docker push akifboi/multi-worker:$SHA
echo "starting"
aws eks --region ap-south-1 describe-cluster --name test001 --query cluster.status # this is where the problem happens!
echo "applying k8 files"
kubectl apply -f ./k8s/
# kubectl set image deployments/server-deployment server=akifboi/multi-server:$SHA
# kubectl set image deployments/client-deployment client=akifboi/multi-client:$SHA
# kubectl set image deployments/worker-deployment worker=akifboi/multi-worker:$SHA
echo "done"
#------travis logs----------
last few lines:
starting
"ACTIVE"
applying k8 files
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
done
Already up to date.
HEAD detached at c1858f7
Untracked files:
(use "git add <file>..." to include in what will be committed)
aws/
awscliv2.zip
nothing added to commit but untracked files present (use "git add" to track)
Dropped refs/stash@{0} (3b51f951e824689d6c35fc40dadf6fb8881ae225)
Done. Your build exited with 0.
We install the latest version of kubectl in CI and ran into this error today. After pinning to a previous version (1.18), the error was resolved.
The last working version is 1.23.6; we found that 1.24 is broken.
I can confirm it works with version v1.22.0.
If anyone is looking for a CircleCI solution, they can try the snippet below:
steps:
- checkout
- kubernetes/install-kubectl:
kubectl-version: v1.22.0
We ran into this issue today as well. Our kubectl auto-updates every time we deploy, and a new release yesterday (version 1.24) appears to be broken. My fix was to change the auto-update to a pinned version (1.23.5), which resolved the problem.
Ran into the same issue with GitLab Runner and EKS. Support for the client auth API version v1alpha1 was removed in kubectl v1.24. Fixed it by using a kubectl version matching the EKS cluster's Kubernetes version instead of the latest kubectl:
curl -LO https://dl.k8s.io/release/v1.22.0/bin/linux/amd64/kubectl
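Beyond pinning kubectl, updating the AWS CLI to a release whose `aws eks get-token` emits `client.authentication.k8s.io/v1beta1` and re-running `aws eks update-kubeconfig` should also resolve the mismatch; as a stopgap, the existing kubeconfig can be patched in place. A sketch (the kubeconfig path and contents here are samples standing in for the real file, e.g. ~/.kube/config):

```shell
# Rewrite the removed v1alpha1 exec apiVersion to v1beta1, which
# kubectl >= 1.24 still accepts.
KUBECONFIG_PATH=./kubeconfig
# Trimmed sample standing in for the real kubeconfig:
cat > "$KUBECONFIG_PATH" <<'EOF'
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args: [eks, get-token, --cluster-name, test001]
EOF
sed -i 's#client.authentication.k8s.io/v1alpha1#client.authentication.k8s.io/v1beta1#' "$KUBECONFIG_PATH"
grep apiVersion "$KUBECONFIG_PATH"
```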