Failing to push large docker image > 10 GB to Artifactory on OpenShift
We use Artifactory on OpenShift as a Docker registry, installed via JFrog's Helm chart. Everything works so far, except pushing large Docker images (> 10 GB) to the registry.
We run an Nginx reverse proxy in one pod and Artifactory in another pod, so it should behave as if Nginx were not on the same server as Artifactory itself.
On the console the push appears to work: the smaller layers are pushed, and the large layer uploads as well. Then, after a few seconds, it starts re-uploading from the beginning.
Artifactory throws this error:
2021-08-23T05:41:53.624Z [jfrt ] [ERROR] [ ] [.j.a.c.g.GrpcStreamObserver:97] [default-executor-755] - refreshing affected platform config stream - got an error (status: Status{code=INTERNAL, description=Received unexpected EOS on DATA frame from server., cause=null})
io.grpc.StatusRuntimeException: INTERNAL: Received unexpected EOS on DATA frame from server.
at io.grpc.Status.asRuntimeException(Status.java:533)
at io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onClose(ClientCalls.java:478)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at org.jfrog.access.client.grpc.AuthorizationInterceptor$AuthenticatedClientCall$RejoiningClientCallListener.onClose(AuthorizationInterceptor.java:73)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:413)
at io.grpc.internal.ClientCallImpl.access0(ClientCallImpl.java:66)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImplStreamClosed.runInternal(ClientCallImpl.java:742)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImplStreamClosed.runInContext(ClientCallImpl.java:721)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
The Artifactory Nginx conf looks like this (mostly generated by Artifactory):
server {
    listen 443 ssl;
    listen 80;
    server_name ~(?<repo>.+)\.my.url.ch my.url my.nonssl.url;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_certificate /var/opt/jfrog/nginx/ssl/tls.crt;
    ssl_certificate_key /var/opt/jfrog/nginx/ssl/tls.key;
    ssl_password_file /var/opt/jfrog/nginx/ssl/tls.pass;
    ssl_ciphers HIGH:!aNULL:!MD5;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto $scheme;
    }

    ## Application specific logs
    rewrite ^/$ /ui/ redirect;
    rewrite ^/ui$ /ui/ redirect;
    rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
    chunked_transfer_encoding on;
    client_max_body_size 0;

    location / {
        proxy_read_timeout 2400s;
        proxy_pass_header Server;
        proxy_cookie_path ~*^/.* /;
        proxy_buffer_size 128k;
        proxy_buffers 40 128k;
        proxy_busy_buffers_size 128k;
        #proxy_buffering off;
        #proxy_request_buffering off;
        proxy_pass http://devlab-artifactory:8082;
        proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        add_header Strict-Transport-Security always;

        location ~ ^/artifactory/ {
            proxy_pass http://artifactory:8081;
        }
    }
}
nginx.conf
# Main Nginx configuration file
worker_processes 4;
error_log stderr warn;
pid /tmp/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    variables_hash_max_size 1024;
    variables_hash_bucket_size 64;
    server_names_hash_max_size 4096;
    server_names_hash_bucket_size 128;
    types_hash_max_size 2048;
    types_hash_bucket_size 64;

    proxy_read_timeout 2400s;
    client_header_timeout 2400s;
    client_body_timeout 2400s;
    proxy_connect_timeout 75s;
    proxy_send_timeout 2400s;
    proxy_buffer_size 128k;
    proxy_buffers 40 128k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 250m;
    proxy_http_version 1.1;
    client_max_body_size 100G;
    client_body_buffer_size 128k;
    client_body_in_file_only clean;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format timing 'ip = $remote_addr '
                      'user = \"$remote_user\" '
                      'local_time = \"$time_local\" '
                      'host = $host '
                      'request = \"$request\" '
                      'status = $status '
                      'bytes = $body_bytes_sent '
                      'upstream = \"$upstream_addr\" '
                      'upstream_time = $upstream_response_time '
                      'request_time = $request_time '
                      'referer = \"$http_referer\" '
                      'UA = \"$http_user_agent\"';
    access_log /var/opt/jfrog/nginx/logs/access.log timing;

    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
We have tried a lot of things here, such as a larger client body size and disabling proxy buffering, but I never got an upload past 5.6 GB.
I have successfully uploaded images of this size to Harbor, so the same should be possible with Artifactory.
Any suggestions would be greatly appreciated.
Thanks and best regards
I think the best approach is to minimize the Docker image size by using a multi-stage Dockerfile.
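As an illustrative sketch (the base images and build steps are assumptions, not taken from the question), a multi-stage build keeps build tooling out of the final image so only the artifacts you actually need get pushed:

```dockerfile
# Build stage: compilers and build dependencies live only here
# (hypothetical Go application)
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Final stage: only the compiled binary is shipped, so the
# pushed image stays far smaller than the build environment
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

Everything in the first stage is discarded; only layers of the final stage are uploaded to the registry.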
This used to happen to me frequently because of NGINX's proxy buffering. Check the logs there to see whether that is where the problem occurs.
I suggest trying to disable proxy buffering in NGINX:
proxy_buffering off;
proxy_ignore_headers "X-Accel-Buffering";
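In the configuration from the question these directives would go inside the `location /` block; since the problem is a large upload, disabling request buffering as well (already present there as a commented-out line) is likely the relevant part. A sketch, placement assumed from the posted config:

```nginx
location / {
    # Stream the request body straight to the upstream instead of
    # spooling it to a temp file first (matters for multi-GB layers)
    proxy_request_buffering off;
    # Do not buffer the upstream response either
    proxy_buffering off;
    proxy_ignore_headers "X-Accel-Buffering";

    proxy_pass http://devlab-artifactory:8082;
    # ...remaining proxy_set_header directives as in the question...
}
```

With request buffering off, NGINX no longer writes the whole layer to its own temp storage before forwarding it, so upstream timeouts and temp-file limits stop applying to the upload.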