Do GitHub and GitLab support git clone's --filter parameter?
I want to use Git's partial clone feature, and I came across the git clone --filter=tree:none <repo> command.
However, when I run it against GitHub, I get warning: filtering not recognized by server, ignoring and the filter has no effect.
I'd like to know whether GitHub simply doesn't support this yet, or whether something is wrong with my setup.
I asked GitHub support, but haven't received an answer from their technical staff yet.
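One way to tell whether this is a server-side limitation rather than a local misconfiguration is to look at the capabilities the server advertises; a rough sketch (the repository URL is just an example):

# Dump the raw protocol exchange and look for the "filter" capability in the
# server's advertisement; if it is missing, the server does not allow --filter
# and the client prints exactly the warning quoted above.
GIT_TRACE_PACKET=1 git ls-remote https://github.com/torvalds/linux HEAD 2>&1 | grep -i filter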
Almost certainly neither GitHub nor GitLab supports this yet. The --filter option is still under active development and not really ready for general use. GitHub's blog post about the release of Git 2.19 in September 2018 says:
Note that most public servers do not yet support the feature, but you can play with git clone --filter=blob:none
against your local Git 2.19 install.
Once this feature is more fully developed and hosts start supporting it, I doubt they will do so quietly. As far as I know, no major hosting provider has made such an announcement yet.
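If you want to follow the quoted suggestion and play with the filter against a local Git install, a minimal sketch (the /tmp/demo.git path is an example; plain Git servers keep the filter capability switched off by default):

# Allow the filter capability on the repository being cloned from; without
# this, the clone falls back with "filtering not recognized by server, ignoring".
git -C /tmp/demo.git config uploadpack.allowFilter true

# Blob-less partial clone over the file:// transport; the missing blobs are
# fetched on demand later.
git clone --filter=blob:none file:///tmp/demo.git demo-partial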
Update from the OP, 2019-03-21:
Not long ago, I received an official reply from GitHub. They think that the --filter parameter is still an immature feature and comes with some security issues. Therefore, this feature will not be supported in the short term.
Regarding partial clone / sparse checkout support in GitLab:
Since GitLab 12.4 (released 2019-10-22), partial clone has been available as an optional alpha feature for self-managed instances. You can enable it instance-wide through the feature flags API:
curl --data "value=true" --header "PRIVATE-TOKEN: <your_access_token>" https://gitlab.example.com/api/v4/features/gitaly_upload_pack_filter
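To inspect the flag's current state, or to turn the alpha feature back off, the same feature-flags API can be used (this assumes an admin access token and your own instance URL, as above):

# List all feature gates, including gitaly_upload_pack_filter, with their current values.
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/features"

# Remove the feature gate again if the alpha feature causes trouble.
curl --request DELETE --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/features/gitaly_upload_pack_filter"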
You can find more information here: https://docs.gitlab.com/ee/topics/git/partial_clone.html
To be clear: as of the last edit to this answer, you cannot use this feature with repositories hosted on gitlab.com.
Exclude large files using Partial Clone
Storing large binary files in Git is normally discouraged, because every large file added will be downloaded by everyone who clones or fetches changes thereafter.
This is slow, if not a complete obstruction when working from a slow or unreliable internet connection.
In GitLab 13.0, Partial Clone has been enabled for blob size filters, as well as experimentally for other filters.
This allows troublesome large files to be excluded from clones and fetches. When Git encounters a missing file, it will be downloaded on demand.
When cloning a project, use --filter=blob:none or --filter=blob:limit=1m to exclude blobs completely or by file size.
Note, Partial Clone requires at least Git 2.22.0.
Read more in our recent blog, "How Git Partial Clone lets you fetch only the large file you need", from James Ramsay.
See the documentation and issue.
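Putting the two filters from the release note together, a minimal sketch of the workflow (the URL, project name and branch are placeholders):

# Skip every blob larger than 1 MiB at clone time; the checkout of HEAD still
# fetches whatever HEAD itself needs.
git clone --filter=blob:limit=1m https://gitlab.com/<group>/<project>.git
cd <project>

# Later checkouts, diffs, blames, etc. download the remaining blobs on demand
# from the promisor remote recorded by the partial clone.
git checkout <older-branch-or-commit>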
Although I couldn't find an official blog post or announcement about it, GitHub does indeed appear to be rolling out --filter support.
$ git clone --bare --single-branch --depth=1 https://github.com/torvalds/linux
results in roughly 74k objects being downloaded, totalling 195.82 MiB.
$ git clone --bare --single-branch --depth=1 --filter=blob:none https://github.com/torvalds/linux
results in roughly 4.7k objects being downloaded, totalling 2.15 MiB. That is 91 times less data, if all you want to know is which files exist in the repository.
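As a quick follow-up to the blob:none experiment: the commit and tree objects are all present, so listing the files needs no further downloads (linux.git is simply the directory the bare clone above creates):

cd linux.git

# Walk the trees that came with the partial clone; no blob is fetched, so this
# works even offline.
git ls-tree -r --name-only HEAD | head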
Since you mentioned tree:none in your question, I tested that as well. It now fails with fatal: expected 'tree:<depth>', and my follow-up experiments show that only tree:0 works, which downloads around 603 bytes into a bare repo. If you try to clone and check out a working tree, Git slowly works out which objects it needs and ends up cloning the entire repository. Any depth greater than 0 results in: fatal: remote error: filter 'tree' not supported (maximum depth: 0, but got: 1)
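A rough sketch of that tree:0 behaviour (the second command is just one way to force an on-demand tree fetch):

# Treeless, single-commit bare clone: only commit objects are downloaded.
git clone --bare --single-branch --depth=1 --filter=tree:0 https://github.com/torvalds/linux linux-treeless.git
cd linux-treeless.git

# Anything that has to walk the tree graph now fetches the missing trees from
# the promisor remote one round trip at a time, which is why a full
# clone-and-checkout ends up pulling most of the repository anyway.
git ls-tree HEAD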