JupyterLab TensorFlow 2.3 Build Failed with 524

I created a new notebook in Google Cloud Vertex-AI with the following properties:

When I open the notebook, I am prompted with the following message:

After a while, I receive the following error message:

Build failed with 524.

    If you are experiencing the build failure after installing an extension (or trying to include previously installed extension after updating JupyterLab) please check the extension repository for new installation instructions as many extensions migrated to the prebuilt extensions system which no longer requires rebuilding JupyterLab (but uses a different installation procedure, typically involving a package manager such as 'pip' or 'conda').

    If you specifically intended to install a source extension, please run 'jupyter lab build' on the server for full output.

When I run `jupyter lab build` in the terminal, I get:

[LabBuildApp] WARNING | Config option `kernel_spec_manager_class` not recognized by `LabBuildApp`.
[LabBuildApp] JupyterLab 3.2.8
[LabBuildApp] Building in /opt/conda/share/jupyter/lab
[LabBuildApp] Building jupyterlab assets (production, minimized)
Build failed.
Troubleshooting: If the build failed due to an out-of-memory error, you
may be able to fix it by disabling the `dev_build` and/or `minimize` options.

If you are building via the `jupyter lab build` command, you can disable
these options like so:

jupyter lab build --dev-build=False --minimize=False

You can also disable these options for all JupyterLab builds by adding these
lines to a Jupyter config file named `jupyter_config.py`:

c.LabBuildApp.minimize = False
c.LabBuildApp.dev_build = False

If you don't already have a `jupyter_config.py` file, you can create one by
adding a blank file of that name to any of the Jupyter config directories.
The config directories can be listed by running:

jupyter --paths

Explanation:

- `dev-build`: This option controls whether a `dev` or a more streamlined
`production` build is used. This option will default to `False` (i.e., the
`production` build) for most users. However, if you have any labextensions
installed from local files, this option will instead default to `True`.
Explicitly setting `dev-build` to `False` will ensure that the `production`
build is used in all circumstances.

- `minimize`: This option controls whether your JS bundle is minified
during the Webpack build, which helps to improve JupyterLab's overall
performance. However, the minifier plugin used by Webpack is very memory
intensive, so turning it off may help the build finish successfully in
low-memory environments.

An error occurred.
RuntimeError: JupyterLab failed to build
See the log file for details:  /tmp/jupyterlab-debug-ke3s6jt2.log
(base) jupyter@lookalike-conversion-model2:~$

When I check the log for this, I see the following error (the full log is at the bottom of this post):

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

I would now like to know how to fix this error so that the build succeeds. I think it is also the reason my pipeline fails later, because once the model finishes training I get this error:

2022-02-24T13:15:54.660529854ZERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.

Any help would be greatly appreciated!

Full log stack trace:

[LabBuildApp] Building in /opt/conda/share/jupyter/lab
[LabBuildApp] Node v12.22.6

[LabBuildApp] Yarn configuration loaded.
[LabBuildApp] Building jupyterlab assets (production, minimized)
[LabBuildApp] > node /opt/conda/lib/python3.7/site-packages/jupyterlab/staging/yarn.js install --non-interactive
[LabBuildApp] yarn install v1.21.1
[1/5] Validating package.json...
[2/5] Resolving packages...
success Already up-to-date.
Done in 0.89s.

[LabBuildApp] > node /opt/conda/lib/python3.7/site-packages/jupyterlab/staging/yarn.js yarn-deduplicate -s fewer --fail
[LabBuildApp] yarn run v1.21.1
$ /opt/conda/share/jupyter/lab/staging/node_modules/.bin/yarn-deduplicate -s fewer --fail
Done in 1.53s.

[LabBuildApp] > node /opt/conda/lib/python3.7/site-packages/jupyterlab/staging/yarn.js run build:prod:minimize
[LabBuildApp] yarn run v1.21.1
$ webpack --config webpack.prod.minimize.config.js

<--- Last few GCs --->

[17013:0x55b0f665b100]   203434 ms: Mark-sweep 2025.5 (2051.4) -> 2024.2 (2051.4) MB, 1531.2 / 0.0 ms  (average mu = 0.079, current mu = 0.009) allocation failure scavenge might not succeed
[17013:0x55b0f665b100]   205286 ms: Mark-sweep 2028.4 (2054.3) -> 2025.7 (2052.1) MB, 1842.0 / 0.0 ms  (average mu = 0.040, current mu = 0.005) allocation failure scavenge might not succeed


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0x12edd074a8d9]
    1: StubFrame [pc: 0x12edd0708ad2]
    2: StubFrame [pc: 0x12edd07bae96]
Security context: 0x18eab2cb2ec9 <JSObject>
    3: /* anonymous */(aka /* anonymous */) [0x1766605ff901] [/opt/conda/share/jupyter/lab/staging/node_modules/webpack/node_modules/webpack-sources/lib/applySourceMap.js:156] [bytecode=0x1766605fa259 offset=503](this=0x1ea94dc00451 <undefined>,0x1dcd32299611 <String[2]: e.>,0x0b62ae...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x55b0f3faff69 node::Abort() [webpack]
 2: 0x55b0f3ee7b87 std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > node::SPrintFImpl<char const*>(char const*, char const*&&) [webpack]
 3: 0x55b0f41390b2 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [webpack]
 4: 0x55b0f413938b v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [webpack]
 5: 0x55b0f42ccf96  [webpack]
 6: 0x55b0f42df8ea v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [webpack]
 7: 0x55b0f42e05f4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [webpack]
 8: 0x55b0f42e27ed v8::internal::Heap::AllocateRawWithLightRetry(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [webpack]
 9: 0x55b0f42e2855 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [webpack]
10: 0x55b0f42a8fde v8::internal::Factory::AllocateRawWithImmortalMap(int, v8::internal::AllocationType, v8::internal::Map, v8::internal::AllocationAlignment) [webpack]
11: 0x55b0f42b1770 v8::internal::Factory::NewRawOneByteString(int, v8::internal::AllocationType) [webpack]
12: 0x55b0f44f2011 v8::internal::String::SlowFlatten(v8::internal::Isolate*, v8::internal::Handle<v8::internal::ConsString>, v8::internal::AllocationType) [webpack]
13: 0x55b0f44cd0ca v8::internal::StringTable::LookupString(v8::internal::Isolate*, v8::internal::Handle<v8::internal::String>) [webpack]
14: 0x55b0f45ea563 v8::internal::Runtime_HasProperty(int, unsigned long*, v8::internal::Isolate*) [webpack]
15: 0x12edd074a8d9 
Aborted
error Command failed with exit code 134.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

[LabBuildApp] JupyterLab failed to build
[LabBuildApp] Traceback (most recent call last):

[LabBuildApp]   File "/opt/conda/lib/python3.7/site-packages/jupyterlab/debuglog.py", line 48, in debug_logging
    yield

[LabBuildApp]   File "/opt/conda/lib/python3.7/site-packages/jupyterlab/labapp.py", line 176, in start
    raise e

[LabBuildApp]   File "/opt/conda/lib/python3.7/site-packages/jupyterlab/labapp.py", line 173, in start
    app_options=app_options, production = production, minimize=self.minimize)

[LabBuildApp]   File "/opt/conda/lib/python3.7/site-packages/jupyterlab/commands.py", line 483, in build
    production=production, minimize=minimize, clean_staging=clean_staging)

[LabBuildApp]   File "/opt/conda/lib/python3.7/site-packages/jupyterlab/commands.py", line 695, in build
    raise RuntimeError(msg)

[LabBuildApp] RuntimeError: JupyterLab failed to build

[LabBuildApp] Exiting application: JupyterLab

To answer your question and as a workaround, you should use the following commands (as suggested in the error output for this issue):

sudo -i
jupyter lab build --dev-build=False --minimize=False
jupyter labextension list
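
If you want these options to apply to every build without passing the flags each time, the build output above also describes a config-file route. A minimal sketch, assuming you create the file in one of the directories listed by `jupyter --paths` (the exact location depends on your environment):

# jupyter_config.py: create it in any directory listed by `jupyter --paths`
c.LabBuildApp.minimize = False   # skip the memory-hungry Webpack minifier
c.LabBuildApp.dev_build = False  # always use the streamlined production build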

In any case, I have opened an issue tracker for this case; you can check it at this link. Please upvote the issue there, and let's wait for an official reply from the Google developers.

Also, if you need more details, you can check the list of similar cases I have seen, which makes me think this is an ongoing issue related to the network/image that is currently being (partially) looked at, and that is why I also opened an issue tracker with Google.
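
If the build still runs out of memory even with minimization disabled, another general mitigation (this is a standard Node.js option rather than anything JupyterLab-specific) is to raise Node's heap limit for the build process. A sketch, assuming your notebook instance has enough RAM to back the larger heap (the 4096 MB value is only an example; adjust it to your machine):

NODE_OPTIONS="--max-old-space-size=4096" jupyter lab build --dev-build=False --minimize=False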