IPython: save the output of a code cell and still show the output normally during execution
The usual answer to this is the cell magic %%capture cap. The problem is that it suppresses the normal output; to display it, you have to run cap.show() after the cell has finished executing. When running a cell that takes a long time, like training a neural network, this behavior becomes a real headache.

How can I run my code cell and get live output as usual, while still being able to save that output to a .txt file afterwards?
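For reference, the %%capture workflow described above looks like this (nothing is displayed while the first cell runs; cap is a CapturedIO object, so the captured text is also available as a string):

%%capture cap
print('long-running training output...')  # suppressed while the cell runs

Then, in a later cell:

cap.show()          # replays the captured output after the fact
text = cap.stdout   # the captured stdout as a string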
This isn't IPython/Jupyter-specific, but here's a context manager I wrote for a similar purpose:
import sys

class capture_stdout:
    """
    Context manager similar to `contextlib.redirect_stdout`, but
    different in that it:
      - temporarily writes stdout to other streams *in addition to*
        rather than *instead of* `sys.stdout`
      - accepts any number of streams and sends stdout to each
      - can optionally keep streams open after exiting context by
        passing `closing=False`

    Parameters
    ----------
    *streams : `io.IOBase`
        stream(s) to receive data sent to `sys.stdout`
    closing : `bool`, optional
        if `True` [default], close streams upon exiting the context
        block.
    """
    def __init__(self, *streams, closing=True):
        self.streams = streams
        self.closing = closing
        # keep a reference to the real write method so it can be restored
        self.sys_stdout_write = sys.stdout.write

    def __enter__(self):
        # patch sys.stdout.write so printed data also goes to each stream
        sys.stdout.write = self._write
        if len(self.streams) == 1:
            return self.streams[0]
        return self.streams

    def __exit__(self, exc_type, exc_value, traceback):
        # restore the original write method
        sys.stdout.write = self.sys_stdout_write
        if self.closing:
            for s in self.streams:
                s.close()

    def _write(self, data):
        for s in self.streams:
            s.write(data)
        self.sys_stdout_write(data)
        sys.stdout.flush()
You can pass any number of streams, and it writes to all of them in real time, *in addition to* the regular sys.stdout. So, for example, if you want to display live output, capture it for later use in your code, and also log it to a file, you can do:
from io import StringIO

# pass closing=False so the streams stay open after the block exits;
# otherwise mem_stream.getvalue() would fail on a closed StringIO
with capture_stdout(StringIO(), open('logfile.txt', 'w'), closing=False) as (mem_stream, file_stream):
    print('some really long output')
    # etc...

stdout = mem_stream.getvalue()
print(f"in str: {stdout}")

file_stream.close()  # flush the log file before reading it back
with open('logfile.txt', 'r') as f:
    print(f"in file: {f.read()}")
some really long output
in str: some really long output
in file: some really long output
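As a side note, the closing=False option from the docstring keeps the streams open after the block exits, so a single stream can accumulate output across several captures. A minimal sketch:

from io import StringIO

stream = StringIO()
with capture_stdout(stream, closing=False):
    print('first capture')
with capture_stdout(stream, closing=False):
    print('second capture')

print(stream.getvalue())  # both captures are still in the open stream
stream.close()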
It also shouldn't be too hard to turn this into a cell magic that does the same thing. Here is the section of the IPython documentation on defining custom magics, if you want to give it a shot.
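For example, a minimal sketch of such a magic, assuming the capture_stdout class above is already defined and using a hypothetical magic name tee whose argument is treated as a log file path:

from IPython import get_ipython
from IPython.core.magic import register_cell_magic

@register_cell_magic
def tee(line, cell):
    # `line` is the text after `%%tee`; this sketch treats it as a
    # log file path (a hypothetical convention)
    with capture_stdout(open(line.strip(), 'w')):
        get_ipython().run_cell(cell)

You could then run a cell as:

%%tee logfile.txt
print('live output that also lands in logfile.txt')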