Direct communication between Javascript in Jupyter and server via IPython kernel
I am trying to display an interactive Three.js-based mesh visualizer inside a Jupyter cell. The workflow is as follows:
- the user starts a Jupyter notebook and opens the viewer in a cell
- using Python commands, the user can manually add meshes and animate them interactively
In practice, the main thread sends requests to a server through a ZMQ socket (each request expects a single reply), then the server sends the requested data back to the main thread using another pair of sockets (many "requests", few replies expected), and the main thread finally forwards the data to the Javascript frontend using a comm through the IPython kernel. So far so good, and it works fine because the messages all flow in the same direction:
Main thread (Python command) [ZMQ REQ] -> [ZMQ REP] Server (Data) [ZMQ XREQ] -> [ZMQ XREQ] Main thread (Data) [IPykernel Comm] -> [IPykernel Comm] Javascript (Display)
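For reference, here is a minimal sketch of that happy path; the endpoint, socket names and the 'viewer' comm target are illustrative placeholders, not the actual API of the project:

import zmq
from ipykernel.comm import Comm

# Main thread: synchronous request/reply exchange with the server
ctx = zmq.Context.instance()
req_socket = ctx.socket(zmq.REQ)
req_socket.connect("ipc:///tmp/server")  # hypothetical endpoint
req_socket.send(b"add_mesh")
data = req_socket.recv()  # blocks until the server replies with the data

# Main thread: push the data to the Javascript frontend through a comm
# (the frontend must have registered the 'viewer' target beforehand)
comm = Comm(target_name="viewer")
comm.send(data={"mesh": data.decode("utf-8")})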
However, the pattern is different when I want to fetch the status of the frontend, e.g. to wait for a mesh to finish loading:
Main thread (Status request) --> Server (Status request) --> Main thread (Waiting for reply)
   |                                                            |
   <-------------------------------- Javascript (Processing) <--
This time, the server sends a request to the frontend, but in return the frontend does not send its reply directly back to the server: the reply goes to the main thread, which is supposed to forward it to the server, which would finally answer the main thread.
There is an obvious problem: the main thread would have to forward the frontend's reply and receive the server's reply at the same time, which is impossible. The ideal solution would be to enable the server to communicate directly with the frontend, but I do not know how to do that, since I cannot use get_ipython().kernel.comm_manager.register_target on the server side. I tried to instantiate an IPython kernel client on the server side using jupyter_client.BlockingKernelClient, but I did not manage to use it either to communicate or to register comm targets.
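For comparison, registering a comm target inside the kernel process is simple; this is roughly the standard ipykernel pattern (the 'viewer' target name is made up for illustration), and it is exactly what cannot be replicated from the server process:

from IPython import get_ipython

def on_viewer_comm(comm, open_msg):
    # Called when the frontend opens a comm for the 'viewer' target
    @comm.on_msg
    def handle_msg(msg):
        print(msg["content"]["data"])

# Only available in the process actually running the IPython kernel
get_ipython().kernel.comm_manager.register_target("viewer", on_viewer_comm)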
OK, so I have found a solution for now, but it is not very nice. Instead of just waiting for the reply and keeping the main loop busy, I added a timeout and interleaved the wait with calls to the kernel's do_one_iteration to force the processing of messages:
while True:
    try:
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
        break
    except zmq.error.ZMQError:
        kernel.do_one_iteration()
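The timeout mentioned above is not shown in the snippet; with a deadline it would look something like this (a sketch, with an arbitrary 5-second limit):

import time
import zmq

deadline = time.time() + 5.0  # arbitrary timeout in seconds
rep = None
while time.time() < deadline:
    try:
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
        break
    except zmq.error.ZMQError:
        # No reply yet: let the kernel process its pending messages,
        # including the comm reply coming from the frontend
        kernel.do_one_iteration()
if rep is None:
    raise TimeoutError("timed out waiting for the server reply")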
It works, but unfortunately it is not really portable, and it messes up the Jupyter evaluation stack (all the queued evaluations will be processed here instead of in order)...
Alternatively, here is a more appealing way:
import zmq
import asyncio
import nest_asyncio

nest_asyncio.apply()

zmq_socket.send(b"ready")

async def enforce_receive():
    await kernel.process_one(True)
    return zmq_socket.recv().decode("utf-8")

loop = asyncio.get_event_loop()
rep = loop.run_until_complete(enforce_receive())
But in this case you need to know in advance that the kernel is expected to receive exactly one message, and relying on nest_asyncio is not ideal either.
Here is a link to an issue on Github about this topic, along with an example notebook.
Update
I finally managed to solve my issue completely, without any of the previous drawbacks. The trick is to analyze every incoming message: unrelated messages are put back in the queue in order, while comm-related ones are processed on the spot:
from IPython import get_ipython
import zmq
from ipykernel.kernelbase import SHELL_PRIORITY

class CommProcessor:
    """
    @brief    Re-implementation of ipykernel.kernelbase.do_one_iteration
              to only handle comm messages on the spot, and put the
              other ones back in the queue.
    @details  Calling 'do_one_iteration' messes up the kernel
              'msg_queue': some messages would be processed too soon,
              which is likely to corrupt the kernel state. This method
              only processes comm messages to avoid such side effects.
    """
    def __init__(self):
        self.__kernel = get_ipython().kernel
        self.qsize_old = 0

    def __call__(self, unsafe=False):
        """
        @brief      Check once whether there are pending comm-related
                    events in the shell stream message priority queue.
        @param[in]  unsafe  Whether to assume that checking if the number
                            of pending messages has changed is enough. It
                            makes the evaluation much faster but is flawed.
        """
        # Flush every incoming message on shell_stream only.
        # Note that this is a faster implementation of ZMQStream.flush
        # that only handles incoming messages. It reduces the computation
        # time from about 10us to 20ns.
        # https://github.com/zeromq/pyzmq/blob/e424f83ceb0856204c96b1abac93a1cfe205df4a/zmq/eventloop/zmqstream.py#L313
        shell_stream = self.__kernel.shell_streams[0]
        shell_stream.poller.register(shell_stream.socket, zmq.POLLIN)
        events = shell_stream.poller.poll(0)
        while events:
            _, event = events[0]
            if event:
                shell_stream._handle_recv()
                shell_stream.poller.register(
                    shell_stream.socket, zmq.POLLIN)
            events = shell_stream.poller.poll(0)

        qsize = self.__kernel.msg_queue.qsize()
        if unsafe and qsize == self.qsize_old:
            # The number of messages in the queue has not changed since
            # the last time it was checked. Assume those messages are
            # the same as before and return early.
            return

        # One must go through all the messages to keep them in order
        for _ in range(qsize):
            priority, t, dispatch, args = \
                self.__kernel.msg_queue.get_nowait()
            if priority <= SHELL_PRIORITY:
                _, msg = self.__kernel.session.feed_identities(
                    args[-1], copy=False)
                msg = self.__kernel.session.deserialize(
                    msg, content=False, copy=False)
            else:
                # Do not spend time analyzing already-rejected messages
                msg = None
            if msg is None or 'comm_' not in msg['header']['msg_type']:
                # The message is not comm-related, so put it back in the
                # queue after lowering its priority so that it is sent at
                # the "end of the queue", i.e. just at the right place:
                # after the next unchecked messages, after the other
                # messages already put back in the queue, but before the
                # next one to go the same way. Note that every shell
                # message has SHELL_PRIORITY by default.
                self.__kernel.msg_queue.put_nowait(
                    (SHELL_PRIORITY + 1, t, dispatch, args))
            else:
                # Comm message: process it right away.
                comm_handler = getattr(
                    self.__kernel.comm_manager, msg['header']['msg_type'])
                msg['content'] = self.__kernel.session.unpack(msg['content'])
                comm_handler(None, None, msg)
        self.qsize_old = self.__kernel.msg_queue.qsize()

process_kernel_comm = CommProcessor()
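With this helper, the waiting loop from the first workaround can call the processor instead of do_one_iteration, so that only comm messages are handled on the spot and any queued cell evaluations stay in order (a sketch, reusing the same zmq_socket as above):

zmq_socket.send(b"status")
rep = None
while rep is None:
    try:
        rep = zmq_socket.recv(flags=zmq.NOBLOCK).decode("utf-8")
    except zmq.error.ZMQError:
        # Only comm-related messages are processed here; everything
        # else goes back in the queue, in order
        process_kernel_comm(unsafe=False)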