Prevent concurrent execution
I want to prevent concurrent execution of a function that is called asynchronously.
The function is called from a hyper service, and when two connections come in, one should wait until the other's call to the function has finished. I thought implementing a Future that blocks execution until the other thread/connection is done would solve this. I store the futures in a Mutex<HashMap<i64, LockFut>>, but when I lock that mutex in order to get and await the LockFut, the compiler complains that the MutexGuard is not Send. I don't know how to solve this, or whether my approach is simply bad.
|
132 | let mut locks = LOCKS.lock().unwrap();
| --------- has type `std::sync::MutexGuard<'_, std::collections::HashMap<i64, hoster::hoster::LockFut>>`
...
136 | lock.await;
| ^^^^^^^^^^ await occurs here, with `mut locks` maybe used later
137 | }
| - `mut locks` is later dropped here
Here is my implementation of the Future:
use lazy_static::lazy_static;
use log::warn;
use std::collections::HashMap;
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

lazy_static! {
    static ref LOCKS: Mutex<HashMap<i64, LockFut>> = Mutex::new(HashMap::new());
}

struct LockState {
    waker: Option<Waker>,
    locked: bool,
}

struct LockFut {
    state: Arc<Mutex<LockState>>,
}

impl Future for LockFut {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let mut state = self.state.lock().unwrap();
        match state.locked {
            false => Poll::Ready(()),
            true => {
                state.waker = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }
}

impl LockFut {
    fn new() -> LockFut {
        LockFut {
            state: Arc::new(Mutex::new(LockState {
                locked: false,
                waker: None,
            })),
        }
    }

    pub fn release_lock(&mut self) {
        let mut state = self.state.lock().unwrap();
        state.locked = false;
        if let Some(waker) = state.waker.take() {
            waker.wake();
        }
    }

    pub async fn lock<'a>(id: i64) {
        let mut locks = LOCKS.lock().unwrap();
        // Wait for an existing lock to be unlocked or create a new lock
        let lock = locks.entry(id).or_insert(LockFut::new());
        // Wait for the potential lock to be released
        lock.await;
    }

    pub fn unlock(id: i64) {
        match LOCKS.lock().unwrap().get_mut(&id) {
            Some(lock) => lock.release_lock(),
            None => warn!("No lock found for: {}", id),
        };
    }
}
And this is how I call it:
async fn is_concurrent(id: i64) {
    should_not_be_concurrent(id).await;
}

async fn should_not_be_concurrent(id: i64) {
    LockFut::lock(id).await;
    // Do crazy stuff
    LockFut::unlock(id);
}
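For reference, a hand-written future like LockFut is driven by an executor that calls poll repeatedly until it returns Ready. The minimal busy-polling executor below is a sketch for demonstration only (real executors park the task and rely on the Waker instead of spinning); it uses only the standard library:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A waker that does nothing; the loop below simply re-polls.
struct NoopWaker;

impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

// Busy-poll a future to completion (for demonstration only).
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let v = block_on(async { 40 + 2 });
    println!("{}", v); // prints 42
}
```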
The guard of the standard Mutex is indeed !Send, so it cannot be carried across .awaits. For this task an asynchronous mutex is usually the right tool to consider: there is one in futures, and there is also a stand-alone crate. Their guards are Send, so at this point the problem should already be solved.

But I'd go one step further and say that LockFut solves exactly the same problem as an async Mutex. So for this particular example the code can be significantly simplified to the following (playground):
use std::sync::Mutex as StdMutex;
use futures::lock::Mutex;

#[derive(Default)]
struct State { .. }

type SharedState = Arc<Mutex<State>>;

lazy_static! {
    static ref LOCKS: StdMutex<HashMap<i64, SharedState>> = Default::default();
}

fn acquire_state<'a>(id: i64) -> SharedState {
    Arc::clone(&LOCKS.lock().unwrap().entry(id).or_default())
}
// Acquiring is straightforward:
let mut state = acquire_state(0).lock().await;

// or with your functions:
async fn is_concurrent(id: i64) {
    should_not_be_concurrent(id).await;
}

async fn should_not_be_concurrent(id: i64) {
    let mut state = acquire_state(id).lock().await;
    // Do crazy stuff
    // As a bonus there's no need for manual unlocking here,
    // since `drop(state)` unlocks the mutex.
}
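The key move in acquire_state is that the std MutexGuard only lives for the duration of one expression: the Arc is cloned out of the map, the guard is dropped, and only then is anything awaited. The sketch below demonstrates that shape with std types only (the async Mutex from futures is replaced by a placeholder i32 so the example runs without external crates; names are illustrative):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex as StdMutex};

// Placeholder for the per-id state; in the real code this would be
// an async Mutex around the actual State.
type Shared = Arc<StdMutex<i32>>;

fn acquire(map: &StdMutex<HashMap<i64, Shared>>, id: i64) -> Shared {
    // The MutexGuard is a temporary confined to this one expression,
    // so nothing !Send survives into any later .await.
    Arc::clone(map.lock().unwrap().entry(id).or_default())
}

fn main() {
    let map = StdMutex::new(HashMap::new());
    let a = acquire(&map, 7);
    let b = acquire(&map, 7);
    let c = acquire(&map, 8);
    // The same id yields the same shared state; a different id does not.
    println!("{} {}", Arc::ptr_eq(&a, &b), Arc::ptr_eq(&a, &c)); // prints "true false"
}
```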
On top of that, you may find this blog post about mutexes in async code useful.
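To see why the per-id lock actually prevents concurrent execution, here is a blocking analogue of should_not_be_concurrent using threads and std mutexes (a sketch under the assumption that the async version behaves the same way, just with .await instead of blocking): each thread takes the per-id lock before a read-modify-write, so the critical sections never interleave and no increments are lost.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Blocking analogue of `should_not_be_concurrent`: take the per-id
// lock, then do a non-atomic read-modify-write under it.
fn bump(locks: &Mutex<HashMap<i64, Arc<Mutex<i64>>>>, id: i64) {
    let state = Arc::clone(locks.lock().unwrap().entry(id).or_default());
    let mut value = state.lock().unwrap(); // exclusive per-id section
    let old = *value;
    thread::yield_now(); // widen the race window on purpose
    *value = old + 1;
}

fn main() {
    let locks = Arc::new(Mutex::new(HashMap::new()));
    let handles: Vec<_> = (0..8)
        .map(|_| {
            let locks = Arc::clone(&locks);
            thread::spawn(move || {
                for _ in 0..1000 {
                    bump(&locks, 42);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // Without the per-id mutex the yield between read and write would
    // lose updates; with it, every increment is preserved.
    let total = *locks.lock().unwrap()[&42].lock().unwrap();
    println!("{}", total); // prints 8000
}
```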