What is the proper way to cancel coroutines with common mutex

I have run into the following problem.

I have (at least) six coroutines that work on a map managed by a mutex.

Depending on the scenario, I sometimes need to cancel one, several, or all of the coroutines.

What is the best way to handle the mutex when cancelling a coroutine? (The truth is I don't actually know whether the coroutine being cancelled is the one holding the mutex lock.) Does the mutex "system" have any clever tricks to deal with this?


Edit 2021-09-30 11:28 GMT+2 (DST)

My actual code is fairly complex, so I have simplified it and show the main problem here:

... 
class HomeFragment:Fragment(){
...
private lateinit var googleMap:GoogleMap

val mapMutex = Mutex()
...

override fun onViewCreated(view:View, savedInstanceState: Bundle?) {
...
binding.fragmentHomeMapView?.getMapAsync { _googleMap ->

  _googleMap?.let{ safeGoogleMap ->
    googleMap = safeGoogleMap
  }?:let{
    Message.error("Error creating map (null)") 
  }

  ...
   
  homeViewModel.apply {
    ...
    //observer & coroutine 1 
    liveDataMapFlagged?.observe(
      viewLifecycleOwner
    ){flaggedMapDetailResult->

      //Here I want to stop the lifecycleScope job below if it is already
      //running, and do some cleanup before entering (do I need to acquire the
      //mutex if the cleanup touches the Google map?)
      //If I cancel the job, will the mutex then unlock gracefully?

      flaggedMapDetailResult?.apply {
        ...
        lifecycleScope.launchWhenStarted { //Here I want to catch the job with i.e 'flagJob = lifeCycleScope.launchWhe...'  
          ...
          withContext(Dispatchers.Default){
            ...
            mapMutex.withLock {   //suspends if locked
              withContext(Dispatchers.Main){
                selectedSiteMarker?.remove()
                selectedCircle?.remove() 
                ... // Doing some cleanup... removing markers
              }
              ... // Creating new markers
              var flaggedSiteMarkerLatLng = coordinateSiteLatitude?.let safeLatitude@{safeLatitude->
                 return@safeLatitude coordinateSiteLongitude?.let safeLongitude@{safeLongitude->
                 return@safeLongitude LatLng(safeLatitude,safeLongitude)
                 }
              }
              ...
              flaggedSiteMarkerLatLng?.let { safeFlaggedSiteMarkerLatLng ->
                val selectedSiteOptions =     
                  MarkerOptions()
                    .position(safeFlaggedSiteMarkerLatLng)
                    .anchor(0.5f,0.5f)
                    .visible(flaggedMarkerState)
                    .flat(true)
                    .zIndex(10f)
                    .title(setTicketNumber(ticketNumber))
                    .snippet(appointmentName?:"Name is missing")
                    .icon(vSelectedSiteIcon)

              selectedSiteMarker = withContext(Dispatchers.Main){
                googleMap.addMarker(selectedSiteOptions)?.also{
                  it.tag = siteId
                }
              }
              ... //Do some more adding

            } //End mutex
            ...
          }//End dispatchers default
          ...
        }//End lifecycleScope.launchWhenStarted
        ...
      }?:let{//End apply
        ...//Cleanup if no data present
        lifecycleScope.launchWhenStarted{ //Should harvest the Job and cancel
                                          //the coroutine above if this runs
                                          //before it has finished, if necessary
          mapMutex.withLock{
            //Cleanup markers         
          }
        } 
      }
      ...
    }//End observer 1


    //observer 2
    liveDataMapListFromFiltered2?.observe(
      viewLifecycleOwner
    ){mapDetailList ->

      //Should check if job below is running and cancel gracefully and
      //clean up data 

      ...//Do some work on mapDetailList and create new datasets
      lifecycleScope.launchWhenStarted{ //Scope start (should harvest job)
        ...
        withContext(Dispatchers.Default) //Default context
        {
           ...//Do some heavy work on list (no need for mutex)
           
        }

        mapMutex.withLock {
          withContext(Dispatchers.Main)
          {
            //Do work on googlemap. Move camera etc.
          } 

        }

        ...//Do other not map related work
        
        mapMutex.withLock {
          withContext(Dispatchers.Main)
          {
            //Do work on googlemap. Move camera etc.
          } 

        }

        ...//Do other not map related work

        mapMutex.withLock {
          withContext(Dispatchers.Main)
          {
            //Do work on googlemap. Move camera etc.
          } 
        }//end mutex
      }//end scope 
    }//end observer 2 
  }//end homeViewModel
}//end getMapAsync
     
 

In general, cancellation is a normal exception, so you can simply catch it to run cleanup actions — see the examples under closing resources.
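A minimal, self-contained sketch of that idea (the names and timings are made up for illustration): the coroutine catches CancellationException, runs its cleanup, and re-throws so the cancellation completes normally.

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val job = launch {
        try {
            repeat(1000) { i ->
                println("working $i")
                delay(100)
            }
        } catch (e: CancellationException) {
            println("cancelled, cleaning up")
            throw e // re-throw so cancellation proceeds normally
        }
    }
    delay(250)
    job.cancelAndJoin() // prints "cancelled, cleaning up" before returning
}
```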

Also, since you can still be cancelled during cleanup, for critical operations you can prevent further cancellation. Putting it all together, your work could look like this:

myMutex.lock()
try {
    // locked stuff
} finally {
    withContext(NonCancellable) {
        // clean up
        myMutex.unlock()
    }
}

I think NonCancellable is overkill when all you do is unlock, since that should be atomic, but I'm not sure. In case it matters, I looked up this pattern — apparently it is common enough that there is something more nifty:

mutex.withLock {
    // locked stuff
}

As the link says:

There is also withLock extension function that conveniently represents mutex.lock(); try { ... } finally { mutex.unlock() } pattern.
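Which means the whole lock/try/finally dance above collapses into withLock, and a cancellation that lands inside the block still releases the lock. A small sketch (hypothetical setup) that can be checked directly:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

fun main() = runBlocking {
    val mutex = Mutex()
    val job = launch {
        mutex.withLock {
            delay(10_000) // long work; cancellation lands here
        }
    }
    delay(100)
    job.cancelAndJoin()     // withLock's finally releases the lock
    println(mutex.isLocked) // false: the mutex was unlocked on cancel
}
```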

I have accepted @kabanus's answer to my question, since it led to the change that made my code work.

The concept (as far as I understand it) is that when the enclosing coroutine Job is cancelled, the mutex unlocks automatically, provided it is used in the mutex.withLock { ... } form.

Conceptually it looks like this:

class HomeFragment : Fragment(){
  //...
  val commonMutex = Mutex()
  //...
  override fun onViewCreated(view:View, savedInstanceState:Bundle?){
    super.onViewCreated(view, savedInstanceState)
    //...
    var job1: Job? = null
    var job2: Job? = null
    var job3: Job? = null
    //...
    binding.fragmentHomeMyView?.getMyAsyncView{ 
      //could be any view with async work like i.e async GoogleMaps 
      binding.apply{ //I like to let bindings embrace if they exists.
        //...
        homeViewModel.apply{ //like to let homeViewModel embrace if they exists
          //...
          liveDataSet1?.observe( //LiveData set 1 observer
            viewLifecycleOwner
          ){dataSetResult1->
            //Will check if my lengthy coroutine job 1 is still running
            //If it is -> cancel it, since the observer provides new dataset
            //Note ! If your dataset is meant to be mutable, you should do a 
            //dataset copy after the cancellation so it doesn't overrun it on next
            //observer update
            //...
            if(job1?.isActive == true){ 
              job1?.cancel()
            } 
            //...
            dataSetResult1?.apply{ //like to let the dataSet embrace if there are 
                                   //many members
              job1 = lifecycleScope.launchWhenStarted{
                //I used lifecycleScope here, you can use other coroutine "bases"
                withContext(Dispatchers.Default){
                  //Doing heavy work which doesn't imply a mutex situation
                  
                  commonMutex.withLock{ //Locking mutex section 
                    //Work on data which should be shared between two or more 
                    //coroutines 
                    withContext(Dispatchers.Main){
                      //Do screenupdates if necessary
                    } //end context main
                
                  } //end commonMutex.withLock

                }//end context default
                        
              }//end coroutine Job (lifecycleScope)
            } //end dataSetResult1 apply
          } //end dataSetResult1 observer 

          liveDataSet2?.observe( //LiveData set 2 observer
            viewLifecycleOwner
          ){dataSetResult2->
            //Will check if my lengthy coroutine job 2 is still running
            //If it is -> cancel it, since the observer provides new dataset
            //Note ! If your dataset is meant to be mutable, you should do a 
            //dataset copy after the cancellation so it doesn't overrun it on next
            //observer update
            //...
            if(job2?.isActive == true){ 
              job2?.cancel()
            } 
            //...
            dataSetResult2?.apply{ //like to let the dataSet embrace if there are 
                                   //many members
              job2 = lifecycleScope.launchWhenStarted{
                //I used lifecycleScope here, you can use other coroutine "bases"
                withContext(Dispatchers.Default){
                  //Doing heavy work which doesn't imply a mutex situation
                  
                  commonMutex.withLock{ //Locking mutex section 
                    //Work on data which should be shared between two or more 
                    //coroutines 
                    withContext(Dispatchers.Main){
                      //Do screenupdates if necessary
                    } //end context main
                
                  } //end commonMutex.withLock

                }//end context default
                        
              }//end coroutine Job (lifecycleScope)
            } //end dataSetResult2 apply
          } //end dataSetResult2 observer 

          liveDataSet3?.observe( //LiveData set 3 observer
            viewLifecycleOwner
          ){dataSetResult3->
            //Will check if my lengthy coroutine job 3 is still running
            //If it is -> cancel it, since the observer provides new dataset
            //Note ! If your dataset is meant to be mutable, you should do a 
            //dataset copy after the cancellation so it doesn't overrun it on next
            //observer update
            //...
            if(job3?.isActive == true){ 
              job3?.cancel()
            } 
            //...
            dataSetResult3?.apply{ //like to let the dataSet embrace if there are 
                                   //many members
              job3 = lifecycleScope.launchWhenStarted{
                //I used lifecycleScope here, you can use other coroutine "bases"
                withContext(Dispatchers.Default){
                  //Doing heavy work which doesn't imply a mutex situation
                  
                  commonMutex.withLock{ //Locking mutex section 
                    //Work on data which should be shared between two or more 
                    //coroutines 
                    withContext(Dispatchers.Main){
                      //Do screenupdates if necessary
                    } //end context main
                
                  } //end commonMutex.withLock

                }//end context default
                        
              }//end coroutine Job (lifecycleScope)
            } //end dataSetResult3 apply
          } //end dataSetResult3 observer 
        } //end homeViewModel.apply
      } //end binding.apply
    } //end asyncView
  } //end onViewCreated
}
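One caveat with the pattern above (my own reading, not from the accepted answer): job?.cancel() only requests cancellation and returns immediately, so the old coroutine may still be inside withLock when the new one launches — the mutex then simply makes the new coroutine wait until the old one has actually released the lock. If the cleanup must be strictly ordered before the new work starts, cancelAndJoin can be used from inside the new coroutine. A self-contained sketch (all names hypothetical):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

val commonMutex = Mutex()
var job: Job? = null

// Restart pattern: wait for the previous job to finish cancelling
// before taking the lock for the new work.
fun CoroutineScope.restartWork(id: Int) {
    val previous = job
    job = launch {
        previous?.cancelAndJoin() // suspends until the old job is really done
        commonMutex.withLock {
            println("job $id owns the mutex")
            delay(500)
        }
    }
}

fun main() = runBlocking {
    restartWork(1)
    delay(100)
    restartWork(2) // cancels job 1 and waits before locking
    job?.join()
}
```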