Go waitgroup with channel (worker)

I am trying to create a simple worker pool in Go. After adding a wait group to the program below, I run into a deadlock. What is the root cause?

When I don't use the wait group, the program seems to work fine.

fatal error: all goroutines are asleep - deadlock!

goroutine 1 [semacquire]:
sync.runtime_Semacquire(0xc0001b2ea8)

Program:

package main

import (
    "fmt"
    "strconv"
    "sync"
)

func main() {
    workerSize := 2
    ProcessData(workerSize)
}

// ProcessData :
func ProcessData(worker int) {

    // Create jobs channel for passing jobs to workers
    JobChan := make(chan string)

    //Produce the jobs
    var jobsArr []string
    for i := 1; i <= 10000; i++ {
        jobsArr = append(jobsArr, "Test "+strconv.Itoa(i))
    }

    //Assign jobs to workers from the jobs pool
    var wg sync.WaitGroup
    for w := 1; w <= worker; w++ {
        wg.Add(1)
        // Consumer
        go func(jw int, wg1 *sync.WaitGroup) {
            defer wg1.Done()
            for job := range JobChan {
                actualProcess(job, jw)
            }
        }(w, &wg)
    }

    // Send jobs to the workers
    for _, job := range jobsArr {
        JobChan <- job
    }

    wg.Wait()
    //close(JobChan)
}

func actualProcess(job string, worker int) {
    fmt.Println("WorkerID: #", worker, ", Job Value: ", job)
}

Once all jobs have been consumed, your workers wait for more data in for job := range JobChan. That loop does not end until the channel is closed.

Meanwhile, your main goroutine is blocked on wg.Wait() and never reaches the (commented-out) close.

At this point every goroutine is stuck, either waiting for data or waiting for the wait group to finish.

The simplest fix is to call close(JobChan) right after all jobs have been sent to the channel:

    // Send jobs to the workers
    for _, job := range jobsArr {
        JobChan <- job
    }

    close(JobChan)
    wg.Wait()
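For completeness, here is the question's program with only that one change applied (close the channel before wg.Wait); it runs to completion without the deadlock:

```go
package main

import (
	"fmt"
	"strconv"
	"sync"
)

func main() {
	ProcessData(2)
}

// ProcessData starts the given number of workers, feeds them jobs,
// closes the channel, and then waits for the workers to finish.
func ProcessData(worker int) {
	JobChan := make(chan string)

	// Produce the jobs
	var jobsArr []string
	for i := 1; i <= 10000; i++ {
		jobsArr = append(jobsArr, "Test "+strconv.Itoa(i))
	}

	// Start the workers
	var wg sync.WaitGroup
	for w := 1; w <= worker; w++ {
		wg.Add(1)
		go func(jw int) {
			defer wg.Done()
			for job := range JobChan {
				actualProcess(job, jw)
			}
		}(w)
	}

	// Send jobs to the workers
	for _, job := range jobsArr {
		JobChan <- job
	}

	// Closing the channel ends each worker's range loop,
	// which in turn lets wg.Wait() return.
	close(JobChan)
	wg.Wait()
}

func actualProcess(job string, worker int) {
	fmt.Println("WorkerID: #", worker, ", Job Value: ", job)
}
```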
    

This is a slightly modified but more advanced version of your implementation. I have commented the code thoroughly to make it easy to follow. You can now configure the number of jobs and the number of workers, and even see how the work is distributed among the workers so that the average workload is nearly equal.

package main

import (
    "fmt"
)

func main() {
    var jobsCount = 10000 // Number of jobs
    var workerCount = 2   // Number of workers
    processData(workerCount, jobsCount)
}

func processData(workers, numJobs int) {
    var jobsArr = make([]string, 0, numJobs)
    // Fill jobsArr with numJobs jobs
    for i := 0; i < numJobs; i++ {
        jobsArr = append(jobsArr, fmt.Sprintf("Test %d", i+1))
    }
    }
    var jobChan = make(chan string, 1)
    defer close(jobChan)
    var (
        // Length of jobsArr
        length = len(jobsArr)
        // Calculate average chunk size
        chunks = len(jobsArr) / workers
        // Window Start Index
        wStart = 0
        // Window End Index
        wEnd = chunks
    )
    // Split the jobs between workers. Every worker gets a chunk of jobsArr
    // to work on. The distribution is approximately equal, since the last
    // worker may get slightly less or more work than the others.
    for i := 1; i <= workers; i++ {
        // Spawn a goroutine for every worker for chunk i.e., jobArr[wStart:wEnd]
        go func(wrk, s, e int) {
            for j := s; j < e; j++ {
                // Do some actual work. Send the actualProcess's return value to
                // jobChan
                jobChan <- actualProcess(wrk, jobsArr[j])
            }
        }(i, wStart, wEnd)
        // Advance the window to the next chunk
        wStart = wEnd
        wEnd += chunks
        if i == workers-1 {
            // The next (and last) worker takes the remainder,
            // so the whole slice is covered
            wEnd = length
        }
        }
    }
    for i := 0; i < numJobs; i++ {
        // Receive all jobs
        fmt.Println(<-jobChan)
    }
}

func actualProcess(worker int, job string) string {
    return fmt.Sprintf("WorkerID: #%d, Job Value: %s", worker, job)
}
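If you would rather keep the WaitGroup from the question together with this chunked approach, one common pattern (a sketch, not part of the answer above) is to close the channel from a small closer goroutine once every worker is done, so the receiver can simply range over the channel instead of counting results:

```go
package main

import (
	"fmt"
	"sync"
)

// processData distributes chunks of jobsArr to workers as above, but uses a
// sync.WaitGroup plus a closer goroutine so jobChan can be closed safely
// once every worker has finished sending.
func processData(workers, numJobs int) {
	jobsArr := make([]string, 0, numJobs)
	for i := 0; i < numJobs; i++ {
		jobsArr = append(jobsArr, fmt.Sprintf("Test %d", i+1))
	}

	jobChan := make(chan string, 1)
	var wg sync.WaitGroup
	chunks := len(jobsArr) / workers
	wStart, wEnd := 0, chunks
	for i := 1; i <= workers; i++ {
		wg.Add(1)
		go func(wrk, s, e int) {
			defer wg.Done()
			for j := s; j < e; j++ {
				jobChan <- actualProcess(wrk, jobsArr[j])
			}
		}(i, wStart, wEnd)
		wStart = wEnd
		wEnd += chunks
		if i == workers-1 {
			// The last worker takes the remainder
			wEnd = len(jobsArr)
		}
	}

	// Close the channel only after all workers have finished sending,
	// so the range loop below terminates.
	go func() {
		wg.Wait()
		close(jobChan)
	}()

	for result := range jobChan {
		fmt.Println(result)
	}
}

func actualProcess(worker int, job string) string {
	return fmt.Sprintf("WorkerID: #%d, Job Value: %s", worker, job)
}

func main() {
	processData(2, 100)
}
```

The closer goroutine is the key design choice: close must happen on the sender side, after all sends, which is exactly what wg.Wait() guarantees here.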