Multiply SparseVectors element-wise

I have two RDDs, and I want to multiply them element-wise.

Suppose I have the following RDDs (example):

a = ((1,[0.28,1,0.55]),(2,[0.28,1,0.55]),(3,[0.28,1,0.55]))
aRDD = sc.parallelize(a)
b = ((1,[0.28,0,0]),(2,[0,0,0]),(3,[0,1,0]))
bRDD = sc.parallelize(b)

As you can see, b is sparse, and I want to avoid multiplying the zero values by the other vector's values. So I am doing the following:

from pyspark.mllib.linalg import Vectors

def create_sparce_matrix(a_list):
    # Build a SparseVector from a dense list, keeping only the non-zero entries
    length = len(a_list)
    index = [i for i, e in enumerate(a_list) if e != 0]
    value = [e for i, e in enumerate(a_list) if e != 0]
    return Vectors.sparse(length, index, value)


bRDD = bRDD.map(lambda kv: (kv[0], create_sparce_matrix(kv[1])))
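
For example, running the helper above on two of b's rows keeps only the non-zero entries (outputs shown as comments, just as a quick check):

create_sparce_matrix([0.28, 0, 0])   # SparseVector(3, {0: 0.28})
create_sparce_matrix([0, 0, 0])      # SparseVector(3, {})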

And the multiplication:

combinedRDD = aRDD + bRDD
result = combinedRDD.reduceByKey(lambda a, b: [c * d for c, d in zip(a, b)])

It seems that I cannot multiply a SparseVector by a list inside an RDD. Is there a way to do this, or another efficient way to do element-wise multiplication when one of the two RDDs has many zero values?

One way to handle this is to convert aRDD into an RDD[DenseVector]:

from pyspark.mllib.linalg import SparseVector, DenseVector, Vectors

aRDD = sc.parallelize(a).mapValues(DenseVector)
bRDD = sc.parallelize(b).mapValues(create_sparce_matrix)

and use basic NumPy operations:

def mul(x, y):
    # x is dense, y is sparse; only the positions where y is non-zero can contribute,
    # so multiply x at y's indices by y's values and build a new SparseVector
    assert isinstance(x, DenseVector)
    assert isinstance(y, SparseVector)
    assert x.size == y.size
    return SparseVector(y.size, y.indices, x[y.indices] * y.values)

aRDD.join(bRDD).mapValues(lambda xy: mul(*xy))
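
A quick end-to-end sketch on the sample data above, assuming the same sc, aRDD, and bRDD definitions (sorted only to make the output order deterministic; the printed values are approximate):

result = aRDD.join(bRDD).mapValues(lambda xy: mul(*xy))
sorted(result.collect())
# [(1, SparseVector(3, {0: 0.0784})),
#  (2, SparseVector(3, {})),
#  (3, SparseVector(3, {1: 1.0}))]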