
I have two RDDs with the same columns:
rdd1 :-

+---+---+---------+
|mid|uid|frequency|
+---+---+---------+
| m1| u1|        1|
| m1| u2|        1|
| m2| u1|        2|
+---+---+---------+

rdd2 :-

+---+---+---------+
|mid|uid|frequency|
+---+---+---------+
| m1| u1|       10|
| m2| u1|       98|
| m3| u2|       21|
+---+---+---------+

I want to compute the sum of the frequencies, grouped by mid and uid. The result should look like this:

+---+---+---------+
|mid|uid|frequency|
+---+---+---------+
| m1| u1|       11|
| m2| u1|      100|
| m3| u2|       21|
+---+---+---------+

Thanks in advance.

Edit: I also arrived at a solution this way (using map-reduce):

from pyspark.sql.functions import col

data1 = [("m1","u1",1),("m1","u2",1),("m2","u1",2)]
data2 = [("m1","u1",10),("m2","u1",98),("m3","u2",21)]
df1 = sqlContext.createDataFrame(data1,['mid','uid','frequency'])
df2 = sqlContext.createDataFrame(data2,['mid','uid','frequency'])

df3 = df1.unionAll(df2)
# Key each row by (mid, uid), then sum the frequencies per key.
# Note: on Spark 2.x+ DataFrame.map was removed; use df3.rdd.map(...) instead.
df4 = df3.map(lambda row: ((row['mid'], row['uid']), int(row['frequency'])))\
         .reduceByKey(lambda a, b: a + b)

# Flatten ((mid, uid), frequency) back into three columns.
p = df4.map(lambda r: (r[0][0], r[0][1], r[1])).toDF()

p = p.select(col("_1").alias("mid"),
             col("_2").alias("uid"),
             col("_3").alias("frequency"))

p.show()

Output:

+---+---+---------+
|mid|uid|frequency|
+---+---+---------+
| m2| u1|      100|
| m1| u1|       11|
| m1| u2|        1|
| m3| u2|       21|
+---+---+---------+
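The keyed aggregation that the map/reduceByKey pair above performs can be sketched in plain Python with a dictionary, using the same sample rows (no Spark required; this is only an illustration of the semantics, not the author's actual code):

```python
# Same sample data as in the Spark snippet above.
data1 = [("m1", "u1", 1), ("m1", "u2", 1), ("m2", "u1", 2)]
data2 = [("m1", "u1", 10), ("m2", "u1", 98), ("m3", "u2", 21)]

totals = {}
for mid, uid, freq in data1 + data2:          # union of the two datasets
    key = (mid, uid)                          # composite key, as in the map step
    totals[key] = totals.get(key, 0) + freq   # sum, as in reduceByKey(lambda a, b: a + b)

print(totals)
# e.g. ('m1', 'u1') maps to 11 and ('m2', 'u1') maps to 100
```

reduceByKey does the same thing, except the per-key sums are computed in parallel across partitions and merged.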

2 Answers


Just group by mid and uid and perform a sum:

data1 = [("m1","u1",1),("m1","u2",1),("m2","u1",2)]
data2 = [("m1","u1",10),("m2","u1",98),("m3","u2",21)]
df1 = sqlContext.createDataFrame(data1,['mid','uid','frequency'])
df2 = sqlContext.createDataFrame(data2,['mid','uid','frequency'])

df3 = df1.unionAll(df2)

df4 = df3.groupBy(df3.mid,df3.uid).sum() \
         .withColumnRenamed("sum(frequency)","frequency")

df4.show()

# +---+---+---------+
# |mid|uid|frequency|
# +---+---+---------+
# | m1| u1|       11|
# | m1| u2|        1|
# | m2| u1|      100|
# | m3| u2|       21|
# +---+---+---------+
answered 2016-04-16T09:01:33.750

I also arrived at a solution this way (using map-reduce):

from pyspark.sql.functions import col

data1 = [("m1","u1",1),("m1","u2",1),("m2","u1",2)]
data2 = [("m1","u1",10),("m2","u1",98),("m3","u2",21)]
df1 = sqlContext.createDataFrame(data1,['mid','uid','frequency'])
df2 = sqlContext.createDataFrame(data2,['mid','uid','frequency'])

df3 = df1.unionAll(df2)
# Key each row by (mid, uid), then sum the frequencies per key.
# Note: on Spark 2.x+ DataFrame.map was removed; use df3.rdd.map(...) instead.
df4 = df3.map(lambda row: ((row['mid'], row['uid']), int(row['frequency'])))\
         .reduceByKey(lambda a, b: a + b)

# Flatten ((mid, uid), frequency) back into three columns.
p = df4.map(lambda r: (r[0][0], r[0][1], r[1])).toDF()

p = p.select(col("_1").alias("mid"),
             col("_2").alias("uid"),
             col("_3").alias("frequency"))

p.show()

Output:

+---+---+---------+
|mid|uid|frequency|
+---+---+---------+
| m2| u1|      100|
| m1| u1|       11|
| m1| u2|        1|
| m3| u2|       21|
+---+---+---------+
answered 2016-05-12T05:32:57.307