
I have a pyspark.sql.Row that looks something like this:

from pyspark.sql import Row

my_row = Row(id=1,
             value=[Row(id=1, value="value1"), Row(id=2, value="value2")])

I'd like to get the value from each of the nested rows using something like:

[x.value for x in my_row.value]

The problem is that when I iterate, the entire row gets converted into plain tuples,

my_row = (1, [(1, "value1"), (2, "value2")])

and I lose the schema. Is there a way to iterate and retain the schema for the list of rows?
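As a self-contained illustration of the structure, here is a sketch using collections.namedtuple as a stand-in for pyspark.sql.Row (an assumption for portability; Row behaves much like a named tuple, so field access survives as long as the nested objects are still Rows rather than plain tuples):

```python
from collections import namedtuple

# Stand-in for pyspark.sql.Row (assumption: Row acts like a named tuple)
Row = namedtuple("Row", ["id", "value"])

my_row = Row(id=1,
             value=[Row(id=1, value="value1"), Row(id=2, value="value2")])

# While the nested items are still Row objects, field access works:
inner_values = [x.value for x in my_row.value]
print(inner_values)  # ['value1', 'value2']

# Converting to plain tuples drops the field names (the "schema"):
as_tuples = tuple(tuple(x) for x in my_row.value)
print(as_tuples)  # ((1, 'value1'), (2, 'value2'))
```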


1 Answer