How can I speed up taking the first n rows of each group repeatedly after group_by?

Problem description · Votes: 0 · Answers: 2

df has 100 million rows, and there are roughly 25-30 group_by columns. Is there any way to speed this operation up from here, or is this the best I can get?

import polars as pl
import numpy as np

rows = 100000000
n_cols = 30
# note: size=(n_cols, rows) with n_cols schema names makes polars read the
# array column-wise, giving a frame of `rows` rows and `n_cols` columns
df = pl.DataFrame(
    np.random.randint(0, 100, size=(n_cols, rows)),
    schema=[str(x) for x in range(n_cols)],
)
x_list = [1, 2, 3]

# sort by column '0', then group on all remaining columns;
# df is rebound to a GroupBy object here, not a DataFrame
df = df.sort('0').group_by([str(x) for x in range(1, n_cols)])
pl.concat([
    df.head(x).with_columns(pl.lit(x).alias('x').cast(pl.Int8))
    for x in x_list
])
Tags: python, python-polars
2 Answers

0 votes

As you say, you can take head(max(x_list)) once, and then repeat each row the appropriate number of times:

max_x = max(x_list)
(
    df.head(max_x)  # df is the GroupBy from the question: one head() pass per group
    .with_columns(
        # row i (0-indexed) of a group appears in every head(x) with x > i,
        # so it gets repeated max_x - i times
        pl.int_range(max_x, max_x - pl.len(), step=-1)
        .over([str(x) for x in range(1, n_cols)])
        .alias("x")
    )
    .with_columns(pl.exclude("x").repeat_by("x"))
    .explode(pl.exclude("x"))
)
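
To sanity-check that the two formulations produce the same rows, here is a minimal sketch on toy-sized data (the sizes, the seed-free random values, and the names single, baseline, and n are assumptions for illustration; the lit(x) tag from the question is dropped, since this approach reproduces row multiplicities rather than per-x tags):

import numpy as np
import polars as pl

rows, n_cols = 1_000, 5  # toy sizes, not the OP's 100M-row frame
df = pl.DataFrame(
    np.random.randint(0, 3, size=(rows, n_cols)),
    schema=[str(x) for x in range(n_cols)],
)
x_list = [1, 2, 3]
max_x = max(x_list)
group_cols = [str(x) for x in range(1, n_cols)]

grouped = df.sort('0').group_by(group_cols)

# baseline from the question: one head() pass per x (tag column omitted)
baseline = pl.concat([grouped.head(x) for x in x_list])

# single pass: head(max_x) once, then duplicate row i of each group max_x - i times
single = (
    grouped.head(max_x)
    .with_columns(
        pl.int_range(max_x, max_x - pl.len(), step=-1)
        .over(group_cols)
        .alias('n')
    )
    .with_columns(pl.exclude('n').repeat_by('n'))
    .explode(pl.exclude('n'))
    .drop('n')
)

# both should contain the same multiset of rows, ordering aside
assert baseline.sort(df.columns).equals(single.sort(df.columns))

The saving comes from scanning each group once instead of len(x_list) times; repeat_by plus explode then materializes the duplicates in a single pass.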

-1 votes
import polars as pl
import numpy as np

n = 50
# size=(4, n) with 4 schema names makes polars read the array column-wise:
# 4 columns of n rows
df = pl.DataFrame(
    np.random.randint(0, 100, size=(4, n)),
    schema=['A', 'B', 'C', 'D'],
)
x_list = [1, 2, 3]

grouped = df.group_by(['A', 'B', 'C'])
result = pl.concat(
    [
        grouped.head(x).with_columns(pl.lit(x).alias('x').cast(pl.Int8))
        for x in x_list
    ]
)
print(result)