I have some data consisting of a class column (X) and some binary columns (Y). I want to equalise the class sizes by oversampling the smaller class. For example, if I start with:
Df_01 = pd.DataFrame({'X' : [1,1,1,1,1,1,1,2,2],
                      'Y1': [1,1,1,1,1,0,0,0,1],
                      'Y2': [0,0,0,0,0,1,0,0,0]})
then I want to end up with:
Df_02 = pd.DataFrame({'X' : [1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2],
                      'Y1': [1,1,1,1,1,0,0,0,1,0,1,0,1,0,1,0],
                      'Y2': [0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0]})
I tried to do this as follows:
# Split the data by class
Ma_01 = Df_01.groupby('X')
Di_01 = {}
for name, group in Ma_01:
    Di_01[str(name)] = group

# Size of each class
Se_01 = Df_01.groupby('X').size()

# Size of the biggest class
In_Bi = max(Se_01)

# How much oversampling would equalise the class sizes?
Se_Ra = In_Bi / Se_01
Di_Ra = Se_Ra.to_dict()
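With the example data above, these ratios work out as follows (a quick check, reusing the names from the snippet; note that class 2's ratio of 3.5 gets truncated to 3 by `int()` later, which is why this approach can never equalise the sizes exactly):

```python
import pandas as pd

Df_01 = pd.DataFrame({'X' : [1,1,1,1,1,1,1,2,2],
                      'Y1': [1,1,1,1,1,0,0,0,1],
                      'Y2': [0,0,0,0,0,1,0,0,0]})

Se_01 = Df_01.groupby('X').size()   # class 1: 7 rows, class 2: 2 rows
In_Bi = max(Se_01)                  # 7
Se_Ra = In_Bi / Se_01               # class 1: 1.0, class 2: 3.5
Di_Ra = Se_Ra.to_dict()
```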
But when I try:
# Copy each dataframe
Di_03 = {}
for x in Di_01:
    for y in range(int(Di_Ra[int(x)])):
        if not Di_03:
            Di_03[x] = Di_01[x]
        else:
            Di_03[x] = Di_03[x].append(Di_01[x])

# Concatenate the dictionary to a single dataframe
df_03 = pd.concat(Di_03.values(), ignore_index=True)
I get:
KeyError: '2'
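For reference, the KeyError happens because `if not Di_03:` tests whether the whole dictionary is empty; after the first class is inserted, the `else` branch runs for every other class and reads `Di_03[x]` before that key exists. A minimal sketch of the corrected loop, testing for the key instead (and using `pd.concat`, since `DataFrame.append` was removed in pandas 2.0):

```python
import pandas as pd

Df_01 = pd.DataFrame({'X' : [1,1,1,1,1,1,1,2,2],
                      'Y1': [1,1,1,1,1,0,0,0,1],
                      'Y2': [0,0,0,0,0,1,0,0,0]})

Di_01 = {str(name): group for name, group in Df_01.groupby('X')}
Se_01 = Df_01.groupby('X').size()
Di_Ra = (max(Se_01) / Se_01).to_dict()

Di_03 = {}
for x in Di_01:
    for y in range(int(Di_Ra[int(x)])):
        if x not in Di_03:                           # test for this key, not the whole dict
            Di_03[x] = Di_01[x]
        else:
            Di_03[x] = pd.concat([Di_03[x], Di_01[x]])

df_03 = pd.concat(Di_03.values(), ignore_index=True)
# class 1: 7 rows, class 2: 2 rows repeated int(3.5) = 3 times = 6 rows
```

Even with the bug fixed, the sizes only match exactly when the ratio is an integer, which is one reason the sampling approach below is preferable.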
Thanks to Matthew Strawbridge for finding the duplicate! Ayhan's answer to the original question works on my data:
max_size = Df_01['X'].value_counts().max()
lst = [Df_01]
for class_index, group in Df_01.groupby('X'):
    lst.append(group.sample(max_size - len(group), replace=True))
Df_03 = pd.concat(lst)
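Spelled out end to end: for each class the answer draws `max_size - len(group)` extra rows with replacement (`replace=True` is what allows a small class to be upsampled past its own size) and concatenates them onto the original frame, which equalises the counts exactly:

```python
import pandas as pd

Df_01 = pd.DataFrame({'X' : [1,1,1,1,1,1,1,2,2],
                      'Y1': [1,1,1,1,1,0,0,0,1],
                      'Y2': [0,0,0,0,0,1,0,0,0]})

max_size = Df_01['X'].value_counts().max()           # 7 (size of class 1)

lst = [Df_01]
for class_index, group in Df_01.groupby('X'):
    # class 1 contributes 0 extra rows, class 2 contributes 5
    lst.append(group.sample(max_size - len(group), replace=True))

Df_03 = pd.concat(lst, ignore_index=True)
# Df_03['X'].value_counts() -> 7 rows for each class
```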