Generate numpy array with duplicate rate
Here is my problem: I have to generate some synthetic, mutually correlated data (7/8 columns, measured with the Pearson coefficient). I can do that easily, but then I have to insert a certain percentage of duplicates into each column (yes, the Pearson coefficient will drop), a different percentage for each column.
The problem is that I don't want to insert the duplicates myself, because in my case that would feel like cheating.
Does anyone know how to generate correlated data that already contains duplicates? I have searched, but the questions I find are usually about removing or avoiding duplicates.
Language: python3
To generate the correlated data I used this simple code: Generating correlated data
Try something like this:
indices = np.random.randint(0, array.shape[0], size=int(np.ceil(percentage * array.shape[0])))
array = np.vstack((array, array[indices]))
Here I assume your data is stored in array, an ndarray in which each row holds your 7/8 columns of data.
The code draws a random array of row indices and appends the selected rows to the array again. Note that a NumPy ndarray has no append method, so the selected rows are stacked back on with np.vstack instead.
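As a minimal, self-contained sketch of this approach (the toy array and the 20% duplication rate are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: 1000 rows, 8 columns (stands in for the correlated data)
array = rng.random((1000, 8))
percentage = 0.20  # fraction of rows to duplicate (illustrative value)

# draw random row indices, then stack the selected rows back onto the array
indices = rng.integers(0, array.shape[0],
                       size=int(np.ceil(percentage * array.shape[0])))
array = np.vstack((array, array[indices]))

print(array.shape)  # (1200, 8)
```

Because the new rows are exact copies of existing ones, every appended row is a duplicate by construction, so the overall duplicate rate is at least `percentage`.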
I found a solution.
I am posting the code, since it may help someone.
import numpy as np
import pandas as pd
from statsmodels.stats.correlation_tools import cov_nearest
from scipy.linalg import cholesky

# the raw data, generated randomly with a given shape
rnd = np.random.random(size=(10**7, 8))
# each attr array is one column of the covariance matrix; I want correlated
# data, so each entry is drawn at random between 0.8 and 0.95
attr1 = np.random.uniform(0.8, 0.95, size=(8, 1))
# attr2 .. attr8 are built the same way as attr1, each over its own range
# of values (all above 0.7)
# corr_mat is the matrix obtained by joining the columns
corr_mat = np.column_stack((attr1, attr2, attr3, attr4, attr5, attr6, attr7, attr8))
# cov_nearest finds the covariance matrix nearest to corr_mat,
# which guarantees the result is positive (semi)definite
a = cov_nearest(corr_mat)
# scipy.linalg.cholesky returns the upper-triangular factor by default
upper_chol = cholesky(a)
# finally, compute the product of rnd and upper_chol
ans = rnd @ upper_chol
# ans now holds randomly generated correlated data
# (high correlation, but it is customizable)
# next, build a pandas DataFrame from the values of ans
df = pd.DataFrame(ans, columns=['att1', 'att2', 'att3', 'att4',
                                'att5', 'att6', 'att7', 'att8'])
# last step: round the float values of ans to a variable number of decimals,
# which produces duplicates in a varying percentage per column
a = df.values
for i in range(8):
    trunc = np.random.randint(5, 12)  # random int between 5 and 11 inclusive
    print(trunc)
    a.T[i] = a.T[i].round(decimals=trunc)
# the float values of ans carry roughly 16 significant decimal digits, so
# rounding each column to between 5 and 11 decimals collapses some values
# into duplicates
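To check that this pipeline really yields highly correlated columns, one can inspect the resulting correlation matrix. Below is a reduced sketch (fewer rows and columns for speed); the hand-made positive definite matrix with 0.9 off-diagonal entries is an assumption standing in for the output of cov_nearest:

```python
import numpy as np

rng = np.random.default_rng(42)
rnd = rng.random((100_000, 3))  # fewer rows/columns than the original, for speed

# a hand-made positive definite matrix standing in for cov_nearest's output
cov = np.array([[1.0, 0.9, 0.9],
                [0.9, 1.0, 0.9],
                [0.9, 0.9, 1.0]])

# np.linalg.cholesky returns the lower factor L with L @ L.T == cov,
# so its transpose plays the role of upper_chol above
upper_chol = np.linalg.cholesky(cov).T
ans = rnd @ upper_chol

# off-diagonal Pearson coefficients should be close to 0.9
corr = np.corrcoef(ans, rowvar=False)
print(corr.round(2))
```

Since the columns of rnd are independent with equal variance, the covariance of ans is proportional to cov itself, so the sample correlations should land near the target 0.9.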
Finally, these are the duplicate percentages per column:
duplicate rate attribute: att1 = 5.159390000000002
duplicate rate attribute: att2 = 11.852260000000001
duplicate rate attribute: att3 = 12.036079999999998
duplicate rate attribute: att4 = 35.10611
duplicate rate attribute: att5 = 4.6471599999999995
duplicate rate attribute: att6 = 35.46553
duplicate rate attribute: att7 = 0.49115000000000464
duplicate rate attribute: att8 = 37.33252
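The post does not show how these percentages were measured. One plausible way (a sketch, using a small stand-in DataFrame rather than the df built above) is to count, per column, the rows whose value already occurred earlier in that column:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# a small stand-in for df: rounding att1 to 1 decimal produces many duplicates,
# while the unrounded att2 has essentially none
df = pd.DataFrame({'att1': rng.random(10_000).round(1),
                   'att2': rng.random(10_000)})

for col in df.columns:
    # Series.duplicated() marks every occurrence after the first one
    rate = df[col].duplicated().mean() * 100
    print(f'duplicate rate attribute: {col} = {rate}')
```

Here att1 can only take 11 distinct values after rounding, so almost every row is a duplicate, whereas collisions among 10,000 raw random doubles are practically impossible.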