Take the average of each index in a double[] hash map and assign it to an output double[]
I want to implement the averaged perceptron algorithm, per this description (page 48, complete pseudocode).
I think I'm pretty close, but I'm having trouble with the final step, where I need to compute, for each index, the average of the weights calculated during every iteration, and then assign that value to the final weights array. How would I implement that?
The structure of the hash map is an int key, which is the iteration number, mapped to a double[] containing the weights from that iteration. So I think the output should be something like:
For all the hashmap keys
for the length of the hashmap value at this key index
...something
So if the first weight of each iteration is 2, 4, and 3, I want to assign a weight of 3 to that index of the final array, and so on for all indices.
The relevant code is below. The full code is here on my GitHub in case it's needed.
//store weights to be averaged.
Map<Integer, double[]> cached_weights = new HashMap<Integer, double[]>();

final int globoDictSize = globoDict.size(); // number of features

// weights total 32 (31 for input variables and one for bias)
double[] weights = new double[globoDictSize + 1];
for (int i = 0; i < weights.length; i++)
{
    //weights[i] = Math.floor(Math.random() * 10000) / 10000;
    //weights[i] = randomNumber(0,1);
    weights[i] = 0.0;
}

int inputSize = trainingPerceptronInput.size();
double[] outputs = new double[inputSize];

final double[][] a = Prcptrn_InitOutpt.initializeOutput(trainingPerceptronInput, globoDictSize, outputs, LABEL);

double globalError;
int iteration = 0;
do
{
    iteration++;
    globalError = 0;

    // loop through all instances (complete one epoch)
    for (int p = 0; p < inputSize; p++)
    {
        // calculate predicted class
        double output = Prcptrn_CalcOutpt.calculateOutput(THETA, weights, a, p);

        // difference between predicted and actual class values
        // always either zero or one
        double localError = outputs[p] - output;

        int i;
        for (i = 0; i < a.length; i++)
        {
            weights[i] += LEARNING_RATE * localError * a[i][p];
        }
        weights[i] += LEARNING_RATE * localError;

        // summation of squared error (error value for all instances)
        globalError += localError * localError;
    }
This is the part I mentioned above:
    //calc averages
    for (Entry<Integer, double[]> entry : cached_weights.entrySet())
    {
        int key = entry.getKey();
        double[] value = entry.getValue();
        // ...
    }

    /* Root Mean Squared Error */
    //System.out.println("Iteration " + iteration + " : RMSE = " + Math.sqrt(globalError / inputSize));
}
while (globalError != 0 && iteration <= MAX_ITER);
//calc averages
Iterator it = cached_weights.entrySet().iterator();
while( it.hasNext() )
{
Map.Entry pair = (Map.Entry)it.next();
System.out.println(pair.getKey() + " = " + pair.getValue());
it.remove(); // avoids a ConcurrentModificationException
}
I imagine something like this would work:
//calc averages
for (Entry<Integer, double[]> entry : cached_weights.entrySet())
{
    int key = entry.getKey();
    double[] value = entry.getValue();

    AVERAGED_WEIGHTS[ key - 1 ] += value[ key - 1 ];
}
However, I guess at the end there has to be some term that divides by the number of iterations. Something like: if the key is the last key, there is no larger iteration, and if that's the case, then divide by it, something like that.
Maybe this?
//calc averages
for (Entry<Integer, double[]> entry : cached_weights.entrySet())
{
    int key = entry.getKey();
    double[] value = entry.getValue();

    AVERAGED_WEIGHTS[ key - 1 ] += value[ key - 1 ];

    if (key == iteration)
    {
        AVERAGED_WEIGHTS[ key - 1 ] /= key;
    }
}
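For reference, here is a minimal, self-contained sketch of the averaging step I have in mind, assuming `cached_weights` maps each iteration number to the full weight vector recorded at that iteration, and that every stored vector has the same length (the class and method names here are just placeholders):

```java
import java.util.HashMap;
import java.util.Map;

public class AveragedWeights
{
    // Average the weight vectors index-by-index across all iterations.
    static double[] average(Map<Integer, double[]> cachedWeights)
    {
        int iterations = cachedWeights.size();
        int length = cachedWeights.values().iterator().next().length;
        double[] averaged = new double[length];

        // Sum every index of every iteration's vector...
        for (double[] value : cachedWeights.values())
        {
            for (int i = 0; i < length; i++)
            {
                averaged[i] += value[i];
            }
        }
        // ...then divide each index once, at the end, by the iteration count.
        for (int i = 0; i < length; i++)
        {
            averaged[i] /= iterations;
        }
        return averaged;
    }

    public static void main(String[] args)
    {
        Map<Integer, double[]> cached = new HashMap<Integer, double[]>();
        cached.put(1, new double[] { 2.0, 1.0 });
        cached.put(2, new double[] { 4.0, 1.0 });
        cached.put(3, new double[] { 3.0, 4.0 });

        double[] result = AveragedWeights.average(cached);
        System.out.println(result[0] + " " + result[1]); // first index: (2 + 4 + 3) / 3 = 3.0
    }
}
```

The key difference from my attempts above is that `value[key - 1]` only touches one index per map entry; the averaging has to loop over every index of each stored vector, and the division by the iteration count happens once per index after all the sums are done.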