Function returns value in hex

I made a function to double every other digit of a number, but for some reason it converts `in` to hex before returning it. I checked, and the hex value does correspond to the actual number, but how can I get it to stop returning hex? Here is the code.

unsigned long long* Luhn_Algorigthem::double_every_other_value(unsigned long long int in) {
    unsigned long long int* out = new unsigned long long;
    int counter = 10;
    
    for (int i = 0; i < std::to_string(in).length(); i++) {
        if (i % 2 == 0) { //Is even
            counter * 10;
            out += (unsigned long long)((in % counter) * 2);
        }
        else { //Is odd
            out += (unsigned long long)(std::to_string(in).at(i));
        }
    }

    doubled_val = (unsigned long long)out;
    return (unsigned long long*)33;
    delete out;
}

unsigned and long are type modifiers (like adjectives without a noun). Have you tried unsigned long int for the explicit type conversion?

@TheUndedFish helped me figure this out. I ended up just getting rid of the pointer and using the doubled_val member variable instead of allocating memory on the heap. Here is the updated code:

unsigned long long int Luhn_Algorigthem::double_every_other_value(unsigned long long int in) {
    //unsigned long long int* out = new unsigned long long;
    int counter = 10;
    
    for (int i = 0; i < std::to_string(in).length(); i++) {
        if (i % 2 == 0) { //Is even
            counter * 10;
            doubled_val += (in % counter) * 2;
        }
        else { //Is odd
            doubled_val += std::to_string(in).at(i);
        }
    }
    //delete out;
    return doubled_val;
}