C++ Universal Character Set to code and vice versa

I'm trying to create a simple C++ string encrypter/decrypter.

The algorithm iterates over every character in the input, takes its Unicode value, then iterates over every character in the key and adds each key character's Unicode value to it.

To decrypt, it subtracts each key character's Unicode value instead.
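
For example, with the key "ab" (97 + 98 = 195), the character 'T' (code 84) is encrypted to 84 + 195 = 279, and decryption subtracts 195 to get back to 84.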

Here is the relevant code:

std::string encrypt(std::string Input, std::string Key) {
    std::string result = "";
    int text_size = Input.size();
    int key_size = Key.size();

    for (int t_i = 0; t_i < text_size; t_i++) {
        int result_char = (int)Input.at(t_i);
        for (int k_i = 0; k_i < key_size; k_i++) {
            result_char += (int)Key.at(k_i);
        }
        result += char(result_char);
    }
    return result;
}

std::string decrypt(std::string Input, std::string Key) {
    std::string result = "";
    int text_size = Input.size();
    int key_size = Key.size();

    for (int t_i = 0; t_i < text_size; t_i++) {
        int result_char = (int)Input.at(t_i);
        for (int k_i = 0; k_i < key_size; k_i++) {
            result_char -= (int)Key.at(k_i);
        }
        result += char(result_char);
    }
    return result;
}
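
For reference, a minimal usage sketch of the two functions above (the key "secret" is just a placeholder, and the round trip relies on char wrapping modulo 256, as it does on typical platforms):

#include <iostream>
#include <string>

int main() {
    std::string key = "secret";              // placeholder key
    std::string plain = "This is a test";    // plain ASCII input
    std::string enc = encrypt(plain, key);   // every character shifted up by the key sum
    std::string dec = decrypt(enc, key);     // every character shifted back down
    std::cout << dec << "\n";                // prints the original text
}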

But when I use characters outside the ASCII range, such as an emoji, it breaks completely:

String to encrypt:  This is an test, ÆÆÆ
String encrypted:  ����    � ǻ^!^!^!
String decrypted:  T$"hisT$"isT$"T$"nT$"test, ÆÆÆ

Please note: it's acceptable that the encrypted string appears completely broken, since it's encrypted and the terminal isn't configured to print those characters. But the decrypted string needs to be identical to the original string.
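
For context, here is a minimal check (assuming the std::string holds UTF-8 text, which is how most toolchains store non-ASCII characters) showing that a single character like Æ already occupies two bytes, so the loops above shift individual UTF-8 bytes rather than whole characters, and the shifted bytes may no longer form a valid sequence:

#include <iostream>
#include <string>

int main() {
    std::string s = "\xC3\x86";           // "Æ" (U+00C6) written out as its two UTF-8 bytes
    std::cout << s.size() << "\n";        // prints 2: one character, two bytes
    for (unsigned char c : s) {
        std::cout << (int)c << " ";       // 195 134: these bytes get shifted separately
    }
    std::cout << "\n";
}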

I need an explanation and a real, working solution for this.

I've already searched for all kinds of solutions, but none of them worked for me.

EDIT:

The solution from @Remy Lebeau that worked for me:

std::u16string encrypt(std::u16string Input, std::u16string Key) {
    std::u16string result;
    int text_size = Input.size();
    int key_size = Key.size();

    for (int t_i = 0; t_i < text_size; t_i++) {
        int result_char = Input[t_i];
        for (int k_i = 0; k_i < key_size; k_i++) {
            result_char += Key[k_i];
        }
        result += (char16_t)result_char; // keep the full UTF-16 code unit; (unsigned char) would drop everything above 0xFF
    }
    return result;
}

std::u16string decrypt(std::u16string Input, std::u16string Key) {
    std::u16string result;
    int text_size = Input.size();
    int key_size = Key.size();

    for (int t_i = 0; t_i < text_size; t_i++) {
        int result_char = Input[t_i];
        for (int k_i = 0; k_i < key_size; k_i++) {
            result_char -= Key[k_i];
        }
        result += (char16_t)result_char; // cast back to char16_t; this wraps modulo 2^16 and recovers the original code unit
    }
    return result;
}

Result:

String to encrypt:  This is an test, ÆÆÆ
String encrypted:  ï»»ü    » Ç»aaa
String decrypted:  This is an test, ÆÆÆ

I used std::u16string to be able to work with the universal character set, and a char16_t cast to convert each computed value back into a UTF-16 code unit.
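
In case it helps, here is a minimal sketch of how an ordinary UTF-8 std::string can be converted to and from std::u16string so the u16string encrypt/decrypt above can be called on normal input (assuming a UTF-8 source and terminal; std::wstring_convert and std::codecvt_utf8_utf16 are deprecated since C++17 but still available in common standard libraries, and the key "secret" is just a placeholder):

#include <codecvt>
#include <iostream>
#include <locale>
#include <string>

int main() {
    // UTF-8 <-> UTF-16 converter
    std::wstring_convert<std::codecvt_utf8_utf16<char16_t>, char16_t> conv;

    std::u16string input = conv.from_bytes("This is a test, ÆÆÆ");  // UTF-8 -> UTF-16
    std::u16string key = conv.from_bytes("secret");                 // placeholder key

    std::u16string encrypted = encrypt(input, key);
    std::u16string decrypted = decrypt(encrypted, key);

    std::cout << conv.to_bytes(decrypted) << "\n";   // UTF-16 -> UTF-8; prints the original string
}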

Thank you for your answer.
