Why does IPAddress.Parse("192.168.001.001") work while IPAddress.Parse("192.168.001.009") doesn't?

I've been trying to parse IP addresses from an API result, where each of the four parts of the IPv4 address is zero-padded. Like this:

127.000.000.001 instead of 127.0.0.1

I started getting parse errors when trying to parse 192.168.001.009. It also fails on 192.168.001.008, but works for 007, 006, 005 down through 001!!!

It also fails on 192.168.001.018, but works for .017, .016 down through .010!

It works with 192.168.001.8 or .9, and also with 192.168.001.18 and .19...

Is this a bug in the CLR, or am I missing something silly?

Try it:

IPAddress.Parse("192.168.001.007"); // works
IPAddress.Parse("192.168.001.87"); // works
IPAddress.Parse("192.168.001.008"); // throws exception
IPAddress.Parse("192.168.001.19"); // works
IPAddress.Parse("192.168.001.019");  // throws exception
// and so on!

Numbers that start with 0 are interpreted as octal rather than decimal. These aren't C# literals, so it's up to the library to interpret them one way or the other.

An easy way to test this is to build an IP ending in ".010", parse it, and watch it come back as an IP ending in ".8".
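
A minimal sketch of that test, assuming a runtime whose parser still follows the inet_aton-style octal rules (the classic .NET Framework does; newer .NET releases have tightened IPv4 parsing, so the result may differ there):

using System;
using System.Net;

class OctalParseDemo
{
    static void Main()
    {
        // "010" has a leading zero, so an inet_aton-style parser reads it
        // as octal: 010 (base 8) == 8 (base 10).
        IPAddress parsed = IPAddress.Parse("192.168.1.010");
        Console.WriteLine(parsed); // prints 192.168.1.8 under octal parsing
    }
}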

A possible quick-and-dirty fix is to search for the regex /\.0*/ and replace it with "." (take care that an all-zero octet like .000 keeps at least one digit, or you'll produce an empty octet; see the sketch below).
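
For instance, a sketch in C# that keeps one digit per octet (the \b0+(?=\d) pattern is my variation on the idea above, not a tested drop-in):

using System;
using System.Net;
using System.Text.RegularExpressions;

class StripLeadingZeros
{
    static void Main()
    {
        string raw = "192.168.001.009";
        // Remove leading zeros in each octet; the (?=\d) lookahead ensures
        // one digit always survives, so "000" shrinks to "0", not "".
        string cleaned = Regex.Replace(raw, @"\b0+(?=\d)", "");
        Console.WriteLine(cleaned);                  // 192.168.1.9
        Console.WriteLine(IPAddress.Parse(cleaned)); // now parses as decimal
    }
}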

You can find more information in the Wikipedia entry on dot-decimal notation:

A popular implementation of IP networking, originating in 4.2BSD, contains a function inet_aton() for converting IP addresses in character strings representation to internal binary storage. In addition to the basic four-decimals format and full 32-bit addresses, it also supported intermediate syntaxes of octet.24bits (e.g. 10.1234567; for Class A addresses) and octet.octet.16bits (e.g. 172.16.12345; for Class B addresses). It also allowed the numbers to be written in hexadecimal and octal, by prefixing them with 0x and 0, respectively. These features continue to be supported by software until today, even though they are seen as non-standard. But this also means addresses where an IP address component is written with a leading zero digit may be interpreted differently by different programs: some will ignore the leading zero, some will interpret the number as octal.
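
If your runtime inherits those inet_aton semantics (again, the classic .NET Framework parser did; I wouldn't count on it elsewhere), the intermediate forms from the quote can be tried directly. The expected values in the comments are what inet_aton itself would produce:

using System;
using System.Net;

class InetAtonForms
{
    static void Main()
    {
        // octet.24bits form: 1234567 == 0x12D687, so inet_aton reads this
        // as 10.18.214.135.
        Console.WriteLine(IPAddress.Parse("10.1234567"));
        // Full 32-bit form: 3232235785 == 0xC0A80109 == 192.168.1.9.
        Console.WriteLine(IPAddress.Parse("3232235785"));
    }
}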

That's probably because numbers like 00X0XY are treated as octal, which only allows the digits 0 through 7. The digits 8 and 9 are invalid.
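
To see that rejection directly, a small check (only the fact that a FormatException is thrown is asserted; the message text varies by runtime):

using System;
using System.Net;

class OctalDigitCheck
{
    static void Main()
    {
        try
        {
            // The leading zero switches the octet to octal, and 8 is not
            // a valid octal digit, so parsing fails.
            IPAddress.Parse("192.168.001.008");
        }
        catch (FormatException)
        {
            Console.WriteLine("008 rejected: 8 is not an octal digit");
        }
    }
}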