
I recently came across some code like this:

uint32_t val;
...
printf("%.08X", val);

This confuses me. That is, you specify either 0 plus a width, or a precision - what's the point of both?

Width:

The width argument ... controls the minimum number of characters printed. If the number of characters in the output value is less than the specified width, blanks are added ... . If width is prefixed with 0, leading zeros are added ... .

The width specification never causes a value to be truncated. ...

Precision:

... a period (.) followed by a non-negative decimal integer that, depending on the conversion type, specifies the number of significant digits to be output.

The type determines either the interpretation of the precision or the default precision when it is omitted...

d, i, u, o, x, X - The precision specifies the minimum number of digits to be printed. If the number of digits in the argument is less than the precision, the output value is padded on the left with zeros. The value is not truncated when the number of digits exceeds the precision.

So I can see using "%08X" or "%.8X", but "%.08X" makes no sense to me.

However, there seems to be no difference; that is, all three variants appear to produce the same output.


1 Answer


You're correct:

"%08X", "%.8X" and "%.08X"

are equivalent.

As for why - refer to this:

http://www.cplusplus.com/reference/cstdio/printf/

In case 1, the format specifies the width:

Minimum number of characters to be printed. If the value to be printed is shorter than this number, the result is padded with blank spaces. The value is not truncated even if the result is larger.

Hence:

%08X will print a minimum of 8 characters

and from this reference:

For integer specifiers (d, i, o, u, x, X): precision specifies the minimum number of digits to be written. If the value to be written is shorter than this number, the result is padded with leading zeros. The value is not truncated even if the result is longer. A precision of 0 means that no character is written for the value 0.

For a, A, e, E, f and F specifiers: this is the number of digits to be printed after the decimal point (by default, this is 6).

For g and G specifiers: this is the maximum number of significant digits to be printed.

For s: this is the maximum number of characters to be printed. By default all characters are printed until the ending null character is encountered.

If the period is specified without an explicit value for precision, 0 is assumed.

%.8X uses the precision specifier. As such, it, too, will print a minimum of 8 characters (zero-padded digits rather than spaces).
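A quick check bears this out - a minimal sketch using an arbitrary value (unsigned int is used here; the question's uint32_t behaves the same wherever it is unsigned int):

#include <stdio.h>

int main(void)
{
    unsigned int val = 0xAB;     /* arbitrary example value */

    printf("[%08X]\n", val);     /* width 8, zero flag -> [000000AB] */
    printf("[%.8X]\n", val);     /* precision 8        -> [000000AB] */
    printf("[%.08X]\n", val);    /* 08 reads as 8      -> [000000AB] */

    return 0;
}

All three lines print [000000AB].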

And lastly:

%.08X will also print a minimum of 8 characters (again, because of the precision specifier). Why? Because 08 is interpreted as the decimal number 8, producing the same output as the previous two. The leading zero may not seem worth writing for a single-digit precision, but in a case like this:

%0.15X

It can matter.
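Whichever way it is written, the digits of the precision are parsed as one decimal number, leading zero or not - a minimal sketch with an arbitrary value:

printf("[%.15X]\n", 0xABu);     /* precision 15    -> [0000000000000AB] */
printf("[%.015X]\n", 0xABu);    /* 015 reads as 15 -> [0000000000000AB] */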

These different formats exist to allow finer control of output (which, in my opinion, is a carry-over that resembles Fortran a lot).

However, as you've discovered, this overcompensation for finer control of precision allows you to get the same output - but with different flags.

UPDATE:

As hvd pointed out, and I had forgotten to mention: the X specifier takes an unsigned value, so in this case your output is the same for %08X and %.8X (there is no sign to account for). However, for something like %08d versus %.8d it isn't: the first pads to 8 characters (the minus sign counts toward the width), the second to 8 digits (the sign is extra), so they behave differently for negative values.
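A minimal sketch of that difference (the value -123 is just an illustrative choice):

printf("[%08d]\n", -123);    /* width 8:     [-0000123]  - 8 characters, sign included */
printf("[%.8d]\n", -123);    /* precision 8: [-00000123] - 8 digits, plus the sign     */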

answered 2013-03-11T07:17:21.980