The attached code uses SSE optimization.
The implementation uses C intrinsics - no assembly.
For simplicity, I assumed that R, G and B are three separate planes, stored in memory as an R matrix, a G matrix and a B matrix, rather than interleaved in r,g,b,r,g,b,r,g,b data order.
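For reference, in case the source image happens to be interleaved, a minimal scalar sketch of splitting it into planes could look like the following (the helper name SplitRgbToPlanes and its signature are hypothetical, not part of the solution):

//Hypothetical helper: split an interleaved r,g,b,r,g,b,... buffer into three separate planes.
void SplitRgbToPlanes(const unsigned char rgb[],
                      int image_size,
                      unsigned char R[],
                      unsigned char G[],
                      unsigned char B[])
{
    int x;
    for (x = 0; x < image_size; x++)
    {
        R[x] = rgb[3*x + 0];  //Red sample of pixel x.
        G[x] = rgb[3*x + 1];  //Green sample of pixel x.
        B[x] = rgb[3*x + 2];  //Blue sample of pixel x.
    }
}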
The code uses a fixed-point implementation for better performance.
Important notes:
- Multiplying by (1/3) is more efficient than dividing by 3.
- Adding 0.5 before the cast to integer rounds positive values (instead of truncating).
- The (1/3) scaling is implemented in fixed point by expanding, scaling and shifting. Example: avg = (sum*scale + rounding) >> 15; [where scale = (1/3)*2^15].
- The _mm_mulhrs_epi16 intrinsic does exactly the above operation: (x*scl + 2^14) >> 15.
There is more explanation in the comments of the implementation; a short sanity-check sketch of the fixed-point formula follows below.
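As a quick sanity check, here is a standalone sketch (the test values are assumed, not part of the solution) that compares _mm_mulhrs_epi16 with the scalar fixed point formula and with rounded division by 3:

#include <stdio.h>
#include <tmmintrin.h>  //SSSE3 intrinsics (for _mm_mulhrs_epi16).

int main(void)
{
    const short scale = (short)((1.0/3.0)*(1 << 15) + 0.5);  //10923 = round((1/3)*2^15).
    short sums[8] = { 0, 1, 2, 3, 100, 383, 600, 765 };      //Assumed sums of three uint8 values (maximum is 3*255 = 765).
    short avgs[8];
    int i;

    __m128i vsum = _mm_loadu_si128((const __m128i*)sums);
    __m128i vavg = _mm_mulhrs_epi16(vsum, _mm_set1_epi16(scale));  //Per element: (sum*scale + 2^14) >> 15.
    _mm_storeu_si128((__m128i*)avgs, vavg);

    for (i = 0; i < 8; i++)
    {
        short fixed_avg = (short)((sums[i]*scale + (1 << 14)) >> 15);  //Scalar fixed point formula.
        printf("sum=%3d  mulhrs=%3d  scalar=%3d  round(sum/3)=%3d\n",
               sums[i], avgs[i], fixed_avg, (int)(sums[i]/3.0 + 0.5));
    }

    return 0;
}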
#include <tmmintrin.h>  //SSSE3 intrinsics (required for _mm_mulhrs_epi16).

//Calculate the average of the elements of the 3 vectors R, G and B, and store the result in J.
//Implementation uses SSE intrinsics for performance optimization.
//Use fixed point computations for better performance.
//R - Plane of red pixels: RRRRRRRRRRRRRRRR
//G - Plane of green pixels: GGGGGGGGGGGGGGGG
//B - Plane of blue pixels: BBBBBBBBBBBBBBBB
//image_size: Total number of pixels (width * height).
//J - Destination Grayscale plane: JJJJJJJJJJJJJJJJ
//Limitations:
//1. image_size must be a multiple of 16.
void RgbAverage(const unsigned char R[],
                const unsigned char G[],
                const unsigned char B[],
                int image_size,
                unsigned char J[])
{
    int x;

    /*
    //1. Plain C code:
    //--------------------
    for (x = 0; x < image_size; x++)
    {
        //Add 0.5 for rounding (round instead of floor).
        //Multiply by (1.0/3.0) - much more efficient than dividing by 3.
        J[x] = (unsigned char)(((double)R[x] + (double)G[x] + (double)B[x])*(1.0/3.0) + 0.5);
    }
    */
    /*
    //2. Plain C fixed point implementation:
    //  Read this code first, to better understand the SSE implementation.
    const unsigned int scale = (unsigned int)((1.0/3.0)*(1 << 15) + 0.5);  //scale equals 1/3 expanded by 2^15.
    const unsigned int rounding_ofs = (unsigned int)(1 << 14);             //Offset of 2^14 for rounding (equals 0.5 expanded by 2^15).

    for (x = 0; x < image_size; x++)
    {
        unsigned int r0 = (unsigned int)R[x];
        unsigned int g0 = (unsigned int)G[x];
        unsigned int b0 = (unsigned int)B[x];
        unsigned int sum = r0 + g0 + b0;

        //Multiply by (1/3) with rounding:
        //avg = (sum*(1/3)*2^15 + 2^14) / 2^15 = floor(sum*(1/3) + 0.5)
        unsigned int avg = (sum * scale + rounding_ofs) >> 15;

        J[x] = (unsigned char)avg;
    }
    */
    //3. SSE optimized implementation:
    const unsigned int scale = (unsigned int)((1.0/3.0)*(1 << 15) + 0.5);  //scale equals 1/3 expanded by 2^15.
    const __m128i vscl = _mm_set1_epi16((short)scale);  //8 packed int16 scale elements - scl_scl_scl_scl_scl_scl_scl_scl

    //Process 16 pixels per iteration.
    for (x = 0; x < image_size; x += 16)
    {
        __m128i r15_to_r0 = _mm_loadu_si128((const __m128i*)&R[x]);  //Load 16 uint8 R elements.
        __m128i g15_to_g0 = _mm_loadu_si128((const __m128i*)&G[x]);  //Load 16 uint8 G elements.
        __m128i b15_to_b0 = _mm_loadu_si128((const __m128i*)&B[x]);  //Load 16 uint8 B elements.

        //Unpack uint8 elements to uint16 elements:
        __m128i r7_to_r0   = _mm_unpacklo_epi8(r15_to_r0, _mm_setzero_si128());  //Lower 8 R elements
        __m128i r15_to_r8  = _mm_unpackhi_epi8(r15_to_r0, _mm_setzero_si128());  //Upper 8 R elements
        __m128i g7_to_g0   = _mm_unpacklo_epi8(g15_to_g0, _mm_setzero_si128());  //Lower 8 G elements
        __m128i g15_to_g8  = _mm_unpackhi_epi8(g15_to_g0, _mm_setzero_si128());  //Upper 8 G elements
        __m128i b7_to_b0   = _mm_unpacklo_epi8(b15_to_b0, _mm_setzero_si128());  //Lower 8 B elements
        __m128i b15_to_b8  = _mm_unpackhi_epi8(b15_to_b0, _mm_setzero_si128());  //Upper 8 B elements

        __m128i sum7_to_sum0  = _mm_add_epi16(_mm_add_epi16(r7_to_r0, g7_to_g0), b7_to_b0);     //Lower 8 sum elements.
        __m128i sum15_to_sum8 = _mm_add_epi16(_mm_add_epi16(r15_to_r8, g15_to_g8), b15_to_b8);  //Upper 8 sum elements.

        //Most important trick:
        //The _mm_mulhrs_epi16 intrinsic does exactly what we want, i.e. avg = (sum*scl + (1<<14)) >> 15.
        //Each SSE instruction performs the described operation on 8 elements.
        __m128i avg7_to_avg0  = _mm_mulhrs_epi16(sum7_to_sum0, vscl);   //Lower 8 average elements.
        __m128i avg15_to_avg8 = _mm_mulhrs_epi16(sum15_to_sum8, vscl);  //Upper 8 average elements.

        //Pack the result into 16 uint8 elements.
        __m128i j15_to_j0 = _mm_packus_epi16(avg7_to_avg0, avg15_to_avg8);

        _mm_storeu_si128((__m128i*)(&J[x]), j15_to_j0);  //Store 16 elements of J.
    }
}
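A minimal usage sketch (the image dimensions and the fill step are hypothetical; it assumes RgbAverage above is visible in the same file or via a prototype, and omits error checking):

#include <stdlib.h>

int main(void)
{
    const int image_size = 640*480;  //Total number of pixels - must be a multiple of 16.
    unsigned char *R = (unsigned char*)malloc(image_size);  //Red plane.
    unsigned char *G = (unsigned char*)malloc(image_size);  //Green plane.
    unsigned char *B = (unsigned char*)malloc(image_size);  //Blue plane.
    unsigned char *J = (unsigned char*)malloc(image_size);  //Destination grayscale plane.

    //... fill R, G and B with pixel data ...

    RgbAverage(R, G, B, image_size, J);

    //... use the grayscale plane J ...

    free(R);
    free(G);
    free(B);
    free(J);
    return 0;
}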