CMU 15-462 A1 Implementation Notes

I've recently been following the Computer Graphics course (CMU 15-462), so I want to roughly jot down the process of working through Assignment 1.

Notes and code are on GitHub: https://github.com/PepcyCh/cmu15462-notes

No guarantee that the code is correct; I only guarantee that the results look about right.

The only bonus item I implemented is Xiaolin Wu's line algorithm...

Task 1: Hardware Renderer

This is the warm-up task; a quick, straightforward implementation is enough.

(Though the codebase's 2-space indentation takes a little getting used to.)

void HardwareRenderer::rasterize_point(float x, float y, Color color) {
  // Task 1:
  // Implement point rasterization
  glBegin(GL_POINTS);
  glColor3f(color.r, color.g, color.b);
  glVertex2f(x, y);
  glEnd();
}

void HardwareRenderer::rasterize_line(float x0, float y0,
                                      float x1, float y1,
                                      Color color) {
  // Task 1:
  // Implement line rasterization
  glBegin(GL_LINES);
  glColor3f(color.r, color.g, color.b);
  glVertex2f(x0, y0);
  glVertex2f(x1, y1);
  glEnd();
}

void HardwareRenderer::rasterize_triangle(float x0, float y0,
                                          float x1, float y1,
                                          float x2, float y2,
                                          Color color) {
  // Task 1:
  // Implement triangle rasterization
  glBegin(GL_TRIANGLES);
  glColor3f(color.r, color.g, color.b);
  glVertex2f(x0, y0);
  glVertex2f(x1, y1);
  glVertex2f(x2, y2);
  glEnd();
}

Task 2: Warm Up: Drawing Lines

The task asks for an \(O(length)\) line-drawing algorithm that supports floating-point endpoints and arbitrary slopes.

I implemented Xiaolin Wu's line algorithm, though the code was written from my understanding of the principle rather than from a reference implementation:

Suppose we are drawing a segment whose slope is less than \(1\). For the pixel column currently under consideration, whose \(x\) coordinate is \(x + 0.5\), take the difference between the line's \(y\) value there and the pixel center \(y_0 + 0.5\), and use it to distribute opacity between that pixel and its neighbor above or below.
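As a concrete reading of that rule (my own worked example, not from the handout): let \(d\) be the distance from the line's \(y\) value at the column center to the center of the pixel it falls in; then

\[
\alpha_{\text{pixel}} = (1 - d)\,\alpha, \qquad \alpha_{\text{neighbor}} = d\,\alpha .
\]

For instance, if the line crosses the column at \(y = 7.8\), the pixel centered at \(7.5\) gets \(0.7\alpha\) and the pixel centered at \(8.5\) gets \(0.3\alpha\).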

void SoftwareRendererImp::rasterize_line( float x0, float y0,
                                          float x1, float y1,
                                          Color color) {
  // Task 2:
  // Implement line rasterization
  float dx = abs(x1 - x0), dy = abs(y1 - y0);
  if (dy <= dx) {
    if (x0 == x1) return;
    if (x0 > x1) {
      swap(x0, x1);
      swap(y0, y1);
    }
    float k = (y1 - y0) / (x1 - x0);
    float x = floor(x0) + 0.5;
    float y = y0 + k * (x - x0);
    for (; x <= x1; x += 1) {
      float y2 = floor(y) == round(y) ? y - 1 : y + 1;
      float d = abs(y + 0.5 - round(y + 0.5));
      rasterize_point(x, y, Color(color.r, color.g, color.b, color.a * (1 - d)), true);
      rasterize_point(x, y2, Color(color.r, color.g, color.b, color.a * d), true);
      y += k;
    }
  } else {
    if (y0 == y1) return;
    if (y0 > y1) {
      swap(y0, y1);
      swap(x0, x1);
    }
    float k = (x1 - x0) / (y1 - y0);
    float y = floor(y0) + 0.5;
    float x = x0 + k * (y - y0);
    for (; y <= y1; y += 1) {
      float x2 = floor(x) == round(x) ? x - 1 : x + 1;
      float d = abs(x + 0.5 - round(x + 0.5));
      rasterize_point(x, y, Color(color.r, color.g, color.b, color.a * (1 - d)), true);
      rasterize_point(x2, y, Color(color.r, color.g, color.b, color.a * d), true);
      x += k;
    }
  }
}

Things I got wrong while implementing this:

  • Pixel coordinates need the \(0.5\) offset
  • What gets distributed should be the opacity (alpha)

At this point, rendering /svg/basic/test2.svg gives a feel for the anti-aliasing of Wu's algorithm. Since color blending is not implemented yet, the later tests all show a white fringe next to lines; everything looks right once Task 8 is done.

Task 3: Drawing Triangles

The task asks for something faster than testing every pixel (sample point) on the screen.

My implementation only tests the samples inside the triangle's bounding box; I did not implement edge rules.
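For reference, the inside test used below, restated in math: with \(dx_i = x_{i+1} - x_i\) and \(dy_i = y_{i+1} - y_i\) (indices mod 3), each sample \((x, y)\) is evaluated against three edge functions

\[
e_i = (y - y_i)\,dx_i - (x - x_i)\,dy_i ,
\]

and it is inside when all three share the sign of the triangle's orientation \(rot = dx_0\,dy_1 - dy_0\,dx_1\), i.e. \(e_i \cdot rot \ge 0\) for \(i = 0, 1, 2\), which handles both winding orders.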

void SoftwareRendererImp::rasterize_triangle( float x0, float y0,
                                              float x1, float y1,
                                              float x2, float y2,
                                              Color color ) {
  // Task 3:
  // Implement triangle rasterization
  int minx = (int) floor(min({x0, x1, x2}));
  int maxx = (int) floor(max({x0, x1, x2}));
  int miny = (int) floor(min({y0, y1, y2}));
  int maxy = (int) floor(max({y0, y1, y2}));

  float dx0 = x1 - x0, dy0 = y1 - y0;
  float dx1 = x2 - x1, dy1 = y2 - y1;
  float dx2 = x0 - x2, dy2 = y0 - y2;
  float rot = dx0 * dy1 - dy0 * dx1;

  float pd = 1.0f / sample_rate;
  for (int x = minx; x <= maxx; x++) {
    for (int y = miny; y <= maxy; y++) {
      for (int i = 0; i < sample_rate; i++) {
        for (int j = 0; j < sample_rate; j++) {
          float px = (i + 0.5f) * pd, py = (j + 0.5f) * pd;
          float e0 = (y + py - y0) * dx0 - (x + px - x0) * dy0;
          float e1 = (y + py - y1) * dx1 - (x + px - x1) * dy1;
          float e2 = (y + py - y2) * dx2 - (x + px - x2) * dy2;
          if (e0 * rot >= 0 && e1 * rot >= 0 && e2 * rot >= 0)
            rasterize_point(x + px, y + py, color);
        }
      }
    }
  }
}

The code shown is the version after SSAA (Task 4) was implemented.

In fact, the inside test can also be written as (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0).

Task 4: Anti-Aliasing Using Supersampling

In SoftwareRendererImp I added a supersample_target, plus a supersampling flag indicating whether SSAA is currently on (because I want to write directly into render_target when SSAA is off). Using std::vector means I don't have to spend any thought on allocation and deallocation.

std::vector<unsigned char> supersample_target;
bool supersampling = false;
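The buffer also has to be (re)allocated whenever the sample rate or the render target changes. A minimal sketch of how that can look, assuming the starter-code signatures of set_sample_rate() and set_render_target() (the bookkeeping details here are my own, not necessarily what a reference solution does):

void SoftwareRendererImp::set_sample_rate( size_t sample_rate ) {
  // Task 4:
  // You may want to modify this for supersampling support
  this->sample_rate = sample_rate;
  supersampling = sample_rate > 1;
  if (supersampling) {
    // 4 bytes per sample, sample_rate^2 samples per pixel;
    // filled with 255 (white) to match the white canvas
    supersample_target.assign(4 * target_w * sample_rate * target_h * sample_rate, 255);
  } else {
    supersample_target.clear();
  }
}

void SoftwareRendererImp::set_render_target( unsigned char* render_target,
                                             size_t width, size_t height ) {
  // Task 4:
  // You may want to modify this for supersampling support
  this->render_target = render_target;
  this->target_w = width;
  this->target_h = height;
  if (supersampling) {
    supersample_target.assign(4 * target_w * sample_rate * target_h * sample_rate, 255);
  }
}

One detail to watch: the buffer also needs to be refilled (e.g. with white) at the start of each frame, or stale samples from the previous frame survive into resolve().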

Then fill in the resolve() function.

void SoftwareRendererImp::resolve( void ) {

  // Task 4:
  // Implement supersampling
  // You may also need to modify other functions marked with "Task 4".
  if (!supersampling) return;

  for (int y = 0; y < target_h; y++) {
    for (int x = 0; x < target_w; x++) {
      float sumr = 0;
      float sumg = 0;
      float sumb = 0;
      float suma = 0;
      for (int i = 0; i < sample_rate; i++) {
        for (int j = 0; j < sample_rate; j++) {
          sumr += supersample_target[4 * (x * sample_rate + j + (y * sample_rate + i) * sample_rate * target_w)];
          sumg += supersample_target[4 * (x * sample_rate + j + (y * sample_rate + i) * sample_rate * target_w) + 1];
          sumb += supersample_target[4 * (x * sample_rate + j + (y * sample_rate + i) * sample_rate * target_w) + 2];
          suma += supersample_target[4 * (x * sample_rate + j + (y * sample_rate + i) * sample_rate * target_w) + 3];
        }
      }
      sumr /= sample_rate * sample_rate;
      sumg /= sample_rate * sample_rate;
      sumb /= sample_rate * sample_rate;
      suma /= sample_rate * sample_rate;

      render_target[4 * (x + y * target_w)] = (uint8_t) sumr;
      render_target[4 * (x + y * target_w) + 1] = (uint8_t) sumg;
      render_target[4 * (x + y * target_w) + 2] = (uint8_t) sumb;
      render_target[4 * (x + y * target_w) + 3] = (uint8_t) suma;
    }
  }
}

The assignment says SSAA should not change the size of points or the thickness of lines, so I added a point_or_line parameter to rasterize_point(); for points and line segments, it directly covers a block of \(sample\_rate \times sample\_rate\) samples.

void SoftwareRendererImp::rasterize_point( float x, float y, Color color, bool point_or_line = false ) {
  // fill in the nearest pixel
  int sx = (int) floor(x);
  int sy = (int) floor(y);

  // check bounds
  if ( sx < 0 || sx >= target_w ) return;
  if ( sy < 0 || sy >= target_h ) return;

  // fill sample
  if (!supersampling) {
    render_target[4 * (sx + sy * target_w)] = (uint8_t) (color.r * 255);
    render_target[4 * (sx + sy * target_w) + 1] = (uint8_t) (color.g * 255);
    render_target[4 * (sx + sy * target_w) + 2] = (uint8_t) (color.b * 255);
    render_target[4 * (sx + sy * target_w) + 3] = (uint8_t) (color.a * 255);
  } else if (point_or_line) {
    sx *= sample_rate;
    sy *= sample_rate;
    for (int i = 0; i < sample_rate; i++) {
      for (int j = 0; j < sample_rate; j++) {
        supersample_target[4 * (sx + j + (sy + i) * target_w * sample_rate)] = (uint8_t) (color.r * 255);
        supersample_target[4 * (sx + j + (sy + i) * target_w * sample_rate) + 1] = (uint8_t) (color.g * 255);
        supersample_target[4 * (sx + j + (sy + i) * target_w * sample_rate) + 2] = (uint8_t) (color.b * 255);
        supersample_target[4 * (sx + j + (sy + i) * target_w * sample_rate) + 3] = (uint8_t) (color.a * 255);
      }
    }
  } else {
    sx = (int) floor(x * sample_rate);
    sy = (int) floor(y * sample_rate);
    supersample_target[4 * (sx + sy * target_w * sample_rate)] = (uint8_t) (color.r * 255);
    supersample_target[4 * (sx + sy * target_w * sample_rate) + 1] = (uint8_t) (color.g * 255);
    supersample_target[4 * (sx + sy * target_w * sample_rate) + 2] = (uint8_t) (color.b * 255);
    supersample_target[4 * (sx + sy * target_w * sample_rate) + 3] = (uint8_t) (color.a * 255);
  }
}

Task 5: Implementing Modeling and Viewing Transforms

Part 1: Modeling Transforms

Just follow what hardware_renderer.cpp does.

void SoftwareRendererImp::draw_element( SVGElement* element ) {

  // Task 5 (part 1):
  // Modify this to implement the transformation stack

  Matrix3x3 temp_matrix = transformation;
  transformation = transformation * element->transform;

  switch (element->type) {
    case POINT:
      draw_point(static_cast<Point&>(*element));
      break;
    case LINE:
      draw_line(static_cast<Line&>(*element));
      break;
    case POLYLINE:
      draw_polyline(static_cast<Polyline&>(*element));
      break;
    case RECT:
      draw_rect(static_cast<Rect&>(*element));
      break;
    case POLYGON:
      draw_polygon(static_cast<Polygon&>(*element));
      break;
    case ELLIPSE:
      draw_ellipse(static_cast<Ellipse&>(*element));
      break;
    case IMAGE:
      draw_image(static_cast<Image&>(*element));
      break;
    case GROUP:
      draw_group(static_cast<Group&>(*element));
      break;
    default:
      break;
  }

  transformation = temp_matrix;
}

Part 2: Viewing Transform

Quoting the handout:

This transform should map the SVG canvas coordinate space to a normalized device coordinate space where the top left of the visible SVG coordinate maps to (0, 0) and the bottom right maps to (1, 1). For example, for the values centerX=200, centerY=150, vspan=10, then SVG canvas coordinate (200, 150) transforms to normalized coordinate (0.5, 0.5) (center of screen) and canvas coordinate (200, 160) transforms to (0.5, 1) (bottom center).

From this the transform matrix can be written directly. Note that the provided matrix is applied by right-multiplying a row vector rather than left-multiplying a column vector (or, put differently, pay attention to the matrix's subscript convention).
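Spelling out the mapping implied by the quoted spec (my own derivation): vspan is half the span of the viewbox, so

\[
x_{norm} = \frac{x - centerX}{2 \cdot vspan} + \frac{1}{2}, \qquad
y_{norm} = \frac{y - centerY}{2 \cdot vspan} + \frac{1}{2},
\]

which reproduces the handout's example: with centerY = 150 and vspan = 10, \(y = 160\) maps to \((160 - 150)/20 + 0.5 = 1\). The matrix entries set below are exactly the coefficients of this affine map.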

void ViewportImp::set_viewbox( float centerX, float centerY, float vspan ) {

  // Task 5 (part 2):
  // Set svg coordinate to normalized device coordinate transformation. Your input
  // arguments are defined as normalized SVG canvas coordinates.
  this->centerX = centerX;
  this->centerY = centerY;
  this->vspan = vspan;

  // Matrix3x3:
  // 00 10 20
  // 01 11 21
  // 02 12 22
  svg_2_norm = Matrix3x3::identity();
  svg_2_norm[0][0] = 0.5 / vspan;
  svg_2_norm[1][1] = 0.5 / vspan;
  svg_2_norm[2][0] = 0.5 - 0.5 * centerX / vspan;
  svg_2_norm[2][1] = 0.5 - 0.5 * centerY / vspan;
}

Task 6: Drawing Scaled Images

First, implement the rasterize_image() function:

void SoftwareRendererImp::rasterize_image( float x0, float y0,
                                           float x1, float y1,
                                           Texture& tex ) {
  // Task 6:
  // Implement image rasterization
  float dx = x1 - x0;
  float dy = y1 - y0;

  float pd = 1.0f / sample_rate;
  for (int x = (int) floor(x0 * sample_rate); x <= (int) floor(x1 * sample_rate); x++) {
    for (int y = (int) floor(y0 * sample_rate); y <= (int) floor(y1 * sample_rate); y++) {
      float u = ((x + 0.5f) * pd - x0) / dx;
      float v = ((y + 0.5f) * pd - y0) / dy;
      // Color c = sampler->sample_nearest(tex, u, v, 0);
      // Color c = sampler->sample_bilinear(tex, u, v, 0);
      Color c = sampler->sample_trilinear(tex, u, v, dx, dy);
      rasterize_point((x + 0.5f) * pd, (y + 0.5f) * pd, c);
    }
  }
}

Next, sample_nearest():

Color Sampler2DImp::sample_nearest(Texture& tex,
                                   float u, float v,
                                   int level) {
  // Task 6: Implement nearest neighbour interpolation

  // return magenta for invalid level
  if (level >= tex.mipmap.size())
    return Color(1, 0, 1, 1);

  int su = (int) floor(clamp(u, 0.0f, 0.99999f) * tex.mipmap[level].width);
  int sv = (int) floor(clamp(v, 0.0f, 0.99999f) * tex.mipmap[level].height);

  float r = tex.mipmap[level].texels[4 * (su + sv * tex.mipmap[level].width)] / 255.0f;
  float g = tex.mipmap[level].texels[4 * (su + sv * tex.mipmap[level].width) + 1] / 255.0f;
  float b = tex.mipmap[level].texels[4 * (su + sv * tex.mipmap[level].width) + 2] / 255.0f;
  float a = tex.mipmap[level].texels[4 * (su + sv * tex.mipmap[level].width) + 3] / 255.0f;

  return Color(r, g, b, a);
}

And sample_bilinear():

Color Sampler2DImp::sample_bilinear(Texture& tex,
                                    float u, float v,
                                    int level) {
  // Task 6: Implement bilinear filtering

  // return magenta for invalid level
  if (level >= tex.mipmap.size())
    return Color(1, 0, 1, 1);

  float tu = clamp(u, 0.0f, 0.99999f) * tex.mipmap[level].width;
  float tv = clamp(v, 0.0f, 0.99999f) * tex.mipmap[level].height;

  int su[2];
  su[0] = clamp<int>(round(tu) - 1, 0, tex.mipmap[level].width - 1);
  su[1] = clamp<int>(su[0] + 1, 0, tex.mipmap[level].width - 1);
  float du = tu - 0.5f - su[0];
  if (du < 0) su[1] = su[0];

  int sv[2];
  sv[0] = clamp<int>(round(tv) - 1, 0, tex.mipmap[level].height - 1);
  sv[1] = clamp<int>(sv[0] + 1, 0, tex.mipmap[level].height - 1);
  float dv = tv - 0.5f - sv[0];
  if (dv < 0) sv[1] = sv[0];

  Color mix = Color(0, 0, 0, 0);
  for (int i = 0; i < 2; i++) {
    for (int j = 0; j < 2; j++) {
      float r = tex.mipmap[level].texels[4 * (su[i] + sv[j] * tex.mipmap[level].width)] / 255.0f;
      float g = tex.mipmap[level].texels[4 * (su[i] + sv[j] * tex.mipmap[level].width) + 1] / 255.0f;
      float b = tex.mipmap[level].texels[4 * (su[i] + sv[j] * tex.mipmap[level].width) + 2] / 255.0f;
      float a = tex.mipmap[level].texels[4 * (su[i] + sv[j] * tex.mipmap[level].width) + 3] / 255.0f;
      Color c = Color(r * a, g * a, b * a, a);
      mix += (i * du + (1 - i) * (1 - du)) * (j * dv + (1 - j) * (1 - dv)) * c;
    }
  }

  if (mix.a != 0) {
    mix.r /= mix.a;
    mix.g /= mix.a;
    mix.b /= mix.a;
  }
  return mix;
}

While implementing bilinear interpolation I got the \(0.5\) offset wrong for a while; the symptom was that the image appeared shifted toward the upper-left by a small amount.

Also, this way of writing the multilinear interpolation as a loop over corner weights is something I picked up while following the Ray Tracing in One Weekend series; it probably costs a few extra operations, but it looks a bit cleaner.
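Written out, the mixing loop above computes (with colors premultiplied by alpha before the sum and divided by the accumulated alpha afterwards):

\[
C = \sum_{i=0}^{1} \sum_{j=0}^{1} \bigl(i\,du + (1-i)(1-du)\bigr)\bigl(j\,dv + (1-j)(1-dv)\bigr)\,C_{ij},
\]

so \(C_{00}\) gets weight \((1-du)(1-dv)\), \(C_{11}\) gets \(du\,dv\), and so on.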

Task 7: Anti-Aliasing Image Elements Using Trilinear Filtering

With bilinear interpolation in place, trilinear interpolation is straightforward:

Color Sampler2DImp::sample_trilinear(Texture& tex,
                                     float u, float v,
                                     float u_scale, float v_scale) {

  // Task 7: Implement trilinear filtering

  // return magenta for invalid level
  float level = max(log2f(max(tex.width / u_scale, tex.height / v_scale)), 0.0f);

  int ld = (int) floor(level);
  if (ld >= tex.mipmap.size())
    return Color(1, 0, 1, 1);

  int hd = ld + 1;
  if (hd >= tex.mipmap.size())
    return sample_bilinear(tex, u, v, ld);

  Color lc = sample_bilinear(tex, u, v, ld);
  Color hc = sample_bilinear(tex, u, v, hd);

  lc.r *= lc.a;
  lc.g *= lc.a;
  lc.b *= lc.a;
  hc.r *= hc.a;
  hc.g *= hc.a;
  hc.b *= hc.a;

  float dd = level - ld;
  Color mix = (1 - dd) * lc + dd * hc;
  if (mix.a != 0) {
    mix.r /= mix.a;
    mix.g /= mix.a;
    mix.b /= mix.a;
  }
  return mix;
}

The formula for the mipmap level looks a bit different from the one in the lecture slides at first glance, but on reflection it turns out to be correct. (At the time I wrote it that way simply because the incoming parameters seemed to leave no other option; I only realized later why it is right.)
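My reasoning for why it works: u_scale and v_scale are the image's extent in screen pixels, so tex.width / u_scale is the number of texels one screen pixel covers along \(u\) (and likewise for \(v\)), giving

\[
level = \max\Bigl(\log_2 \max\Bigl(\frac{tex.width}{u\_scale}, \frac{tex.height}{v\_scale}\Bigr),\ 0\Bigr),
\]

which is the usual \(\log_2\) of the per-pixel texel footprint, just computed from the image's overall screen-space size instead of per-pixel derivatives.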

Then comes mipmap generation; since we are allowed to assume the texture dimensions are powers of \(2\), it is easy to implement:

void Sampler2DImp::generate_mips(Texture& tex, int startLevel) {

  // NOTE:
  // This starter code allocates the mip levels and generates a level
  // map by filling each level with placeholder data in the form of a
  // color that differs from its neighbours'. You should instead fill
  // with the correct data!

  // Task 7: Implement this

  // check start level
  if ( startLevel >= tex.mipmap.size() ) {
    std::cerr << "Invalid start level";
  }

  // allocate sublevels
  int baseWidth  = tex.mipmap[startLevel].width;
  int baseHeight = tex.mipmap[startLevel].height;
  int numSubLevels = (int)(log2f( (float)max(baseWidth, baseHeight)));

  numSubLevels = min(numSubLevels, kMaxMipLevels - startLevel - 1);
  tex.mipmap.resize(startLevel + numSubLevels + 1);

  int width  = baseWidth;
  int height = baseHeight;
  for (int i = 1; i <= numSubLevels; i++) {

    MipLevel& level = tex.mipmap[startLevel + i];

    // handle odd size texture by rounding down
    width  = max( 1, width  / 2); assert(width  > 0);
    height = max( 1, height / 2); assert(height > 0);

    level.width = width;
    level.height = height;
    level.texels = vector<unsigned char>(4 * width * height);
  }

  for (size_t i = 1; i <= numSubLevels; ++i) {
    MipLevel& mip = tex.mipmap[i];

    for (int x = 0; x < mip.width; x++) {
      for (int y = 0; y < mip.height; y++) {
        Color sum = Color(0, 0, 0, 0);
        for (int j = 0; j < 4; j++) {
          static const int d[4][2] = {
            {0, 0}, {0, 1}, {1, 0}, {1, 1}
          };
          float r = tex.mipmap[i - 1].texels[4 * (2 * x + d[j][0] + (2 * y + d[j][1]) * mip.width * 2)] / 255.0f;
          float g = tex.mipmap[i - 1].texels[4 * (2 * x + d[j][0] + (2 * y + d[j][1]) * mip.width * 2) + 1] / 255.0f;
          float b = tex.mipmap[i - 1].texels[4 * (2 * x + d[j][0] + (2 * y + d[j][1]) * mip.width * 2) + 2] / 255.0f;
          float a = tex.mipmap[i - 1].texels[4 * (2 * x + d[j][0] + (2 * y + d[j][1]) * mip.width * 2) + 3] / 255.0f;
          sum += Color(r * a, g * a, b * a, a);
        }
        sum *= 0.25f;
        if (sum.a != 0) {
          sum.r /= sum.a;
          sum.g /= sum.a;
          sum.b /= sum.a;
        }
        // index with this level's width, not the smallest level's
        float_to_uint8(&mip.texels[4 * (x + y * mip.width)], &sum.r);
      }
    }
  }
}

Honestly, I suspect this code is still subtly wrong, since it only seems correct when \(startLevel = 0\), though that is also the only way the program ever calls it.

Task 8: Alpha Compositing

Modify rasterize_point():
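The blend() helper below is the "over" operator on premultiplied colors (the destination is also stored premultiplied):

\[
C_{dst}' = C_{src} + (1 - \alpha_{src})\,C_{dst}, \qquad
\alpha_{dst}' = \alpha_{src} + (1 - \alpha_{src})\,\alpha_{dst} .
\]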

static void blend(uint8_t *dst, const Color &c) {
  dst[0] = (c.r + (1 - c.a) * (dst[0] / 255.0f)) * 255;
  dst[1] = (c.g + (1 - c.a) * (dst[1] / 255.0f)) * 255;
  dst[2] = (c.b + (1 - c.a) * (dst[2] / 255.0f)) * 255;
  dst[3] = (c.a + (1 - c.a) * (dst[3] / 255.0f)) * 255;
}

void SoftwareRendererImp::rasterize_point( float x, float y, Color color, bool point_or_line = false ) {
  // fill in the nearest pixel
  int sx = (int) floor(x);
  int sy = (int) floor(y);

  // check bounds
  if ( sx < 0 || sx >= target_w ) return;
  if ( sy < 0 || sy >= target_h ) return;

  // fill sample
  color.r *= color.a;
  color.g *= color.a;
  color.b *= color.a;
  if (!supersampling) {
    blend(render_target + (4 * (sx + sy * target_w)), color);
  } else if (point_or_line) {
    sx *= sample_rate;
    sy *= sample_rate;
    for (int i = 0; i < sample_rate; i++) {
      for (int j = 0; j < sample_rate; j++) {
        blend(&supersample_target[4 * (sx + j + (sy + i) * target_w * sample_rate)], color);
      }
    }
  } else {
    sx = (int) floor(x * sample_rate);
    sy = (int) floor(y * sample_rate);
    blend(&supersample_target[4 * (sx + sy * target_w * sample_rate)], color);
  }
}

And resolve():

void SoftwareRendererImp::resolve( void ) {

  // Task 4:
  // Implement supersampling
  // You may also need to modify other functions marked with "Task 4".
  if (!supersampling) {
    for (int y = 0; y < target_h; y++) {
      for (int x = 0; x < target_w; x++) {
        float r = render_target[4 * (x + y * target_w)] / 255.0f;
        float g = render_target[4 * (x + y * target_w) + 1] / 255.0f;
        float b = render_target[4 * (x + y * target_w) + 2] / 255.0f;
        float a = render_target[4 * (x + y * target_w) + 3] / 255.0f;
        if (a != 0) {
          r /= a;
          g /= a;
          b /= a;
        }
        render_target[4 * (x + y * target_w)] = (uint8_t) (r * 255);
        render_target[4 * (x + y * target_w) + 1] = (uint8_t) (g * 255);
        render_target[4 * (x + y * target_w) + 2] = (uint8_t) (b * 255);
      }
    }

    return;
  }

  for (int y = 0; y < target_h; y++) {
    for (int x = 0; x < target_w; x++) {
      float sumr = 0;
      float sumg = 0;
      float sumb = 0;
      float suma = 0;
      for (int i = 0; i < sample_rate; i++) {
        for (int j = 0; j < sample_rate; j++) {
          sumr += supersample_target[4 * (x * sample_rate + j + (y * sample_rate + i) * sample_rate * target_w)];
          sumg += supersample_target[4 * (x * sample_rate + j + (y * sample_rate + i) * sample_rate * target_w) + 1];
          sumb += supersample_target[4 * (x * sample_rate + j + (y * sample_rate + i) * sample_rate * target_w) + 2];
          suma += supersample_target[4 * (x * sample_rate + j + (y * sample_rate + i) * sample_rate * target_w) + 3];
        }
      }
      sumr /= sample_rate * sample_rate;
      sumg /= sample_rate * sample_rate;
      sumb /= sample_rate * sample_rate;
      suma /= sample_rate * sample_rate;
      if (suma != 0) {
        sumr /= suma / 255;
        sumg /= suma / 255;
        sumb /= suma / 255;
      }
      render_target[4 * (x + y * target_w)] = (uint8_t) sumr;
      render_target[4 * (x + y * target_w) + 1] = (uint8_t) sumg;
      render_target[4 * (x + y * target_w) + 2] = (uint8_t) sumb;
      render_target[4 * (x + y * target_w) + 3] = (uint8_t) suma;
    }
  }
}

In fact, the texture-sampling parts would also have needed changes, but I had already written them the correct (premultiplied) way from the start.

Task 9: Draw Something!!!

I originally wanted to doodle something with an online editor, but the saved file did not parse well, so I just dumbly drew a square in the center of the screen...

Miscellaneous (idle rambling)

While implementing SSAA I wanted to try out MLAA, but there was one point I never fully worked out (and the paper's examples conveniently skip exactly the case I couldn't figure out...). I also looked at MSAA, which actually seems easy to implement, but it feels like there would be an odd issue when the pixel center lies outside the triangle: the interpolated texture coordinates clearly fall outside the intended triangle, yet they are used to represent that pixel (though maybe I'm just misunderstanding how MSAA works).

As for ellipses: last semester my course project for the programming topics class was a simple CAD program, and back then I drew ellipses crudely by stepping along them in tiny increments. In hindsight, checking every sample point inside the ellipse's bounding box also seems like a decent idea (I'd even say better than tessellating it into a pile of triangles); a rough sketch follows.
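A minimal sketch of that idea for an axis-aligned ellipse (the function name and signature are hypothetical, not part of my submission; a real SVG Ellipse element would additionally go through the transformation stack):

void SoftwareRendererImp::rasterize_ellipse( float cx, float cy,
                                             float rx, float ry,
                                             Color color ) {
  // test every sample point inside the axis-aligned bounding box
  float pd = 1.0f / sample_rate;
  int minx = (int) floor(cx - rx), maxx = (int) floor(cx + rx);
  int miny = (int) floor(cy - ry), maxy = (int) floor(cy + ry);
  for (int x = minx; x <= maxx; x++) {
    for (int y = miny; y <= maxy; y++) {
      for (int i = 0; i < sample_rate; i++) {
        for (int j = 0; j < sample_rate; j++) {
          float px = x + (i + 0.5f) * pd, py = y + (j + 0.5f) * pd;
          // implicit equation: (px - cx)^2 / rx^2 + (py - cy)^2 / ry^2 <= 1
          float u = (px - cx) / rx, v = (py - cy) / ry;
          if (u * u + v * v <= 1.0f)
            rasterize_point(px, py, color);
        }
      }
    }
  }
}

Since it goes through rasterize_point(), the SSAA and blending from the earlier tasks would apply to it unchanged.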