
/vr/ - Retro Games



File: 98 KB, 640x480, lsd.png
No.3067779

Why is the PS1 so bad at making straight lines?

>> No.3067829

A GPU that can only calculate coordinates rounded to the nearest integer. It is unable to do the floating point operations needed for sub-pixel accuracy.
This is also part of what helped it render so fast all the way back in 1995.

>> No.3067884

>>3067829
>sub pixel accuracy.
But it's clearly several pixels out in places. If it's a rounding issue, surely it's taking place in the geometric "domain" of the process?

>> No.3067891

No z-buffer, it could only approximate depth relative to the camera.

>> No.3067948
File: 64 KB, 564x935, texturing.png

>>3067779
>Short answer:
To simplify implementation, draw more polygons, and reduce costs.

>Medium answer:
PSX graphics hardware interpolates polygon textures in 2D rather than in 3D, in a sense: textures are interpolated in "screen coordinates" rather than in "perspective correct coordinates". Without going into the math, a picture is worth a thousand words.

On the left side of the figure is perspective correct texturing. It requires a computationally expensive "perspective divide", which has major repercussions on the design of a graphics pipeline. It's much easier to implement the right side: ignore perspective effects and just interpolate in 2D. Here I have broken drawing a triangle down into applying the texture to 2D triangles, scaling them, shearing them, and finally drawing them. Notice how the visual artifact of wobbly lines arises. This is precisely the thing OP is asking about.

The perspective divide makes things difficult because it can't be implemented as a matrix equation, like everything else. It breaks up your pipeline into before and after divide parts, which necessitates several matrix multiply units. It's not trivial to implement, but clearly is desirable to the point where we are willing to complicate matters to get the effect.

>Long answer:
Look in a computer graphics book at how a graphics pipeline is implemented. The perspective divide and homogeneous coordinates are the most complicated part of the whole process.
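The difference between the two sides of that figure can be sketched numerically. A minimal Python sketch (the endpoint depths z0, z1 and the span parameter t are made-up illustration values): the affine version interpolates the texture coordinate u directly in screen space, while the perspective-correct version interpolates u/z and 1/z linearly and divides at each step.

```python
# Affine vs perspective-correct interpolation of a texture coordinate u
# along one screen-space span. Endpoints: u=0 at depth z0, u=1 at depth z1.

def affine_u(t, u0, u1):
    # PS1-style: interpolate u directly in screen coordinates
    return u0 + t * (u1 - u0)

def perspective_u(t, u0, z0, u1, z1):
    # Correct: interpolate u/z and 1/z linearly, then divide per pixel
    uz = u0 / z0 + t * (u1 / z1 - u0 / z0)
    iz = 1.0 / z0 + t * (1.0 / z1 - 1.0 / z0)
    return uz / iz

z0, z1 = 1.0, 4.0  # a span receding from depth 1 to depth 4
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  affine u={affine_u(t, 0.0, 1.0):.3f}"
          f"  correct u={perspective_u(t, 0.0, z0, 1.0, z1):.3f}")
```

Halfway across the span (t=0.5) the affine mapper samples u=0.5 while the correct answer is u=0.2 — exactly the kind of texture swimming the figure shows.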

>> No.3068015

Affine texture mapping.

>> No.3068020

>>3067948
So on the right, the texturing can't calculate a vanishing point because it doesn't have two parallel edges to work with?

>> No.3068028

The good looking PS1 games got around this, plus no wobbly polygons.

>> No.3068081
File: 76 KB, 976x548, 1455604962951.png

>>3068015
Not really.

It's not that the vanishing point can't be calculated when texturing, it's just the designers chose not to. Introducing a vanishing point is exactly equivalent to performing the perspective divide. Without that operation, the best you can do is texture in screen coordinates.

It's a little more complicated than this, but to see how division and a vanishing point are the same, imagine you are drawing a 3D scene. Every point is represented by, say, an (x,y,z) coordinate. When you draw a 3D point to a 2D screen, you have to convert the 3D point to a 2D point. The easiest way to do this is to throw out the z: (x,y,z) -> (x,y).

Perspective has the behavior that the farther away something is, the smaller it is. If we are looking along a direction, things also tend to vanish towards a single point the farther away they get. This can be easily implemented in computer graphics by dividing your x and y by z: (x,y)/z. When z is big, the coordinate (x,y)/z tends towards (0,0) the vanishing point.
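That divide-equals-vanishing-point behavior is easy to check in a few lines of Python (illustrative numbers only):

```python
# The perspective divide: as z grows, the projected point (x/z, y/z)
# converges on the vanishing point at (0, 0).

def project(x, y, z):
    return (x / z, y / z)  # the one non-linear step in the pipeline

for z in (1, 2, 10, 100, 1000):
    px, py = project(3.0, 2.0, z)
    print(f"z={z:4d} -> ({px:.4f}, {py:.4f})")
```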

Turns out 95% of the math describing real-time computer graphics is linear algebra (vectors and matrices). Turns out this perspective divide is the only non-linear-algebra thing in a basic pipeline. Also turns out this makes the pipeline a lot more complicated.

That being said, it is clear the geometry pipeline does have a proper perspective divide. Then again, it's a lot cheaper to transform vertices than fill faces.

>>3068028
They got around this mostly by drawing more polygons. It's a trade off; yeah the PSX could render more polygons than the N64, but the N64 could do perspective correct texturing. For instance, an N64 could implement a large rectangular room with 2 polygons representing the floor, whereas the PSX would have to use hundreds. However the PSX may be better at rendering geometrically complex scenes.

Of course, working closely with your hardware you can work around issues, but you can't change the graphics hardware.
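Why subdividing helps can also be shown numerically. A rough Python sketch (the depths and the 200-sample resolution are arbitrary choices of mine): split the span into pieces, interpolate each piece affinely between perspective-correct endpoints, and watch the worst-case texture-coordinate error shrink.

```python
def correct_u(t, z0, z1):
    # perspective-correct u for a span with u0=0, u1=1
    uz = t * (1.0 / z1)
    iz = (1.0 / z0) + t * (1.0 / z1 - 1.0 / z0)
    return uz / iz

def max_error(pieces, z0=1.0, z1=4.0, samples=200):
    # worst gap between piecewise-affine and perspective-correct u
    worst = 0.0
    for i in range(samples + 1):
        t = i / samples
        k = min(int(t * pieces), pieces - 1)   # which piece t falls in
        t0, t1 = k / pieces, (k + 1) / pieces
        u0, u1 = correct_u(t0, z0, z1), correct_u(t1, z0, z1)
        affine = u0 + (t - t0) / (t1 - t0) * (u1 - u0)
        worst = max(worst, abs(affine - correct_u(t, z0, z1)))
    return worst

for n in (1, 2, 4, 8):
    print(f"{n:2d} piece(s): max error {max_error(n):.4f}")
```

Each doubling of the polygon count cuts the error, which is why PS1 games throw geometry at the problem.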

>> No.3068096

>>3067948
OP BTFO

>> No.3068141

>>3068081
>For instance, an N64 could implement a large rectangular room with 2 polygons representing the floor

And it would look like a gigantic smear because of the 4KB texture cache limit.

In reality, both consoles would need to subdivide a large ground into several smaller polygons.

>> No.3068151

>>3067829
>>3067884
>>3067891
It has nothing to do with subpixel accuracy or lack of z-buffer, you are all conflating different image quality deficiencies of the PS1.

The real answer is affine texture mapping, which is used in lieu of a computationally expensive perspective divide operation. See >>3067948

>> No.3068161

>>3068141
Tiling you idiot

>> No.3068167

>>3068151
Strictly speaking it's 2D affine texture mapping.

>> No.3068187

>>3068141
I'm pretty sure that's not true at all. Any N64 emulator can prove this empirically. Take a game like Goldeneye, and toggle wireframe mode in the renderer. There are plenty of large squarish rooms in that game and most use few polygons.

You assume that a texture must be stretched uniformly across a polygon at 1x scale. Textures may be tiled and mipmapped multiple times.

Obviously we're not talking Dreamcast or PS2 quality textures here; but the actual resolution often isn't much different than a PS1 game's. It's the sampling that's different (the PS1's point sampling vs the N64's "cheap" three-point bilinear filtering).

>> No.3068194

>>3068187
>>3068161
>>3068141
tradeoffs are tradeoffs :\

>> No.3068202

>>3068161
>Tiling you idiot

Which still requires subdividing the polygon.

>> No.3068213

>>3068187
I'd say the best programmed PS1 games do have higher texture resolutions than the typical N64 game, but usually N64 games tend to display more "things" as it doesn't burn so many polygons on subdivision - more things also means more textures. So if you're looking at texels inserted into the framebuffer per frame, the consoles are certainly very even, maybe even an advantage to N64 due to the helpfulness of mipmaps (PS1 games will tend to shade distant objects instead).

But if you look at the best programmed N64 games like Conker, they absolutely destroy PS1 in the texture department, at least in some areas of the game, like the haunted mansion.

>> No.3068220

>>3068202
No it doesn't, that's why it's called tiling

>> No.3068374

>>3067948
Cool. Someone should make a Cthulhu game that uses this effect so "angles are all wrong".

>> No.3068378

>>3068374
Going by that line of thought - there were a bunch of Tim Burton-esque titles like Medievil 1-2 on the Playstation that probably looked so great because of this.

>> No.3068425
File: 97 KB, 640x480, 39912-Legend_of_Zelda,_The_-_Ocarina_of_Time_-_Master_Quest_(USA)_(Debug_Edition)-5.jpg

>>3068202
Specify a single square texture, with texture coordinates from (0,0) to (1,1). Map them to a single square polygon (two triangles), but use (10,10) for the top right corner. Set the texture lookup to "wrap". Whenever your hardware looks up coordinates larger than 1, you "wrap" (take the fractional part, effectively) until the value fits into the 0...1 range. 7.6 becomes 0.6 and so on. The result is that this single large square contains 10x10 tiles of the texture. Of course, using other larger-than-one values you can enforce tiling in only one direction. Like (0,0) to (1,3) would result in the texture being repeated exactly three times next to each other, on the same polygon. This is a common texturing mechanism, found not just on the N64, but also in OGL, D3D and so on.
The 3 in the example above was a deliberate choice: in the attached screenshot you can see three sets of stairs leading up to the door. They are drawn on a single rectangular primitive (two triangles), using the described mechanism.
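The wrap rule above boils down to taking the fractional part of the coordinate. A one-line Python sketch of the idea:

```python
# "Wrap" texture addressing: coordinates outside 0..1 keep only their
# fractional part, so corner UVs of (10,10) tile a texture 10x10 times
# across a single polygon.

def wrap(coord):
    return coord % 1.0  # 7.6 -> 0.6; Python's % also wraps negatives

print(wrap(7.6))
print(wrap(10.0))
print(wrap(-0.25))
```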

>> No.3068427

>>3068425
likewise, the whole slab of floor in front of the stairs is a single large rectangle. The tiling is the result of texture coordinates, not the mesh

>> No.3068436

>>3068220
An alternative mechanism to tile a surface is to use only texture lookups within 0..1, and lots of polygons. Tomb Raider is a classic example that does this. If you want to use tiling on the same polygon, your texture must be standalone, and only the same texture will repeat. If you subdivide your surface, and specify the uv coordinates for each tile, you can use a texture atlas (multiple different textures on the same large texture region) and you can trivially do patterns. You will get visible seams between the polygons though, because you can not bilinear-filter across their boundaries.
So, no, the name "tiling" does not, in any form, hint at the mechanism used to tile.

>> No.3068440

>>3068028
all PS1 games will have wobbly polygons (actually, wobbly meshes) to some extent because it lacks sub pixel correction. There are techniques to minimize the distortion through manipulation of the coordinate system but there is no 'cure'

>> No.3068548
File: 210 KB, 2048x576, PerspCorrectionAffineExamples1-1.png

Since this thread is getting all technical here, which method of dealing with affine textures is the best?

I know most PS1 games just subdivide polys to mitigate the problem, but what of other methods like in pic-related?

>> No.3068559
File: 959 KB, 1024x1024, This Is Why Playstation Is Better.gif

.GIF related

>> No.3068560

Coz why would you bother at 320x240 resolution.
It would be a literal waste of performance.
This and texture filtering.

>> No.3068571

>>3068559
That's a terrible example, I don't know why people still use that damn gif.

>> No.3068583

>>3067779
That floor looks like it could be rendered with a couple of flat quads and it wouldn't have any of that distortion.

>>3068571
It's called bait.

>> No.3068743

>>3068559

N64 games are even lower resolution and blurrier than that. It's really absurd.

>> No.3068758

>>3068151
Yeah we know that now. Funnily enough I read this post too >>3067948

Also I was this guy >>3067884 and I never conflated anything with anything. In fact, like you, I disagreed with >>3067829

>> No.3068761

>>3068583
>That floor looks like it could be rendered with a couple of flat quads and it wouldn't have any of that distortion.
But didn't the PS only deal in triangles at low level?

>> No.3068763

>>3068081
What I don't get is how a simple divide operation is more costly than doing all that matrix stuff. If dividing is so essential for 3D then why wasn't that accelerated in hardware too?

>> No.3068767

>>3068763
Matrix stuff is just addition and multiplication, and behaves nicely. Divide is more expensive than multiplication, has a singularity at 0, and the math isn't as nice.
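The cost argument can be made concrete. In this Python sketch (standing in for fixed-point hardware; the function names are mine), an affine span needs one setup divide and then only an add per pixel, while the perspective-correct span needs a divide at every single pixel:

```python
# Per-pixel cost of affine vs perspective-correct spans.

def affine_span(u0, u1, width):
    du = (u1 - u0) / (width - 1)  # one divide at setup...
    u, out = u0, []
    for _ in range(width):
        out.append(u)
        u += du                   # ...then only adds per pixel
    return out

def correct_span(u0, z0, u1, z1, width):
    out = []
    for i in range(width):
        t = i / (width - 1)
        uz = u0 / z0 + t * (u1 / z1 - u0 / z0)
        iz = 1.0 / z0 + t * (1.0 / z1 - 1.0 / z0)
        out.append(uz / iz)       # a divide at every pixel
    return out

print(affine_span(0.0, 1.0, 5))
print(correct_span(0.0, 1.0, 1.0, 4.0, 5))
```

That per-pixel divide is exactly the operation the PS1's rasterizer omits.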

>> No.3068771

>>3068767
You mean how like multiplying 2 whole numbers always gives a whole number but dividing almost always results in a floating point?

>> No.3068781

>>3068571
>>3068583
The point seems pretty straightforward. What's wrong with it?

>> No.3068891

>>3068560
Because it's still obvious as 320x240?

>> No.3068978

>>3068761
Not like it matters, since two flat shaded untextured triangles can easily act as quads. I said quad because there's a GPU function with the same name.

>>3068781
N64's texture is scaled using bicubic filtering and some gaussian interpolation, I don't even know what's going on the PS1's texture or what it's meant to represent in comparison.

>> No.3068981

>>3068559
Oddly enough I like PS1 graphics. For the right games (MGS, TR, and others) the devs use it well to carve out a certain style and look.

>> No.3071670

How do PS1 games have such high res textures when it has less memory than the N64?

>> No.3071696

>>3068081
>>3068440
I jumped the gun with my wording, but basically I meant this. Later PS1 games' wobbling was more or less unnoticeable given that you're playing with dithering, at native resolution or on a CRT as intended.

>> No.3072137

>>3068978
>Not like it matters, since two flat shaded untextured triangles can easily act as quads. I said quad because there's a GPU function with the same name.

I thought the whole essence of the problem discussed in this thread was that reducing to triangles "confused" the texturing algorithm.

>> No.3072242

>>3072137
Anyone who thinks texturing a quad is easier than a triangle doesn't know how texturing is done.

>> No.3072258

>>3072242
gpuBladeSoft software psx video plugin has an option to render to quads.

Drawing as quads is meant to reduce the texturing distortions caused by lack of perspective correction. Picture quality in this case depends on the game's usage of triangle pairs. So, if the game doesn't use them, this option won't have any effect. You can use it safely, since the chances of adding any artifacts are very low.

>> No.3072354

>>3071670
PS1 textures usually aren't very high res, but there are lots of tricks, like reducing color depth, that help.

Compared to the N64 there are a few main advantages:
1) You don't have to dick around with the texture cache, which makes loading textures easy (the PS1 has a texture cache too, and it is smaller than the N64 one, but you don't have to use it)

2) You don't have to worry about decompressing assets from smaller sized cartridges

3) N64 games tend to render more open worlds, and more world means more textures. As you would know, GTA3 has worse textures than other PS2 games for that reason

Usually bad N64 textures are a result of a lack of effort from developers to work within the constraints

>> No.3072384

>>3072354
They also use paletted textures, which have some advantages. For example, consider a source texture that has only 256 distinct colors in a 256 by 256 pixel grid. Full-color representation requires three bytes per pixel, taking 192K of texture data. By putting the distinct colors in a palette, only eight bits are required per pixel, reducing the 192K to 64K plus 768 bytes for the palette.
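The arithmetic above checks out, and is easy to verify in a few lines:

```python
# Memory for a 256x256 texture: truecolor vs 8-bit palette indices.
W, H = 256, 256
truecolor = W * H * 3   # 3 bytes (24-bit) per pixel -> 192K
indexed = W * H * 1     # 1 byte (palette index) per pixel -> 64K
palette = 256 * 3       # 256 entries x 3 bytes -> 768 bytes

print(truecolor // 1024, "K truecolor")
print(indexed // 1024, "K indexed +", palette, "bytes of palette")
```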

>> No.3072392

love me some tech babble

>> No.3072396

>>3072384
TLUTs were also used on N64 but you could only put it in one half of the cache while the texture data would be in the other

>> No.3072412

>>3072242
I keep hearing this argument but research says otherwise
http://www.reedbeta.com/blog/2012/05/26/quadrilateral-interpolation-part-1/

>> No.3073121

Why do N64 graphics looks so weird? I didn't grow up with the console and looking at the graphics now it's bizarre.

Is it because of the filtering?

>> No.3073145

>>3067779
No perspective correction

>> No.3073148

>>3068771
He meant that division is a much more complex operation than multiplication. Think about how in school you learnt to multiply numbers long before dividing them.

>> No.3073164

>>3071670
Textures on the N64 could only be drawn from a pitifully small cache (it was like 4KB, only allowing for 32x32 px textures in most practical cases). On the PS1, textures could be drawn from the 1MB VRAM, allowing texture sizes up to 256x256, or optionally from a 2KB cache a dev could use for more speed.
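Using the cache sizes quoted in this thread (roughly 4KB on N64, 2KB for the PS1's texture cache), a quick back-of-envelope sketch of which square textures fit where at each bit depth (the `fits` helper is mine, not a real API):

```python
# Which square textures fit in a given texture cache?
def fits(side, bpp, cache_bytes):
    return side * side * bpp // 8 <= cache_bytes

for bpp in (4, 8, 16):
    for side in (16, 32, 64):
        print(f"{side}x{side} @ {bpp}bpp:"
              f" N64 cache {fits(side, bpp, 4096)},"
              f" PS1 cache {fits(side, bpp, 2048)}")
```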

>> No.3073169

>>3073164
In practice though, textures on the PS1 were usually 64x64 since they have to share that VRAM with the frame buffer. The textures also had the option to be palettized (4/8-bit) to save memory.

>> No.3073191

>>3073164
Texturing from VRAM isn't fast though. Ideally you do want the textures in a cache. VRAM texture load is only good for large block loads.

Hence why the better N64 games actually tend to use more textures as compared to even the better PS1 games which usually combine a few high res textures with gouraud shading everything else.

>> No.3073193

>>3072242
*shrugs*
I'm just going by this explanation >>3067948. Makes sense to me...

>> No.3073225

>>3072258
Does any PSX game use quads?

>> No.3073235

>>3073225
Some games "technically" use quads, but not like on the Saturn and they're still triangulated when rendered (like on any other modern 3D system). I'm not sure how that plugin's quad option magic works though.

>> No.3073247

>>3068374
That's a fuckin quality idea, pal. I'd love to see this.

>> No.3073278

>>3073235
>I'm not sure how that plugin's quad option magic works though.

It's a software plugin, you can do whatever you want.

>>3073191
Spyro used texture caching the same way the N64 did, with mipmapping and all that.

>> No.3073352

>>3073278
>Spyro used texture caching the same way the N64 did, with mipmapping and all that.

Seems a bit unlikely considering
A) the PS1 doesn't support hardware mipmapping (you can still do software controlled LOD though)
B) the PS1's texture cache is even smaller (it's 2KB)

>> No.3073358

>>3073352
Yeah it was entirely done in software with prebaked 64x64 and 32x32 textures. You can check the VRAM or even the game files since they're uncompressed.

>> No.3073425

>>3073358
I'm guessing that the 64x64 came from the VRAM and the 32x32 came from the texture cache?

That's pretty clever actually.

>> No.3073427

>>3073425
No, both are 4bit textures and go through the texture cache.

>> No.3073429

>>3072384
Do PS1 games not share a single palette between all textures?
How many total colours are possible given enough textures that each have unique palette entries?

>> No.3073481

>>3073429
You have to draw CLUTs on the VRAM itself and specify the coordinates when calling each texture, so it's entirely up to how you manage the given 1024x512 space (15bits per color).

15bit textures with no CLUT are also allowed.
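Those 15-bit colors are 5 bits each of R, G and B packed into a 16-bit word, with R in the low bits and the spare top bit used as a mask/semi-transparency flag. A small Python sketch of the packing (field order per the common description of the PS1 format; treat it as illustrative):

```python
# 15-bit color in a 16-bit word: M BBBBB GGGGG RRRRR (M = mask bit)
def pack15(r, g, b):
    # r, g, b each in 0..31
    return (b << 10) | (g << 5) | r

def unpack15(word):
    return (word & 31, (word >> 5) & 31, (word >> 10) & 31)

print(hex(pack15(31, 0, 15)))
```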

>> No.3073601

>>3073481
So 15-bit it is.
I assume it can't do 16, but would it even be worth it? It's nice fitting two bytes per pixel/texel exactly, but in the case of 16-bit PC games, everything looks so green in darkness due to that colour having an extra bit.
Isn't this just the textures, though? Can it apply lighting and effects to achieve a 24-bit framebuffer?

>> No.3073627

>>3073601
Every render command of the GPU can only act in 15bit terms, although you can state 24bit values for untextured polygons and all that (that goes for lighting and effects as well - as long as they're there, they will be dithered automatically). You can kinda draw 24bit framebuffers by directly accessing the VRAM, but almost no game did this I think.

>> No.3075293

>>3073429
>>3073601
The PS1 can do 24bit color, but it's mostly used for opening logos and I think mdecs.

>> No.3075573

>>3075293
The PS1 in practice can only output 24bit color in MJPEG or JPEG mode. I'm not sure if there's a way to force the mode in gameplay though.

I know for certain that on N64 you can display 24bit color gameplay because Quake 2 does it (with Expansion Pak).

On another note, I find it particularly amusing that SGI did some funky maths to claim that N64's 16 bit color mode was actually equivalent to 21 bit color due to the dither filter and coverage values. 3dfx did a similar thing with the Voodoo 3 I think.