
/vr/ - Retro Games

File: 182 KB, 1067x867, 46221-1544831305-62299445.png [View same] [iqdb] [saucenao] [google]
5230316 No.5230316 [Reply] [Original]

Enhanced super-resolution generative adversarial networks, or ESRGAN, is an upscaling method that takes a low-res image and adds realistic details to it. By doing this over several passes with the goal of fooling its adversarial part, it will usually produce an image with more fidelity and realism than past methods. I have upscaled the textures in Morrowind to four times the vanilla resolution using ESRGAN. Below you can compare various models' results to the original (HR).

https://www.nexusmods.com/morrowind/mods/46221?tab=description
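Roughly, one upscale boils down to a single forward pass through the trained generator. A minimal sketch, assuming the public xinntao/ESRGAN repo layout; the RRDBNet_arch import, the 23-block/64-feature defaults, the model path and the texture filename are assumptions for illustration, not anything specific to this mod:

import cv2
import numpy as np
import torch
import RRDBNet_arch as arch  # architecture module shipped with the ESRGAN repo (assumption)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# 23 RRDB blocks, 64 features, growth channel 32 -- the repo's published defaults.
model = arch.RRDBNet(3, 3, 64, 23, gc=32)
model.load_state_dict(torch.load('models/RRDB_ESRGAN_x4.pth'), strict=True)
model.eval().to(device)

img = cv2.imread('tx_wall.png', cv2.IMREAD_COLOR).astype(np.float32) / 255.0  # HWC, BGR
lr = torch.from_numpy(np.transpose(img[:, :, [2, 1, 0]], (2, 0, 1)))          # CHW, RGB
lr = lr.unsqueeze(0).to(device)

with torch.no_grad():
    sr = model(lr).squeeze(0).clamp_(0, 1).cpu().numpy()

sr = np.transpose(sr[[2, 1, 0], :, :], (1, 2, 0)) * 255.0                     # back to HWC, BGR
cv2.imwrite('tx_wall_x4.png', sr.round().astype(np.uint8))

The adversarial training is what bakes the "realistic detail" in; at inference time each texture is just one deterministic pass like this.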

>> No.5230328
File: 596 KB, 1885x937, 46221-1544880920-600360147.png [View same] [iqdb] [saucenao] [google]
5230328

AI is going to make remasters/HD versions an automated process

>> No.5230338

The idea is great. This particular execution of it though, meh.

>> No.5230342

>>5230328
Nice

>> No.5230343
File: 68 KB, 512x512, cat remaster.jpg [View same] [iqdb] [saucenao] [google]
5230343

Well it can't do worse than what modders are currently doing.

>> No.5230351
File: 23 KB, 123x126, 2014-07-19_01-35-38.png [View same] [iqdb] [saucenao] [google]
5230351

>>5230343

>> No.5230358
File: 305 KB, 1067x867, 46221-1544831305-62299445.png [View same] [iqdb] [saucenao] [google]
5230358

>>5230316
You could have done this with waifu2x long ago.

>> No.5230363

>>5230358
that just looks like a 2xSAI filter though

>> No.5230367
File: 193 KB, 619x597, 1402992400802.jpg [View same] [iqdb] [saucenao] [google]
5230367

>>5230316
Just getting this out of the way.

>> No.5230372

>>5230358
Except that's a scaling filter and looks fucking awful. Makes everything look like a painting.

>> No.5230384
File: 66 KB, 1024x500, 1545167320698.png [View same] [iqdb] [saucenao] [google]
5230384

doom sprite with ESRGAN

>> No.5230392

>>5230384
That's actually quite convincing. Can't wait to see the end result in games.

>> No.5230396

>>5230392
IIRC in Doom's case the final output is heavily touched up by hand after the AI upscaling

>> No.5230397

>>5230372
You're saying "painting" like it's something bad.

>> No.5230401
File: 48 KB, 400x375, tumblr_peyvwpef2U1xvyxl5o2_400.jpg [View same] [iqdb] [saucenao] [google]
5230401

final fantasy VII in 4k, automated

>> No.5230404

>>5230401
Is there a higher res image of this?

>> No.5230405
File: 1.06 MB, 1280x1280, ffvii.jpg [View same] [iqdb] [saucenao] [google]
5230405

>>5230401

>> No.5230406
File: 2.44 MB, 1920x1080, xCKiPVn[1].png [View same] [iqdb] [saucenao] [google]
5230406

>>5230397
yes, because the game isn't supposed to look like a painting, and it just looks goofy

>> No.5230408

>>5230397
You don't want everything to look like a painting, for example things that are not supposed to be paintings.

>> No.5230409
File: 748 KB, 1280x960, tumblr_peyvwpef2U1xvyxl5o3_1280.jpg [View same] [iqdb] [saucenao] [google]
5230409

>>5230404

>> No.5230416

>>5230405
Now instead of everything looking like a painting, everything looks like leather.

>> No.5230418
File: 530 KB, 1280x1200, 4x.jpg [View same] [iqdb] [saucenao] [google]
5230418

4x of original

>> No.5230419

>>5230409
Dunno. Kinda easier for your brain to fill in the lack of detail than to ignore the strange texture.

>> No.5230421

>>5230419
There are better architectures for GAN. It will be possible to fix all of those oddities and make the style match soon.

>> No.5230429

>>5230406
That's a different algorithm.

>> No.5230437

>>5230316
doesn't exactly look 4K, but impressive nonetheless

>> No.5230516

>>5230316
Not bad, but too noisy. Looks like they upscaled it, printed it, then scanned it back in. But >>5230328 isn't half bad; with some extra touch-ups, mostly to the Daedric font, it could be great.
>>5230409
>>5230418
>>5230421
Fuck, for that matter, it wouldn't even be that much of a stretch to manually fix these up. Makes me wonder how Resident Evil 1-3's backgrounds would do with this.

>> No.5230526
File: 435 KB, 593x539, 1545168507751.gif [View same] [iqdb] [saucenao] [google]
5230526

>>5230516

>> No.5230553

>>5230316
A step in the right direction, but you can't magically create accurate information from nothing. It will always take artists. Right now it fills some areas with undesirable nonsense and too much noise.

>> No.5230558

interested in seeing fallout sprites touched up

>> No.5230563
File: 310 KB, 488x782, doomlet.png [View same] [iqdb] [saucenao] [google]
5230563

>>5230384
>>5230316
Seems like some Doomlets are dumb as a rock

>> No.5230571

>>5230316
Hey that's nice, another skillset replaced by automation.

>> No.5230574

>>5230316

Where can I get this ESRGAN?

>> No.5230589

>>5230574
https://kingdomakrillic.tumblr.com/post/178254875891/i-figured-out-how-to-get-esrgan-running-on

>> No.5230613
File: 377 KB, 500x527, tumblr_penyfqaQpc1xvyxl5o5_500.gif [View same] [iqdb] [saucenao] [google]
5230613

ff9 background

https://i.4cdn.org/v/1545176658840.jpg

>> No.5230618

>>5230613
That's not FF9 that's 7.

>> No.5230632

Sorry man, it looks bad. What is with that weird texture it puts over everything?

>> No.5230638
File: 1.24 MB, 3000x1600, 1545174941390.jpg [View same] [iqdb] [saucenao] [google]
5230638

top right is original

>> No.5230646

It seems like half of these look perfectly natural and the other half look like complete shit

>> No.5230654

>>5230618
the link at the bottom of the post is to an image too large for /vr/

>> No.5230659

>>5230632
>>5230646
It needs more training and more iterations to go through. You also need to do some touch-ups / cleaning by hand; it's not fucking magic or something, but it's faster than doing everything from scratch.

>> No.5230660
File: 26 KB, 194x237, yuck.jpg [View same] [iqdb] [saucenao] [google]
5230660

>>5230613
LOOKIN GOOD

>> No.5230670

>>5230659
I understand that. But on some of these, manually touching up would seem to be a hell of a lot of work.

>> No.5230673

>>5230670
Photo work is like kid stuff to professional fx artists

>> No.5230679
File: 230 KB, 592x500, vjyyvg.png [View same] [iqdb] [saucenao] [google]
5230679

AI enhanced

>> No.5230681
File: 28 KB, 148x125, original.png [View same] [iqdb] [saucenao] [google]
5230681

>>5230679
original sprite it was created from

>> No.5230683

>>5230409
>>5230418
These two are very impressive. Damn.

Can somebody take a screenshot of the first level of Donkey Kong Country and run it through this?

>> No.5230686

>>5230683
Yeah, the thing is you could get that type of result on every image with just a better GAN. It's not a huge leap to doing so either.

This isn't close to the "best" way of doing this, but it needs more attention from people to start having an impact on retro games.

>> No.5230731
File: 539 KB, 1440x900, f1.jpg [View same] [iqdb] [saucenao] [google]
5230731

>>5230316
Yay, finally we can have Fallout 1 in 1080p in the original interface where you can't see 10m in front of you and you have to walk across every square meter of the map to figure out where things are
Pretty good algorithm though.

>> No.5230826

>>5230316
>>5230589
Thanks, got it working. It's really good to give you a base when you upscale artwork, but the output still needs extensive adjustments by hand, i.e. painting all the errors and artifacts out and correcting what the algorithm got wrong.

>> No.5230835
File: 170 KB, 640x400, 1545221016268.png [View same] [iqdb] [saucenao] [google]
5230835

640x400 original

>> No.5230836

but i like my games blurry

besides, I don't care how high-res your textures are, they're gonna look like ass on a 200-polygon character with clunky animation

>> No.5230843
File: 3.45 MB, 2560x1600, upscaled.jpg [View same] [iqdb] [saucenao] [google]
5230843

>>5230835
properly done upscale via AI; no artistic touch-ups, all algorithms/AI

>> No.5230849

>>5230843
Uh, I still see A LOT of potential for adjustments and touchups.

>> No.5230852

>>5230849
I meant nothing was hand-touched,

not that nothing could be improved.

Also, economics matter. A "free" upscale is very different from something requiring human work in terms of applications.

>> No.5230856
File: 39 KB, 384x313, huh.png [View same] [iqdb] [saucenao] [google]
5230856

>>5230843

1. haloing here and there and everywhere
2. introduces some weird noise patterns

What's the point if it looks like hot garbage?

>> No.5230898
File: 180 KB, 472x724, 1545217193441 (1).png [View same] [iqdb] [saucenao] [google]
5230898

>>5230856
this one worked pretty well

THE LINK, NOT THE IMAGE POSTED:
https://i.4cdn.org/v/1545220469360.jpg

>> No.5230920

His skin looks like a basketball. Pretty shitty results

>> No.5230924

>>5230836
This. What’s the point of adding HD textures to a model made of seven doritos

>> No.5230925

>>5230316
>Can be used for any game
Wow, even Adventure on the Atari 2600?

>> No.5230932
File: 35 KB, 634x406, 3B2660BA00000578-0-image-m-111_1481153735473.jpg [View same] [iqdb] [saucenao] [google]
5230932

>>5230924

>> No.5230938

>>5230898
Now it just looks like something from Monkey Island

>> No.5230946

some of these look really fucking good
the others look like a Photoshop filter

>> No.5230952
File: 177 KB, 420x315, 1464927147370.png [View same] [iqdb] [saucenao] [google]
5230952

>>5230526
BY THE DEATH!

>> No.5230953

>>5230932
O-ok...that still doesn’t address the issue of an HD texture being applied to extremely low-poly models. HD textures floating in a black void isn’t how 3D games are rendered

>> No.5230971

>>5230953
The point was it will eventually improve the mesh as well.

>> No.5230972

It is pretty impressive but it's only really useful if the original textures are shit to begin with

>> No.5231010

>>5230971
more like a decade away; you'd have to re-render the game in real time to do the mesh.

>> No.5231014

Someone do this with SOTN or Aria of Sorrow.

>> No.5231084
File: 31 KB, 449x546, 1469802018068.jpg [View same] [iqdb] [saucenao] [google]
5231084

>>5230358

>> No.5231086

>>5230316
I would much rather use the mods to get better heads and bodies than try to polish up the original horrible body/face textures on the original horrible body/head models.

>>5230397
For the context? Yes, that shit would look fucking awful in Morrowind.

>> No.5231091
File: 2.47 MB, 3072x2880, ff7.jpg [View same] [iqdb] [saucenao] [google]
5231091

AI artist enhanced this FF7 background

>> No.5231093

>>5230384
That actually doesn't look that bad. I still very much prefer the low res crunchiness of the original textures and sprites though.

>> No.5231094
File: 212 KB, 512x480, 1545233172767.jpg [View same] [iqdb] [saucenao] [google]
5231094

>needing artists anymore

The AI ARTIST is a MASTER

here is original

>> No.5231106

>>5230563
I think he's referencing that people have used algorithms and filters before and the results have always been atrocious.
Not to mention the big stack of abandoned HiRes projects, which all looked terrible at every stage.

>> No.5231109

Give me Super Mario RPG or give me death

>> No.5231113

>>5231106
yeah, people don't understand the advantage of AI

This isn't a half-baked HD project requiring 3 years of effort.

This is a functional tool that is getting better and better. Once it is good it can be used on pretty much every single game and achieve similar results on all of them.

People are so used to human work that they don't understand the scale advantage of an AI process. Generative / synthetic content is a huge holy grail for video games, retro or not.

>> No.5231145

y'all blind, looks like shit

>> No.5231149

>>5231113
You can tell who is and isn't an artist in this thread; no rational person who's ever worked in Photoshop would tell you that isn't at least a better starting point for repainting higher-res textures. Sure, there is a problem in the OP's image with the texture on the skin looking a bit like a carpet weave, but that can all be polished a lot faster and easier than having to go in and create a round eyeball from a few pixelated squares, for instance.

>> No.5231180

>>5231091
Still needs tons of touchup and corrections, don't kid yourself

>> No.5231196

does anyone here want to try this with Quake 2 skyboxes?

>> No.5231201

>>5231180
I'd rather play FFVII with those upscaled backgrounds than vanilla

>> No.5231203

>>5231201
it makes me wonder how advanced the FFVII modding community is.

RE2 and 3 still need the Sourcenext versions, since they're better for modding

>> No.5231212
File: 328 KB, 431x450, ccc.png [View same] [iqdb] [saucenao] [google]
5231212

>>5231201

>> No.5231227

>>5231180
That may be, but this is a great start. Saves a lot of effort in the broad strokes.
And there's no telling how much better these AIs will get. They could top out here, or get much much better. Even if they don't progress in their current form, what's been learned from making them will be used in the next generation, and so forth...

>> No.5231234

>>5230405
Terrible example.

>> No.5231235

>>5231212
Cringe

>> No.5231269

>>5231235
Truth

>> No.5231279

Some of these textures look good in isolation, but they all look awful in game.

About the only game I can think of where hi-res textures could work is Mega Man Legends, and that game doesn't really need a facelift.

>> No.5231357

>>5231279
I disagree. Most of the environment textures look good. The only ones that fail hard are some of the face textures, because the UVs stretch pixels across more screen space, which was less noticeable when the texel density was lower.

>> No.5231395

>>5230932
Imagine a point in the future where we can neural interface with a computer and dump the contents of our brain at the point of death so that we may live on as a computer generated 3D avatar created from the scan of a family photograph.

That’s some real cyberpunk shit, I wonder if it could ever happen?

>> No.5231398

>>5231094
AI will surpass mankind in every possible way creatively once it has rid itself, and the world, of jewish presence and influence.

>> No.5231460

Alright, could someone tell me how I make this work? I want to test it with some stuff

>> No.5231464

https://kingdomakrillic.tumblr.com/post/178254875891/i-figured-out-how-to-get-esrgan-running-on

>> No.5231478
File: 144 KB, 248x248, why.png [View same] [iqdb] [saucenao] [google]
5231478

Also I ran the cat through without interpolation
Not a good idea to be honest

>> No.5231486
File: 31 KB, 500x385, homer trashcan.jpg [View same] [iqdb] [saucenao] [google]
5231486

>>5231478

>> No.5231513

>>5231478
>autism
you still need to correct the details
this would be great for stuff like RE games, 3d fps shooters and such with modding capabilities

>> No.5231706

>brainlets talk about overlord ais
>while showing the results of a 2d cartesian plane

it's basically an excel spreadsheet, nippas. Basic calculators can do the same thing. Talk to me when computers learn how to love a human being so I can finally understand how it feels to be loved by someone else.

>> No.5231750

>>5231091
fucking comfy/10

>> No.5231761

>>5231010
None of this would have to be done in real-time; one monkey and a computer can do it then upload it for everyone else to use. Keep in mind you're also looking at a 3D model with textures being lifted from a 2D image (or maybe more than one, but nonetheless still 2D).

>> No.5231795

>>5231761
emulators would have to add in support for it.

>> No.5231806

>>5231795
I don't want to talk out of my ass, but couldn't games that are already hackable be fixed? It's not like you can't find a few thousand shitty YouTube videos of shitty hi-res packs in emulated games (though I don't know what limitations might lie ahead if models are changed drastically). Anyways, this stuff can all be done in advance so long as you can get the game to be read by an emulator. You're not going to run into as many hardware restrictions with an emulator.

>> No.5231880

>>5231761
>>5231806
Yeah, this couldn't be done on-the-fly like some other kinds of filters. This would need to be done by a developer with access to the source code at the very least, since the games themselves don't support textures this large anyway.

>>5231706
I love you, Anon.

>> No.5231904

>>5231880
Would an emulator be able to enable them?

Just 2xing every coordinate or whatever

>> No.5231924

>>5231904
It would be running an AI controlled filter over every single frame in real time. It's theoretically possible but it would be tough to actually implement. The normal filters in use nowadays require next to no intelligence or processing power to achieve compared to this.

>> No.5231943
File: 1.56 MB, 1640x1476, helia peppercats will die in your lifetime.jpg [View same] [iqdb] [saucenao] [google]
5231943

>>5231478

Better version. The compression in the original pic ruins it a bit.

>> No.5231959
File: 776 KB, 1640x1476, og helia.jpg [View same] [iqdb] [saucenao] [google]
5231959

>>5231943

Original at 4xNN for comparison

>> No.5231967

>>5231924
I meant with 2x scaled up sprites on everything not realtime. It'd have to double any drawing calls.

>> No.5232036

>>5231959
>>5231943
Interesting, but still not very visually appealing.

>> No.5232105
File: 518 KB, 1344x1216, LANivA7.jpg [View same] [iqdb] [saucenao] [google]
5232105

FFVIII

1

>> No.5232107

>>5230731
this

>> No.5232110
File: 607 KB, 1920x1408, JFD22FU.jpg [View same] [iqdb] [saucenao] [google]
5232110

2

>> No.5232113

Fallout Mods when?

>> No.5232114
File: 1.80 MB, 2688x1856, Winhill-magic.jpg [View same] [iqdb] [saucenao] [google]
5232114

3

>> No.5232117
File: 1.29 MB, 1664x2496, Winhill-3-magic.jpg [View same] [iqdb] [saucenao] [google]
5232117

4

>> No.5232120
File: 1.06 MB, 1920x1472, Winhill-6-magic.jpg [View same] [iqdb] [saucenao] [google]
5232120

5

>> No.5232121
File: 2.23 MB, 2496x1664, Winhill-unused-magic.jpg [View same] [iqdb] [saucenao] [google]
5232121

6

>> No.5232123
File: 616 KB, 1280x1920, EBg6Rwc.jpg [View same] [iqdb] [saucenao] [google]
5232123

7

sand looks bad here

>> No.5232124
File: 578 KB, 1536x960, umGS1Fv.jpg [View same] [iqdb] [saucenao] [google]
5232124

8

this one shows well the limits of this tool so far, pretty bad result

>> No.5232126
File: 489 KB, 1280x896, H2ONS5G.jpg [View same] [iqdb] [saucenao] [google]
5232126

9 and last

this one looks good in isolation as a painting-like image, but would look bad in game.

>> No.5232150

In a way, it is its own art style, but another thing it is doing is getting the artist close to complete with an upscaling. Some of the "sand" doesn't look like sand, or could with some touch-up. Some face textures OP posted are also pretty good, but there does need to be some more clarity in places. Some hard lines need to be clearer where the lips meet and around the nose, and the eyebrows could look like hair at that resolution. It looks like it would make a nice skin texture overall, however.

>> No.5232590

>AI created

holy fuck when will this meme die

>> No.5232604
File: 239 KB, 1284x1080, lets enhance doom.jpg [View same] [iqdb] [saucenao] [google]
5232604

How the hell did that Doom modder pull it off? I've tried several times to upscale Doom sprites with neural networks and it always came out like shit. Even Let's Enhance, which is a paid service, just looks like somebody scanned a 20 year old Doom preview in a game magazine. https://imgur.com/a/9wc7Ol3

>> No.5232621

>>5232126
>https://imgur.com/a/9wc7Ol3
>>5232124
>>5232123
>>5232121
>>5232120
>>5232117
>>5232114
>>5232110
>>5232105

None of these look like something I would want my name associated with

>> No.5233403
File: 215 KB, 804x500, 1545274752282.png [View same] [iqdb] [saucenao] [google]
5233403

>>5232621

>> No.5233428

>>5233403
You honestly think that looks good?

>> No.5233442

>>5233428
looks like the GAN or whatever got tricked by the metal texture into thinking it was supposed to draw something sci-fi/futuristic, hence the fuckin Daft Punk helmet lol
Isn't it called "no free lunch" or something, where when you find a super-res algorithm that works amazingly on one game, it's gonna look like garbage on the next game

>> No.5233451
File: 1.10 MB, 1157x1250, dash hd render.png [View same] [iqdb] [saucenao] [google]
5233451

>>5233403
The original is clearly a 3D model render, why not just re-render it in HD using tech such as RenderMan?
Update the models to be slightly more detailed in geometry, then add shaders for hair and metal.
It would make the hair look amazing on the horse and helmet, and no doubt the metal armor and shield would look really great too.

>> No.5233468

>>5233451
The original sources/models for the sprites were done in 3D Studio Max, but yeah... you really think that 99% of companies that do remasters care enough to go through all this?

>> No.5233594

>>5233403
looks cool except for the horse's legs.

>> No.5233618

>>5230405
This reminds me of one of those shaders I played around with on epsxe like a decade ago.

>> No.5233627

>>5233468
It just seems no more complex than paying a code bounty commission for someone to make a script that automates either:
- Opening all the old 3DS Max projects
- Re-rendering them in 3DS Max at high res + framerate
or
- Exporting all the old projects to the appropriate format for RenderMan; auto-assigning metal, fabric and hair materials as needed.
- Re-rendering them in RenderMan at high res + framerate

Sure one might have to open the RenderMan projects before the final step and adjust some of the materials, eg. hair, but it still seems like far less work than running the old renders through AI and then having outputted images that require a large amount of additional manual work to be done.

There's a video somewhere showing Pixar using RenderMan to adjust hair/fur on characters, it takes them a couple of seconds to get a result that looks ready for a movie.

>> No.5234423

>>5232123
sand looks like a mass of briars over sand, the AI exaggerated the craggy/cracked texture completely

>> No.5234581

>>5230638
Same algorithm or is this waifu2x or something

>> No.5234636

>>5231093
Cringe

>> No.5234837
File: 180 KB, 813x461, Screenshot from 2018-12-20 18-34-41.png [View same] [iqdb] [saucenao] [google]
5234837

Started training it on the Manga109 dataset: http://www.manga109.org/en/

Don't use it. Some random images were greyscale-only and ESRGAN needs RGB images. I had to split them into 128x128 tiles and cut down the batch size to get it all to fit on my 6GB card. No clue if it's possible to train on Windows.
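The prep step described above is mostly grunt work. A rough sketch of it with Pillow; the paths and tile layout here are placeholders, not the exact script used:

import os
from PIL import Image

SRC, DST, TILE = 'Manga109/images', 'train_tiles', 128
os.makedirs(DST, exist_ok=True)

for name in os.listdir(SRC):
    img = Image.open(os.path.join(SRC, name)).convert('RGB')  # force greyscale pages to 3 channels
    w, h = img.size
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            tile = img.crop((x, y, x + TILE, y + TILE))
            tile.save(os.path.join(DST, f'{os.path.splitext(name)[0]}_{x}_{y}.png'))

Smaller tiles plus a smaller batch size is the usual way to squeeze GAN training into 6GB of VRAM.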

>> No.5234946

post something and I'll memeGAN it.

>> No.5235024
File: 443 KB, 640x480, tex1_640x480_74037e9c860acf33_5_by_aloooo81-d8s5pbc.png [View same] [iqdb] [saucenao] [google]
5235024

>>5234946
Here. Would be incredible if it spits out something halfway decent like other pre-rendered backgrounds posted. But I suppose I shouldn't hold my breath.

>> No.5235036
File: 2.34 MB, 3840x2880, street_output.jpg [View same] [iqdb] [saucenao] [google]
5235036

>>5235024
It's not a huge fan of text but with minimal retouching this would be somewhat doable.

>> No.5235054

>>5230343
SOMEONE DO THIS

>> No.5235076
File: 23 KB, 384x224, hsf2j.png [View same] [iqdb] [saucenao] [google]
5235076

>>5234946

>> No.5235093

>>5230343
>>5235054
now in 6k.
https://my.mixtape.moe/ufupzw.jpg

>>5235076
bad image; also it doesn't really like overly sprited things.

>> No.5235172
File: 1.40 MB, 2432x1568, Clipboard #6_rlt.jpg [View same] [iqdb] [saucenao] [google]
5235172

>>5234837

I thought it was gonna take days to train, but this took 90 minutes. Maybe it does need that time to bring out more detail, but I like the subtle approach. I'm gonna let it do its thing overnight and release the model when it's done.

>> No.5235175
File: 1.44 MB, 2432x1568, comparison.jpg [View same] [iqdb] [saucenao] [google]
5235175

>>5235172

For comparison, left is the default PSNR network, right is the default ESRGAN network at 0.8 interpolation.

>> No.5235187

How well would this work with Diablo 2?

>> No.5235205
File: 447 KB, 644x644, tumblr_pc83sk8Wbh1xvyxl5o6_r2_1280.gif [View same] [iqdb] [saucenao] [google]
5235205

>>5235187

I've tried it a few times, biggest issues are hard, pixel perfect edges and preserving the clean, straight lines. Pic related's from Let's Enhance, which uses their own variation of SRGAN.

>> No.5235223
File: 3.73 MB, 3864x3864, soulless.jpg [View same] [iqdb] [saucenao] [google]
5235223

>>5235205
soul vs soulless
>>5235187
Post some textures and let's see. Images like this aren't great because they're highly pixelated due to being in-game screenshots rather than actual textures.

>> No.5235276

>>5235036
Thanks anon. I'm surprised at that; it's actually super impressive, and even factoring in retouching, this would be a major time and effort save compared to doing these from scratch.
>>5235205
This kind of looks like a higher resolution render printed in a magazine and scanned back in. Interesting.

>> No.5235396

Just stop. Every sample posted here looks like shit.

>> No.5235449

>>5235093
>bad image; also it doesn't really like overly sprited things.
Fuck you then, asswipe.

>> No.5235467
File: 2.34 MB, 3840x2880, fflight_output.jpg [View same] [iqdb] [saucenao] [google]
5235467

>>5230405
icky, then again so is this

>>5235449
sorry anon-kyun, here I did this background from Street Fighter to show you how ugly it makes sprites. Waifu would probably have a bit of an easier time I guess, but it'll look like shit whatever you do because it was made with sprites.
https://my.mixtape.moe/fohvbn.jpg

>> No.5236087
File: 20 KB, 258x195, vomiting pepe.jpg [View same] [iqdb] [saucenao] [google]
5236087

>>5233403

>> No.5236094

>>5230316
Is there an online website that can convert the images (like waifu) or does it need to be done via a downloadable program?

>> No.5236103

I've been working on making HD textures for FFXI, and having tried most of these upscaling programs I have to say the results are rarely very good.

At best they are stage one in a process; if you care about the results you still usually have to touch the textures up afterwards manually.

>> No.5236105
File: 1.09 MB, 1280x1280, 1452684737542_rlt.jpg [View same] [iqdb] [saucenao] [google]
5236105

94 epochs later, it stopped because I ran out of hard drive space from all the models it saved.

It's a step up from the PSNR network it initialized from, but someone could do a better job if they knew how to tweak the GAN settings.

>>5236094

Let's Enhance, but you only get the 5 free images before it makes you pay. You can just use different email addresses

>>5236103

Consistency is a big problem with these; there's nothing that makes 100% of the images look better.

>> No.5236113

>>5236105
holy shit what game

>> No.5236119
File: 191 KB, 320x320, 1452684737542.jpg [View same] [iqdb] [saucenao] [google]
5236119

>>5236113

Chrono Cross, original image is here.

>> No.5236136

>>5235396
I think you need to look at this as a process; if people remain engaged it will get better.

To get to a Ferrari, you had to have a guy making a cart with square wheels first.

>> No.5236137

>>5230526

Oh my God, can we use this as an emulator filter? You've finally turned me off nearest neighbour.

>> No.5236139

>>5230679

Ew. Never mind what I said about emulation filters.

>> No.5236142
File: 45 KB, 640x400, pq3-Thats it. Put your teeth on the curb..png [View same] [iqdb] [saucenao] [google]
5236142

How is it with 2D sprites?

>> No.5236160

>>5230316
https://www.youtube.com/watch?v=PupePmY9OA8

Looks kind of uneven. I'm sure the AI could be improved, though.

>> No.5236209
File: 1.46 MB, 1216x924, Clipboard #25_rlt.gif [View same] [iqdb] [saucenao] [google]
5236209

>>5236105

http://www.mediafire.com/file/w3jujtm752hvdj1/Manga109Attempt.pth.zip/file

It's possible that the dataset didn't matter because I badly screwed up the GAN settings. But >decent results are >decent results, I guess.

You can interpolate it with the existing networks, just modify net_interp.py.
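The interpolation itself is tiny: a net_interp.py-style script just blends two checkpoints parameter by parameter. A sketch of the idea (the filenames are examples, not the exact paths used here):

from collections import OrderedDict
import torch

alpha = 0.8  # 0 = pure PSNR network, 1 = pure GAN network
psnr_net = torch.load('models/RRDB_PSNR_x4.pth')
gan_net = torch.load('models/Manga109Attempt.pth')

interp = OrderedDict()
for k, v_psnr in psnr_net.items():
    interp[k] = (1 - alpha) * v_psnr + alpha * gan_net[k]  # element-wise blend of each weight tensor

torch.save(interp, f'models/interp_{int(alpha * 10):02d}.pth')

Lower alpha trades hallucinated detail for fewer artifacts, which is what the 0.8 comparison above is showing.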

>> No.5236328

Is there a reddit community or something where people working on this kind of stuff discuss it?

>> No.5236378
File: 1.08 MB, 1024x896, flashbacks to that MLB The Show glitch.jpg [View same] [iqdb] [saucenao] [google]
5236378

>>5236142

No good. Which makes sense, as it needs high-res images to compare its upscales to, and there are no high-res images of pixel art. Backgrounds sometimes turn out alright.

>>5236328

There's /r/gameupscale but it's not very active.

I searched online and found threads about it on RPGCodex and ResetEra.

>> No.5236437
File: 64 KB, 155x266, ass eater.png [View same] [iqdb] [saucenao] [google]
5236437

>>5236378

>> No.5236549

>>5235205
Ooh I like that one. Diablo 2 deserves an HD update while preserving its artstyle. I can see a bit of an issue with the shadows of the candles though. They get wider and more blobby on top while the original is trying to convey the tip of the candle casting a shadow over its holder. Also the shaft of the bottom one is being absorbed by the base of the top one

>> No.5236616

This has a lot of potential, and I say that as someone who prefers original graphics to stuff like this.

>> No.5236624
File: 1.13 MB, 1089x804, 17_rlt.gif [View same] [iqdb] [saucenao] [google]
5236624

>>5236549

Gave the manga109 model a shot and it turned out better compared to PSNR_x4 and ESRGAN_x4, though an attempt at >>5235205 looked like ass, probably because it was a GIF frame.

High res backgrounds don't mean much without high res characters, though.

>> No.5237106

>>5236328
>>5236378
https://kingdomakrillic.tumblr.com/post/178254875891/i-figured-out-how-to-get-esrgan-and-sftgan

This is a guide for getting it to run on Windows. Haven't tried it, myself.

>> No.5237495

>>5230316
just in case anyone doesn't realise: if you displayed any of these low-res textures on a CRT screen, they would look as good as or better than the interpolations. If you simply squint your eyes at, for example, the left of >>5233403, you can start to see the way it would be smoothed out by the bloom of CRT phosphors (your brain fills in correct interpolations better than the GAN does; i.e. it's a knight with a helmet made of a single curved piece of metal with a hole in the front. Squint and you can see it. You can also see the hair in the horse's tail, not the abominations made by the GAN). The difference is the CRT phosphors give it much more life by having 100x the contrast of whatever you're looking at this on. The highlights of the metal would really glare in your eyes, etc. The look of living.

>> No.5237513

God damn, these are beautiful.

>> No.5237539

>>5230316
I hope someone will use this tech on AM2R

>> No.5237561
File: 364 KB, 484x405, i definitely dont need it.png [View same] [iqdb] [saucenao] [google]
5237561

All of this is great and all, but what about Quake? Does it even work with it?

>> No.5237591

Holy cow.. imagine the old xcom series...

>> No.5237631

Oh man, you have to do some Resident Evil 2 backgrounds with this shit.

>> No.5237643
File: 48 KB, 950x534, 6be6abbc93161c83db99024f6359d758-why-ron-swanson-was-the-worst-character-on-parks-rec.jpg [View same] [iqdb] [saucenao] [google]
5237643

Still trying to convince yourself (and others) that all this shit looks passable in any way?

>> No.5237646

>>5237561
Probably with the skyboxes, I don't see the point in enhancing those textures.

>> No.5237647

>>5237643
Looks better than all the other filters available, so yeah. Why are you so angry about this?

>> No.5237678
File: 358 KB, 1290x1035, pain2.jpg [View same] [iqdb] [saucenao] [google]
5237678

>>5237561

Couldn't find any Quake models online, so I grabbed a Quake 2 model from models-resource.

I used SFT-GAN over ESRGAN as it does somewhat better with game textures (despite being trained on outdoor photos, I wonder why).

>> No.5237704
File: 184 KB, 309x504, so good.png [View same] [iqdb] [saucenao] [google]
5237704

>>5237678
The gun-arm is objectively the best result of it, hands down. The way the transition from flesh to rifle looks is unbelievable.

>> No.5237740

>>5232604
he made retouches later on and he's also a beta tester on the Nvidia Gameworks AI project

>> No.5237780
File: 57 KB, 292x195, skin.png [View same] [iqdb] [saucenao] [google]
5237780

>>5237678
just open the pak files with PAK Explorer and extract all of the monster and player PCX files
Then convert all of them to TGA format, since Quake reads those as high-res GL textures
>https://www.mediafire.com/file/g0hmf95k9qda8ml/monsters.7z/file

fortunately I still have the Quake2Evolved stuff; it works with Yamagi, Kmq2 and quake2XP
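If anyone wants to script the conversion instead of doing it by hand, something along these lines with Pillow should work; the directory names are placeholders, and Pillow is assumed as the converter (it reads PCX and writes TGA):

import os
from PIL import Image

SRC, DST = 'extracted_pcx', 'converted_tga'
os.makedirs(DST, exist_ok=True)

for name in os.listdir(SRC):
    if name.lower().endswith('.pcx'):
        img = Image.open(os.path.join(SRC, name)).convert('RGB')  # drop the 8-bit palette
        img.save(os.path.join(DST, os.path.splitext(name)[0] + '.tga'))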

>> No.5237790
File: 3.01 MB, 1024x1024, unit1_bk.png [View same] [iqdb] [saucenao] [google]
5237790

>>5237780
try to run SFTGAN on all of them, and put them into baseq2/models/monsters

>>523764
one guy on the /v/ thread tried to run the Q2 env skyboxes through it as a request, but the results were iffy since the TGA skyboxes are ugly
in my personal opinion the PCX ones (software-mode skyboxes) would look better, but then he used ESRGAN

>> No.5237802

>>5237790
meant to quote >>5237646

>> No.5237835
File: 375 KB, 1332x720, im not sure what Ebert was thinking.jpg [View same] [iqdb] [saucenao] [google]
5237835

>>5237790
>>5237780

I'm gonna try training a couple of new networks first, one on game textures and another on CG images, mostly from movies.

>> No.5237842
File: 55 KB, 284x195, grunt.png [View same] [iqdb] [saucenao] [google]
5237842

>>5237835
can you test it 1st on this one

>> No.5237845
File: 1011 KB, 1168x780, quakie.png [View same] [iqdb] [saucenao] [google]
5237845

>>5237780
An attempt was made

>> No.5237853
File: 1.21 MB, 1136x780, HUUH.png [View same] [iqdb] [saucenao] [google]
5237853

>>5237842

>> No.5237858
File: 1.92 MB, 1704x1170, texture.png [View same] [iqdb] [saucenao] [google]
5237858

>>5237842
Upscaled

enhanced
https://my.mixtape.moe/iwzzsw.jpeg

enhance+color
https://my.mixtape.moe/qzjyvg.jpeg

>> No.5237891
File: 834 KB, 1920x1080, q2xp0008.jpg [View same] [iqdb] [saucenao] [google]
5237891

>>5237858
tested with the enhance + color, had to convert the image on GIMP to TGA and well shit!

before
https://i.imgur.com/Lt28BHH.jpg

>> No.5237894

>>5237891
If you can give me a batch of just textures I can probably go through them all

>> No.5237895

>>5237858

What exactly did you use to pull that off? It looks much better than my stock SFTGAN attempt.

>> No.5237902

>>5237895
memeGAN
sorry, not public just yet, although training ESR with a larger and more varied set would most likely have similar if not better results to this spaghetti mess. Or just wait a few months for NVIDIA to release theirs

>>5237891
>>5237894
Also TGA is fine, just preferably a single directory with just the base textures

>> No.5237906
File: 123 KB, 1920x1040, its all on pcx.jpg [View same] [iqdb] [saucenao] [google]
5237906

>>5237894
use the monster pack >>5237780
i'll try to get those tomorrow once i wake up

>>5237902
it's mostly because Quake 2 supports both formats: PCX for low-resolution stock 8-bit textures, and TGA for higher resolution if you use OpenGL.
Modern ports have no texture size limit issues if you use TGA

>> No.5238005

>>5236137
I would assume the actual calculations are far too intense to use in real-time. You could certainly slam the textures through the converter beforehand and load them in place of the originals with an emulator that supported texture replacement, however. Or just replace them directly, in the case of PC games.

>> No.5238007

>>5230553
It's a much better base for artists to go over, though. It means there's a lot less work for them to do in cleaning it up. It's a tool, not the entire toolbox.

>> No.5238020

>>5237906
Done; did every file, so it will most likely look hideous in game. Also no idea if I did the TGAs right. Slight issue is that the entire thing is 311 MB.

Also did no touchups/further enhancing because lazy.

https://mega.nz/#!Ok0B3I6Q!6YUFSffRnx-iBuxrtu_E3_9E4JXVj3AM0pZTholXv-4

may or may not have forgotten to press submit, woops

>>5238005
Yeah, converting beforehand is a much better idea, especially considering how fucked some of the results can look some of the time; even with the fastest implementation possible it'll still take a second or two to do each texture.

>>5238007
This.

>> No.5238052

>>5236437
Fuckin kek m8

>> No.5238228
File: 8 KB, 734x40, upscalererror.jpg [View same] [iqdb] [saucenao] [google]
5238228

I keep getting this stupid memory allocation error when the test.py is set to 'cuda' mode, and trying to upscale anything over ~800x600 or so.

I'm using a GTX 1080Ti, which has 11GB memory.

Anyone know how to set a limit to the GPU mem it uses? Thanks.
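One common workaround, an assumption on my part rather than anything in the stock test.py, is to push the image through the network in tiles, so a single forward pass never has to hold the whole frame's activations in VRAM:

import torch

def upscale_tiled(model, lr, tile=256, scale=4):
    """lr: 1x3xHxW tensor already on the GPU; returns the 1x3x(H*scale)x(W*scale) result."""
    _, c, h, w = lr.shape
    out = torch.zeros(1, c, h * scale, w * scale, device=lr.device)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = lr[:, :, y:y + tile, x:x + tile]
            with torch.no_grad():
                out[:, :, y * scale:(y + patch.shape[2]) * scale,
                          x * scale:(x + patch.shape[3]) * scale] = model(patch)
    return out

Tiles without overlap can leave faint seams, so in practice you'd pad each tile a little and crop the borders off each result.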

>> No.5238241

>>5230405
That's a lotta squares dawg

>> No.5238271

Can you try with Ecstatica?

>> No.5238294
File: 752 KB, 1280x800, shitstatica.jpg [View same] [iqdb] [saucenao] [google]
5238294

>>5238271
Probably too pixelly to go off screenshots that are probably not native res; do you have any in-game textures? Tried it with this one from the wiki at 2x, but the result is... less than great with the current model. Someone should make a model trained on pixel-y stuff, although that sounds like a bit of a nightmare to get right.

>> No.5238682

>>5230358
>unironically suggesting the use of a vaseline filter

>> No.5238740

>>5238294
God, I forgot how hideous that game was.

>> No.5238785

>>5238294
Loved both, great games

>> No.5238851

I'd like to see what it can do for RE4. Those HD texture people have been working on that project for how many years?
>>5237891
Iron Maiden next?

>> No.5238878

>>5237894
Q2 uses the WAL format for map textures; someone at tastyspleen may know how to deal with that
as for the rest, it's just PCX files, besides the skyboxes

>>5238851
look at >>5238020; the pack needs some adjustments. It works on Yamagi and Q2XP, but Q2 models are still Q2 models; someone needs to add more polygons to them
only the skin and pain textures need adjustments; in fact I think that resizing, filtering and then passing them through the AI would give a better result, as in the >>5237678 pic

>> No.5238885

>>5237858
here are the Ground Zero weapon textures
>http://www.mediafire.com/file/m8hy9oibaqv0bmc/rogue.7z/file

>> No.5238901
File: 82 KB, 680x680, f5753870a40ccef114a6cb88e7f48531.jpg [View same] [iqdb] [saucenao] [google]
5238901

>>5230316
>entire thread

>> No.5238909

>>5231091
>that woodgrain TV screen

>> No.5238918

>>5231109
bumping this

>> No.5238934

>>5238901
>>>/r/eddit

>> No.5238967
File: 803 KB, 2720x2720, fug.jpg [View same] [iqdb] [saucenao] [google]
5238967

>>5238901

>> No.5238974

So where do we go from here, as far as improving the AI itself?

>> No.5238981

>>5238974
teach it to recognise context better

>> No.5239097

>>5238974

There's new SR methods coming out every week, though a lot of them are focused on medical imaging.

https://arxiv.org/list/cs.CV/recent

https://www.google.com/search?q="super+resolution"+site:arxiv.org

>> No.5239103

>>5238974
Delete this shit and do it like everyone else does, by hand, you lazy c*nt.

>> No.5239159

>>5238974
all of them will be kinda obsolete once Nvidia Gameworks comes out in 2019
and it's not shilling

>>5239103
>wanting stuff made by hand
>https://www.moddb.com/mods/quake-2-monster-skins
you should think 50 times before spewing shit anon

>> No.5239161

>>5239159
quake 2 needs a model revamp asap

>> No.5239196

>>5239159
If you are unwilling to invest work into something that is near and dear to you, then just stop and go sit in the corner. Others can't help you, and machines are still too "stupid" to replace artists in that field.

>> No.5239238

Command and conquer red alert 2 please please please, or aoe 2? Omg that would be sweet

>> No.5239320
File: 96 KB, 615x593, filters.jpg [View same] [iqdb] [saucenao] [google]
5239320

>>5238934

>> No.5239326

>>5239103
You can swear on 4chan, faggot.
Also inherently, just manually painting up high res sprites, or trying to repaint low res ones as such, is not just labor intensive, but very few people are able to actually do it.

>> No.5239405

>>5239326
So instead you shit everything up by automation, untalented lazy cunt

>> No.5239414

>>5239405
i'd like to see you do any better, untalented lazy cunt

>> No.5239416

>>5239405
I never spoke in favor of it, I think this looks ugly as shit, you cocksucking catamite. I only explained the motive.

>> No.5239610

You really need touch-ups and that Nvidia AI neural shit for this. Way too much grain and wood splattering over everything in this thread.

>> No.5239648

>>5236209
That's not half bad. A skosh noisy.

>> No.5239660

>>5239414
I would never try to make a HD pack for old games in the first place, it's pointless you cuckjuggler

>>5239416
Suck my ass faggot

>> No.5239663

tffff I'm creaming in me pants right now! Is this the final blow to the (((modern))) gaming industry and the indiecuckery?

>> No.5239678

>>5239663
Please piss off. This is a meme and right now some kids on /vr/ and /3/ get delusions of grandeur, thinking they can "remaster" old games by simply shitting low resolution images through an upscaler.

>> No.5239743

>>5239660
Jump up your mom's asshole.

>>5239663
Hah! As if.

>> No.5239993

>>5239678
It's a good start for a lot of projects. A lot of attempts are lost causes, but a neural upscale could cover the tedious parts of painting over a lower-res image. And in some cases it does really well on its own with minimal touch-ups.

All of it is pointless if the games don't have a way to mod the new graphics into the game though. Modern source port or hack of some sort.

>> No.5239994

>>5239663
>>5239678
>two trolls arguing because theyre too stupid to realize theyre both trolling from different angles
>letthemfight.jpg
these results are seriously impressive and would look great with some manual touch ups

>> No.5240087
File: 2.48 MB, 2600x1100, tsukihime 2x.png [View same] [iqdb] [saucenao] [google]
5240087

>>5230372
>that's a scaling filter
Both Waifu2x and ESRGAN are neural network filters. Waifu2x isn't a classic scaler like SAI or hq2x. It's definitely smarter, just compare this. I deliberately didn't enable the JPEG artifact reduction for Waifu2x. The upscaled image would have looked even better if I had.

>> No.5240127
File: 673 KB, 1369x1073, comparison2.png [View same] [iqdb] [saucenao] [google]
5240127

>>5240087
But hq2x still does better on sprite-based graphics with a limited palette.

>> No.5240158

>>5233627
>>5233451
That's assuming the assets aren't lost and didn't use custom plugins that would require you to recreate the exact software setup from the time to manipulate/render correctly.

>> No.5240160

>>5235036

Someone tell the Resident Evil 3 Restoration guy about this!

https://www.moddb.com/mods/resident-evil-3-restoration-project

Might even work on the cleaned up video frames.

>> No.5240180
File: 2.99 MB, 1440x1080, ResidentEvil3 2018-12-22 21-28-08.png [View same] [iqdb] [saucenao] [google]
5240180

>>5240160
wouldn't it be better if they focused on the Sourcenext version, which already has higher-res videos?
also RE3/Sourcenext on PC is a clusterfuck of graphical bugs, while RE2 is perfect

>> No.5240187

>>5240160
Depends on how the images are stored, I could probably shit out an upscaled version in a few hours. Videos are also doable but it depends on the framerate as to how long it'd take me.

>>5240127
Sprite-based things isn't really where GAN's shine. Granted, it'll work for certain things but you'd have an easier time just using an algo like hq2x or doing it by hand, and I don't envision that changing any time soon due to how most/all of the current systems function.

>> No.5240192
File: 535 KB, 1280x960, spicyd.jpg [View same] [iqdb] [saucenao] [google]
5240192

>>5240187
Woops, forgot my image, pulled off google images and then 2x'd. I can increase it more if anyone wants to see what it'd look like..

>> No.5240198

>>5240180
i seriously hope that someday, one guy will be autistic enough to fix the texture and model warping in this game, or get the code from the Gamecube version

>> No.5240208
File: 110 KB, 640x480, arc.jpg [View same] [iqdb] [saucenao] [google]
5240208

>>5240187
Could you apply ESRGAN to this? I want to compare it to Waifu2x.

>> No.5240215

>>5238885
has someone looked at this one?
I'd like to test the results later

>> No.5240218
File: 2.98 MB, 3200x2400, waifucompa.jpg [View same] [iqdb] [saucenao] [google]
5240218

>>5240208
I don't have ESRGAN set up, I use a different thing..

Just did 5x since you didn't specify.

>>5238885
>>5240215
I'll do it in a sec if you want.

>> No.5240221

>>5240218
Thanks. I think that's somewhat worse than Waifu2x, especially the text, but still pretty good.

>> No.5240241

>>5240221
I mean, of course text is going to be bad; this isn't a font-scaling system, and ideally you wouldn't have the text there in the first place. Waifu2x will probably be better for most illustrated things, given that it was trained solely on illustrated things.

>>5238885
>>5240215
done, 6x in line with the monsters.
https://mega.nz/#!2wchFCwb!GpJ6eJiqn0Ts0NEI0A-9nRsTGhxtn3gXKcerqbyWs28

Results aren't fantastic, textures are a bit too pixelated.

>> No.5240250
File: 11 KB, 348x272, proxskin.png [View same] [iqdb] [saucenao] [google]
5240250

>>5240218
can i suggest a quick test?
i had to resize this one 3x with Lanczos and bilinear

the filtered, blurred version with SFTGAN had better detailing than the ones that I sent; maybe if the textures are a bit more blurry, scaled (maybe 1.5x) and filtered, the AI will try to give more detail.

>> No.5240265
File: 1.53 MB, 2560x1920, 1545531830980_rlt.jpg [View same] [iqdb] [saucenao] [google]
5240265

>>5240208

ESRGAN + the Manga109 model. Looks about the same as Waifu2x, which makes sense as that pic is what Waifu2x is supposed to be for. desu it exposes flaws in the drawing and I'd rather keep it 480p

I trained a "Pixar" model using CGI movies and prerendered images, but it looked about the same as the manga model, so I'm moving on to game textures.

>>5239993

It'll likely do even better in the next few years. SRCNN from 4 years ago was the first deep learning SR paper, and it was primitive as fuck, doing about as well as non-deep learning scalers like A+ and dcci2x.

>> No.5240275
File: 2.75 MB, 2048x1536, risc os 4.png [View same] [iqdb] [saucenao] [google]
5240275

>>5240241
>of course text is going to be bad
Right, but I think it's pretty cool how well Waifu2x does on text despite not being trained on it. The only real requirement is that the text must be anti-aliased.

>>5240265
Nice! This is approaching poster quality.

>> No.5240284

>>5240250
Overall a worse result. It might work alright with other models, but not with this one; while some areas do look better due to being blurred to begin with, other areas look far, far worse.

>>5240265
niceu, good luck!

>> No.5240304
File: 1.19 MB, 1920x1080, q2xp0025.jpg [View same] [iqdb] [saucenao] [google]
5240304

>>5240250
quake 2 weapon models and monsters are in desperate need of 3ds/maya/blender mesh optimization

>> No.5240664

>>5240304
Q2 can't use skeletal animation, only vertex animation, so it's always gonna look like trash in motion (same with Doom 3D model attempts)

>> No.5240751

>>5240664
Skeletal animation just moves vertexes based on weights. If it's saving out each vertex's location on export it'll look the same either way.
Doom 3D model attempts look bad because A) everyone who's tried fucking sucks, and B) the monster movement itself is locked to 8 directions and (by default) a low number of frames. You could add more frames, however.

>> No.5240832

>>5240751
I was talking about Quake, not Doom. It will not look exactly the same, since interpolation works differently for vertex and skeletal animation. In skeletal animation you interpolate a few straight, rigid bones, which are easy to interpolate, and the polygon mesh is bound to them, so things that need to be rigid stay rigid.
Frame interpolation does not work that well with vertex animation, so it will always be wonky, because each vertex is interpolated separately or in contextless lumps. Without a skeleton it has no idea what's supposed to be rigid and stay together during the movement, hence the "jello wobble".

Doom supports model formats with skeletal animation, but that does not make it better in any way, since the engine itself can't treat polygonal structures any differently than sprite frames, which is too limiting for models to look good. Also no proper shading.
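A toy illustration of the difference (made-up numbers, just to show why per-vertex lerping wobbles while interpolating a bone keeps the part rigid):

import numpy as np

bind_pose = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])  # three verts of one rigid "bone"

def rot(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

frame_a = bind_pose @ rot(0).T    # keyframe at 0 degrees
frame_b = bind_pose @ rot(90).T   # keyframe at 90 degrees
t = 0.5

vertex_interp = (1 - t) * frame_a + t * frame_b  # vertex animation: lerp every vertex on its own
skeletal_interp = bind_pose @ rot(45).T          # skeletal animation: interpolate the bone, then pose

print(np.linalg.norm(vertex_interp, axis=1))     # distances shrink (~0.71, 1.41, 2.12): the "jello"
print(np.linalg.norm(skeletal_interp, axis=1))   # distances preserved (1.0, 2.0, 3.0): stays rigid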

>> No.5240931

>>5240304
Looks like trash
>>5240751
Do it better yourself

>> No.5240937

>>5240751
>Skeletal animation just moves vertexes based on weights. If it's saving out each vertex's location on export it'll look the same either way.

You clearly don't know how these methods work, do you?

>> No.5240947
File: 58 KB, 441x302, wheeze.png [View same] [iqdb] [saucenao] [google]
5240947

>>5240751
OH NO NO NO

>> No.5241796

>>5230316
does it fix morrowinds horrible models too?

>> No.5241884

>>5239663
it can't create textures from thin air

>> No.5241918

>>5233442
Umm...you do realise that the game is set in a world where sci-fi mixes with fantasy? The Kreegans (Inferno) are literal aliens, angels are most probably highly advanced androids, there are laser guns, space ships etc. So it would not be that far fetched that the developers intentionally made the helmet somewhat futuristic looking.

>> No.5242102

>>5241796
Better Bodies and Better Heads did that like a decade ago.

>> No.5243361

>>5235036

Well, fuck, it's not perfect but god damn am I impressed with this shit.

I'd love to see pre-rendered background games done through this; I'm wondering how well games with animated pre-rendered backgrounds like Blade Runner or Fear Effect would fare with this.

Some of the backgrounds in REmake could do with this treatment too.

>> No.5243367

>>5237678

Holy fucking shit.

>> No.5243484

>>5230731
>>5232107
>Fallout 1
Is this already possible? Or how was the picture made?

>> No.5243848

>>5243361
For animation you obviously would have to do some touching up afterwards or it will look like shit.

>>5243484
that's just an HD mod that increases the view area; the pixel size of everything stays the same, so at 1080p it's all fucking tiny.

>>5236624
>>5235205
these look the most impressive.
However, for it to actually work in game, the algorithm would have to be taught to preserve the palette without adding any extra colors in the cases where software-side palette swaps were used (and they were used for quite a few enemies/decorations)

>> No.5243870

>>5243848
is it that hard to open the MPQ files and replace them with HD stuff

or was Blizzard lazy enough not to add scaling?

>> No.5243884

>>5243870
>lazy
why add it when there was no plan to use it?

>> No.5243886

>>5243884
well, because you can replace sprites in StarCraft

in fact, Remastered is just a paid mod with fancy HD textures

>> No.5243918
File: 1.08 MB, 2560x1600, 1535965813438.jpg [View same] [iqdb] [saucenao] [google]
5243918

>>5230835
>>5230843
using a different model
https://kingdomakrillic.tumblr.com/post/181294654011/new-esrgan-model-attempt

>> No.5243949

>>5236105
>You can just use different email addresses
it blocks disposable addresses tho

>> No.5243952

>>5243918
oh nvm, didn't read till the end of the thread

>> No.5243979

>>5243870
MPQ uses one and the same palette-coded sprite for several things/enemies.
In Diablo 1 some walls in special rooms are recolored - the red brick wall set in the pics above is exactly the same game file as the blue brick wall. The game is just told to substitute one array of colors with another on the fly, which is not hard to do at all processing-wise. This saves both hard drive and RAM space, neither of which was cheap at the time (I first played Diablo 1 on a PC that had just 2 gigs of storage). Some enemy variants use the same technique to produce variation.
This is called optimisation, an art lost to modern developers.

Another example: in Diablo 2 all armor is a composite of 8 pieces (different sprites for legs, pants, body, separate arms, separate pauldrons, gloves and a head), each of which can be plate, mail or leather. The material determines the leather/mail/plate sprite combinations, of which there are quite a lot, and magical properties combined with tier determine the exact palette combinations used for each piece - this results in an amazing variety of possible combinations while having few completely unique sprites.

The algorithm most probably wouldn't even be able to process the encoded image, since it most probably uses basic color combinations to show the palettization - like using green, purple, blue, red etc. to denote differently shaded areas of a sprite. Doing it straight would result in a garbled mess. One would have to palettize the image before presenting it to the algorithm, then run it backwards through the color-coding process again to get the actual result.
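To make the palettization point concrete, a toy sketch (all values made up): the stored sprite is just index data, and the engine maps it through whichever palette the room or monster variant calls for:

import numpy as np

sprite = np.array([[0, 1, 1, 0],
                   [1, 2, 2, 1],
                   [0, 1, 1, 0]], dtype=np.uint8)   # palette indices, not colors

red_bricks  = np.array([[0, 0, 0], [160, 40, 40], [220, 90, 90]], dtype=np.uint8)
blue_bricks = np.array([[0, 0, 0], [40, 40, 160], [90, 90, 220]], dtype=np.uint8)

def render(indexed, palette):
    return palette[indexed]   # HxW index map -> HxWx3 RGB image, recolored on the fly

red_wall = render(sprite, red_bricks)
blue_wall = render(sprite, blue_bricks)

Which is why an upscaler would have to be fed the rendered, palettized output (or be run once per palette) and the result quantized back to indices, exactly as described above.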

>> No.5243985

>>5243918
A composite of both would probably be best. Some parts are better in one, while others are better in the second.

>> No.5243994

>>5243979
This. There's no free lunch with this shit, you have to put in work and effort at some point.

>> No.5244971
File: 92 KB, 256x256, boulders_1_8192.gif [View same] [iqdb] [saucenao] [google]
5244971

Trying to train it on reduced colors by limiting all the LR tiles to 32 colors. That still might be too much, and JPG compression added a small amount of noise back (can't use PNGs with my workflow).

In order: LR image, 8192 iterations (2 hours in), ground truth.
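The color clamping itself is one Pillow call; a rough sketch of that prep step (paths are placeholders, and the JPG re-save at the end is what re-introduces the noise mentioned):

import os
from PIL import Image

SRC, DST = 'lr_tiles', 'lr_tiles_32c'
os.makedirs(DST, exist_ok=True)

for name in os.listdir(SRC):
    img = Image.open(os.path.join(SRC, name)).convert('RGB')
    img = img.quantize(colors=32).convert('RGB')   # clamp each LR tile to a 32-color palette
    # this workflow has to write JPG, which adds a little noise back; PNG would keep it exact
    img.save(os.path.join(DST, os.path.splitext(name)[0] + '.jpg'), quality=95)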

>> No.5245102 [DELETED] 
File: 889 KB, 1024x896, not intended to be an example of good upscaling.jpg [View same] [iqdb] [saucenao] [google]
5245102

>>5244971

The color banding's gone, which no other model I've trained could remove, but the sharpening is not consistent. I don't know if it needs more training, a different dataset (I used the General100 dataset the ESRGAN guy uploaded) or different settings (in which case I'm SOL)

>> No.5245238

>>5230343
first legit lol of the evening
have a (you)

>> No.5245276
File: 1.47 MB, 640x360, Fireblu.webm [View same] [iqdb] [saucenao] [google]
5245276

>>5230384
I want AI nightmare texturing.

>> No.5245297

>>5240192
>JAPANIES

>> No.5245362

>>5230898
Looks like a camera filter in my old Motorola flip phone from 2006

>> No.5245371
File: 19 KB, 156x388, smoothingfilters.png [View same] [iqdb] [saucenao] [google]
5245371

>>5230367

>> No.5245486

I have to wonder if it's possible to completely redo the GC ports of RE2 and 3 using this method.

>> No.5245491

>>5245486

There are PC ports, and those are what you'd use instead. Native Windows programs are better than weird programs on consoles.

The backgrounds are 640x480, so they'd upscale well with this method.

>> No.5245519

>>5245486

https://www.moddb.com/mods/resident-evil-3-restoration-project

They still just need some decent programmers it seems.

>> No.5245720

>>5230856
>MUH COLOR NOISE
It's meant to replicate film grain you dumb-dumb.

>> No.5245810

>>5245720
Cope. It just looks like shit.

>> No.5245832
File: 1.18 MB, 1024x896, dkc_hut2_rlt.gif [View same] [iqdb] [saucenao] [google]
5245832

>>5244971

Reduced it to 16 colors and used the OG ESRGAN model to initialize from instead of the PSNR model, because my first attempt looked like blurry shit. This is not the christmas present I wanted to wake up to.

>> No.5246253

>>5245720
I'll never understand why people want to replicate defects. This is the worst form of baby duck syndrome to me. It's borderline objective; it's just extra random artifacts. It's like taking a painting and throwing sand over it. It doesn't add anything but garbage over the image.

>> No.5246283

>>5245519
Peixoto's dickery was bullshit

>> No.5246297

>>5245519
>no description of what they need
>you can apply via private message
It's no wonder they're not getting any help.
They should have listed what specifically they need. You can't just say "programming" and be done with it, be descriptive.

Do they need tools made, formats reverse engineered, engine hacks? What languages? It's all too vague.

>> No.5246590
File: 443 KB, 1280x688, PredatorFakeM203-3.jpg [View same] [iqdb] [saucenao] [google]
5246590

>>5245720
That doesn't look like film-grain at all. Have you seen a film with grain to it before?

>> No.5246613
File: 344 KB, 1280x688, PredatorM134handheld-2.jpg [View same] [iqdb] [saucenao] [google]
5246613

>>5246253
I can accept film grain (which that doesn't look like), and maybe sometimes lens flares, but I absolutely hate shit such as chromatic aberration. A lens flare is a potentially valid stylistic choice, and grain can be a limitation of the time (or a deliberate callback to film of that era), but chromatic aberration just replicates a flawed lens, or having fucking bad vision.

Chromatic aberration is what someone with astigmatism sees, and they get glasses so they don't. Why someone would intentionally apply that effect to their work when it's objectively bad and ugly bewilders me. I'd almost say it should be illegal.

>> No.5246809

>>5240265
Can someone try training it on Danbooru2017?
https://www.gwern.net/Danbooru2017

>> No.5246981

>>5240198
>PGXP

>> No.5247007

>>5246981
>PGXP
I SAID PC

>> No.5247019

>>5231395
it will. and when it will happen, we will see the biggest mass suicide that has ever happened in the history of man.

>> No.5247029

>>5240198
It's hardcoded into the engine, so it can't be done on PC without a wrapper/injector for an engine with no known source code.
On PSX it's hardware-side, so it can be fixed on a machine that emulates the hardware by altering how it emulates.

>> No.5247030

>>5247007
>>5247029

>> No.5247035

>>5247029
That was fixed in the RE2CLASIC mod for Sourcenext RE2, and even RE1, but they are hesitant to do the same for RE3.

Why do people in the RE community hate RE3?

>> No.5247042
File: 2.81 MB, 260x150, VHS.gif [View same] [iqdb] [saucenao] [google]
5247042

>not playing the games as they were originally intended to be played

You should all be ashamed for hyping up this janky shit.
I bet you listen to audiobooks, read colored manga and get the latest color-graded bluray of movies that weren't color graded back in the day.

>> No.5247053

>>5247042
No, I generally find this stuff ugly.
I do however use VHS tapes as target practice.

>> No.5247056

>>5247042
>baiting

>> No.5247181
File: 956 KB, 1440x1080, tsukihime_2018-05-17_00-59-48_Tsukihime_-_Translated_by_mirror_moon.png [View same] [iqdb] [saucenao] [google]
5247181

>>5240221
You can already have ONScripter display fonts at a higher resolution, and even have them scroll into place at 120FPS. It's a pretty damn nice engine for VNs. Simple as heck and scales well.
This is an example of just scaling fonts to 1080p. Which removes the largest issue in the way for that in particular.

As for replacing textures, it is pretty simple. But I've never tried higher scaled ones. Might take some reconfiguring. Though certainly feasible.

>> No.5247223
File: 251 KB, 580x405, super_mario_64_fountain.png [View same] [iqdb] [saucenao] [google]
5247223

Will we finally learn what it says through the power of AI??

>> No.5247232

>>5247223
You can't magically create missing data from nothing, only generate new detail like edge information, noise, patterns etc.

>> No.5247251

>>5247232
You can keep telling them that, but they'll never learn what you're trying to say.

>> No.5247262
File: 34 KB, 338x450, polybius.jpg [View same] [iqdb] [saucenao] [google]
5247262

>>5247232
Will we finally learn the truth behind Polybius through the power of AI???

>> No.5247347

Nvidia Gameworks dude, if you want to give it a try
>https://www.mediafire.com/file/z3qc23xa31yiesl/Quake2models.7z/file
it has all 3 xpacs

>> No.5247362 [DELETED] 

Is anybody making half life upscaled pack?

>> No.5247376

>>5247042
>I bet you listen to audiobooks
Yeah, it's easier than trying to read a book while running the machines at work.

>> No.5247431

>>5237678
cool!

>> No.5247549
File: 2.00 MB, 1920x1280, 500 epochs of todd.jpg [View same] [iqdb] [saucenao] [google]
5247549

I think I'm screwing up somewhere, because all the models I've trained look exactly like each other, and yet nothing like the RRDB_ESRGAN-x4 model.

This was trained off nothing but a single image of Todd Howard's face at various scales.

>> No.5247553
File: 2.36 MB, 1920x1280, mango.jpg [View same] [iqdb] [saucenao] [google]
5247553

>>5247549

And this is the Manga109 one.

>> No.5247563

how good is nvidia gameworks? I heard it was worse than ESRGAN

>> No.5247635

>>5247563

>inb4 it's shit but game devs stick with it because it's Nvidia and it's easy to use

The big Western devs like to put their own spin on image tech, but Square Enix totally would just run every background through it without even any retouching.

>> No.5247691
File: 1.56 MB, 1920x1280, bg_ff.jpg [View same] [iqdb] [saucenao] [google]
5247691

>>5247549
>>5247553
here is my rendition

>>5247563
>>5247635
It's meant to be fairly decent from what I've heard.
>>5247347
Seems like a lot of files. I'll run it through tomorrow I guess, won't have time today.

>> No.5248108
File: 73 B, 4x4, Hum_Body_CookSmith_V0_C0.png [View same] [iqdb] [saucenao] [google]
5248108

Should I keep going? I tried it with textures from Gothic 1.
Here is the original.

>> No.5248110
File: 470 KB, 512x512, Hum_Body_Naked_V4_C1.png [View same] [iqdb] [saucenao] [google]
5248110

>>5248108
shit wrong picture

>> No.5248114
File: 2.38 MB, 2048x2048, Hum_Body_Naked_V4_C1_rlt.jpg [View same] [iqdb] [saucenao] [google]
5248114

>>5248110
And here is a result

>> No.5248115
File: 167 KB, 512x512, Hum_Body_Naked_V4_C1_rltscalleddown.jpg [View same] [iqdb] [saucenao] [google]
5248115

>>5248114
Result scaled down

>> No.5248773

>>5247691
it will be interesting to see how GameWorks would treat the textures. ESRGAN results aren't bad, but it could have been more.

>> No.5249002
File: 1.92 MB, 2036x1388, mana.jpg [View same] [iqdb] [saucenao] [google]
5249002

>download the danbooru dataset
>divide them up into ~50,000 tiles
>some of them are in greyscale
>ESRGAN can only do RGB pics

At this point it would be easier to just change the code.

>> No.5249014

These "AI" image filters are such a meme. They add nothing and generally look like shit compared to the original.

>> No.5249015

>>5248115
Looks like it ran it through a sharpening filter and added a shitload of noise for good measure.

>> No.5249027
File: 269 KB, 916x588, 1545828634378.jpg [View same] [iqdb] [saucenao] [google]
5249027

>>5249015

That image wasn't a good candidate for this anyway

>mixels
>DXT compression
>sharp edges
>kind of looks like the devs ran it through a sharpening filter

Most SR programs are trained on good uncompressed photos run through cubic downscaling. It's an ideal that doesn't match the reality of retro game images.

>> No.5249035

>>5249027
If the AI only produces acceptable results on a handful of textures, then it's useless for upscaling an entire game. Not to mention the "acceptable" ones still look pretty bad. There's not that many textures in these old games anyways, you'd be better off just having some art interns paint over the old textures in Photoshop at a higher resolution. Or just leave the textures as is. Ultra high resolution textures on super basic geometry just looks weird anyways.

>> No.5249048
File: 958 KB, 932x640, 667562503_preview_Number 009.png [View same] [iqdb] [saucenao] [google]
5249048

>>5249035

Guess I was unclear. They could work on other textures, but they'd have to be specifically trained on images like it. The creators of SR programs want people to be able to easily compare them, so you have standardized datasets and standardized downscaling methods, and the models they often ship with are trained for that end.

>Not to mention the "acceptable" ones still look pretty bad.

That I disagree with you on. I don't think they'd pass for native, but I'd rather have Square Enix use this instead of going for pic related.

>> No.5249125
File: 1.55 MB, 2560x2560, skinnyd.jpg [View same] [iqdb] [saucenao] [google]
5249125

>>5248114
>>5248110

>> No.5249252
File: 1.99 MB, 1280x952, 39633_rlt.png [View same] [iqdb] [saucenao] [google]
5249252

>>5249125

That looks a bit better, but it's hard to tell if textures look good when they're not on a model.

>> No.5249306
File: 877 KB, 807x747, Screenshot from 2018-12-26 20-52-22.png [View same] [iqdb] [saucenao] [google]
5249306

>>5239097
>>5249048

There's one released a couple of weeks ago that directly compares itself to ESRGAN, aiming to do better with noisy images.

https://arxiv.org/pdf/1812.04240.pdf

https://arxiv.org/abs/1812.04240

> It is observed that ESRGAN is rather sensitive to noise. The possible reason is that ESRGAN aims to generate realistic images with emphasizing more on textures and less on noise, which makes the noise is strengthened as textures. Due to the unsupervised degradation network for generating realistic LR images, our method performs much better with increased noise level.

No Github yet.

>> No.5249312

>>5230316
>>5230384
>>5230418
>>5230526
>>5230638
Holy shit, that's wonderful

>> No.5249375

Can someone post anime? Maybe it does a better job than waifu2x

>> No.5249384

>>5249375
nope, waifu still does a better job than those

>> No.5249395

>>5249384
Sure thing, nagadomi. Post proof, please.

>> No.5249525
File: 136 KB, 512x128, 162771.png [View same] [iqdb] [saucenao] [google]
5249525

>>5249375

Left to right: NN, Waifu2x, ESRGAN training on danbooru, ground truth.

Waifu2x does great with cleaning up low res details, but if the details aren't there at all, it falters a bit. A larger scale image would be more comparable, but I'm still training it.

>> No.5249619 [DELETED] 

>>5249525
>those magically generated fingers, mouth, and other details on ESRGAN
>those clean lines and colors with very little artifacting
Wow.

>> No.5250370

>>5249525
waifu would be great for sprites. I think someone created an HD version of Capcom vs SNK Rock with those for MUGEN, many years ago.

>> No.5250879
File: 1.28 MB, 1600x400, 1545257224549.png [View same] [iqdb] [saucenao] [google]
5250879

so, to keep this thread up

>> No.5250890

Can this garbage please die?

>> No.5250893

>>5250890
>>>/v/

>> No.5250953

>>5249525
It looks pretty good; with some more training it could definitely replace waifu2x.

>> No.5251579
File: 193 KB, 256x256, Excal1.png [View same] [iqdb] [saucenao] [google]
5251579

Does anyone want to give it a try with Wing Commander IV textures?
They are pretty much made using the original CG model textures.

>> No.5251584
File: 193 KB, 256x256, Excal7.png [View same] [iqdb] [saucenao] [google]
5251584

>>5251579
the entire pack
>https://imgur.com/a/Ozo0dVd

>> No.5251638

>>5251584
Thanks, I already have that original Excalibur + Textures somewhere. I think it would be better to redo them by hand.

>> No.5251669

>>5251579
>they are pretty much made using the original CG model textures

Pretty much. They just took orthographic renders of the high resolution models in Alias for parts 3 and 4, cropped them a bit and voilà. The original models aren't even UV mapped; it's done with a custom tool and planar projection.

>> No.5251678

>>5251669
Frankly though, the models and texture detail in Wing Commander 3 look better than WCIV.
Maybe WCIV was supposed to have more resolutions, since it was updated for Windows, but since the game was rushed, they cut that and the cockpits.

In fact WCIV has unused resolution lines.

>> No.5251727

>>5251678
OK, then my model is the WC3 version. Texture res not great, but still higher than what you uploaded.

>> No.5251736
File: 789 KB, 523x1024, the wonders of AI shenanigans.png [View same] [iqdb] [saucenao] [google]
5251736

Here's a compilation from the shittertera thread, and it's impressive how much life this breathes into ScummVM games.
>https://imgur.com/a/6zyahTI

>> No.5251745
File: 2.57 MB, 2048x2048, 541707_rlt.jpg [View same] [iqdb] [saucenao] [google]
5251745

>>5249525

18 hours later and it's complete shit. It works on a very narrow category of images (anime-style drawings with zero noise, like pic related) and nothing else. I'm tempted to just throw in the towel and wait for GameWorks or EDSR.

https://www.mediafire.com/file/xxnon4701xybr24/VeryBadBooruModel.pth.zip/file

>> No.5251751

>>5251736
>https://imgur.com/a/6zyahTI
The Monkey Island-stuff is a billion times better than the DeviantArt-tier HD-remake.

>> No.5251758

>>5251745
That is because ESRGAN was trained on certain types of material as a base; SFTGAN is no different, though better.
The Quake 2 stuff on >>5237678 and >>5237858 >>5237891 is impressive, even without tweaks to correct some stuff like the eye and teeth colours.

>> No.5251759
File: 3.46 MB, 2048x2048, 541707_waifu2x_art_scale_tta_1_waifu2x_art_scale_tta_1.png [View same] [iqdb] [saucenao] [google]
5251759

>>5251745

waifu2x for comparison. There's a difference, but just barely.

>> No.5251776
File: 81 KB, 1920x1080, 20180904233402_1.jpg [View same] [iqdb] [saucenao] [google]
5251776

>>5251727
they also look more detailed, WCIV feels like they were all compressed and shoved into the model.
but that's my opinion

and yes this is OriginGoG WC3 with the Windows/Kilrathi saga patch

>> No.5251842
File: 2.40 MB, 2048x1024, 1545948363116_rlt2.png [View same] [iqdb] [saucenao] [google]
5251842

>>5251579

Not very good.

ESRGAN + Manga109, SFT-GAN.

>> No.5251863

>>5251842
Expected. There is simply nothing to work with.

>> No.5251932

>>5251842
jesus christ, but predictable.
>>5251863
i think it would be better if the textures were scaled to 320 or 400 with bilinear filtering, and then run through the AI.

>> No.5252095

>>5238294
Ecstatica isn't supposed to have textures for the PC and NPCs. It would totally ruin the style.

>> No.5252108
File: 298 KB, 864x452, e43k29H.png [View same] [iqdb] [saucenao] [google]
5252108

daggerfall sprite with topaz AI (paid)

>> No.5252124

>>5251758
>>5251759
>>5251745
From the samples on Github, it looks like it sucks for "noisy" images, and only works well with simple shapes and fewer colours.
https://github.com/xinntao/ESRGAN

>> No.5252134
File: 188 KB, 320x200, Wing1_wcdx 2018-06-25 22-51-22.png [View same] [iqdb] [saucenao] [google]
5252134

does someone want to test SFTGAN or some other paid neural net on this WC1 image?

the daggerfall thread gave me the biggest hardon ever!

>> No.5252136

>>5252134
bigger images if this one doesn't work
>https://www.wcnews.com/background/wing-commander-1.shtml

>> No.5252151

>>5232604
Try to first scale the source sprite to the desired resolution (try using both bi-linear and nearest neighbour to see if it's any different) and then apply the AI pass on top of it. This should work better.
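Something like this with Pillow, assuming a 4x target (the filenames and factor are just examples):

from PIL import Image

sprite = Image.open("sprite.png")                 # example filename
target = (sprite.width * 4, sprite.height * 4)    # whatever resolution you're after

sprite.resize(target, Image.NEAREST).save("sprite_nearest.png")
sprite.resize(target, Image.BILINEAR).save("sprite_bilinear.png")
# Feed each of these to the AI pass and compare which base it handles better.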

>> No.5252154
File: 1.03 MB, 1280x1920, SF1 ePSX waifued1.png [View same] [iqdb] [saucenao] [google]
5252154

>>5251759
Tried Waifu2X for the heck of it since it supposedly improved now for artwork... Still looks like nearest neighbour for pixellated stuff. Original captured from ePSXe software mode 480p, Saga Frontier.

>> No.5252158
File: 1.33 MB, 1280x1920, SF1 ePSX waifued2.png.png [View same] [iqdb] [saucenao] [google]
5252158

>>5252154
More SF1 waifued.

>> No.5252169

>>5252154
That's because you're scaling up from a nearest neighbor scaled image. Quite obviously, even.
If you input an image that's scaled with nearest then it has a clear squared texture already, which no scaler SHOULD try to remove.
Input 240p instead.

>> No.5252170

>>5251842
I think for realistic textures like Quake etc., which have a lot of grungy rust and mechanical detail, you'll need to train the AI with a different dataset. I have no idea what that would be.
It seems like it works best with line art, as the anime dataset is similar to that. Thus the ScummVM stuff is astonishing.
But with (semi)realistic details such as Quake and other games it will create a lot of stretched noise, because the dataset is so different from the source material.
This is like magic. I can only imagine what happens in a couple of years...

>> No.5252175

>>5252170
Quake 2 stuff worked really well

though im waiting for the gameworks one to compare

>> No.5252190

>>5252175
Not really. And the Doom was disaster.

>> No.5252195

>>5252169
My GPU blacks out when I try to run ePSXe with 300x200 res.

>> No.5252197
File: 264 KB, 664x428, xjj5bU7.png [View same] [iqdb] [saucenao] [google]
5252197

>> No.5252204

>>5252190
dude if you want to be a contrarian to get sone (you), there's /v/.
but then i know that is you Vaysayan

>> No.5252221

>>5252195
Obviously. Unless you manually create a fullscreen resolution that's 120+Hz for it to select that's just not going to work, since most monitors only allow a minimum of ~29kHz frequencies.
But since it's just nearest neighbor scaled, you can scale it back down with nearest neighbor. That's lossless. Assuming ePSXe even scales it with nearest correctly by integer in the first place, at least. Though that's quite the assumption.
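If you want to try that, the downscale is one Pillow call (a factor of 2 is assumed here, filename is an example):

from PIL import Image

img = Image.open("capture_480p.png")
img.resize((img.width // 2, img.height // 2), Image.NEAREST).save("capture_240p.png")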

>> No.5252227

>>5252204
>Russian shitposter hating on pure AI handmade pixelart, while we all hate his shitty and stolen 3d heresy of doom
like pottery

>> No.5252245

>>5237495
CRTs are amazing for that, sure.
Low resolution is less of a problem because of just how CRTs output their images. Like, it's physically emitted light from phosphors.
The stronger the beam, for the brighter colors/brightness, the larger that section of the phosphor lights up.
Glare looks like glare because it's physically larger, more intense, and indeed brighter.
The fact that it's emitted light from phosphors instead of a partially blocked out backlight also does indeed give them drastically better contrast in general compared to digital monitors. Though OLED also emits light instead, and VA still manages nice contrast anyway.

There's also the fact that there's empty space between the output to give room for the brain to fill in missing details.
Same with the motion, since the image is only displayed for a fraction of a frame, and the brain fills in the duration that it's black. Preventing stairstep motion, making things easier to follow with the eyes. Blurbusters is great for info on that issue.
But it's important to note that not everyone handles that very well. Flicker/gap stuff gives headaches to plenty of people.
Interpolation can be vastly better for those people. A little excess texture MAY be easier to ignore than missing information is to fill in. Even if some of the interpolation is incorrect, it may simply make the image as a whole easier to see.

>> No.5252246

>>5252227
> AI handmade pixelart
Look at the smeared poop you posted all over the place and then spot the error

>> No.5252257

>>5251736
No thank you, I prefer the original pixel art.

>> No.5252267
File: 15 KB, 192x171, snake-laughing.jpg [View same] [iqdb] [saucenao] [google]
5252267

>>5252227
>pure AI handmade pixelart

>> No.5252501

https://www.youtube.com/watch?v=mwp3nwtOmvs

AI artist got to work on Sonic Adventure 2

looks okay

>> No.5252906

>>5238294
The Windows version of Ecstatica 1 has 640x480 backgrounds

>> No.5252984

>>5252501
Yes, really nice. You can even see exactly where the upscaler enhanced and augmented the many compression artifacts of the low res version. Other than that; high contrast haloing and noise up the ass.

>> No.5253197

Topaz AI has a 30 day free trial, and the daggerfall images were made on that
>https://forums.dfworkshop.net/viewtopic.php?p=18898#p18898

does someone want to give it a try?

>> No.5253257

>>5253197
bought it while it was on sale

>> No.5253467

>>5230316
>can be used for any game
how?

>> No.5253589
File: 3.89 MB, 2117x990, 1546019920431.png [View same] [iqdb] [saucenao] [google]
5253589

>>5253467

>> No.5253590

>>5253589
Now do Minestorm for the Vectrex.

>> No.5253593

>>5253589
Good, now fix the disgusting edge fringe

>> No.5253614

>>5240218
Man, this is so good.

>> No.5253617

>>5253614
It literally looks like a circa 1998 photoshop job

>> No.5253629

>>5230558
I second this

>> No.5253640
File: 1.33 MB, 1528x3416, monkey island 2.png [View same] [iqdb] [saucenao] [google]
5253640

This... doesn't look very good.

>> No.5253675

>>5253640
SFTGan is better for ScummVM stuff

>> No.5253717
File: 2.09 MB, 2040x808, fo1_inventory_1_400percent.png [View same] [iqdb] [saucenao] [google]
5253717

>>5230558
tried it, interface stuff scales ok, game sprites end up as a mess

>> No.5253720
File: 2.59 MB, 2092x1034, fo1_inventory_2_400percent.png [View same] [iqdb] [saucenao] [google]
5253720

>>5253717

>> No.5253729
File: 1.89 MB, 2550x982, 0.png [View same] [iqdb] [saucenao] [google]
5253729

>> No.5253734

>>5253729
get your memetic shite back to /v/

>> No.5253776
File: 137 KB, 196x416, AltDefault_output.png [View same] [iqdb] [saucenao] [google]
5253776

Testing with some old MUGEN stuff using Topaz AI, and it looks better than waifu2x in my opinion,
though it needs more tweaking.

>> No.5253809
File: 336 KB, 1926x1116, SuckDickem.jpg [View same] [iqdb] [saucenao] [google]
5253809

>>5230316
I think AI upscaling is some sort of meme

>> No.5253839
File: 2.96 MB, 1408x1132, KRIS.STEELMILL.IFF.png [View same] [iqdb] [saucenao] [google]
5253839

>>5253197
I tried the A.I. thingy on my old artworks =)

>> No.5253951

>>5253839
Holy shit, that's great.
And yes, I'm starting to notice a pattern here.

ESRGAN is great for 3D backgrounds and certain types of textures like RE backgrounds, realistic textures and such.
SFTGAN and Topaz Gigapixel work better with pixelated and filtered stuff, since the AI will add more detail and even rebuild it.

>> No.5253981

>>5235093
>now in 6k.
Oh dear god it's terrifying.

>> No.5254229

>>5253717
>>5253720
Fuck, those armors are so fucking good. Many of these still need a lot of manual touching up or different AI training or whatever, but others could be used as is if you really wanted, and they fit the original look so closely. Which is blowing my mind, because the majority of stuff in this thread, while impressive for an AI, should only be used as a base for an artist.

>> No.5254287
File: 2.10 MB, 1920x1080, 1543435235397.png [View same] [iqdb] [saucenao] [google]
5254287

>>5253809
I think that if you make sure to put in some actual work afterwards, Neural Upscaling can be used as the foundation for a HD spriting project.

If you look at Mr. Cyberdemon here, and his amazing sculpted ass and muscled back, the Neural Upscaling has actually done a pretty good job with scaling this up and mostly retaining the shapes and shading.
There's room for improvement, as you can see, but you have a good foundation to actually make something out of this, *IF YOU ARE WILLING TO PUT IN THE MANUAL LABOR TO MAKE IT HAPPEN.*

>> No.5254304

>>5254287
>Cyberdemon ass
>Doomguy hand

I find this picture slightly concerning...

>> No.5254345

>>5254304
Come on now, who hasn't fisted a Cybie to death?

>> No.5254371
File: 2.62 MB, 1536x1536, crash_output.png [View same] [iqdb] [saucenao] [google]
5254371

well i must say, Topaz is impressive

>> No.5254381

Has anyone released Doom/Quake/etc AI hires texture packs yet?

>> No.5254387

>>5254371
Sometimes...

>> No.5254391
File: 469 KB, 1536x1536, r_skin_output.jpg [View same] [iqdb] [saucenao] [google]
5254391

>>5254381
A neural pack exists for Doom; for Quake and the others, do it yourself.

3D stuff still needs some improvement, as >>5254287 said.

>> No.5254403

>>5254391
The software runs on Python, right? So if it was done in C/C++ it would be much faster. I'm going to tinker with it and see what happens.

>> No.5254464
File: 169 KB, 128x128, evolution.gif [View same] [iqdb] [saucenao] [google]
5254464

Current project is training it on the DIV2K dataset with fixed dithering applied (as seen in games like Riven and Blade Runner). There's an equal mix of 12, 16, 24, 32 and 40 color tiles.

I'd love to train SFTGAN instead, but I can't wrap my head around training segmentation maps.

>> No.5254465
File: 1.73 MB, 1280x1024, 4_output.png [View same] [iqdb] [saucenao] [google]
5254465

>>5230316
Dropping some random Amiga-created artworks/pixelart, Topaz A.I., scale 400-600%, max noise reduction

>> No.5254469

>>5254465
Holy shit!

>> No.5254470
File: 1.70 MB, 1472x1132, 666CYGLE_output.png [View same] [iqdb] [saucenao] [google]
5254470

>>5254465

>> No.5254472
File: 1019 KB, 2560x2048, ARCHTDRY_output.jpg [View same] [iqdb] [saucenao] [google]
5254472

>>5254470

>> No.5254473
File: 2.28 MB, 1408x1132, ARTCORE4_output.png [View same] [iqdb] [saucenao] [google]
5254473

>>5254472
face got kinda messed up

>> No.5254478
File: 1.89 MB, 1280x1024, ATMRIDE_output.png [View same] [iqdb] [saucenao] [google]
5254478

>>5254473

>> No.5254480
File: 1.53 MB, 2560x2048, DARK_K_output.jpg [View same] [iqdb] [saucenao] [google]
5254480

>>5254478

>> No.5254482
File: 787 KB, 1280x1024, faceof_output.jpg [View same] [iqdb] [saucenao] [google]
5254482

>>5254480

>> No.5254483
File: 1.86 MB, 1280x1024, GIMENEZ5_output.png [View same] [iqdb] [saucenao] [google]
5254483

>>5254482

>> No.5254484
File: 1.92 MB, 1280x1024, HODGE_BY_output.png [View same] [iqdb] [saucenao] [google]
5254484

>>5254483

>> No.5254485
File: 1.48 MB, 1280x1024, LOL_MACK_output.png [View same] [iqdb] [saucenao] [google]
5254485

>>5254484

>> No.5254486
File: 1.74 MB, 1280x1024, LOVE_SEE_output.png [View same] [iqdb] [saucenao] [google]
5254486

>>5254485

>> No.5254490
File: 1.69 MB, 1280x1024, MACK_LYD_output.png [View same] [iqdb] [saucenao] [google]
5254490

>>5254486

>> No.5254492
File: 1.97 MB, 1280x980, MINION_output.png [View same] [iqdb] [saucenao] [google]
5254492

>>5254490

>> No.5254495
File: 1.89 MB, 1280x1024, PIC101_output.png [View same] [iqdb] [saucenao] [google]
5254495

>>5254492
Again faces, but the source is relatively small

>> No.5254501
File: 2.51 MB, 2560x2048, POLYCHRO_output.jpg [View same] [iqdb] [saucenao] [google]
5254501

>>5254495
faces are problematic

>> No.5254502
File: 2.00 MB, 1280x1024, RAW5_output.png [View same] [iqdb] [saucenao] [google]
5254502

>>5254501

>> No.5254505
File: 2.00 MB, 1280x1024, SEENPNT_output.png [View same] [iqdb] [saucenao] [google]
5254505

>>5254502

>> No.5254506
File: 1.56 MB, 1280x1024, UPSTREAM_output.png [View same] [iqdb] [saucenao] [google]
5254506

>>5254505
done for now

>> No.5254508
File: 486 KB, 800x480, DoC02.png [View same] [iqdb] [saucenao] [google]
5254508

Try something on Jim Sachs' Defender of the Crown graphics.

>> No.5254510
File: 32 KB, 640x416, DoC01.png [View same] [iqdb] [saucenao] [google]
5254510

>>5254508
These should be in native 320x240 but I couldn't find any; hope these will suffice.

>> No.5254524
File: 2.67 MB, 2560x1600, output (1).jpg [View same] [iqdb] [saucenao] [google]
5254524

>>5254508
Looks like Topaz REALLY hates the dithering in some parts. Had to pre-blur it.

>> No.5254536

>>5254510
Same problem. Topaz failed at half the images I threw at it while the other half turned out fantastic.

>> No.5254578

>>5254536

Topaz isn't trained on pixel art or game art.

>> No.5254607
File: 2.19 MB, 2432x1568, jvillage.jpg [View same] [iqdb] [saucenao] [google]
5254607

>>5254464

Stopped it at 15,000 iterations (about 3 epochs) and the results are hit and miss. As in it does ok with some patches of the image, but leaves other areas blurry. There's a telltale "filtered" look on some of the rocks. I'm hoping all this will go away with more training.

>> No.5254613
File: 1.30 MB, 2432x1568, 520_tislandexterior.4425_rlt.jpg [View same] [iqdb] [saucenao] [google]
5254613

>>5254607

Though it actually does alright with indoor images, considering that it's generating 16 new pixels for each original pixel.

>> No.5254621
File: 46 KB, 800x800, 1465371575618.jpg [View same] [iqdb] [saucenao] [google]
5254621

You know guys, at least for some of us, when we first played these games in their native state, they had the best graphics that amazed us as kids for that time. As we got older and felt nostalgic, we replayed these old games and they looked like absolute shit compared to what we remembered, due to modernized graphics. Now as we go on into our future, nothing new seems to really interest any of us and we all go back to what we original loved those many years ago and we work so hard on improving them to be completely appreciated.

It's all an addiction at this point, we are all just chasing that feeling we had as children, wow and amazed by the graphics but we need to face the truth, we will never have those times back.

But this work is not for nothing, its for everything we loved.

>> No.5254645

really is impressive
>>5254510
>>5254524
if an image is 640 and pixelated, it will stay pixelated

>> No.5254658
File: 348 KB, 1115x2117, a pitchfork review but its generated by a neural network.png [View same] [iqdb] [saucenao] [google]
5254658

>>5254621

desu I don't do this because I'm chasing the graphical fidelity of my youth, I do this because it makes me feel like I'm creating something despite not putting in any actual effort into it. I can mash up album covers and generate fake Pitchfork reviews and make videos look like paintings, and the tech is so new that the novelty hasn't worn off yet.

>> No.5254720
File: 1.60 MB, 1848x954, skin_output.png [View same] [iqdb] [saucenao] [google]
5254720

I'm starting to feel that they were also testing on Quake 2 stuff, because all 3 reduce-noise and blur choices are kinda neat, though in need of some handmade tweaks afterwards.

Soldier gooks
>https://imgur.com/a/pE54Lib
>hardliners and bosses
https://imgur.com/a/TkKoxaF
>Zaero boss
https://imgur.com/a/sfFcY6O

>> No.5254808

>>5254524
This is fascinating, thanks! It looks like the algorithm likes certain kinds of line drawings the most. Pixelated art like this will just emphasize the blockiness of the original.

>> No.5254865

>>5254621
I'm the opposite. 10 years ago I was jumping at this kind of graphical update stuff, but these days they really don't interest me at all.

Maybe I've learned that they won't ever make the game feel like it felt 15-20 years ago, and that's why I don't bother with them.

More likely, I think as I've gotten older I've just learned to have more appreciation for what the artists accomplished within the limitations they had to work under, because the feeling I almost always get from looking at updated textures/sprites/models/whatever is that they were made by someone of markedly lower skill than whoever made the original assets.

>> No.5254950 [DELETED] 
File: 2.37 MB, 768x432, buttpummeled.webm [View same] [iqdb] [saucenao] [google]
5254950

>>5254345

>> No.5254953
File: 2.52 MB, 640x360, butt-pummeled.webm [View same] [iqdb] [saucenao] [google]
5254953

>>5254345

>> No.5255048

Questions:
1. Is there a way to run images through this algorithm in a batch to speed things up?
2. Is there a way to deal with textures that use solid blue for transparency so that there isn't any haloing artifacts with blue near the edges?
3. Can I set the output resolution?

>> No.5255104

>>5254621
i don't mind "bad" graphics, but i always want the highest fidelity graphics if possible. there is no reason to be held back by previous limitations. especially if mods/updates/patches are available usually it only enhances the experience.

>> No.5255164

>>5254524
That DEFINITELY does not look as good as the original image. Does it struggle with dithering on other images too?

>> No.5255254

>>5255048
>2. Is there a way to deal with textures that use solid blue for transparency so that there isn't any haloing artifacts with blue near the edges?
Replace the color with transparency. In GIMP that's "Color to Alpha..."
After running it, fix the edges however you need to. Then fill the transparency with that specific color.
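If you'd rather script it than click through GIMP, a rough Pillow/numpy equivalent for a hard key color looks like this (pure blue is assumed below; note this keys the exact color in and out rather than doing GIMP's soft color-to-alpha):

import numpy as np
from PIL import Image

KEY = (0, 0, 255)   # the solid blue transparency color (assumed)

def key_to_alpha(path_in, path_out):
    """Make every pixel that exactly matches the key color fully transparent."""
    arr = np.array(Image.open(path_in).convert("RGBA"))
    mask = (arr[..., :3] == KEY).all(axis=-1)
    arr[mask, 3] = 0
    Image.fromarray(arr).save(path_out)

def alpha_to_key(path_in, path_out):
    """After upscaling and fixing the edges, fill transparency back with the key."""
    arr = np.array(Image.open(path_in).convert("RGBA"))
    transparent = arr[..., 3] == 0
    arr[transparent, :3] = KEY
    arr[..., 3] = 255
    Image.fromarray(arr).save(path_out)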

>> No.5255262

So I take it this is mainly meant to be used with Realistic visuals? Not stylized stuff?

>> No.5255269

>>5255262
Depends on what you've got it trained on. It'll try to replicate whatever the AI has learned to replicate.
Some have been trained on realistic things, some on things like manga, some on anime/art/booru stuff, etc.

>> No.5255271
File: 51 KB, 352x452, 1535941501546.jpg [View same] [iqdb] [saucenao] [google]
5255271

>>5255269
Hmm. Interesting. I wonder if you could train it to make Mario World look like this

>> No.5255273

>>5255262
I think it needs more research and development, I also think some people expect it to do their job for them.

>> No.5255310

>>5254953
"Good lord what is happening in there?"

>> No.5255317

>>5254508
>>5254510
>>5254524
You NEED to feed these algorithms non-upscaled pixel art. Upscaling makes the base image super blocky and results in, unsurprisingly, the filtered image still being blocky. Feed the same images at their native 320x240 resolution and the results will be much better.

>> No.5255334
File: 107 KB, 768x768, JmAkpInntT0we_Md0uun6T12IPNPbRdwkzInTLQ6xVo.jpg [View same] [iqdb] [saucenao] [google]
5255334

Ok so these upgraded images are nice and all, but won't they make the game run like shit?

Games are created to run with the images they used at the time; if you just stuff higher res images into the game it causes huge problems with stutter and long load times, right?

Your hardware is irrelevant, it's the infrastructure of the game that matters and that's not updated with mods.

>> No.5255338

>>5255334
I think at least one of the ideas with Neural Upscaling is to transform the assets themselves, presumably using them in some sort of sourceport that allows higher resolution assets scaled to proper size (so, stuff like GZDoom or EDuke32).

I don't think we're looking at running this in realtime for every frame the game puts out, that'd be crazy.

>> No.5255426

>>5251736
Is this a joke? It looks like absolute shit.

>> No.5255428

Can someone run an image on it for me, please?

>> No.5255431

>>5254865
I used to mod the shit out of New Vegas, but now I don't use texture packs aside from something for the sky and something for the bodies, which look infinitely better than vanilla. I prefer to play the game with the look the devs intended (sans those two exceptions) as opposed to downloading lots of textures that look like ass/aren't faithful at all to the originals.

>> No.5255441

>>5255426
If you think the AI will optimize and fix some of the stuff in this image on its own, you need therapy for your autism.

The image may have been rescaled, but it still needs some optimization for HD work.
look >>5254287

>> No.5255559

>>5255334
Depends somewhat on the engine. Many don't give a fuck about a larger texture size as long as it's not absolutely ridiculous

>> No.5255576
File: 3.57 MB, 2432x1568, 628_jlagoonhipup.800_rlt.jpg [View same] [iqdb] [saucenao] [google]
5255576

>>5254607

Training didn't seem to help. I'm at a loss as to where I should go from here, maybe mix in other photos without dithering, or adjust random settings in the training file.

At least I can rest easy knowing that if it looks like ass after 2 hours, it'll still look like ass after 12.

>> No.5255614

quicktest with Quake 1 textures
>https://imgur.com/a/5qIggNY

>> No.5255704

>>5255614
Unrelated, but why the fuck is everyone doing 400% and 600% upscales? I've read the papers and at max you would want to run 100%-300% TOPS to maintain good image quality. No shit everything is going to look like blurry slop.

>> No.5255723
File: 119 KB, 1920x1080, Swq5HTx.jpg [View same] [iqdb] [saucenao] [google]
5255723

Before, outdoors (textures only)

>> No.5255726
File: 132 KB, 1920x1080, after.jpg [View same] [iqdb] [saucenao] [google]
5255726

after

texture pack can be found here if you want to try
https://forums.dfworkshop.net/viewtopic.php?f=14&t=1642&start=40

>> No.5255741

>>5255704
It's just testing, but it's impressive how much detail the AI puts in depending on how much noise and blur you remove.

>> No.5255779

>>5255704

ESRGAN's default networks are at 4x scale. You can change the scale, but it would require training a new network from scratch instead of initializing from the PSNR network.

>> No.5256268

>>5255576
How do you train it? You just feed the pairs original-image / corresponding-small-one to it, so that it learns which patterns are lost where?

If so, try doing this. Start with an absolutely fresh model. Take all of Riven's images, then downsize all of them in batch exactly twofold, so that every resulting pixel is the average of 4 original ones (the downscaling algorithm should probably be bilinear, so as not to fuck up any colors anywhere). This is bound to kill all the dithering, assuming it's at single-pixel scale.
Then you can take the 1/2-downsized pictures, make 1/4-downsized pictures off of them, and again feed the 1/2-picture / 1/4-picture pairs to the model. Then do it for the 1/4-picture / 1/8-picture pairs, etc.

This has to be the sole training your model receives.

Then test how it fares with enlarging one full-sized picture twofold.

This principle, I think, could be adequately described as fractal extrapolation. You are basically tricking the model into treating everything Riven-related as a fractal, then generating new details on upsampling using that fractal model implicitly built up during training.

>> No.5256289

>>5256268
>Take all of Riven's images, then downsize all of them in batch exactly twofold, so that every resulting pixel is the average of 4 original ones (the downscaling algorithm should probably be bilinear, so as not to fuck up any colors anywhere). This is bound to kill all the dithering, assuming it's at single-pixel scale. Then feed all the original-image / corresponding-1/2-image pairs to the model.
Fix.

So, generally: you take Riven's original image (the 1-image). You bilinearly downscale it twofold using whatever graphics/photo editor, getting the 1/2-image. You repeat that, getting the 1/4-image. Then the 1/8-image. Then the 1/16-image. Etc., until you can't be bothered anymore.

Then you take all your 1-images and all the corresponding 1/2-images, and feed them in pairs into the model.
Then you take all your 1/2-images and all the corresponding 1/4-images, and feed them in pairs into the model.
Then you take all your 1/4-images and all the corresponding 1/8-images, and feed them in pairs into the model.
Etc.

That way you are teaching it what "scale twofold" means in the context of Riven images specifically. Naturally, you should only ever upscale anything with it, again, strictly by 2x. If you need to do an 8x upscale, you just do 2x three times.
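The whole pyramid is easy to generate with Pillow; here's a minimal sketch (folder names are placeholders, and it just keeps halving until the image gets too small):

from pathlib import Path
from PIL import Image

SRC = Path("riven_full")     # hypothetical folder with the original images
HR  = Path("training_hr")
LR  = Path("training_lr")
HR.mkdir(exist_ok=True)
LR.mkdir(exist_ok=True)

def halve(img: Image.Image) -> Image.Image:
    """Bilinear 2x downscale - the 'clean fraction' step described above."""
    return img.resize((img.width // 2, img.height // 2), Image.BILINEAR)

for path in SRC.glob("*.png"):
    level, hr_img = 0, Image.open(path).convert("RGB")
    while hr_img.width >= 2 and hr_img.height >= 2:
        lr_img = halve(hr_img)
        name = f"{path.stem}_lvl{level}.png"
        hr_img.save(HR / name)   # HR/LR pairs share a filename at every level
        lr_img.save(LR / name)
        hr_img, level = lr_img, level + 1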

>> No.5256296

>>5256289
Also, I don't remember the internal resolution of Riven's images, but it might be a good idea to pad them with black borders to 1024x1024 (512x512 is probably too small to contain them), so that there are no rounding artifacts when working with the lower fractions, like 1/16.

>> No.5256301

Training can also be completely random. Establish a database of hundreds of thousands of Google images and then just force-feed them to the AI. It will learn eventually, and then you're able to do anything you want.

>> No.5256306

>>5256301
How would training on a Pepe make the model better at scaling Riven?
Also, it might just be interesting enough to look at what an absolutely purely trained model is capable of in relation to what it has been trained on.

>> No.5256336
File: 220 KB, 608x392, 628_jlagoon.800.png [View same] [iqdb] [saucenao] [google]
5256336

>>5255576

Changing the settings did nothing, so now I'm gonna mix dithered and full color images and throw in some outdoor pics from the "Holidays" dataset. http://lear.inrialpes.fr/~jegou/data.php I'm just hoping that the network itself isn't too small for what I'm trying to do, because I don't know how to make it larger or if that's even possible with this ESRGAN implementation.

I've also archived every single Riven background (contains spoilers) in case anyone wants to mess with them. They're PNGs dumped straight from the original game. https://www.mediafire.com/file/e7cu35nwa1qth28/b_Data-MHK.zip/file

>> No.5256345

>>5256268
>>5256336

That will teach it how to upscale Riven images to 608x392, their original resolution. It won't teach it how to upscale them to 2432x1568. It doesn't know what high resolution images look like unless it has a database of high resolution images to train on.

>>5256289

>Then you take all your 1-images and all the corresponding 1/2-images, and feed them in pairs into the model.

That's already how ESRGAN learns.

>> No.5256358
File: 44 KB, 500x338, cringe.png [View same] [iqdb] [saucenao] [google]
5256358

please tell me this is a troll thread and you guys don't actually think this looks good.

>> No.5256360

>>5256345
>That will teach it how to upscale Riven images to 608x392, their original resolution. It won't teach it how to upscale them to 2432x1568.
It will allow it to extrapolate. Again, my idea is to trick it into perceiving Riven pictures specifically as a fractal - and extrapolate them specifically based on that assumption.
>That's already how ESRGAN learns.
Again, the two principal ideas I'm adding are
1) using strictly bilinear downscaling in order to get the cleanest fractions possible, so as not to subsequently confuse the model
2) using fractions lower than 1/2: 1/4, 1/8, 1/16, and so on - possibly down to 1/1024.

>> No.5256365

>>5256358
>>>/v/

>> No.5256368

>>5256360

I'm doubtful that it'll work, but I'll give it a shot when I'm done with this dataset.

>> No.5256391

>>5256358
It's a work in progress. It's not looking that good now, but by training the AI more, and using the right models, you can make something which can potentially be turned into a good high definition sprite or texture.

>> No.5256409

>>5236105

That's really gorgeous. It has a real impressionist painting feel going on.

Since there are PC versions of FF7-9 and RE1-3, how about we test these out with the actual games and in-game models? I want to see how 3D models mesh with these. Someone try this.

>> No.5256424

>>5256409
It looks plain silly. You have these upscaled backgrounds with almost painterly qualities in places, and then some sharp low poly objects on top, presumably with their textures upscaled as well, which looks goofy as hell.

>> No.5256427

>>5256424

I want to see it in action. It's at least worth trying.

>> No.5256469

>>5254287
oh hey, it's my screenshot from the doom /vr/ threads

>> No.5256580

>>5256391
Actually, I'd like to try my hand at preparing the dataset. I can't do the actual ESRGAN thing since, well, I have a 2006 laptop for a computer, so neural networks are a no-no for me. However, I can save you some work by actually producing all the aforementioned fractional pictures, clipping them correctly, etc.

However, for that I would like to know what valid input for ESRGAN should look like in terms of filenames and directories, so that the program has no problem recognizing which png file is supposed to be a miniature version of which other png file.

>> No.5256614

>>5256580

It needs 4 directories:

>training HR - high resolution images for training

>training LR - 1/4th resolution images for training

>validation HR - high resolution images for validation

>validation LR - 1/4th resolution images for validation

The training and validation directories can't share images (though that's just to prevent overfitting, which doesn't matter much here), and the validation directories should have about 10-20% as many images. The HR and LR images should have the same filenames and file formats.
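For example, something like this would populate all four folders from a single directory of HR images (a sketch; the 15% validation split, bicubic downscale and folder names are just illustrative choices):

import random
from pathlib import Path
from PIL import Image

SRC = Path("hr_images")                      # hypothetical source folder
DIRS = {name: Path(name) for name in
        ("train_HR", "train_LR", "val_HR", "val_LR")}
for d in DIRS.values():
    d.mkdir(exist_ok=True)

paths = sorted(SRC.glob("*.png"))
random.seed(0)
val = set(random.sample(paths, k=max(1, len(paths) * 15 // 100)))

for path in paths:
    hr = Image.open(path).convert("RGB")
    # LR is a 1/4-size copy with the same filename, as described above.
    lr = hr.resize((hr.width // 4, hr.height // 4), Image.BICUBIC)
    split = "val" if path in val else "train"
    hr.save(DIRS[f"{split}_HR"] / path.name)
    lr.save(DIRS[f"{split}_LR"] / path.name)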

>> No.5256646

>>5256469
It's a good demonstration of what the right AI can do with the right sprites.

>> No.5256649

>>5256614
This is fascinating. I think for best results you could have two different training sets: one for backgrounds (including hard edged items, rooms, plants, trees etc.) and one consisting only of character stuff.
This way you can run it separately for sprites and backgrounds for optimised results.
That's my logic, anyway.

>> No.5256662

>>5256614

Shit, forgot one thing. Riven in particular is so difficult because of the dithering used in the images. If you just blindly cubic downscale the images, ESRGAN won't learn to work around the dithering and will try to "enhance" it.

>> No.5256705

>>5247232
>You can't magically create missing data from nothing
but you can have strong priors, and a good ai zoom could potentially clean up the texture in question

>> No.5256848

>>5256662
You could use a degrain on the images first and then run them through the AI. Best way to do it is to use Nuke to batch process the images or something.

>> No.5256961

>>5230397
There's already a watercolor painting texture pack. I love it.

>> No.5257012

>>5237495
Just put CRT Royale on everything to get the same experience on an LCD

>> No.5257018

>the virgin 16x HD Texture Project
>the Chad ESRGAN

>> No.5257160
File: 2.88 MB, 2432x1568, more shit that no one cares about.jpg [View same] [iqdb] [saucenao] [google]
5257160

>>5256848

I ended up doing something very similar.

>upscale using the "dedither" model
>downscale it
>upscale it again with the stock ESRGAN model at 0.8 interpolation

I could easily write a Python script that automates all of that. Now I just need to train a model that doesn't render stone as grass.
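The automation would look roughly like this (a sketch: run_esrgan() is a placeholder for however you actually invoke the model on one image, and only the model names and the 0.8 interpolation come from the steps above):

from PIL import Image

def run_esrgan(img: Image.Image, model: str) -> Image.Image:
    """Placeholder: run one image through ESRGAN with the given model weights."""
    raise NotImplementedError

def dedither_upscale(path_in: str, path_out: str) -> None:
    src = Image.open(path_in).convert("RGB")
    # 1. Upscale with the custom "dedither" model to smooth out the dithering.
    smoothed = run_esrgan(src, "dedither.pth")
    # 2. Downscale back to the original size, keeping the de-dithered look.
    smoothed = smoothed.resize(src.size, Image.BICUBIC)
    # 3. Upscale again with the stock model interpolated at 0.8.
    final = run_esrgan(smoothed, "interp_08.pth")
    final.save(path_out)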

>> No.5257196
File: 3.89 MB, 4864x3136, even more shit that no one cares about.jpg [View same] [iqdb] [saucenao] [google]
5257196

>>5257160

And here's both interp_08 and Manga109Attempt before and after the de-dither model. It's more subtle on the latter, mostly just making it less grainy.

>> No.5257205

>>5256358
It's very clearly a shill thread.

>> No.5257365

>>5257205
That would imply there's money in this.

>> No.5257373

>>5256614
I put my stuff together in an archive and am currently uploading it to MEGA. Provided the connection holds the way it currently does, it is going to take an hour and a half.

Since I have nothing better to do at the moment, I am going to yap about what is in that archive and why.

The archive is nearly 3 GB in size and consists of 100,000+ pictures, generated from ~3100 pictures from Riven, uploaded here >>5256336. I have deleted the empty pictures, corrupted pictures, developers' photos, and non-rendered backdrops such as Gehn's mosaics or Starry Expanse views.
There are two data sets meant to be used strictly separately. One either trains the net on one data set OR another one, they are not meant to overlap.
"plain" set consists of the original pictures (001_prefixed in the "training HR" folder) and their progressively more downsampled versions retaining all of their color-related information.
"dithered" set is everything from "plain" set converted to 256 colors with Floyd-Steinberg dithering. The idea is that the "dithered" set will train the net to upsample low-res DITHERED pictures into high-res DITHERED pictures. In other words, the idea is that since everything in the set is dithered, the net will have no other choice than to try to factor it out to the best of its ability - before reapplying said dithering back in.
Again, you EITHER train the model on the "plain" data set (it should train faster, but it will stumble on dithering when applied to full-res Riven pictures), OR on the "dithered" data set (it will probably train much slower, but should ultimately waver from Riven's dithering quite a bit less).
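Generating the "dithered" set from the "plain" one is basically one Pillow call per image, something like this (folder names are placeholders; the dither argument needs a reasonably recent Pillow):

from pathlib import Path
from PIL import Image

PLAIN = Path("plain/training HR")
DITHERED = Path("dithered/training HR")
DITHERED.mkdir(parents=True, exist_ok=True)

for path in PLAIN.glob("*.png"):
    img = Image.open(path).convert("RGB")
    # Quantize to a 256-color palette with Floyd-Steinberg dithering,
    # keeping filenames identical so the HR/LR pairs still line up.
    dithered = img.quantize(colors=256, dither=Image.FLOYDSTEINBERG)
    dithered.convert("RGB").save(DITHERED / path.name)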

>> No.5257386

>>5257373
Both "plain" and "dithered" folders both contain only "training HR" and "training LR" sub-folders. I have not provided any dedicated validation stuff. For that I suggest one simply to cut and paste some 001_prefixed files from the "training HR" folder to the "validation HR" folder, and the corresponding 001_prefixed files from the "training LR" folder to the "validation LR" folder.
When I was compiling the set, I decided that since I was already doing it, I might as well do it really thoroughly and basically push what I aimed to do up to 11. What that means is that in that archive there are quite a number of 4x2 pixel HR pictures with their respective 2x1 pixel LR counterparts. This may sound incredibly stupid - and it probably is - but, hell, why the hell not.
What else.
Well, from what I imagine, the stuff in the "training" folders themselves is all set and ready to go, and I would really appreciate it if it was given a number of passes (whether in the "plain" or "dithered" variety) as is, barring possibly taking some comparatively high-res stuff away for validation purposes.
Well, anyway, I'll just shut up now and post the link once the upload finishes. It actually picked up quite a bit of speed as well.

>> No.5257416

>>5256614
>>5256662
>>5257373
>>5257386
https://mega.nz/#!9wJmmaTK!73_-6wNgD0h-iS2sgv_q17qwbCtLcHFvclY64TOH2W8

>> No.5257418

*thread death poke*

>> No.5257434

>>5230679
>>5230681
>>5236139
HoMM3 was always an ugly game, so don't feel too bad.

>> No.5257707

>>5230679
Looks like Ubisoft's HD "remaster".

>> No.5257712

>>5257386
>>5257373
When I have time I'll run it on Resident Evil 4 and then make that Spanish guy cry; he spent a few years doing the textures from scratch. Of course manual work is a completely different thing, but I bet AI can get really close at least in some places.

>> No.5257838
File: 2.16 MB, 2432x1568, 603_jclearcut.1225_rlt.jpg [View same] [iqdb] [saucenao] [google]
5257838

>>5257386
>>5257416

Thank you for the help, but I just realized I forgot a critical detail: the HR and LR images all have to be divided into 128x128 and 32x32 tiles, respectively. I should be able to do that myself, but it'll take some time given the number of files.
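If it helps, the tiling could be scripted along these lines (a sketch; folder names are placeholders, and it assumes the LR images are exactly 1/4 the size of their HR counterparts):

from pathlib import Path
from PIL import Image

HR_DIR, LR_DIR = Path("training_hr"), Path("training_lr")
HR_OUT, LR_OUT = Path("tiles_hr"), Path("tiles_lr")
HR_OUT.mkdir(exist_ok=True)
LR_OUT.mkdir(exist_ok=True)

for hr_path in HR_DIR.glob("*.png"):
    hr = Image.open(hr_path)
    lr = Image.open(LR_DIR / hr_path.name)
    # Cut 128x128 HR tiles and the matching 32x32 LR tiles at the same grid spots.
    for y in range(0, hr.height - 127, 128):
        for x in range(0, hr.width - 127, 128):
            name = f"{hr_path.stem}_{x}_{y}.png"
            hr.crop((x, y, x + 128, y + 128)).save(HR_OUT / name)
            lr.crop((x // 4, y // 4, x // 4 + 32, y // 4 + 32)).save(LR_OUT / name)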

I'm also starting to think that some of the blur is coming from DoF or low resolution textures in the original images, which can't be fixed by anything I know of. The handful of noise artifacts aside, maybe pic related is what would happen if you rendered the original Riven assets at 4x.

>> No.5257973

>>5257434
It looks cheesy as fuck by design, so actually I like that everything is a cheap looking prerender.

>> No.5258478

>>5255334
Usually that just means longer loading time before a mission/level. For games like Half-Life which seems to load textures on the fly depending on how close you are to an object or wall, that might cause a framerate drop.

>> No.5258514

>>5255317
>>5252221
I'll try Waifuing some screenshots from Abandonia. They seem to have screenshotted all of them at native res.

>>5253729
Nippy Eroge spritework is a work of art. I don't think any smoothing upscaler does it justice.

>> No.5258528

>>5257838
128 - and 32? So it really has to be 1/4, not 1/2, of the original image? I am asking because my LRs are just 1/2s of the HRs, so when subdivided the way you describe, they won't correspond to each other.

Are there any other critical details? I am willing to remake my data sets (since I really want to see the results, while knowing I did everything I could with the data to make those results the best possible ones, given the chosen paradigm), but I have to be sure it will be more or less the last major revision, since it will obviously take quite a bit of time. So I would really appreciate it if you gave me the extended version of this
>>5256614 writeup leaving nothing potentially crucial off the record.

>> No.5258546
File: 426 KB, 640x960, waifu seeded.png [View same] [iqdb] [saucenao] [google]
5258546

>>5258514
Well. Looks good for "realistic" higher res games like Dark Seed. Bad for small text and 240p games like Halloween Harry.

>> No.5258570

>>5257838
Or maybe just post the official readme elaborating on the input, if there is any such thing.

>> No.5258591

new thread
>>5258589
>>5258589

>> No.5258597

>>5258591
There's still 70 images left.

>> No.5259339

>>5251736
>>5251751
When can we get a playable Monkey Island with these?