
/vr/ - Retro Games



File: 994 KB, 1728x2016, IrEdwkf.jpg [View same] [iqdb] [saucenao] [google]
5258589 No.5258589 [Reply] [Original]

FF9 HD backgrounds are being worked on:
https://imgur.com/gallery/K1R3wJZ
from https://twitter.com/Ze_PilOt

Daggerfall HD textures (gigapixel)
https://forums.dfworkshop.net/viewtopic.php?f=14&t=1642&sid=e9913a8ddedd4af0bf0c2e2c065a1f3a

Morrowind ESRGAN textures
https://www.nexusmods.com/morrowind/mods/46221?tab=description

There are also some for Doom 1 and Gothic.

Tools used: Gigapixel AI from Topaz Labs, or ESRGAN

>> No.5258592
File: 749 KB, 2868x2008, ff92.jpg [View same] [iqdb] [saucenao] [google]
5258592

>>5258589

>> No.5258603
File: 629 KB, 1408x1792, dy5hP1V.jpg [View same] [iqdb] [saucenao] [google]
5258603

Note: The creator of the current waifu2x-based HD mod for FF9 is working on a new update that takes advantage of the newer algorithms.

12 hours ago
Like I said, I'm working on making the FFIX background assets ready so everyone can start using them and make their own background mod without needing to scratch their heads on some quirks.

https://steamcommunity.com/app/377840/discussions/0/142261352660457538/?ctp=54

>> No.5258608

>>5258603
looks awful

>> No.5258614
File: 567 KB, 3840x2160, ON.jpg [View same] [iqdb] [saucenao] [google]
5258614

b4 AI

>> No.5258617
File: 1.51 MB, 2432x1568, still from the old model sorry.jpg [View same] [iqdb] [saucenao] [google]
5258617

>>5258528

Right now I'm just taking the HR images and downscaling them. I'm gonna give it a shot with the dataset you uploaded yesterday, but if you upload another one I'll try it too. The full requirements are

>exactly 128x128 HR images (it can actually be 192x192 if your GPU has the memory; mine does not)
>exactly 32x32 LR images
>JPGs or PNGs with 3 channel RGB (no indexed colors, greyscale or alpha channels)
>each HR image should have a corresponding LR image with the same filename, and vice versa
>anything other than 4x scale requires you to train a new network from scratch; I don't know how long this would take.

Sorry for being careless.

>>5258570

The readme doesn't go into too much depth, and some of the features are actually broken or have unreasonable requirements. For instance, it's supposed to automatically convert greyscale images to RGB, but it doesn't. It can automatically downscale HR images to form the LR ones, but it requires Matlab for whatever reason (rough workaround sketch below).

https://github.com/xinntao/BasicSR
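If you'd rather skip the Matlab step entirely, here's a rough Pillow sketch of what I mean by preparing the dataset (folder names are placeholders, not part of BasicSR): it forces every HR tile to 3-channel RGB and writes the matching 32x32 LR tile with the same filename.

import os
from PIL import Image

hr_dir, lr_dir = "train_HR", "train_LR"   # placeholder folder names
os.makedirs(lr_dir, exist_ok=True)

for name in os.listdir(hr_dir):
    if not name.lower().endswith((".png", ".jpg")):
        continue
    path = os.path.join(hr_dir, name)
    img = Image.open(path).convert("RGB")  # kills indexed colors, greyscale, alpha
    if img.size != (128, 128):
        continue  # skip anything that isn't exactly 128x128
    img.save(path)  # overwrite as clean 3-channel RGB
    # 4x downscale to the matching 32x32 LR tile, same filename
    img.resize((32, 32), Image.BICUBIC).save(os.path.join(lr_dir, name))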

>> No.5258621
File: 2.99 MB, 1920x1080, Off.png [View same] [iqdb] [saucenao] [google]
5258621

>>5258614
After AI

>> No.5258626

>>5258621

The HR textures are exposing tiling issues that were previously hidden by the low res. You may want to edit them so they're seamless.

>> No.5258628

>>5258617
The same rules also apply to validation data, I assume? I think I now know how to handle it.

>> No.5258632
File: 164 KB, 1920x1080, beforeoutside.jpg [View same] [iqdb] [saucenao] [google]
5258632

>>5258626
Not mine. All the examples I've seen have tiling issues that aren't fixed.

>> No.5258636
File: 203 KB, 1920x1080, after touch up.jpg [View same] [iqdb] [saucenao] [google]
5258636

>>5258632

>> No.5258639

>>5258589
>>5258592
>>5258603
These look awful. The only game where this faux oil painting style would work is Legend of Mana, because it already looks like a painting.

>> No.5258642

>>5258589
Can this shit please die already?!

>> No.5258646

>>5258642
>>5258639
It's only going to get more hype and interest

>> No.5258649

>>5258628

>The same rules also apply to validation data, I assume?

Yes

>> No.5258651

>>5258589
>>5258592
>>5258603
fucking smeared shit
It's filters on a whole new level!

>> No.5258662

>>5258589
>>5258592
>>5258603

I don't care for a lot of these but wow that looks great.

>> No.5258669
File: 482 KB, 1792x1920, QKUSjE8[1].jpg [View same] [iqdb] [saucenao] [google]
5258669

Why FF9, and why 4K? These are 320x240 images, right? Bringing them to something like 960 would make way more sense.

And on top of that, FF7-8 and RE 1-3 would seem to be better games to start with. They have 640x480 resolution backgrounds, right? The ones that were scaled 2x look genuinely amazing. Bigger resolution means more to work with.

And even then, you can't just run it through the filter and call it a day. You have to do that as step 1, and then start fixing errors and redrawing elements.

>>5258651

Those look super blurry. I suspect they're using already pre-blurred images and THEN scaling them up. And I think it's a mistake to use a game that never had backgrounds above 320x240 resolution. Some of them look blurry and bad, some look okay, and some look good.

>> No.5258670
File: 1.05 MB, 512x2181, OW5QARX.png [View same] [iqdb] [saucenao] [google]
5258670

>>5258662
should be playable with them soon I think.

>> No.5258679

>>5258649
Generally speaking, I will only provide the fully dithered set (256-colored, but in truecolor format). Tiles will be black-bordered where necessary to expand them to 128x128 or 32x32. If the AI ends up being unable to factor out the black borders, then frankly it's probably not a very good AI. (Rough padding sketch below.)
"training HR" will consist of data out of the original 1/1 pictures, and the 1/4th, 1/16th and 1/64th fractions.
"training LR" will consist of data out of the 1/4th, 1/16th, 1/64th and 1/256th fractions respectively.
"validation HR" will consist of data out of the 1/2, 1/8th and 1/32nd fractions.
And "validation LR" will consist of data out of the 1/8th, 1/32nd and 1/128th fractions respectively.
Does the aforementioned sound okay to you?
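For reference, the black-bordering step is basically just this in Pillow (a minimal sketch; the function name and filenames are for illustration only; 128 for HR tiles, 32 for LR ones):

from PIL import Image

def pad_to(tile_path, out_path, size):
    # paste the tile onto a black canvas of the target size, no resizing
    tile = Image.open(tile_path).convert("RGB")
    canvas = Image.new("RGB", (size, size), (0, 0, 0))
    canvas.paste(tile, ((size - tile.width) // 2, (size - tile.height) // 2))
    canvas.save(out_path)

pad_to("tile_hr.png", "tile_hr_padded.png", 128)  # HR tile
pad_to("tile_lr.png", "tile_lr_padded.png", 32)   # LR tile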

>> No.5258683
File: 3.67 MB, 1824x1656, output_output_output.png [View same] [iqdb] [saucenao] [google]
5258683

>>5258670
WHY

>> No.5258690
File: 2.33 MB, 1408x1792, a7aaq9iztdg3t6i2jg5f_rlt.jpg [View same] [iqdb] [saucenao] [google]
5258690

>>5258603

Found another copy of that pic and ran it through the Manga109 model. It's not as sharp, but at the same time the artifacts aren't as obvious.

It's an improvement over the Waifu2x mod. I'm not sure if it's an improvement over NN.

>>5258669

FF7-9's backgrounds are at a variable resolution; it depends on how large the scene is.

>> No.5258692

>>5258679

It should be OK, but once you get below 16 pixels per image it probably isn't worth it.

>> No.5258703

>>5258592

That looks awful.

>>5258690

That looks pretty good. That's already an improvement over the official FF9 PC version, which is super duper blurry.

>> No.5258709
File: 160 KB, 1280x1152, l6Wk6RF[1].jpg [View same] [iqdb] [saucenao] [google]
5258709

This looks good.

>> No.5258712

>>5258692
My reasoning in pursuing these ultra-low-res variants of pictures is this: basically, Riven is a work of art. As such, it, as a whole, has a consistent style to it. That style is evidenced, in particular, in repeating forms, lighting, camera angles, and frame composition. The superficial details added by the AI will only look good if they conform to that overall style of the game. My idea is that those low-res miniatures, when produced in a clean enough manner, will give the AI the high-level information on how a frame basically tends to be put together in Riven, leading to it possibly becoming more capable of choosing more "Riven-esque" minute details out of thin air.
I am sure this train of thought takes way too much for granted, hence my desire to see whether this idea actually ends up panning out.

>> No.5258763

Do Chrono Cross.

>> No.5258795

>>5258763
https://i.imgur.com/t49XB2K.png
https://i.imgur.com/DFdA97R.jpg

think this was ESRGAN

https://i.imgur.com/Ljthz0Y.png
https://i.imgur.com/rHZDCdj.jpg

>> No.5258797

>>5258589
pretty decent

>>5258592
pretty awful

>> No.5259025

>>5258617
It appears that while preparing the previous dataset I did not use the crop function correctly when batch-processing images, which might have, and probably did, result in everything smaller than (and not including) 76x49 (1/8th picture) looking considerably blurrier than it was meant to.

The new dataset is progressing slowly due to the amount of manual fuckery with splitting images into tiles, and I will probably be unable to push it all the way through to completion today. Still, for now I advise you to pursue your own leads while I try to do my thing right, and then to use the new dataset once I finish it, instead of trying to retrofit my old one.

>> No.5259049
File: 3.54 MB, 3428x3000, thicc plokk.jpg [View same] [iqdb] [saucenao] [google]
5259049

>>5259025

Right now I'm running a batch colorization script, since there's at least 1 B&W image out of the ~120,000 tiles that's throwing everything off. Probably should have checked to see if the BasicSR repo fixed that.
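In case anyone else hits the same thing, the scan is nothing fancier than this (folder name is a placeholder); it finds any tile that isn't 3-channel RGB and rewrites it:

import os
from PIL import Image

tile_dir = "tiles"  # placeholder
for name in os.listdir(tile_dir):
    path = os.path.join(tile_dir, name)
    img = Image.open(path)
    if img.mode != "RGB":  # greyscale (L), palette (P), RGBA, etc.
        print("fixing", name, img.mode)
        img.convert("RGB").save(path)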

>> No.5259103
File: 843 KB, 1280x720, virtual reality.webm [View same] [iqdb] [saucenao] [google]
5259103

OK... I want to try ESRGAN. I installed Python and also the required packages (torch, plus the others via the "pip install numpy opencv-python" command).

If I try to "import" these packages in the shell I don't get error messages.

Now I want to run test.py, but I get the following error message:

"E:\ESRGAN-master>test.py models/RRDB_ESRGAN_x4.pth
Model path models/RRDB_ESRGAN_x4.pth.
Testing...
1 baboon
Traceback (most recent call last):
File "E:\ESRGAN-master\test.py", line 37, in <module>
output = model(img_LR).data.squeeze().float().cpu().clamp_(0, 1).numpy()
File "E:\python\lib\site-packages\torch\nn\modules\module.py", line 489, in __
call__
result = self.forward(*input, **kwargs)
File "E:\ESRGAN-master\architecture.py", line 37, in forward
x = self.model(x)
File "E:\python\lib\site-packages\torch\nn\modules\module.py", line 489, in __
call__
result = self.forward(*input, **kwargs)
File "E:\python\lib\site-packages\torch\nn\modules\container.py", line 92, in
forward
input = module(input)
File "E:\python\lib\site-packages\torch\nn\modules\module.py", line 489, in __
call__
result = self.forward(*input, **kwargs)
File "E:\python\lib\site-packages\torch\nn\modules\conv.py", line 320, in forw
ard
self.padding, self.dilation, self.groups)
RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR"

Did I make any obvious mistake?

>> No.5259116

>>5258589
These never look good. This fad doesn't need a constant thread.

>> No.5259121

>>5258646
No

>> No.5259134

>>5259103
I got it running with this guide:
https://kingdomakrillic.tumblr.com/post/181294654011/manga109-model-attempt-for-illustrations

>> No.5259137

>>5259103
What is that webm from?

>> No.5259156

>>5259103

You have CUDA installed, and its version matches the version you selected for PyTorch? And you installed the 64-bit version of Python?
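A quick way to narrow it down before touching test.py is to just ask PyTorch what it can see (if CUDA isn't available, or the built-against version doesn't match what you installed, that's the likely culprit):

import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("built against CUDA:", torch.version.cuda)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))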

>> No.5259190

>>5259116
yeah it does

>> No.5259245
File: 3.89 MB, 2560x3200, 53055_rlt.jpg [View same] [iqdb] [saucenao] [google]
5259245

Quest for Glory 4 backgrounds.

I had this failed attempt at a model trained on color-reduced images, for games like DKC. It just makes things blurry as shit, but by running the image through that, downscaling again, and upscaling with the stock filter, I was able to get something sharper than Manga109 but not as harsh as the default model.

>> No.5259248
File: 2.97 MB, 2560x3200, 53055.jpg [View same] [iqdb] [saucenao] [google]
5259248

>>5259245

Though it looks a bit janky compared to the originals.

>> No.5259287
File: 694 KB, 1280x960, redchanit01.png [View same] [iqdb] [saucenao] [google]
5259287

>> No.5259293
File: 1.92 MB, 1280x960, redchanit02.png [View same] [iqdb] [saucenao] [google]
5259293

>>5259287

>> No.5259335

>>5259293
dude I like blurry low rez so much better, has so much soul, fuck this AI shit

>> No.5259345

>>5259335
Little sad you think that was some ai.

>> No.5259352

>>5259345
AI is the best name for it. Even if it's less neurons than a rat.

>> No.5259361
File: 1.66 MB, 2560x1920, R10A06_output.jpg [View same] [iqdb] [saucenao] [google]
5259361

>>5259335
>soul meme
fuck back to /v/ you pussy ass underage nigger

>> No.5259365
File: 275 KB, 1024x1024, redchanit03.png [View same] [iqdb] [saucenao] [google]
5259365

>> No.5259368
File: 1.44 MB, 1024x1024, redchanit04.png [View same] [iqdb] [saucenao] [google]
5259368

>>5259365

>> No.5259373

This is a game changer. I can't wait to see what comes from this

>> No.5259380

Could emulators just dump all sprites/textures and have a way to run an AI pipeline on them? Then just refer to the new texture or sprite whenever the game is played.

Seems like the easiest way to do it for a lot of games would be emulator support for upscaled textures/sprites.

Dolphin already seems to have built-in support for dumping/editing textures. Are there any others?

>> No.5259384

>>5259368

Do the entire Mako reactor section of FF7, mod it into the game, and then make a demo video in HD. I want to see this in action.

>> No.5259396

>>5259361
>>5259368

Is that from the custom SR program an anon here was working on? It looks too good for ESRGAN.

>> No.5259397

>>5259396
>It looks too good for ESRGAN.

The PC ports are 640x480 resolution, twice that of PS games. So it looks better.

>> No.5259401

>>5259121
fuck off, retard

>> No.5259415
File: 1.57 MB, 2560x1920, R50D00_output.jpg [View same] [iqdb] [saucenao] [google]
5259415

>>5259396
>>5259397
That's Topaz AI Gigapixel >>5259361
with no noise and blur reduction.

>> No.5259417
File: 869 KB, 1600x1200, R50D00_output_output.jpg [View same] [iqdb] [saucenao] [google]
5259417

>>5259415
Also, you can set custom resolution scaling.
>moderate noise and blur reduction

>> No.5259420

>>5259417
>>5259415

Any for RE2?

>> No.5259421

PPSSPP has texture replacement options and some high profile exclusive games. A few of those might be fun to try upscaling with AI.

Ones that would be of highest interest:
FFVII Crisis Core
FF Tactics
MGS Peace Walker

If anyone wants to tackle scaling one.

>> No.5259431

>>5259421

And a bunch of PS1 to PSP ports, like the Persona games.

>FF Tactics

Uh, so the game uses sprites. You're recommending AI-upscaling the sprites? Or just the textures? Personally I'd like to see both. Worth an experiment.

>> No.5259435

>>5259361
I'm not sure Resident Evil's old pre-rendered backgrounds benefit from being sized up and sharpened.

>> No.5259449
File: 881 KB, 1696x960, redchanit05.png [View same] [iqdb] [saucenao] [google]
5259449

>>5259384
>>5259396
I'm grabbing these images from ESRGAN/SFTGAN threads all over and touching them up slightly in Krita.

The #1 reason why so many of these images look bad is that AI upscaling is a meme right now and loads of people who don't know what they're doing are running the tools.

>> No.5259451
File: 2.15 MB, 1696x960, redchanit06.png [View same] [iqdb] [saucenao] [google]
5259451

>>5259449

>> No.5259452

>>5259431
well any emulator that makes it easy. Just so people can see results and tests.

>> No.5259454
File: 329 KB, 1024x896, redchanit07.png [View same] [iqdb] [saucenao] [google]
5259454

>> No.5259459
File: 1.24 MB, 1024x896, redchanit08.png [View same] [iqdb] [saucenao] [google]
5259459

>>5259454

>> No.5259460
File: 877 KB, 768x768, r3_128srqw_output.png [View same] [iqdb] [saucenao] [google]
5259460

>>5259435
Depends on which size you want them to be; larger scales tend to screw up a bit with some images, like sprites with faces.
It's much easier to see better results with NVIDIA's tool, which will probably be trained for that, than with the normal scaling stuff we're using now.

And ESRGAN/SFTGAN weren't trained for that.

>> No.5259470
File: 815 KB, 768x768, r3_128srqw_output.png [View same] [iqdb] [saucenao] [google]
5259470

>>5259460
This is the same texture run with the three types of noise and blur removal in the Topaz tool.
>1st pic is moderate with 600% scaling
This pic is none, with 600% scaling.

>> No.5259478
File: 860 KB, 768x768, r3_128srqw_output.png [View same] [iqdb] [saucenao] [google]
5259478

>>5259470
Lastly, 600% with strong noise and blur removal.
This may look "fine", but they still need some retouching to look good.
The Morrowind one at least had people working on it, while the RTCW stuff didn't.

You still need to tweak them by hand after the scaling is done.

>> No.5259540
File: 601 KB, 1216x784, redchanit09.png [View same] [iqdb] [saucenao] [google]
5259540

>> No.5259543
File: 1.79 MB, 1216x784, redchanit10.png [View same] [iqdb] [saucenao] [google]
5259543

>>5259540

>> No.5259684

I do believe the purpose of this is not to make it look better, or even good, but rather to make it look similar and free of scaling artifacts: blocky pixels with nearest, diamond-shaped blurry pixels with bilinear, or warpy smudged shit with xBRZ. It makes the image easier to accept at first glance, rather than needing the viewer to acclimate to the artifacts inherent to normal scaling.
>>5259540 >>5259543
This is a very good example. It doesn't, and really shouldn't, look any more detailed. It only looks cleaner. It has roughly the same visual clarity and complexity as the low resolution one, just properly interpolated in such a way that it doesn't look obviously scaled.

The real issue at that point is avoiding introduction of other artifacts, like excess rough textures (Some that seem to be a sort of psychedelic grain), smudging, or even strange warping.

>> No.5259749

>>5259287
>>5259293
fuck that's wizardry. It seems to clean up stuff with very flat textures and simple geometry very well.

>> No.5259789
File: 67 KB, 517x313, FOT_Buick_Tile.png [View same] [iqdb] [saucenao] [google]
5259789

fallout car

>> No.5259792
File: 285 KB, 1034x626, FOT_Buick_Tile_output (1).jpg [View same] [iqdb] [saucenao] [google]
5259792

>>5259789
just gigapixeled with moderate blur reduction, nothing else

>> No.5259847

This is God's gift to mankind. The person who coded the first initial version of the AI algorithm was guided and channeled by God himself. His Angels descended from the Heavens and used the author's hands to create this magnificent code. I am truly amazed. We are living in a blessed era and I hope the future will carry fruits of heavenly inspiration.
I think Fallout 2 could be a great project to undertake, or perhaps some Nintendo game, not sure. Anyway, I'm going to install the software soon. It runs on top of Python; I'm wondering how much quicker it would be if it was done in C++. However, this is not the main point. The most important thing is how to train the AI and where to get training images suitable for certain tasks.

>> No.5259861

where is LR folder
I don't know how to use Bash

>> No.5259904
File: 1.09 MB, 960x1024, video2.gif [View same] [iqdb] [saucenao] [google]
5259904

>> No.5259913

>>5259861
Better to just forget it if you can't do a simple thing. I am sorry, educate yourself more and then come back. You can't expect to achieve anything if you can't even manage your way to find a directory. This is a fact.

>> No.5259959

Anyone get it working with the PPSSPP replace-textures function? I just messed around with it for like 20 seconds and it didn't work. Probably an error on my part; gonna experiment with it more later.

>> No.5259995

Someone do MGS codec screens.

>> No.5260093
File: 40 KB, 119x151, o.png [View same] [iqdb] [saucenao] [google]
5260093

>>5259449
>>5259451
Hnghhhhh do more fallout pls

>> No.5260134

>>5259959
I did tinker with it ages ago and I remember having issues getting it to work. I think I tried to make it export the textures but it didn't do anything (or I had some other issue), but I can't really remember any details.

>> No.5260138

>>5259959
To add: make sure you have the folder structure correct if you want to export textures out of PPSSPP. I think my issue was partly from not having the right kind of hierarchy. I think I needed to manually create some directories. Sorry, can't remember.

>> No.5260140

>>5259995
These are the lowest priority of anything in those games. What is your IQ?

>> No.5260148

>>5260138
I only messed with it for 3-4 minutes.

I saved the textures, converted them with Gigapixel, and placed them back inside the same folder with the same names. They didn't seem to be replaced properly and nothing happened when I reloaded.

I'll have to look into it later

>> No.5260160

>>5258589
>>5258592
>>5258603
these look like a combination of dogshit and sour milk. Sad considering some high res images are available from one of the devs

>> No.5260161

>>5260140
>These are the lowest priority of anything in those games. What is your IQ?

125+

The textures in the game would just look slightly better, so who cares? Instead, the Codec scenes are drawn art and would look the most interesting scaled up. Not saying they'd look good, just the most interesting.

>> No.5260275
File: 2.23 MB, 2432x1568, 603_jclearcut.1225_rlt.jpg [View same] [iqdb] [saucenao] [google]
5260275

30,000 iterations on the Riven anon's dataset. I probably brought this on myself for that miscommunication.

>> No.5260280
File: 1.77 MB, 2432x1568, jvillage.jpg [View same] [iqdb] [saucenao] [google]
5260280

>>5260275

>> No.5260290

>>5260275
Iiii'll just keep on trucking with v2, I think.

>> No.5260326

>>5260275
>>5260280
This doesn't look bad at all. I haven't played Riven; does it have the same panoramic scenery as Myst III: Exile?

>> No.5260356

Someone do Donkey Kong Country.

>> No.5260652
File: 1.70 MB, 1024x2688, Klobber_Karnage_DKC2_rlt.jpg [View same] [iqdb] [saucenao] [google]
5260652

>>5260356

I've tried several times, pic related is the best I could do. There's not enough information in the images to upscale well, they're barely a step above hand-drawn sprites.

If anyone's interested, I'll upload the "reduced colors" model I used to make this. It's no good on its own, but it works alright as a preprocessor for other models.

>> No.5260703

>>5260652
Funny how stylized it becomes.

>> No.5260796

Now do Resident Evil 2.

>> No.5260903

>>5260796
http://www.mediafire.com/file/8qap293qo9s07s5/RE2_hd_demo.zip

>> No.5260929
File: 1.36 MB, 2220x2000, some random image from danbooru idk.jpg [View same] [iqdb] [saucenao] [google]
5260929

Redoing the Danbooru model, this time with small amounts of noise added to the images so it can learn to remove it. Results are currently similar to waifu2x; we'll see how it turns out after a few more epochs.

>> No.5260963
File: 146 KB, 640x330, redchanit11.png [View same] [iqdb] [saucenao] [google]
5260963

>>5260093

>> No.5260967
File: 403 KB, 640x330, redchanit12.png [View same] [iqdb] [saucenao] [google]
5260967

>>5260963

>> No.5261120
File: 1.91 MB, 350x262, tumblr_o2ibpgygTE1usrgjso1_400.gif [View same] [iqdb] [saucenao] [google]
5261120

>>5260652
>There's not enough information in the images to upscale well, they're barely a step above hand-drawn sprites.

I'd actually like to see how hand drawn art gets handled by this thing. Any chance to put some SF3 assets through it?

>> No.5261176
File: 317 KB, 1920x1080, ULUS10336_00023.jpg [View same] [iqdb] [saucenao] [google]
5261176

TEST

non high res

>> No.5261179
File: 326 KB, 1920x1080, ULUS10336_00020.jpg [View same] [iqdb] [saucenao] [google]
5261179

>>5261176
high res

>> No.5261185

Okay the pipeline is simple for AI-enhancing any PPSSPP game.

Have AI-Gigapixel installed
Go into PPSSPP
press Escape to enter emulator menu
click tools
click developer tools
Texture Replacement Portion -> Enable Saving textures
This will create a textures/new folder for the game inside the PPSSPP documents folder
Put all textures through Gigapixel, remove all output name modifiers and export to /textures folder (Not the folder named new, but the one above it)
Keep in png format

now go back to the same spot you enabled saving textures, click replace textures. You must do this on any new zone/new enemy if you are not using an already made pack.
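If Gigapixel tacks a suffix onto the output names, something like this can strip it and copy the PNGs into the textures folder in one go (the suffix and both paths are guesses; adjust to whatever your install actually produces):

import os, shutil

src = r"C:\upscaled"                     # wherever Gigapixel exported to (placeholder)
dst = r"C:\path\to\PPSSPP\textures"      # the textures folder above "new" (placeholder)
suffix = "-gigapixel"                    # whatever name modifier your export used (guess)

for name in os.listdir(src):
    if not name.lower().endswith(".png"):
        continue
    clean = name.replace(suffix, "")     # restore the original texture filename
    shutil.copy(os.path.join(src, name), os.path.join(dst, clean))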

>> No.5261189

forgot to mention it fucks up alpha channels that are supposed to be transparent.

>> No.5261203

>>5261185
The goal is to bring a different style to the game, reminiscent of art/photo/movie-style effects. The mod has an "anti-saw" (anti-aliasing) effect, so even at low resolution the game keeps a smooth appearance. The results of the mod can vary from game to game because of different light tones.

>> No.5261206 [DELETED] 

>>5261203
how to fix transparency issues is what I wanna know

>> No.5261217

For those bitching the upscales look like oil paintings, the original low-reses on these JRPGs stylistically looked painter-like to begin with. Hell I wouldn’t be surprised if Square ran every Chrono Cross render thru a color-pencil filter as it is before stamping out the final image.

>> No.5261223
File: 2.41 MB, 1543x1024, before and after.png [View same] [iqdb] [saucenao] [google]
5261223

>> No.5261264

>>5261217
The training material makes the AI 'like' the line-art, painterly style more. If it were trained on real photorealistic images, using an extensive set containing hundreds of thousands of images, it would be able to create perfectly photorealistic results.
Our biggest effort should go into generating a new kind of training library and doing tests with it.

>> No.5261265
File: 526 KB, 1920x1080, psp.jpg [View same] [iqdb] [saucenao] [google]
5261265

Here is FFVII running on PPSSPP with 200% Gigapixel enhancement of textures

>> No.5261397
File: 53 KB, 312x438, ss+(2019-01-01+at+12.49.33).jpg [View same] [iqdb] [saucenao] [google]
5261397

>>5261185
>Have AI-Gigapixel installed
>99$
yeah pass that MEGA link bro

>> No.5261408

>>5261397
go to the downloads section and scroll way down, 30 day trial.

>> No.5261429
File: 80 KB, 384x126, 160220.jpgtile010_13312.png [View same] [iqdb] [saucenao] [google]
5261429

>>5260929

Almost 3 epochs in, 14,336 iterations. It's getting sharper, but it's having trouble completely shaking off the noise. But at least it's suppressing the noise instead of amplifying it like the old danbooru model.

>> No.5261449

>>5261429
I want this

>> No.5261472
File: 257 KB, 1024x1024, Papilio_xuthus_Larva_2011-07-01.jpg [View same] [iqdb] [saucenao] [google]
5261472

>>5259137
old college humor video.

Google did something similar for the digital zoom on the Pixel. Ultimately they implemented a linear-ish function (the exact function depends on the picture, and is thus not linear overall, but the individual mappings are) that was trained on a bunch of natural datasets. The resulting set of filters looked like the proposed response of simple cells in the lowest levels of the mammalian visual cortex; they kind of looked like 2D Gaussians multiplied by various spatial frequencies. Something tells me these neural network techniques are homing in on similar filters, although it would be harder to prove.

Interesting stuff. However, does it really make these games better?

>> No.5261481

>>5260703
It's trying to generalize to an art style it wasn't trained on. It would do better if given the original high resolution DK renders and downsamples of them.

>> No.5261489

That's pretty awesome actually.

>> No.5261517
File: 325 KB, 2048x1280, DheYzmkV4AE8qpJ.jpg [View same] [iqdb] [saucenao] [google]
5261517

>>5261481
someone should train on big booty 90s CG

>> No.5261523
File: 188 KB, 640x472, redchanit13.png [View same] [iqdb] [saucenao] [google]
5261523

>> No.5261524
File: 517 KB, 640x472, redchanit14.png [View same] [iqdb] [saucenao] [google]
5261524

>>5261523

>> No.5261534

I beat RE3 like 9 times to get all the epilogues, but I'm actually way more familiar with RE2. I'd love to see the thing try those. Also super curious how it would do on those end-game result screens.

>> No.5261536

>>5261517
exactly. however it would probably take some effort to compile a big enough training set

>> No.5261543
File: 42 KB, 1024x640, Wesker-cin-bio1[1].jpg [View same] [iqdb] [saucenao] [google]
5261543

Could this be used for RE live action segments? They would have to be done frame by frame, so that's 30 frames per second? That would take a while even with how short they are. I wonder how it would react to live action?

I guess the RE2-3 movies would work better because they're CG and thus less detail.

>> No.5261549

>>5261523
>>5261524
>redchanit
thats a name I havent heard in a while

>> No.5261550
File: 15 KB, 346x339, 1533153502661.png [View same] [iqdb] [saucenao] [google]
5261550

>>5261523
>>5261524
face it, the universe does not want sonic in 3d

>> No.5261562

>>5261536
how big would be big enough?

>> No.5261581

>>5261562
a few thousand would be ideal

>> No.5261601
File: 562 KB, 1024x768, ST3A.gif [View same] [iqdb] [saucenao] [google]
5261601

>>5261581
Nice. I've got several hundred old CG images and pre rendered backgrounds and I'm getting an RTX 2070 in a few days.

>> No.5261649

>>5259380
Technically this should be doable for at least some systems and games, but definitely not in real time, and definitely not for certain setups.

PPSSPP can dump all textures to a folder as you play, but it isn't intelligent and needs to be told what textures are truly new for games like Crisis Core.

A little weird to say, but it makes more sense in practice. It's more likely that these packs would need to be created per-game by people who give a shit and customized. Not done automatically by emulators themselves.

>> No.5261654

Super curious what these N64 games with unbelievably low res textures look like with this and the 3-point bilinear on top.

>> No.5261657

>>5259421
I actually manually redid most of the UI for Crisis Core in higher resolution.

http://forums.qhimm.com/index.php?topic=18051.0

Did it all by hand though, not AI. Just, if anyone wants to see.

>> No.5261672

>>5260148
You need to put them into the folder just above the one they saved to. Also, because of the way some of that shit works, you'll probably need an INI file properly set up to tell PPSSPP where some of them should go, because it'll keep generating more copies with more file names and shit. You should be able to replace the ones in the in-game spot you dumped them from, though, without a properly set up INI file.

>> No.5261683
File: 87 KB, 770x750, 1546306471359[1].jpg [View same] [iqdb] [saucenao] [google]
5261683

Upscale artwork to 4K.

>> No.5261710
File: 1.69 MB, 1892x2200, 2904e986ce6a63815bb8a49afa20e_rlt.jpg [View same] [iqdb] [saucenao] [google]
5261710

>>5261429

No difference. Basically identical to waifu2x. I think the problem is that every image in the Danbooru dataset was resized to 500px when it should be trained on a wide variety of sizes. https://www.mediafire.com/file/yf3368elly2t9s8/DanbooruAttempt.pth/file

And here's the reduced color model I used with >>5260652. It's only good for preprocessing images with color banding, as it's blurry on its own. https://www.mediafire.com/file/oecs2g06yqiawmd/ReducedColorsAttempt.pth/file

>> No.5261726
File: 1.10 MB, 1600x1560, 1546306560330_rlt.jpg [View same] [iqdb] [saucenao] [google]
5261726

>>5261683

2x was the best I could do. The base image is blurry to begin with.

>> No.5261747
File: 103 KB, 640x480, redchanit15.png [View same] [iqdb] [saucenao] [google]
5261747

What are some other good pre rendered games to try scaling?

>> No.5261750
File: 535 KB, 640x480, redchanit16.png [View same] [iqdb] [saucenao] [google]
5261750

>>5261747

>> No.5261756
File: 79 KB, 640x480, redchanit17.png [View same] [iqdb] [saucenao] [google]
5261756

>> No.5261758
File: 485 KB, 640x480, redchanit18.png [View same] [iqdb] [saucenao] [google]
5261758

>>5261756

>> No.5261760
File: 44 KB, 512x448, redchanit19.png [View same] [iqdb] [saucenao] [google]
5261760

>> No.5261763

>>5261601

I'll try training a model myself if you upload your collection. Other than that, I think I'm done with ESRGAN. Gigapixel is clearly better and there are other SR programs on the horizon.

>> No.5261765
File: 375 KB, 512x448, redchanit20.png [View same] [iqdb] [saucenao] [google]
5261765

>>5261760

>> No.5261776
File: 2.81 MB, 2302x1258, ff type 0.png [View same] [iqdb] [saucenao] [google]
5261776

Type 0 results with only Gigapixel and no pre/post-processing. The source textures are pretty filtered/blurry, so not much changes when using Gigapixel.

>> No.5261785

>>5261776
without bump mapping the difference is borderline imperceptible

>> No.5261790

>>5261763
I'll upload them later or tomorrow. They're scattered across a few places and some are in old/dead formats.

>Gigapixel is clearly better and there are other SR programs on the horizon.
Is there a good site/forum that tracks these developments?

>> No.5261792

>>5261750
>>5261747

Are you upscaling it and then scaling it back down to 640x480? If so, what's the point? It gets rid of the aliasing, sure, but at least make it 960.

>street sign

Notice that when it has enough info, it works great. But those street signs are too low detail. You'd have to manually re-draw them with meaningful info.

>> No.5261801

>>5261790
Nope, most of it is academic, so they don't care about applying it. Also, all the money is in other services right now, like vision, speech, etc.

>> No.5261809

>>5261747
I think the old RE games benefit from the backdrops actually not being much sharper and clearer.

>> No.5261825
File: 2.28 MB, 1280x960, s33.png [View same] [iqdb] [saucenao] [google]
5261825

>>5261792
>Are you upscaling it and then backscalling it back to 640x480? If so what's the point? It gets rid of the alaisiing, sure, but at least make it 960.

Yes. 480 is still 4x the detail. I'm doing some image edits to fix a few problems. The higher the scaler goes, the more guesswork there is, and the more artifacts become apparent. This is what it looks like simply scaled to 960. I can't scale any higher because my card doesn't have enough VRAM.

>> No.5261831

>>5261747
>What are some other good pre rendered games to try scaling?

RE1 (since everyone forgets that game, and it's a bit ugly)
RE remake
RE0 (very interesting since there is a real HD version, so you can compare the theoretical HD to the real HD)
FF 7-9
Parasite Eve 1-2
Fear Effect 1-2.

>> No.5261842

Could someone do this with Ridge Racer 4?

I'm sure there would be far less work involved than an RPG or GT.

>> No.5261850

>>5261825

1280x960 makes a better target resolution to aim for than 640x480. But you'd have to go background by background and start re-drawing major elements of them.

>> No.5261864

>>5261850
>1280x960 makes a better target resolution to aim for than 640x480
Eh, not for specific PC games like FF/RE, since they usually render at 640x480 with nearest scaled backgrounds. Without further hacking that's a pretty acceptable target.
Really not sure if there's any pre-existing support for higher resolutions with later versions or mods.

>> No.5261870

>>5261790

Not really. There's threads on forums like RPG Codex and ResetEra and a small Reddit board (/r/gameupscale), but I just use Google to keep up. https://www.google.com/search?q=super+resolution+site:github.com&hl=en&source=lnt&tbs=qdr:w&sa=X

There hasn't been much new lately since everyone's on vacation.

>> No.5261929

Do Deep Fear.

>> No.5261950

>>5261524
Only good upscale in the thread.

>> No.5261956

>>5261750
>>5261747

>Arukas
>Sakura

I JUST realized this. I guess it's a SF reference?

>> No.5261975

>>5261825
Better than Capcom. Well...maybe I will kill myself tonight. There is literally nothing to see anymore in this world. We can get all the hd remakes we want in couple of months from now.

>> No.5261976

>>5261956
Given how incredibly common Sakura is, as both a name and a word, that's not very likely.
But apparently others agree with you for some reason, including some research book.
https://residentevil.fandom.com/wiki/Arukas
Is there some other indicator of a reference there beyond the simple inverse name?

>> No.5261991

>>5259335
is this bait? >>5259293 turned out fucking amazing. It looks like what the actual 3d models probably would have looked like if they weren't tiny 320x200 pixely shits

>> No.5262696

>>5261976

It's Capcom, and they just added Sakura to Street Fighter recently. Not hard to make the connection.

>> No.5262826

>>5260275
Riven seems like a good idea except it has video playing in frames in game. Upres that and try not to get flickering.

>> No.5262876
File: 327 KB, 960x1100, ULJS00293_00023.jpg [View same] [iqdb] [saucenao] [google]
5262876

I guess the PPSSPP method doesn't work too well; some textures output as pure black.

>> No.5262947

>>5261747
>>5261750

Are you taking the 320x240 PS backgrounds and upscaling them? The DC/Win version has 640x480 backgrounds as a start. Use those, and bring them to 1280x960.

>> No.5263008

>>5262876
Those character sprites definitely won't work with Gigapixel.

>> No.5263018

>>5263008
On a sidenote, what are the odds AI could be trained to turn a painting into a realistic photo one day? Imagine all those classical paintings of gladiators and war and shit becoming a photo into the past.

>> No.5263046
File: 872 KB, 1592x1573, Untitled.jpg [View same] [iqdb] [saucenao] [google]
5263046

>>5263008
yeah these look worse than those generic emulator filters

>> No.5263062

>>5263046
Anything unrealistic and soft turns out bad, i.e. already-filtered textures as well.

>> No.5263194
File: 1.47 MB, 1881x303, s2p3.png [View same] [iqdb] [saucenao] [google]
5263194

>>5263018

https://github.com/msracver/Deep-Image-Analogy

Though it's from 3 years ago and I'm not sure if it's still the state of the art.

>> No.5263215

>>5263194
That's the way to go when you want to convert one style to another style, not plain upscaling.

>> No.5263305

>>5261747
Westwood's Blade Runner

>> No.5263341

>>5259449
WHICH FALLOUT IS THAT

BOS? The playstation one?

>> No.5263697
File: 1.39 MB, 1072x1040, ogre tactics.png [View same] [iqdb] [saucenao] [google]
5263697

No real point to this upscale, just messing around.

Gigapixel on Tactics Ogre.

>> No.5263705

>>5263341
It's Fallout 2. There's a derelict submarine in San Francisco, remember?

>> No.5263706

>>5263341
>>5263705
The submarine is cut content from Fallout 2.

>> No.5263713 [DELETED] 
File: 1.05 MB, 1126x844, ogre tactics test.png [View same] [iqdb] [saucenao] [google]
5263713

This gave very good results actually compared to what I expected and versus emulator options.

The problem is everything has major seaming issues.

>> No.5263761
File: 244 KB, 800x320, upscale compariosn.png [View same] [iqdb] [saucenao] [google]
5263761

Okay I figured out what was wrong.

Here is a legit comparison (no FXAA) of Tactics Ogre: Let Us Cling Together for PSP.

Note: In-game it creates too many seams, due to the way adjacent tiles interact, to be usable without care in cutting/pasting things back in.

>> No.5263769
File: 903 KB, 1920x2168, tiling errors.jpg [View same] [iqdb] [saucenao] [google]
5263769

might work on doing a version without the seam issues today

>> No.5263880

Dataset for Riven, version 2.
https://mega.nz/#!s5J1SaBS!fmJB4RGm-zi3yT4Md_qOzbQhfGrvtvmtlabFC3QLR5A

A couple of remarks.
1) Everything is set and ready to go according to my understanding of specifications provided above.
2) AS FAR AS I AM AWARE, this dataset is pretty much pixel-perfect in regards to what I intended it to be. One major caveat, however, is that the dithering algorithm used on the original full-res Riven pictures seems to be different from the Floyd-Steinberg dithering algorithm I employed on the downsampled versions of those pictures. I currently have no idea as to the extent of the impact of this discrepancy.
3) This dataset is meant to train the neural net to upscale a dithered picture into a sharper, higher-res picture that is also dithered. When upscaling a Riven picture with the net trained on this dataset, DO NOT DE-NOISE IT IN ANY WAY WHATSOEVER. Just use the picture the way it is, after it has been taken from the game's resource files and converted to truecolor format, without ANY further graphical processing.
4) Start a model completely from scratch and train it on this dataset exclusively. Don't use a single external photo or any data from higher-resolution Riven renders.
5) Seeing that I am very happy with how this dataset turned out, the results one can achieve using a model trained on it with a sufficient number of passes hinge only, pretty much, on the "quality" of the ideas it is based upon and the "quality" of the implementation of the neural net in question.
6) This graphical set is meant to be pretty much an experiment in how far you can enhance the picture from the game, when going SOLELY by graphical information, ALREADY PRESENT in the game, without any external references (to photos, paintings, etc.) whatsoever, using, well, ESRGAN. The only significant exception to this paradigm is the employment of dithering to downsampled versions of pictures in order to train the net to regard dithering as noise, not as details.
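For reference, the dithering step on the downsampled versions amounts to roughly this in Pillow (a sketch of the idea, not the exact script I used; Pillow's quantize dithers with Floyd-Steinberg by default):

from PIL import Image

img = Image.open("downsampled.png").convert("RGB")   # placeholder filename
dithered = img.quantize(colors=256)                  # 256-color quantize, Floyd-Steinberg dither
dithered.convert("RGB").save("downsampled_dithered.png")  # saved back out in truecolor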

>> No.5263923

Can this AI meme please die?! NOTHING in this thread looks presentable.

>> No.5264117

>>5263880
Thanks for the effort. Looks great for such a short amount of time spent doing this. 2019 is off to a great start.

>> No.5264137

>>5263923
No. Even if it's garbage now, it can improve far faster than the time human artists could dedicate. There's no reason to stop.

>> No.5264139

Anyone want to make something to help it deal with tiles or have some ideas of the best way to do so? Obviously something that de-tiles the texture into individual files, then feeds through any AI, then re-tiles.

I'm guessing this program I downloaded called ImageMagick is the best place to start. Any tips or ideas from people who have messed with images before? Just script it out?
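To sketch what I mean (Pillow instead of ImageMagick, since I can at least read Python; the tile size and the 4x scale factor are assumptions):

from PIL import Image
import os

TILE, SCALE = 128, 4  # assumed tile size and upscale factor

def detile(path, out_dir):
    # slice a texture sheet into TILE x TILE pieces named by grid position
    img = Image.open(path).convert("RGB")
    os.makedirs(out_dir, exist_ok=True)
    for y in range(0, img.height, TILE):
        for x in range(0, img.width, TILE):
            img.crop((x, y, x + TILE, y + TILE)).save(
                os.path.join(out_dir, f"{y//TILE:03d}_{x//TILE:03d}.png"))

def retile(in_dir, out_path, cols, rows):
    # stitch the (upscaled) pieces back together in the same grid order
    sheet = Image.new("RGB", (cols * TILE * SCALE, rows * TILE * SCALE))
    for r in range(rows):
        for c in range(cols):
            piece = Image.open(os.path.join(in_dir, f"{r:03d}_{c:03d}.png"))
            sheet.paste(piece, (c * TILE * SCALE, r * TILE * SCALE))
    sheet.save(out_path)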

>> No.5264153

>>5263880
What I forgot to say is that Final Fantasy VII should work in a similar way as well, because the backgrounds are those late-90s 3D renders. Maybe you could take a look at that next? I'm sure it will be extremely popular. Then upload them to Nexus or something; not really sure which place is the best for this kind of work though. Final Fantasy VII or IX plox~!

>> No.5264156

>>5264139
Best way to do it is to establish what you want to do and then script it out. First do a few manual tests to figure out your process, and after that just script it.

>> No.5264214

>>5264153
First of all, I don't even know, currently, whether the results of employing my dataset will even be any good whatsoever, and whether pure extrapolation based only on the ingame graphical data is even a valid approach. In other way, however thoroughly (hopefully) put together, this is an experiment to me, strictly, for the time being, a one-off, made with the purpose of seeing the actual results of its intended usage.
Second, Riven is, when speaking about games, one of the most important ones to me personally. In comparison, I have no significant history with Final Fantasy, and I have seen the two particular ones you've mentioned on occasional screenshots only. What I am leading to is,
third, that what was fueling my efforts was, first and foremost, a genuine curiosity as to what Riven would look like when "enhanced" using neural nets based on information taken solely from Riven itself, not any ambition to fame.
Fourth, if (and only if) the results of that "experiment" would end up being any notably good, so that it would make sense to treat other games in a similar fashion, I MIGHT do a guide detailing exhaustively on what exactly I did with Riven's original graphical data, so that I eventually ended up with my dataset, so that others could employ similar steps in relation to graphical files from their preferred games, in order to end up with similarly enough composed datasets of their own.

>> No.5264229

>>5264214
>To put it another way, however thoroughly

>> No.5264436
File: 786 KB, 880x512, tex1_880x512_1e879d0b76a51e9f_14.png [View same] [iqdb] [saucenao] [google]
5264436

Baten Kaitos: Eternal Wings and the Lost Ocean

Used Gigapixel, gamecube game w/ dolphin

>> No.5264438
File: 787 KB, 1760x1024, after.jpg [View same] [iqdb] [saucenao] [google]
5264438

>>5264436

>> No.5264463

night and fucking day

http://www.framecompare.com/image-compare/screenshotcomparison/021CNNNU#

>> No.5264503

>still waiting on those Daggerfall tits.

>> No.5264569

>>5261185
Why use Topaz? Not only is it payware shite, but it's also closer to xBRZ in results than any of these other AI things. It's only good for strict single pixel wide lines and trying to round them out. And for PPSSPP you may as well just use its xBRZ option if you'd like that.

>> No.5264573

>>5264569
just experimenting..

>> No.5264630
File: 1.06 MB, 1760x1024, ESRGAN 0.8.jpg [View same] [iqdb] [saucenao] [google]
5264630

>>5264569
feels like you have to do more work with ESRGAN, also just testing. will try more.

>> No.5264662

Okay, after comparing screens it looks like ESRGAN was the best for the in-game background.

>> No.5264664

http://www.framecompare.com/image-compare/screenshotcomparison/WP6LNNNX

>> No.5264709
File: 1.10 MB, 1236x1016, gunshop1.png [View same] [iqdb] [saucenao] [google]
5264709

http://www.framecompare.com/image-compare/screenshotcomparison/02MJNNNU

Framecompare for RE2 on gamecube, will play with it more later

>> No.5264720
File: 142 KB, 1440x1080, hd.jpg [View same] [iqdb] [saucenao] [google]
5264720

>>5264709
Here is the existing HD pack, made with waifu2x and posted on the Dolphin forums.

>> No.5264779
File: 38 KB, 216x145, Diablo 2 test.png [View same] [iqdb] [saucenao] [google]
5264779

Diablo 2 quick test

>> No.5264785

>>5264137
I was involved in two remastering projects and I call BS on this one. There is no way other than recreating data. The quality in this thread is far from usable in any commercial work.

>> No.5264787
File: 4 KB, 119x160, download (3).jpg [View same] [iqdb] [saucenao] [google]
5264787

>>5264785

>> No.5264792

>>5264787
What a shame, kiddo.

>> No.5264793

Need a pixel art trained gan so bad. Could a program that pixelarts a normal picture work for creating the dataset?

>> No.5264861
File: 216 KB, 1280x720, f2bd0921aabc7dffdea851d1d2f90540_1920_KR[1].jpg [View same] [iqdb] [saucenao] [google]
5264861

>>5264785
>The quality in this thread is far from usable in any commercial work.

Have you seen the garbage they sell for remasters? There's FFV-VI, which are all redrawn garbage. Then you get FFIX Steam which is a blurry mess. These are already better than those by a large margin. "Professional remasters" are just "get some cheap company to do it for the cheapest they can".

>> No.5264903

>>5264861
Not him and not relevant to the current argument, but they probably did FF9 in house. Like having one guy pick a filter that makes it look as least aliased (Pixellated, blocky, etc) as possible while accepting some (A lot of) blur and smudging. Trying to make it look less like a low resolution computer game's asset, and more like a badly focused background.

As terrible as they are, it does make sense for what they were going for when first porting it: the mobile market.
Low resolution at all could be seen as worse than badly focused shit. Especially since phone screens are often oily, scratched, damaged, etc so they're used to things being out of focus or hard to see.
If it's obviously low resolution then it's CHEAP. If it's badly focused or blurred then whatever.
And yeah, people really do think like that.

It's a shame they didn't just include the original resolution and give separate scaling options, or hell, just scale it with nearest like 7 and 8 did (At least originally). Filtered shite should be an afterthought at best for such a game, regardless. Maybe even leave it to modders.

>> No.5264910

>>5261747
Oddworld: Abe's Oddysee or Exoddus
Super Mario RPG

>> No.5265032

>>5264720
Dolphin's autistic filtering is bad
at least sourcenext RE2 has no problems with this shit

>> No.5265985

>>5265032
sourcenext still looks worse

>> No.5266068

>>5265985
Peixoto and Dege will look at those later.
If it's possible to crack the resolution limit of all backgrounds, it will be interesting.

>> No.5266320

>>5264709

Parts of the image look like a mix of bilinear, and other areas actually look sharp and HD. The car is almost sharp.

>> No.5266895
File: 465 KB, 1316x1016, blend.jpg [View same] [iqdb] [saucenao] [google]
5266895

Baten Kaitos eternal wings test

http://www.framecompare.com/image-compare/screenshotcomparison/WP6PNNNX

>> No.5266916
File: 500 KB, 1316x1016, blend2.jpg [View same] [iqdb] [saucenao] [google]
5266916

>>5266895

>> No.5266990
File: 1.39 MB, 2432x1568, 388_jisjunglemd.1802_rlt.jpg [View same] [iqdb] [saucenao] [google]
5266990

Currently training on the vanilla DIV2K dataset. I want to see how my results differ from the stock models, which could give me a clue as to where I'm going wrong with my own models.

I'd like to give FF7 backgrounds a try, but there would need to be a way of automatically compositing and decompositing backgrounds, as FF7 backgrounds are divided into transparent layers and you can't just take out the transparency without ruining the upscaling.

>> No.5267060
File: 91 KB, 566x529, esrgan.jpg [View same] [iqdb] [saucenao] [google]
5267060

character

>> No.5267346
File: 2.20 MB, 1396x1080, esrgan.png [View same] [iqdb] [saucenao] [google]
5267346

thoughts?
http://www.framecompare.com/image-compare/screenshotcomparison/WPP7NNNX

>> No.5267372 [DELETED] 

http://screenshotcomparison.com/comparison/126992

skyrim texture with ESRGAN (example purposes)

>> No.5267529

>>5267372
Really? It looks less like AI generated content and more like another texture, darker with more detailed lighting baked in, is simply laid over it.
I don't like this example at all because it goes against the concept of reconstruction without changing design. But I doubt its validity so yeah.

>> No.5267542

>>5264779
:O DIABLO 2

And I was just thinking about what uprendered animations would look like.

>> No.5267579

>>5266990
Now that's a very nice result. It seems to have created a detailed wooden texture right where it should, without psychedelic warping or painting canvas textures. It still does look artificial, particularly the out of focus parts, but otherwise passes fairly well.

>> No.5268182
File: 1.24 MB, 1280x800, 1516945402669.png [View same] [iqdb] [saucenao] [google]
5268182

passed the quakeguy texture atlas through gigapixel, looks nice

>> No.5268216
File: 2.19 MB, 960x1120, 1.png [View same] [iqdb] [saucenao] [google]
5268216

Letsenhance.io does a way better job of these than Topaz Gigapixel, and it's quite affordable.

>> No.5268224
File: 3.45 MB, 1920x1280, 2.png [View same] [iqdb] [saucenao] [google]
5268224

>>5268216

>> No.5268251
File: 1.06 MB, 1408x1792, 3.jpg [View same] [iqdb] [saucenao] [google]
5268251

>>5268224

>> No.5268260 [DELETED] 
File: 632 KB, 1280x1792, 5.jpg [View same] [iqdb] [saucenao] [google]
5268260

>>5268251

>> No.5268265
File: 886 KB, 1920x1280, 4.jpg [View same] [iqdb] [saucenao] [google]
5268265

>>5268251

It's not perfect, but pretty good.

>> No.5269032

>>5266990

Results were inconclusive; I probably didn't train it enough. Fewer ugly artifacts, but it had trouble actually enhancing the images.

Next attempt is to increase the batch size as far as I possibly can, even if it means cutting my screen resolution, and mix in some of the OutdoorSceneTrain dataset (which does better than DIV2K alone, according to the paper). I'm also going to be more ruthless about cutting out "useless" tiles that only contain a patch of sky or an out of focus part of the image.
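The "useless tile" cut can be automated crudely by throwing away tiles whose pixel variance is tiny (flat sky, out-of-focus blur); the threshold below is pure guesswork you'd tune by eye:

import os
import numpy as np
from PIL import Image

tile_dir, reject_dir = "hr_tiles", "rejected"   # placeholder folders
os.makedirs(reject_dir, exist_ok=True)
THRESHOLD = 15.0  # std-dev of greyscale pixel values; tune by eye

for name in os.listdir(tile_dir):
    path = os.path.join(tile_dir, name)
    arr = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    if arr.std() < THRESHOLD:  # nearly flat tile: sky, blur, etc.
        os.rename(path, os.path.join(reject_dir, name))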

>> No.5269064

Please stop doing FF9. It has low resolution assets. 7-8 are 640x480 backgrounds, so that's more data to work with.

>> No.5269094

>>5261179
It fucked the transparency but that looks PS2 quality.

>> No.5269553
File: 2.36 MB, 1396x1080, ai4.png [View same] [iqdb] [saucenao] [google]
5269553

>>5269094

>> No.5269615
File: 67 KB, 640x480, OWFKYLQ.webm [View same] [iqdb] [saucenao] [google]
5269615

>>5269064
>7-8 are 640x480 backgrounds
That's kind of disingenuous. Certainly 7-8 do have a few background textures that are 640x480, but they're never displayed on the screen all at once. Usually only 320x240 is visible at any time (since I think the overworld map is the only place that displays 320x224 instead). So that level of zoom is the level of detail you'd expect. It's roughly the same shit.
Outside of this individual background anyway, which is super zoomed in for cinematics. So even though only 320x240 is displayed at once there is indeed more detail.

>> No.5270437

>>5269615

RE1-3 and FF7-8 have 640x480 resolution backgrounds.

>> No.5271237 [DELETED] 
File: 2.18 MB, 2432x1568, 2.jpg [View same] [iqdb] [saucenao] [google]
5271237

>>5269032

I thought adding some non-dithered LR tiles plus reverse image searching Riven backgrounds for similar high-res images would work, but there's no change, it's as blurry and inconsistent as ever. Maybe the HR images are too sharp, leaving it unable to handle slightly blurry sources like the rock texture on the left.

>> No.5271241 [DELETED] 
File: 1.87 MB, 2432x1568, 45_textnomag.5900_rlt.jpg [View same] [iqdb] [saucenao] [google]
5271241

>>5271237

Manga109 model for comparison.

>> No.5271246

>>5271237
Nice. This is the point where you put the A.I. shit aside, crack open a cold one and start touching stuff up like a man in PS.

>> No.5271745

>>5263062
>>5263046
That's because people are training these things on natural scenes and then applying them to pixel art. If you want good pixel art upscaling, then you need to train on pixel art.

It's what I was trying to get at in >>5261481

neural nets could certainly produce better results than linear space-invariant filters, but only if you have good prior information

>> No.5271750

>>5269615
>>5271237

PC versions had double the resolution of the PS originals. This was before they lost the assets.

>> No.5271763
File: 1.01 MB, 637x670, bicubic vs old dataset vs new.gif [View same] [iqdb] [saucenao] [google]
5271763

>>5269032

Turns out the trick was to turn up the GAN weight and blur some of the LR tiles by one pixel. The model started to learn that it couldn't leave blurry areas alone; they had to be sharpened too.

This will help games like FF7 and 8 that also use low res textures in places.
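In case anyone wants to copy the trick, the LR-blurring half is just something like this (the one-in-three ratio and the 1px radius are my stand-ins for "some" and "one pixel"):

import os, random
from PIL import Image, ImageFilter

lr_dir = "train_LR"   # placeholder folder
random.seed(0)

for name in os.listdir(lr_dir):
    if random.random() < 0.33:  # blur roughly a third of the LR tiles
        path = os.path.join(lr_dir, name)
        Image.open(path).filter(ImageFilter.GaussianBlur(radius=1)).save(path)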

>> No.5271809

>>5269064
WTF why did Square go lower-res on 9?

>> No.5271816

>>5271809

All the PS FF backgrounds are 320x240 per screen (some backgrounds are multiple screens in size), but the PC ports are double the resolution. FF9 never had a PC port, so it never got higher resolution. Which is why it looks the most like ass on modern PCs.

>> No.5271830

Is there a Daggerfall patch or something? Or is this just to look at screencaps?

>> No.5271851

>>5270437
>>5271750
>>5271816
Complete horseshit. Except for RE3, all of those games on PC use the same 320x240 backgrounds from their PS1 versions for nearly everything, and the ones that are higher resolution than that are for sections that scroll.

>> No.5271890

>>5271830


https://forums.dfworkshop.net/viewtopic.php?t=1642

>> No.5271901

RE2 PC backgrounds are lossless and the PS ones are lossy, but they're the same resolution. RE2 PC is better as a starting point. I think this is the same for the other PC ports.

Thus, the PC version is the basis you should be using for upscaling, but the difference is more subtle.

>> No.5271914
File: 226 KB, 1920x1080, ss_66be6e5bf8944148fe2e1fe7aa5d4c41e6e0f9e0.1920x1080.jpg [View same] [iqdb] [saucenao] [google]
5271914

>>5271816
>but the PC ports are double the resolution
They only double the render resolution, not the asset size. Only the 3D benefited from this at all. Backgrounds are still the same.

I mean, you don't even have to buy or pirate the games to check it. There are countless shots all over the internet.
If you honestly believe they're all from emulation, you could just check Steam user images instead. Though the games are so ugly that people rarely screenshot them. Even the release page for 7 itself only has one pre-rendered background.

>>5271901
Are they really lossless? I know RE2 PC uses 4:4:4 color encoding instead of 4:2:0, so things like smooth transitions on red look far, far less blocky. And at a decent quality so there's far less artifacting. But I didn't think they were actually encoded in a lossless format.

>> No.5271973

Any news on this >>5263880 stuff?

Note: Even if the results ended up being utter unimaginable shit, I would've still vastly preferred actually seeing them with my own eyes.

>> No.5271979

>>5271973

Shit I didn't see it until now.

I'm training a different model at the moment, but when that is done I'll do your dataset.

>> No.5271989

>>5271901
>>5271914
Not lossless, but both Oddworld games on PC definitely use higher quality backgrounds: 368x240 8-bit color on PS1 vs 640x240 15-bit color on PC.

>> No.5271994

>>5271979
Okay. When you do, please, do keep in mind points 3) and 4) from >>5263880.

>> No.5272006

>>5271763
How big is the data the AI has learned? Could you upload it to MEGA, for example, so we could all use the same results?

>> No.5272134

>>5272006

The datasets the models learn from can range from 100MB to 20GB and beyond, but the models themselves are about 65MB, which is all you need to upscale images.

I've uploaded the ones I think have some use. There's just not many that do.
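
If anyone wants to sanity-check one of those ~65MB files: assuming it's a plain PyTorch state dict (which I believe is how the ESRGAN repo ships its pretrained models), you can peek inside like this. The filename is just a placeholder.

import torch

state = torch.load("models/RivenAnon.pth", map_location="cpu")  # placeholder filename
params = sum(v.numel() for v in state.values())
print(f"{len(state)} tensors, {params:,} parameters, ~{params * 4 / 1e6:.0f} MB as float32")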

>> No.5272162

>>5261750
This fucks up small text something fierce.

>> No.5272265
File: 3.73 MB, 350x350, 1542860513149.gif [View same] [iqdb] [saucenao] [google]
5272265

So from reading this thread I can gather what is actually playable already:
> GC/Wii games on Dolphin
> PSP Games on PPSSPP
Anything I miss??

>> No.5272360

>>5271914

Someone do the gold Saucer entrace in these AI programs please.

>> No.5272398

>>5272006
MEGA, as in an arduino? Fuck no.

>> No.5272429
File: 2.30 MB, 2432x1568, 83_tislandexterior.1225_rlt.jpg [View same] [iqdb] [saucenao] [google]
5272429

>>5271763

it's good enough I guess.

>>5272265

http://emulation.gametechwiki.com/index.php/Texture_Packs

>> No.5272847

>>5259543
There's a HD screenshot of this render somewhere.

>> No.5272863

>>5261776
If the source is blurry to begin with, shrink it down to 50% before the AI pass to give the AI more room to work with.

>> No.5272876

>>5272863
The textures themselves aren't inherently blurry, they're just low resolution and scaled by bilinear. Shrinking them to an even smaller resolution wouldn't help anything. Well, unless your goal is to destroy more detail so the AI has to come up with everything from near scratch. But that would be weird.

>> No.5272883

>>5272876
I was trying out Gigapixel on screenshots from The Empire Strikes Back Blu-ray, which is badly mastered and blurry even at 1080p. Upscaling directly made the AI try to enhance the grain pattern and the already blurry areas, and the results were bad. If I downscaled the source by 75% or 50% (depending on the sharpness of the shot), then AI-upscaled, then did some careful downscaling back to 1080p, I managed to get a sharper, more detailed and less grainy image than the Blu-ray, without actually increasing the resolution at all. It didn't work great for every shot though; sometimes it would give people plastic faces, but backgrounds and set design looked very clean and sharp compared to the original.

>> No.5272886

>>5272883
That's a different issue entirely, bad upscaling beforehand. I used to do similar things with downscaling to 720ish and scaling back with Spline/Jinc using madVR back when I cared about that. Nowadays with video I consider motion interpolation far more important.

But game textures generally don't suffer from that.
Though there are exceptions like remakes (FF9 in particular), it's generally better to find the lower resolution versions of those to scale rather than downscaling the badly upscaled shit.
It's the already low resolution textures you'd be scaling with AI, not the bilinear rendered end result. There's no sense in making THOSE even smaller.

>> No.5272960

>>5272886
if the point is to try and add more detail then downscaling could actually work for some cases since a lot of old textures aren't very detailed at all.
If the art is bad to begin with then the AI can't make it better, but if you downscale it first the AI can sorta reinterpret what the original artist had in mind and maybe do a better job at it.

>> No.5272967
File: 34 KB, 500x667, 5TnJLcQ_d.jpg [View same] [iqdb] [saucenao] [google]
5272967

>>5258589
These look absolutely terrible, fittingly like something a generator would shit out automatically.
More importantly, will you idiots EVER be able to let low-res prerendered cg trash from our youth go? It was rarely good and never, ever great.

>> No.5272968

can you guys do old monkey islands?

>> No.5272969
File: 553 KB, 600x422, 1398232127753.png [View same] [iqdb] [saucenao] [google]
5272969

>>5258589
this shit seriously is the Eagle/2xSaI of the late 2010s
my eyes fucking bleed

>> No.5272971

>>5272967
>t. asshole

>>5272969
eat shit, loser

>> No.5272974

>>5272960
>it's the artists' fault that some autistic A.I. using dweeb wants to upscale artwork 20+ years later for whatever obsessive reason

The mental gymnastics on this board, I swear

>> No.5273027

>>5272960
I don't think you understand.
The game's textures are not blurry, they are just low resolution.
What you'd be resizing is NOT the end result (which is blurry because of how far the texture has to be stretched with bilinear to fit the model), but rather the tiny textures BEFORE they are scaled.
They are not badly upscaled and in need of shrinking to have a base to rescale from. They're barely large enough to display what information is there to begin with.
AI scaling is not magic that can make detail from absolutely nothing. It only works to find patterns and recreate from what it has trained on.
Shrinking the already tiny textures to hide more detail will only prevent more patterns from being detected.

In your example of a bad video upscale, there is no extra detail added. All of the actual detail is still visible when scaled down by about half.
In FF9's textures specifically, they are already small, and barely hold the detail they do have. The buttons on the cuffs of the shirts, for example, are roughly 2x2. Two by fucking two. The whole section around the buttons including the holes are roughly 5x3 each. That's how small that shit is.

>> No.5273069
File: 3.90 MB, 2052x4692, 53_gelevmagrm.1000_rlt.jpg [View same] [iqdb] [saucenao] [google]
5273069

>>5271994

15,000 iterations. The evaluation images didn't change much, so I don't think further training would make a big difference.

https://www.mediafire.com/file/ijiw379r4fpdbbx/RivenAnon.pth/file

>> No.5273113

>>5259368

>yfw modders make a better remake before square does and they dont fuck it up like square would

im okay with this.

>> No.5273183
File: 3.09 MB, 1600x1200, nl49wvy0jy521.png [View same] [iqdb] [saucenao] [google]
5273183

>>5259540
>>5259543
>>5272847
Found it for comparison.

>> No.5273283

>>5273069
Could you still do at least 5000 more passes and then do the same three pictures over again, just to measure whether there's really any factual difference?

It seems it gets royally fucked by dithering so far.

>> No.5273295

>>5273069
On a second glance, it seems to work best on contours (see any solid contrasted by sea or sky, clouds boundaries also seem to fare quite well), and to be at its worst on textures and gradients.

>> No.5273386
File: 937 KB, 1280x960, nearest upscale.png [View same] [iqdb] [saucenao] [google]
5273386

>>5261750

If you take these and then add an additional x2 scaling through bilinear or nearest it looks way better than simply going x4 with either on the original.

>>5272162
>This fucks up small text something fierce.

Less data to work with, and it doesn't necessarily know it's even a sign with words. You'll have to manually go through these and correct them.
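
For anyone who wants to reproduce the "extra x2 on top of the AI result" without an image editor, a minimal PIL sketch (filenames are placeholders):

from PIL import Image

img = Image.open("background_esrgan_x4.png")  # the 4x AI output
w, h = img.size
img.resize((w * 2, h * 2), Image.BILINEAR).save("background_x8_bilinear.png")
img.resize((w * 2, h * 2), Image.NEAREST).save("background_x8_nearest.png")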

>> No.5273410
File: 2.56 MB, 1280x960, bilinear upscale.png [View same] [iqdb] [saucenao] [google]
5273410

>>5273386

And then x2 with bilinear.

>>5272969
>>5272967

It's a tool, and it's getting better. Also, even something like x2 makes the backgrounds look really good and is undoubtedly an improvement. Compare >>5261747 and >>5261750. When doing just x2 it looks like it just removes the jaggies with no real loss anywhere else. After x2 you start to get more guesswork and the images get more distorted.

So even just x2 is an improvement and these new backgrounds themselves scale way better than the original 320x240 ones. Meaning that scaling these new backgrounds x2 is like scaling a natural 640x480 image x2 instead of scaling a 320x240 image x4.

>> No.5274282 [DELETED] 

The only other way I can think to upscale RE2 is by dumping each frame into a buffer and then applying a filter to this, but I'm not sure a real time algorithm could be added to the engine's pipeline.

>> No.5274285

Remake has 640x480 backgrounds.

>> No.5274449
File: 3.23 MB, 2432x4704, 83_tislandexterior.1225_rlt.jpg [View same] [iqdb] [saucenao] [google]
5274449

>>5273283

I ran it for 6,000 iterations 3 separate times, one with the GAN weight turned up, one with the feature weight up, and one with the pixel weight up. The GAN and feature weight ones turned out pretty much the same. The pixel weight one (pic related) turned out worse.

I'll give it another shot in a few days, right now I want to train it on a painting dataset I found. I'll also keep this dataset around in case a new SR program comes along.

There are places online where machine learning researchers can rent GPUs (some cheap ones being https://vast.ai/ or https://vectordash.com/pricing/). Useful if you want to train a network but don't have a good enough GPU.

>> No.5274478

>>5274449
The model in this >>5274449 is a model from >>5273069 with some additional passes, right?

If so, then: you are, of course, free to use your GPU however you please. Still, there is an obvious difference in quality between >>5273069 and >>5274449 on a pixel-to-pixel basis, very evident when you look at them at 100% scale, which indicates that the model is still mid-learning and nowhere near saturation.

>> No.5274523

>>5274449
Also, what, in principle, is the difference between "GAN weight", "feature weight" and "pixel weight"?

>> No.5274649
File: 305 KB, 2048x1792, KAT6xoH.jpg [View same] [iqdb] [saucenao] [google]
5274649

>>5274478

>The model in this >>5274449 is a model from >>5273069 with some additional passes, right?

Sort of. There's no checkpoint system with this unlike other SR programs. When training it again, I told it to initialize from the Riven model instead of the default PSNR model, but that's not the same thing. A GAN is made up of two neural networks, a generator and a discriminator. You can keep the generator if you've paused training and you want to continue, but you can't keep the discriminator.

>>5274523

>Also, what, in principle, is the difference between "GAN weight", "feature weight" and "pixel weight"?

I don't know, I'm not a machine learning researcher and it's not explained in the documentation. I was hoping I'd find out by adjusting the values. It's clear that by turning up the pixel weight, it doesn't stray as far from the original image, hence the mosaic pattern. I suspect that GAN weight controls how much detail it hallucinates, but I can't confirm it.

>>5274285

There's been a few attempts at it. It might be possible to inject high res textures into the GC version via Dolphin, but I suspect that (like FF7-9) the backgrounds are divided into layers, which would make things more difficult.

>> No.5274662

>>5274649
Could you consider OCCASIONALLY doing this
>When training it again, I told it to initialize from the Riven model
>You can keep the generator if you've paused training
for like 5000-7000 passes at a time, until any notable difference (between the start and the end of such a "training episode" - and measured by processing the exact same pictures again and again) absolutely, positively stops showing up? And also just focus on "pixel weight" only with this model (when and if you go back to it) from this point onward?

By the way, what kind of weighting was >>5273069 trained on: GAN, feature or pixel?

>> No.5274674

>>5274649
>2048x1792)

Don't think 4K is a good idea. These are not magic. I think x2 or x3 is best.

>> No.5274679

>>5274662

You want me to just restart it every epoch (an epoch being one complete pass of the training data)? I'll need to write a script to do that automatically, but alright.

>By the way, what kind of weighting was >>5273069 trained on: GAN, feature or pixel?

It was trained on the stock values. In train_esrgan.json:


, "pixel_criterion": "l1"
, "pixel_weight": 1e-2
, "feature_criterion": "l1"
, "feature_weight": 1
, "gan_type": "vanilla"
, "gan_weight": 4e-3

>> No.5274692

>>5274679
Wait, there seems to be a misunderstanding once again.

My dataset is comprised of 200k pictures, 100k HR ones and 100k corresponding LR ones. Of HR pictures, ~75k are in "training HR" folder and ~25k are in "validation HR" folder (LR folders obviously have the same numbers).

From what you say, a complete pass through all this data is what you call an "epoch".

Now, the question. Here
>15,000 iterations
and here
>I ran it for 6,000 iterations

What exactly did you mean by "iteration"? Is it an act of processing an individual pair of pictures (so that an "epoch" would be comprised of 75k to 100k "iterations" that way)? A complete pass on all the pictures comprising the dataset, that you called "epoch" above? Or something else entirely?

>> No.5274694

>>5274679
>It was trained on the stock values. In train_esrgan.json:
And how did the setting behind >>5274449 differ from those you wrote down here >>5274679?

>> No.5274697

>>5274694
>settings

>> No.5274702

>>5274692

I think an iteration is one update of the neural networks based on new information, and that iterations per epoch is basically (number of images / batch size). See here for more info: https://radiopaedia.org/articles/batch-size-machine-learning

If the batch size was 1, one iteration would equal one image pair. There's conflicting data on whether batches do what they're supposed to do (prevent neural networks from overreacting to a small amount of new data), but given enough time it'll converge to the same state no matter what the batch size is.
>>5274694

Feature weight was doubled (since it seemed high to begin with), the others were tripled.

>> No.5274716

>>5274702
What is your batch size, and how much time would it take for your machine (based on estimations) to do a full epoch on my dataset (75k pairs of training pictures; 100k pairs of pictures total, both training and validation)?

>> No.5274758 [DELETED] 
File: 426 KB, 1024x896, I dont know what I expected.jpg [View same] [iqdb] [saucenao] [google]
5274758

>>5274716

13, and it's around 6,000 iterations (2-3 hours on my machine). I had it set up to validate and save a new model every 2,048 iterations, so I had stopped it just after 1 epoch for those 3 tests.

>> No.5274762
File: 512 KB, 1024x896, I dont know what I expected.jpg [View same] [iqdb] [saucenao] [google]
5274762

>>5274716

13, and it's around 6,000 iterations (2-3 hours on my machine). I had it set up to validate and save a new model every 2,048 iterations, so I had stopped it just after 1 epoch for those 3 tests.
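
Quick sanity check on those numbers, pure arithmetic (the ~75k figure is the training-pair count quoted upthread for this dataset):

train_pairs = 75_000   # training pairs in the Riven anon's dataset
batch_size = 13
print(train_pairs // batch_size)  # 5769, i.e. the "around 6,000 iterations" per epoch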

>> No.5274792

>>5274758
Okay, 2-3 hours seems more or less reasonable.

Now to your question:
>You want me to just restart it every epoch (an epoch being one complete pass of the training data)? I'll need to write a script to do that automatically, but alright.
Yes, I think OCCASIONAL exactly-1epoch-sized training episodes would be acceptable.

Also, whether to use the settings from >>5273069 or from >>5274449 is up to you, but what I ask of you is just to settle down on a single set of settings for all the subsequent training episodes with my dataset, and to keep it the same between the episodes.

>> No.5274853

https://i.imgur.com/1XFnfJ8.png
>More Grim Fandango:
Gee golly that works well when there's actually enough detail to extrapolate from.
Linked instead of reuploaded because it's APNG and I don't know offhand how to webm/gif that without severe loss or crazy high filesize.

>> No.5275016

Anyone training ESRGAN on pixel art / game art?

>> No.5275017

>>5275016
That's actually a pretty hard problem to solve. It would probably need a completely new algorithm.

>> No.5275020

>>5275017
would it really though? Feels like with good enough data set it could learn the little patterns really well.

>> No.5275031
File: 13 KB, 560x264, 33094.png [View same] [iqdb] [saucenao] [google]
5275031

>>5275020
Not talking NES-type sprites, but something with decent information in it should be upscalable by ESRGAN if it's trained for it right, versus completely getting it wrong like it does right now.

>> No.5275553
File: 1.54 MB, 2304x1760, y8Mu8rl.jpg [View same] [iqdb] [saucenao] [google]
5275553

https://www.resetera.com/threads/ai-neural-networks-being-used-to-generate-hq-textures-for-older-games-you-can-do-it-yourself.88272/page-16#post-16562586
That looks like a high quality magazine scan.

I like how easy on the eyes manga109 scales are, even run through twice.
There are no psychedelic hallucination patterns, no sharp xBRZ/Sai2x/HQ/etc warping, no nearest/linear aliasing. And it's not nearly as blurry as bicubic or ringed as lanczos. It generally just produces a straightforward image that quite resembles print.

>> No.5275591
File: 1.40 MB, 2560x1920, mZWHBjZ.jpg [View same] [iqdb] [saucenao] [google]
5275591

>>5230405
Now less shitty.

>> No.5275659
File: 19 KB, 480x200, 1513812632201.jpg [View same] [iqdb] [saucenao] [google]
5275659

>>5275591
That's awesome. I think you can churn out the rest of the FFVII and then...beat Square at their own game.

>> No.5275724

>>5275591
Let's be honest, great as it is, no studio would go this way for a remaster. It's common to simply recreate the backgrounds from the ground up.

>> No.5275730

>>5275591
If you - as an artist working on my team - delivered this quality to me, I would fire you immediately.

>> No.5275740
File: 74 KB, 460x507, lincoln.jpg [View same] [iqdb] [saucenao] [google]
5275740

>>5275591
I don't know what everyone else is smoking, but this looks awesome.

realistically, this is the best we could hope for, and it came out pretty damn good, much better than I would have guessed. I'm impressed.

pic not related

>> No.5275745
File: 107 KB, 615x623, analpocalypse.jpg [View same] [iqdb] [saucenao] [google]
5275745

>>5275730
I don't even know what the fuck you do, and I would never hire you in the first place for anything.

so there.

>> No.5275750

>>5275724
>It's common to simply recreate the backgrounds from the ground up.

Never happened. REmaster HD was just upscaled with filters and touch ups, with a few new backgrounds. FF9 was just blurry filters. It's too costly to remake the backgrounds. And if you're spending millions to make those, why not just remake the whole game and make it more modern like RE2 remake?

>> No.5275758

>>5275724
>It's common to simply recreate the backgrounds from the ground up.
Or filter them to hell and back, yeah.
For FF7 in particular though, it was announced a while back that Square-Enix intends to remake it entirely in FF15's engine. Even with an option to avoid all combat for story's sake alone. They're pretty much done with the age old pre-rendered one it seems.

>> No.5275797

https://youtu.be/cbG34c1qorQ

>> No.5276182

>>5275730
You're just some literally who on 4chan, no one cares what you would do.

>> No.5276223
File: 308 KB, 1200x800, AoS_HD_Remastered_4K_Texture_Pack.jpg [View same] [iqdb] [saucenao] [google]
5276223

>>5258621
>>5258589
amazing work HD texture packs are a fool's errand

>> No.5276259

>>5276223
This is unironically how all these "HD" packs look to me. Absolute garbage.

>> No.5276353

>>5275730
I would fire you first for being a retard on economics

>> No.5276419

>>5276353
But you can't fire me. I'm my own boss and my company is privately held.

>> No.5276462

>>5275730
I think you should be on some other thread. Work here is experimental and it's absolutely amazing for 'free' work. This is the future of computing. Lockheed Martin is using AI to analyze high resolution satellite images to recognize strategic features. In the past this took a long time but now, with the help of AI it's quick and dependable. Source? Go to their site and read their news.

>> No.5276521
File: 301 KB, 640x480, 1389307684875.gif [View same] [iqdb] [saucenao] [google]
5276521

>>5275730
Same. People thinking it looks shit are looking at the image zoomed out, as opposed to looking at it at 100% (as they should, considering it is 2560x1920).

I'd rather stick with pic related.

>> No.5276534

>>5273410
It's not "better". It's initially irrelevant and increasingly hideous as resolution gets higher.
I'm not saying this could never be done, but we're still years away from this kind of machine upscaling, and, most importantly, the stuff posted ITT is completely horrid.

>> No.5276584
File: 19 KB, 384x139, hexen 2.jpg [View same] [iqdb] [saucenao] [google]
5276584

Hexen

>> No.5276586
File: 223 KB, 1280x800, 94w2eeo.jpg [View same] [iqdb] [saucenao] [google]
5276586

>> No.5276956
File: 888 KB, 1150x365, The Longest Journey Vista.png [View same] [iqdb] [saucenao] [google]
5276956

Can anyone please try to upscale this landscape from The Longest Journey? There are a lot of jaggy lines, but it can't be helped; there is no better quality available.

>> No.5277560

>>5276534
>It's not "better". It's initially irrelevant and increasingly hideous as resolution gets higher.
Some people feel the exact same about regular scaling like nearest and bilinear. The aliasing is also initially irrelevant and increasingly hideous as the image is scaled to higher resolutions. It's not something you can avoid nowadays, since almost all monitors, TVs, and other screens are digital, and increasing in resolution as time goes on.
Personally I don't mind since I grew up with low resolution (into the CRT->LCD switchoff which really really sucked but whatever). I'm used to the scaling aliasing and can tolerate it just fine.
But not everyone thinks that way.
Also, quite a fair portion of the market, particularly mobile, will shy away very quickly from anything low resolution and simply scaled. Not always because it looks bad (though they may certainly believe that) but because it looks cheap. There are exceptions like stylized "indie" shit where aliasing is cool. But otherwise it's quite a gamble. Just clearing up that aliasing alone is a viable goal. Making it look less like a blocky bunch of cheap shit and more like something else; in the best case of manga109 sets, magazine print; in a more awkward set like early ESRGAN, psychedelic soup. Those may be less offensive to the eye than aliasing. Quite easily so.

>> No.5277656
File: 3.27 MB, 4600x1460, 1546820943133_rlt.jpg [View same] [iqdb] [saucenao] [google]
5277656

>>5276956

Had to use the manga109 model as it turned out even worse with the stock one

>> No.5278016

>>5276584
Amazing.

>> No.5278161

>>5275730
>>5276521
Upscaling that high with those algorithms makes it look better when you scale it down, so that's the perfect way to look at it. This work will allow it to look tolerable on LCDs at 1080p. The real cool factor is to see how this looks back on a CRT and compare with the original.

>> No.5278168

>>5275591
Extremely good, as you can tell from all the idiots trying desperately to put it down and downplay it because of emotional knee-jerk reactions.

>> No.5278206

>>5260652
perhaps it would work better if it were designed to work on a series of frames, rather than just a still. That way it should be able to handle the edges of sprites more accurately.

>> No.5278208

Has someone done the n64 Zeldas yet?

>> No.5278235

Can somone just remake the entirety of Panzer Dragoon Saga using this? It would be so fucking rad.

>> No.5278440

This could be really interesting for things like "HD" re-releases. You could do this to clean up the backgrounds of a prerendered RPG or something and make them look more presentable without completely gutting the original assets. It's not 100% faithful and purist of course, but I'd accept it as a compromise for modern releases of games versus other options.

>> No.5278706

>>5277656
Thank you very much

>> No.5278715

>>5278235
No.

>> No.5279768
File: 77 KB, 384x128, a_is_for_alligator___painting_by_griffin_fire_d2m9bfk-fullviewtile010_2048.png [View same] [iqdb] [saucenao] [google]
5279768

>try training it on a database of 6,000 paintings
>looks like shit

Since the Manga109 model is the only one that turned out well, maybe a small dataset + a large number of epochs is better than the other way around. This time it's training just ~150 works of art in a variety of styles and scales

It's probably not gonna work, but if it does, it'll save a lot of time and effort.

>> No.5279810

>>5278440

A re-release would have these features:

>Original upscaled backgrounds
>Filter options
>default HD backgrounds
>HD backgrounds = AI upscale + touchups.

>> No.5280060

This is kind of related to human eyesight right?

We take 2 inputs and then mix them together into some output after putting them through all sorts of processing steps. I wonder how much of this upscaling / making things look right is done in our brains versus the actual input. Is eyesight noisy at all or anything like that?

I know we merge the images into 1, but we also do things like stability while head bobbing during running.

It's interesting the parallels and extrapolations that work between the two. I'd say upscaling abstract pixel art or things like it via meaning to useful graphics is more impressive than on real images.

>> No.5280065

>>5279768
Have you tried pixel art?

If you found a small number of GREAT examples and just flip/rotate/augment the dataset, it would probably perform really well. I'm not sure what exact patterns exist in the pixel art style for it to find, though. The unique thing is it's the opposite of the other styles in that you want jaggies and hard pixels.

>> No.5280089

>>5258589
>AI-created cancer

>> No.5280103
File: 454 KB, 837x342, chun-slight-speed-down-Copy.gif [View same] [iqdb] [saucenao] [google]
5280103

upscaling this without getting a blurry filtered mess would be fucking amazing magic

>> No.5280112
File: 225 KB, 742x864, SF3_Dudley_waifu2x_art_noise2_scale_tta_1.png [View same] [iqdb] [saucenao] [google]
5280112

>>5280103
whereas waifu2x and normal upscale just basically double the size of everything instead

>> No.5280145

>>5280065

It's not possible, at least not with ESRGAN. It needs a database of high resolution images to learn from as well as low resolution counterparts, and I don't think it can do hard edges. Maybe DNSR (https://arxiv.org/pdf/1812.04240.pdf) could. There's still no implementation of it, though.

>> No.5280157
File: 342 KB, 1920x1080, pixel-art-wallpaper-full-hd-1920x1080-110940.jpg [View same] [iqdb] [saucenao] [google]
5280157

>>5280145
There are high res pixel art examples though. It's just very different patterns than normal photo / manga images

>> No.5280161

>>5280157
I believe it should work with ESRGAN with the right training. There are also style transfers that turn images into pixel art versions.

>> No.5280164
File: 1.90 MB, 1000x2193, JEeiKmS.png [View same] [iqdb] [saucenao] [google]
5280164

>>5280161

The patterns of pixel art should be perfect for ML to deduce

https://www.reddit.com/r/PixelArt/comments/3q67oe/ccocnewbie_creating_pixel_art_by_neural_style/

>> No.5280178

>>5258589
how are these done? is there a program to make your own?

>> No.5280230
File: 46 KB, 960x846, pb561kkQth1xvyxl5o8_1280.png [View same] [iqdb] [saucenao] [google]
5280230

>>5280178

There's 4 ways to do it:

>AI Gigapixel, commercial software
>Let's Enhance, website, paid service, 5 free images
>free command-line programs like https://github.com/xinntao/ESRGAN or https://github.com/alterzero/DBPN-Pytorch , most of which require Python and an Nvidia card
>wait for Nvidia to release their own solution

>>5280164

A couple of years ago I found a program that attempts to generate pixel art from high res images. http://research.cs.rutgers.edu/~timgerst/

But while it's an improvement over just downscaling and reducing the colors, it can't make images that pass for pixel art. The style transfer results are in a similar boat, at least for now.

>> No.5280303
File: 1 KB, 48x64, PresiPix.png [View same] [iqdb] [saucenao] [google]
5280303

>>5280230
What all can I use the files output by the Pix program for? It's pretty neat.

>> No.5280374

>>5280230
Yeah, I was just wondering how ESRGAN trained on pixel art / for pixel art would do, when the "high res" target output is pixel-art patterned.

>> No.5281023
File: 3.74 MB, 1944x1896, sadness_and_hope_goes_hand_in_hand_by_starina_lenore_dcwgrzv-pre_rlt.jpg [View same] [iqdb] [saucenao] [google]
5281023

>>5279768

Filing this under "just good enough to upload and share". https://www.mediafire.com/file/2bjlhx5c5gpcafl/RandomArt.zip/file

There are two variations; feel free to mix them with each other or with other models through net_interp.py
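
For anyone who hasn't touched net_interp.py: as far as I can tell it's just a per-parameter linear blend of two state dicts, so you can also do it by hand. Filenames and the 0.5 are placeholders, and both models have to share the same architecture.

import torch
from collections import OrderedDict

alpha = 0.5  # 0 = all of model A, 1 = all of model B
net_a = torch.load("models/RandomArt.pth", map_location="cpu")  # placeholder paths
net_b = torch.load("models/Manga109.pth", map_location="cpu")
mixed = OrderedDict((k, (1 - alpha) * net_a[k] + alpha * net_b[k]) for k in net_a)
torch.save(mixed, "models/RandomArt_Manga109_50.pth")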

>> No.5281042
File: 3.69 MB, 2484x2304, vyasa_by_yunaxd_dcwi07k-pre_rlt.jpg [View same] [iqdb] [saucenao] [google]
5281042

>>5281023

Another example. Both are from DA.

These probably aren't going to be useful for retro games due to the usual color reduction problems; it was just a (successful?) experiment in training off small datasets. But my next plan is to take as many backgrounds as I can find from the remastered LucasArts games and pair them with their original counterparts; maybe that could be useful for remastering other adventure games.

>> No.5281168

>>5281023

Strike that, accidentally uploaded the wrong alternative model. https://www.mediafire.com/file/1rkhc7g9x4i9jhq/RandomArt.zip/file

>> No.5281428
File: 912 KB, 1024x768, 45939301544_befd425d9d_o.png [View same] [iqdb] [saucenao] [google]
5281428

From Heart of Darkness, supposedly a mix of manga109 and RRDB_ESRGAN

>> No.5281431
File: 835 KB, 1024x768, 45939301734_98b97a4e6f_o.png [View same] [iqdb] [saucenao] [google]
5281431

>>5281428

>> No.5281939

>>5259248
Top left, what's wrong with the tree and rock? It looks like it is partially cut away? Both images.

>> No.5282091

No matter how long you try, it still looks like shit

>> No.5284280

>>5258589
is this the same as converting mp3 to flac and saying it sounds better?

>> No.5284332

>>5284280
No. That's just re-encoding with no generation of detail whatsoever, a completely different concept.
It's not quite like filtering either, as the equivalent of that in images is just the normal filters, like lanczos, linear, bicubic, etc.

What this is is having an AI generate larger images and discriminate between them to find something that looks most like a larger image to begin with, based on the images it is trained on.

>> No.5284391
File: 9 KB, 470x495, retro_critic.png [View same] [iqdb] [saucenao] [google]
5284391

This algorithmic upscaling is pretty damned interesting. I've absolutely no issue with it, and find many of these images to be as good or far better than the originals. Like any tool, it can be abused out of laziness (remember the first wave of games that overused bloom?). Hell, some of these look absolutely fantastic!

Honestly, some of you try way too hard with the "soul" meme, and I can't see how you believe your own bullshit. Act like you even give a damn that Daggerfall is possibly getting revamped textures in the first place ffs.

Quality thread.

>> No.5284419
File: 1.66 MB, 1024x1024, 7fGykHB_rlt.jpg [View same] [iqdb] [saucenao] [google]
5284419

>>5281168

Redid it today to be friendlier with JPG artifacts. Half the LR images were set to 100% quality and the others were 50-90%. It's frustrating having to train new models for every single degradation pattern.

DNSR implementation when

>> No.5285208

>>5263923
looks better than you, you miserable hopeless obese virginal weirdo, fuck off and die irl

>> No.5286750
File: 2.39 MB, 1675x1080, 165_glakegehnevanobr.1625_rlt_rlt.jpg [View same] [iqdb] [saucenao] [google]
5286750

I gave up on making a 1-pass Riven upscaler, instead using a mix of ESRGAN (with the pixel weight cut in half), SFTGAN (using downscaled ESRGAN results) and Gigapixel. Sometimes one of them alone produces a good image, sometimes none of them does. But since people are willing to do a fan realRiven game...

I'll still do that other Riven anon's dataset tonight.

>> No.5286751
File: 2.86 MB, 2432x1568, 207_gplateaunobridge_output.jpg [View same] [iqdb] [saucenao] [google]
5286751

>>5286750

This one is straight from Gigapixel.

>> No.5287464

>>5279768
It could be (depending on the algorithm, of course) that training too long and with too many images results in the AI starting to average the image or doing something that's not necessarily optimal. This is just my guess, but it makes sense when you know how iterative computation works.

>> No.5287475

>>5280060
The AI does millions of small comparisons when it analyzes an image. Then it just fills in the missing information with its 'knowledge'.

>> No.5287881
File: 602 KB, 1062x807, colorbanding.jpg [View same] [iqdb] [saucenao] [google]
5287881

https://arxiv.org/pdf/1901.02840.pdf

>We focus on the challenging task of GIF restoration by recovering information lost in the three steps of GIF creation: frame sampling, color quantization, and color dithering. We first propose a novel CNN architecture for color dequantization. It is built upon a compositional architecture for multi-step color correction, with a comprehensive loss function designed to handle large quantization errors. We then adapt the SuperSlomo network for temporal interpolation of GIF frames. We introduce two large datasets, namely GIF-Faces and GIF-Moments, for both training and evaluation. Experimental results show that our method can significantly improve the visual quality of GIFs, and outperforms direct baseline and state-of-the-art approaches.

This tool is designed for GIFs, but it would work equally well on restoring lost color data in old games, possibly even DKC. Hope for a public release of the code.

>> No.5287893
File: 1.07 MB, 1164x830, comparison.jpg [View same] [iqdb] [saucenao] [google]
5287893

>>5287881

Comparison (GT means "ground truth", or the image it's trying to imitate)

>> No.5288956

>>5286750
'preciate it. I was just about to ask you whether there were any new developments on that dataset.

>> No.5289828

>>5286751
It seems like Gigapixel is the best solution right now

>> No.5289830

So, what's your end game...or do people here just mess around?

>> No.5289883

>>5289828
Depends. It does a lot of things terribly, especially low resolution textures like >>5261776 where it comes out looking warped similarly to xBRZ/HQx/SuperEagle/etc.
The one you've quoted looks quite blurry in comparison to other Riven scales in this thread.

I personally like >>5286750 the best. It almost has a sort of artificial camera focus look where sections are extra clean and sharp. Extra detail without looking like some psychedelic hallucination or warping. Including some beautiful water ripples and stone faces. And cleaned skies that are probably incredibly dithered in the source.
Nothing ran through giga/topaz/whatever has produced anything like that yet.

Though in general, for a single run image, manga109 seems to give the best results in the sense of natural scaling with few artifacts. Probably because the datasets were supposedly scanned in separately at different resolutions instead of just being downscaled. And because magazine/print scaling just makes more sense to my eyes.

>> No.5290132

Damn I'm jizzing and blowing my load while looking at these screens. I have never had this huge hard on for anything else before in my life.
I think 320x240 pixel graphics require special training. The AI and all the sets available are just for line art or something photoreal, i.e. photographic content.
The other issue is how exactly are you going to upscale pixel art anyway? Are you replicating the same image with just more pixels and preserving the original style, or... is it just interpolation? This is why AI doesn't work too well unless it's made specifically for pixel art.

>> No.5290306

>>5290132
I have a genius idea.
You see, art isn't that super original in games. It's usually inspired by other things.
Things like films and art, which got me thinking.
What if we removed all of the trainer models and built our own that runs exclusively on video frames of 'Aliens' and 'The Good, The Bad, and The Ugly'?

>> No.5290309

I don't understand how to run test.py. I click on it and nothing happens in the ./results folder

>> No.5290317
File: 93 KB, 959x162, Capture.png [View same] [iqdb] [saucenao] [google]
5290317

>>5290309
Actually, it's giving me this:

>> No.5290318
File: 3.69 MB, 2056x4648, 191_tgr_s3.2_rlt.jpg [View same] [iqdb] [saucenao] [google]
5290318

>>5288956

7 epochs and 12 hours. It doesn't look too bad at 2x.

https://www.mediafire.com/file/hm2v17da0z9m8pq/AnonRiven2.pth/file

Image examples: https://www.mediafire.com/file/wo238g6fnza28xf/results.zip/file

>>5289830

I was mostly just trying to get the best possible results out of ESRGAN. I'm gonna try actually upscaling textures later today.

>>5289828

Most of the Gigapixel results actually turned out worse. But it's more forgiving and works on a wider range of images out of the box.

>> No.5290324

Why doesn't one of us just do this once and then share around a build trained for low-res game textures?

>> No.5290350

>>5290317

You need to specify the model path. Either create a .bat file, or open a command line, navigate to the ESRGAN directory, and run:

python test.py models/(model name).pth

>>5290318
>>5288956

For what it's worth, I think the "multipass" method I tried gives better results. But Riven's a photorealistic game; your recursive upscale method may work better for games where it's impossible to find a good training dataset.

>>5290324

I try to share whatever models I come up with here, but none of the game texture attempts I've tried have come out as good as SFTGAN.

Comparison vs. ESRGAN (sorry for the compression): https://imgur.com/a/JDsvVei

>> No.5290603

Are there any models based off of satellite photos?

>> No.5290650

>>5290306
You could, just make your own set and see what the results are? Then we can call it '4chan set' or something. I'm bad with making names. Someone will probably invent a memetastic name for the set.

>> No.5290661

>>5290603
Lockheed Martin uses AI to analyze satellite imagery. The originals are huge, especially spy satellite footage; we are talking probably at least 4-8K source images. They use AI to go through tons of footage and find strategic features, which they then mark up into a database etc. In the past this process took ages, as they needed to do brute-force iterative comparisons and matching, but AI has sped up the process significantly.

>> No.5290691

>>5290318
How well does all this work with animations? Riven had animated elements, right? I wonder if all this holds up temporal / spatial wise or if you get some weird warping or flickering.

>> No.5291090
File: 283 KB, 512x512, AZORC01.jpg [View same] [iqdb] [saucenao] [google]
5291090

I gave Strife a shot, but I couldn't figure out how to replicate the Doom mod's look. Best results were either Manga109 or the random art model + blurring the texture beforehand.

>>5290603

There's several super resolution programs dedicated to satellite photos. I saw one or two on Github, haven't tested them though https://arxiv.org/abs/1711.10312

>>5290691

I don't think ESRGAN is temporally consistent, but there are other SR programs out there that are. https://github.com/flyywh/Video-Super-Resolution

>> No.5291160
File: 1.09 MB, 928x659, sq4.png [View same] [iqdb] [saucenao] [google]
5291160

Google drive of upscaled Sierra On-Line backgrounds:

https://drive.google.com/drive/folders/1r6wxN7xQzuV9bm4H_rw3u4aCOO5r43P4?fbclid=IwAR2G6n3lxEmbYeP4EK4OsZbNZEfxhznoZU6sn47-vLwBj-xIligV5S-M3ZE

>> No.5291192
File: 2 KB, 65x64, nu-pixel-male.png [View same] [iqdb] [saucenao] [google]
5291192

>>5280230
>A couple of years ago I found a program that attempts to generate pixel art from high res images. http://research.cs.rutgers.edu/~timgerst/
>
>But while it's an improvement over just downscaling and reducing the colors, it can't make images that pass for pixel art. The style transfer results are in a similar boat, at least for now.

Not bad.

>> No.5291424

>>5290318
Okay, that's a pretty convincing indication that the net has "gotten the wrong idea" and set off on a completely wrong track. Maybe using the wrong kind of dithering (I was training it to work around patterned dithering by introducing an unpatterned kind) contributed too, to some extent. Regardless, it clearly doesn't work.
I wholeheartedly thank you for all the time and goodwill that were necessary on your part to see this through.

>> No.5291645

>>5291424
Also, to elaborate, here
>Regardless, it clearly doesn't work.
I am not trying to shift the blame onto the net implementation used. I am legitimately saying that my idea, in the circumstances for which it was proposed in the first place, generally speaking, sucks ass. Just so there is no misunderstanding whatsoever on that particular point.

>> No.5292379
File: 441 KB, 1024x1024, F-ZERO X#FC026947#0#2_all.png [View same] [iqdb] [saucenao] [google]
5292379

F-Zero X portraits, model is 70% >>5281168 and 30% Manga109.

It's a pain to do large textures because they're split into chunks.

>> No.5292754 [DELETED] 

>>5276223
fucking kek

>> No.5294217 [DELETED] 

jew penis

>> No.5294404

>>5294217
>>5292754
Great way to improve the thread. This is 2019 and neo 4chan.

>> No.5294414

>>5294404
Fuck off it's the way of the future.

>> No.5294469

>>5294404
Calm your tits, I was just bumping the thread from page 10.

>> No.5295049

How possible would it be to do a pass for Metal Gear Solid? I know the textures are pretty mechanical and probably don't benefit from upscaling too much, though.

>> No.5295059

>>5295049
Do you plan on playing it through ePSXe?

>> No.5295115

>>5295059
Not sure which is the best emulator, but I think so. I have a few installed, but every single one of them has some kind of issue, like crackling sound or video problems. I have a relatively good machine with an i7-6700K though.

>> No.5295127

The thread's theme music:
https://www.youtube.com/watch?v=D4aUcOOaOFw

>> No.5295156
File: 509 KB, 512x512, m06_wal3_output_rlt.png [View same] [iqdb] [saucenao] [google]
5295156

>>5295049

None of the models I tried did a good job with MGS' environmental textures, so I'll probably need to make a custom one again. Urban100 (a database of buildings) might help teach it to draw straight lines.

>> No.5295183

>>5295156
Yeah they are too cubical for the algorithm to understand them.

>> No.5295207

>>5295183
Forgot to add: and because they are cubical, standard rescaling models could work better than AI.

>> No.5295213

>>5258589
Oh my god, this is worse than those 2d filters back in ZSNES.

>> No.5295217

>>5264503
You and me both brother.

>> No.5295360

I put an 100x59 image through it, but the 400x236 came out super artifacty, what gives

>> No.5295585

>>5295360
>I put an 100x59 image through it, but the 400x236 came out super artifacty, what gives
If you need to ask you don't need to know, that gives.

>> No.5295608

>>5295360
>100x59

>> No.5295615

>>5295608
>>5295585
okay, what should I use for "pixel art" to make it better
it's not really pixel art it's just early 32 bit graphics

>> No.5295654

>>5295615

The default models are pretty bad with reduced colors because they weren't trained to handle them. Try one of these two.

https://www.mediafire.com/file/oecs2g06yqiawmd/ReducedColorsAttempt.pth/file (for color banding)

https://www.mediafire.com/file/2kwho6m69twfqum/Riven.zip/file (for dithering, made specifically for Riven and I don't know how it'll look for textures)

I'm also training a model for MGS textures, which could also work for other old games.

>> No.5295659

>>5295654

Forgot to mention, neither of them look good by themselves, you usually have to run them a second time through a different model.

>> No.5295738

>>5295659
This man understands the process. To get best results in any image processing, you'll need to combine multiple approaches into one result. Good job.

>> No.5295752

>>5295738
Could the second pass be replaced by scaling?

>> No.5295758

>>5295752
No you dumbass.

>> No.5295768

>>5295752
You are too incompetent and don't understand the process. Please don't make your life more complicated by trying something you don't have any clue of. By scaling you are not creating any new information, you are losing it. Mongoloid.

>> No.5295775

>>5295758
>>5295768
If you want to insult people who want to contribute to your little pet project, maybe /g/ is more of your taste.

>> No.5295801

>>5295768

rude

>>5295752

I'm not sure what you mean. The 2nd pass is upscaling. The idea is upscale with one model -> downscale -> upscale with a different model. Though maybe you could skip step 3 if all you want is to reduce banding/dithering.
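
Concretely, the middle "downscale" step is just shrinking the first model's results back down before handing them to the second model. A minimal sketch, assuming 4x outputs sitting in one folder (folder names and the Lanczos filter are arbitrary choices):

import os
from PIL import Image

src = "results_pass1"  # 4x outputs from the first model
dst = "LR_pass2"       # inputs for the second model
os.makedirs(dst, exist_ok=True)
for name in os.listdir(src):
    if not name.lower().endswith((".png", ".jpg")):
        continue
    img = Image.open(os.path.join(src, name)).convert("RGB")
    img = img.resize((img.width // 4, img.height // 4), Image.LANCZOS)  # back to the original size
    img.save(os.path.join(dst, os.path.splitext(name)[0] + ".png"))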

>> No.5296368
File: 307 KB, 512x512, m06_wal3_output_rlt.jpg [View same] [iqdb] [saucenao] [google]
5296368

>>5295156

No good. Maybe at 2x scale, definitely not 4x. Haven't tried it on character models either. MGS1's textures are closer to pixel art than photos, so it might just not be possible.

That said, that Doom mod uses GameWorks and it turned out pretty decent, maybe wait for that.

https://www.mediafire.com/file/bsaoga1tk5mrjzo/MGS.zip/file

Next task will be to see if I can get training up and running on Windows. It's not ESRGAN itself that's the problem, but the requirement of batch converting images to tiles.

>> No.5296491

>>5275797
This shit gave me nightmares when I was a kid.

>> No.5298639 [DELETED] 
File: 690 KB, 1430x966, 2SWN8um.jpg [View same] [iqdb] [saucenao] [google]
5298639

https://pastebin.com/X29EeK6G

I cobbled together some instructions to get ESRGAN training working under Windows. I really recommend using Linux (it seems to have worse performance on Windows, a couple of features don't work and I haven't figured out why)

>> No.5298658
File: 690 KB, 1430x966, 2SWN8um.jpg [View same] [iqdb] [saucenao] [google]
5298658

https://pastebin.com/nkkLwy02

I cobbled together some instructions to get ESRGAN training working under Windows. I really recommend using Linux (it seems to have worse performance on Windows, a couple of features don't work and I haven't figured out why)

>> No.5299670

Can someone do this for Postal 1 (not the Redux version)? It had hand-painted backgrounds and loading screens.
It would be simple considering it's a really short game even in its expanded Steam/GOG release, but I don't know which upscaling program to use, how it works, or if it can handle paintings well.

>> No.5300689

>>5298658
>https://pastebin.com/nkkLwy02
You know, the issue with performance on Linux vs Windows is probably that you haven't set CUDA to use the GPU and it's falling back to CPU by default. That's the most probable cause; the other could just be a filesystem issue or something, but beyond that I can't imagine anything else, since it's Python and the code isn't compiled. If the code were C++, compiled with MinGW on Windows and native GCC on Linux, that could create a performance difference, but even then 5x is a huge margin.
Thank you for the guide. I will start doing some tests.
On Linux it's also easier to write scripts to automate the whole thing.

>> No.5300695
File: 1.13 MB, 1089x804, Diablo_upres.gif [View same] [iqdb] [saucenao] [google]
5300695

>>5300689
This makes my vegana very moist. If I do anything, I will examine the possibility of upscaling Diablo 1 and 2 backgrounds. The sprites are probably wise to leave alone, because I don't know what it does with animated sequences, whether it would be pretty or not. But thinking about Diablo at a nice big resolution makes me fucking want it now.

>> No.5300740

>>5300689

I fucked up a few instructions, mostly pertaining to a dependency on Windows and the validation/checkpoint settings. Nothing critical, but for completeness' sake: https://pastebin.com/th579mzP

>You know the issue with performance on Linux vs Windows is that you haven't set your CUDA to GPU and it's using CPU by default.

I considered that, but

A: GPU mode works fine when upscaling with ESRGAN. I can set it to GPU or CPU and there's a big difference in how long it takes, and

B: CPU mode is normally way slower than 5x compared to GPU mode. It's closer to 50x.
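
A quick way to rule the CPU-fallback theory in or out, plain PyTorch and nothing ESRGAN-specific:

import torch

print(torch.__version__)
print(torch.cuda.is_available())          # False would mean everything silently runs on the CPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # should print the actual GPU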

>>5299670

As long as the pixel data wasn't destroyed by compression it should work fine.

>> No.5300747

>>5300740
Which system do you have? If it's not CUDA, then it has something to do with how Python runs. Does Python have some environment variables set, etc.? I'm sure there is a solution to this, as the code is running on an interpreter and isn't compiled.

>> No.5300756 [DELETED] 

>>5300740
Do you have a tight butt? I am very curious how it would feel when I slide my hand over your buttocks in shower while we discuss about the performance differences and hot, steamy water moistens our bodies.

>> No.5300827

https://www.youtube.com/watch?v=fU-j0f73A4A
Onimusha was released and the guys are talking about "what it took to re-render the backgrounds and so on" and when I look at the backgrounds it looks like they used AI to upscale them. This was originally a PS1 game. Would like to have some kind of confirmation for this but to me it's a combination of AI and hand-fixing some issues.

>> No.5301076

>>5264785
It’s afraid

>> No.5301510

>>5300827

It was originally a ps2 game. It was also released on original Xbox and pc.

>> No.5302017

So I just ran SFTGAN, but the result image looks identical to the sample image. While running, I did receive the error in Bash:

D:\Users\NiggerSlayer\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\modules\upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))

Does this mean anything, or do I have to do something else to make SFTGAN output a big image?

>> No.5302120
File: 938 KB, 3200x1800, Onimusha-Warlords-Remastered-Screenshot-2.jpg [View same] [iqdb] [saucenao] [google]
5302120

>>5302017

SFTGAN's weird in that you need to manually upscale the image before running it. It'll downscale the image by 4x, then upscale it again. Use nearest neighbor upscaling.

>>5300827

Judging from the high res screenshots, it's just Waifu2x at best + a bit of film grain.

>> No.5302365

I want to touch you... please let me touch your buttocks.

>> No.5302550

>>5302120
RuntimeError: The size of tensor a (640) must match the size of tensor b (160) at non-singleton dimension 3

>> No.5302583

>>5302550
Just follow the https://pbs.twimg.com/media/DxDv29QXQAARLJ5.jpg and https://kingdomakrillic.tumblr.com/post/178254875891/i-figured-out-how-to-get-esrgan-and-sftgan
You are too clueless if you need to ask about every error.

>> No.5302602

>>5302550

It worked fine before, but after enlarging the images you got that error?

I can't reproduce it. Maybe downgrade your version of Python to 3.6.

>> No.5302697

>>5301510
https://www.youtube.com/watch?v=FvOFYZ5Cm2I
Background probably rendered at 640*480 since PS1 build

>> No.5302742

>>5259459
>>5259454
Do dinocrisis 2 please.

>> No.5302758

Fascinating thread. I believe theoretically the same A.I. could be used on games which displayed sprites as textures.

>> No.5302767

>>5284332
Could it benefit from feeding the training dataset with specific textures and materials instead of random images? For example, how to resize different types of metals, rock, grass, bark, etc.

It seems that it does a really good job adding detail to metal and stone but fails miserably with grass, for example.

>> No.5302769

>>5284391
The soul meme is the industry trying to stop geeks from modding, updating and improving their old games instead of buying new crap.

>> No.5302771

>>5289828
> t - Gigapixel.com

>> No.5302797

>>5290318
>results.zip
Looks somewhat like a colored pencil sketch:) I wonder why that might be the case.
Although the amount of detail it managed to "restore" purely "recursively" is honestly quite impressive at SOME places.

>> No.5302847

>>5302797
You know what, I think I realized something important about that game. Problems with dithering notwithstanding, there was another source of serious distortions affecting the upscaling.
I think Riven is a game with a highly internally inconsistent artistic style. What I mean is that on one hand you have all these photo-textures, photographed directly from real-life objects and then wrapped around complicated 3D models. On the other hand, everything related to the 3D modelling, to the forms of various objects both large and small, has this subtle but very consistent "crumpled", dented, curved, plasticine-like look to it. It's as if all the models were made to conform to the concept art they were modeled after far more meticulously than was really warranted, down to all the imperfections stemming from the concept artist drawing by hand and all the mannerisms of his personal drawing style. At its core, Riven is something like a drawing dressed up in photorealistic images. And what I think PARTLY happened is that those "concept-arty" large-scale aspects of the picture started to "bleed" onto the "photo-realistic" small-scale aspects, producing, frankly, pretty much a mess.

>> No.5303646

>>5271901
I might be mistaken, but if I remember correctly, the PC version of RE3 might've used JPEG backgrounds. I remember using some program to extract data from the game in an attempt to make my own PSFs, and one of the data archives contained JPGs of pretty much all the backgrounds from the game.

If you know what a PSF is and are wondering why in the hell I was looking through the PC version for that stuff: apparently all the ports of 3 had leftover data from the PS1 version, which included all the necessary music data despite each of them using their own streamed format.

>> No.5304145

>>5300695
There is already a HD version of Diablo 1.
No idea how they made it, but you could probably extract the assets to train the neural network from there.

Check out the Belzebub mod here: https://mod.diablo.noktis.pl/

https://www.youtube.com/watch?v=_StVRh8rP_g

>> No.5304760

>>5302767
Yeah, I think the best way to do it is to train the AI with the same style of source images as the game's textures. If it's cartoony, feed it anime stuff etc., but if it's Quake or the like, stuff the AI with footage of rocks, mechanical things etc.
And then on top of this it's probably the best idea to use two different AI passes.
This is what I would do if I were to play with it, but I'm unfortunately too busy for now.

>> No.5304764

>>5304145
Yeah you dingus, I play Belzebub, but that only gives you 60fps and HD resolution. It doesn't do any re-texturing with upscaled textures. Why are you so stupid?

>> No.5304769

>>5300695
Makes the characters look like shit

>> No.5304784

>>5304145
Belzebub isn't a HD mod
It's an HD render that is also a mod.
Meaning:
1. Doesn't play like Diablo I
2. No increase in asset resolution at all

>> No.5304814

>>5304769
Largely because they're morphing with the background. Lots of ringing and fuckups in general around the borders. The inner parts of the characters look fine to me.
When scaling assets before rendering that wouldn't really be an issue. They'd be scaled independently.

>> No.5305927

>>5302847
In the behind-the-scenes book, they admitted to following the principle of Wabi-Sabi, so the models and textures were crafted to be imperfect. This gave the game a great deal of realism.

>> No.5306803

Would this look good on Digimon World?

>> No.5306960

>>5264910
I would love to see some Oddworld. I might try when I get off work.

>> No.5307127

>>5306960
Yeah, Oddworld has those late-90s 3D-rendered style backgrounds. Should work well because the Myst upscaling was such a success.