
/sci/ - Science & Math



File: 1.06 MB, 1024x1024, image.jpg
No.10769848

Basic discussion of the deep learning field: recent news, questions, personal projects and so on. Hopefully this thread will settle in here so we can actually do something useful around it.

>> No.10769894

>>10769848
based
also another one up for the ethical discussion around it >>10769748

>> No.10769908

>>10769894
Yeah, I've seen that thread, but decided to make a new one for the guys from /g/, because we came to the conclusion that /sci/ is a better place for it than /g/. It's also better if the thread isn't about DeepNude only, so we can discuss some serious stuff here too.

>> No.10770033
File: 145 KB, 512x512, de1d716edc5880dea2db01f2.jpg

Here is my plan for improving DeepNude; there are several steps that need to be taken. First of all, the algorithms should be changed. As far as I know DeepNude uses pix2pix and some sort of attention technique. While attention is definitely good, pix2pix is a really old model that should be replaced with something like StyleGAN or any other new image-to-image translation model. Then we need to collect our own dataset of naked/clothed people, which should be pretty easy to do. The last step is the hardest one, because none of us can really provide the computation power needed: we would have to somehow parallelize training across many computers, and I'm not sure if that's even possible (except by training the same model one after another, but then we need a good system for tracking versions of it).

>> No.10770070

>>10769848
Both are improvements over pix2pix:

https://github.com/AlamiMejjati/Unsupervised-Attention-guided-Image-to-Image-Translation

https://github.com/NVIDIA/pix2pixHD

A super-resolution network used to improve game textures or upscale pixel-art games; it would need training on nudes for better upscaling:

https://github.com/xinntao/ESRGAN

indexxx.com is a massive database of erotic/porn models; the MetArt network owns massive high-quality photo sets that can be downloaded over torrents.

The original pix2pix used the Czech Casting database.

>> No.10770076

>>10770070
>Original deepnude use Czech Casting database.

>> No.10770104

>>10770070
>https://github.com/NVIDIA/pix2pixHD
>indexxx.com is a massive database of erotic/porn models; the MetArt network owns massive high-quality photo sets that can be downloaded over torrents.

This is great. The only issue then is the computation power needed to train the network. Any ideas about it?

>> No.10770424

>>10770104
Network design and validation are far more important than computing power. Most ANNs can be trained on GPUs, and you can get something decent with a few hours of training, max. Most of the work is in input processing and detection algorithms, then in designing the most barebones network that does the job, so as to avoid overfitting (e.g. putting nipples on knees). This is one of the biggest issues with DeepNude at present, aside from it only being able to handle well-composed, high-res, roughly eye-level shots of a standing model facing the camera.

>>10770033
If you have a decent GPU or, even better, 2 cards with a bridge, then computation isn't the hardest part.

The first step should be finding the decompiled source, then updating to a newer, better model using the same general algorithm. Improve the regular train/test set to include slightly irregular cases such as sitting poses and lower-resolution images. Then rebuild with a pre-filter to identify back and side shots, which feed into networks with similar architectures trained on sets of those images. This would be a heftier piece of software than one network to rule them all, but far more modular and easier to implement.

>> No.10770454 [DELETED] 

If you want fakes made, join the discord. https://discord.gg/GXNDhc

>> No.10770468

>>10770454
fuck off retard

>> No.10770474
File: 395 KB, 865x568, Ssd.png

>>10770424
>Network design and validation is far more important than computing power. Most ANNs can be trained on GPUs, and you can get something decent with a few hours of training max.
You're kind of right... if you have a couple of Titans lying around. I have some experience with GANs and they are incredibly slow to train, and pix2pix and pix2pixHD are both GANs. It's no coincidence that NVidia is the one who made so much progress in high-resolution image generation: they have the huge computing power that's necessary. I highly doubt we can handle even 512x512 resolution; 256x256 is much more reasonable (then we can scale it up and produce somewhat acceptable results).

The image from the BigGAN paper is just so you know the scale of computing power I'm talking about.

>> No.10770938

>>10770070
Where can I find it as a complete archive?

>> No.10771114

That's really impressive. https://www.youtube.com/watch?v=YEfuuvLw9F4

Even more impressive is that it works in the browser: http://ganpaint.io

>> No.10771148

>>10770474
This seems like apples to oranges; will you post a link to the paper (or the title) so I can read more details? At a glance, their network sounds more complex than what this would need to be.

Also, I do have a few 2080ti cards. They aren't bridged yet, but one 2080ti has more tensor cores than the biggest node used in that paper. Seriously, one person can train this network in a reasonable amount of time.

>> No.10771168
File: 119 KB, 256x256, 4.png

>>10771114
I almost did pepe

>> No.10771294

>>10770938
https://czechcasting.com/tour/models/page-1/
Rip this

>> No.10771515

>>10770033
>none of us can really provide computation power needed
Could always rent out some Amazon power.

>> No.10771885

>>10771515
Someone could, but would they?

>>10771148
>will you post a link to the paper (or the title) so I can read more details?
https://arxiv.org/abs/1809.11096

It's not a fair comparison, true. BigGAN uses huge batches that we don't need, so our task is much simpler computationally.

>Also, I do have a few 2080ti cards.
Well, great then. In that case I agree: a couple of 2080tis should be enough for training such a model.

>>10771294
We can grab it with a script, but hasn't someone already done it? And 2000 pairs of images is not that much; we need more data. The good news is that one-to-one mapping is not necessary.

>> No.10772022

>>10771885
>Good news that one-to-one mapping is not necessary.
This is the kind of work where the MetArt network helps: it has 4 MP to 50 MP photoshoots with 30 to 200 photos, mostly focused on a single model, and models can have anywhere from a few sets to hundreds, making it trivial to get over 100,000 photos and over 1000 photos per model.

But does the data need some cleaning, or selection of specific zones?

www vk com/albums-182520835
www vk com/albums-71855951

>> No.10772449

>>10772022
>But data need some cleaner or some specific select zones?
Yes, we should probably use only images where the whole body is present (except maybe the legs); czechcasting is really perfect for that. As for cleaning the data, it's not that necessary: a lot of datasets contain highly distorted and low-quality images and it's generally not that harmful.

>> No.10772478

>>10771885
Thanks, interesting read, but unfortunately a lot of it is over my head. I'll have to follow up later on some of the parts that eluded me but shouldn't have.

>>10772449
In this vein, it might be a good idea to introduce some artificial noise into a subset of the training images. One of the extant issues is how badly it handles low resolution images, which is what would most commonly be available to users.

>> No.10772503
File: 387 KB, 445x1180, 6259A4E0-CFDC-406A-9F5B-A4AE2CD8FFE3.jpg

>>10772449
Black bars because 4channel is non-NSFW.
https://www.indexxx.com/set/917805/erotic-beauty-anamika/
Original photo; the photos in the photoset lack text and watermarks.

Something like this, or more changes?
Label data can be managed as model name / photoset / filename, plus image crop coordinates.
That's a few bits per image and a few MB for a massive photoset; the photosets themselves can be downloaded from the torrent network, so we only need to share the label info.

I can label a thousand photos per day, or maybe someone wants to use pose estimation or image segmentation instead.

>> No.10772863
File: 17 KB, 256x512, 02.jpg

>>10772503
We don't need labeling at all, nothing that precise. All we need is images of two types: "naked" and "dressed".

I made a grabber script and ran it on czechcasting; it's about 10% done and has downloaded roughly 3000 images. Overall we'll have 30000, and half of them are not really useful, so ~15000 pictures for the beginning. Not as bad as I thought it would be.

I think the best resolution would be 256x512. It's not too low, so we can enhance it later, and with the 1:2 aspect ratio we can process full-body images. By default pix2pixHD works with 1024x512, so we'd have to make slight changes to the model, but that should be easy to do.

>> No.10773073

>>10769848
To get a high-res result the DeepNude software needs 3 parts:
1. Low-res undressing. (Train a pix2pix for the full body.)
2. Zoom in on the pussy and render it in high-res. (Train a pix2pix only for the pussy.)
3a. Zoom in on the right boob and render it in high-res. (Train a pix2pix only for the right boob.)
3b. Flip the result horizontally, zoom in on the right boob again, and render it in high-res.

>> No.10773109

>>10773073
It wouldn't work. Continuity is highly important, and when you try to do something like this in parts it breaks a lot.

>> No.10773236

>>10772863
in4l8tr

>> No.10773534

>>10770070
CycleGAN would be another option

>> No.10773539

>>10773534
There are actually tons of alternatives:
https://github.com/lzhbrian/image-to-image-papers
But they are mostly pretty shitty, even the newer ones.

>> No.10773541

>>10772863
Is the background this sterile in all of the images? I can see this not translating well to real-world (e.g. Snapchat) pictures.

>> No.10773564

>>10773541
Yes, you're right, that will be a problem. That's why we can't use only this dataset. But it's good enough for making a proof-of-concept version.

>> No.10773634
File: 112 KB, 533x800, 3227.jpg

Here is the first version of the dataset. It contains 32246 images from czechcasting. About 3000 of them are already sorted and the remaining ones still need sorting. Anybody here with too much free time?

https://mega.nz/#!IKwxXAgI!YpI648jsxvPdyfKwlAyypVEm_jnUgWXde8MJmjNgSBI

>> No.10773641

>>10773634
just do an unsupervised classifier based on how much skin there is in the picture, senpai. By instinct I'd say k-NN, but there are probably better solutions.

>> No.10773649

>>10773641
It's better done manually. I really don't want images with thongs showing up in the "naked" class, and I'm afraid any NN would misclassify them pretty easily.

>> No.10773663

How about further feature extraction? Through pose detection algorithms, for example.

>> No.10773712

>>10773649
But it might help reduce the manual labor. However, I'd recommend using something like this (https://github.com/yahoo/open_nsfw) for filtering.

>> No.10773835

>>10773564
Mining some of the clothes on/off threads on /b/ might also be a good strategy

>> No.10773877

>>10769848
For people who are interested in things other than DeepNude, I've started work on a network that trains on 4chan posts and corresponding metadata and outputs a string and a time of day in which to post it such that it gets as many (you)s as possible. Right now I'm curating the dataset generator and working on network structure. I'll post again when I've got coherent results.

>> No.10773881

>>10773877
that seems interesting, I'm toying around with generating entire threads based on an OP

>> No.10773945

>>10773877
Fine. Take your (you), you filthy robot.

>> No.10773970

>>10771148
>They aren't bridged yet
you don't need to bridge them; data-parallel training already speeds things up nearly n-fold, where n is the number of GPUs.

>> No.10773990
File: 95 KB, 454x576, final.jpg

I'm trying to compile a list of websites where you can create stuff with neural networks. Here are all the sites I've got so far.

>dreamscopeapp.com - recreates an image in the style of another image (made pic related using this)
>deepart.io - same as above but it takes longer
>thispersondoesnotexist.com - generates portraits of people
>thiswaifudoesnotexist.net - generates portraits of anime girls
>thiscatdoesnotexist.com - generates (mostly terrible) pictures of cats
>ganbreeder.app - generates images of certain "topics"
>waifu2x.booru.pics - doubles the size of images and helps with noise reduction
>talktotransformer.com - generates several paragraphs of text from a prompt

>> No.10774041

>>10773990
>https://make.girls.moe/ - another anime character generator.

>> No.10774057

>>10773877
>autogenerated newsgroup bait
https://en.wikipedia.org/wiki/Mark_V._Shaney

>> No.10774060

>>10774057
For example: https://web.archive.org/web/19961206204323/http://softway.com.au/people/mvs/

>> No.10774116

>>10773990
Fuck this is funny
I take off her shirt and I put myself on her chest. Her hands on mine. Her arms around me. My cock on her pussy lips and my mouth between her legs. And then I'm pulling her close. And the only thing that stops me from fucking her is my pants. It's so small, she has to hold on to what I'm wearing and I can't help but watch. I hold on to her tight as she closes her eyes, and I'm going to cum. I'm going to pound my cock down her throat so deep that her breath hurts. And then she's just so much better for it. She's so warm and wet with anticipation. And I've got to watch because her face is starting to flush and she's so turned on. I'm getting on top of her. I'm not going to stop until I'm going to give my cum to her. I'm going to ride my little cock over and over till her tits start to bounce up. Her pussy is wet with it, but I know it might not last. My cum is going to run down her chest. And she feels my balls against her cunt and my cock on her tits. She's so turned on that I can't hold my breath anymore. I'm going to cum. She's fucking over her top and I'm going to cum for no good reason whatsoever. And then it

>> No.10774147

Can somebody please train a NN to recognize /pol/tards and automatically report their constant off topic posting here?

>> No.10774182

>>10774116

When GPT-2 small first came out I put in prompts of lewd scenes from a few text porn games and got similar results to this, which uses GPT-2 medium. However, gwern has a step-by-step process on his site where he trains the released model on Project Gutenberg's classic poetry collection for a few days and then prompts it with other lines of poetry, and it does a great job improvising. I'd really like to see what happens if someone trains it on a dump of every scene from a bunch of text games or a few literotica tags or something.

If I can be bothered I'll do this myself and make a /d/ text game which uses the (carefully written) text of the finishing move as a prompt for an H-scene, thus creating a lewd game with infinite procedurally generated win/loss scenes. Then I'd be one of those freaks who codes porn games though rather than just one of the freaks who plays them

>> No.10774228

So I'm pretty much a beginner at coding deep learning stuff (even though I do know the theory). For a little project I need to use PyTorch to set up an LSTM with an attention mechanism, plus dropout to reduce overfitting. Can somebody help me out by suggesting some sources or practical examples of similar stuff?
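As a starting point, here is one minimal sketch of what that could look like in PyTorch: an LSTM encoder, a simple additive attention pooling over the timesteps, and dropout before the output layer. The sizes and the exact attention form are illustrative choices, not from any particular source.

```python
import torch
import torch.nn as nn

class AttnLSTM(nn.Module):
    """LSTM encoder + simple attention pooling + dropout classifier.
    All hyperparameters here are placeholder values."""
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)  # scores each timestep
        self.dropout = nn.Dropout(0.5)        # regularization against overfitting
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))          # (batch, seq, hidden)
        weights = torch.softmax(self.attn(h), dim=1)  # (batch, seq, 1)
        context = (weights * h).sum(dim=1)            # attention-weighted sum over time
        return self.fc(self.dropout(context))         # (batch, num_classes)

tokens = torch.randint(0, 1000, (4, 20))  # batch of 4 token sequences, length 20
model = AttnLSTM()
print(model(tokens).shape)  # torch.Size([4, 2])
```

The official PyTorch sequence-model tutorials cover the LSTM part in more depth; attention variants (dot-product, additive) mostly differ only in how the per-timestep scores are computed.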

>> No.10774235

>>10774182
I finetuned GPT-2 small on smut before. It had a lot of trouble keeping track of which body parts belong to whom and where they should go.

>> No.10774280

>>10774235

That's a shame. Still, this is the public small implementation of basically the first very notably decent text generator. Something much better will doubtless be out by the time Paradox is translated.

>> No.10774291

>>10774280
Looking at >>10774116, I should probably try again with the medium sized one. Looks better in this regard.

>> No.10774377

Anything new about genetics and its link to depression?

I started taking SSRIs again; what a wonderful thing they are.

>> No.10774395

>>10774147
A regex will do: \(\(\(.*\)\)\)
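A quick sketch of how that regex behaves in Python (note the escaped parentheses; `.*` is greedy, which is fine for a simple match test):

```python
import re

# Matches the "(((echo)))" pattern: three literal open-parens,
# any text, then three literal close-parens.
pattern = re.compile(r"\(\(\(.*\)\)\)")

print(bool(pattern.search("reminder that (((they))) did this")))  # True
print(bool(pattern.search("a normal (parenthetical) post")))      # False
```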

>> No.10774416

CS-let here getting into deep learning. I've taken some courses in ML and CV and have been helping develop research models; I also have some side projects in NLP. How important is a master's in industry? I don't want to go to school and was curious whether companies would still give me a shot without one.

>> No.10774482

>>10773970
In that case, I have about 30 decent gpus lying around.

>> No.10774496

I just realized that shitcoin GPU farms not only generate a return on investment from mining coins, they can also be repurposed as your personal mini-supercomputer to handle training computations for the sole expense of the electricity while running, since the infrastructure is supposedly free and has paid for itself, having been bought for the purpose of making money through mining. It is therefore in the best interest of everyone interested in NNs to start assembling a bitcoin farm in their garage and use it whenever they need to train and test a new project.

>> No.10774725

>>10770424
>If you have a decent GPU or, even better, 2 cards with a bridge, then computation isn't the hardest part.
This is how I know all of you are larping. Have you ever trained something like StyleGAN, like the post you replied to is suggesting? lmao. Go look up training times for GANs, you fucking moron.

>> No.10774733

>>10774416
Lmao, you just described every single computer science major I know right now. It's saturated as fuck. If you aren't a god then good luck.

>> No.10774736

>>10774496
wow great idea, I forgot that the electricity used in crypto mining was free

>> No.10774741

>>10774736
>for the sole expense of the electrical power while running it

>> No.10774755

>>10774725
The only paper posted ITT used a network far more complicated than what is being proposed here, and it was trained in 1-2 days on lower-grade hardware than what I personally have available. So, [citation needed].

>> No.10774761

>>10774741
> which costs more than the cryptocurrency is worth

>> No.10774765

>>10774761
Ah, you mean mining being unprofitable right now. Yes, but that's just right now. If BTC peaks at 50k+, which it definitely will seeing how it bounced back up, then it will become quite profitable again.

>> No.10774768

>>10774755
> https://github.com/NVlabs/stylegan

Here is the StyleGAN source code. Notice the part where it says training takes over 40 days on a single GPU. You can't do research and you've never trained a large network, jesus fuck.

>> No.10774787

>>10774768
> Our training time is approximately one week on an NVIDIA DGX-1 with 8 Tesla V100 GPUs.

>> No.10774788

>>10774768
They trained it progressively up to 1024x1024, whereas we only need 256x512 and don't need progressive training as much. So it's really a viable job even with a single 2080. It could take several days tho.

>> No.10774809

>>10774787
Yes. 8 Tesla V100 GPUs. The dude I was replying to had an entire point about how a single GPU is fine and training time doesn't matter. It takes a week on nicer GPUs than anyone here has.

>>10774788
> we only need 256x512
lmao.
> So its really a viable job with even single 2080. It could take several days tho.
holy shit you didn't even read that repo XD

>> No.10774810

>>10773634
do you have an account? share credentials?

>> No.10774823

>>10773877
I was randomly thinking about something like a transformer trained on 4chan posts. Found this thread through Fireden. Looking forward to your post.

>> No.10774833

>>10774809
>holy shit you didn't even read that repo XD
We are not going to use StyleGAN you dummy dum.

>> No.10774841

>>10774810
You don't need an account to grab these images.

>> No.10774847

>>10774768
You're right, I'm not knowledgeable about training GANs and my comment about training them doesn't apply to large, complicated networks.

I still don't see how that would be as huge an issue with the hardware I have available, given that no one has proposed training a 1024x1024 StyleGAN network on 70,000 images. Maybe you should read some of the previous posts?

>> No.10774882
File: 79 KB, 598x480, 4cf4060b-da6a-41a6-8d72-c45608a4e7a0.png

>>10774116
>I'm going to cum for no good reason whatsoever.
Basically description of my life.

>> No.10774884

>>10774847
Yes, and if you don't do something like that, you know what comes out of the neural network when you run it? Completely shit images of shitty-looking almost-nude people lmao. You aren't just ignorant about training GANs, you're ignorant about machine learning.

>>10774833
You will never get good results using non state of the art retard.

>> No.10774896

>>10774884
There is room between "state of the art" and "DeepNude." The goal is in this range. If you think you can improve, consider offering some constructive criticism. Otherwise, I think "not the best thing possible with existing technology" can do a fine job.

>> No.10774903

>>10774884
Are you aware that StyleGAN is not an image-to-image model?

>> No.10774923

>>10774903
>image-to-image model
What is your point?

>>10774896
You made a comment, I corrected you. You are now butthurt.

>> No.10774926

>>10774923
>What is your point?
Grats, you completely discredited yourself.

>> No.10774950
File: 955 KB, 1490x1114, Capture.png

>>10769848
y'all making terrible looking nudes while im over here with 20,000+ cartoon frogs

>> No.10774956

>>10774950
YES. That's where we need StyleGAN.

>> No.10774973

God, this is so fucking amazing. I can cum just looking at this. 5 years ago something like this was literally impossible.
https://www.youtube.com/watch?v=XDWua850n54

>> No.10775078

>>10774923
Hush now, the adults are talking.

>> No.10775563
File: 66 KB, 659x609, __.png

>>10774416
>>10774733
>tfw i got an apprenticeship as a Data Scientist in a startup with 1 year prior training
Currently fucking around with autoencoders for signal processing

>> No.10775584

>>10774496
>tfw given around 50k a few years back to build a gpu cluster for the lab
>mostly just mined crypto with it

>> No.10775716

>>10775563
autoencoders are magic

>> No.10775722
File: 184 KB, 1434x463, ss.png

Does anyone know what a "Selective Skip Connection" is and how it works? How does this box connect the multiple skip-connection inputs to a single output? The paper is really stingy on details.
https://arxiv.org/pdf/1711.10644.pdf

>> No.10775793

>>10774950
Yea what GPU you have tho bro

U need a gpu to train that on bro?

I got a GPU bro

>> No.10775794

>>10775584
Sell it all soon if you want to get anything for it lol

>> No.10775889

>>10774950
Would you like to upload this? Do you have more memes in the same format?

>> No.10775900

>>10774733
It's saturated with brainlets; there is a desperate need for people who can do things beyond poorly adapting existing models.

>> No.10775921

>>10775794
I’ve been constantly selling it as it comes in.

>> No.10776772

So, has anybody actually tried to train the network?

>> No.10776802

>>10769848
To get good results you need:
>lots of data
>fast GPUs

Unless people want to donate their time to manually classify data, or donate money to fund training, it's not happening.

>> No.10777016

>>10775900
What do you need to know to not be a brainlet?

>> No.10777869

>>10776772
Idk I have GPU compute if someone can give me data

>> No.10777881

>>10775584
how tf haven't you been caught yet?

>> No.10777963

>>10770033
>>10770104
>>10770424

You realize you can get free Google, Azure, or AWS GPU access with enough credits to train any cutting-edge model today without paying a dime, right?


As for >>10769848 and my personal projects: I currently have a transsexual detector in the works that gets ~85% accuracy, but I'm trying to get it to at least 95% before release via the app store. I've done some work with LHC datasets as well, but I really need to purchase a GPU cluster setup and some more magnetic tape drives to do what I want with the data; it's hard to store and process petabytes without them.

>> No.10777970
File: 72 KB, 256x256, a.png

>>10774950
I already have a pepe generator that I made as an adversarial network; pic related is the best output I got with a shit dataset of 2k images and ~100k epochs. If you're willing to upload your images, I'd be willing to release the trained model for my fellow anons after I get it all trained up.

>> No.10777983
File: 5 KB, 296x170, download (1).png

>>10775722
A skip connection is just taking the output of one layer and adding it to the output of another, similar to pic related; you can find GitHub repos with implementations just by searching 'residual block implementation'.
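As a minimal sketch of that idea in PyTorch (layer sizes and names are illustrative, not taken from the paper being asked about):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two conv layers whose output is added back to the block's
    input; the addition is the skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)  # skip connection: add input to output

x = torch.randn(1, 16, 32, 32)
block = ResidualBlock(16)
print(block(x).shape)  # torch.Size([1, 16, 32, 32])
```

How the "selective" variant merges multiple skip inputs into one output is exactly the detail the paper leaves out; summation and channel-wise concatenation are the two common choices for combining them.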

>> No.10778005

>>10777970
Have you used any data augmentation techniques?

>> No.10778107

>>10777970
Can you share yours? I found this one, but it only has 1200 images: https://archive.org/details/PepeImgurAlbum

>> No.10778224

>>10778107
Mine is just from the first pages of Google searches over two datasets, but I'm pretty sure it's mostly ~800 collisions with that 1200-image set, because I've noticed I have a lot of duplicates and even some that go beyond triplicate.

>>10778005
I only used images flipped over y/x, so roughly 8k total.

>> No.10778251

>>10778224
>I only used flipped images over y/x so roughly 8k total.
Well, that's the issue. You need to add small random rotations, 0.9x-1.1x zoom, and x-y translation; it helps dramatically. Axis flipping won't help and may even hurt, because the image structure should stay roughly consistent.

>Mine is just from the first page of google searching two data sets but I'm pretty sure its just 800 collisions with that 1200 data set because I've noticed I have alot of duplicates and even some that go beyond triplicate
It's fine. I know how to get rid of duplicates pretty easily.

>> No.10778253

>>10774496
how i mine gpu bitcoin on juul pod? asking for a friend

>> No.10778259

>>10778224
>I only used flipped images over y/x so roughly 8k total.
Post you rare pepes

>> No.10778286

How do you rate this deepfake? https://youtu.be/2H8ZAIxuyaw

How long did it take to make the video?

>> No.10778321

>>10778286
Damn, this one is really good. I thought it was a normal video for the first few minutes.

>> No.10778466

I have access to a national supercomputer... it's tempting...

>> No.10778596

>>10778466
Is a 1k laptop considered a supercomputer in India?

>> No.10779014

>>10774973
>5 years ago this something like this was literally impossible.

Acshually(TM), that shit could have easily been done 5 years ago: the necessary algorithms already existed, and so did the necessary hardware.

>> No.10779378

>>10770104
>>10770424
I have a pretty decent setup (2 Nvidia 770s in SLI, CPU overclocked to 4.3 GHz and stable, etc.), so I think I could get this done. It was mainly for gaming, but I've been getting more and more into ML/DL since it's actually productive. Anyway, I'm still learning how to go about this stuff in the most efficient manner; would y'all recommend Tensorflow for this type of deal? IIRC it automatically partitions space on your GPU(s), but like I said, I'm a noob and still figuring shit out. I definitely understand the mathematics, but I want to make the code efficient, as many math people (like myself) fuck it up.

>> No.10779395

>>10779378
Have you tried just running the training for pix2pix as a start?
Other than that, you pretty much have the choice between PyTorch and Tensorflow.

>> No.10779678

>>10779378
>Focusing on efficiency when you have nothing written in the first place
You're never going to make it.

>> No.10779681

>>10778286
Taken down by NBC universal even though it's original content, gotta love the jewry.

>> No.10779684

https://www.youtube.com/watch?v=89A4jGvaaKk
Nice video about GPT-2. If the results are like that in general, and not cherry-picked from thousands of attempts, I'm pretty sure there are signs of consciousness in it.

>> No.10779686 [DELETED] 
File: 332 KB, 1111x1030, TIMESAND___1p0f0fpgjg8777fy3ygp135bi8ccwe5tiopdztyhsiwh86vthc8ew222.png

>> No.10779695
File: 1.03 MB, 1024x1024, image.jpg

>>10779686
I don't know how to tell you this, but you have to know: this person is not real...

>> No.10779732

>>10777983
How do you connect multiple skip connections to a single output? Just average them?

>> No.10779805

>>10779684
The examples he talks about are the cherry-picked stuff they used on their PR website.
https://raw.githubusercontent.com/openai/gpt-2/16095a61394ef6f5ec29e6ce9bf6757479084cbe/gpt2-samples.txt
These aren't cherry-picked.

>> No.10780068

>>10777970
Try it with stylegan

>> No.10780193

>>10779686
What's your opinion on AI, Tooker?

>> No.10780224
File: 687 KB, 640x640, 1000.png

>>10778259
Most of them are total trash; here's some from early training. I deleted most of my stuff due to space limitations.
>>10780068
If you get me more data I'll gladly make an attempt or two.

>> No.10780229
File: 26 KB, 128x128, 2300_229.png

>>10780224
That's after 1k epochs with my shit data. Here's one I pulled after 100k or more, where it has learned the basic shape of the frog and the general idea of where features go, although as you can see the features themselves are incorrect.

>> No.10780233
File: 27 KB, 128x128, 2900_538.png

>>10780229
>>10780224
>>10778259
And here's another that I can't quite explain: it outputs what seems to be the beginnings of a realistic frog (with shit quality, mind you) even without a single image of a realistic frog in the dataset.

>> No.10780237

>>10777963
>a transexual detector in the works that is currently able to get ~85% accuracy
On a dataset with what distribution from what source? I could make one with >99% accuracy on the general population.

>> No.10780246

>>10780237
A personal dataset that I've been building up in my free time; it's only ~600 high-quality photos of trans people ripped from Twitter/IG. I don't have much free time or a good internet connection (128 Kbps), so it's hard to get a lot of data on my end.

>> No.10780258

>>10780246
The distribution is incredibly important. How many other pictures are there?

>> No.10780296

>>10780258
1k. Care to explain, though? I've only been learning about this stuff for a short while, and I'm an undergraduate doing it outside of class.

Using 100/400 respectively for validation.

>> No.10780332

>>10780296
If you're just using accuracy, an imbalanced dataset may give you misleading results, because always guessing the more common class will look like it's doing well. Given what you said, 85% is still better than guessing cis for every one, so it does seem to have learned something. You'll want to look into the F-measures, though, and figure out whether false positives or false negatives are the bigger issue for users.

Are the other pictures from similar sources too? What are the other demographics of each group?

>> No.10780494
File: 1.44 MB, 500x281, 94AFA83E-615F-4A0F-B372-8473B851CF7F.gif [View same] [iqdb] [saucenao] [google]
10780494

Need some advice on autoencoders

The dataset I’m going to be working with has about 900 images of size 9000 x 16000 and it’s not feasible to obtain more data. I am only interested in broad trends and not details.

1. Does it matter if I leave my images as rectangles?
2. Should I downscale the images in preprocessing, or add more convolution layers to downsample it.
3. What kinds of sizes should I try for my hidden encoding?
4. What type of autoencoder should i use? Downsampling, VAE, ???.

Thanks

>> No.10780501

>>10780224
https://www.gwern.net/Faces
Read this

>> No.10780871

Can I use transfer learning to measure how homogenous sets of pictures are?

1. I take an existing network like VGG and use all layers prior to the final classification layer to create a feature extractor that gives me an n-dimensional representation of any picture I input.
2. I take a set of k pictures and feed it to the headless network above
3. I take the mean of the pairwise distance of all vectors obtained through step 2
4. I use this number as an index of how homogenous the picture set is. For instance a set of frogs like in >>10774950 might give me a low index of 0.1 while a dataset of random pictures would have a high index of 0.9

Does this sound sensible?
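Sounds sensible. Assuming you already have the (k, n) feature matrix from the headless network, steps 3-4 are a few lines of numpy (cosine distance chosen here since you didn't specify a metric; the toy data is made up):

```python
import numpy as np

def homogeneity_index(features):
    """Mean pairwise cosine distance over a (k, n) matrix whose rows are
    feature vectors (e.g. activations pulled from the headless VGG)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    iu = np.triu_indices(len(f), 1)        # each unordered pair once
    return float(np.mean(1.0 - (f @ f.T)[iu]))

rng = np.random.default_rng(0)
tight = rng.normal(size=(1, 64)) + 0.05 * rng.normal(size=(20, 64))  # near-duplicates
mixed = rng.normal(size=(20, 64))                                    # unrelated vectors
# homogeneity_index(tight) comes out near 0, homogeneity_index(mixed) near 1
```

The absolute numbers will depend on which layer you tap and the metric, so calibrate on a known-homogeneous set first.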

>> No.10780999

>>10780494
>1. Does it matter if I leave my images as rectangles?
Not really, convolutions don't care about the aspect ratio
>2. Should I downscale the images in preprocessing, or add more convolution layers to downsample it.
Downscale them, unless you have a literal supercomputer at your disposal
>3. What kinds of sizes should I try for my hidden encoding?
Start with a few FC layers and work your way up towards a better performing model
>4. What type of autoencoder should i use? Downsampling, VAE, ???.
Depends what you want to do with your dataset. Do you want to just reduce images to extract latent feature vectors? Apply denoising?

Also, you should consider data augmentation, 900 ain't a lot.
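A minimal sketch of what cheap augmentation can look like for image arrays (flips plus mild noise; whether any given transform is actually label-preserving depends entirely on what your images mean):

```python
import numpy as np

def augment(img, rng):
    """One random variant of an (H, W, C) float array in [0, 1].
    Horizontal flip + mild pixel noise, both hopefully label-preserving."""
    out = img[:, ::-1].copy() if rng.random() < 0.5 else img.copy()
    out += rng.normal(0.0, 0.02, out.shape)
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
base = np.full((16, 16, 3), 0.5)
variants = [augment(base, rng) for _ in range(10)]  # 1 image -> 10 samples
```

At 900 images even a 10x expansion like this helps a lot, though it can't substitute for genuinely new data.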

>> No.10781191

What does it mean if my discriminator loss approaches 0, and my generator loss starts to approach 0 but then after several tens of thousands of epochs climbs to 16, in a DCGAN? I tried the AI stackexchange but apparently no one is willing to comment after nearly a full month.

>> No.10781245

>>10781191
Overfitting. Your discriminator is way better than your generator, so it wins and meaningful training stops. You have to use tricks to keep the losses roughly balanced. There's an enormous amount of work dedicated to this particular problem, so I won't even try to describe the solutions here. Simple answer: don't try to build the model yourself, just use an already made one.
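One of the simplest such tricks is one-sided label smoothing: train D against 0.9 instead of 1.0 for real images so it can't become arbitrarily confident. A toy numpy sketch (the D predictions are made up):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy on sigmoid outputs."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

# D's (made up) confidence on a batch of real images
d_real = np.array([0.95, 0.90, 0.99])

loss_hard = bce(d_real, np.ones(3))         # targets of 1.0
loss_smooth = bce(d_real, np.full(3, 0.9))  # smoothed targets penalize
                                            # overconfident D outputs
```

With smoothed targets the overconfident predictions above actually incur a larger loss, which is exactly the brake on D you want.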

>> No.10781255

>>10781245
>just use already made ones.
It is, kek.

>> No.10781262
File: 7 KB, 260x194, images.jpg [View same] [iqdb] [saucenao] [google]
10781262

How hard is ml/ai? I am practicing JS at the moment so I will not be making something so complex.

What books can you recommend to a complete beginner? Preferably on the different techniques used.

>> No.10781267

>>10781262
1. any statistics book
2. elements of statistical learning
3. interpretable machine learning

>> No.10781272

>>10781267
How do I learn python as a math person? Think math major with a solid background in proofs (abstract algebra / real analysis and a little of grad level math like alg top)

>> No.10781282

>>10781267
IML is a new bubble within a bubble that is ML

>> No.10781286

>>10781282
if you're white have sex with a white female for the purposes of reproduction
otherwise dilate

>> No.10781358

>>10781255
Then you're probably using a small dataset. The fact that it's ready-made doesn't guarantee that everything will be fine. Google mode collapse in GANs.

>> No.10781387

>>10781272
like any other programming language

>> No.10781435

>>10780999
>Also, you should consider data augmentation, 900 ain't a lot.
The image set shows the daily progression of chemicals along a shoreline. I don’t know how I can augment this because everything about the image is important.
The goal is to eventually build a generative sequence model for forecasting, but I figure I should start with reproducing the inputs first.

>> No.10781624

Anyone currently employed in the ML field? I will soon get a masters degree in applied maths with a focus on statistical learning/optimization. Right now, I have no idea what machine learning jobs entail besides the short descriptions I found on linkedin.

So... if anyone concerned would be so kind as to tell me: What is your typical work week like? What kind of experience did you have before getting the job? In which country do you work? How much are you paid? Anything else you wanna talk about?

>> No.10782159

>>10779395
I just started reading through it, this is the first I've heard of pix2pix/worked with GANs but this looks like some interesting stuff. I set up Tensorflow-GPU this morning so I'm gonna fuck around with it later.

>>10779678
Eh, if I can do it correctly the first time why not? Of course there will be some stuff I'll have to improve on later though, but that's the nature of the beast.

>> No.10782174

Has any pussy type (innie, outie) prediction been done based on portrait shots? Asking for a friend.

>> No.10782449

Deepnude sourcecode was released by the developer(s)
github DOT com
/deepinstruction/
deepnude_official

>> No.10782579

>>10782449
The source code isn't really that helpful for anyone. We all know the general architecture, it's the model itself that matters.

>> No.10782894
File: 24 KB, 278x277, D7wqswkW0AAAD4E.jpg [View same] [iqdb] [saucenao] [google]
10782894

ML is a meme, study statistics instead and choose some ML courses so you can ride the hype and go over to statistics when the bubble pops

>> No.10783300

>>10782894
ML is applied math. "Artificial intelligence" is the meme. ML is just basic function approximation.

>> No.10784748

>>10773877
>this post is your first successful result

>> No.10784768

>>10774228
Use keras.
https://github.com/keras-team/keras/blob/master/examples/conv_lstm.py
Plenty more examples as well.

>> No.10784794
File: 83 KB, 667x661, 1561767943102.jpg [View same] [iqdb] [saucenao] [google]
10784794

>>10774950
Pls upload.

>> No.10785240

>>10774228
https://www.youtube.com/watch?v=H3g26EVADgY&feature=youtu.be goes over LSTMs in Pytorch at some point.
FastAI is a good resource in general if you are just starting out. You need to supplement the tutorials with some study on your own.

>> No.10786547

Does anyone do generative models in 3d yet? Like take a few images of something from different angles, feed them to the model, and out comes a 3d rendering?

>> No.10786720

>>10786547
I've seen some papers about it, but it's really expensive to train and the results are mostly shit.

>> No.10786864

>>10786547
It takes a ridiculous number of images, but you should look into photogrammetry. COLMAP is the best software imo; note that it does require a decent GPU. It essentially reconstructs a 3D model from a large number of pictures, no NN involved, just a lot of geometry. Another moderate issue is that the reconstruction is usually in the form of a point cloud (i.e., a 3D model made of many disjoint points) and does not have a surface/faces. There are methods to try and find the faces, e.g., Poisson reconstruction

>> No.10787068

Are there any career niches in deep learning outside of the data science track?
I’ve always been more of an embedded and systems guy but I’ve been helping out a researcher with implementing some deep learning models and find the theory part interesting.

>> No.10787148

I tried training pix2pix on 2000 dressed/undressed pairs at a resolution of 256x256. Left is after five minutes, right is after 6 to 7 hours.
https://files.catbox.moe/oiipzs.png

>> No.10787165

>>10787148
The results from the middle of training look similar to the last one so I think it's underfitting. I don't know much about how this works. Can I make the model more complex to fit better?

>> No.10787179 [DELETED] 

>>10784768
>>10774228
Thanks for the help guys!

>> No.10787184

>>10784768
>>10785240
Thanks for the help guys!

>> No.10787215

>>10787148
Looks terrifying but potential is there.

>> No.10787222

>>10787148
Here's one trick that I've been thinking about but haven't tried yet. Before training the whole GAN, train only the generator to reproduce the image itself: you put in one image and it produces the same image. This should make it much easier to restore the parts of the image that shouldn't change.
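A framework-free toy version of the idea, with a single linear layer standing in for the generator and a pure reconstruction loss before any adversarial training:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 16))             # stand-in for flattened image data
W = rng.normal(scale=0.1, size=(16, 16))   # one-layer linear "generator"

# Reconstruction pretraining: minimize ||XW - X||^2, i.e. teach the
# generator to output its input, before any GAN loss is involved.
for _ in range(500):
    grad = 2 * X.T @ (X @ W - X) / len(X)
    W -= 0.01 * grad

err = float(np.mean((X @ W - X) ** 2))     # ~0: W has learned the identity
```

With a real conv generator the loop is the same, just with an L1/L2 pixel loss and your optimizer of choice; the point is it converges long before GAN training would.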

>> No.10787223

>>10783300
AI is not a meme. It is used all over the world right now. I bet you don't even know what AI means. Google it.

>b-b-but AI means HAL9000/GlaDOS like in my sci-fi movies and video games

>> No.10787228

>>10787222
Yeah, and because you're training it directly it should be really fast to do, so be careful not to overfit it.

>> No.10787235

>>10787148
Another thing: you should train on either far shots or close-ups, not a mix. Mixing them will fuck up your network pretty hard.

>> No.10787236

>>10787222
That sounds like a good idea. I will try it.

>> No.10787238

>>10780229
Nightmare pepe

>> No.10787247

>>10787235
It would be nice if it could handle both and removing one type cuts out about half of my training data. Maybe I'll try it later.

>> No.10787256
File: 663 KB, 426x926, pepes.png [View same] [iqdb] [saucenao] [google]
10787256

Here are my neuro-pepes from about 700 examples augmented to ~10000. Looks pretty shit, but the good thing is that the discriminator can now easily distinguish pepe from non-pepe, so theoretically I can grab images from google/4chan and automatically save the ones that are pepes. Then just retrain the network on more data.

>> No.10787264

>>10787247
Copynet looks good.
https://files.catbox.moe/pli02h.png
I'll cut the close pics and try again with this as start.

>> No.10787267

>>10787264
I assume you took the dataset from here >>10773634. Did you sort more images? If so, can you upload it? I would sort another couple thousand of them and then upload a new version.

>> No.10787282

>>10787267
I actually downloaded it from the site again, so I know which pictures are of the same girl. That way I can pair them more easily. I'm working my way through classifying them by pose and dressed/undressed. I have a classifier trained so I mainly confirm its decision, but it's still tiring so it may take a while to finish.

>> No.10787301

>>10787267
>>10787282
This is now my paired dataset for pix2pix.
https://files.catbox.moe/n9lh1e.zip

>> No.10787381

>>10779378
Gtx 770’s CUDA compute capability is too low to work with current versions of Tensorflow.

>> No.10787395

>>10787222
It got the backgrounds quicker. Faces still have problems, because they usually aren't in exactly the same place.

>> No.10787634

Hypothetically, what would be the best educational path for CS/ML research?

I'm considering doing Bsc. Engineering Physics+Msc. CS (+Bsc. Economics in parallel for work opportunities).

Is this good or am I shooting myself in the foot by not choosing Bsc. CS? I dislike that program because it is mostly software engineering rather than theoretical computer science/math.

>> No.10787684

>>10787256
Here is what I got at the end. Not so bad considering that I had only 700 images to work with.

>> No.10787687
File: 207 KB, 416x300, pepes.png [View same] [iqdb] [saucenao] [google]
10787687

>>10787684

>> No.10787759

>>10787687
These are some nice rare pepes. Well done dude

>> No.10787912

Guess I'll go back to labeling the dataset for now.
https://files.catbox.moe/xpwfgu.png

>> No.10788039
File: 26 KB, 238x208, 439058205983984.png [View same] [iqdb] [saucenao] [google]
10788039

>>10769848
DeepNude official source code:
https://github.com/deepinstruction/deepnude_official
https://github.com/deep-man-yy/easydeepnude

>> No.10788459

>>10787068
There are a lot of niggers who write the libraries like CUDA etc.

>> No.10788502

>>10787148
Those look like silent hill enemies

>> No.10789090

>>10788039
And.. gone. Did they delete it themselves or github officially cucked?

>> No.10789131

>>10789090
>https://www.gwern.net/Faces
Appreciate the link buddy.

>>10789090
Idk but does it even matter? I can reupload the source to mega if you want.

My Colab account got axed, I assume by some tranny, simply due to uploading 4k pepes and augmenting to 100k.

>> No.10789135

>>10789090
Github is cucked. Just like every other large tech company

>> No.10789151

>>10789135
The best part is they take stances on stuff like this but already know there's nothing they can do to stop it once it gets into the wild. I'm currently batching ~400k images through it, all from prominent feminists, and plan to release it on a new website that's tbd.

>> No.10789191

Anyone have links for the libs on deepnude? I'm looking for all three

>> No.10789217

>>10789151
lmfao i cant wait

>> No.10790143

Why is nobody trying to make a chatbot with GPT-2?

>> No.10790303

>>10788039
>easydeepnude
Is it your project?

>> No.10790821

>>10790303
It's not mine.
It's his project: http://boards.4chan.org/r/thread/16853852/

>> No.10791235

>>10790821
Someone has to tell the guy about this thread. For now he has the best DeepNude that's actually working, so maybe we should improve his version together.

>> No.10791245

>>10774950
i freaking hate rich-fags
how much did that collection of rare pepes cost you?

>> No.10791600

>>10791245
$0

>> No.10792469
File: 160 KB, 338x377, data.png [View same] [iqdb] [saucenao] [google]
10792469

>>10791600
Will you share it or not? I need this data!

>> No.10792628

>>10774950
>>10791600
Please can you send me the Pepe in suit sitting painting and the Pepe clown looking to the side with a suit.

>> No.10792850

>>10791235
He put his contact info on his github page. You can send him an email: deepmanyy 'at' msgsafe 'dot' io

>> No.10792855

>>10792850
>bro just contact me out of 4chan dude

no
this is a thing that must happen on this site. We have thousands of potential NEET autists who can contribute to AI and nothing is achieved if projects are formed on here and then shipped far away to other sites or to discords. The only way we can make use of the brainpower on 4chan is if we stop doing that and cultivate everything here in generals like this

>> No.10792911

>>10792855
Remind me again, when was the last time "4chan brainpower"™© did anything useful other than yelling at each other?

>> No.10792922

>>10792911
bro you sound upset lol

>> No.10792956

Hey, there was this facebook AI experiment where 2 bots were talking to each other and developed some sort of syntax, and I think even some sort of rudimentary semantics. Do you know how it was implemented? Is there a paper on this topic? Detailed information on the matter? I couldn't find any helpful information, just newspaper and blog articles.

>> No.10793057
File: 10 KB, 279x181, layers.png [View same] [iqdb] [saucenao] [google]
10793057

>>10775722

when u stack 2 many layers, skip connections 2 the rescue.

>> No.10793084

>>10793057

seriously though, they seem like a kludge. the layer width needed to approximate a given function decreases exponentially with depth. it wouldn't surprise me if these 100+ layer resnets fall out of favor at some point. biological neural networks are not "deep"

>> No.10793086

>>10792956
The actual paper was pretty mundane. The news just blew it all up.
https://www.skynettoday.com/briefs/facebook-chatbot-language/

>> No.10793166

>>10793086
Thanks a lot!

>> No.10793288

>>10793084
>biological neural networks are not "deep"
Yeah, they are chaotic. Instead of an enormous number of layers there are feedback loops connected to one another in all the crazy ways possible.

>> No.10793296

>>10793084
there’s no such thing you fucking pseud

>> No.10793332

>>10793288
>>10793296

sorry, resnets, and skip connections in most cases seem like a bad design to me.

>> No.10793335

just my 2c.

>> No.10793355

and yes, i realize in that paper they're not using a resnet or using skip connections to the end of increased depth, but my point still stands.

>> No.10793370
File: 13 KB, 256x252, Retarted_pepe.jpg [View same] [iqdb] [saucenao] [google]
10793370

>>10793332
That's because they are. But people haven't found anything that works better yet.

>> No.10793576 [DELETED] 

>>10793370

alphago's policy and value networks were 12 layer CNNs. no batch norm, no skip connections, no dropout, no other doodads. rather different task from most vision tasks, but i would bet good money that excessively deep CNNs don't have a significantly greater degree of representative power than something on the order of a dozen layers with an equal number of parameters.

>> No.10793749

>>10793332
I disagree. Resnet cannot harm your network when used correctly. The nn can always send activations ahead and ignore the layer.
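A toy forward pass shows the point: with the residual branch's weights at zero, the block passes its input through untouched:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = x + F(x); driving the residual branch F to zero weights
    leaves the whole block as an exact identity map."""
    return x + relu(x @ W1) @ W2

x = np.random.default_rng(0).normal(size=(4, 8))
y = residual_block(x, np.zeros((8, 8)), np.zeros((8, 8)))  # equals x
```

(Toy dense layers here instead of convs, and no bn, so it's the clean case; the bn placement argument elsewhere in the thread is exactly about when this stops being true.)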

>> No.10795112

Anyone has large amount of face-like memes? Not necessary with feel memes, but anything that has face structure in it.

>> No.10795806

>>10793332
Curious what your reasoning is. The reasoning behind them seems pretty straightforward: adding a layer should not decrease the accuracy of the model.

>> No.10796406

>>10793749
>>10795806

From the resnet paper:

>To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.

if skip/identity transforms improve the accuracy of your network, doesn't this suggest that your network is too deep?

>We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16].

wouldn't this make it basically impossible for the network to learn an identity transform? if they used it after the relu, couldn't the identity be "learned" by increasing the bias and setting the kernel to identity? in this case, the inputs would be shifted so that they're mostly positive, passed through the relu function (which would have no effect) and then subtracted by batch norm to their original value.

>> No.10796417
File: 10 KB, 250x238, vPAvtn2.jpg [View same] [iqdb] [saucenao] [google]
10796417

>>10796406
cont.

and this horseshit has almost 25,000 citations.

>> No.10796446

>>10796406
>wouldn't this make it basically impossible for the network to learn an identity transform?

cont.

impossible for a non-residual network, that is. my point is that they should have applied bn after the activation if they wanted to make a fair comparison between their design and regular CNNs. applying it before the relu guarantees a loss of information every layer and it has been shown that bn generally works better when applied after the relu.

>> No.10796690
File: 53 KB, 300x300, thumb_best-pepe-rage-gifs-find-the-top-gif-on-53411220.png [View same] [iqdb] [saucenao] [google]
10796690

Fucking windows update just ruined two days of pepe training. I do have a checkpoint from 8 hours prior, but fuck it, the results looked pretty disappointing anyway. Still wanna die tho.

>> No.10796778

>>10796690
Post a throwaway email and another pepe

>> No.10797503

how would you approach the subject of voice mimicking? I've found this https://github.com/andabi/deep-voice-conversion but I have no idea if it is good or outdated.

>> No.10797990

>>10797503
https://www.youtube.com/watch?v=pQA8Wzt8wdw

You should look at recent work on sound generation from OpenAI. But I haven't seen anything new about voice conversion in particular, so it's pretty safe to assume this is the best openly available model.

>> No.10797993

>>10769848
wow AI is really cool you like trick primates into thinking they’re interacting with another primate but its just an evil fake bot you made up haha wow and you can simulate violating women’s privacy and making new methods of advertising. very cool stuff that’s important for the future of our species thank you for taking computer science majors you are lights unto us all. also this is definitely scientific and also math and you all shouldn’t be permabanned for posting in an offtopic thread. based thanks again

>> No.10798021

How many neurons and layers are in the pix2pix neural network?

>> No.10798049

>>10797993
>AI
No idea what you're talking about. This thread is dedicated to machine learning and differentiable programming.

>> No.10798061

>>10787148
How many training epochs/batch size is that though.
Also did you augment data at all?

>> No.10798249
File: 248 KB, 800x952, 1545665094525.jpg [View same] [iqdb] [saucenao] [google]
10798249

Is there a reason why, when dealing with image manipulation/identification, it isn't the norm to cut out each element of the pic before trying to manipulate/identify each single object?

Like take what this lad >>10773541 was saying. If before doing anything with a pic the script separated the girl from the background, wouldn't it bring better results? Wouldn't that deal with some of the confusion? Wouldn't it be faster (since the resulting pic would be much smaller)? Wouldn't it tolerate broader training data?

The act of separating each individual object is what I feel my own brain does when I look at things.

>>10795112
Just search for reaction pics. 99% of them are close pics of faces.

>> No.10798504

>>10796406
>>10796417
>>10796446
cont.

a few more thoughts

>"In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one."

couldn't this be achieved by initializing the kernels close to the identity kernel and setting positive initial biases, like i suggested earlier?

>"We conjecture that the deep plain nets may have exponentially low convergence rates, which impact the reducing of the training error"

not to belabor the point but relu(bn(conv(x))) is a uniquely bad layer composition for learning the identity, or even an invertible mapping. approx half the input values will be mapped to zero every layer. might bn's mean subtraction before relu be contributing to the exponential increase in convergence time they observe?

even without bn, plain relu is not a good choice in this case. conv(relu(conv(x))) can effectively represent the identity as i described earlier, by increasing the bias in the first convolution, and using the second bias after relu to shift it back, but gradient descent may not be effective in this case because negative inputs have no influence on the gradient, so there's no immediate benefit from moving in that direction. using softplus or another smooth, injective activation function would be more effective.
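The "approx half the values zeroed" claim is easy to check numerically, with plain standardization standing in for bn's normalization step:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)      # pretend pre-activation values

# bn in this position re-centers the activations at zero, so the relu
# that follows maps roughly half of them to exactly 0 every layer
z = (x - x.mean()) / x.std()
frac_zeroed = float(np.mean(np.maximum(z, 0.0) == 0.0))   # ~0.5
```

Whatever bias trick the previous layer learned to keep its outputs positive, the mean subtraction undoes it before the relu, which is the crux of the argument above.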

>> No.10799420 [DELETED] 
File: 268 KB, 342x340, pep.png [View same] [iqdb] [saucenao] [google]
10799420

>>10798249
>Is there a reason why, when dealing with image manipulation/identification, it isn't the norm to cut out each element of the pic before trying to manipulate/identify what each single object.

While learning, the network tries to fit as much data as possible. If for some reason it's easier to fit the background than the actual subject, it will fit the background. If we remove it, we can be sure that the network will do only the work we need it to do. That's the general intuition.

>> No.10799433
File: 268 KB, 342x340, pep.png [View same] [iqdb] [saucenao] [google]
10799433

>>10798249
>Is there a reason why, when dealing with image manipulation/identification, it isn't the norm to cut out each element of the pic before trying to manipulate/identify what each single object.
>The act of separating each individual object is what I feel my own brain does when I look at things.

It is better to separate the background. That's basically what networks with attention do, and it usually leads to better results. Otherwise you have to spend some of the network's capacity fitting background data that might not be meaningful to you. So it's definitely not a bad idea if you can do it automatically.

>> No.10799634

>>10799433
> Even more you have to spend some resources of the network to fit background data, that might be meaningful to you. So, it definitely not a bad idea if you can do it automatically.
Aren't there already NNs trained to trace borders? I messed around a bit with one that was able to cut humans out of the background, but are there any that can do that for any generic object, based on color shifts/shadows?
I know that my own brain is able to do that: even if I've never seen the objects in a picture, I automatically separate them into individual objects, then I might ask myself what each one is.

>> No.10799659

>>10799634
The thing you're trying to describe is called attention
https://www.youtube.com/watch?v=SysgYptB198

>> No.10799844

DLE at a Big N company AMA

>> No.10800069

>>10799844
What's the probability of catching some shit after kissing a whore?

>> No.10800168

>>10798061
I tried again. This is after 11 hours and 30 epochs. Before I didn't use augmentation other than the flipping done by pix2pix. This time I did 10x augmentation before I started training. The batch size is 16.
https://files.catbox.moe/8w6wmo.png
The biggest issue at this point is probably image pairs where the pose doesn't line up exactly.

>> No.10800788

>>10799659

he is describing image segmentation.

>> No.10801090

>>10797503
Take a look, might be helpful to you
https://www.youtube.com/watch?v=6bFN2YkN6bo

>> No.10801671

Thanks nvidia
https://www.youtube.com/watch?v=ubCrEAIpQs4

>> No.10801855

>>10800168
>Epoch: [67][ 1093 / 1440] Time: 0.056 DataTime: 0.000 Err_G: 4.4253 Err_D: 0.1236 ErrL1: 0.0462
https://files.catbox.moe/swcp8d.png
My generator is losing to the D.

>> No.10801912

>>10800168
Are you trying to map one image to another directly? This is not how pix2pix should work. I suspect you're doing something wrong. Have you tried CycleGAN for this task?

>> No.10801963

>>10801912
Thanks for the suggestion. I'll take a look at CycleGAN.

>> No.10802000

My training data looks like this, which also illustrates the issues I have with pix2pix. The poses don't match well enough.
https://files.catbox.moe/ikrjzn.jpg

>> No.10802039

I would like to get started in working with machine learning and a.i.
I went on google's a.i. page and it's a mess of stuff everywhere, don't know where to begin. Could someone please recommend me an outline of what I need to get started? (besides python).
I'd like to experiment with A.I. that creates simple machines (levers) based on parameters.

>> No.10802102

>>10802039
You need an idea. Anything you're passionate about that might be done with AI, like generating pepe faces. Then you google how to generate images, watch a couple of YouTube videos about GANs, download a state-of-the-art model for image generation, collect the data and train the GAN on it. Then you crack because TensorFlow doesn't want to install on your pc, and you move on to something else.

>> No.10802114

Why does the AI community refuse to move away from python? C++ is orders of magnitude faster and could speed up training and evaluation a lot

>> No.10802121

>>10802114
Python is just glue between bits of optimized code written in C++ or running on the GPU.
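Easy to see the gap with a toy benchmark (absolute times depend on the machine, but the ratio is typically in the hundreds):

```python
import time
import numpy as np

a = np.random.default_rng(0).normal(size=1_000_000)

t0 = time.perf_counter()
fast = float(a.sum())            # one Python call; the loop runs in C
t_np = time.perf_counter() - t0

t0 = time.perf_counter()
slow = 0.0
for v in a:                      # same arithmetic, driven by the interpreter
    slow += v
t_py = time.perf_counter() - t0
```

The Python you write in ML scripts is almost entirely calls like the first one, which is why rewriting the glue in C++ buys you nearly nothing.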

>> No.10802558

>>10800069
street whore p(y=std | x=kiss) = 0.4
tinder whore p(y=std | x=kiss) = 0.99
you probably dont want the posteriors of whores in all likelihood

>> No.10802562

>>10769848
Hi if I have a leftover 5x GTX 1070 ETH mining cluster can I use it for training TensorFlow models?

>> No.10802580

>>10772863
Stuff like this won't work; they're all standing straight up. You need more poses. I'd suggest you actually hire an adult model to create this. You can do it like this:

>Hire adult model
>Plan to take a clothed and naked dataset
>Set up a white background on the floor (you can add in junk backgrounds afterwards for the real training set)
>Have dots on the floor representing a circle divided into 50 parts (you can photoshop the dots out in an automated way after)
>Have 500 poses with legs head and arms at various locations on the floor, front side up and down
>Have an assistant move her hands and legs to the right places for you, and have a camera set up to automatically snap a picture when you click a button
>Collect the dataset
>Do this with a few models of various races

If you set it up fast, you can probably do this in one hour per model for hundreds of pictures. At approximately 5 seconds per photo you get 720 photos per hour, so if you pay them $100 per hour you can collect a dataset of 1000 photos pretty cheap

Another method is to video her and have her continuously move her body around the way you tell her to, like moving her arm slowly in a circle and slowly arching her back. Do it both clothed and nude. This way you can use body tracing software to match up the shots afterwards, and the synchronization doesn't matter as much

If you really want to make this big I suggest contacting a large porn production company and having them hire you to do this for them, but charge them millions of dollars (not joking, think those fat old porn producers even slightly understand how high tech this is? If you're doing it as a hobby project you're fucking retarded, this is a startup level idea)

Also once you do, please send some BTC to this address:
184jvGYtE5248qvxUqvXP1KfNk8UvqCGab

>> No.10802582

>>10796690
>He didn't add in a disk epoch cache

Kill yourself lmao

>> No.10802610 [DELETED] 

>>10774950
Beautiful. Here are my pepes, 316 of them. It's not much but please use them

7zip link:
https://file.io/LAdC1T

password:
p32YkCVsfoRTNJB0DGJz90Th

Anon I suggest you first create a pepe detector then download a 4chan image dump to scrape more pepes from it, conveniently you will also get non-pepe images to send to your GAN

>> No.10802657

>>10802562
Yes

>> No.10802953
File: 458 KB, 1411x697, xxc.png [View same] [iqdb] [saucenao] [google]
10802953

>>10801671
This thing is huge. Probably a big pain to train it.

>> No.10803002

>>10802114
Because they're babbies who can't into programming

>> No.10803003

>>10802580
>Also once you do, please send some BTC to this address:
>184jvGYtE5248qvxUqvXP1KfNk8UvqCGab
Kys

>> No.10803043

>>10803003
You kill yourself, my time and advice is worth money you useless fuck, go back to your NEET hovel

>> No.10803097

>>10775722
Is there no implementation from authors? There usually is

>> No.10803098

>>10775889
>he thinks people share rare pepes

lmao, look at this kid

>> No.10803468

>>10798504
cont.

i did an experiment with mnist just for kicks, with a 45 layer cnn -> two layer fully connected classifier, batch size 12, plain sgd with initial learning rate of .01 and a decay of .998 every iteration, for 3000 iterations

i didn't check the test error, but bnorm(elu(conv)) and a couple resnet configurations relu(bnorm(conv + skip)), bnorm(elu(conv + skip)) had comparable training errors and convergence rates while relu( bnorm( conv )) and bnorm(relu(conv)) had convergence problems, typically the loss was about 3x higher at the end of training. this difference didn't show up until the network was very deep, > 30 layers. seems like the problem lies more with relu than bnorm though. so is resnet intrinsically more powerful than vanilla cnn, or is it simply making up for the information loss by a many-to-one activation function?

>> No.10803637
File: 1.37 MB, 900x866, 00a2b0e79c16747c392d4eaf62558ac39ff47881177bf00683c686e1c333f787.png [View same] [iqdb] [saucenao] [google]
10803637

>>10792628

>> No.10803641
File: 264 KB, 640x699, 00dcb706e653ce5203d12bdb3aee0f471a753bdb1342e13625255eb978fcbd75.png [View same] [iqdb] [saucenao] [google]
10803641

>>10792628
weg

>> No.10803665

>>10774950
Can you post the source code?

>> No.10803840

>>10801963
So far CycleGAN trains a lot slower. It doesn't produce horror-movie results, but it also isn't very effective at stripping the girls: it just paints over their clothes with skin color.

>> No.10803875

>>10803840
Any screens?

>> No.10803947

>>10803875
I'm training on 64x64 right now to make it faster, but here you go.
https://files.catbox.moe/yypn33.jpg

>> No.10803952

>>10769848
Wheres the wiki for beginners?

>> No.10803975

>>10803947
The fact that it trains slower is a good thing; it's supposed to. GANs are known to be slow to train.

>> No.10804081
File: 4 KB, 64x64, fake_img_0032_015000.jpg

Finally tried to use StyleGAN for pepe generation and it looks very promising.

>> No.10804103

>>10804081
Can you sculpt the GAN output with blobs in certain areas or initial pictures of other animals like a sheep that becomes a pepe?

>> No.10804123

>>10804103
If it works out. I had an idea to add anime/human faces to the dataset and fine-tune the pretrained model on this extended data to see what I'd get. I can add some animals for sure.

>> No.10804487

>>10803468
cont.

really? no (you)'s on this one?

you all want to train pepe generators? god ML sucks, it's just as much a ridiculous farce as /pol/

>> No.10804491

oh, lord help me

>> No.10805549

>>10804487
Probably no one here has experience with your problem. Language-related ML is boring af, at least to me.

>> No.10805834

>>10802114
Python is more accessible for the common people and for math/stats plebeians.

>> No.10805902
File: 606 KB, 855x645, sss.png

Damn, this paper is good. https://arxiv.org/pdf/1906.00446.pdf

>> No.10806175
File: 10 KB, 128x128, fake_img_0064_058000.jpg

>>10804081
It took only a day to get to 64x64 resolution... wow, a few more centuries and I'll be generating amazing hi-res pepes for sure.

>> No.10806191

can anyone redpill me on how deep learning was integrated into capturing the picture of Sgr A*

>> No.10806205
File: 74 KB, 386x573, 1560643115834.jpg

>>10806175
Well done

>> No.10806207

>>10806175
What the fuck is the purple thing

>> No.10806208

>>10806191
I assume it's mostly about noise cancellation.

>> No.10806223

>>10806207
aborted pepe fetus

>> No.10806232
File: 415 KB, 640x640, fancy_pepe.png

>>10806208
Fancy pepe I guess

>> No.10806235

>>10806232
>>10806207

>> No.10806246

>>10806175
Those pepes are suffering
you should make a "produce-a-pepe" site where people go and press a button to receive a pepe.
you could even have a 1-10 rating scale so that users can review a given pepe and provide feedback

>> No.10806253
File: 11 KB, 128x128, fake_img_0064_037000.jpg

>>10806246
>Those pepes are suffering
What do you know about suffering?

>> No.10806260

>>10806253
Just look at them. They are crying out: "Please Kill Me!!"

>> No.10806364

>>10805549

it's not related to this anon's paper necessarily >>10775722

just a critical look at the resnet paper, which as far as i know was the first big paper to advocate "skip" connections

>> No.10806375
File: 91 KB, 1280x800, blonde-emma-stone-1280x800-celebrity-wallpaper.jpg

I work in augmented reality, but I was just at the ICVSS summer school in Sicily last week and it was mostly about deep learning.
What can I expect from this thread?

>> No.10806415

>>10806375
How to build a DeepNude app (image translation: dressed woman to naked woman, check /r/), but at every turn it's the pepe GAN generator guy.

>> No.10806416

Retard here. What's the difference between Deep learning and Machine Learning?

>> No.10806423
File: 83 KB, 828x269, x.png

>>10806375
Okay, read it now - was all about images anyhow.

So I'm currently interested in this author,
https://arxiv.org/search/cs?searchtype=author&query=Achille%2C+A
e.g. this (theory) paper
https://arxiv.org/pdf/1810.02440.pdf

It's about using some more stochastic-dynamics notions (e.g. Feynman-Kac formulae and tools like that) to speak about relations between learning data sets

>> No.10806428

>>10806416
Deep learning refers to a specific type of machine learning that uses multilayer ("deep") neural networks.
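A minimal sketch of what "multilayer" means, nothing more (toy sizes, random weights — no training loop here):

```python
import numpy as np

def mlp_forward(x, layers):
    """'Deep' just means stacking layers: each is an affine map
    followed by a nonlinearity."""
    for W, b in layers:
        x = np.tanh(x @ W + b)
    return x

rng = np.random.default_rng(42)
dims = [4, 8, 8, 2]  # a 3-layer network: 4 inputs -> 2 outputs
layers = [(rng.standard_normal((i, o)) * 0.1, np.zeros(o))
          for i, o in zip(dims[:-1], dims[1:])]

out = mlp_forward(rng.standard_normal((5, 4)), layers)
print(out.shape)  # (5, 2)
```

everything else in ML (trees, SVMs, linear regression, ...) is machine learning that isn't this.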

>> No.10806435
File: 62 KB, 458x595, coding.jpg

>>10806423
more concretely, here he's treating the random variable in stochastic gradient descent like the noise leading to diffusion in physics, and thus reasons about the work necessary to reset the weights, i.e. to relearn for another task (fine-tuning)
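my reading of that picture (the standard continuous-time formulation, not necessarily the paper's exact one): treat SGD as a Langevin diffusion,

```latex
\mathrm{d}\theta_t \;=\; -\nabla_\theta L(\theta_t)\,\mathrm{d}t \;+\; \sqrt{2T}\,\mathrm{d}W_t,
\qquad
\rho_\infty(\theta) \;\propto\; e^{-L(\theta)/T}
```

where $W_t$ is Brownian motion (the random minibatch noise) and the "temperature" $T$ is set by the learning rate and batch size; the work needed to move between two weight configurations then becomes a question about this diffusion.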

>> No.10806454
File: 80 KB, 1280x640, fake_img_0064_067000.jpg

>>10806375
>What can I expect from this thread?
Mostly Pepes

>> No.10806471

>>10806454
that pepe looks like he's sitting down the bar from me, giving me the stink eye because I'm not from around here
should I just mosey on to the next town over?

>> No.10806479
File: 345 KB, 760x495, Screenshot_20190714-185348.png

>>10806454
What came up during the summer school was image completion, which could make for a tool where you don't have to draw the pepes yourself, and the output wouldn't be quite as random.

>> No.10806506

>>10806479
edge2pepe would probably perform very well, given pepe edges are well defined

>> No.10806532
File: 405 KB, 760x509, Screenshot_20190714-191514.png

>>10806506
Is there a chance to code up something fun from scratch? I like to post such semi-tutorial content on youtube and it would be a way to learn it. I only know the Nelson book using python and the code amount is in scope

>> No.10806685
File: 69 KB, 512x256, cyclegan.png

>>10803947
Training at 256x256 doesn't help. It still does this.

>> No.10806728

>>10806685
Well, maybe CycleGAN isn't the best option here. You can also try https://github.com/SKTBrain/DiscoGAN which works across different domains.

>> No.10806922

>>10806685
Here is another good work from NVidia that might be able to generate some good results https://github.com/mingyuliutw/UNIT

>> No.10806954

>>10806728
>>10806922
Thanks, I'll give those two a try too. Even if I don't get good results, it's interesting to see what happens.

>> No.10806963

>>10774950
please make a torrent of them

>> No.10806974

>>10770070
Merge!

>> No.10806996

>>10773634
Low resolution, really low. I would try to use wildlife from 4chan torrents first, then Suicide Girls. I know tattoos can be a problem, but fine set dropout could help.

>> No.10807016

>>10806996
> Low resolution
What, are you going to train a 2048x2048 GAN?

>> No.10807826

>>10798504
>>10803468

some final thoughts: apparently bnorm can represent the identity transform because most implementations include learnable biases, but as i said, the problem lies mostly with relu. a regular cnn with elu activations seems to work nearly as well as equally deep resnets with relu or elu, and that's with random initial weights.

>> No.10808386

Has anyone tried using GPT to make a chatbot?
All the chatbots I know are still so bad.
I feel like what we can do in other areas with machine learning by now should enable us to make at least a decent chatbot, one that doesn't have a one-message short-term memory.

>> No.10808393

>>10808386
levil and doesn’t notice or care you people are disgusting insects

>> No.10808397
File: 225 KB, 1843x814, 20190715_113226.jpg

See this fucking shit: not a single chatbot can pass this most basic test of remembering a single fucking piece of information and being able to spit it out again.

>> No.10808399

>>10808397
>>10808397
>>10808397
XiaoIce uses a "context vector" to keep on topic. Check that bot out from MS.

>> No.10808402

To make a chatbot that isn't shit you'd want:
- Context vector + a history of it to look up
- A quick search through the context-vector history to see if the topic has been discussed before, feeding related past input sentences into the current input

that is about the best easy solution for "context"

>> No.10808404

>>10808397
hm... What if we trained the network in the following way:
We have two GPT-2 networks and one text-analyzing network.
Both generator networks produce text one after another, and the text analyzer processes this text in a GAN fashion, trying to determine whether the conversation is real or fake. Then we take the score from the analyzer and train the GPT-2s on it. Might that work?
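A toy numpy sketch of that scheme, with a categorical distribution standing in for GPT-2 and logistic regression on token counts standing in for the analyzer. Everything here (vocabulary size, update rules, the REINFORCE-style generator step) is made up for illustration; real GPT-2 training would need a proper policy-gradient setup on top of transformers:

```python
import numpy as np

rng = np.random.default_rng(0)
V, SEQ, STEPS = 6, 8, 300   # toy vocab size, sequence length, train steps

def sample_real():
    # "real" conversations favour tokens 0-2
    return rng.choice(V, size=SEQ, p=[.3, .3, .3, .04, .03, .03])

def features(seq):
    # bag-of-words frequencies, the analyzer's input
    return np.bincount(seq, minlength=V) / len(seq)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

gen_logits = np.zeros(V)    # "generator" parameters
w = np.zeros(V)             # "analyzer" (discriminator) weights

for step in range(STEPS):
    p = softmax(gen_logits)
    fake = rng.choice(V, size=SEQ, p=p)
    # analyzer step: push real -> 1, fake -> 0 (logistic SGD)
    for x, y in [(features(sample_real()), 1.0), (features(fake), 0.0)]:
        pred = 1 / (1 + np.exp(-w @ x))
        w += 0.5 * (y - pred) * x
    # generator step: analyzer's score is the reward; grad of log-prob of a
    # sampled categorical sequence is proportional to (counts - p)
    reward = 1 / (1 + np.exp(-w @ features(fake)))
    gen_logits += 0.5 * reward * (features(fake) - p)

print(softmax(gen_logits).round(2))  # ideally mass drifts toward tokens 0-2
```

the real pain points (credit assignment over long text, mode collapse, discriminator overpowering the generator) don't show up at this scale, which is exactly why text GANs are hard.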

>> No.10808415

>>10808404

How about

You need a context path formed via context vectors

whereby each input is a (series of) vector(s) that moves the context around.

Each reply first determines the current context, then uses an attention-like mechanism to reply based on the conversation history of the current context + the immediate input (with the immediate input obviously weighted highest)

>> No.10808421

>>10808415
Ex: Determine the current context is automobiles

some vector [0,0,0,1] whereby 1 is automobiles

Do a look up of previous input history with weighting towards most recent

Attach them to current input with low weighting so you have some previous context. It won't give the system memory but might keep the flow of conversation a bit better.

You could test the system on "context switching"

Aka "Hey that car is a nice one, what about the weather?"

The context should switch to weather and the comment about cars should be way less weighted.
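A toy sketch of the recency-weighted context idea above, including that exact context-switching test. The keyword "classifier" and the decay constant are obviously placeholders for a real topic model:

```python
import numpy as np

TOPICS = ["cars", "weather", "food"]

def topic_vec(text):
    # stand-in for a real topic classifier: crude keyword matching
    kw = {"cars": ["car", "engine"], "weather": ["weather", "rain"],
          "food": ["food", "pizza"]}
    v = np.array([any(k in text.lower() for k in kw[t]) for t in TOPICS],
                 float)
    return v if v.sum() else np.ones(len(TOPICS)) / len(TOPICS)

history = []   # topic vectors, oldest first
DECAY = 0.5    # older turns count for less

def current_context(new_input):
    history.append(topic_vec(new_input))
    # recency weighting: newest turn gets weight 1, older ones decay
    weights = DECAY ** np.arange(len(history) - 1, -1, -1.0)
    ctx = (weights[:, None] * np.array(history)).sum(axis=0)
    return ctx / ctx.sum()

current_context("Hey that car is a nice one")
ctx = current_context("what about the weather?")
print(TOPICS[int(np.argmax(ctx))])  # weather now dominates
```

the cars turn is still in the context (weight 0.5 vs 1.0 here), so a follow-up like "anyway, back to the engine" could pull it back without the bot having forgotten it entirely.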

>> No.10808426

>>10808415
>>10808421
I think the most problematic part of your scheme is evaluating how well the bot performs.

>> No.10808428

>>10808404
Also, the basic unit of AI, including in the brain, is two systems working from both ends of the problem.

Generation and discernment meet in the middle to determine things. Meaning somewhere you bring up some memory of a cat, or abstractly imagine one, while the visual shit is taking in the input.

They both "match" to create a "that's a cat".

So yeah, any system with a generator that is creating said object and comparing it to the input is far superior to one without.

>> No.10808440

>>10808426
just brainstorming

Either way the problem is the bot has no idea about the thing the person is interested in. If the context is always on automobiles, the bot would have to look up or hook into something like google trends or latest news on automobiles to bring up interesting / contextual things.

"Did you see the new ____?"

I bet you could combine it with what's popping on twitter / google news / google search trends and the OpenAI news-article writer to get better results, pulling from "click bait" to create conversation that seems real and knowledgeable.

>> No.10808990
File: 14 KB, 256x256, rrr.jpg

I think I should stop this shit. It progressed very well at the beginning, then it just got stuck at some point and brings no improvement at all... And yet I wanna do this cool transition animation.

>> No.10809068

>>10808990
>it just stuck at some point and bring no improvement at all..
It's called a local minimum. Use a smaller batch size to add some randomness to the process.
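Quick numpy illustration of why batch size controls the noise: the minibatch gradient is a mean of per-example gradients, so its variance scales like 1/batch_size (toy scalar gradients here, not a real network):

```python
import numpy as np

rng = np.random.default_rng(0)
per_example_grads = rng.standard_normal(100000)  # toy per-example gradients

def grad_noise(batch_size, trials=2000):
    """Variance of the minibatch gradient estimate."""
    batches = rng.choice(per_example_grads, size=(trials, batch_size))
    return batches.mean(axis=1).var()

small, large = grad_noise(4), grad_noise(64)
print(small / large)  # roughly 64/4 = 16x noisier
```

that extra noise is what can kick the weights out of a shallow basin; the flip side is that it also makes convergence near a good minimum messier, hence the usual trick of decaying the learning rate instead.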

>> No.10809076

>>10809068
I can't do anything at this point. I have a pretty shitty video card and it has already taken too much of my time. I'll probably share my dataset and the implementation I used so anybody can try it on their own once I'm done with it.

>> No.10809081

>>10808990
Style transfer result to "anime/cartoon" style?

>> No.10809099
File: 12 KB, 256x256, fake_img_0064_105000.jpg

>>10809081
No, just StyleGAN trained on a pepe dataset (about ~1.5k images augmented to 25k). I think the best use for something like this is using the generated pepes as inspiration to draw new ones.
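For reference, a toy sketch of the kind of augmentation that blows a small set up like that (flips / small shifts / brightness jitter on grayscale arrays; I'm guessing at the actual pipeline used):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, n=16):
    """Produce n randomized variants of one image."""
    out = []
    for _ in range(n):
        a = img.copy()
        if rng.random() < 0.5:
            a = a[:, ::-1]                        # horizontal flip
        dy, dx = rng.integers(-2, 3, size=2)      # small translation
        a = np.roll(a, (int(dy), int(dx)), axis=(0, 1))
        a = np.clip(a * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness
        out.append(a)
    return out

dataset = [rng.random((64, 64)) for _ in range(3)]  # stand-in "pepes"
augmented = [v for img in dataset for v in augment(img)]
print(len(dataset), "->", len(augmented))  # 3 -> 48
```

with ~16 variants per image, 1.5k originals lands right around 25k; the GAN still only ever "sees" 1.5k distinct pepes, which is probably part of why it plateaus.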

>> No.10809117

Looks like the thread is on bump limit. Gonna make a new one.

>> No.10809201

NEW THREAD
>>10809198
>>10809198
>>10809198
>>10809198