
/sci/ - Science & Math



File: 72 KB, 630x630, B4D01504-1906-4279-85F1-9615BEC842C0.jpg
No.9865952

What in the world is:
Genetic Algorithms?

>> No.9865966

https://simple.wikipedia.org/wiki/Genetic_algorithm

>> No.9865992

>>9865952
A really shitty way to do optimization.

>> No.9865994

>>9865992
Redpill me on better ways to do optimization

>> No.9866010

>>9865994
Please someone help us!

>> No.9866049

>>9865994
Run the algorithm in parallel on a million machines :^]

>> No.9866274

>>9865994
Neural network

>> No.9866278
File: 112 KB, 617x456, 1531033901683.jpg

>>9866274

>> No.9866294

>>9866274
ZIMBABWE

>> No.9867687

>>9865994
Deep learning.
At least until quantum computing

>> No.9867693

>>9865952
A genetic algorithm mimics the process of natural selection to evolve a solution set towards an optimal point. It works by mating good solutions with other good solutions, producing offspring that are (ideally) even better than their parents. Poor performing solutions are dropped from the population.
IT'S THE CIRCLE OF LIIIIIIIFFFFFEEEEE
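
A bare-bones sketch of that loop (Python, with a toy objective of maximizing the number of 1-bits in a bitstring; this is just an illustration of the scheme described above, not anyone's actual code):

import random

GENOME_LEN = 32
POP_SIZE = 50
MUTATION_RATE = 0.02

def fitness(genome):
    # toy objective: count the 1-bits
    return sum(genome)

def crossover(mum, dad):
    # single-point crossover: prefix from one parent, suffix from the other
    cut = random.randint(1, GENOME_LEN - 1)
    return mum[:cut] + dad[cut:]

def mutate(genome):
    # flip each bit with a small probability
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(200):
    # the fitter half survives and mates to refill the population
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    children = [mutate(crossover(*random.sample(survivors, 2)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print(fitness(best), best)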

>> No.9867704

>>9866278
He's not wrong.
ANN works really well. That's why it's such a meme now.
Genetic algorithms, on the other hand, are fucking terrible: they take forever to converge, and their only real benefit is sounding kind of cool as a concept.

>> No.9867706
File: 16 KB, 360x360, earthrise.jpg

>>9867693
Here is the target for my GA. The solutions are compared against this target and given a "fitness" value from 0 (completely different) to 1.0 (exactly the same).
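
The exact metric isn't stated, but a fitness function of that shape could look something like this (candidate and target as same-sized RGB arrays; mean absolute pixel error, rescaled so identical images score 1.0 — just one reasonable choice):

import numpy as np

def fitness(candidate, target):
    # candidate, target: uint8 arrays of shape (height, width, 3)
    # 0.0 = completely different, 1.0 = pixel-for-pixel identical
    diff = np.abs(candidate.astype(np.float64) - target.astype(np.float64))
    return 1.0 - diff.mean() / 255.0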

>> No.9867710
File: 1.17 MB, 360x360, earthrise.webm

>>9867706
Here is the GA in action. It starts with many solutions consisting of random polygons, each of which has several genes (vertices + colour). By mating a pair of solutions and sharing genes we can generate a new solution. If the new solution is an improvement, we keep it and drop a worse solution out of the population. This continues for some time, until the solutions start to resemble our target.
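
Roughly what one mating + replacement step might look like, treating each polygon (its vertices plus an RGBA colour) as one gene-bundle. WIDTH, HEIGHT, render() and fitness() are assumed to exist (rendering polygons to an image is the part left out); this is a sketch of the scheme described above, not the poster's code:

import random

def random_polygon(width, height):
    # one gene: a triangle plus an RGBA colour
    verts = [(random.randint(0, width), random.randint(0, height)) for _ in range(3)]
    colour = tuple(random.randint(0, 255) for _ in range(4))
    return (verts, colour)

def mate(parent_a, parent_b):
    # uniform crossover: each polygon slot comes from one parent or the other,
    # with the occasional fresh random polygon as a mutation
    child = []
    for gene_a, gene_b in zip(parent_a, parent_b):
        gene = gene_a if random.random() < 0.5 else gene_b
        if random.random() < 0.01:
            gene = random_polygon(WIDTH, HEIGHT)
        child.append(gene)
    return child

def step(population, target):
    # steady-state update: breed one child; if it beats the current worst
    # solution, the worst one is dropped, as described above
    parent_a, parent_b = random.sample(population, 2)
    child = mate(parent_a, parent_b)
    worst = min(population, key=lambda s: fitness(render(s), target))
    if fitness(render(child), target) > fitness(render(worst), target):
        population.remove(worst)
        population.append(child)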

>> No.9867711
File: 23 KB, 360x360, earthrise.png

>>9867710
After 3622 generations we reach this image. This is an "ok" result - but for very small images (say ~10px by 10px) we can reach a pixel perfect result fairly quickly and with a low number of polygons. The goal of this GA was to create a novel image compression algorithm, with long encoding times as a trade off for small filesizes. But I got a job before finishing it :^)

>> No.9867724

>>9867711
The first few chapters of "An Introduction to Genetic Algorithms" by Melanie Mitchell are a good introduction
PDF: https://mega.nz/#!VVV3CCKB!IQUSnxOVS-sBoxn7SMOou1uUrV4dMdWZ3t3xU1z-SmY

>>9866274
>>9867687
These posters are scared of having to think for themselves.

>> No.9867734

>>9867724
Not the guy who originally posted "neural networks," but I will say ANN is fine.
And I'm not "scared of having to think for [myself]." I was a developer for about a decade (working as a data scientist now because that's the meme thing to do now if you want a much higher salary) and usually prefer figuring out ideas from the ground up instead of depending on prefab solutions.
Figuring out how to write an ANN program yourself using just standard library C++ shit gives you a good appreciation for how that particular sort of optimization works. I also didn't get as deep into math when I was younger / still in school as I should have and working through how to make something like that work is what helped make calculus click for me too.

>> No.9867771

>>9867734
Properly developing an ANN library is a very demanding task. For anyone below graduate level, the appropriate project is a really basic perceptron.
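
Something on that scale, sketched in Python for brevity (it translates to plain standard-library C++ almost line for line): a single perceptron learning logical AND with the classic perceptron update rule.

import random

def train_perceptron(samples, epochs=25, lr=0.1):
    n_inputs = len(samples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - y                     # perceptron learning rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(and_data)
print(weights, bias)   # learns to separate AND; swap in any linearly separable data to experiment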

>> No.9867911

>>9865952
>is
are

>> No.9867919

>>9865992
>Nature does shitty optimization

>> No.9867929
File: 367 KB, 2700x1228, 2.jpg

>>9867919
That's correct.
Nature takes the most direct path in the short term.
It takes insanely long and convoluted paths in the long term.
Because it doesn't have the luxury of planning things out in advance and every step of what it does has to stand on its own for everything that comes after it to work.
It's better to think of nature as "lazy" rather than as "efficient."

>> No.9867959

>>9865992
They just work though; all you have to do is throw more computing power at the problem.
>>9865994
Gradient-based optimization. If you can calculate the gradient you can typically find solutions much faster, especially if the problem is convex. If the problem isn't convex, find a way to make it convex. (A bare-bones sketch is at the end of this post.)
>>9867687
>>9866274
Those are not optimization approaches.
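
For reference, plain gradient descent is only a few lines once you can write the gradient down by hand. Toy convex example (nothing fancy like line search or momentum):

# f(x, y) = (x - 3)^2 + 2*(y + 1)^2, a convex bowl with its minimum at (3, -1)
def grad(x, y):
    return 2 * (x - 3), 4 * (y + 1)

x, y, lr = 0.0, 0.0, 0.1
for _ in range(100):
    gx, gy = grad(x, y)
    x -= lr * gx
    y -= lr * gy
print(x, y)   # converges to (3, -1)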

>> No.9867970

>>9867929
Not
>lazy
but "undirected". Sure we can evolve a GA at much higher rates than nature, but more importantly we can analyze the outcome to alter the algorithm's parameters for our next run (or implement adaptive parameters)

>>9867959
>They just work
This attitude will slowly lead you to local optima. Good GAs for complex problems require a lot of fine tuning.
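
One simple way to read "adaptive parameters" (purely illustrative, the names are made up): bump the mutation rate when the best fitness has stalled, shrink it again while progress is being made.

def adapt_mutation_rate(rate, best_history, patience=20, low=0.005, high=0.2):
    # best_history: best fitness seen at each generation so far
    stalled = (len(best_history) > patience and
               best_history[-1] <= best_history[-1 - patience])
    rate = rate * 1.5 if stalled else rate * 0.9
    return min(high, max(low, rate))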

>> No.9867980

>>9867959
>Gradient based optimization
>Those are not optimization approaches.
Neural networks are gradient based optimization. They use gradient descent to find a minimum of an error function.
I guess you're saying it doesn't count because it's the error function minimum that constitutes the optimization problem and not the task that's learned *by* solving that optimization problem.
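
Spelled out on the smallest possible "network" (one linear neuron, squared-error loss, hand-written gradient) so both layers of that argument are visible: the thing being minimized is the error function, and the task gets learned as a side effect of that minimization.

import random

# fit y = 2x + 1 by gradient descent on the mean squared error
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w, b, lr = random.random(), random.random(), 0.05

for _ in range(2000):
    dw = db = 0.0
    for x, t in data:
        err = (w * x + b) - t          # d(loss)/d(prediction) for 0.5 * err^2
        dw += err * x
        db += err
    w -= lr * dw / len(data)
    b -= lr * db / len(data)

print(w, b)   # approaches (2, 1): minimizing the error function *is* the learning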

>> No.9867982

>>9867959
>gradient based
Doesn't apply to reinforcement learning, which is the main area where genetic algorithms are currently being used in ML research.
Many search-space exploration algorithms are genetic algorithms of some kind.

>> No.9867984

>>9867929

It can be slower, but it reaches global optima more reliably. Gradient optimization sucks ass for any slightly deceptive problem; think about an S-shaped maze: if you optimize by distance to the goal, you will get stuck in a corner.
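
A one-dimensional toy version of that "deceptive" situation: two valleys, and the gradient from an unlucky start walks you into the shallow one. Random restarts stand in for a population here; the point is just that the local slope lies to you.

def f(x):
    return (x * x - 1) ** 2 + 0.3 * x        # shallow valley near +1, deeper one near -1

def dfdx(x):
    return 4 * x * (x * x - 1) + 0.3

def descend(x, lr=0.01, steps=500):
    for _ in range(steps):
        x -= lr * dfdx(x)
    return x

print(descend(0.5))   # stuck near x = +0.96, the shallow (local) minimum
print(min((descend(x0) for x0 in [-2, -1, 0, 0.5, 1, 2]), key=f))   # finds x near -1.04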

>> No.9867985

>>9867984
>it reachs global optima
When you have so much biomass and huge timescales you're always going to have a few winners

>> No.9867992

>>9867984
For the sake of discussion, let's say human intelligence is the optimum point of evolution. Nature only succeeded by chance - had the dinosaurs not been hit by an asteroid, would humans have still won out, or become raptor food?

>> No.9868007
File: 244 KB, 450x225, comparison(3).gif

Optimization thread? Optimization thread.

Post optimization algorithms.

>> No.9868047
File: 714 KB, 620x480, saddle_point_evaluation_optimizers.gif

>> No.9868050
File: 893 KB, 620x480, contours_evaluation_optimizers.gif

>> No.9868056

>>9867992
Why were these algorithms ever a thing, then?

>> No.9868068

>>9868056
that's the point - they work well because we can exert more control over the evolutionary process than nature. nature just gets lucky every now and then.

>> No.9868076

>>9868068
But as you just said, it's a shitty way to do it. So why did anyone care about them?

>> No.9868210

>>9868076
GAs are used for problems where it would take more time and effort to understand how to properly optimize the problem than it would to solve it in a simple but really inefficient way.

>> No.9868563

I disagree with the posters saying that GAs are trash. They are very good at finding global optima; you just need to use them in a hybrid approach. I did DC control using a GA for the parameters, and after narrowing the range with the genetic algorithm I switched to the simplex method, which was very fast at that point since it converged towards the true global extremum instead of getting stuck locally.
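
Roughly what that hybrid hand-off looks like (toy objective standing in for the actual DC-control problem, scipy's Nelder-Mead as the simplex step; treat the GA half as schematic):

import random
import numpy as np
from scipy.optimize import minimize

def objective(p):
    x, y = p
    return (x * x + y - 11) ** 2 + (x + y * y - 7) ** 2   # Himmelblau's function

# coarse GA phase: just narrow down the region
pop = [np.random.uniform(-5, 5, size=2) for _ in range(60)]
for _ in range(100):
    pop.sort(key=objective)
    parents = pop[:20]
    children = [(random.choice(parents) + random.choice(parents)) / 2
                + np.random.normal(0, 0.3, size=2)          # blend crossover + mutation
                for _ in range(40)]
    pop = parents + children
rough_best = min(pop, key=objective)

# local simplex phase: polish the best candidate found by the GA
result = minimize(objective, rough_best, method="Nelder-Mead")
print(rough_best, result.x, result.fun)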

>> No.9868818

>>9867980
He's saying it doesn't count because you can use any optimization method to train a neural network; there's no reason it has to be gradient descent.
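
To make that concrete: here is a tiny 2-2-1 network whose weights are trained with a (1+1) evolution strategy (perturb, keep if better) on XOR, with no gradients anywhere. Just a toy sketch; a real GA/ES with a population would be more robust.

import math, random

def forward(w, x):
    # 2-2-1 network with tanh hidden units; w is a flat list of 9 parameters
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h1 + w[7] * h2 + w[8]

xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def loss(w):
    return sum((forward(w, x) - t) ** 2 for x, t in xor)

best = [random.uniform(-1, 1) for _ in range(9)]
for _ in range(20000):
    trial = [wi + random.gauss(0, 0.1) for wi in best]
    if loss(trial) < loss(best):     # selection only, no gradient information used
        best = trial

print(loss(best))   # usually ends up near zero, though it can stall in a local optimum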