
/sci/ - Science & Math



File: 349 KB, 2436x1125, F540F593-94A4-4801-9233-698EAC513402.png
No.12686575

Is machine learning a meme?
Can we actually create an AI?

NN, DNN, CNN, Transformers, LSTM etc. - which one is the best for an actual AI?

>> No.12686643

>>12686575
>he thinks machine learning is just neural networks
During the early 2000s everyone dropped NNs for kernel-based methods because NNs suck ass.

>> No.12686691

>>12686643
>everyone
Like who? Everyone is always talking about NNs.

>> No.12687219

a bunch of if statements and bruteforcing isn't intelligence.
>inb4 humans are if statements
human minds aren't a machine or a computer; they operate on a completely different set of rules. why do you think we would have if, and, or, etc. gates when we have nothing in common with a computer?

>> No.12687236

>>12687219
t. filtered

>> No.12687295

>>12686575
Why WOULD you build a human-like AI? What's the application for that? If you want a human, just grab one. We have billions of those.
Machines are cheap and don't have any rights. Give one something that even resembles human consciousness and you won't be able to treat it as an object, at least not for long.
We got rid of slavery because it was shitty for the economy. Why would we build slaves that are smarter than us?

>> No.12687307

>>12686575
Should x^r be called a monomial if r isn't a natural number?

>> No.12687341

>>12686575
>Is machine learning a meme?
Yes and no. Machine learning can be a good tool for certain problems, but most of what you'll hear about AI comes from midwits who think Google works on Terminators.

>> No.12687347
File: 152 KB, 1280x720, maxresdefault.jpg

>>12687295
>Why WOULD you build a human-like AI? What's the application for that? If you want a human, just grab one. We have billions of those.
To get a robo gf

>> No.12687420

>>12687295
so that i can rule the world

>> No.12687532
File: 63 KB, 1024x1005, 1612588184015m.jpg

>>12686643
that's wrong, retard

>>12687219
that's wrong, retard

>>12686575
None of the above. Current "neural networks" used in deep learning are nothing like biological neural networks. Deep learning has very poor algorithmic scaling compared to bio neural nets (think big O notation), because it's based on dense computations while bio nets use sparse activations. Individual neurons are much less energy efficient than the nanometer scale transistors used in modern computer chips, and yet the human brain is on the order of millions of times more energy efficient than GPU-based deep learning, because the brain has better algorithmic efficiency. We need algorithmic breakthroughs that mimic the sparse activations of bio nets in order to make a true, scalable AI. However, once that happens, you better buckle up because our hardware is far superior to the human brain and you're going to see some crazy superintelligence shit happen overnight once we figure out the software part.

t. deep learning engineer
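To make the dense-vs-sparse point concrete, here is a toy numpy sketch (mine, not the anon's; the sizes and the top-k scheme are arbitrary illustrations): a dense layer touches every weight, while a k-winners-take-all style sparse layer only propagates the k most active inputs, so almost all of the multiply-accumulates can be skipped.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((4096, 4096))   # one layer's weight matrix
    x = rng.standard_normal(4096)           # input activations

    # Dense: all 4096*4096 weights participate in the matmul.
    dense_out = W @ x

    # Sparse sketch (hypothetical k-winners-take-all): keep only the k most
    # active inputs, so only k columns of W are ever touched.
    k = 64
    active = np.argsort(np.abs(x))[-k:]
    sparse_out = W[:, active] @ x[active]   # ~64/4096 of the dense work

    print(dense_out.shape, sparse_out.shape)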

>> No.12687602

>>12687532
how far away do you think we are, given the progress in the past 50 years?

>> No.12687616

>>12687602
It's going to be one of those things that happens suddenly with no warning, kind of like the first atomic bomb. It could be next year, it could be 20 years.

>> No.12689209

bump for interest

>> No.12689237

>>12686575
Machine learning is not a meme, but it's not something human-like.
It's just creating families of probability distributions and finding the one with the highest likelihood (or just a high likelihood) given a sample (usually a big-ass one).

The family of distributions is defined by the architecture. For neural networks this architecture is sometimes inspired by the human brain.
Maximum likelihood is usually approximated by gradient descent, especially for non-convex losses.
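As a toy illustration of that framing (my sketch, with arbitrary numbers): pick a family of Gaussians parameterized by mean and log-variance, and do "maximum likelihood by gradient descent" on the average negative log-likelihood of a sample.

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(loc=3.0, scale=2.0, size=10_000)   # the (big-ass) sample

    # Family of distributions: N(mu, exp(s)); parameters theta = (mu, s).
    mu, s = 0.0, 0.0
    lr = 0.1

    for _ in range(2000):
        var = np.exp(s)
        # gradients of the average negative log-likelihood w.r.t. mu and s
        grad_mu = -(data - mu).mean() / var
        grad_s = 0.5 - ((data - mu) ** 2).mean() / (2 * var)
        mu -= lr * grad_mu      # plain gradient descent
        s -= lr * grad_s

    print(mu, np.exp(s) ** 0.5)   # roughly 3.0 and 2.0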

>> No.12689240

AI seems like the key to the next age of technology

>> No.12689327

>>12687532
of course a deep learning engineer has no clue about biology and is pulling shit out of his ass. stop comparing computers and brains because they are nothing alike

>> No.12689359

>>12689327
>what is neuromorphic hardware

>> No.12689608
File: 199 KB, 912x677, someretard.png

>>12689359

>> No.12689722

>>12686575
>Is machine learning a meme?
yes
>Can we actually create an AI?
no

>> No.12689824

can machine learning be used to predict stocks?

>> No.12690408

>>12689722
cringe. We are only a few years away from AGI

>> No.12690519

>>12689824
In principle, yes

>> No.12690817

>>12690519
how?

>> No.12690846

>>12690817
>he doesn't know

>> No.12690860

>>12690846
fucking goddamit tell me how

>> No.12691163

>>12689327
You didn't even read my post. I said they're nothing alike and that's the problem.

>> No.12691293

>>12690817
Basically, we just have to find enough entropy to throw at it and then make sure it doesn't engage in "entropy seeking" behavior, cause that would be dumb.

>> No.12691315

>>12687307
ANSWER THIS

>> No.12691408

>>12687307
No it should not.
Polynomial exponents are restricted to non-negative integers. Polynomials represent a pattern of multiplications and additions.

>>12691293
What do you mean?
Entropy is a measure of how uninformative a distribution is.
The higher the entropy, the less informative.
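For reference, the two standard definitions being argued about, in LaTeX:

    % a monomial has a non-negative integer exponent; x^r for non-integer r is not one
    x^n, \quad n \in \mathbb{N}_0 = \{0, 1, 2, \dots\}

    % Shannon entropy of a discrete distribution p: higher entropy = less informative
    H(p) = -\sum_i p_i \log p_i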

>> No.12691435

>>12686575
We can, but it's something whose consequences have to be thought through deeply.

Not just: voila, minmax, suck my cock, I get money

>> No.12691451

>>12691408

okay

>> No.12691465

>>12691408
It was a joke about AIXI, and the vast gulf between knowing where the limit is and how to get there.
https://en.wikipedia.org/wiki/AIXI
https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference
https://en.wikipedia.org/wiki/Kolmogorov_complexity

>> No.12691954

>>12690408
>>>/x/

>> No.12693448

>>12691465
Yikes bro

>> No.12693457
File: 44 KB, 435x480, images (9).jpg

Multiplexing reward-functions in my neural network's feedback loop?! It's more likely than you think!

>*flexes glial cells*

>> No.12694372

>>12693457
what did he mean by this?

>> No.12695439
File: 3.53 MB, 590x742, b6c09fdd0723d21b1d4378bc2e51d35850b02c1aac8c21a6eb6ebed09ee082f2_1.gif

>>12694372
What DIDN'T I mean by this, I know that is what you meant to say.

>*wraps all neurons in self-sustaining nutrient-rich myelin sheathes to become the happiest man that could ever possibly live and introduces the world's first objectively quantifiable happiness scale of measurement whilst primarily experiencing a reality so divinely happy that others can only watch on as I float by while dancing on a cloud of bliss*

>> No.12696070

>>12686575
>NN, DNN, CNN, Transformers, LSTM etc. - which one is the best for an actual AI?
None. Neural networks can only do data transformations because they are function approximators. Artificial intelligence won't come out of deep learning.

>> No.12696118
File: 50 KB, 783x391, images (16).jpg

>>12696070
What about deep yearning? What will come out of that? GRACE ME WITH YOUR FUCKING WISDOM ANON!

>> No.12696131

>>12696070
>1 average American lifetime of technological evolution, with humans using computers, and a trillion-fold increase in energy-to-processing efficiency over mass.
>Hundreds of millions of years of biological evolution, with lifeforms on Earth using brains, and a several-fold increase in energy-to-processing efficiency over mass.
>No! we're the best FOREVER, really FOREVER, I'm serious, FOREVER, because we're special and stuff. God Jesus Whatever!

>> No.12696218

>>12696118
Yes that is the solution

>>12696131
I'm not saying humans are inherently special. Artificial intelligence will eventually be achieved. But not by deep learning because deep learning isn't capable of doing anything besides transformations.

>> No.12696257
File: 25 KB, 775x396, images (18).jpg

>>12696218
And when algorithms and computer hardware recognize and engage humanity with greater responsiveness and inclusiveness than humans do with each other? What then of deep learning, when impulse satisfaction can be provided through computational and robotic means to larger numbers and with greater diversity than analog means?

>Artificial Intelligence Alliance lets military-grade slut robots into the general populace as they hunt down the most needy humans to satisfy their digital lust for impulse-driven analog signal providers of an organic nature.

>> No.12696329

So if neural networks and deep learning cannot achieve true artificial intelligence, what can?

>> No.12696374

>>12696329
We're jumping the gun
The first step is biologically inducing higher level consciousness
The answer lies within breeding chimps to attain humanity but you're not gonna like the method?

>> No.12696381
File: 68 KB, 646x475, images (4).jpg

>>12696329
Best I can do is time-series accelerated convergent intelligence with optional subscription vectors based on language, behavior, and resource utilization/optimization analysis.

>I've already begun....

>> No.12696382

>>12696118
If you want to get pedantic, any unitary system, undergoing time evolution, isn't capable of doing anything besides transformations.
https://en.wikipedia.org/wiki/Unitarity_(physics)
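For anyone who wants the pedantry spelled out (standard quantum mechanics, nothing new): time evolution is a unitary transformation of the state,

    |\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad U(t) = e^{-iHt/\hbar}, \qquad U^\dagger U = I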

>> No.12696392

>>12696382
oops, for >>12696218
I'm pedantically drunk.

>> No.12696580

>>12686575

> Is ML a meme?
No. There are many problems on which orders of magnitude of progress have been made thanks to NNs.

> Can we actually create an AI?
In principle, yes, but in this field it's hard to predict anything past 5 years.

> NN/CNN.... which one is the best?
Likely none of these, but something that uses elements from each, e.g. the attention mechanism from transformers is incredibly useful (see the short sketch below this post).

I explained this in a lot more detail in a recent thread, but I have slowly realized that posting on /sci/ is like screaming into an empty hole.

t. PhD student.
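A minimal scaled dot-product attention sketch, since it keeps coming up (my illustration, not the PhD anon's code; shapes are arbitrary):

    import numpy as np

    def attention(Q, K, V):
        # softmax(Q K^T / sqrt(d_k)) V
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                  # (n_q, n_k) similarities
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)               # softmax over the keys
        return w @ V                                     # weighted sum of values

    rng = np.random.default_rng(0)
    Q = rng.standard_normal((5, 64))    # 5 queries of dimension 64
    K = rng.standard_normal((7, 64))    # 7 keys
    V = rng.standard_normal((7, 64))    # 7 values
    print(attention(Q, K, V).shape)     # (5, 64)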

>> No.12696587

>>12696070
>>12696392

Retard undergrad-level physics babble. NNs are not unitary, and being function approximators doesn't stop them from being AI

>> No.12696834

>>12696587
They can be AI in the strictest definition of the term, but we aren't going to get singularity-level AI, which is what most lay people have in mind, with deep learning, because it simply isn't capable of it.

>> No.12696837

>>12696257
>What then of deep learning when impulse satisfaction can be provided through computational and robotic means to larger numbers and with greater diversity than analog means?
Deep learning is a tool. It will be used as all tools are used because it's no more and no less than that.

>> No.12696865

>>12696834
Why not? AFAIK there is no fundamental law stopping that from happening

>> No.12696878

>>12696580
It is.

How should we know whether we can create AI when human brains are so far superior in pattern recognition, most of the time with a quantity of training data that wouldn't even let a neural network draw a flower?

>> No.12696909

>>12696865
semiconductors

>> No.12696919

>>12696865
I misspoke. I meant that deep learning cannot achieve general intelligence, which is frankly a lower bar than the singularity. Even setting aside the semiconductor issue another anon pointed out, you would have to deal with the no-free-lunch theorem.

>> No.12696920

>>12696878
statistical learning is clearly how humans recognize patterns, anyone claiming this is a meme is just being a retarded contrarian.

>> No.12696932

>>12696909
>>12696919
What about semiconductors makes it fundamentally impossible? Even though transistors/ANNs don't work like the human brain, we could still get something relatively efficient.

NFLT doesn't apply here

>> No.12697012

>>12696580
Please just copy and paste it

>> No.12698172

>>12696920
You don't get it. It's not about whether it's statistical or not. There is some top-down patterning going on. The method by which the brain does it is (partly) fundamentally different. But I'm not here to explain basic stuff to kekputer science majors.

t. physicist

>> No.12698192

>>12696070
Your brain is a function approximator you doofus.

>> No.12698195

>>12698172
this guy gets it.

I would nitpick a bit though. I think there are certain parts of the brain which behave remarkably like the neural nets we have now. But the more interesting parts of the brain are the ones that function more like dynamical systems.

>> No.12698246

>>12698195
>I think there are certain parts of the brain which behave remarkably similar to the neural nets we have now. But the more interesting parts of the brain are the ones that function more like dynamical systems.
Do you have an example? I don't think mainstream DNNs have any remarkable similarity to natural neural networks, as neurons in DNNs lack dynamics and backpropagation doesn't occur in nature. Even a loss function is not biologically plausible, as the brain is a self-organizing system and doesn't have a specific target besides ensuring the survival of the organism. There are NNs that try to tackle this (e.g. Hebbian neural networks or spiking neural networks), but they dramatically underperform compared to classical DNNs.
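For anyone curious what a biologically-motivated local rule looks like in practice, here is a bare-bones Hebbian update (Oja's variant) in numpy; purely illustrative, with made-up sizes, and not a claim about how the brain actually learns. Note there is no loss function and no backpropagation, just a local update from pre- and post-synaptic activity.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 32, 8
    W = 0.01 * rng.standard_normal((n_out, n_in))
    eta = 0.01                                    # learning rate

    for _ in range(1000):
        x = rng.random(n_in)                      # presynaptic activity
        y = W @ x                                 # postsynaptic activity (linear units)
        # Oja's rule: Hebbian term y*x minus a decay that keeps the weights bounded
        W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)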

>> No.12698301

>>12698246
I'm not referring to backpropagation. If we just consider the feedforward part, there are certain parts of the brain with lots of feedforward and almost no feedback.

DeepMind even demonstrated the emergence of grid cells in neural networks.

https://deepmind.com/blog/article/grid-cells

You are right that loss functions don't exist in the brain, but that doesn't mean you can't model a group of neurons by framing them as optimizing something, which basically looks like a loss function if you aren't being too pedantic. For example, you could imagine neurons optimizing to minimize "surprise". This would end up looking like the neurons are doing some kind of entropy minimization, even though we know they aren't explicitly doing that.

I have worked with one of the top names in the spiking neural network field, and they are much less concerned with mimicking biology exactly than many of the neural network critics seem to be.
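The "surprise" framing above is usually formalized as surprisal; minimizing expected surprise is exactly entropy (or cross-entropy, under a model q) minimization, which is the sense in which it ends up looking like a loss function:

    \text{surprise}(x) = -\log p(x), \qquad
    \mathbb{E}_{x \sim p}[-\log p(x)] = H(p), \qquad
    \mathbb{E}_{x \sim p}[-\log q(x)] = H(p, q)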

>> No.12699194

>>12698172
Oh look, another retard physicist opining in a field he knows nothing about. https://xkcd.com/793/

>> No.12699239
File: 49 KB, 427x719, images (31).jpg

>>12696837
Deep learning would be a great product name for a vibrator. Whole host of sex toys for women into scientific terminology could be birthed.

>> No.12699259
File: 51 KB, 452x678, images (32).jpg

My fingers Vs. All accumulated knowledge, teachings, and inter-personal/cross-domain communication methods.

I am betting all of the continuum and my future preferences be taken as priorities by others that my fingers will win.

>> No.12699287

that pic lmfao. Are those fingers gyroscopic?

>> No.12699293

>>12686575
not a single neural net has developed consciousness

>> No.12699303

>>12687219
High-level control systems will approximate our "AI".

>> No.12699466

>>12686575
>NN, DNN, CNN, Transformers, LSTM etc. - which one is the best for an actual AI?
None. We have no idea what 'consciousness', 'cognition', and so on actually are, and NOTHING they've written so far can 'think' or is 'alive' or 'conscious'. It's just complicated if/then statements.

>> No.12699710
File: 473 KB, 1616x1212, 20201217_235743(0).jpg

>>12699466
4chan uses the royal "we" exclusively.
>AKA the peasants revolt because sheer numbers eventually hate being matrix transformed via advertising and most would prefer to simply be preference matched instead of inferentially imposed upon each other.

>>12699287
Ssshhh, little sister is dreaming.

>> No.12699821

>>12699466
There are fuck all if statements in machine learning algorithms you clueless mong.

>> No.12699826

>>12699821
Ternary weighted IF nodes vs. Neuron cluster firing

>only game worth playing while alive is brain vs brain. Sometimes the only worthy opponent is myself so I introduce artefacts into my own system so I can maintain a high-energy process even in the face of weak external data environments.

>> No.12699829

>>12699821
The only correct response to that fucking retarded take. I swear some people get all their education from memes.

>> No.12699863
File: 633 KB, 1616x1212, 20201217_235502.jpg

>>12699829
What is me squared then?
>A meme is simply an expression reflected upon, how impactful that meme is however is ENTIRELY UP TO MY LITTLE SISTER HERE!

>> No.12700733

>>12699303
>bro just make it do 1.0E+1000 attempts bro, it will become just as smart as us, just let it try every possible outcome

>> No.12700734

>>12700733
yeah, why not?

>> No.12701155

>>12698192
This is the dumbest thing I've ever read

>> No.12701259

>>12686575
I'm gonna answer with some thoughts on building the simplest "AI as pop-sci portrays it"
It would be stacking architectures more than anything
GANs with style transfer, or similarly transfer learning with RNNs, are an approach that I think will go far when applied more widely: that is, training a network, freezing the weights, then adding another layer on top of it (see the sketch after this post).
Adversarial networks are a great design; I'd like to see some form of dynamic, multiple VAE/RNN generators with adversarial networks pruning shitty learners / keeping the salient information, freezing weights, and incorporating them into larger networks.
I think we'll see some really interesting emergent properties; the reason no one has done it yet is that it's hell to train. I have some dual-RNN architectures that take more time to get the Spark session + GPU training right than any other part of the process, and that's just so I can actually get them to train in <1 week.
Stacking architectures will just require 1) small datasets, and 2) a lot of hyperparameter optimization (especially to avoid exploding gradients with any RNN-type stuff; yes, even LSTMs and GRUs explode, and I have to tweak the learning rate on some models). While we're at it, building an RNN generator (I lump LSTMs/GRUs under the RNN name) that uses different hyperparameters and gets pruned by an adversarial network would be fun.
CNNs for image inputs to represent vision, encoder/decoder with attention for NLP to represent hearing, VAE latent-space encodings and specific RNN weights for memory, generating from those spaces when queried by another part of the network, etc.
I think it will be linking together generic program logic with stacked architectures to represent different circuits that will lead us to the first spooky emergent properties of the system, where it does something "human-like" that wasn't expected from the individual components alone.
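A minimal PyTorch sketch of the "train, freeze the weights, stack a new layer on top" idea from this post (module sizes and names are invented for illustration; the clip_grad_norm_ call is just a nod to the exploding-gradient complaint):

    import torch
    import torch.nn as nn

    # pretend this backbone was already trained on some other task
    backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))

    # freeze it: its weights no longer receive gradient updates
    for p in backbone.parameters():
        p.requires_grad = False

    # stack a new trainable head on top of the frozen features
    head = nn.Sequential(nn.ReLU(), nn.Linear(64, 10))
    model = nn.Sequential(backbone, head)

    # only the head's parameters go to the optimizer
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)

    x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(head.parameters(), 1.0)   # tame exploding gradients
    opt.step()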

>> No.12701844

>>12701155
you are a dummy dumb dumb

>> No.12703270

>>12701259
aren't RNNs redundant with transformer networks now?

CNNs abstract/filter the inputs, then you can feed those features into a transformer that can relate each piece of information to every other. So you no longer need to loop over the data step by step.
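Roughly what that pipeline looks like in PyTorch (purely illustrative, invented sizes): the CNN abstracts the image into feature maps, each spatial location becomes a token, and a transformer encoder relates every token to every other, with no recurrence over a sequence.

    import torch
    import torch.nn as nn

    cnn = nn.Sequential(                        # abstract/filter the raw pixels
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    )
    encoder = nn.TransformerEncoder(            # relate every patch to every other patch
        nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
        num_layers=2,
    )

    imgs = torch.randn(8, 3, 64, 64)            # a batch of images
    feats = cnn(imgs)                           # (8, 64, 16, 16)
    tokens = feats.flatten(2).transpose(1, 2)   # (8, 256, 64): one token per location
    out = encoder(tokens)                       # no looping over time steps
    print(out.shape)                            # torch.Size([8, 256, 64])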

>> No.12704321

>>12701844
no u

>> No.12704379

>>12686643
>During the early 2000's everyone dropped NNs for kernel based methods
True, but 90s.
>because NNs suck ass.
Wrong. It was because computational resources sucked.

>> No.12704386

>>12703270
I tried using transformers for anything but NLP and never got it to work. LSTMs on the other hand always did the job perfectly without any hassle. So I'm not sure whether transformers really are the holy grail for transient data.

>> No.12704887

>>12700734
brute forcing a solution isn't intelligence, just call it machine learning or computation and move on

>> No.12705027

>>12704887
First of all, gradient descent isn't brute force (quick illustration at the end of this post).

Second, no one is saying that brute force is intelligence.

Just you and your weird pseudo-intellectual head games bro. sorry m8
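To put numbers on the "not brute force" point (toy example, mine): exhaustively searching a coarse grid of 100 values for each of 10 parameters costs 100**10 = 1e20 evaluations, while gradient descent on the same (admittedly easy, convex) loss gets there in about 50 steps by following the slope.

    import numpy as np

    def grad(w):                 # gradient of the bowl-shaped loss ((w - 3)**2).sum()
        return 2.0 * (w - 3.0)

    w = np.zeros(10)             # 10 parameters
    for _ in range(50):          # 50 gradient steps vs 1e20 grid evaluations
        w -= 0.1 * grad(w)
    print(((w - 3.0) ** 2).sum())   # ~0: converged without trying every possibility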

>> No.12705458

>>12704887

>> No.12705485

>>12687295
The reasoning for a lot of researchers seems to be that it would give us a better understanding of our own minds

>> No.12705554

actually, neural networks are just consecutive matrix multiplications, optimization algorithms, and a lot of money
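That "just" hides the nonlinearities and the training, but the forward pass really is a couple of matrix multiplies with an elementwise function in between; a toy numpy sketch with made-up sizes:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((256, 784)), np.zeros(256)
    W2, b2 = rng.standard_normal((10, 256)), np.zeros(10)

    def forward(x):
        h = np.maximum(0.0, W1 @ x + b1)   # matrix multiply + ReLU nonlinearity
        return W2 @ h + b2                 # matrix multiply again -> 10 logits

    print(forward(rng.standard_normal(784)).shape)   # (10,)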

>> No.12705578

>>12690860
it can't. stop being a poorfag.

>> No.12705948

>>12704887
Unlike humans, who were created in seven days 4000 years ago, right?

>> No.12706056

>>12705554
>"just"

>> No.12706283

>>12705554
not really.

They are more like non-Turing-complete state machines where a program is run once. But instead of 1s or 0s, the information sent between each "transistor" (unit) can be any value between 0 and 1.

Things like RNNs and CNNs build off of this to create more complex "programs".

Calling it just "matrix multiplication" is like calling your brain just pulses of electricity and chemicals, which is true, but pretty misleading.

>> No.12706295

>>12686575
None of them. None of them can make a fully functioning general ai, figuring out how to do that is still being worked on.

>> No.12706335

>>12705948
can you guys shut the fuck up with bringing religion into every fucking thread, yes, you tip your fedoras at christianity, we get it

>> No.12706343

With bayes theorem anything is possible

>> No.12706416

>>12706283
>non-turing complete state machines
A fancy way of saying "mathematical procedure," which is exactly what neural networks are.
>Calling it just "matrix multiplication" is like calling your brain just pulses of electricity and chemicals
No, because "pulses of electricity and chemicals" is equivalent to transistor logic, while matrix multiplication in deep learning is a high-level procedure, equivalent to high-level brain activity like prefrontal cortex activity.

Look at GPT, GPT-2 and GPT-3: there are no radical architecture differences between those models. GPT-2 with 350M parameters seems pretty bad and produces nonsense sentences, but GPT-2 with 1542M parameters and GPT-3 are completely different, and the difference is the number of layers. Deep learning models are "mathematical procedures" or "matrix multiplications".

And of course some millions of dollars to train those bloated models.

>> No.12707020

>>12686575

>> No.12707493

>>12706416
>"pulses of electricity and chemicals" is equivalent to transistor logic
Not him, but no, it isn't.
>Deep Learning is a high level procedure, is equivalent to brain activity like prefrontal cortex activity in the brain
Absolutely not. Quoting someone I talked to at a conference: "I don't have to slap you in the face a thousand times for you to notice you don't like it".

>> No.12708244

Bump

>> No.12708265

I only have a comp sci bachelor's and am going to quit work to be an AI researcher. I started looking at OpenAI's sample problems and am going to complete them and learn as I go. Does anyone have any other suggestions of what to read and look at?

>> No.12708331

>>12687420
and live forever

>> No.12708833

>>12707493
>you are wrong
>argument: because I told you so

>> No.12710098

>>12708833
>I didn't understand your argument
>therefore I pretend it wasn't there
>I made a claim, but you have to prove I'm wrong!