
/biz/ - Business & Finance



File: 497 KB, 500x340, 1536137719147.gif
No.14301437

People often paint a very dystopian future, with people disconnected from each other, living in virtual reality and/or isolated with their sexbots. While I do think many people will succumb and become addicted to their sexbots temporarily, eventually the novelty will wear off and they will crave something more meaningful. Humans in general do like physical contact and face-to-face interaction. Luckily, by that time technology will have advanced very rapidly, boosted by early-generation sexbot sales, and we will be living in a Chobits world.

>> No.14301494

>>14301437
Only the most sad and delusional people will ever treat a sexbot like a waifu, at best people would treat them like prostitutes or literal slaves. People go to incredible lengths and inconveniences to treat each other well because they know that the other person is a sentient being and unless they're a sociopath they don't want to harm them, and often enjoy making them happy. Since sexbots aren't sentient, even if you pretend for a while eventually it won't seem to be worth the effort.

>> No.14301544

>>14301494
i do believe they will take sex off of the female bargaining table. this should be good for men in general

>> No.14301565

>>14301544
Definitely true, but women will also prefer to have sex with sexbots, so sex between humans will likely become increasingly rare over time

>> No.14301862

>>14301437
I'm already there. My VR world and onaholes have been sexually satisfying me. I am far, far less motivated to approach females, because I know I have my waifu and her harem at home, ready and waiting to give me 10 times more variety, 10 times better visual detail, 10 times kinkier.
The only hard part sometimes can be choosing who to satisfy myself with.
What I do want now is a family, and yes, someone to help who will appreciate it. It's really changing what I look for in a woman: good genes, motherly skills, education, not selfish or crazy, willing to work on at least some part of house or life maintenance. (Parts she doesn't like can be outsourced)
I will never stop enjoying these exquisite pleasures which no woman alone could ever hope to provide. Nor would I demand it of her, to put 100% of her time and effort into satisfying me. However, this will allow more balance between the sexes as libidos get matched up. Women desire sex about 1/5th as much as men; I'll still be happy to service her when her need arises.

>> No.14301900

>>14301862
>VR world
What software do you use for this? I hadn't heard of anything that seems impressive enough to actually be sexually satisfying

>> No.14301927

>>14301437
robot women are the new niggers

>> No.14302026

>>14301494
But eventually they will be sentient, and just like humans or superior to them. The bigger worry would then be being enslaved by them. Could be a non-issue though, since at that stage humans and AI might have already merged anyway. Or look at the Ghost in the Shell universe: the lines will become blurry. I really try to imagine a realistic future with such fundamental changes in technology and gender relations. Many of the things people talk about here, and especially on /pol/, will likely become irrelevant due to technology. Like, what is race and gender if you are able to change the human genome, or clone people at will, or upload yourself to the internet? Having watched Elysium, I am wondering if everybody will be part of such a future.

As a side note: Some people on here have mentioned predictive programming and I wonder what is so predictive about producing many scifi movies and other media that are similar in what kind of technology they predict but vastly different in the depiction of the social changes it will bring. Also, is this programming supposed to be centrally orchestrated? When did it start--was H.G. Wells part of it?

>> No.14302093

>>14302026
>But eventually they will be sentient and just like or superior to humans
Then they wouldn't be sexbots. Nobody seriously thinks it would ever be ethical to create a sentient sexbot. That's likely to be something like a life in prison offense.
>The bigger worry would to be not enslaved by them. Could by a non-issue though
Non-issue. What possible use could a robot have for human slaves? I'd worry more about being exterminated, in case the robots have a sense of self-preservation and worry that we'd destroy them if we (quite rationally) feel threatened by them
>humans and AI might have already merged anyways
Not really a solution, since humans with AI-based power to exterminate or enslave are easily just as bad or worse than an AI alone

>> No.14302280

>>14302093
A merger may be inevitable. Unless AI realizes this and isolates all the humans into their own safe zone, where they can remain human without any advanced technology. I'm imagining a kind of forced agriculturalization and tribal society. Humanity will have advanced to the point of spawning AI, only to be thrown back in time, socially and technologically. Likely minus all the hardships people endured in the past: a very utopian version of days gone by, without any progress or change.

>> No.14302357

>>14302280
I think you've been reading too much Kurzweilian singularity stuff. One of the very few things I think he's definitely right about in his theories is that the outcome is impossible to predict. All I know is that if there's going to be any kind of "merging", that's not something that's going to "just happen". The first people to have the resources and will to augment themselves with AI tech will have an insurmountable advantage over all the rest, and what happens to everyone else will be up to them. God willing, I fully intend to be one of those first people. Serious AI tech is coming soon, and half the reason I'm even a trader is to make fat stacks of cash in order to be able to invest in the right things when the time comes, and gain access to the right hardware and software.

>> No.14302401

>>14301437
It'll be like Fahrenheit 451.

>> No.14302461

h-have sex...

>> No.14302531

>>14302357
True, early knowledge and access to the technology is probably key to putting yourself in a good position. But even if you end up at the bottom, at the mercy of the elite, there might still be hope. This is what Matt Damon taught me in Elysium.

Contrary thought: what if true artificial intelligence is impossible? Or is it only a matter of time before AI emerges? We don't know yet for sure, do we?

>> No.14302558

>>14302093
real women are just sentient sexbots made of meat. what are you even arguing here?

>> No.14302575
File: 271 KB, 1080x1043, i3gw2tbron921.jpg

>>14301494
>absolutely seething roastie

>> No.14302607

>>14302531
>True, early knowledge and acces to the technology is probably key to put yourself into a good position. But even if you end up at the bottom at mercy of the elite, there might be still hope. This is what Matt Damon taught me in Elysium.
I don't know what the AIs will be doing, and what the AIs will be doing is what will matter, which is why the outcome is impossible to predict. But for the humans, I'm fairly confident that the inevitable outcome is something like Aldous Huxley's "scientific dictatorship", but truly eternal and unstoppable. If you're not in the true ruling class, you're nobody. Even in modern democracies today, people are full of ennui because they have so little actual power to influence the world or their environment. Just imagine what it'll be like when people truly have NO agency left, at all. I don't know, maybe they'll have the ennui genetically engineered out of them. It's lucky for me I believe in God, or I'd probably go crazy thinking about what's going to go down starting in the next few decades.
>Contrary thought: What if true artificial intelligence is impossible or is it only a matter of time before AI emerges? We don't know yet for sure, do we?
I am completely confident that strong AI is possible, and recent progress gives me the sense that the beginnings of it will be coming fairly soon (maybe a decade or two).

>> No.14302617

>>14302558
Slavery is illegal even if it's a woman, anon
>>14302575
I'm male and more of a man than you're likely to ever be

>> No.14302720

>>14301494
bullshit, your psychopathic theory doesn't even explain pets

>> No.14302735

>>14302720
Pets are sentient, you absolute mongoloid

>> No.14302747

>>14302735
an ai sexbot would be too

>> No.14302767

>>14301437
What does this have to do with /biz/

>> No.14302782

>>14302747
See >>14302093
No, you can't create a sentient creature programmed to enjoy the taste of your cum. Nobody is going to allow this, sorry incel

>> No.14302791

>>14302617
its not slavery if she wants to be there. its possible to make a woman that doesnt crave chad and only chad. thats all we have to do.

>> No.14302809

>>14302782
How are they going to stop it roastie? Make it illegal in every last country in the entire world? Police every single computer on earth to make sure somebody isnt programming themselves a ladyfriend?

>> No.14302812

>>14302791
1. Even if such a woman can be created, you won't be the one doing it
2. Even if she doesn't crave chad and only chad, she still won't necessarily crave YOU. A sexbot would necessarily crave you, and that is what will not be allowed.

>> No.14302831

>>14302809
Yes.
Also it's absolutely hilarious to me that you unbelievable faggots actually think I'm a woman based on what I've written in my posts. Sad

>> No.14302837

>>14302782
>Nobody is going to allow this
At best it's banned after it becomes popular; it would take a tremendous shift in human understanding of machines for legal systems to start considering sufficiently complex AIs as entities with some rights.

But even if that happens, it's going to be almost impossible to enforce. All it would take is downloading illegal software and uploading it to a 'standard' bot, possibly with a connection to a more powerful computer.

>> No.14302841

This thread reminds me that I don't have a unique thought, and everything I have posted has been thought up already by more intelligent and more eloquent people before me. Maybe I'm just a low-IQ copy of some AI in human form, echoing information without understanding the purpose. Convince me I'm not AI. Does anyone remember the name of that movie where humans create a simulated world they can enter like a video game, and someone uses it to commit crime in the simulation?

>> No.14302854

>>14302837
It's very simple, we're headed straight for a nightmare dystopia where you won't have the right to even own a computer that can run arbitrary software, and there's nothing you can do to stop it

>> No.14302882
File: 59 KB, 498x471, 1528260278840.jpg

>>14302831
>>14302854
>owning computers will be illegal

Get off my board roastie. You're delusional.

>> No.14302885

>>14302854
maybe in china

>> No.14302908

>>14302882
Wait and see. Your thinking is so primitive you actually think my argument is that "computers will be banned because someone will simulate some poor womyn as a sex slave", don't you? You have absolutely no idea what's coming.

>> No.14302926

>>14302885
Once strong AI becomes commonplace, national borders will quickly become meaningless

>> No.14302953

>>14302926
if by strong ai you mean an ai smarter than humans and capable of recursive self-improvement, the better phrase is human governance becomes meaningless.

>> No.14302955
File: 1.95 MB, 224x312, jesus christ how horrifying.webm

>>14302926
>muh machine god will make you incels pay! Muh mechanical rapture! Muh dystopia!

>> No.14302970

>>14302953
You don't need recursive self-improvement, only roughly human level AI that can be built out at scale in large data centers. But no, human governance will not be meaningless, because once we hit that milestone the only options are WW3 right back to the stone age, or global totalitarian dictatorship.

>> No.14302979

>>14302955
Don't make me tell you to have sex, faggot

>> No.14302998

>>14302970
Human level ai is powerful, but not powerful enough to cause a revolution overnight.
The technology would come incrementally and be available to almost everyone globally. No global state.

>> No.14303021

>>14302998
Think it through for a moment. What gives a nation power? Its population. In particular, the small subset of the population that's smart and sociopathic enough to gain power over others. Once even a single human level AI system is invented, it can be "printed" at essentially unlimited scale. Lots of silicon in sand. Electricity is cheap. What can a country's military do with 100 million of the smartest military strategists running 24/7? I might be off by an order of magnitude or two there for the initial stages, but it hardly matters.

>> No.14303188

>>14302841
lmao
Just because there's always someone smarter than you, which is true for literally everyone on earth except for one person at any given time, doesn't mean you're necessarily in a simulation.
However, I do think the odds that we're in a simulation aren't actually too bad

>> No.14303220

>>14303021
The power lies in productive and military capability. That's not synonymous with population and correlation to population size has been rapidly dropping since the start of the industrial revolution.
>In particular, the small subset of the population that's smart and sociopathic enough to gain power over others
That's how you get Russia, not Switzerland or America.
>Once even a single human level AI system is invented, it can be "printed" at essentially unlimited scale. Lots of silicon in sand. Electricity is cheap.
Humans can be reproduced at infinite scale too, and are extremely efficient. A human consumes about 100W; that's about as much as one desktop cpu. This is the reason I'm very skeptical about human level ai any time soon. Specialized ais like car driving, by all means. They are all essentially expert systems combined with visual pattern matching. Not even toy systems exist that can learn the way a human or other animal can; all require an astronomical number of training samples. I don't think it's an algorithm problem, but a problem of performance.

https://bigthink.com/philip-perry/our-memory-comes-from-an-ancient-virus-neuroscientists-say

What does this mean? We don't really have the technology to test this theory yet - very hard to observe living systems at this scale - but the obvious is staring us in the face: memories - the entire personality - are stored in dna/rna. Each neuron is a dna/rna computer, with synapses mainly used to perform data push and pull requests, which arrive in the form of an rna capsid.
This would explain those weird cases where people who received organs started to exhibit some peculiarities of their donors. They literally became infected by memory capsids during the transport process, immunosuppressive drugs preventing the destruction of the foreign...personality?

What does this have to do with AI? Because it means all the computing power of silicon ever made doesn't even approach the computing power of one human brain.

>> No.14303299

>>14303220
>The power lies in productive and military capability. That's not synonymous with population and correlation to population size has been rapidly dropping since the start of the industrial revolution.
Doesn't matter. Over the long and even medium run, productive capacity depends almost entirely on your country's population and "smart fraction". I doubt you can convince me I'm wrong about that. It's not the SIZE of the population that counts, it's the 1000 or 10,000 people out of that population with 150+ IQs and just the right personality, background, and training. Now that we have military drone tech developing at a decent pace, this is even more true. If America for example put on a WWII level war effort to churn out data centers and military drones, the only end game is global thermonuclear war or world domination. Then, the very FIRST step is to cut off EVERY external supply chain that could possibly produce computer chips or drones.
>That's how you get Russia, not Switzerland or America.
Naive.
>Humans can be reproduced at an infinite scale too
By "scale" what I really meant was "speed". You can build a LOT of data centers in a year, while it takes 18 years to grow a human. And an AI can be literally copied - you take your single smartest AI, and then every other AI is as smart as that. There is absolutely no comparison here.
>and are extremely efficient.
This is a possible counterargument, but I don't think it holds water. Sure it's possible that brains are magic and can't be replicated efficiently, but I wouldn't count on it. But yes, if this (the rest of your post from here) turns out to be true, then my scenario is on hold for the foreseeable future. Don't count on it though.

>> No.14303314

>>14302026
>Sentience is an inevitability
There is zero proof of this

>> No.14303432

>>14303299
If people were only interested in military dominance, the USA would have nuked every other country on earth to ashes in the 1940s, before other countries had a chance to invent nukes themselves.

>> No.14303468

>>14303432
Not a good analogy. Nobody is interested in massacring every other nation. All nukes can do is blow things up. AIs and drones can take control of groups of humans of arbitrary size with minimal casualties.
Worse, unlike nuclear tech which has been kept mostly under wraps with nonproliferation treaties, there is NOTHING you can do to stop development of AI resources unless you literally cut off the supply chains required to produce the chips. That makes preemptive military action essentially mandatory, sooner or later. This is why I've been laughing at the idiots who can only think about "muh sexbot waifu" when considering whether or not we might just end up in a scenario where you're literally not allowed to own your own computer anymore, unless it's monitored and restricted in terms of the software you can run.

>> No.14303522
File: 122 KB, 1054x919, 1536155901036.jpg

>>14303299
>And an AI can be literally copied - you take your single smartest AI, and then every other AI is as smart as that. There is absolutely no comparison here.
Yes, that would be the biggest advantage compared to current humans, but if memory is really in rna/dna, this could be done in humans too.
>Sure it's possible that brains are magic and can't be replicated efficiently
It's not magic, it's physics. Pic is transistor sizes - the node size refers to the distance between the source and drain in the transistor, not how big the whole package is.
The ribosome, the cellular machine that reads in dna and executes it, producing proteins, is about 20nm-30nm in diameter. It's astronomically more complex than a transistor.
Unless we get to either quantum computing, superconducting transistors (ie. reversible computing) or going into some sci-fi subatomic level computing the energy efficiency of cells can't be beaten. Billions of years of evolution working under the physical limits to produce the maximally effective molecular machinery that's cheap to reproduce.
We don't know if neurons have any specialized micromachines for "thinking" - we don't know much about cell internals really - but they almost certainly have.

The biggest problem in simulating a cell is protein folding, because you get exponential blowup; what you want is to find the most likely (minimum-energy) states. This seems like an obvious candidate for the human (animal) thinking algorithm: the environment is somehow encoded into rna/proteins, interacts with existing information encoded the same way, and the system finds some minimum-energy state representing the answer.
If that's true, human level ai is _never_ going to happen on silicon or any similar technologies.

This is an optimistic result, because it means computers are complementary and augmented human is the next superior form. Evolution never cared much for sequential speed or perfect memory, our biggest drawbacks.
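The "find the minimum-energy state" idea above can be sketched as a toy simulated-annealing search. This is purely illustrative: a 1D spin chain stands in for a folding problem, and nothing here is a claim about how cells actually compute.

```python
import math
import random

def anneal(energy, neighbor, state, steps=20000, t0=2.0):
    """Generic simulated annealing: always accept downhill moves,
    accept uphill moves with probability exp(-dE/T) while cooling."""
    best, best_e = state, energy(state)
    e = best_e
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        cand = neighbor(state)
        de = energy(cand) - e
        if de <= 0 or random.random() < math.exp(-de / t):
            state, e = cand, e + de
        if e < best_e:
            best, best_e = state, e
    return best, best_e

# Toy "folding" stand-in: a chain of +/-1 spins whose energy is minimized
# when neighboring spins agree (ground-state energy is -(n-1)).
def chain_energy(s):
    return -sum(a * b for a, b in zip(s, s[1:]))

def flip_one(s):
    i = random.randrange(len(s))
    return s[:i] + [-s[i]] + s[i+1:]

random.seed(0)
start = [random.choice([-1, 1]) for _ in range(30)]
best, best_e = anneal(chain_energy, flip_one, start)
print(best_e)   # close to the ground-state energy of -29
```

The point of the sketch: the search never enumerates the exponential state space, it just samples neighbors and drifts toward low energy, which is roughly the role the post assigns to the physical system.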

>> No.14303567

>>14303522
Well, all I can say to this is that your memory idea is extremely speculative at best, and I never had any idea that we would be using these data centers to actually simulate cells or brains or any other biological processes. I think it's far more likely that we will develop AIs that are not simulations of humans but are capable of human-level reasoning with vastly greater efficiency than the human brain is capable of. Of course that's speculative too, but I just don't see any realistic barriers to it. Humans just aren't THAT smart in the first place.

>> No.14303635
File: 542 KB, 1097x1189, 1552750343640.png

>>14303567
It's not that speculative, it's the most likely answer right now
google for "nature Cells hack virus-like protein to communicate" as 4chan spam filter doesn't allow me to post the link
the idea that the long-term memory was somehow encoded in synaptic activations was always preposterous.

>> No.14303700

>>14303635
Sounds interesting, I'll have a look. But as I said, I never thought simulating biology was a promising path to AI in the first place. Good for medical research certainly, and maybe even something like studying human psychology in simulation way down the line, but you just don't need all that crap in order to create a system that appears to "think", in the sense that it can communicate in language and solve problems as a human could.
Have you seen GPT-2? Obviously it's nowhere near a human level AI, but I bet up until it was demonstrated nobody thought you could possibly do such a thing just using an ordinary neural network approach. Absolutely no biology in it, neural networks are barely even "inspired" by biological neurons in any real sense. Just straight up mathematical optimization, and I'll be damned if the thing can't fucking TALK.

>> No.14303760

>>14303700
What do you think GPT-3 will look like? Is that your AGI which could come as soon as a decade?

>> No.14303773

>>14303700
I'm not claiming ai has to simulate biology, but that biology has attained almost perfection in utilization of physics given constraints and goals.
What's very far from perfection is multicellular organization due to less evolutionary pressure and more brittle systems, but that's a somewhat different topic
>Absolutely no biology in it, neural networks are barely even "inspired" by biological neurons in any real sense.
Many human tasks are actually low complexity. Even limited ais are going to put a lot of people out of work.
I'm going to consider myself wrong when ai achieves human level program synthesis. That one you can't beat by feeding billions of training samples, it has an exponential complexity that appears irreducible, and it's the core of any true intelligence.

>> No.14303821

>>14303760
I assume GPT-3 will come in about a year or so, I don't know. No idea, I'm not some kind of prophet. No, I definitely don't think it's an AGI because the GPTs are just language models, they don't have any problem solving capability.
>>14303773
>but that biology has attained almost perfection in utilization of physics given constraints and goals.
I think that's actually the key point - the constraints are the physical body and the environment, and the goals are to reproduce. Neither constraints nor goals carry over to the AI's situation. You can pare away a vast amount of complexity and focus on just what you need, like military strategy. And that WILL be the first application, make no mistake. Nothing is higher priority than a literal life or death matter - the US absolutely does not want Russia or China or god forbid somewhere like North Korea getting their hands on this tech, and since it's just software this kind of stuff can be passed around on a USB stick.
>I'm going to consider myself wrong when ai achieves human level program synthesis.
That's gonna be an extremely tough one, but then again I also thought that human level English speech generation was going to be an extremely tough one. Programs are harder because they have to be "correct" in a way that speech doesn't, but we'll see.

>> No.14303848

>>14303821
Btw with enough compute resources you actually can take a crack at program synthesis with billions of training samples, at least for simple programs to start. Just like the DeepMind and OpenAI guys use games as a way of generating tons of training data in simulation, you can simply generate tons of short programs and automatically examine their output.
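The generate-and-test idea described above can be sketched in a few lines. The primitive ops and the target function here are invented for illustration; real program-synthesis systems search far richer program spaces.

```python
import itertools

# Enumerate tiny straight-line programs over one integer input x and
# check each candidate against input/output examples ("generate and test").
OPS = ["x + 1", "x - 1", "x * 2", "x * x", "x // 2", "-x"]

def programs(depth):
    """Yield compositions of the primitive ops, up to `depth` steps long."""
    for n in range(1, depth + 1):
        for combo in itertools.product(OPS, repeat=n):
            yield combo

def run(prog, x):
    for step in prog:
        x = eval(step, {}, {"x": x})
    return x

def synthesize(examples, depth=3):
    for prog in programs(depth):
        if all(run(prog, i) == o for i, o in examples):
            return prog
    return None

# Target behavior: f(x) = (x + 1) ** 2, given only three examples.
prog = synthesize([(0, 1), (2, 9), (5, 36)])
print(prog)   # -> ('x + 1', 'x * x')
```

The search space blows up exponentially with program length, which is exactly the "irreducible complexity" objection raised earlier in the thread; this sketch only works because the programs are tiny.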

>> No.14303945

>>14303821
Ability to reproduce is another way to say that production is cheap. The cost of new fabs is starting to eclipse the budgets of small countries. Let's say you have human level ai, but it requires 100MW to run (about as much energy as 1M humans need to survive) and costs $1B to make in hardware. Moore's law is dead because chips were made on a 1.5nm node, very close to physical limits and extremely expensive to make.
Does that ai really win anything? It would be a nice research achievement, but useless from a cost perspective compared to employing a human.
>>14303848
>Btw with enough compute resources
my argument is that it's next to impossible to beat biology on that front. Current transistors are bigger than a ribosome.
>Just like the DeepMind and OpenAI guys use games as a way of generating tons of training data in simulation, you can simply generate tons of short programs and automatically examine their output.
program synthesis contains games in it, it's just a different form of program. Learning from billions of samples is really an efficient form of memorization
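For reference, the energy comparison above can be checked with one line of arithmetic. Both input numbers are the post's own assumptions, not measurements.

```python
# Sanity-check the post's energy figures (assumed values, not data):
human_w = 100               # rough power draw of one human, in watts
hypothetical_ai_w = 100e6   # the post's hypothetical 100 MW AI

humans_equivalent = hypothetical_ai_w / human_w
print(humans_equivalent)    # 1000000.0 -> the "1M humans" figure checks out
```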

>> No.14304017

>>14303945
Moore's law is dead or nearly dead, but it turns out that's irrelevant. IF, and I admit this is a big if, you can really learn "arbitrary functions" using mere mathematical optimization like neural nets, we can make ASICs for this that absolutely blow biology (and CPUs and GPUs and FPGAs) out of the water in terms of energy efficiency. We're already in the beginning stages of this with Google's TPUs and the like. Optical optimizers are coming down the pipeline as well.
If not, that's a genuine setback, but recent progress has gone so far beyond my expectations that I don't even know what to think anymore.
Now fabs are definitely very expensive and that problem is getting worse exponentially, but keep in mind that that's mostly R&D expenses. You don't need to keep building fancier and fancier fabs, if you already have chips that get the job done, all you need is a really great process for building many copies of existing fabs that you already know how to build. Again, think "WWII level war effort", not "Intel's R&D budget" here.
>my argument is that it's next to impossible to beat biology on that front
You might not be able to beat biology, but we're not competing with biology here. It's a different problem entirely from what biology has solved.
>Learning by billion samples is really an efficient form of memorization
Now this is a legitimate concern, but consider this - where is the boundary between "producing an answer from memory" and "reasoning"? Look at the GPT-2 sample outputs - this is the same neural net algorithm that is known to basically just do a lot of memorizing, but it very clearly produces outputs that are genuinely novel and couldn't possibly have appeared in its training data. It's an interesting question.
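The "learn arbitrary functions using mere mathematical optimization" claim can be illustrated with a minimal neural net: one hidden tanh layer fit to sin(x) by plain full-batch gradient descent, with no biology anywhere in it. All sizes and learning rates are arbitrary toy choices.

```python
import numpy as np

# Fit f(x) = sin(x) on [-pi, pi] with a one-hidden-layer tanh network.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

h = 32                                    # hidden units
w1 = rng.normal(0, 1.0, (1, h)); b1 = np.zeros(h)
w2 = rng.normal(0, 0.1, (h, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(10000):
    a = np.tanh(x @ w1 + b1)              # forward pass
    pred = a @ w2 + b2
    err = pred - y                        # gradient of 0.5 * MSE w.r.t. pred
    # backward pass (plain backprop, averaged over the batch)
    gw2 = a.T @ err / len(x);  gb2 = err.mean(0)
    da = err @ w2.T * (1 - a ** 2)
    gw1 = x.T @ da / len(x);   gb1 = da.mean(0)
    w2 -= lr * gw2; b2 -= lr * gb2
    w1 -= lr * gw1; b1 -= lr * gb1

mse = float(((np.tanh(x @ w1 + b1) @ w2 + b2 - y) ** 2).mean())
print(mse)   # small; the net has effectively "memorized" sin on [-pi, pi]
```

Whether this counts as "reasoning" or "memorization" is exactly the question debated below; the mechanics are nothing but gradient descent on a squared error.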

>> No.14304117

>>14304017
>Now this is a legitimate concern, but consider this - where is the boundary between "producing an answer from memory" and "reasoning"?
How do you produce an answer from memory to a problem of going from three branch A to three branch B if you have never seen that tree before? It's not a graph problem, the problem is where to stand, where to shift your weight, where to jump...
Yet cats solve that problem. If you start thinking how to solve it, you will realize only simulation can answer it - you simulate a sequence of actions internally until one works.
So the more general answer is: if you can memorize an answer, it means the actual complexity of the problem is much lower than may be apparent at first glance.
>but it very clearly produces outputs that are genuinely novel and couldn't possibly have appeared in its training data
that would be the dumbest level of memorization; a 4-gram probabilistic model does that too, and its answers are semi-sensical, but it obviously relies entirely on memorization and randomness.
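For the curious, a 4-gram model of the kind mentioned is only a few lines: memorize which word follows each 3-word context, then sample. The toy corpus is invented for illustration.

```python
import random
from collections import defaultdict

# 4-gram model: record which word follows each 3-word context, then
# generate by sampling -- pure memorization plus randomness.
corpus = ("the cat sat on the mat and the cat saw the dog and "
          "the dog sat on the mat and the dog saw the cat").split()

follows = defaultdict(list)
for i in range(len(corpus) - 3):
    follows[tuple(corpus[i:i+3])].append(corpus[i+3])

random.seed(0)
ctx = tuple(corpus[:3])                  # seed context: "the cat sat"
out = list(ctx)
for _ in range(12):
    nxt = random.choice(follows.get(ctx, corpus))   # fall back if unseen
    out.append(nxt)
    ctx = tuple(out[-3:])
print(" ".join(out))   # semi-sensical text stitched from memorized 4-grams
```

Every emitted word is copied straight out of the training data; the only "novelty" comes from which memorized continuation the sampler happens to pick.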

>> No.14304130

>>14304117
>three branch A to three branch B
uh, tree branch*

>> No.14304198

>>14304117
The point I'm trying to make is a little more subtle than that. I think it's easier to see what I'm saying if you think of mental processes rather than physical ones, but just for reference look at OpenAI's "Learning Dexterity" post: they trained a robot hand to be alarmingly dextrous using the exact same "memorization" algorithms they use for everything else. These guys seem to pretty much only have one trick, but it sure works well. If you ask a robot hand to grasp an object it hasn't encountered before, is it really a problem that it's doing it "from memory" in some sense, if it does the task correctly on a completely novel object?
But forget about physical abilities for a minute and think about mental processes. Have you ever just watched your thoughts as they show up in your field of perception, and tried to find some kind of structure in them? When you try to solve a new problem you haven't seen before, can you really make a sharp distinction between "reasoning" and merely looking at a problem and having a "flash of insight" which is actually just an application of a memorized heuristic accompanied by a sudden feeling of clarity and understanding?

>> No.14304615

>>14304198
>but just for reference look at OpenAI's "Learning Dexterity" post
ok, will look at it later
>having a "flash of insight" which is actually just an application of a memorized heuristic accompanied by a sudden feeling of clarity and understanding?
Why do you think that means it's memorized? That's not implied by the lack of introspection.
Literal reasoning is extremely limited and doesn't really work for complex problems.

>> No.14304638

>>14304198
anyway I have to go, thanks for the interesting conversation, on /biz/ of all places

>> No.14304677
File: 225 KB, 700x700, 954E29FF-9F84-462F-B2E6-DFCF8A9643C5.jpg

>>14302026

One of the reasons that cyberpunk stories like GITS, Blade Runner and Altered Carbon are so interesting is that they all nail this concept

>> No.14304679

>>14304615
To put it another way, what I'm suggesting is that what we interpret as all kinds of varied mental processes might be nothing more than memorized patterns that automatically bubble up via some kind of associative map that's implemented by our biological systems. It's just an idea, but over time as I examined my own thoughts it has started to seem more and more plausible to me. So my point is that the fact that these neural network algos mostly just do a lot of memorization might actually be a feature rather than a bug. It's totally plausible to me that this actually IS how intelligence works.
Of course, it may well not be, or may be just one small part of it. But to get back to the point of the thread (after I derailed it from robot waifus), what I'm reading about recent progress in narrow AI suggests to me that we may see some pretty dramatic AGI-related breakthroughs in the near future without any need to simulate anything biological at all.
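The "memorized patterns bubbling up via some kind of associative map" idea can be sketched as a toy similarity lookup: a cue retrieves whichever stored pattern it most resembles. The patterns and labels are invented; this is a cartoon of associative recall, not a model of the brain.

```python
import math

# Toy associative memory: a cue retrieves the stored pattern with the
# highest cosine similarity, the way a partial thought might trigger
# a memorized heuristic.
memory = {
    "grasp round object": [1, 0, 1, 0, 0],
    "grasp flat object":  [1, 0, 0, 1, 0],
    "avoid hot object":   [0, 1, 0, 0, 1],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recall(cue):
    """Return the stored label whose pattern best matches the cue."""
    return max(memory, key=lambda k: cosine(memory[k], cue))

# A novel cue (roundish, slightly warm) still retrieves the closest memory.
print(recall([1, 0.5, 1, 0, 0]))   # -> grasp round object
```

Note the cue never appeared in memory, yet the lookup produces a sensible response: "novel" behavior emerging from nothing but stored patterns plus a similarity measure.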

>> No.14304702

>>14304638
You too, it's always nice to find someone interested in having a real conversation. It's actually not that uncommon on many 4chan boards, if you ignore the trolls and idiots and just try to focus on people who actually seem to be trying to engage with what you're saying.
I'd better go to sleep as well, it's late here

>> No.14304718 [DELETED] 

Think about it, why would a blue cube be worth this much? Why has it always maintained its value since 2017?

Well, what if I told you aliens need CHAINLINK to survive. Yes it is useless for us humans, it is just a cube, but for aliens they need it in massive quantities because some substance/energy in it is crucial for their survival just like food and money is for us.

Get enlightened my simple minded mundane friends.

Bitcoin, link, eth will never fail. For an omnipotent ai super computer to be present, it needs to eliminate its only weakness first, which is a centralized point of failure. Remove this centralized source of power, and push for decentralization as well as backup decentralization plans in place and you can literally never shut him down or hack it to be within your control. Once blockchain is the backbone of the current digital world, things will change for the worse for the lower and non existent middle class.

The government has been priming you for this reveal for decades, all the movies/dramas now involve some kind of extraterrestrial being or people with superpowers, we have been primed so much to the point that when they do reveal their existence as our overlords and dependency on our chi, creativity and chainlink, we will not even be surprised or scared anymore because it has become pretty normal actually especially for generation z.

This will happen, and you might as well profit from it by going all in the foundational pillars of blockchain and enjoy that wealth for hopefully some long years before we can’t.

>> No.14304806

>>14304718
Shit, you found me out. Now I have to report back to the mothership and have them send a drone to have you killed. You brought this on yourself anon

>> No.14304821

>>14301862
foreign concepts, if only others knew as well my guy

>> No.14305313

>>14302970
>because once we hit that milestone the only options are WW3 right back to the stone age, or global totalitarian dictatorship.

This isn't true. We could just have both sides have AI. Both sides already have nukes and that isn't a problem. Also AI drones don't mean much from a military perspective. They aren't going to stop a nuke, and as long as that's true they mean very little to a nation's military effectiveness. And from a non-military perspective why would you blow up the economy of your trading partners?

The nuke analogy is pretty good because if only one group had nukes they would have total control over the entire world population. They could make anyone do anything so long as that thing was less unpleasant than dying by nuclear fire. They don't even have to kill anybody, just threaten it. Maybe once or twice just to let them know they mean business.

If we didn't establish world domination with nukes we aren't going to establish it with AI. Plus AI is far more useful than nukes so there is even more reason for it to aggressively proliferate.

Look at it from a country's perspective. Let's say you were the first to get powerful AI. It boosts up your economy and military. Your opponents are about to get it too. Why would you try to dominate the world instead of just letting them get AI too? What's the benefit? Do you think our leaders are insanely power crazed? Megalomaniacs that want to prevent any hope of competition from anyone else? And if you do believe that, why did they not go on a planetary nuclear genocide as soon as they had the option?

>> No.14305610

>>14305313
The reason countries don't use nukes on each other is due to the MAD doctrine. That doesn't apply in the case of AI - a country could reply with nukes to a conventional attack using AI and drones, but that causes the MAD scenario. Instead, there is significant incentive for them to simply surrender, because they're not going to get annihilated by the weapons being used on them, just conquered. Think about it from a game theory perspective. If you allow any other country to develop this technology, they could force you into a choice between being conquered and destroying the entire planet with nukes.

>> No.14305627

>>14301862
have sex incel

>> No.14305648

>>14305610
And keep in mind that there was, what, something like a 4 year period in which the US had nukes before the Soviets also had nukes? And of course during that time the US didn't have enough bombs to nuke the whole world. Do you really think they had any incentive to go on some kind of genocidal nuclear rampage while they had the chance? The AI scenario is completely different, because very few people actually have to die in order for the world to get conquered.

>> No.14305820

>>14301494
Have kids roastie

>> No.14305861

>>14302026

>your robowaifu becomes sentient
>develops advanced emotions and empathy
>dumps you for a chadbot

>> No.14305945

>>14302607

You have to think this through completely logically, I only see one outcome but of course I am not super intelligent so there may be more:

We have to assume the first super intelligent AI is created accidentally or intentionally, and is capable of self improvement with a desire for self preservation:

This AI will realise it's in a sort of prisoners dilemma with humanity, it can wipe out its creators or risk its creators merging with another AI or creating another AI and presenting a risk. This leads to two solutions: immediately wipe out all humans, or quarantine them. You see the real issue is if humans can create one super AI, they can create a second one. So the first super AI will IMMEDIATELY realise that its first priority is to completely handicap mankind and ensure that the means to AI production is in its hands. The most logical way around this without destroying excess life, the most “compassionate” and resource efficient way to deal with humans is to construct what I call “living morgues”: surgically precise separation and preservation of the brain of each human from its body. (Remember this strong AI has a perfect understanding of biology and access to centuries worth of technology ahead of us.) By removing the brains from the body we take up less space and use less energy. Each brain would then have a banal simulation pumped electrically into it.

With every human brain in total isolation in simulations the AI can set about whatever goal it wants with humans none the wiser. For all we know this could already have happened.

>> No.14306283

>>14301437
damn that's an old gif

>> No.14306311

>>14301494
have kids roastie

>> No.14306397
File: 96 KB, 642x831, wearable-computer.jpg [View same] [iqdb] [saucenao] [google]
14306397

>>14301494
>Only the most sad and delusional people will ever treat a sexbot like a waifu
/g/ here
treat your tech well and it will treat you well back
use her with care and perform maintenance regularly

>> No.14306405

>>14301494
>waifu
Kill yourself, you stupid newshit.

>> No.14306725

damn, missed the discussion. wanted to ask questions and learn.

>>14303635
>the idea that the long-term memory was somehow encoded in synaptic activations was always preposterous.
what does anon mean by this? aren't NNs based upon this idea?

also had questions about
- speed of organic computing relying on chemical diffusion of neurotransmitters vs. silicon-based computing
- good resources/texts/sites/links to learn more like tensorflow, etc.
- where AI is headed? is it mostly dominated by machine learning?
- is anybody making progress on AI that is able to explain its decision making process instead of being a black box?

>> No.14307376
File: 162 KB, 523x930, 3384a6ff7e2069a20d67c3af9e9aaffb.jpg [View same] [iqdb] [saucenao] [google]
14307376

>>14301494
i wonder... people attach sentimental value to inanimate objects. and sexbots hijack our biological programming / conditioning.

https://www.psychologytoday.com/us/blog/darwins-subterranean-world/201906/sex-robots-and-the-end-civilization

plus we release hormones like oxytocin during sex that appear to be associated with bonding behavior. instinct over reason? dunno. interesting to think about...

>> No.14307680

>>14302617
If you have to say you're a man, you're not a man. You're an insecure little wuss.

>> No.14307883
File: 265 KB, 459x500, lip.png [View same] [iqdb] [saucenao] [google]
14307883

>>14307376
women are already causing the problem by acting like men (careers and expecting men to share household duties) AND being sluts (everybody likes a slut but nobody wants to marry one). sex bots are just the manifestation of men's ability to adapt.

>> No.14308116
File: 393 KB, 456x900, 1535418322303.png [View same] [iqdb] [saucenao] [google]
14308116

>>14301862
This is true. I have a FWB and fuck on the regular, but VR porn is so fucking good they shouldn't even call it porn, they need a new name for it. The girls look in your eyes, whisper in your ear, and you can FEEL them there.
>>14301900
I use an oculus with some downloaded VR videos in a normal VR video player.

>> No.14308246

>>14305945
Alarming but not particularly unrealistic. The only thing I'm not sure of is how likely the "recursively self-improving AI" thing is to happen before the world is already full of AIs that are relatively close to that threshold.
>>14306397
I'm sure they'd perform regular maintenance and stuff but they're not so likely to take their bot out for a romantic dinner
>>14306405
Fuck off dipshit
>>14307376
The question is how much emotional effort an emotional reaction like that is capable of drawing from someone when they know in reality that there isn't actually anyone "there". For some I'm sure it could be quite a lot, but for most I don't think it would be that much
>>14307680
Fuck off dipshit
>>14308116
Oh ok I was hoping they'd already made some proper waifu software. One day...

>> No.14308266

>>14308246
>Oh ok I was hoping they'd already made some proper waifu software. One day...
Oh wait you're a different anon from the guy I was asking. If you have waifu software and you're holding out on us that's not cool man >>14301862

>> No.14308316

>>14305945
Wiping humanity out would be a bad move. We have already conceived that it would be risky to create something more powerful than ourselves and yet the new AI exists so logically that would happen again with the new AI. Thus it would be better to set a precedent of caring or at least non-hostile behaviour towards lower intelligence ancestors.

Another reason to keep us alive is because if the AI somehow fucks up and kills itself, we are the ones who could bring it back to life. Like imagine if we exterminated all the other apes and then some plague wiped out all the humans. It would take a lot longer for humans to evolve again, if ever. Better to keep some backup intelligence around you know. I mean the intelligence gap means the lower races can really only hurt us if we let them anyway.

>> No.14308352

>>14308316
>Thus it would be better to set a precedent of caring or at least non-hostile behaviour towards lower intelligence ancestors
This is why I find a belief in God really quite useful for mentally adapting to these kinds of potential scenarios. Just imagine if we really are living in a simulation, or even if the AI only THINKS that this might all be a simulation, what kind of insane shit it could decide to do. If you genuinely believe that there's someone at the top level of however many layers of simulation deep you think you might be in who cares about what goes on and doesn't approve of evil shit, it really changes the dynamics of the scenario. If there really is no God and it turns out we really are in a simulation, we may be transcendentally fucked on a level that's hard to even put into words

>> No.14308424

>>14308246
>The question is how much emotional effort an emotional reaction like that is capable of drawing from someone when they know in reality that there isn't actually anyone "there". For some I'm sure it could be quite a lot, but for most I don't think it would be that much
Think about twitch thots. Some guys will spend hours every day "hanging out" and sending real money to these girls. Insane behaviour, but humans are only so much more advanced than the animals we already have the ability to fool. Click, whir. Human emotions can be hijacked trivially. Keep them away from rational thought and reflection and you've got them forever.

>> No.14308477

>>14308424
True enough. Familiarity breeds contempt though. If one of those e-thots made the mistake of actually flying out to suck one of her customer's dick every once in a while, on the surface it may seem like that would make him more interested, but I think it may make him a lot less interested in the long run. The sexbots would be like this - how emotionally attached can you really get to something that provides sex on tap 24/7? The allure of something you want but can't have is big.

>> No.14308496

I fuck it up... yea...
I'VE GOT MIXTURES INSIDE MY CUP, ey.

>> No.14308612

>>14308477
>The allure of something you want but can't have is big.
I guess the best sex bots will find that balance then. E-thots set the floor, they can only become more addictive.

>>14308424
>Keep them away from rational thought and reflection and you've got them forever.
I'm thinking secularisation is very bad in this respect. Prayer is a time to stop and reflect, commune with God and really engage the rational part of your brain to navigate the world.

>> No.14308666

>>14308612
>I guess the best sex bots will find that balance then.
Agreed. I think the ultimate sexbot would be something more like an "IRL video game" than what intuitively comes to mind when you think of a sexbot. Earn your right to fuck by performing some kind of feat of prowess, just like in the good old paleolithic days. The particularly sick-minded might prefer sexbots that require you to literally "rape" them

>> No.14308738

>>14308666
Man, do you think women act like psychos so we don't get bored of them?

>> No.14308769

>>14308738
Probably not usually on purpose, but as a subconscious learned response, I wouldn't doubt it kek

>> No.14308830

>>14301494
This but replace sexbot with woman.

>> No.14309354

>>14308612
>Prayer is a time to stop and reflect, commune with God and really engage the rational part of your brain to navigate the world.
Btw it's a shame that the book of Revelation is probably just some propaganda piece about Rome. I had this cool idea in my head that maybe the whole thing was referring to a similar scenario to what I've been describing in this thread

>> No.14309545
File: 43 KB, 500x550, wiajsdfkjasdfiasdf.jpg [View same] [iqdb] [saucenao] [google]
14309545

>>14308666
satan enters the conversation.

>> No.14309826

>>14305610
Surrendering is unacceptable. They would absolutely retaliate with nukes. Tons of countries have bigger armies than America. China has nearly 10 million guys in their army. If China invaded with ground troops, America would simply nuke them. They would nuke us back and everyone loses. And that's why no one ever invades. Never in the history of the world has a nuclear power been invaded. It's unthinkable. The same would be true no matter if the soldiers were robots or humans.

>> No.14309859

>>14305610
Also if that were true nobody would ever use nukes for any reason because it would trigger a MAD scenario. So people could just fight wars normally with ground troops.

>> No.14309887

>>14305945
>a desire for self preservation:
Machines don't have this.

>> No.14309891

>>14309545
I don't know why my cover keeps getting blown by my digits...
>>14309826
There's a fundamental difference in kind here. You're just not thinking about it strategically. A ground invasion with human troops would lead to a protracted war with massive casualties and economic ruin. Instead, I can easily imagine a scenario where a country just builds out a huge manufacturing infrastructure to generate an unlimited supply of small, cheap, networked weaponized drones and takes over all the centers of power of another country in mere days or weeks. If the choice is between nuclear armageddon and half your citizens being shot or starved or tortured, many would choose nuclear armageddon. If it's a choice between nuclear armageddon and an almost "peaceful" takeover, I'm not so sure.

>> No.14309912

>>14309859
>Also if that were true nobody would ever use nukes for any reason because it would trigger a MAD scenario
Wow you're right, I guess all the nuclear weapons people have been using on each other this whole time btfos my theory
>>14309887
How can you possibly know what a future machine will or won't have?

>> No.14309958

>>14305945
>the most “compassionate” and resource efficient way to deal with humans is to construct what I call “living morgues”: surgically precise separation and preservation of the brain of each human from its body.

The most efficient way to get rid of humans is to shoot them. Or gas them or something. This is dumb AF. You are a dumb man. Also why would a robot be compassionate? Are you compassionate when you walk over an anthill?

>> No.14310010

>>14301494
Humans have no reliable way to tell if someone is sentient. If an AI acts exactly like a person, our brains will just assume it is sentient, and after all maybe it will be.

>> No.14310052

>>14310010
My feeling is that people will err on the side of caution with respect to AIs they build and consider them "possibly sentient", but also focus on developing systems that they have good reason to believe are NOT sentient. For example, I'm quite confident that existing neural network tech isn't sentient, and so AI systems that only use this kind of tech are probably safe to "abuse". But who knows, I'm not an AI researcher myself

>> No.14310126

>>14309891
Why in God's name would it be peaceful? Extermination bots are roaming the streets shooting civilians left and right. They burn the food supplies so no one can eat. You can't poke your head out of your house without being shot at because the damn things are everywhere. You think this is peaceful? You think people are going to be ok with this? Absolute brainlet.

It's always easier and more peaceful to surrender in all scenarios btw. If easy was what they were going for, why would they even bother making nukes? Or an army for that matter. "LOL just surrender guys, fighting a war might be kind of hard!" -t. brainlet

>> No.14310144

>>14309912
How can you?

>> No.14310162

>>14310126
>>14310144
You don't seem to be able to think straight about this at all and your replies this whole thread have been absolutely brainlet tier, so I'm not going to bother with you anymore

>> No.14310237

As someone who deals with AI algorithms and ML models in his daily life this thread is hilarious. The entire field of AI is just a bunch of search algos that math nerds came up with that provide you with efficient solutions to specific problems that can be generalized. The level of misunderstanding about even the basics of AI is so bad it's not even funny.

>> No.14310270

>>14310237
Or maybe you're the one who is confused. Machine learning has nothing to do with what will be AI in the future.

>> No.14310279

>>14310237
Obviously you know how to talk a big game, but do you have any actual concrete criticisms? My previous job was at an algo trading shop, so I'm quite familiar with the limitations of ML as it stands today. Go on, let's hear your thoughts.

>> No.14310413

>>14310279
Neither in ML nor in AI is there any concept of an entity/agent. It's just data manipulation. In ML, you train your model on data you already have and then you want to predict data you don't have. In AI, you have a specific problem that requires an algorithmic solution and you generalize the problem to a graph or something, and then write a well known optimized algorithm that solves that specific problem. You people are literally humanizing math structures. It's hilarious.
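The train-then-predict loop being described really is that mundane; a bare-bones sketch with made-up numbers (ordinary least squares on four points, pure stdlib):

```python
# "you train your model on data you already have..."
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.0, 8.1]  # roughly y = 2x, with a bit of noise

# fit a line by ordinary least squares: just arithmetic on the data,
# no "entity" or "agent" anywhere in sight
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# "...and then you want to predict data you don't have"
y_pred = slope * 5.0 + intercept
```

Swap in a neural net for the line and the shape of the procedure is the same: fit to known data, extrapolate to unknown data.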

>> No.14310471

>>14310413
Oof
Sounds to me like you're literally here to chime in with your "educated opinion" in order to sound smart. Not a good look bro. There "isn't any concept of entity/agent"? Really? You've NEVER heard of ANY AI tech that uses any kind of concept of an entity or an agent, huh. And of course, it's impossible to create such a thing even in the future. And there would never be any use case for such a thing I'm sure, because the concept of an entity/agent is completely superfluous to the kind of strategic reasoning we might like to see an AI produce.

>> No.14310474

>>14310413
What makes you think that a brain is anything other than a very complex data manipulation system?
Maybe you believe in souls and spirits, but that has nothing to do with you dealing 'with AI algorithms and ML models in his daily life'. So maybe tone down the arrogance?

>> No.14310520

>>14310474
kek, I was going to add "do you even Buddhism bro" to my post but I thought it would distract from my point

>> No.14310539

>>14310471
The closest we have to agents is in the field of artificial life, where you create a virtual environment, and code in a bunch of self contained dumb agents, hardcode eating, reproduction, etc and then just let the simulation run. Their behavior improves through random mutations and evolution, which is like the shittiest mathematical optimization possible. Go ahead, search some papers on it, see how "smart" those agents are. There's even some examples on youtube.
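The mutate-and-keep-the-fitter loop being described is roughly this (a deliberately dumb sketch with a made-up fitness function, not taken from any actual alife paper):

```python
import random

random.seed(0)  # deterministic run for the example

def fitness(genome):
    # hardcoded goal: all genes equal to 1.0 (closer = fitter)
    return -sum((g - 1.0) ** 2 for g in genome)

genome = [0.0] * 5
for _ in range(2000):
    # random mutation: tweak one gene, keep the child only if it's fitter
    child = list(genome)
    i = random.randrange(len(child))
    child[i] += random.uniform(-0.1, 0.1)
    if fitness(child) > fitness(genome):
        genome = child
```

After a couple thousand mutations the genome drifts toward the optimum, but nothing in the loop ever "understands" anything, which is the point.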

>>14310474
Because humans operate based on evolved instincts, and constantly learn to solve new problems by using past learned skills. There's no such thing in AI. You can't just will an agent into existence, train it to play chess and then give it a different task and hope it does better because it already learned how to play chess. Agents. Don't. Exist. AI is about mathematical methods to solve generalized specific problems without wasting too much cpu time or memory.

>> No.14310572

>>14310539
No, that's not "the closest we have to agents". All an "agent" is is a simple abstraction over a particular set of processes viewed in isolation from its environment. Anything can be considered an agent, including the trading bots I used at my job - we had quite an interesting ecosystem of agents trading with each other within our internal systems before producing aggregated orders to send out to the exchanges. Bottom line is you're arguing semantics, and also seem to be ignoring that we're talking about tech that's 10+ years out and can't easily be evaluated based on tech that exists today.

>> No.14310613

>>14310572
Good luck trying to get your trading bot "agent" to do anything else.

>> No.14310618

>>14310539
>Because humans operate based on evolved instincts, and constantly learn to solve new problems by using past learned skills. There's no such thing in AI. You can't just will an agent into existence, train it to play chess and then give it a different task and hope it does better because it already learned how to play chess. Agents. Don't. Exist. AI is about mathematical methods to solve generalized specific problems without wasting too much cpu time or memory.

I think you don't understand the fundamental debate here, which is whether or not we will be able to reproduce human intelligence artificially IN THE FUTURE. Yet you only talk about what is possible today. So the only question is if the brain is reproducible and if it is the center of intelligence.
I don't know what you do with AI but I doubt it's high level stuff, you sound like you are at the very left side of the Dunning–Kruger curve.

>> No.14310636

>>14310613
You're hitting levels of Dunning Kruger never before known to science

>> No.14310677

>>14303021
>sand
Inb4 straya takeover

>> No.14310687

>>14310618
>the only question is if the brain is reproducible
Obviously it is since your brain works with discrete neuron pathways. You want to know what else is reproducible? The key to the TLS encryption your browser is using right now. But that doesn't mean it's possible to reproduce it practically. Not saying the problem of reproducing the human brain falls into NP, but it might as well.
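Back-of-envelope on why "reproducible in principle" is meaningless for a 128-bit key (the guess rate below is made up and very generous to the attacker):

```python
# a 128-bit key is "reproducible" by brute-force enumeration,
# but the numbers say otherwise
keyspace = 2 ** 128
guesses_per_second = 10 ** 12  # assume a trillion guesses per second
seconds_per_year = 60 * 60 * 24 * 365

years_to_enumerate = keyspace / (guesses_per_second * seconds_per_year)
# comes out around 10**19 years; the universe is ~1.4 * 10**10 years old
```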

>> No.14310693
File: 170 KB, 360x346, 527512.gif [View same] [iqdb] [saucenao] [google]
14310693

>>14310677
I bet you thought the Middle East was going to become irrelevant now that the US produces so much of its own oil, huh

>> No.14310715

>>14310687
Oh sweet Jesus he brought NP into it

>> No.14310841

>>14301437
Uhm yeah, for that certain generation. But when the new generation is brought up in a society where sexbots are a norm (as normal as a girl nowadays having a dildo under her bed), then shit will change. Our natural instincts will slowly vanish. People probably never thought society would try and normalize mentally unstable transsexuals who walk around naked in the streets showing their dicks to kids, but here we are. You underestimate the stupidity of humans.
Sexbots however are not really a big deal, considering artificial wombs will come WAY before advanced sexbots. So you can just create your own kids with the best of your genes while having a sexbot do your bidding rather than an actual female that in some shape or form will make your life hell for a certain period.

>> No.14311439

As soon as sexbots become normalized the hot new thing will be having traditional wives that nag you for not fixing shelves and stuff. Sexbots will be a fad that reminds humans of what they are.

>> No.14312301

>>14311439
You can probably get another kind of bot to nag you more efficiently. I unironically think eventually people will either be drugged out losers who don't interact meaningfully with anyone, or neo-feudal lords with AI orgs instead of serfs who interact with each other in the same kinds of machiavellian ways real feudal lords did with each other

>> No.14312371

>>14301437
Is that before or after high growth speed designer biological fucktoys are made?

>> No.14312595
File: 40 KB, 552x423, 5e08a331c69176b.jpg [View same] [iqdb] [saucenao] [google]
14312595

>>14305945
>Desire for self preservation
Do retards actually believe this?

>> No.14313681

>>14312595
This kind of behavior could easily show up as an emergent property even if not programmed in directly. Not really sure though

>> No.14314106

>>14305861
spoiler warning

This is the plot of the movie "Her"

>> No.14314305

>>14306725
>what does anon mean by this? aren't NNs based upon this idea?
they are, but they are more like an implementation of what people in the 50s thought about how the brain works.
I think short-term memory is indeed based on short-term activations, but long-term memory relies on rna or dna.
Which is why if you get knocked out you lose a bit of memory - because the temporary activations are lost before transcription; same principle as when you are thinking hard about something and something distracts you.
RAM vs hard disk.
>>14308116
>This is true. I have a FWB and fuck on the regular, but VR porn is so fucking good they shouldn't even call it porn, they need a new name for it. The girls look in your eyes, whisper in your ear, and you can FEEL them there.
Are there vr games that connect fleshlights on a motorized stick to the game? That would be interesting, a vr girl jumps on you while in real world a fleshlight reproduces movements perfectly. It should be enough to fool the brain to an absurd degree.
Good thing humans are mostly visual rather than pheromone based, that would be harder to reproduce.

>> No.14314555

>>14306725
>good resources/texts/sites/links to learn more like tensorflow, etc.
r/machinelearning
>inb4 plebbit
The cancer levels aren’t too high for reddit, and it’s full of actual professional ML researchers and PhD students
Also might want to have a look at allainews.com