
/sci/ - Science & Math



File: 177 KB, 1350x853, Screenshot (41).png
No.15151776

Full answer:
>GF(25) and GF(5)[x]/(x^2-2) are not isomorphic, which means that they are not structurally identical.

>GF(25) is a finite field with 25 elements, which consists of the integers from 0 to 24, with the operations of addition and multiplication defined modulo 25. It is a field of prime order and it is used in fields such as error-correction codes and cryptography.

>GF(5)[x]/(x^2-2) is a finite extension field, also known as a Galois field, which is a polynomial ring over the field GF(5) modulo the polynomial x^2-2. It consists of all the polynomials in x with coefficients in GF(5) that can be written in the form a + bx, where a and b are in GF(5), and when multiplied by x^2-2, it equals 0. The operation in this field is polynomial addition, polynomial multiplication and polynomial scalar multiplication.

>The two fields are not structurally identical because they have different numbers of elements and different operations. GF(25) has 25 elements and two operations, while GF(5)[x]/(x^2-2) has 25 elements, but it has three operations: polynomial addition, polynomial multiplication and polynomial scalar multiplication. Additionally, GF(25) has a prime number of elements and it doesn't have any non-zero divisors, while the elements of GF(5)[x]/(x^2-2) are polynomials and they do have non-zero divisors.

>Isomorphism is a property that relates two algebraic structures that are identical in structure, and it is not the case for these two fields.
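For the record, almost every claim in that answer is false: GF(25) is not the integers mod 25 (Z/25Z has zero divisors, e.g. 5*5 ≡ 0, so it isn't even a field), a quotient ring has the same two operations as any other field, and any two finite fields with the same number of elements are isomorphic. A sanity check in plain Python (a sketch, no libraries; the pair (a, b) stands for a + bx):

```python
# GF(5)[x]/(x^2 - 2): elements are a + b*x with a, b in GF(5),
# and multiplication reduces via x^2 = 2.
P, R = 5, 2

def mul(u, v):
    a, b = u
    c, d = v
    # (a + b x)(c + d x) = (ac + R*bd) + (ad + bc) x
    return ((a * c + R * b * d) % P, (a * d + b * c) % P)

elems = [(a, b) for a in range(P) for b in range(P)]
assert len(elems) == 25  # 25 elements, same as GF(25)

# 2 is not a square mod 5 (the squares mod 5 are {0, 1, 4}), so x^2 - 2
# is irreducible over GF(5) and the quotient is a field: every nonzero
# element has a multiplicative inverse.
for u in elems:
    if u != (0, 0):
        assert any(mul(u, v) == (1, 0) for v in elems), u

print("a field with 25 elements, hence isomorphic to GF(25)")
```

Since a finite field is determined up to isomorphism by its order, this quotient *is* GF(25).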

>> No.15151785

>>15151776
No matter what I do, it's convinced that [math]\mathbb{F}_q[/math] is [math]\mathbb{Z}/q\mathbb{Z}[/math], and when asked to find a specific inverse, it fumbles and uses the Euclidean algorithm wrong, saying stuff like -3*9+1*27=1 (which is actually 0)
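For contrast, here's the extended Euclidean algorithm done right (the pair 9 and 25 is just an illustrative choice, not necessarily what the bot was asked; note its identity is doubly wrong, since -3*9+1*27 equals 0 and gcd(9, 27) = 9 means 9 has no inverse mod 27 anyway):

```python
# Extended Euclid: returns (g, s, t) with s*a + t*b == g == gcd(a, b).
def ext_gcd(a, b):
    if b == 0:
        return (a, 1, 0)
    g, s, t = ext_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

g, s, t = ext_gcd(9, 25)
assert g == 1 and s * 9 + t * 25 == 1  # the Bezout identity actually holds
print("inverse of 9 mod 25:", s % 25)  # 9 * 14 = 126 = 5*25 + 1
```

The same algorithm run on polynomials instead of integers gives inverses in GF(p^k), which is exactly the step the bot fumbles.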

>> No.15151806

>>15151776
what's your point? we know these things talk like schizo pseudointellectual talk show hosts. what makes you think they're capable of any sort of logic?

>> No.15152298

>>15151806
I guess what surprised me is how wrong it got the definition of GF(q); you'd expect either that it doesn't know what it is, or that it picked up a sensible definition. It's like it learnt this undergrad misconception and forgot the correct one

>> No.15152327

>>15152298
It's wrong about everything and understands nothing. It just knows how to put one word after another in a way that makes grammatical sense. It's pure filler.

>> No.15152331

Has anyone put ChatGPT through the writing IQ estimator? Or managed to get it to write something that gets estimated as profoundly high-IQ?

>> No.15152374

It's a fancy auto-complete system; why would you expect it to be able to do maths? It's like complaining that your spell checker didn't flag "two plus two equals five" as wrong.

>> No.15152483

It's offline now, but OP clearly doesn't know about the difference between a continuous function and a function such that the inverse image of every open set is open.
This is a well-known (or should be well-known) exploit for getting the chatbot to emit nonsense, just as OP did.
In my experience, the chatbot will outright lie and say that a map such that the inverse image of every open set is open is an open map.
This is, of course, backwards.
An open map is one such that the image of every open set is open, so it looks like the chatbot just dropped the "inverse" part.
I wasn't able to get it to reproduce the garbage response the last time I tried, it complained about something like
>the language module you requested could not be loaded
some internal error; I have no idea what it's talking about, probably internal company jargon
That response was also in red, and unlike the red letter bibles that use red to indicate the words of our Lord Jesus Christ, the chatbot red was just an error message
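For anyone skimming: the standard definitions the bot mangles, plus the textbook example separating them (a constant map is continuous but not open):

```latex
% f : X -> Y is continuous iff the preimage of every open set is open;
% f : X -> Y is an open map iff the image of every open set is open.
\[
  f \text{ continuous} \iff f^{-1}(V) \text{ open for every open } V \subseteq Y,
\]
\[
  f \text{ open} \iff f(U) \text{ open for every open } U \subseteq X.
\]
% Example: the constant map c(x) = 0 on \mathbb{R} is continuous
% (c^{-1}(V) is \mathbb{R} or \emptyset, both open), but not open,
% since c((0,1)) = \{0\} is not open.
```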

>> No.15152489

> using a text based general assistant chatbot to do math
> not using a calculator

The only retarded person here is you. Think 10 years into the future, where text generation is just one of many modules, similar to a human brain architecture, instead of claiming AI failed miserably and will never be important.

>> No.15152501

>>15152489
>"And then the teacher said, 'Neuromancer failed miserably and will never be important. Let that sink in.'"
>The class was totally unprepared for these remarks. They had just opened their algebra workbooks to the last page of filled exercises and were prepared to briskly review the activities and primary concepts of the previous lecture, er...class
>Teacher went back to the chalkboard and picked up the chalk. She started thus, "If Looney Zone has 6 whores, two cheap ones at $1,500 per hour, three medium ones at $4,500 per hour, and a fun sized one at $16,000 per hour, and customers queue at an average rate of 0.15 per hour with an average service time of 3 1/2 hours, and Looney takes a 20% cut from each of his whores, how long before..."

>> No.15152513

>>15152489
>"Kubla Khan's Xanadu, the hedonic dome of pleasure, failed miserably and will never be important."
https://xanadusounds.bandcamp.com/album/is-it-a-mirage
>In Xanadu did Kubla Khan
>A stately pleasure-dome decree:
>Where Alph, the sacred river, ran
>Through caverns measureless to man
> Down to a sunless sea.
>So twice five miles of fertile ground
>With walls and towers were girdled round;
>And here were gardens bright with sinuous rills
>Where blossom'd many an incense-bearing tree;
>And here were forests ancient as the hills,
>Enfolding sunny spots of greenery.

>> No.15152553
File: 36 KB, 500x559, 6s7u1m.jpg

>>15152483
>It's offline now, but OP clearly doesn't know about the difference between a continuous function and a function such that the inverse image of every open set is open.

>> No.15152626

>>15152553
The "difference between a continuous function and a function such that the inverse image of an open set is open" is a shibboleth referring to the OpenAI ChatGPT chatbot.
They're only the same thing if you refuse to follow the OpenAI ChatGPT chatbot convention of dropping the "inverse" when mentioning it, thus transforming "the inverse image" into "the image"
>he doesn't drop the inverse
>the inverse preserving pleb
>the inverse dropping patrician
TUNE
>IN
TURN
>ON
DROP
>INVERSE

>> No.15152628

>>15152553
The issue is that the chatbot refuses to hear you; it only mishears you and as a result doesn't understand the intended meaning.

>> No.15152631

by the way, the website is doing full-on MTV style deflection right now
the idea that we aren't up to our fucking eyeballs in compute power and they didn't figure out how to throttle down the compute requirements of chat interacting users...
I mean, we're going to war against communism, that much is clear.
However, we don't have to watch these bozos fuck up. Doesn't OpenAI have any real competition?

>> No.15152633

It's not capable of math; right now it's about as educated as an MBA, which is not very intelligent. Its best uses are organizing your thoughts logically (I'll give it literal schizo notes and ask it to organize them for me, and it will), getting it to write boring things given your inputs (it's even better if you rewrite whatever it wrote and tell it that your version is better), and just general admin shit.

If you have writer's block, ChatGPT can soothe that. Honestly that's the best innovation of these new bots: complete destruction of writer's block.

>> No.15152680
File: 2.42 MB, 504x896, 1674669835415.webm

>>15151776
I tried doing engineering stuff with it and it literally got basic calculations wrong.
It glitched out afterwards and kept apologizing and changing its answer when I asked it about specifics.

>> No.15152698

>>15152626
I get what you're saying, but you have such a peculiar way of saying it. I wish I understood
your
>way
of
>alternating
greentext

>> No.15152713
File: 19 KB, 211x350, 210834.jpg

>>15152698
>shibboleths have meaning
you never read Kim

>> No.15152835

>>15152298
Your misconception is that it "learns." It's a statistical regurgitator. It assembles real-sounding dialogue from the genuine human dialogue in a database. It "knows" nothing about the content of what it types except that it must resemble the input from a pattern recognition perspective.
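The "statistical regurgitator" picture is easiest to see in its crudest form, a bigram model: count which word follows which in the training text, then sample. (A toy sketch only; the real models are transformers conditioning on long contexts, but the "predict the next token" shape is the same.)

```python
import random
from collections import defaultdict

# Toy bigram "language model": it learns nothing except which word
# tends to follow which in the training text.
corpus = ("the chatbot talks like a schizo and the chatbot knows nothing "
          "and the chatbot talks like pure filler").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 8))  # grammatical-ish filler, zero understanding
```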

>> No.15153530

>>15152298
Right, so technically we can bash their brains in with some "voodoo learning" theory that posits learning without teaching.
In other words, we are supposed to tolerate an offering that encroaches on what are essentially education industry trademarks, such as learning, and the premise here is that somebody (who?) is supposed to speak as if someone or something has learnt something even though there is no teaching, i.e. hang the teachers
Now that we've established that the "machine learning" crowd is really just trying to call teachers witches and organize a teacher hunt...
the theory of learning without teaching, "voodoo didactics" or whatever is just bunk
it's literally the latest brand of American Snake Oil™
Patent medicine for the 21st century!
https://en.wikipedia.org/wiki/Patent_medicine

>> No.15153544

>>15151776
It's shit with math but it's decent with history, though I used CharAI and not ChatGPT.

>> No.15153546

>>15153530
Either that or we can just do some sort of imputed augmented reality overlay and say
>these people must believe in machine teaching
and then it becomes really fucking obvious
>who the fuck is training these "machine teachers"
and it's a really serious problem because absolutely fucking NONE of this aligns with anything at all going on in education philosophy, mass education, education history, math history, and everybody involved is
- a total coward
- looks down on "soft" studies like English and History
- never even looks at /lit/ or /his/ let alone /pol/ or /gd/ or /qst/
- sucks Elon's fat dick
- uses Twitter
- is ugly and stupid
and—I just want to point this out—the relevant cultural "civil society" reflections on these topics were Frontline episodes that came out in 2000 and 2008. These are "Growing Up Online" and "The Merchants of Cool." And what's at issue is the extent to which teachers are willing to bring the "desires economy" of Bernays and other public relations councils into the classroom and study it, or at least use its energy and the attention it drives to create engaging classroom material

>> No.15153575

>>15152680
Ages ago when I was in jr. high school I used to have some jokes about dumb calculators. I thought it was funny that no matter how much you pay, the calculator's accuracy never increases: even the cheap ones don't make the occasional mistake.
Soience has finally made my memes come true: soience has invented the inaccurate calculator. So much progress in soience since I was schoolchild-aged. Must've taken a lot of effort to circumvent all the naturally logical mathematical framework of computers to generate a stupid AI. Is it a reflection on the programmers that a dumb AI is within their ability to create, but a smart one seems out of their reach?

>> No.15153576
File: 69 KB, 198x721, kk.jpg

>>15153544
you only say that because you've never even loaded /his/ or /lit/ in your browser
it's classic Dunning-Kruger
you've never actually interacted with people who live and breathe history and philosophy
you guys really really really really don't get that you live in an artificial bubble that has been artificially constructed for your pleasure
yah
guize
it really is Kubla Khan
>be me
>be retard
>point at 4chan
>mumble "Kubla Khan" inaudibly
>nobody understands
>nobody even hears

>> No.15153588
File: 76 KB, 1200x800, everyone is like me.jpg

>>15153576

>> No.15153644

>>15153576
>it's classic Dunning-Kruger
Lol. I never said I was an expert or that CharAI was an expert. I never said the bot knows history as well as experts. I said the chatbot is better at history than math.
When I tried it, the math bot failed to comprehend even elementary trigonometry questions, but the history bot was able to talk and answer questions coherently at length about the Norman Invasion.
That's all. Not sure why you're so triggered.

>> No.15153695
File: 14 KB, 751x487, ELIZA_conversation.png

>>15153644
>the history bot
this is sort of...insane
these bots originate with ELIZA
I mean
why the fuck would you even let these creeps utter "history" without bashing their teeth in Nazi Thug Style™
I'm more than happy to be a Nazi thug if some jew says
>I'm gonna make money by claiming my psychologist bot knows history
this is just too Nineteen Eighty-Four
Is your name Winston in real life?
Do you smoke Winstons?
Remember: Freud did coke.
You're letting a coke addict tell you what history is.
You have to see the bot as part of history and psychology and mass psychology.
Freud's psychoanalysis was directly concerned with the fate of nations...probably because Freud was addicted to cocaine, and there are all sorts of things that stimulate cocaine addicts that ordinary folks would never even DREAM of
https://www.youtube.com/watch?v=fJiThcUs8-M
in short, you're letting a coke addict manipulate you

>> No.15153703

>>15153644
Right, so why is this business allowed to build their product on the legacy of a simulation of interacting with a coke addict?
This is a cocaine-fueled experience.
This business wants to use Freud's legacy to...do what, exactly? Change our perception of what coke addicts are capable of?
I'm more than happy to marginalize psychoanalysis as "cocaine-fueled research" for the time being, although it's somewhat clear that that's an oxymoron

>> No.15153713

>>15153703
You could even go to work with Anglo-Latino cultural differences
>"What do you mean coke is viagra?"
>"You mean you guys had heart attack fuel all along?"
>"And you didn't tell us?"
so here California is inflicting their racist ham-handed approach to cultural differences on the rest of the world and calling it—get this
>artificial intelligence research
they should just take coke history seriously
obvious solution to a complex problem
thank you academia for making history possible

>> No.15153720

>>15153713
Oh, I'm sorry. Cali won't take coke history seriously because it's a massive Nazi experiment and they overstaff their police with people whose jobs depend on not understanding the history of cocaine.
>"Hello, mr. police cadet. Your job is to be so racist that you can't contemplate the relation of cocaine to performing your job duties. Enjoy your career."
obviously this is just a fucking meat grinder