
/sci/ - Science & Math


>> No.15686195
File: 41 KB, 952x960, kNizkZk.jpg

I thought the obvious answer to OP is that the data GPT was trained on didn't contain many examples of his particular construction, so the model never formed the logical abstractions needed to comprehend such a simple structure.
Also, while GPT is supposedly constantly learning, note that not many people use that construction in ordinary speech, which limits its ability to learn it even further.
But I suspect the model doesn't learn from current interactions at all; rather, it was trained once on past data and then released to the public.
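The train-once-then-deploy split described above can be sketched with a toy bigram model. This is a hypothetical illustration, not GPT's actual architecture: the point is only that training builds the model from past data, and inference afterwards reads the frozen model without ever updating it.

```python
from collections import defaultdict

def train(corpus):
    """Build bigram counts from the training corpus (the 'past data')."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def predict(counts, word):
    """Inference only reads the frozen counts; it never writes to them."""
    followers = counts.get(word)
    if not followers:
        return None  # construction never seen in training: no prediction
    return max(followers, key=followers.get)

corpus = ["the cat sat", "the cat ran", "a dog ran"]
model = train(corpus)            # training happens once, up front

print(predict(model, "the"))     # "cat" — frequent in the training data
print(predict(model, "llama"))   # None — unseen, and querying does not add it
```

A rare construction behaves like "llama" here: if it barely appears in the training data, the deployed model has nothing to fall back on, and asking about it at inference time does not teach the model anything.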

>> No.11259416
File: 41 KB, 952x960, 1497056524681.jpg

>>11259407
(cont.)
Based on this I was thinking of two different approaches.
a) The "I don't believe" approach
b) Seeming enthusiastic: giving fake but believable information and answering the health questionnaire strategically (e.g. hinting at more severe problems), or straight up telling them I tore my meniscus two years ago while skiing and seeing if it shows up. Maybe saying I have screws in my hand to see if they show up as a black square.

Note: My mum put me through it when I was like 17, and I only went to make her leave me alone. That's why I can't give my real information; it might be searchable in some sort of database.
