
/vt/ - Virtual Youtubers


>> No.69236916
File: 73 KB, 286x282, 145003774.png

>>69212185
So let me get this straight. He:
> give me a break
(/Break ends after 2 weeks or so)
> Let me cook
(/Makes others expect something grand)
> Don't expect anything great
(/Presumably cooking failed)
> "I'd be surprised if I didn't come back this month"
(/Sounded like he would come back, but it could be more of a threat of 'if I don't come back this month, you know I'm never coming back')

lilbro may be a 20-year-old now, but holy fuck, get some antidepressants and stream our little neuro that you were gifted 50k+ for within a month.

>> No.69160560
File: 73 KB, 286x282, 145003774.png

>>69159889

>> No.68133057
File: 73 KB, 286x282, 145003774.png

>>68131061
Yeah... I also got extremely disheartened after realizing that if I want to train models above 7B, it would take actually renting GPUs... which I am not about to spend money on without the proper expertise, which I am also not willing to pay for.

I am slowly trying to get the hang of tensors, though I am at a loss as to how much should be manual correction (converting words to their token equivalents and appending them to the unfinished generated output) vs. 'delete the last sequence and retry with different parameters.'
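
Something like this is what I mean for the two approaches, assuming a Hugging Face causal LM (gpt2 here is just a stand-in, and the strings are made-up examples):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt_ids = tokenizer("The stream schedule is", return_tensors="pt").input_ids
partial = model.generate(prompt_ids, max_new_tokens=20, do_sample=True)

# (a) manual correction: tokenize the words you actually wanted, append them
#     to the unfinished output, then let the model continue from there
forced = tokenizer(" back next week", return_tensors="pt").input_ids
corrected = torch.cat([partial, forced], dim=-1)
continued = model.generate(corrected, max_new_tokens=30, do_sample=True)

# (b) delete-and-retry: drop the last N generated tokens and resample the
#     same span with different decoding parameters
n_bad = 10
trimmed = partial[:, :-n_bad]
retried = model.generate(trimmed, max_new_tokens=n_bad + 20,
                         do_sample=True, temperature=0.7, top_p=0.9)

print(tokenizer.decode(continued[0], skip_special_tokens=True))
print(tokenizer.decode(retried[0], skip_special_tokens=True))

The manual route guarantees the exact words end up in the output, but the model was never responsible for generating them, so the continuation can read slightly off; resampling keeps everything model-generated at the cost of more compute.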


And it really fucking sucks how much conversational history / context can accidentally over-influence the LLM's generation too... Maybe I am doing it incorrectly, and maybe I should recall memory per sentence generated vs. having it static during the entire generation.
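
For the per-sentence recall idea, this is roughly what I mean (generate_sentence is just a stand-in for whatever model call you already have, and the word-overlap scoring is a crude placeholder for real retrieval):

def score(memory: str, query: str) -> int:
    # crude relevance: count shared lowercase words
    return len(set(memory.lower().split()) & set(query.lower().split()))

def recall(memories: list[str], query: str, k: int = 2) -> list[str]:
    # pick the k memories most relevant to the text generated so far
    return sorted(memories, key=lambda m: score(m, query), reverse=True)[:k]

def generate_reply(generate_sentence, memories, user_msg, max_sentences=5):
    reply = ""
    for _ in range(max_sentences):
        # rebuild the context before every sentence instead of fixing it once,
        # so earlier history only gets pulled in when it is actually relevant
        context = "\n".join(recall(memories, user_msg + " " + reply))
        prompt = f"{context}\nUser: {user_msg}\nAssistant: {reply}"
        sentence = generate_sentence(prompt)
        if not sentence:
            break
        reply += sentence
    return reply

The static version would build the context once from the whole history and reuse it for every sentence, which is presumably where the over-influence sneaks in.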
