
/sci/ - Science & Math

>> No.9895700
File: 18 KB, 491x235, cX13J.png

What do you guys think about these new deep learning grammar bots that have made huge strides within the last year, such as Google's grammar checker, QuillBot, and crio from Stanford? Are deep learning models capable of resolving the poverty of the stimulus? https://en.wikipedia.org/wiki/Poverty_of_the_stimulus

>> No.7897859
File: 18 KB, 491x235, cX13J.png

>Dear CSists

I was reviewing some formal language theory for a seminar I'm giving. I am not a CS person.

Here's the thing: this whole branch of "math" seems like trivial BS. E.g., the CFG–nondeterministic PDA equivalence literally reads like a theory of smashing your fingers into a keyboard. The first results you see are almost always the pumping lemmas, which are pretty much ornate facts about counting and running out of fingers, and which don't even fully characterize the classes they describe.
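The CFG–PDA complaint can at least be made concrete. The textbook example is {aⁿbⁿ : n ≥ 1}: a context-free language that a pushdown automaton accepts with one stack, but that no finite automaton accepts (by the regular-language pumping lemma — the "running out of fingers" the post alludes to). A minimal stack-based sketch, with the function name made up for this example:

```python
def pda_accepts_anbn(word):
    """Deterministic PDA-style recognizer for {a^n b^n : n >= 1}.

    Context-free but not regular: any DFA with k states must confuse
    a^i and a^j for some i != j <= k+1, so it cannot match the counts.
    """
    stack = []
    seen_b = False
    for ch in word:
        if ch == "a":
            if seen_b:           # an 'a' after a 'b': wrong shape, reject
                return False
            stack.append("a")    # push one stack symbol per 'a'
        elif ch == "b":
            seen_b = True
            if not stack:        # more b's than a's
                return False
            stack.pop()          # pop one stack symbol per 'b'
        else:
            return False         # alphabet is {a, b} only
    return seen_b and not stack  # accept iff counts matched and n >= 1
```

The stack here is exactly the PDA's stack; the rest of the control is a two-state finite automaton, which is the shape the equivalence theorem predicts.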

I ask you guys this: please prove me wrong. In the sense that I don't feel like I know enough of the "big results" of formal language theory to criticize it fairly. What are the "major theorems"? What can you do with it that is more straightforward than other methods from algebra?
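One concrete answer to "what can you do with it": the CFG formalism comes with generic parsing algorithms that work for *any* grammar in the class. The CYK algorithm decides membership for an arbitrary grammar in Chomsky normal form in O(n³) time — no per-language cleverness required. A minimal sketch (the grammar encoding and all names are this example's own convention), using a CNF grammar for {aⁿbⁿ : n ≥ 1}:

```python
def cyk(word, grammar, start="S"):
    """CYK membership test for a grammar in Chomsky normal form.

    `grammar` maps each nonterminal to a list of bodies; a body is
    either a single terminal character or a pair of nonterminals.
    """
    n = len(word)
    if n == 0:
        return False  # empty-word handling omitted for brevity
    # table[i][l-1] = set of nonterminals deriving word[i:i+l]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):                     # length-1 substrings
        for head, bodies in grammar.items():
            if ch in bodies:
                table[i][0].add(head)
    for length in range(2, n + 1):                    # longer substrings
        for i in range(n - length + 1):
            for split in range(1, length):            # try every cut point
                for head, bodies in grammar.items():
                    for body in bodies:
                        if (isinstance(body, tuple)
                                and body[0] in table[i][split - 1]
                                and body[1] in table[i + split][length - split - 1]):
                            table[i][length - 1].add(head)
    return start in table[0][n - 1]

# CNF grammar for {a^n b^n : n >= 1}:
#   S -> A T | A B,  T -> S B,  A -> a,  B -> b
GRAMMAR = {
    "S": [("A", "T"), ("A", "B")],
    "T": [("S", "B")],
    "A": ["a"],
    "B": ["b"],
}
```

The point of the theory is that this one dynamic-programming table works uniformly over the whole class — that uniformity is what the equivalence and normal-form theorems buy you.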

>Pro version: what results aren't simply about word problems, or putting a metric on proof procedures?


