
/jp/ - Otaku Culture


>> No.38346807
File: 115 KB, 2172x1120, graph (1).png

>>38346790

>> No.38297014
File: 115 KB, 2172x1120, 1626471149533.png

>>38296988
if you really want to get into the specifics you'll have to read up on some papers
i went down a similar investigation when i questioned the validity of picrel and found scientific papers indicating there is at least some basis of truth to it, even if the line isn't a perfect match

>> No.37163247
File: 115 KB, 2172x1120, 1626471149533.png

>>37163168
i too use random pictures involving japanese to blogpost and get away with off-topic shit

i hate going to supermarkets because i always get lost
this post is on-topic because i have a japanese-related picture

>> No.36197679
File: 115 KB, 2172x1120, graph(1).png

>>36197607
>before you at least picked a point and learned the words below that threshold
How did you pick it? Probably using a graph like this. Now you don't need a graph like this: you can just aim for everything under 95% coverage as a beginner and everything under 98% as an upper beginner, no matter which freq list you use (vn, narou, bccwj)

>>36197607
>you have to find the new threshold and you don't even know how many words that is in relation to how many you know
Why should that matter? If a word is common, it's common no matter how many words you know; same goes for uncommon words
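the thresholding idea is easy to sketch in code. a minimal Python example (the word counts here are made up, not taken from any real freq list): sort by count, walk the cumulative coverage, and stop at the first rank that reaches the target.

```python
# Find how many of the most common words you need to cover
# 95% / 98% of all tokens, given a frequency list.
from itertools import accumulate

# hypothetical (word, count) pairs, sorted by descending count
freq_list = [("の", 500), ("に", 400), ("は", 350), ("を", 300),
             ("た", 250), ("が", 200), ("で", 150), ("て", 100),
             ("と", 80), ("し", 50)]

counts = [c for _, c in freq_list]
total = sum(counts)
# cumulative coverage: fraction of all tokens covered by the top-k words
coverage = [c / total for c in accumulate(counts)]

def rank_for_coverage(target):
    """Smallest rank whose cumulative coverage reaches `target`."""
    for rank, cov in enumerate(coverage, start=1):
        if cov >= target:
            return rank
    return len(coverage)  # list too small to ever hit the target

print(rank_for_coverage(0.95))  # → 9 (for these made-up counts)
print(rank_for_coverage(0.98))  # → 10
```

with a real corpus-sized list the same loop tells you the rank cutoff behind "everything under 95%", which is exactly the point: the threshold lives on the coverage axis, not the rank axis.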

>> No.35720682
File: 115 KB, 2172x1120, graph.png

another way to think about this: if you graph the data from any of the freq dicts you will get this exact shape, but the numbers at the bottom will be arbitrarily stretched and squashed depending on differences in size and methodology
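a quick sketch of that point with synthetic Zipf-like counts (an assumption for illustration, not real corpus data): two lists of different sizes produce the same rising-then-flattening coverage curve, but the rank needed to reach a given coverage level gets stretched with list size.

```python
# Compare the coverage curves of two synthetic frequency lists
# of different sizes; the shape is shared, the x-scale is not.
def coverage_curve(counts):
    """Cumulative coverage fraction at each rank."""
    total = sum(counts)
    running, curve = 0, []
    for c in counts:
        running += c
        curve.append(running / total)
    return curve

def rank_at(curve, target):
    """First rank whose coverage reaches `target`."""
    return next(i for i, cov in enumerate(curve, 1) if cov >= target)

# Zipf-like counts (count ~ 1/rank), two list sizes
small = [1000 // r for r in range(1, 101)]     # 100-word list
large = [10000 // r for r in range(1, 1001)]   # 1000-word list

# both curves rise steeply then flatten toward 1.0;
# only the rank where 95% is reached differs
print(rank_at(coverage_curve(small), 0.95))
print(rank_at(coverage_curve(large), 0.95))
```

same shape, different stretch: quoting a raw rank from one list tells you little about another, but a coverage percentage transfers directly.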

>> No.35508348
File: 115 KB, 2172x1120, graph (1).png

bros....

>> No.35353097
File: 115 KB, 2172x1120, graph (1).png

>>35353092
around 95%

>> No.35336472
File: 115 KB, 2172x1120, graph (1).png

>>35336424
oh, that one has readings too? that's cool. the one you linked me before didn't. but this GitHub one just shows rank, which I recently determined is kind of useless, so I changed all 3 of my freq dicts to use the Y axis of this type of graph rather than the X axis. this way the results are normalized and meaningful without having to analyze the dataset yourself
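the remapping itself is a one-pass job. a minimal Python sketch with made-up counts: replace each word's raw rank with the cumulative coverage percentage at that rank, so the number shown in the dict is comparable across freq lists of different sizes.

```python
# Remap a frequency list from rank (x-axis) to cumulative
# coverage percentage (y-axis).
def rank_to_coverage(freq_list):
    """freq_list: (word, count) pairs sorted by descending count.
    Returns {word: cumulative coverage % at that word's rank}."""
    total = sum(c for _, c in freq_list)
    running, out = 0, {}
    for word, count in freq_list:
        running += count
        out[word] = round(100 * running / total, 1)
    return out

# hypothetical counts for illustration
freq_list = [("a", 600), ("b", 250), ("c", 100), ("d", 50)]
print(rank_to_coverage(freq_list))
# → {'a': 60.0, 'b': 85.0, 'c': 95.0, 'd': 100.0}
```

a word tagged 95.0 here means "knowing everything this common or more covers 95% of the corpus", which reads the same way regardless of which freq list produced it.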

>> No.35287253
File: 115 KB, 2172x1120, graph (1).png

>>35286919
hi. do you find the number all that useful? i feel like it's a little arbitrary unless you have a graph like pic related.

except you'd need a graph for every frequency dict you use, and you'd have to keep referencing these graphs. so why not cut out the middleman?

instead of putting the x-axis of this graph in the dict, put the y-axis in

thoughts?
