
/sci/ - Science & Math


>> No.4400568 [View]

>>4400555
Damn, I knew I shouldn't have posted as anon in the last few threads. I missed my fan club. You don't know how much your support means to me, anon!

>> No.4400547 [DELETED]  [View]

You gave the intransitive definition of "to drink". Give the transitive definition; I'm pretty sure it says its object has to be a liquid.

>> No.4400529 [View]

Consider that the necklace's beads are, in order but starting at an arbitrary bead, [math](b_1,\dots,b_n)[/math]. Now we consider the set [math]\mathcal{X} =\{(x_1,\dots,x_n)~|~(x_1,\dots,x_n)[/math] is a cyclic permutation of [math](b_1,\dots,b_n)\}[/math].

We consider n>1 (n=1 is trivial). Suppose the statement of the problem is false. Then for each [math](x_1,\dots,x_n)\in \mathcal{X}[/math], there is a [math]k[/math] such that [math]\sum_{i=1}^kx_i\geq k[/math]. For a given sequence [math]X=(x_1,\dots,x_n)\in \mathcal{X}[/math], let us denote by [math]k(X)[/math] the smallest such [math]k[/math]. Let us now consider [math]X_0[/math], a sequence which maximizes [math]k(X)[/math], and write [math]k(X_0)=k_0[/math]: we have [math]\forall X\in\mathcal{X},\ k(X)\leq k_0[/math]. Clearly [math]k_0>1[/math]: otherwise every rotation would already have its first element [math]\geq 1[/math], so every [math]b_i\geq 1[/math] and the sum would be at least [math]n[/math], not [math]n-1[/math].

We can write [math]X_0=(b_{a+1},\dots,b_{a+n})[/math] for some integer [math]a[/math], with indices taken modulo [math]n[/math]. Since the [math]b_i[/math] are integers, the minimality of [math]k_0[/math] gives [math]\sum_{i=1}^m b_{a+i}\leq m-1[/math] for all [math]m<k_0[/math]. Now if [math]b_{a+n}\leq 0[/math], then for all [math]m<k_0[/math], we have
[math]b_{a+n}+ \sum_{i=1}^m b_{a+i}\leq 0+(m-1)[/math]
[math]\sum_{i=1}^{m+1} b_{(a-1)+i}\leq (m+1)-1[/math]
and since the length-1 partial sum is [math]b_{a+n}\leq 0[/math] as well, [math]k(b_{(a-1)+1},\dots,b_{(a-1)+n})>k_0[/math], which is a contradiction. Therefore, [math]b_{a+n}\geq 1[/math].
The same reasoning extends to [math]b_{a+n}+b_{a+(n-1)}\geq 2[/math], and eventually to [math]\sum_{i=k_0+1}^n b_{a+i}\geq n-k_0[/math].
Therefore, since [math]\sum_{i=1}^{k_0} b_{a+i}\geq k_0[/math] by the definition of [math]k_0[/math],
[math]\sum_{i=1}^n b_i= \sum_{i=1}^n b_{a+i}= \left(\sum_{i=1}^{k_0} b_{a+i}\right) +\left(\sum_{i=k_0+1}^n b_{a+i}\right) \geq k_0+(n-k_0)=n>n-1[/math].

This is a contradiction, and we're done.
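
If anyone wants to sanity-check this numerically, here's a quick brute force (assuming, as in the problem, integer beads summing to n-1; this is purely a check, not part of the proof):

[code]
import random

def has_good_rotation(beads):
    """True iff some rotation has every partial sum <= k - 1."""
    n = len(beads)
    for a in range(n):                  # candidate starting bead
        s, ok = 0, True
        for i in range(n):
            s += beads[(a + i) % n]
            if s > i:                   # sum of first i+1 beads must be <= i
                ok = False
                break
        if ok:
            return True
    return False

for _ in range(10000):
    n = random.randint(1, 8)
    beads = [random.randint(-5, 5) for _ in range(n - 1)]
    beads.append((n - 1) - sum(beads))  # force the total to be n - 1
    assert has_good_rotation(beads)
print("no counterexample found")
[/code]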

>> No.4379798 [View]

>>4379793
Beeeeeecause he first shared the same name, then the same tripcode, and showed the same problem-solving ability the whole time?

>> No.4379783 [View]

>>4379761
I would rather think that people just hate tripfags indiscriminately. I posted anonymously in the last 3 Putnam threads and didn't get the same hate. Now that I know that, I'll probably go back to tripfagging, since I can fairly assume I'm not really being the dick people tell me I am (which I would otherwise try to fix, since I do take criticism into account).

>> No.4379766 [View]

>>4379756
No I'm not the moderator, and I only posted like two of these threads when nobody else posted them. But anyway, that guy didn't notice you were here before you got a trip, it seems. See http://archive.installgentoo.net/sci/thread/S4337992 and previous uses of "Yo, I got this one" in the archives.

>> No.4371984 [DELETED]  [View]

Only 3 configurations are possible. If you place a mine on the 2nd square, you necessarily have one on the 5th and none elsewhere. If you have a mine on the 3rd square, you have no other mine. If you have a mine on the 4th square, you have a mine on the 1st and no mine anywhere else. Assuming you don't know how many mines are left to discover, you can take the 3 configurations to be equally likely, so each square has a 1/3 chance of being mined. You'd need a finer analysis, and more data about the rest of the board and about how the mines are drawn, to refine that.
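
To make the counting explicit, a minimal sketch with the three layouts above hard-coded and assumed equally likely:

[code]
# The three mine layouts consistent with the clues, squares numbered 1-5,
# assumed equally likely (as argued above).
layouts = [{2, 5}, {3}, {1, 4}]

for square in range(1, 6):
    p = sum(square in lay for lay in layouts) / len(layouts)
    print(f"square {square}: P(mine) = {p:.2f}")   # 1/3 for every square
[/code]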

>> No.4368026 [View]

>>4367995
Oh cool, now I can get insulted when others post. Awesome. Also if you had checked these stickies often enough, you'd know that we aren't the same person.

>> No.4349294 [View]

>>4347722
>Yes, I liked it more when it isn't obvious that it comes from an arrogant aspie.
Arrogant? Am I looking down on anyone? Am I unable to admit when I'm wrong? (Those aren't rhetorical questions)
Maybe I'm annoying you, maybe I write too much, but please, any time you consider something really arrogant in one of my posts, let me know, because I don't think it happens but if it does, I want it fixed.

>> No.4347394 [View]

>>4347335
I'm the one who posted the problem, and who gave >>4346652
Strange how you liked my posts better when I was posting anonymously.

>> No.4347242 [View]

>>4347210
Yeah maybe I should have posted the B6 from yesterday instead of this one. It was something a bit related to the cutting sticks problems. Didn't really think about it but it was more interesting for sure.

>> No.4340609 [View]

>>4340571
If I understood you correctly, you pick the centers that lie in an annulus and want some disks centered in it to cover the whole annulus. You might not be able to, since part of the coverage may come from the outer or inner annulus. This can probably be fixed, though, and it's a way to reduce the problem to smaller subproblems (filling the annuli). I'm not sure it's actually better than dividing into square-shaped regions, for instance: I think squares reduce the amount of border (for subproblems of roughly the same size), which reduces the border effects. Still, if the subproblems become much simpler to treat because you can easily sort the centers by angle, it would probably be a good pruning pass before running a more complex, closer-to-optimal algorithm.

>> No.4340527 [View]

>>4340510
Yeah. I didn't think of it that way. If the array of neighbors still fits in RAM, I think the idea can still work, even if it might be less efficient than I thought. Otherwise I don't really see how it can be cleverly adapted, besides running the algorithm area by area with a limited number of points, and then, once dense areas have been pruned, running it again on the result (losing a bit more optimality on the borders of the areas, but whatever...).

>> No.4340443 [View]

Also if you don't mind, I think I'll use that problem in tests. It's kinda cool.

>> No.4340436 [View]

>>4340426
I think this is a realistic approach if [math]f(d)[/math] isn't too big for your value of [math]d[/math]. The only hard part is coding the procedure that figures out the new area covered by a new disk, but this can be estimated instead of computed exactly. You can use squares instead of circles, for instance: squares larger than the circle for the first passes of the algo, then squares inscribed in the circle for the last passes, so that no point that should have been covered is left uncovered because of the approximation.
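
Concretely, something like this, assuming covered area is tracked on a boolean grid of cell size h (the grid, h and the names are my own choices, nothing imposed by the problem):

[code]
import math
import numpy as np

def new_area_square(covered, cx, cy, r, h, outer=True):
    """Estimate the area a disk at (cx, cy), radius r, would newly cover.

    covered: boolean grid of already-covered cells, cell size h.
    outer=True  -> circumscribed square (over-estimate, cheap early passes)
    outer=False -> inscribed square (safe under-estimate, final passes)
    """
    half = r if outer else r / math.sqrt(2)        # half-side of the square
    i0 = max(0, int((cx - half) / h)); i1 = int((cx + half) / h) + 1
    j0 = max(0, int((cy - half) / h)); j1 = int((cy + half) / h) + 1
    block = covered[i0:i1, j0:j1]
    return (block.size - block.sum()) * h * h      # uncovered cells * cell area
[/code]

Marking the grid once a disk is accepted works the same way (set the block to True); using the inscribed square for the final passes under-marks, so no point is wrongly believed to be covered.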

>> No.4340426 [View]

Okay, so I'd first create an array of length 1.7M where each entry lists every node within 2r of the corresponding node. In 2D, I'm pretty sure this takes less than [math]O(n^2)[/math] time and is actually doable. It also fits within your RAM, assuming you don't have too many postboxes too close to each other.
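
The within-2r lists can be built with a uniform grid of cell size 2r, so each node is only tested against the 9 surrounding cells, which is roughly [math]O(nd)[/math] unless the points are pathologically clustered (names here are arbitrary):

[code]
from collections import defaultdict

def neighbors_within(points, radius):
    """For each point, the indices of the other points within `radius`."""
    cells = defaultdict(list)                 # bucket points by grid cell
    for idx, (x, y) in enumerate(points):
        cells[(int(x // radius), int(y // radius))].append(idx)

    out, r2 = [[] for _ in points], radius * radius
    for idx, (x, y) in enumerate(points):
        cx, cy = int(x // radius), int(y // radius)
        for dx in (-1, 0, 1):                 # scan the 3x3 block of cells
            for dy in (-1, 0, 1):
                for jdx in cells.get((cx + dx, cy + dy), ()):
                    if jdx == idx:
                        continue
                    px, py = points[jdx]
                    if (px - x) ** 2 + (py - y) ** 2 <= r2:
                        out[idx].append(jdx)
    return out

# neighbors = neighbors_within(centers, 2 * r)
[/code]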

I'd create an empty list L which will store the nodes I want in the end.
Starting from there, I'd:
1) run through all the centers in my array, adding to L the nodes I go through, unless I've already added to L one of their neighbors: [math]O(nd)[/math] (where [math]d[/math] is the degree of the neighborhood graph),
2) run through all the centers, computing the maximum new area that a new center can cover if added to L: [math]O(nd)[/math] or [math]O(nd^2)[/math] or something like that which is linear in [math]n[/math] and maybe more than linear in [math]d[/math]; let's say [math]O(nf(d))[/math],
3) run through all the centers, adding to L those that add almost the maximum new area (figure out this parameter by trial and error; the higher, the better, as long as the algorithm completes in a reasonable time), under the condition that none of their neighbors has been added yet: [math]O(nd)[/math] if you stored the new area per center, [math]O(nf(d))[/math] otherwise,
4) Go back to step 2 until your maximum is 0. (Rough sketch of all four steps below.)
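
A rough sketch of steps 1-4, assuming the neighbor lists from the snippet above and some estimator new_area(i, chosen) for step 2 (both names are mine); alpha is the "almost the maximum" parameter, and I'm reading "no neighbors added yet" in step 3 as "within the same pass", since after step 1 every remaining center already has a chosen neighbor:

[code]
def greedy_cover(centers, neighbors, new_area, alpha=0.9):
    """Greedy selection following steps 1-4; alpha tuned by trial and error."""
    chosen = set()
    # Step 1: take every center none of whose neighbors is already taken.
    for i in range(len(centers)):
        if not any(j in chosen for j in neighbors[i]):
            chosen.add(i)
    while True:
        # Step 2: estimated new area of each remaining center.
        gains = {i: new_area(i, chosen)
                 for i in range(len(centers)) if i not in chosen}
        best = max(gains.values(), default=0.0)
        if best <= 0:                    # Step 4: stop once nothing helps
            return chosen
        # Step 3: add everything close to the best gain, but never two
        # neighbors in the same pass (their gains were computed together).
        added = set()
        for i, g in gains.items():
            if g >= alpha * best and not any(j in added for j in neighbors[i]):
                added.add(i)
        chosen |= added
[/code]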

>> No.4340173 [View]

>>4340166
It does 4 comparisons for two cells. OP does 3 for only one cell.

>> No.4340171 [View]

>>4340150
Yeah, but in the end you only remove the comparisons that check whether the loop is done; for your array of size 10000, you still need to find the min and the max. Assuming that the end-of-loop test is cheap and that comparing array elements is costly (say they're not integers, but pointers to stuff that you can't compare very fast and can't just convert into an array of numbers), you wouldn't gain anything by just processing big arrays without a clever reorganization of the comparisons.

>> No.4340153 [View]

>>4340146
It's actually 4 comparisons per pair of cells (so 2 per cell):
- 1 to know which is the min and which is the max of the two cells,
- 1 to compare the min to the previously stored min,
- 1 to compare the max to the previously stored max,
- 1 for the loop.

>> No.4340129 [View]

>>4340121
Actually, counting the loop condition as OP does, it would be 2 and not 3/2 comparisons per cell. And in my discussion about "can we get it down", I also put the loop condition aside.

>> No.4340121 [View]

>>4340117
Well, it's 3/2 comparisons per cell, including the one to determine which of the pair is the min and which is the max. He's not just onto something; he has actually described a correct answer.

I wonder if that can be applied somewhat recursively, or with larger groups of cells checked together, to reduce the 3/2 factor down to 1+epsilon.
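
For reference, the scheme being discussed, with a comparison counter (loop-test comparisons ignored, as above):

[code]
def min_max(a):
    """Simultaneous min and max in ceil(3n/2) - 2 element comparisons."""
    if not a:
        raise ValueError("empty array")
    n, comps = len(a), 0
    if n % 2:                    # odd length: seed with the first element
        lo = hi = a[0]
        i = 1
    else:                        # even length: seed with the first pair
        comps += 1
        lo, hi = (a[0], a[1]) if a[0] < a[1] else (a[1], a[0])
        i = 2
    while i < n:
        comps += 3               # 1 to order the pair, 2 against lo/hi
        x, y = (a[i], a[i + 1]) if a[i] < a[i + 1] else (a[i + 1], a[i])
        if x < lo:
            lo = x
        if y > hi:
            hi = y
        i += 2
    return lo, hi, comps

print(min_max([5, 1, 9, 3, 7, 2]))   # (1, 9, 7): 7 = 3*6/2 - 2 comparisons
[/code]

IIRC, [math]\lceil 3n/2\rceil-2[/math] comparisons is a known worst-case lower bound for finding both the min and the max (adversary argument), so the 3/2 factor can't actually be driven down to 1+epsilon.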

>> No.4339397 [View]

>>4339391
I have yet another CS/math friend who works with neural networks, specifically associative memories using neural networks. His former PhD advisor (and potentially future boss again) got a 2M€ grant for this project, so it should be quite awesome. He plans to actually build something in the relatively near future ("near" as in "research-near").

>> No.4339273 [View]

>>4339271
Huh, I didn't notice there was such a big difference. I withdraw my comment about numerically solving this; I doubt it would create such a big discrepancy.

>> No.4339271 [View]

>>4339270
Wolfram probably solved this numerically. That happens with systems of non-linear equations.
