tsimionescu 3 hours ago

This is an interesting case of a conjecture holding true for small objects but breaking down for huge ones, without that size being built into the statement somehow.

Sometimes we tend to have this intuition that if a rule applies to all small numbers, then it must apply to all numbers, that there can't be some huge number after which it breaks down (unless of course it explicitly includes that huge number, such as the rule "all numbers are smaller than a billion billion billion").

This is such a powerful intuition, even though it's obviously wrong, that rules which break it are even seen as a problem. For example, there is the so-called "hierarchy problem" in physics, which states something like "there is no reason why the weak force is so much stronger than gravity". As if 2 times as strong were somehow fundamentally different from 10^24 times as strong. And this has ended up being called a major problem with the Standard Model, even though it's completely normal from a mathematical perspective.

  • ants_everywhere 11 minutes ago

    One way to think of this is that smaller numbers are more constrained in a sense. As numbers get bigger more possibilities arise.

    For example, in lower dimensions you get accidental isomorphisms [0]. These exist essentially because the defining equations are more constrained in lower dimensions.

    A more tongue-in-cheek intuition is the interesting number paradox, which says essentially that every integer is interesting. One of the things that can make a number interesting is that there's a property for which it's the first counterexample.

    [0] https://en.wikipedia.org/wiki/Exceptional_isomorphism

  • snarkconjecture 30 minutes ago

    There are two kinds of naturalness principle in physics, sometimes called "technical naturalness" and "Dirac naturalness" respectively.

    Dirac naturalness is as you describe: skepticism towards extremely large numbers, end of story. It has the flaw you (and every other person who's ever heard it) point out.

    Technical (or 't Hooft) naturalness is different, and specific to quantum field theory.

    To cut a long story short, the "effective", observable parameters of the Standard Model, such as the mass of the electron, are really the sums of enormous numbers of contributions from different processes happening in quantum superposition. (Keywords: Feynman diagram, renormalization, effective field theory.) The underlying, "bare" parameters each end up affecting many different observables. You can think of this as a big machine with N knobs and N dials, but where each dial is sensitive to each knob in a complicated way.

    Technical naturalness states: the sum of the contributions to e.g. the Higgs boson mass should not be many orders of magnitude smaller than each individual contribution, without good reason.

    The Higgs mass that we observe is not technically natural. As far as we can tell, thousands of different effects due to unrelated processes are all cancelling out to dozens of decimal places, for no reason anyone can discern. There's a dial at 0.000000.., and turning any knob by a tiny bit would put it at 3 or -2 or something.
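
    A toy numerical sketch of that picture (all numbers made up, nothing to do with the actual Standard Model calculation): lots of large contributions tuned to cancel almost exactly, and how fragile that cancellation is.

      import random

      random.seed(0)
      # 1000 "contributions" of order 1e10, summed into one observable
      contributions = [random.uniform(-1.0, 1.0) * 1e10 for _ in range(1000)]

      # Tune one contribution by hand so that everything cancels down to ~1
      contributions[0] -= sum(contributions) - 1.0
      print(sum(contributions))   # ~1, despite individual terms of order 1e10

      # Nudge a single "knob" by one part in a million: the cancellation is gone
      contributions[1] *= 1 + 1e-6
      print(sum(contributions))   # now off by thousands instead of ~1

    Nobody thinks the real cancellation is literally tuned by hand, of course; the sketch is only meant to show how delicate that kind of cancellation is.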

    There are still critiques to be made here. Maybe the "underlying" parameters aren't really the only fundamental ones, and somehow the effective ones are also fundamental? Maybe there's some reason things cancel out, which we just haven't done the right math to discover? Maybe there's new physics beyond the SM (as we know there eventually has to be)?

    But overall it's a situation that, imo, demands an explanation beyond "eh, sometimes numbers are big". If you want to say that physical calculations "explain" anything -- if, for example, you think electromagnetism and thermodynamics can "explain" the approximate light output of a 100W bulb -- then you should care about this.

solidsnack9000 11 hours ago

It is helpful that the post links to the Wikipedia page: https://en.wikipedia.org/wiki/Bunkbed_conjecture

Reading that and then rereading the post, it all made a whole lot more sense: why the conjecture is intuitively appealing and why the computational approach doesn't readily result in a proof.

  • yread 8 hours ago

    I don't know. To me it sounds like it's obvious the conjecture doesn't hold: if you only have a path in the upper bunk and it gets broken, you're screwed, but in the bunkbed you have the option to switch to the path in the lower bunk at ANY moment. So you have an (n+1)/n higher chance of a break, but n-1 ways to avoid it.

    • winwang 3 hours ago

      Intuitively (subjectively), why this argument doesn't work: if you switch to the path in the lower bunk, you can just as well switch back to the upper bunk "at any moment", regardless of whether a path exists in the lower bunk.

      In this example, a1 is connected to both c1 and c2:

        a1    b1 -- c1
        |     |     |
        a2 -- b2 -- c2
      
      If, instead of deleting edges, we assigned a distance d = 1/p, then you'd expect (and probably easily prove) that the distance to the target vertex is at most the distance to its partner in the other bunk. The fact that this intuitive "problem translation" doesn't actually translate to probabilities is quite surprising (to me)!
    • supernewton 4 hours ago

      The conjecture is true for all small graphs that they tried, so if it's "obviously" false to you then something went wrong with your intuition somewhere.
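
      For what it's worth, here's a throwaway Monte Carlo sketch on a tiny bunkbed (two copies of the 3-vertex path a-b-c, posts at every vertex, an arbitrary p = 0.5 for every edge). It's only an estimate, not a proof, but the same-bunk probability does come out larger:

        import random

        # Upper copy a1-b1-c1, lower copy a2-b2-c2, plus a post at every vertex
        edges = [
            ("a1", "b1"), ("b1", "c1"),                 # upper bunk
            ("a2", "b2"), ("b2", "c2"),                 # lower bunk
            ("a1", "a2"), ("b1", "b2"), ("c1", "c2"),   # posts
        ]
        p = 0.5  # arbitrary survival probability for every edge

        def connected(surviving, s, t):
            # simple graph search over the edges that survived deletion
            seen, frontier = {s}, [s]
            while frontier:
                v = frontier.pop()
                for a, b in surviving:
                    if a == v and b not in seen:
                        seen.add(b); frontier.append(b)
                    elif b == v and a not in seen:
                        seen.add(a); frontier.append(a)
            return t in seen

        trials, same, cross = 100_000, 0, 0
        for _ in range(trials):
            surviving = [e for e in edges if random.random() < p]  # one flip per edge
            same += connected(surviving, "a1", "c1")   # same-bunk target
            cross += connected(surviving, "a1", "c2")  # other-bunk target

        print(same / trials, cross / trials)  # should come out near 0.34 vs 0.31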

    • tech_ken 4 hours ago

      I'm pretty sure you're sampling the "bedposts" as well; the upper and lower subgraphs aren't just connected at every node. I understood the rough (and incorrect) intuition to be something like:

      P[nodes in upper bunk connected] > P[nodes in lower bunk connected] * P[sufficient "bedpost" edges survive connecting upper and lower]

      Since

      P[nodes in upper bunk connected] = P[nodes in lower bunk connected]

      And by definition

      P[sufficient "bedpost" edges survive connecting upper and lower] < 1

  • knodi123 10 hours ago

    I still don't get it. So you make this graph, with a random number on each edge. Then you duplicate that graph, stack them like a bunkbed, and connect every lower vertex to the one above it. Then you delete all edges with a number below some threshold. Wouldn't the upper and lower graphs still be identical? Since you had the same edge weights in both? So the route from any upper node X to any upper node Y should be 1 less than the route length between X and the point beneath Y.

    What did I miss?

    • cochne 10 hours ago

      There is no thresholding, you delete edges randomly based on the probability assigned.

      • knodi123 10 hours ago

        Thanks, that's an important distinction.

    • TheRealPomax 10 hours ago

      You missed the part where the conjecture is about subgraphs: after forming the "bunk bed", we start randomly deleting edges based on each edge's probability, so there may no longer be a path from x to y within what was originally the upper graph.

      • torstenvl 5 hours ago

        That seems irrelevant since then there would be no path in the lower graph either. This must be true because deletions are based on probability and the corresponding edges between upper graph and lower graph are required to have the same probability.

        • calfuris 4 hours ago

          Having the same probability of deletion doesn't mean that they will necessarily share the same fate.

          • torstenvl 3 hours ago

            Ah. The Wikipedia description of the conjecture does not say that each edge is independently randomly deleted (or not) based on the probability assigned to that edge. I just assumed the probabilities were the weights in a weighted graph and had some other effect (e.g., likelihood of success traversing that edge, comparative likelihood of choosing that edge vs another when traversing the graph, etc.).

        • arjvik 4 hours ago

          Same probability, but still independently sampled.

        • TheRealPomax 3 hours ago

          Each edge is independent of every other edge: you don't roll one die and then go "4: remove every edge that's 4 or less"; you make a separate roll for each edge.
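
          In throwaway pseudo-code (the probability value is made up), the difference looks like this:

            import random

            p = 0.7  # some edge's assigned probability (made-up value)

            # What the conjecture does: corresponding upper/lower edges share the
            # same p, but each gets its own independent coin flip, so their fates
            # can differ.
            upper_edge_survives = random.random() < p
            lower_edge_survives = random.random() < p

            # The "threshold" reading would reuse one roll for both copies,
            # which would force the two bunks to stay identical.
            roll = random.random()
            upper_copy_survives = roll < p
            lower_copy_survives = roll < p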

andrewflnr 10 hours ago

While I wouldn't accept a probabilistic "proof" of a theorem like this, it does seem clear to me that those results are important for directing the focus of research, especially in cases where they go against intuition. Given that most of the math community was barking up the wrong tree, even if these guys only had the probabilistic result, surely that would eventually help someone find the right proof? That's at least worth publishing as a letter or something, right?

Ed: reflecting on my first sentence, I guess I tend to think of proofs as being at least as important a product of math as the raw truth of a statement. A fact is a fact, but a proof represents (some level of) understanding, and that's the good stuff. Experiments are potentially good science, but usually not math.

  • bubblyworld 10 hours ago

    Can you elaborate? It's possible to prove probabilistic results about discrete objects rigorously - in fact this is a reasonably common technique (going back to work by Paul Erdos, and probably further). It can give you concrete information about the existence or non-existence of these objects (if something exists with non-zero probability in a finite set there has to be at least one example). It's not a "less valid" way to prove things, just a particular choice of approach. Sometimes it makes things easier, sometimes it doesn't.

    • ilya_m 9 hours ago

      We are talking about different kinds of "probabilistic" proofs. Igor Pak's blog post and the root message in this thread refer to statements that are validated only probabilistically. The probabilistic method, by Erdos and others, yields proofs as rigorous as any. "Probabilities" in this method are just a convenient apparatus for counting objects.

      (As an aside, there's an important distinction with statements that are believed to be true based on empirical evidence, such as the hardness of factoring or the Riemann hypothesis. If the bunkbed conjecture were refuted based on Monte Carlo simulations, it'd be possible to push the level of uncertainty arbitrarily low by throwing more resources at running more simulations.)

      • bubblyworld 9 hours ago

        Right - I admit I stopped reading the blog post at the point they linked to the paper and started reading that instead (which contains a rigorous proof). Having read the rest of the blog now I can see where the OP is coming from.

    • feoren 6 hours ago

      > if something exists with non-zero probability in a finite set there has to be at least one example

      I think you mean an infinite set here. If I have a set containing one single coin, and I flip every coin in my set, both heads and tails have a non-zero probability to exist within my flipped set. But there's certainly not an example of both heads and tails in my single-coin set. Indeed for any finite set of coins, there's some chance that either heads or tails never shows up.

      However, if I have an infinite set of coins and I flip them all, certainly both heads and tails must exist (although I wouldn't be surprised if even this depends on the axiom of choice or something.)

      • zamadatix 4 hours ago

        I think GP did mean finite, just not stated perfectly clearly (and also somewhat missing what the article was focusing on). I.e. if there is some finite set and there is a non-zero probability of something existing in it, then there must be at least one _possible_ example of how this can occur - otherwise the probability could not be non-zero. It's not that a single given instance of the finite set must show that result, just that one possibility of what it could contain must.

        What you're saying about the infinite-set version of GP's statement depends on what flavor of statistics you're measuring by. E.g. a frequentist approach would give the probability of something that occurred 50 times out of infinitely many trials as 0, while other approaches may give a small non-zero probability.

        Flipping an infinite set of coins is actually an inverse view of what's described above, and it's not necessarily "certain" even though the probability is 1 ("almost surely" rather than "certainly" is the wording), for a similar reason: an outcome of probability 0, like all tails forever, is still a possible outcome of the infinite sequence. The Wikipedia article explains this better than I could: https://en.wikipedia.org/wiki/Almost_surely

    • samatman 9 hours ago

      What you're describing is different from the distinction the parent is making, which is between providing strong probabilistic evidence that a conjecture is true, versus proving the conjecture true through a rigorous formal proof.

      Often there are three useful states: none, some, and all. You describe a rigorous proof-of-some, because a proven probability of not-zero guarantees a counterexample exists. It's still news when or if the counterexample is found, but given the standard of rigorous proof is met, it does exist. Of course another option is to construct a counterexample, as was done here.

      The case of probability discussed in the Fine Article was rigorously demonstrating a 99.99% chance that the probability is not zero. That's informative, but it isn't proof. A proof that the probability is not zero would also count, even absent the counterexample which we now know (rather than merely expect on good grounds) exists. But that isn't the case being discussed here.

      • bubblyworld 9 hours ago

        Yeah, I missed that there was a discussion of empirical evidence in mathematics in the blog post (rather than the probabilistic method).

dataflow 11 hours ago

> This is completely obvious, of course!

Could someone explain why the conjecture seemed obviously true? I have no background with it, but just reading the description here, I was caught off guard by the sentence. What made it seem obvious?

  • jprete 10 hours ago

    The upper and lower bunk graphs are symmetrical in their probabilities, so it would intuitively seem like connecting to the same bunk's point would be easier/more likely than connecting to the other bunk's point. The former would require zero (or an even number of) crossings, the latter would require one (or an odd number of) crossings. Every crossing would seem to only decrease the odds of the path existing, or at best leave them the same.

    To me it smells like a trap, though. This feels like exactly the kind of problem where some algorithmically chosen infinite graph would break the conjecture, possibly through some kind of NP-hardness-proof-style transformation of the problem into another form, and who knows, maybe the Axiom of Choice gets involved too (but probably not based on the headline!).

    • joelignaatius 10 hours ago

      I may be missing something. If there's an edge u1 to v1 and, in the symmetrical graph, u2 to v2, then any post has to run between u1 and u2 for some u. Which means traversing a post would essentially be adding an extra edge (no matter the probability). It seems intuitively obvious (which is stated here as incorrect) that going through a post would be more expensive.

      • elseweather 8 hours ago

        Yes, it seems intuitively obvious, which is why mathematicians spent a long time trying to prove that the conjecture was true. It turns out the conjecture is false in a non-obvious way. The result described in the blog post is a specific counterexample: the conjecture fails, just barely, for a specific graph with several thousand nodes and edges. It's not the kind of counterexample you would intuit in your head or even on a whiteboard.

        • joelignaatius 8 hours ago

          It would seem the next logical step would be to come up with a series of examples where the conjecture fails, extrapolate new rules from them, and then possibly attempt to draw an isomorphism from another field. At some point mathematics will turn into an LLM problem (I know, hype cycle). I'm interested in knowing whether there are branches of mathematics that are near inaccessible to non-computational methods of proof. And then there would be levels of mathematics where the proof itself would be true, but getting the intuition would be much like me asking you for the intuition, except it would be man versus the computer. If you do this level of mathematics and put it in a box, you have some real-world result whose operations are incomprehensible but demonstrably have an analogy to something understandable. Schrodinger's AI.

    • sweezyjeezy 10 hours ago

      The (disproven) conjecture is for finite graphs.

  • ted537 10 hours ago

    I also know nothing but here's something missing from the blog post:

    "A random subgraph of the bunkbed graph is then formed by independently deleting each edge based on the assigned probability."

    So the (apparently incorrect) intuition is that an (upper<->lower) connection needs at least one extra edge, so it has a greater risk of disconnection via random edge removal. Therefore, a same-level connection is more likely to remain after the random edge removal.

  • cashew22 10 hours ago

    Here is my reasoning: start with two vertices u and v on the first floor and the vertex corresponding to v on the second floor (call it w). u and v are connected if, after deleting some edges randomly, there is still a path from u to v; similarly for u and w. For any fixed path from u to w, you can get a shorter path from u to v by just crossing between the floors one less time. Because a shorter path is more likely to survive than a longer one (by a factor of p), it makes sense that overall the probability of some path from u to v surviving is larger than that of some path from u to w surviving. But that reasoning breaks down, in part because when you drop the last crossing edge there are many ways of adding it back in, so each path from u to v might correspond to many paths from u to w.
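
    The per-path half of that argument, in throwaway code (edge probability and path length chosen arbitrarily). The catch is that connectivity is a union over many overlapping paths, which this per-path comparison says nothing about:

      p = 0.6                           # arbitrary survival probability per edge
      k = 4                             # length of some fixed path from u to v
      path_u_v_survives = p ** k        # survives iff all k of its edges survive
      path_u_w_survives = p ** (k + 1)  # the matching u -> w path has one extra edge
      assert path_u_w_survives < path_u_v_survives  # true path-by-path, yet this
                                                    # alone doesn't order P(u~v), P(u~w)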

  • stonemetal12 8 hours ago

    Intuitively the longer a path is the more likely it is to be broken when you start randomly removing edges, and before you remove edges the shortest path from U1 to V2 is the shortest path from U1 to V1 plus a post. Therefore it seems like the path from U1 to V2 is more likely to be broken than the path from U1 to V1.

  • keepamovin 10 hours ago

    Why is it "obviously true" (that the probability of connecting between levels is lower than the probability of connecting within a level)?

    Because if it weren't true, that would imply that vertices on different levels (as reached, presumably, by a canonical DFS tree // spanning tree) were closer (in terms of paths) than vertices on the same level, which would mean that those vertices should have "more likely" been on the same level (however that is measured).

    TL;DR - 'obviously true' because there's a level between them so (on average) of course!

IshKebab 8 hours ago

This post could really do with a couple of diagrams of what these graphs are. I think I got it from the description but I'm not sure...

keepamovin 11 hours ago

Wow, funny how our intuitions fail on objects with 7523 vertices and 15654 edges

  • y-curious 10 hours ago

    It was always obvious for me. But they did call me gifted in the third grade.

    • keepamovin 10 hours ago

      Hahaha it was obvious that it was false for you???

  • fbn79 10 hours ago

    And it's funny that 7523 is a prime number.

    • keepamovin 10 hours ago

      Nice! 15654 = 2*3*2609

      and (7253/15654)**(phi*pi)*100 ~ 2

      haha :)

    • f1shy 10 hours ago

      But 15654 is not!

willcipriano 7 hours ago

It does not in fact give you extra space in your room to do activities.