So just from the headline, "unknown person solves p=np" there is a 99.9999% chance this is wrong.
However, if I read the first sentence of the abstract correctly, they are claiming to have a constructive proof (i.e. they found an algorithm). So can't we just check the proof by using their alleged algorithm on some NP-hard problems?
NP-hard problems aren't that hard to solve. I'm solving one right now for work (graph bipartition using simulated annealing), and it doesn't even take that long.
Of course, my algorithm isn't actually correct! And even if I had a correct algorithm, it might be really fast in practice but still not have polynomial growth. You need a proof.
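Roughly, the heuristic looks like this (a toy sketch, not the actual work code): simulated annealing over balanced vertex swaps, minimizing the number of cut edges.

    import math, random

    def cut_size(adj, side):
        # Count edges that cross the partition (each edge counted once).
        return sum(1 for u in adj for v in adj[u] if u < v and side[u] != side[v])

    def anneal_bipartition(adj, steps=20000, t=2.0, cooling=0.9995):
        nodes = list(adj)
        random.shuffle(nodes)
        # Start from a random balanced split.
        side = {v: i % 2 for i, v in enumerate(nodes)}
        cost = cut_size(adj, side)
        for _ in range(steps):
            # Propose swapping one vertex from each side, keeping the split balanced.
            a = random.choice([v for v in nodes if side[v] == 0])
            b = random.choice([v for v in nodes if side[v] == 1])
            side[a], side[b] = 1, 0
            new_cost = cut_size(adj, side)
            # Always accept improvements; accept worsenings with a temperature-dependent probability.
            if new_cost <= cost or random.random() < math.exp((cost - new_cost) / t):
                cost = new_cost
            else:
                side[a], side[b] = 0, 1   # revert the swap
            t *= cooling
        return side, cost

    # Tiny example: two triangles joined by one edge should split into the two triangles.
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
    print(anneal_bipartition(adj))

It finds good cuts quickly on instances like this, but nothing about it says anything about worst-case or polynomial behaviour, which is exactly the point above.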
This reflects a misunderstanding of what NP-hardness means. Any problem is easy to solve if the instance being processed is small enough. Computers are wonderful at brute-forcing terrible solutions.
You need the proof to show it really works in all cases. However, if the goal is some quick (non-exhaustive) verification, then why can't you just use the algorithm? (Ignoring the big-constant and high-degree-polynomial issues siblings mention.)
Either (a) it doesn't work, showing the paper is incorrect, or (b) it works, which doesn't prove the paper fully correct but does show that at least something interesting is going on.
I mean use it on an instance of the problem big enough that an exponential algorithm would take years, and see whether the new algorithm finishes*. Either something interesting will happen or it won't.
* Yes, I am aware that this falls apart if there is a very high constant or the algorithm is n^1000.
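For a sense of scale (assuming roughly 10^9 basic operations per second): somewhere around n = 80 a 2^n algorithm is already out of reach for millions of years, while an n^3 one finishes instantly.

    ops_per_sec = 1e9
    seconds_per_year = 3600 * 24 * 365
    for n in (50, 60, 80, 100):
        exp_years = 2 ** n / ops_per_sec / seconds_per_year
        poly_seconds = n ** 3 / ops_per_sec
        print(f"n={n}: 2^n needs ~{exp_years:.1e} years, n^3 needs ~{poly_seconds:.1e} s")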
> Conjecture. There is no polynomial time algorithm for deciding <some problem>. […] [A variant of the problem] is NP-complete. […] Theorem 2. There is a constructive algorithm for deciding [the variant problem]. The number of basic steps to do this [is bounded by something that is, I think, intended to be polynomial, although there is a hidden exponent of k and a direct exponent of n].
I don’t see any justification or source for the statement that the variant problem is actually NP-complete. Even if their proof was correct, wouldn’t it just disprove a conjecture?
> can't we just check the proof by using their alleged algorithm on some np-hard problems?
Not necessarily. An algorithm may be polynomial but impractical for all but the most trivial cases. Moreover the algorithm may be very difficult to implement correctly.
The large-constant problem might not apply here, since NP-hard problems have exploding complexity: you don't need to grow an instance that much to reach a point where the constant becomes a non-issue.
Ease of implementation could definitely be an issue.
You can't check the proof by "using their alleged algorithm on some NP-hard problems". You would have to prove that the algorithm handles every instance of the problem correctly and in polynomial time.
In electronic design automation we have to deal with NP-hard problems all the time. One approach is to use algorithms that are worst-case exponential but that finish quickly most of the time even for fairly sizable problems, or that fail to provide a solution for some cases. SAT solvers are a typical example. Another, for an optimization problem, is to provide a good solution that is not necessarily optimal.
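For a flavor of that, here is a toy DPLL-style search (nothing like a production SAT solver, just the basic shape): unit propagation plus backtracking, worst-case exponential but often fast on easy instances.

    def dpll(clauses, assignment=None):
        # clauses: list of lists of ints; positive literal = variable true, negative = false.
        assignment = dict(assignment or {})
        # Unit propagation: repeatedly assign variables forced by single-literal clauses.
        changed = True
        while changed:
            changed = False
            simplified = []
            for clause in clauses:
                remaining, satisfied = [], False
                for lit in clause:
                    val = assignment.get(abs(lit))
                    if val is None:
                        remaining.append(lit)
                    elif (lit > 0) == val:
                        satisfied = True
                        break
                if satisfied:
                    continue
                if not remaining:
                    return None              # empty clause: conflict
                if len(remaining) == 1:
                    lit = remaining[0]
                    assignment[abs(lit)] = lit > 0
                    changed = True
                simplified.append(remaining)
            clauses = simplified
        if not clauses:
            return assignment                # all clauses satisfied
        # Branch on the first unassigned variable.
        var = abs(clauses[0][0])
        for value in (True, False):
            result = dpll(clauses, {**assignment, var: value})
            if result is not None:
                return result
        return None

    # Prints a satisfying assignment, e.g. {1: True, 3: True, 2: False}
    print(dpll([[1, 2], [-1, 3], [-2, -3]]))

Real solvers add clause learning, heuristics, and restarts on top of this, which is why they handle surprisingly large industrial instances despite the exponential worst case.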
I have no chance of understanding this paper, but why speculate about whether this is the case, when someone qualified should be able to come along and determine whether it actually is?
The definition of "basic step", which includes arithmetic on unbounded integers, is suspicious. And I don't see any attempt at establishing an upper bound on the size of coefficients. I wouldn't be surprised if they grow exponentially.
This might be it. Good catch. A machine that can perform all integer arithmetic operations (+, -, *) in constant time can solve all problems in PSPACE in polynomial time.
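A quick illustration of why that cost model is so powerful: repeated squaring builds numbers with exponentially many bits in linearly many "basic steps", so each unit-cost step can hide an exponential amount of ordinary bit-level work.

    x = 3
    for step in range(1, 11):
        x = x * x                    # one unit-cost "basic step"
        print(step, x.bit_length())  # bit length roughly doubles each step: 4, 7, 13, 26, ...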
Yup. Just at a glance, I don't see even a cursory attempt to prove that the upper bound on the proposed algorithm's time complexity is actually a polynomial. It depends on the "serial numbers of [...] monomials, in the natural order of monomials". But the number of different possible monomials in a polynomial with bounded length is obviously exponential.
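To put a number on that: there are C(n+d, d) monomials of degree at most d in n variables, which explodes even at modest sizes (a quick check, taking the degree comparable to the number of variables):

    from math import comb
    for n in (5, 10, 20, 40):
        print(n, comb(2 * n, n))   # monomials of degree <= n in n variables: 252, 184756, ~1.4e11, ~1.1e23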
The closest the paper gets to actually talking about time complexity is:
> As it is mentioned in [3], there is no sense to make use of the formal definition of Turing machine for this problem. From practical point of view, it is much more useful to show existence of a polynomial-time constructive algorithm for solving it ([2]). Therefore, in the next section we will introduce our own definitions and notations, describing a kind of a formal computer, more practically oriented, but resembling that of a Turing machine.
But the so-called "formal computer" is only described in extremely hand-wavy and informal terms, and no attempt is made to relate the number of "steps" it would perform to the running time of a corresponding Turing machine.
Not just that, but this particular problem is also famous for crazy people claiming they have a proof. Until some mathematician agrees with the proof (or even just indicates it is interesting), this is a non-story.
Yeah. The "proof" of theorem 2 being just a handwavy do the first proof again, but use Q_2 and normalize the result after each step to a simple fraction, which will only increase c_1.
That really sounds suspicious to me. Why not just solve this version directly with those changes? That would have been a more straightforward proof.
I've learned to be cautious about supposed proofs of famous conjectures, especially if they're published by one person who is not a well-known mathematician, but I don't know nearly enough about this topic to judge this one and it looks like it knows what it's talking about. I'm curious how it turns out.
I’m not sure about that. I have never seen the notation Z_2 used for the Gaussian integers, for example. Their notation is not quite standard, and that is often a bad sign. Also, the abstract on arXiv is full of LaTeX commands…
> I have never seen the notation Z_2 being used for the Gaussian integers
For some reason, my brain totally slipped over that when skimming the abstract. I was like “Gaussian integers are a real thing, ℤ₂ is a real thing, okay, cool”.
My first thought from the title was that this result is in the Blum-Shub-Smale computational model rather than the traditional Turing machine model. In the BSS model, arithmetic operations on exact real numbers are done in constant time. That is useful for analyzing numerical algorithms where you are more interested in e.g. convergence speed than things like roundoff errors. But the exact real arithmetic in constant time is equivalent to doing infinite-precision integer arithmetic in constant time. A couple of people already have mentioned the latter as an assumption of the paper as if that were a big error, but in the BSS model it is part of the deal. Thus the BSS model gives speedups for some problem classes and it's not a big surprise that P=NP there.
I didn't realize P vs NP was supposed to be an open problem in the BSS model. I thought this sort of result was established early on there. The catch is simply that it doesn't apply to the Turing machine model, which is what most people think of when they hear of P vs NP.
There is a wonderful and mathematically accessible book from the 1990s about the BSS model: Complexity and Real Computation, by Lenore Blum, Felipe Cucker, Michael Shub, and Steve Smale. I thought it was going to open up into a big field, but I don't know if much happened with it.
Given how many people have failed to prove P=NP, and how strange the world would look if P did equal NP, it seems very likely that P!=NP. This would also help explain why it's so hard to prove P!=NP -- finding the proof is itself in NP!
“Finding the proof is itself in NP” does not make sense, since finding a single proof is not a problem that scales with the size of an input, and that scaling is what the entire field of algorithmic complexity studies.
It actually does make sense, and is an important idea in complexity theory. In general, the problem of proving an arbitrary claim in ZFC isn't computable (this is just Gödel's incompleteness theorem). If you confine your claims to smaller languages, you get smaller complexity classes. For instance, the problem of proving things in the propositional calculus is called BSAT, and is the quintessential NP-complete problem.
In general, a nice way of thinking about NP is as the set of formulas that have polynomial-time checkable proofs.
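Concretely (a toy illustration, nothing to do with this paper): the certificate for a SAT instance is just an assignment, and checking it is one linear pass, even though finding it may take exponential search.

    def verify_sat(clauses, assignment):
        # clauses: list of lists of ints (positive literal = variable is true).
        # assignment: dict variable -> bool. Checking is linear in the formula size.
        return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
                   for clause in clauses)

    print(verify_sat([[1, 2], [-1, 3], [-2, -3]], {1: True, 2: False, 3: True}))  # True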
It's important to remember that P = NP doesn't just collapse NP into P; it implies that P = PH (the polynomial hierarchy). There is a whole series of problems much harder than NP that would also be in P if P = NP.
Finding proofs is always at least NP-hard! Remember, any proof is at least going to include some Boolean reasoning (this is true and that is true, so we get such and such), which is just BSAT, an NP-complete problem.
In other words, yes. It's very very unlikely.
Computing the n-th Ramsey number R(n, n) is doable in exponential time. You can easily brute-force a small case like R(3, 3) = 6; a perfectly credible proof that takes exponential time! A credible proof must be verifiable in demonstrably finite time -- not necessarily polynomial.
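That brute force really is tiny (a sketch: 2-color the edges of K_n and look for a monochromatic triangle):

    from itertools import combinations, product

    def has_mono_triangle(n, color):
        # color: dict mapping each edge (i, j) with i < j to 0 or 1
        return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
                   for a, b, c in combinations(range(n), 3))

    def forces_mono_triangle(n):
        edges = list(combinations(range(n), 2))
        return all(has_mono_triangle(n, dict(zip(edges, colors)))
                   for colors in product((0, 1), repeat=len(edges)))

    print(forces_mono_triangle(5))  # False: K_5 has a 2-coloring with no monochromatic triangle
    print(forces_mono_triangle(6))  # True, so R(3,3) = 6

Exponential in n, but a finite and fully checkable computation, which is the point.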
Hard to find =/= hard to verify. Indeed that is the point of NP (which stands for "nondeterministic polynomial" time): solutions must be easy to check.
This is wrong. A proof of polynomial time requires constructing the actual computable function, step by step. This paper does not do that. It simply shows that it's possible to solve a system of equations in some finite number of steps, but that tells you nothing about the construction of a polynomial-time algorithm. All this proves is that Gaussian elimination is O(N^3), which is well established.
So a meta question is ... recently there was a bit of a hubbub about arxiv not posting preprints from academics with findings critical of the big bang. The claim was basically that arxiv isn't supposed to be refereed or edited that way, and that discussions that are at odds with established mainstream views should still be accepted.
At the other end of the spectrum, the USPTO won't look into anyone's claim to have invented a perpetual motion device. And the social networks will flag me if I claim that my one cool trick will stop covid transmission. Sometimes you don't have to look closely into a claim to be pretty sure it's false, and possibly harmful in some way.
_Should_ there be any upstream filtering on arxiv posting when an unknown person claims to have an extremely surprising result, and their reference list suggests that they may be disconnected from the literature in the relevant field? Is there any kind of claim that _should_ be proactively rejected?
It's already suspicious that the author left a bunch of non-math LaTeX commands in the abstract on arXiv. You proved one of the most important open questions in the history of humanity, but couldn't be bothered to make sure the abstract looks presentable?
The author also published a paper last year claiming to prove Sendov's Conjecture that has not been published in an actual journal (as far as I can tell anyway).
Maths papers aren't easy to write or organise. And whether a paper is well-structured is somewhat subjective unless we're writing for computers. That's not enough to knock it down.
Maybe you can enlighten everyone on why exactly it is a bad paper, by pointing out the first red flag(s) found in the proof itself, assuming you have read and analysed the entire paper to get to that conclusion.
It is not good enough to draw a conclusion without a detailed explanation for a substantial claim like this 'proof'.
But anyone who has read enough math and CS papers can see that this paper is clearly badly structured, written, and organized. It is a bad paper regardless of the correctness of the argument.