Back when I was younger, I literally taught the course on decentralization at university. We didn’t call it that, of course, because we were not trying to sell a Ponzi scheme, but it meant that every year I’d have 100 or so students up for oral exams in distributed algorithms and the like. I had an important take-away from that: no student should ever be in doubt that their grade was the right one. Students should feel what they got right and wrong, and we as teachers should provide feedback directly to the student, both good and bad, and also write it down for posterity. The reason is two-fold. First, getting a slightly-above-average grade feels bad if you expected a top grade, but good if you expected an average or below-average grade, so making sure the student doesn’t feel they did better than they did makes them happier with the grade they get; hence the need for feedback on what they did badly. Second, telling them what they did well improves confidence and makes the critique go down easier. Noting both good and bad things makes the student less likely to start the complaint process over their grade, and makes it easier to argue for the grade given if it ever comes to that. In my years doing around 500 exams, I never got a single complaint about a grade, even though some students did worse than they had hoped.
The trick of keeping track of both good and bad is also used by analysts. This morning, one stock in my portfolio presented its Q1 numbers. The numbers were great and the stock went up in the morning. Papers reported that the stock was up due to the great numbers, but also noted downward pressure because of bad expectations for the future, and because above-expectations performance had already been announced earlier, meaning it was at least partially priced in. Also, Putin attacked Ukraine, giving further downward pressure, because that reignites fear of the war and the risk of slowing markets. Also, Putin attacked Ukraine, giving further upward pressure, because it may lead to increased interest rates, which is good for banks’ earning potential. Thirty minutes after the market report said the stock was up 1%, it was down 1%, and that drop has since grown to 2.5%. The analysts writing the report had a huge bag of positive and negative indicators for the stock, pulled one of the positive ones out of their collective arses to explain the market-open jump, and made sure to mention negative indicators to explain the poop on their fingers in case the price later moved down.
That makes it extremely easy to predict price movements after they already happened: just claim the upwards/downwards movement is due to some bullshit reason and make sure to mention things pointing in the other direction to counter claims that your predictions come from the place where bitcoins are mined: the butt. One online newspaper I read likes to publish an article every time bitcoin gets pumped, along with an arse-sourced reason explaining why this was obvious as well as a couple of reasons the market isn’t entirely rosy-red. Straight out of my no-complaints examiner’s textbook for never being proven wrong. I hope they just have a single intern true believer who is allowed out of his sub-basement room (in the Netherlands, where sub-basements are rare) rather than a real employee who writes other articles I might accidentally read.
Come to think of it, predicting price movements after they happened, even incorrectly, is neither very impressive nor very useful. The best analysts would be able to predict them, correctly or incorrectly, before they happen. In 2007, Steve Ballmer, the then-CEO of Microsoft, laughed at the iPhone, ridiculing the $500 price and the lack of a keyboard. Boy, was he ever wrong! Or was he? The iPhone did not take off at $500; it took off at $199, the price it was cut to shortly after release. And it did not really take off in 2007 either; it took off in the early 2010s. I used a Nokia until 2010 because the iPhone was not a smartphone at the time and couldn’t do trivial things like streaming music while browsing, or tracking bike rides without leaving the screen on. Was Steve Ballmer really wrong to ridicule the 2007 iPhone? What did he actually predict?
We are very good at recontextualizing predictions made in the past: if we were wrong, it was either because we were actually right when seen in the right light, or because something unforeseeable happened that nobody could have predicted. If we were right, it was because of our perfect predictions, even if things didn’t happen exactly as we said they would. We all like to place the locus of control externally when things are bad for us (or for somebody we like, or when something good happens to people we dislike) and internally when things are good for us (or somebody we like, or when something bad happens to people we dislike). That means “we” were never wrong, it was the world that was wrong, and we’ll never learn to make better predictions. Or we just move the goalposts after the fact, painting the target around the bullet holes: the Texas sharpshooter fallacy. Every software developer knows something unforeseen will happen; the bad developer assumes that this time nothing unforeseen will happen, while the good one takes the unforeseen into account even without knowing exactly what it will be. A good analyst should not explain every mistake away by something “unforeseen” happening. If every prediction that fails surprises you, are you even making predictions?
The only way out of this is making testable predictions and using the feedback to make better predictions next time. Not rubber-banding your bad predictions into being technically correct, but using the “unforeseen” as a stepping stone to take the concrete factor into account next time, and to take unforeseen elements in general into account as well. That is also the only way to judge the predictions of an analyst: by taking their entire history into account, weighing recent predictions heavier than predictions made 10 years ago, but never only the most recent prediction. Dr Doom (not that one) correctly predicted the financial crisis of 2007, giving him authority when he predicted a new crisis in 2009. A crisis that didn’t happen. Same as when he predicted a crisis following Hurricane Katrina a couple of years earlier. It’s not enough to consider the 1 or 2 (or 3 or more) times an analyst was right; it is just as important to consider all of the times they were wrong, whether they predicted something that didn’t happen or failed to predict something that did. One cold or mild winter, or one hot or wet summer, neither proves nor disproves global warming; only carefully observed developments over time can do that. It is also important that predictions are precise. Anybody can predict that “markets will go down” or “markets will go up,” and they will be right given enough time.
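One way to operationalize “recent predictions weigh heavier” is exponential decay: a prediction made t years ago counts with weight decay^t. A minimal sketch; the half-life of 5 years and the example histories are my own assumptions, not from the text:

```python
def weighted_score(predictions, half_life_years=5.0):
    """Score a track record of (years_ago, was_correct) pairs,
    weighing recent predictions heavier via exponential decay."""
    decay = 0.5 ** (1.0 / half_life_years)  # per-year weight multiplier
    total = sum(decay ** age for age, _ in predictions)
    correct = sum(decay ** age for age, ok in predictions if ok)
    return correct / total if total else 0.0

# Two analysts with identical 50% raw accuracy: one was right recently
# and wrong a decade ago, the other the reverse.
recent_hits = [(1, True), (2, True), (10, False), (11, False)]
old_hits = [(1, False), (2, False), (10, True), (11, True)]
```

With this weighting, `recent_hits` scores well above 0.5 and `old_hits` well below, even though both histories contain the same number of correct calls.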
Using past predictions, all past predictions, and only testable predictions, is a reasonable way to judge the quality of a single analyst, but it is not a good way to find the best analyst. Given enough people, it will always be possible to find somebody who made the right predictions. That may be because they are good, or it may be dumb chance. This is essentially p-hacking, the Library of Babel, or my own Drunk Monkey Consulting: given enough idiots who make random predictions, it is easy to find somebody after the fact who was correct. In psychology, a rule of thumb is to divide the significance cut-off by the number of people involved or times an experiment was repeated (the Bonferroni correction). If the initial cut-off is that an experiment has a 5% chance of showing a relationship where there is none, this is reduced to 0.1% if we pick the best analyst out of 50. When there are millions of analysts, this would raise the required accuracy to insane levels. Another way would be to test the best predictor from scratch after they made their extremely accurate predictions; that would have refuted the predictions of Dr Doom or the football-predicting octopus.
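The multiple-comparisons arithmetic can be made concrete. With a per-analyst false-positive rate α, the chance that at least one of n pure guessers looks “significant” is 1 − (1 − α)^n, and the Bonferroni correction divides α by n to compensate. A sketch, using the 5% and 50 from the text:

```python
def p_at_least_one_lucky(alpha, n):
    """Probability that at least one of n pure guessers clears
    a per-analyst false-positive rate of alpha by chance."""
    return 1 - (1 - alpha) ** n

def bonferroni(alpha, n):
    """Corrected per-analyst cut-off when cherry-picking the best of n."""
    return alpha / n

print(bonferroni(0.05, 50))  # 0.001, i.e. the 0.1% from the text
print(p_at_least_one_lucky(0.05, 50))  # ~0.92: a lucky "expert" is near-certain
```

With millions of analysts instead of 50, the corrected cut-off shrinks to one in tens of millions, which is the “insane levels” of required accuracy mentioned above.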
That does not mean that a good analyst has to be perfect, though. They just have to be better than average. The best investors have been consistently better than the market over decades. They have been wrong at times, and individual predictions should not be taken too seriously. Warren Buffett is not a great investor because he is never wrong, and not all of his predictions should be taken as truth. Buffett is a great investor because he is right more than he is wrong. Being wrong is not a sign of a bad analyst, but never being wrong is. Because if somebody is never wrong, it means they either got rid of their bad predictions or rationalized their bad predictions instead of learning from them.
Even so, the best analyst will still be prone to the black swan effect: what happens when something extremely unlikely with extremely high impact occurs. A good analyst would not predict a global pandemic, because it is extremely unlikely to happen. But when it did happen, the impact overshadowed everything else for more than a year. An analyst that does not predict a black swan event is right more often than one that does, but risks losing everything when one does occur. And just like the chance of a drunk monkey correctly predicting market movements increases with the number of monkeys, so does the risk of a black swan event with the number of ways such events can happen. Like the good software engineer, the trick is to account for some black swan event happening, even if you don’t know which one. The people that have been “predicting” a global pandemic every year for the past 30 years were right only once out of 30, and provided as little actionable advice as the analysts predicting zero pandemics every year over the same period.
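The “right more often, but risks losing everything” trade-off shows up in compound returns rather than in single-year expected value. A toy calculation; the return figures are invented for illustration, only the one-in-30-years event frequency comes from the text:

```python
import math

def geometric_growth(p_swan, normal_ret, swan_ret):
    """Long-run compound annual growth rate when a rare-event year
    is mixed in with normal years. A -100% swan return means ruin:
    compounding never recovers from zero."""
    if 1 + swan_ret <= 0:
        return -1.0  # wiped out
    log_g = (1 - p_swan) * math.log(1 + normal_ret) + p_swan * math.log(1 + swan_ret)
    return math.exp(log_g) - 1

p = 1 / 30  # roughly one pandemic-scale event per 30 years

# Ignoring black swans entirely: better normal years, total loss in the swan year.
unhedged = geometric_growth(p, normal_ret=0.10, swan_ret=-1.00)
# Accounting for "some" black swan: slightly worse normal years, capped loss.
hedged = geometric_growth(p, normal_ret=0.07, swan_ret=-0.20)
```

The unhedged strategy compounds to ruin (growth rate −100%) despite being “right” 29 years out of 30, while the hedged one keeps a healthy positive growth rate, which is the software-engineer trick of budgeting for an unknown unknown.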
All of this is irrelevant to predicting the price of bitcoin. These considerations matter when there is a correlation between events and the price, but that is not the case for bitcoin. It, like all other crypto, depends more on manipulation than on macroeconomic factors. There are no crypto market predictions, because there is no crypto market. Only scams in obviously fake mustaches. Sure, if people have more money, they are more likely to give it to scammers, including crypto-peddlers, but more predictive than observing markets is which smileys market manipulators use on Twitter, or whether bad news about prominent scammers is about to come out (yes, that correlation points the wrong way around to make meaningful predictions). Plus, of course, whether somebody nobody knows has shorted a large amount of crypto on an exchange prone to scamming (which is all of them).
So, that’s the answer: the only way to predict the bitcoin price is to do it after the fact, and garnish it with rationalizations for why it may change in the future. Or be Elon Musk, I guess, so your predictions are self-fulfilling prophecies because your army of yes-men is willing to throw money at their “rich” internet daddy in hopes of attention.
This of course raises an interesting question: is the crypto skeptic predicting all crypto scams to be scams better than the crypto grifter predicting all crypto scams will moon? Is the analyst who predicted 0 out of 1 global pandemics over the past 30 years better than the one who predicted 30 out of 1? Many a crypto skeptic (myself included) has predicted the demise of Tether for 6-7 years now, yet it is still alive. It may be a scam, and likely is, but is the prediction of its demise really worth more than that of the crypto grifters who, for the same duration, have claimed it is legit and that the audit will be coming in “just a few months”? The skeptics will more than likely be right in the end, and were right that it was a scam all along, but they will have been wrong about it crashing for all the time leading up to it, whereas the grifters will have been right all the time, except for one single mistake. Mathematically, the two options are equally predictive, but the crypto skeptic has an advantage over the grifter: while the grifter may be collecting pennies in front of the steamroller, the skeptic refuses to go into the fraudulent casino and exchange money for casino tokens to bet that the casino is fraudulent (or “short Tether,” as the grifters insist we do).
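The skeptic-vs-grifter comparison can be put in numbers. A toy calculation under my own simplifying assumption that the collapse happens in the final year of a 30-year window, with one yearly call each:

```python
def accuracy(predictions, outcomes):
    """Fraction of years in which the yearly call matched what happened."""
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return hits / len(outcomes)

years = 30
outcomes = [False] * (years - 1) + [True]  # collapse only in year 30
skeptic = [True] * years    # "it crashes this year", every year
grifter = [False] * years   # "it's totally legit", every year

print(accuracy(skeptic, outcomes))  # ~0.03: right once, at the very end
print(accuracy(grifter, outcomes))  # ~0.97: right 29 times, one fatal miss
```

Note that raw accuracy actually favors the grifter here; the sense in which the two are equally predictive, as I read it, is that both are constant predictors, so neither carries any information about when anything will happen.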
Even so, a lot of my fellow crypto skeptics will highlight when they are right. This goes for Bitfinexed, Cryptadamus, Amy Castor, David Gerard, and Mt Goxed alike. It’s easy to predict a crypto scam when you predict all crypto projects are scams. Also, very likely to be correct. And this neglects that most of these people do actual brilliant investigative journalism, highlighting problems old media is unable or unwilling to cover due to the time involved. Crypto grifters, on the other hand, like to bury or forget their endorsements when a new crypto scam is revealed. This goes for Pomp, CZ, SBF (oh wait, are we still allowed to include him after it turned out he was a big fraud, as predicted several times?), Paolo and many more. I believe the skeptics are right more than the grifters, but I am obviously biased.
What would be fun would be a project similar to Web 3 Is Going Just Great that would track predictions of skeptics and grifters alike, grading how testable they are (“bitcoin to the moon” = untestable, no concrete target or time frame; “bitcoin to zero” = 50% testable, concrete target but no timeline; “bitcoin below 20k before end of 2023” = 100% testable) and their outcome (correct, incorrect, or not yet settled). Somehow, I believe the skeptics would be much more cooperative in such an endeavor, as their predictions are not made to promote a scam but rather in an honest attempt to warn against the scammers. If somebody is interested in doing the research part, I’d be happy to make and host the site for tracking this.
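The grading scheme above maps naturally onto a tiny data model. Everything here (field names, the 0/50/100 encoding) is just one possible sketch of the scale described in the text, not a spec for the hypothetical site:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    claim: str
    target: Optional[str] = None    # concrete level, e.g. "below 20k"
    deadline: Optional[str] = None  # concrete time frame, e.g. "end of 2023"
    outcome: str = "pending"        # "correct", "incorrect", or "pending"

    def testability(self) -> int:
        """0 = no concrete target, 50 = target but no deadline,
        100 = both target and deadline, per the grading in the text."""
        if self.target and self.deadline:
            return 100
        if self.target:
            return 50
        return 0

# The three examples from the text:
moon = Prediction("bitcoin to the moon")
zero = Prediction("bitcoin to zero", target="0")
dated = Prediction("bitcoin below 20k before end of 2023",
                   target="below 20k", deadline="end of 2023")
```

Scoring an analyst would then mean averaging outcomes only over predictions with non-zero testability, so the untestable “to the moon” calls never count for or against anyone.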
Time person of the year 2006, Nobel Peace Prize winner 2012.