# COVID-19/Iluvalar

## Method

Computer-assisted step-by-step minima optimization via recursion in a 9-dimensional space. That was meant to sound smart; technically, it boils down to running millions of simulations and seeing which one best fits the data we have. Some of the curves were obtained by locking some of the parameters at specific values to see how plausible the resulting curve is, as explained above.
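As a minimal sketch of that trial-and-error idea (the names and fake data points here are hypothetical, not the actual script; the real optimizer is in the Code section at the bottom of the page), here is a one-parameter version: nudge a parameter in one direction as long as the error shrinks, then refine the step size.

```php
<?php
//Toy version of the step-by-step minima search: fit $a in the model $a*1.07^$day
//against a few made-up data points by nudging $a while the error improves.
$data=[10=>21, 11=>22, 12=>24, 13=>26]; //day => deaths (fake numbers for illustration)
function err($a,$data){
	$ret=0;
	foreach($data as $day=>$val){
		$ret+=pow($a*pow(1.07,$day)-$val,2); //squared distance to the data
	}
	return $ret;
}
$a=1.0;
foreach([1.1,1.01,1.001] as $adj){ //coarse steps first, then finer ones
	foreach([$adj,1/$adj] as $step){ //try stepping up, then down
		while(err($a*$step,$data)<err($a,$data)){
			$a*=$step;
		}
	}
}
echo $a; //settles near the least-squares best fit (~10.65 for these numbers)
```

The same loop, repeated over each of the 9 parameters in turn, is essentially what "step-by-step minima optimization" means here.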

## Suggested model

### July 18

```php
$m['a']=-125.65159588571;//num of days before feb 16 (initial case)
$m['b']=1.0684323078723;//growth rate (per day)
$m['d']=0.0048137171567879;//Maximum Impact of mitigation
$m['e']=0.82834719039164;//Likely % of people susceptible on feb 16
$m['f']=0.00046376433202448;//% of people becoming susceptible (remote villages etc.)
$m['g']=0.99327190491309;//% of immune people keeping their immunity (daily)
$m['h']=10553.565694855;//Death count before feb 16
$m['i']=0.0072242127659876;//seasonal amplitude
$m['c']=426383.87786083;//Total susceptible
```

### July 28

I had the idea of enforcing the h parameter to be consistent: the simulation being coherent with itself. I'm not sure whether it's a stroke of genius or folly at the moment. It would be good news, as the first few days of Italy's deaths would then appear to be over-counting of previous deaths. It's the new optimistic approach. It is noteworthy that this new method, for the first time, does not enforce anything prior to the data; it only enforces the death count against itself. EDIT: I just called 8,000 extra deaths optimistic. Never trust an epidemiologist with "good news".

```php
$m['a']=-159.17168234663;//num of days before feb 16 (initial case)
$m['b']=1.0722850170194;//growth rate (per day)
$m['d']=0.00034437131117403;//Maximum Impact of mitigation
$m['e']=0.57673718801671;//Likely % of people susceptible on feb 16
$m['f']=0.00033473039718786;//% of people becoming susceptible (remote villages etc.)
$m['g']=0.99386795745051;//% of immune people keeping their immunity (daily)
$m['h']=18809.424215614;//Death count before feb 16
$m['i']=0.0072242127659876;//seasonal amplitude
$m['c']=573024.27734544;//Total susceptible
```

## Journal

### Omicron warning and not really... (Jan 9 2022)

I know I'm late to the party, but the buzz in the media about Omicron is largely exaggerated. Until now, seeing the results in Israel, I had hoped that the vaccine would cut my own death predictions by about 50%. But seeing the new explosion of cases in Israel, and looking closely at Italy (as always), I am starting to doubt my doubt. This season of covid-19 may be as deadly as I predicted after all, despite the vaccines. Omicron seems to have mutated to get around vaccine immunity? Let's hope I'm wrong, but I'm here to warn: expect as many deaths as last year. Vaccines may have no visible impact on covid-19 this year. We are talking about the 800,000 deaths worldwide I was hoping I'd be wrong about.

### November 18 2021

Well first, let's appreciate my July 2020 long-term prediction for a moment. It's by far my favorite; it does the job, and I had far less data back then, which makes it impossible to beat now. And I won't try today.

Ever since November 2020, I have tried several times to get a model with several viruses to work, to account for multiple variants. I now believe my models fail because the variants interact with each other's immunity (each giving partial immunity to the other variant); I'd need to model that as well. I've recently been using Newton's approximation instead of my own brew of trial-and-error testing. I let the model drift for quite a while and now have a model which shows what it would look like if covid-19 had been endemic from the beginning. To be clear, I do not think that is the case, but I felt like sharing the scenario anyway. There is also a ludic side to this: it shows that wildly improbable scenarios can still "fit" the data to some extent. But in any case, the ~1% of people losing their immunity each day stays, as does the overall growth rate of the virus. The total amount of deaths stays similar, and so on.

### Vaccine recommendation

• Option 1 : Beat covid : Vaccinate 75% of all MAMMALS during the month of October.
• Option 1.1 : Pretend to beat covid : Vaccinate 75% of all humans during 2021 and pray that it will have the same effect.
• Option 1.1.1 : Create a second class of citizens for those who don't pray well enough. Censor social networks to create a cult of "it's our last chance" and feed on each other's fear, escalating into a full-blown religion where non-believers have to be banned from society.
• Option 2 : Prevent some deaths : Vaccinate people who are uncertain to live 5 more years, during October.
• Option 3 : Ignore the problem : We already lost 3 months out of a maximum of 4-6 months of life expectancy.
• Option 3.1 : Cause more problems : Do your math wrong, ignore clear serology science (again) and arrive at a maximum of 1-2 years of life expectancy. Take a plethora of useless measures to save lives that were never in danger to start with, and cause more worldwide depression because of your incompetence.

Self-explanatory... The favorites so far are 1.1.1 and 3.1, by the look of it.

### "Overwhelming evidence" about the origin (June 12 2021)

I hate doing politics, but I feel I have to mention it. The odds that Covid-19 came from a lab, compared to any one random mammal in the world, are maybe 100,000:1. Now that Facebook has stopped censoring the theory, I understand the reasoning among a lot of politicians and news media about it. This being said, depending on when the first contact happened, there were probably between 10 and 100 BILLION of those random mammals that could have been the source. Covid-19 is absolutely inside the parameters of what nature could have produced itself via natural selection. Coronaviruses, influenza and the sort have been navigating around our immune system like that since the dawn of humanity (and probably since the first mammals). It's not a question of IF those viruses will happen, but WHEN. And it seems that the answer is: quite regularly. I vaguely recall a study which estimated it at ~7 years for ILI.

The real question about the origin is not whether it came from a bat, a pangolin or a lab. The question of the origin is still wide open: which country, which month? There might be 1,000x more chance it came from Wuhan than from a random city, 100x more chance that it came from a surrounding city, 10x more chance it came from a city in China, maybe... but how many cities are there in the world? We are not even sure it came from Asia.

Edit: All numbers in this chapter are gross estimates. The argument is about orders of magnitude, not accurate calculations.

### Final nail in the coffin (April 8 2021)

[1] Google mobility data clearly shows the unimpressive impact of our measures on our total contacts. A ~30% change in mobility, even with the best hand-washing practices, cannot explain the fluctuations in the waves we are witnessing all around the globe. Common sense dictates that a significant number of susceptible people must have been infected to change the course of the epidemic like this. It's definitely not human. The population susceptible to each variant is not the same.

### Jan 2 2021

Hello, I calculated in April that 45% of the population had a resistance. There is also clear evidence of a new variant of Covid-19 that seems to spread like wildfire. I'm tempted to add 1+1 together and assume that Covid found its way into the other half of the population.

Is there a chance that we have them inverted? That is, that the 2nd variant would be responsible for the 2nd wave (whose deaths in Italy peaked around Dec 1), and that the 2nd wave of the first variant is yet to peak?

### November 17

I have been experiencing technical difficulties since the start of the 2nd wave. I was expecting it a little later (30-60 days). There are many things that are difficult to model.

1. Varying human attempts to affect the curve.
2. Improvement in therapy.
3. Improvement in prevention techniques.
4. Faith in the early date data.
5. The amount of people affected. I used to have success in assuming a % of new people affected per region, but as time passes and the complexity of the situation increases, it doesn't seem applicable anymore.

While I'm struggling to push the model to a point where I can predict the exact month when a wave happens, I am extremely confident about the amplitude of the waves. The long-term projections don't change much.

### November 16

Source : SARS-CoV-2 RBD-specific antibodies were detected starting from September 2019 (14%) [2] (that's about 170 days before feb 16th)
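As a quick sanity check of that "about 170 days" figure (a side calculation, not part of the model):

```php
<?php
//Days between the September 2019 antibody detections and Feb 16 2020
$from=new DateTime('2019-09-01');
$to=new DateTime('2020-02-16');
echo $from->diff($to)->days; //168 days, so "about 170 days" checks out
```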

### October 19

Stubborn model. It doesn't want to recognize any human intervention at all. Also, the more data I throw at it, the more it starts to ignore the first peak of the data. I think I might be measuring the improvement in treatments? I might be back in a few days with yet another dimension to the model.

### September 20

The July 28th model suggested an increase in cases since Sept 1 or so. The actual data also seem on the rise. I just came here to continue the projection. The second wave is coming, as far as I can tell. The extra data only allows me to make imperceptible changes to the model. I'm not even making a graph, but here is the data for the hardcore curious.

```php
$m['a']=-159.17168234663;//num of days before feb 16 (initial case)
$m['b']=1.074216578214;//growth rate (per day)
$m['e']=0.56560140978122;//Likely % of people susceptible on feb 16
$m['f']=0.0002151190810254;//% of people becoming susceptible (remote villages etc.)
$m['g']=0.99386795745051;//% of immune people keeping their immunity (daily)
$m['h']=18809.424215614;//Death count before feb 16
$m['i']=0.011592740743;//seasonal amplitude
$m['c']=509124.64856267;//Total susceptible
```

I guess the reduction of 64k susceptible is good news. However, I also come here to confirm the second wave and its inevitability.

## Earlier model collection

While I believe the latest projection is likely the most accurate, I also want to show what my simplified method was already able to achieve in April. After all, I'm just a stranger on the internet. But showing the accuracy I obtained even at an early date with partial data hopefully shows the kind of accuracy I can get with this. So enjoy the previous estimates I made here.

### April

My suggested model, in red, poses the first death in Italy on ~Dec 10. Growth (daily): 1.0785697466622; susceptible to die: 317,269 Italians.

By July, death count: 40,519. Distancing impact so far: 1,500 deaths. If maintained up to July??: 2,784 deaths.

The R0 seems high, ~2.5, but 45% of the population also seems resistant, cutting the pool of victims quite a bit; therefore 13% of the susceptible population dying.
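Both figures can be reproduced from the April parameters (assuming, as in the IFR section below, ~12 days of infectiousness; that assumption is mine, the April entry does not state it):

```php
<?php
$b=1.0785697466622;  //daily growth rate from the April model
$r0=pow($b,12);      //growth compounded over ~12 infectious days: ~2.48, the "~2.5" above
$share=40519/317269; //July death count over the susceptible-to-die pool: ~12.8%, the "13%" above
echo round($r0,2) .' '. round($share*100,1);
```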

### Rerun May 9

```php
$m['a']=-93.496587895621; //~dec 15
$m['b']=1.0779228959116; //Growth rate per day (R0 of 3 assuming 15 days of infectiousness)
$m['d']=0.0084969690972043; //impact of distancing on $m['b']
$m['c']=317269.64892627; //Maximum susceptible deaths.
```

Troubled by the exact same susceptible number, I tried to kick the 'b' parameter into another position (1.1) and ran the script setting the susceptible amount first in the approximation. I landed on vaguely different numbers:

```php
$m['a']=-85.562622590657;
$m['b']=1.0849025277116;
$m['d']=0.016467854572557;
$m['c']=308028.78536532;
```

Everything fought its way back to a very similar position. Since the distancing impact was higher on that model, I kept the no-distancing line for this one.

### June 22

Hi I'm back !

```php
<?php
$m['a']=-125.65159588571;//num of days before feb 16 (initial case)
$m['b']=1.0852279984699;//growth rate (per day)
$m['d']=0.0093283765873803;//Maximum Impact of mitigation
$m['e']=0.56407657414306;//Likely % of people susceptible on feb 16
$m['f']=0.0011272578709613;//% of people becoming susceptible (remote villages etc.)
$m['g']=0.99595695951109;//% of immune people keeping their immunity (daily)
$m['h']=11255.0881;//Death count before feb 16
$m['c']=725890.01044992;//Total susceptible

// --- 2 (trying to seek a credible mitigation impact)
$m['a']=-141.42197815993;//num of days before feb 16 (initial case)
$m['b']=1.0706780360658;//growth rate (per day)
$m['d']=0.01887266008676;//Maximum Impact of mitigation
$m['e']=0.79285388387757;//Likely % of people susceptible on feb 16
$m['f']=0.0015038745097599;//% of people becoming susceptible (remote villages etc.)
$m['g']=0.99745179158056;//% of immune people keeping their immunity (daily)
$m['h']=10553.565694855;//Death count before feb 16
$m['c']=747666.71076345;//Total susceptible
```

It became obvious, as I was simulating on more and more data, that the model I had was too simple. More susceptible people were entering the model over time; some people were infected before Feb 16; etc. There are many other factors I could simulate. Sadly, the more I add, the more it becomes art rather than hard science, as none of this has been tested. So take the numbers here with skepticism.

2 things are worth mentioning about this model. Sadly, it seems to stabilize at a parameter g of 0.996, which means that 0.4% of the immune people lose their immunity daily. This means people could be infected every year. Also, the parameter a seems to have moved back an entire month since I allowed the model to consider the parameter h. That would mean a first case in Italy around Oct 15.
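To put that daily rate in perspective, compounding the model's own g parameter over a year:

```php
<?php
$g=0.99595695951109; //% of immune people keeping their immunity (daily), from the model above
echo round(pow($g,365)*100) .'%'; //only ~23% of the immune would still be immune after a year
```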

#### Later confirming sources

Quote from Wikipedia I found after running this simulation: "In the month of March, 10,900 excess deaths have been estimated, that have not been reported as COVID-19 deaths." [3] (see my parameter h projection).

Addendum (june 23 2020): [https://sites.krieger.jhu.edu/iae/files/2022/01/A-Literature-Review-and-Meta-Analysis-of-the-Effects-of-Lockdowns-on-COVID-19-Mortality.pdf A LITERATURE REVIEW AND META-ANALYSIS OF THE EFFECTS OF LOCKDOWNS ON COVID-19 MORTALITY] "More specifically, stringency index studies find that lockdowns in Europe and the United States only reduced COVID-19 mortality by 0.2% on average." See parameter D.

### Lesson

When I did the Apr 7 evaluation, I assumed that the data before March 27 was noise. There was an apparent plateau from March 21 onward, but the spike of deaths on March 27 made me believe that it was not usable and that I should start from there. In retrospect, with the May 9 rerun, even though it is still instructed to only account for March 27 and later, the curve seems to accommodate the data from March 21 nicely, and considers both March 21 and March 27 as abnormally high spikes. If I had used the data from March 21, my Apr 7 model would be even more accurate. I'm still happy with the early projection; it shows that my model worked.

### Other idea toyed with

Finally, I'm adding 2 curves: one orange with distancing and one purple with no distancing. The distancing indeed saves 10k lives in those models, from 25k down to 15k total deaths. The problem with those curves is that they suppose a very small pool of at most 75k susceptible people, and a growth rate that is very hard to explain (even less so given the tiny pool of people at risk). This is the best social distancing can do, anyhow.

Today I pushed the model to find my worst plausible case (in cyan). It requires an R0 of 12 without distancing and 2.5 with distancing, and a very specific number of susceptible people (186,000). I don't think our data match that (especially the high starting R0), but I wanted to try my best to reach the worst cases some other sources have predicted. Deaths without distancing: 40k; with distancing: 20k. But I want to stress again: this model is stable only if I lock the social distancing impact at that high level and cherry-pick the perfect susceptible amount to reach the perfect recipe for disaster.

## Deaths and IFR

The June 22 simulation was expecting 53,205 deaths per 392.43257826601 days (whatever the mitigation chosen). It's a little bit the point of this page to show that we can accurately predict the deaths without having to estimate the infected at all. But since I heard the CDC now googles IFRs to take a mean out of them, I will guesstimate too. I can estimate the R0 by taking the parameter b and raising it to the power of the number of days an infected person is infectious; I'm estimating 12 days, using the CDC 6-6-6 rule. I obtain an R0 of 2.2673. Knowing that Rt will oscillate around 1, I can infer that 55.9% of Italians will be resistant to COVID soon. The highest number of infected, given a population of 60.36 million, is 33.73 million. If that's the case, the IFR would be around 0.1577%.

IFR=0.1577% (with covid)
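The chain of arithmetic above can be sketched as follows (taking the R0 of 2.2673 as given):

```php
<?php
$r0=2.2673;                //parameter b raised to ~12 infectious days, as estimated above
$resist=1-1/$r0;           //herd threshold where Rt settles around 1: ~55.9%
$infected=60.36e6*$resist; //~33.73 million Italians eventually infected
$ifr=53205/$infected;      //expected deaths over expected infections
echo round($resist*100,1) .'% '. round($ifr*100,4) .'%'; //55.9% and 0.1577%
```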

However, we need to emphasize that if 55% of the population were to be infected, they would test positive for about a month each. From the 1.07% yearly death rate in Italy, 25,680 Italians would have died WITH covid and not really FROM covid. Factoring this in, our IFR could get as low as 0.0761%.

IFR=0.0761% (from covid)

As I understand it, this last estimation is how other coronaviruses and the flu are usually estimated. And yes, I reach roughly the same results. Covid is a coronavirus (plot twist).

This being said, it is important to remember that an enormous amount of the world population _will be infected_. It's still a world health crisis.

## Interesting discovery (11 April 2020)

I'm not sure how; I think it has to do with the curve in the data. But I seem to be able to see that the absolute maximum number of Italians susceptible of dying is about 300,000. However, if I use the current age pyramid of Italy and combine it with the current CFR of the disease, we could get a total of 4,869,075 deaths. It could mean 2 things: 1) some of the Italians are already immune, or 2) they have mild cases that don't get tested.

I can't know for sure, but all I know is that only 6.1% of Italians should be concerned by the actual CFR. Iluvalar (discusscontribs) 00:33, 11 April 2020 (UTC)
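That percentage follows directly from the two numbers above, the ~300,000 ceiling against the naive CFR projection:

```php
<?php
$ceiling=300000; //apparent maximum of Italians susceptible of dying
$naive=4869075;  //deaths projected from the age pyramid and the raw CFR
echo round($ceiling/$naive*100,2) .'%'; //~6.16%, the "6.1%" quoted above (truncated)
```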

(Jan 10 2022) Immunity from previous infections of the common cold confirmed: [4]. See also COVID-19/Iluvalar#Jan 2 2021.

## Social distance efficacy

I'm thinking today: 60-70% of the population must be immune to stall the spreading. By social distancing we may remove 30% of the risk (or more precisely, half the risk for 60% of us) and stall the curve at a lower place. But it paradoxically increases the risk of eventually getting sick for people who have to stay active. The nurses who take care of the elderly now have a 100% chance of eventually getting the virus: less chance this week, but more eventually. In the end it just guarantees our elderly that each and every nurse taking care of them will have the virus at some point, instead of the baseline 70%. This could explain why the models I'm using kind of refuse to give too big an effect to social distancing. The effect is positive, but some of it backfires on us.

see graph at the top of the page

I managed to include a seasonal effect in the simulation. I'm not fully confident, but it's a probable scenario: ~65,369 deaths per year in this one.

September 20 review: 51,424 per year. The spike is a little more scary, but the overall curve is lower. November 19: obtained by insisting A LOT on not missing the recent deaths. It is therefore not an overall increase of the total deaths, even if it looks slightly higher due to how I obtained it.

July 11 2021: there have been 127,775 deaths to this day; that's about ~61,500 deaths per season. About 20,000 deaths were caused by the new variants. I do not believe that the new variants are more dangerous; however, a fraction of the population, ~45%, was non-susceptible to the first strain, and I do believe that covid slowly evolves to reach those 45%. It would be wise to expect another season coming despite the vaccines; I do not think they will be sufficient.

March 3 2022: I've cleaned out a few predictions: July 28, one in September and one in November. I'm keeping the first prediction and the panic correction in November, when I realised that the wave would be early. The other predictions were not bad, but they were hindering readability. Now that real data are there to show how accurate I was, there is no need to clog the graph with other predictions. But feel free to review the predictions I left in the history for yourself.

## Invalid Data "Hypothesis"

Here are the confirmed cases in China starting at Jan 22, and 2 curves of theoretical exponential growth. As we can see, if there was a unique case on Dec 1 or even Nov 1, we would have got at best 1 single day of true data somewhere around Feb 1. The rest of the confirmed-case data would be under the real curve since then, and by now orders of magnitude lower than the real cases.

The confirmed-case data from any other point only informs us about the number of cases tested, and cannot reflect any real progression curve of the virus.

### What does it mean?

All the confirmed-case data we have is pointless, and only a depiction of our own test capacity, until we hit a change in the curve. The usage of test production as an indicator for a curve or any model is bogus and ill-informed. I see too many governments and scientists using the confirmed-cases curve. 2 exponential curves can only meet at one point (2, but that's obviously impossible in this case).
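The "one point" claim is simple algebra: setting two exponentials with different growth rates equal has exactly one solution.

```latex
a e^{b x} = c e^{d x}
\;\Longrightarrow\; \ln a + b x = \ln c + d x
\;\Longrightarrow\; x = \frac{\ln(c/a)}{b - d} \qquad (a, c > 0,\; b \neq d)
```

So once the testing-capacity curve and the real epidemic curve have crossed, they can never agree again.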

The first usable data is obtained when the curve drastically changes: when the graph switches from showing the amount tested to showing the amount testable. I believe we can clearly see it in Italy's death data since March 21.

This means that any attempt to infer the effectiveness of social distancing from the previous curve is futile.

I first opened this page to attempt to explain the data prior to March 28. The day after (April 5), I already had a model to predict the rest of the infections in Italy. In the following months, I made a slew of slightly varying models using my code, as I was trying to capture not only my best projection but also a margin of error. Somewhere around July 2020, I realized that my initial comment was less important and less telling than the graph of the models, and I started rearranging the page and breaking chronology. I also had a small addendum called "long term", which was a run of my model over several years. I came back a few times, making a few comments in the "journal".

And now, June 15 2022, I'm coming back and breaking the page again, as I feel the "long term" graph is now today, and it conveys more information than the short-term predictions that are already behind us. Please feel free to explore the long history of this page and see what I knew, and when, and how it could be used in a future pandemic. I left many models behind that could be interesting.

## Code

Per request, but it's ugly. If you want to add a premise or test a scenario, it is likely faster to contact me here and ask me to edit that soup myself to spit out more projections.

```php
<?php
//Covid sim
//Warning : This code only finds the local minima to minimize the distance with the real data.
//It does not interpret it, and can and will spit non-sense given the premises fed to it.
error_reporting(E_ALL & ~E_NOTICE);
$ital2=[0, 0, 0, 0, 1, 1, 1, 4, 3, 2, 5, 4, 8, 5, 18, 27, 28, 41, 49, 36, 133, 97, 168, 196, 189, 250, 175, 368, 349, 345, 475, 427, 627, 793, 651, 601, 743, 683, 712, 919, 889, 756, 812, 837, 727, 760, 766, 681, 525, 636, 604, 542];
$italydeaths=[627, 793, 651, 601, 743, 683, 712, 919, 889, 756, 812, 837, 727, 760, 766, 681, 525, 636, 604, 542];

function model2($m,$daymax){ //This is just good old SIR, not even integrated
	$day=$m['a'];
	$case=1;
	$totcase=0;
	while($day++<$daymax){
		if($day>-5){ //Distancing kicks in shortly before the data starts
			$case=$case*($m['b']-$m['d'])*($m['c']-$totcase)/$m['c'];
		}
		else{
			$case=$case*$m['b']*($m['c']-$totcase)/$m['c'];
		}
		$totcase+=$case;
	}
	return $case;
}

//Nudge one parameter up, then down, by a factor of $adj while the fit keeps improving.
//Signature inferred from the call sites below; $m is passed by reference so the tuned values persist.
function approx_improv(&$m,$data,$index,$adj){
	$check=approx_diff($m,$data);
	$lastcheck=$check;
	$initval=$m[$index];
	//echo '<br>check:'. $check .' '. $adj;
	$x=0;
	$stop=0;
	while(!$stop and $x++<100){
		$m[$index]*=$adj; //Try a step up
		$check2=approx_diff($m,$data);
		//echo '<br>--check2@'. $m[$index] .'='. $check2;
		if($check2>$lastcheck){
			$m[$index]/=$adj; //Overshot, step back
			$stop=1;
		}
		else{
			$lastcheck=$check2;
		}
	}
	$x=0;
	$stop=0;
	while(!$stop and $x++<100){
		$m[$index]/=$adj; //Try a step down
		$check2=approx_diff($m,$data);
		//echo '<br>--check2@'. $m[$index] .'='. $check2;
		if($check2>$lastcheck){
			$m[$index]*=$adj; //Overshot, step back
			$stop=1;
		}
		else{
			$lastcheck=$check2;
		}
	}
}

function approx_diff($m,$data){
	$ret=400/model2($m,-26);
	if($ret<1){
		$ret=1/$ret;
	}
	$ret=pow($ret,2); //I used this bit to create a false data point
	$ret=1;
	foreach($data as $no => $val){
		$modelval=model2($m,$no);
		//echo '<br>'. $no .','. $val .'|'. $modelval .')';
		$diff=$modelval/$val;
		if($diff<1){
			$diff=pow(1/$diff,2); //This cheeky power makes values under the curve less plausible
		}
		$ret*=$diff;
	}
	return $ret;
}

$m['a']=-99.190530098464; //Day of 1st death
$m['b']=1.0785697466622; //Growth per day
$m['d']=0.0073135735980529; //impact of distancing on $m['b']
$m['c']=317269.64892627; //Maximum susceptible deaths.

$x=0;
while($x++<5){
	approx_improv($m,$italydeaths,'b',1.0003);
	approx_improv($m,$italydeaths,'d',1.0003);
}
echo "<br>\$m['a']=". $m['a'];
echo ";<br>\$m['b']=". $m['b'];
echo ";<br>\$m['d']=". $m['d'];
echo ";<br>\$m['c']=". $m['c'] .';<br>';
$x=0;
$sum=0;
while($x++<count($ital2)+100){ //+100 so the sum of deaths can be calculated
//while($x++<365){
	$day=$x-count($ital2)+count($italydeaths);
	$val=model2($m,$day);
	//echo '('. $day .'):';
	//$x+=30;
	echo floor($val) .',';
	$sum+=floor($val);
}
echo '<br>sum:'. $sum;
```