Published by Lars Bobeck

25 March 2015

This text contains a basic course in Bible Code matrix probabilities; to understand it you need to know
certain parts of senior high school mathematics.

Numerous predictions are believed to be encoded in Bible Code matrixes, through much closer proximity
between related terms than expected by chance. But is it real? No code key has been found.

It is the Torah (the five Books of Moses in Hebrew) that is believed to be coded this way.

The text below describes proper ways to calculate, misleading ways to research, and factors of uncertainty.

The reader will see that chance findings can sometimes get very good probability figures.

I am a skeptic and I have pondered Bible Code probabilities for a number of years. I have also worked on
this text for a number of years, and a number of additional notes form a fifth main section. After many changes
I consider the text finished; I have tried to make the text reasonably complete and verbally precise.

I started writing as a believer who tried in vain to get a popular and faulty Bible Code program corrected;
in general, quite a few Bible Code faults remain. I then became a skeptic, and the text below gradually
developed into a course. I apologize for some faults in especially the early versions of this text.

My first section is about the historical background. My second section is a description of what a matrix is
in a Bible Code program. My third section is mainly on matrix probabilities, and my fourth section deals with
other factors of importance and misleading ways to research. There is also an appendix with extra formulas.

Notice that true mathematics is the same for skeptics and believers; it cannot accommodate itself to who we are.

The most well-known proof for the Bible Code is known as The Great Rabbis Experiment.

This experiment was published in Statistical Science in 1994, offered by the journal as a challenging puzzle.
The experiment seemed to prove a proximity code for some related terms in Genesis. Term pairs of rabbi
names (appellations) and corresponding dates of birth or death were examined. For the two sets of term pairs
chosen, the probability of getting the final result by chance was reported to be 0.0016 percent. This figure was
rounded off to one chance in 50 000 as the final level of improbability. The experiment used term occurrences
at skips: the letters of a term occurrence are an equal number of text letters apart.

In Statistical Science in 1999 it was reported that The Great Rabbis Experiment worked only in Genesis. It did
not work in the other four books of the Torah, although they are believed to be coded too. A number of full-scale
control experiments checking The Great Rabbis Experiment in Genesis were also reported, and the results of
this investigation were not remarkable. The 1999 report was called Solving the Bible Code Puzzle.

All experiments above used the same disputable result formulas, apart from possible program bugs.

Judaism has a traditional belief that the Torah is coded with predictions, but I don't think that this belief
says anything about improbable proximity between related terms in Torah matrixes (this code theory is
documented from the eighties, but a rabbi may have had the same idea earlier in the twentieth century).
There is also a claimed Jewish tradition roughly saying that short text skip occurrences are the most likely
to be encoded; mathematically they are (somewhat simplified) the most probable to get by chance (but this
does not mean that it is impossible to encode at short text skips). I have chosen to consider the first
500 text skips short (in both directions); the interval is a matter of opinion.

The most well-known book in the field is "The Bible Code" from 1997; it is written in a positive way but
it contains some important incorrect statements.

The book "Who Wrote the Bible Code" from 1999 claims to disprove the code, yet it says that there can be a
big number of term encodings at skips. See pages 119 and 85; page 129 says that we have many skips.

A matrix is formed by the letters of a freely chosen first term and an area of surrounding letters in a flat
rectangular array. If the letters of the first term are found one thousand letters apart in the Torah, we say
that the term is found at the skip 1 000. The program normally displays the letters of the first term
vertically or diagonally (I stick to these cases); with the skip 1 000 we normally get text rows that are
1 000 letters long (signs and spaces are removed).

Around every skip occurrence of the first term, the program can search (within a chosen search area) for
freely chosen related terms (related to the first term). Related terms can be vertical, diagonal or
horizontal. They can also be formed by every second or third letter and so on. Only occurrences with a text
skip are used for terms longer than 2 letters (occurrences that are sequences of equidistant letters in the
text). Vertical, diagonal and horizontal matrix occurrences have a text skip.

When we search within the full text at all possible skips, the program tries every possible skip from every
possible starting point. If we use row splitting we divide the rows so that we get the letters of the first
term at every second or third row and so on (we get row skips higher than one). The program searches both
forwards and backwards in the text, so terms can be displayed in both directions. The plain text is called
the skip one, and skips backwards are denoted with a minus sign. When the program frames the terms of a
matrix in a tight rectangle, that area is called a cropped matrix. There may be exceptions to this
description.

If there are encodings, we risk missing some of them if we don't search the full text at all possible skips.

This text uses expected occurrences as probabilities. For small values, expected occurrences and the ordinary probability of one occurrence are almost the same. I consider expected occurrences typical values, but I find it hard to define them strictly. It is considered improper to call them probabilities, but I consider them a kind of probability in this context. Expected occurrences enable us to start with big figures and end up with a small matrix probability figure. The point of Bible Code matrix probabilities is to see whether Bible Code matrixes are improbable to occur by chance.
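As a small illustration (my own sketch; the text does not specify a distribution), if chance occurrences are modelled as Poisson with expectation E, the probability of at least one occurrence is 1 - e^(-E), which is close to E itself when E is small:

```python
import math

def prob_at_least_one(expected: float) -> float:
    # Under a Poisson model with expectation E, P(at least one) = 1 - e^(-E).
    # For small E this is nearly E, which is why expected occurrences can
    # stand in for probabilities.
    return 1.0 - math.exp(-expected)

for e in (0.001, 0.01, 0.1, 1.0):
    print(e, prob_at_least_one(e))
```

For E = 0.01 the two figures agree to within half a percent; for E = 1 they differ clearly, which is why the substitution only works for small values.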

The first term probability that should be used in the final matrix is basically equal to the number of
expected occurrences in the text, compensated for the skip of the final occurrence, not the small
probability for the term to occur in one single matrix. What we search for in the text has a text
probability, and what we search for in the matrix has a matrix probability. The single cropped matrix
probability for the first term is very much smaller than the text probability, since a single cropped
matrix has a much shorter text. A deviation between the number found and the figure of expected first term
occurrences cannot alter the first term probability.

The first term probability holds without row splitting. Row splitting often increases the first term
probability, when we try more than one row skip. Each row skip practically used has the basic first term
probability, but it is often reduced. The possible matrix height can be smaller than the final cropped
matrix height for first term occurrences at high text skips, and a diagonal first term occurrence can be
too wide for the final cropped matrix.

As a basic rule the first term probability is reduced in the cropped matrix, by the percentage of possible
first term occurrences that cannot give a cropped matrix of the original size. A basic first term
probability should be compensated for this percentage, and often for more than one row skip tried. Without
compensation the percentage alone will normally give anything from a pretty small fault up to about 2 times
too high a first term probability in the cropped matrix. Further, a reduced first term probability normally
means a reduced skip compensation factor C for the first term in the final cropped matrix, so 1.5 will no
longer be my maximal C. The appendix has a formula to compensate for this. To compensate for row splitting,
take the increase in first term probability for the first term within the final cropped matrix size.
Subsequent terms should be treated normally in cropped matrixes with row splitting. A deviating calculation for subsequent matrixes wrongly gives two probabilities for each matrix, since any one of them can be the first.

If we get the total probability through a comparison to, for example, one million chance matrixes (with the first term), then we must compensate for the probability of the first term; a matrix with a less good total significance can otherwise be the best. We can also compare to what we find in a number of equally long chance texts, and in both cases compensate for the text skip of the first term in the original matrix and any row splitting. Our finding starts with the first term included in a search area, and the chance matrixes we compare to should have the first term included in the original position (thus at the original row skip) to be reasonably comparable. We should also compensate for the matrix skips of subsequent occurrences in the original matrix. Comparison to chance matrixes doesn't compensate for the sometimes big free significance when we research with our eyes (discussed later). The significance is the improbability of getting something by chance. In general I discuss program research in this text.

If Bible Code matrixes are very significant, a tenfold probability fault for a matrix should not matter, but at least some conventional Bible Code faults can be very much bigger for a matrix. If you can accept rough results, you can ignore the compensations described in this text when they are small.

**The text formula:**

Petext = Etext x C (Pe for the use of expected occurrences as probabilities, but with skip compensation)

Petext = The probability for the term in the text, at a skip where it occurs.

Etext = The number of expected occurrences for the term in the entire text (searching both directions).

C = The compensation for the text skip (discussed later), not a constant.

I expect the figure for total expected text occurrences to be (roughly) reliable when given in a current Bible Code program (apart from possible bugs).

The Petext probability is the basic first term probability; when necessary it should be compensated for
the use of row skips and reduced in the cropped matrix.

For a one term finding in the form of a sentence the cropped matrix is not relevant; there is no proximity
to estimate. In this case we should use the text probability.
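A minimal sketch of the text formula in code; the C formula used here is the one given later in the text, and all example numbers are hypothetical:

```python
def skip_compensation(s_term: int, s_max: int) -> float:
    # C = 1.5 x ((|Smax| - |Sterm| + 1) / |Smax|), as given later in the text.
    return 1.5 * ((s_max - abs(s_term) + 1) / s_max)

def petext(e_text: float, s_term: int, s_max: int) -> float:
    # Petext = Etext x C
    return e_text * skip_compensation(s_term, s_max)

# Hypothetical example: a term with 2.0 expected text occurrences, found
# at skip 1000, with a theoretical maximal skip of 76201 (a 5-letter term).
print(petext(2.0, 1000, 76201))
```

At such a short skip the compensation is close to 1.5, so the result is close to 3.0.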

**A rough reduction factor:**

This factor applies when all possible first term occurrences at the row skip are within the final cropped matrix width.

Expected text occurrences should be multiplied by this factor for the first term in the matrix.

R = 1 - ( 1 - ( Th / Mh ) )^2

R = The reduction factor (except when one).

Th = The first term height in letters.

Mh = The final cropped matrix height in letters.
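The reduction factor can be sketched as follows; the example numbers are hypothetical:

```python
def reduction_factor(th: int, mh: int) -> float:
    # R = 1 - (1 - Th/Mh)^2
    # Th = first term height in letters, Mh = cropped matrix height.
    return 1.0 - (1.0 - th / mh) ** 2

# Hypothetical example: a first term 10 rows high in a 40-row cropped matrix.
print(reduction_factor(10, 40))  # 0.4375
```

When the term fills the whole matrix height (Th = Mh) the factor is 1 and nothing is reduced.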

**The cropped matrix formula for subsequent terms:**

Pecm = Ecm x C (Pe for the use of expected occurrences as probabilities, but with skip compensation)

Pecm = The probability for the term in the cropped matrix, at the matrix skip where it occurs.

Ecm = Expected occurrences of the term in the cropped matrix.

C = The compensation for the matrix skip (discussed later), not a constant.

In the text we can on the whole approximate occurrence probabilities as falling linearly with increasing skip, from twice the average near skip zero down to zero at the maximal skip. In the matrix we can normally, very roughly, do the same. In the text this can be verified very well by the number of text occurrences per interval of 5 000 skips, when we have sufficiently many occurrences. Conventional Bible Code probabilities are, at least often, rising instead.

Ecm = ( Lcm / 304805 )^2 x Etext

Ecm = Expected occurrences of the term in the cropped matrix.

Lcm = The number of letters in the cropped matrix.

304805 = The number of letters in the Torah (the Koren version, commonly used in Bible Code research).

Etext = The number of expected occurrences for the term in the entire text (searching both directions).

The parenthesis is squared since the text length and the number of possible skips are equally
reduced in the cropped matrix. The area under the linearly falling curve then scales with the square (but some
matrix skips are seldom or never used). There are minor factors (compared to the square relation for a normal
size matrix) that can affect the result, so the formula above gives only a rough result.
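The square conversion can be sketched as follows; the example figures are hypothetical:

```python
TORAH_LETTERS = 304805  # letters in the Torah, Koren version

def ecm(lcm: int, e_text: float) -> float:
    # Ecm = (Lcm / 304805)^2 x Etext
    # The squared ratio reflects that both the text length and the number
    # of possible skips shrink in the cropped matrix.
    return (lcm / TORAH_LETTERS) ** 2 * e_text

# Hypothetical example: a 1500-letter cropped matrix and a term with
# 100 expected occurrences in the entire text.
print(ecm(1500, 100.0))
```

A 1500-letter matrix is about half a percent of the text, so the squared ratio makes the expected matrix occurrences a few thousandths of an occurrence.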

A test with a 39x39 search area and a 5-letter subsequent term showed that about 72 percent of subsequent
occurrences would seldom or never be used in findings; normally they have irregular skips in the text. If the
formula is used for the search area we normally find less; on the other hand the cropped matrix is normally
smaller than the search area. I have checked how the formula works for some 3-occurrence matrixes: in a small
test (with a 39x39 search area and a 5-letter subsequent term) the formula gave roughly 60 percent of the real
probability as the median for the 2 subsequent occurrences together. Some further testing changed the figures
to about 75 percent and roughly 58 percent. Since these figures need not be very precise there is no need for
big tests. The matrix probability can also be affected by the percentages of letters, if they differ from the
percentages in the text. A computer program can deal with this through multiplication of deviation factors
for the letters in subsequent terms.

C = 1.5 x ( ( |Smax| - |Sterm| + 1 ) / |Smax| )

C = The basic matrix skip compensation, for the first term the basic text skip compensation (not a constant).

|Smax| = The maximal matrix skip for the term, for the first term the maximal text skip (with positive sign).

|Sterm| = The matrix skip of the term, for the first term the text skip (with positive sign).

I suggest the theoretical maximal skip to be used (with positive sign).
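A sketch of how the C-factor behaves: it falls from about 1.5 at the shortest skips towards zero at the maximal skip. The value 76201 used here is the theoretical maximal text skip for a 5-letter term, as tabulated later in the text.

```python
def c_factor(s_term: int, s_max: int) -> float:
    # C = 1.5 x ((|Smax| - |Sterm| + 1) / |Smax|)
    return 1.5 * ((s_max - abs(s_term) + 1) / s_max)

s_max = 76201  # theoretical maximal text skip for a 5-letter term
for skip in (2, s_max // 3, 2 * s_max // 3, s_max):
    print(skip, round(c_factor(skip, s_max), 4))
```

Because the formula uses the absolute skip, occurrences found backwards (negative skips) get the same compensation as forward ones.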

Whatever skip we happen to get a single occurrence at, it is the probability of finding the term in the entire skip range (of skips used) that holds, although adjusted for the skip. We cannot use the probability of finding the term at the actual single skip. When we expect and find one occurrence in the full skip range (of skips used), that occurrence would get a small probability if we did (no matter what the actual skip is). When we have many occurrences we should be consistent and still use the probability for the entire skip range (of skips used); then we can compare probabilities.

Since the probability can normally be (very roughly) approximated as falling linearly towards zero with an
increasing skip, I divide the remaining number of possible skips (plus one) by the full number. The probability
of finding a term is in reality related to the number of possible text skip starting points (in the text or
matrix text); it normally does not matter for subsequent terms what text skips a matrix contains. The
probability is (roughly) proportional to the number of possible text skip starting points for a term, and we
can normally (very roughly) approximate them as falling linearly with an increasing skip: when the skip gets
longer, more and more letter sequences are cut off by the end or beginning of the text or matrix text. In the
matrix this holds normally (and very roughly). It is simple to use pure skip compensation instead of starting
point compensation, but for various reasons pure skip compensation can work badly for subsequent terms in the
matrix. The pure skip compensation for subsequent terms should normally be considered very rough; a program
should rather use the appendix formula for possible text skip starting point compensation in this case (in the
matrix at the possible text skips). I suggest using my pure skip compensation for subsequent occurrences only
at really long matrix skips, when we don't have too few starting letters for the subsequent term in the matrix.
Even used this way, the pure skip compensation for subsequent occurrences can be very rough.

If we get an occurrence of a term with the average probability for occurrences of that term (in the text or
in a matrix), then the probability should in my opinion not be affected by the skip. The average probability
for a term occurs at about one third of the maximal skip, when the linearly falling probability curve is
reasonably linear. Therefore the C-factor should in my opinion be 1.5 at the shortest skip (whether we have
one or more occurrences). But the optimal value for matrix skip compensation can be altered for different
kinds of matrixes, to compensate for the rough formula for expected occurrences. In a test my Ecm formula
gave a median that underestimated the probability roughly 5 times in 2 term matrixes (with one subsequent
occurrence). Another view is that the theoretical figure should be 1, to make the probability never exceed
expected occurrences. But then the probability will (somewhat simplified) almost always be lower than expected
occurrences (even when we research in the full skip and text range).

It is maybe improper to position the linearly falling curve at a level that gives us the full skip range probability at the average skip for occurrences, as I have done. But I think it is necessary to use (about) this basic level to compensate for the skip in a reasonable way, although the resulting probabilities become a contradiction as noted above. Imaginary numbers are improper too, and still useful. To avoid the contradiction we must, as far as I can see, neglect that a single occurrence of a term is not equally probable at different skips, and use only the figures for expected occurrences. But isn't that improper? A single occurrence is very unlikely to occur at really long text skips. Anyway, the difference is normally not big if we calculate the term probability without skip compensation.

Lterm = 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

|Smax| (text) = 152402, 101601, 76201, 60960, 50800, 43543, 38100, 33867, 30480, 27709

Lterm = The number of letters in the term.

|Smax| (text) = The theoretical maximal text skip for the term (with positive sign).
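The table is consistent with the following rule (my inference from the figures): a term of L letters at skip S spans (L - 1) x S + 1 letters, so the theoretical maximal skip is the largest S for which that span still fits in the text.

```python
TORAH_LETTERS = 304805  # Koren version

def s_max(term_length: int, n_letters: int = TORAH_LETTERS) -> int:
    # A term of L letters at skip S spans (L - 1) * S + 1 letters,
    # so the largest fitting skip is floor((N - 1) / (L - 1)).
    return (n_letters - 1) // (term_length - 1)

print([s_max(length) for length in range(3, 13)])
# Reproduces the table above: 152402, 101601, 76201, ...
```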

The cropped matrix is an estimation of proximity, so the probability value (normally) has to be an estimation. (When I write the word normally within parentheses, I mean it as an alternative: always or normally.)

Plain text probabilities can be special, see one of my additional notes for a discussion on this.

Since the plain text is not a chance text, I expect the linearly falling curve to often be far from precise
at our very shortest text skips.

Since chance results (normally) have a spread, we can among many chance matrixes expect some to have (relatively) good results. I find it hard to compensate for this spread if there is a Bible Code; the distribution of probability figures for our matrixes may then deviate quite a bit from chance. Further, if we calculate for single researchers it will be wrong for all researchers together, and it is hard to calculate for all researchers together.

When we find all terms we search for and all subsequent occurrences are significant (clearly improbable
to occur by chance), we can multiply the probabilities, including the first term probability, to (roughly)
get the total probability. But when we try many terms, it can be much easier to find a few than what the
multiplication says. The probability of finding 3 subsequent terms out of 12 is higher than what the
multiplication says for 3 terms in this case. There are 220 ways to get 3 terms out of 12, and 2.12 million
ways to get 5 terms out of 50 (but these figures are not the increase in probability). In such cases multiplied
probabilities will give us a degree of free significance, but I don't have a general formula for it. The
normal rise in total matrix probability seems to be roughly 10 times (when we search for a few tens of
clearly improbable subsequent terms and find a few). But the free significance can be many hundreds of times
(theoretically even more), if we search for some tens of less improbable subsequent terms.
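The combination counts above can be checked directly:

```python
from math import comb

# Ways to choose 3 terms out of 12, and 5 terms out of 50, as stated above.
print(comb(12, 3))  # 220
print(comb(50, 5))  # 2118760, about 2.12 million
```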

Subsequent occurrence probabilities higher than 100 percent should not be multiplied (it is fundamentally
wrong). We should (normally) not use occurrences of subsequent terms with probabilities higher than
100 percent, they are (normally) uninteresting but they can theoretically form significant patterns.

In the Bible Code theory also sentences and sequences of sentences are called terms.

**Remarkable chance occurrences and spelling changes:**

Searching for terms at all useful skips in the Torah is similar to searching in a very long randomized text.

If the Torah this way has a million billion term combinations with reasonable numbers of related terms,
then probably a billion of them form very remarkable clusters by chance. And maybe we get a million big
clusters that seem impossible to get by chance. (My assumptions are rough, but we deal with very big numbers;
for example we have a trillion trillion ways to choose 6 terms out of 30 000, though many differ by only
one term.) The same thing holds for any book of the same text length. Without a code key we cannot know
whether a finding is a remarkable chance occurrence or not. If we take into account all spelling changes
that have reasonably occurred in the Torah, many Bible Code findings must be considered remarkable chance
occurrences.

Bible Code programs and Bible Code researchers are very good at sorting out remarkable occurrences.

There are different versions of the Torah and there are spelling differences (Genesis alone had between
3 and 43 spelling differences across a number of versions, when compared to the Koren version in Statistical
Science in 1999).

A trend in Hebrew spelling of importance here has been to add missing vowels. If only 300 vowels are added,
a skip code can be almost fully deleted in a 300 000 letter text; maybe a few per mille of all term
encodings can remain (normally mainly short skip encodings). When one letter is added, it destroys
all skip encodings that pass that position in the text.
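A rough model of this (my own sketch, not from the text): if each added letter lands uniformly at random, an equidistant occurrence spanning s letters in an N-letter text survives k insertions with probability about (1 - s/N)^k. Short skip occurrences span few letters and mostly survive; long skip occurrences span many letters and are almost surely destroyed.

```python
def survival_probability(span: int, text_len: int, insertions: int) -> float:
    # Each inserted letter destroys the occurrence if it falls inside the
    # span between the first and last letter of the occurrence.
    return (1.0 - span / text_len) ** insertions

# A 5-letter term spans 4 * skip + 1 letters.
print(survival_probability(4 * 50 + 1, 300000, 300))     # short skip: mostly survives
print(survival_probability(4 * 10000 + 1, 300000, 300))  # long skip: essentially destroyed
```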

There is also a big number of skip terms in a normal size search area. It is probably normal to have, by
chance, many "significant" terms related to the first term, and an enormous total "significance".

If you for example have 400 related terms, each with a one percent chance of occurring in your search
area, then you can typically expect 4 to occur. But with the commonly used method of multiplied
probabilities, the chance of getting 4 is one in 10^8. This is a simplified example, but if you think a bit
further (ponder the free significance within ten-power probability intervals) it indicates that we can get
many "significant" occurrences and a very big total free significance when we search with our eyes. Longer
related terms may contribute a lot to the total free significance; they are normally (much) less probable
to occur, but we reasonably have (much) more of them. Comparison to chance matrixes doesn't compensate for
the free significance discussed here; it can be very improbable to get exactly the same terms.
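The 400-term example can be made concrete with a binomial model, a sketch under the stated assumptions (400 independent terms, one percent chance each):

```python
from math import comb

n, p = 400, 0.01  # 400 related terms, one percent chance each

def binom_pmf(k: int) -> float:
    # Probability of exactly k chance hits among n independent tries.
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# The expected number of hits is n * p = 4, and exactly 4 hits is quite
# likely, while multiplying four individual probabilities gives 0.01^4 = 1e-8.
print(n * p)
print(binom_pmf(4))
print(0.01 ** 4)
```

Getting exactly 4 hits has a probability of roughly one in five, an enormous gap from the one in 10^8 that naive multiplication suggests.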

Above I mean the American trillion and billion, that is, one million millions and one thousand millions.

**Some ways to research that can lead you far astray:**

These ways to research can be very misleading when we use good (in reality bad) selections of them; they can then show a mass of things seemingly encoded. They all increase the probability of finding (seemingly) remarkable chance occurrences. Even alone, some of these ways to research can be very misleading.

To choose a first term with many occurrences: the program searches around all of them to find your
subsequent terms (when you research in the full skip and text range).

To try your subsequent terms as your first term, it can sometimes give a lot of similar chances.

To search for many things and report only your successes.

If you know Hebrew: to search with your eyes among all terms that occur by chance in a normal size matrix.
Some will (normally) be related to your first term, and any long ones can raise the total significance
very much.

To use (every possible) row splitting, it can give you several extra chances.

To check the letters before and after your terms; if meaningful together with your terms they raise
the significance (subsequent occurrences should still be related to the first occurrence).

If your program probabilities are much better at short text skips: To try hard to find short text skip matrixes.

Beside the Torah: To search for clusters in some other book of the Bible.

To use short terms with many occurrences in your matrix by chance, and show a few seemingly remarkable
occurrences.

To use short abbreviated years with many occurrences, although they in reality denote years long before
the birth of Moses.

To try different ways to write Jewish dates, and try the corresponding western dates.

To accept bad grammar and bad translations.

To try different ways to spell your terms.

To try different ways to express your terms. If you have four terms and three ways to express each term,
you get 81 ways in total to find them (3 x 3 x 3 x 3).

With a good selection of these ways to research we can get a considerable free significance.

To research properly we only have to leave out improper ways to research; notice that some ways to
research can become proper through compensations.

Extra occurrences of finding terms can be treated as ordinary subsequent occurrences.

When we compare to chance matrixes we can restrict ourselves to matrixes with the same occurrences
as the original matrix; we then avoid comparison to different matrixes.

It is disputable if extra occurrences of the first term should be used in a finding. The first term
is in my opinion not related to itself, but it may be remarkable to get improbable extra occurrences
of it in a finding.

A subsequent term with a less good proximity to the first term can get a very good significance, if it happens to occur in a long and narrow matrix (and thereby within a relatively small area).

If there is a Bible Code it is at least normally hard to use it to predict; the freely chosen first term is normally surrounded by a lot of chance occurrences (sometimes in many matrixes). Further, if there is a Bible Code it probably has a pretty limited number of encodings; it is hard to retain the story and use many letter changes to add terms (and starting with many code letters it is hard to form a desired story).

I suggest you check occurrence probabilities in Bible Code programs before you trust them; the popular program I bought in 2003 had both much too high and much too low expected occurrence figures.

Since I don't know Hebrew spelling and grammar I cannot really discuss findings claimed to be sequences of sentences. Bad word order and the choice of vowels (many are missing in written Hebrew) may cause such findings (as may of course a real code). Such findings need not be (pure) Torah findings.

In my section about basic cropped matrix and text probabilities I have not considered the unusual situation when a subsequent term in a matrix has a text skip but not a matrix skip. In this situation the square conversion still roughly holds, and possible text skip starting points are still (roughly) proportional to the matrix probability. The situation can occur when the line division gives lines that are a bit longer than the matrix lines. Then an occurrence can for example extend diagonally towards the left side of the matrix and continue from the right side, after one different matrix skip (e.g. with one letter in every second column).

About comparison to chance matrixes: to have the same chance of getting the same terms in the same number
or numbers, it can be considered strict to use the original search area for the finding and the rest of the
search areas we may have got. But we can (normally) get the original finding with another search area, or at
least with partially other search areas. A further problem is that we can often get the same finding with
different sets of row skips with the first term within the final cropped matrix size, and thus often with
different probability figures. We should take into account whether our cropped chance matrixes are bigger
or smaller in total than the original cropped matrix; it is also of interest whether we get enough chance
findings to make a roughly reliable comparison.

Intentional plain text occurrences should in my opinion be included in the search when we compare to chance
matrixes, but with compensation for deviations from ordinary expected occurrence probabilities for such
occurrences.

Representative chance matrixes have a normal skip distribution; we should therefore not compensate for skips
in chance matrixes (whether such matrixes are from chance texts or not).

We should always calculate an occurrence probability for the skip range (and text range) we research in. But a long intentional plain text term (improbable at all other possible text skips together) cannot have a small plain text probability; plain text probabilities for intentional terms should be calculated separately. The probability for an intentional plain text occurrence should be 100 percent, and the plain text probability for an intentional term the number of times it occurs in the plain text. The probability of getting an intentional term in a cropped matrix (a matrix found through a first term occurrence and a program search with a search area) should then normally be roughly the figure given by text length conversion (when the term is at least roughly evenly distributed in the plain text). We cannot know beforehand what parts (or part) of the plain text we will get in the matrix, so we cannot have a 100 percent probability for the intentional term to be in the matrix. Unevenly distributed intentional plain text terms can have a different conversion.

Some may not agree with my higher Bible Code probabilities. The principle is to use 10 times more expected occurrences as a 10 times higher probability, in my opinion a reasonable way to compare probabilities. I can therefore have probabilities much higher than 100 percent. Orthodox mathematics says that since nothing can happen more than for sure, probabilities can never exceed 100 percent. But we can also choose to call all degrees of insignificance probabilities, not only higher ordinary probabilities. Does orthodox mathematics have a patent on what may be called probabilities? I know that the probability of actually getting one occurrence (not, for example, 2 or 3) is smaller than expected occurrences for less improbable terms. And the probability of actually getting one occurrence is quite a bit smaller than 100 percent when we have one expected occurrence. The probability of actually getting 100 occurrences is small, and (normally) of no interest in this context.

Research within reduced text and skip ranges (not in the full text at all possible skips) causes a problem. We can (normally) get a finding within different text and skip ranges, and thereby with different probability values.

Subsequent terms shorter than five letters are often too frequent to be interesting.

My conversion from text to matrix probability can work badly for really small matrixes. I don't suggest using my formulas for subsequent terms on really small matrixes.

I have written that it normally doesn't matter for subsequent terms what text skips a matrix contains.
It doesn't matter when we can only have vertical, horizontal and diagonal text skip occurrences in the matrix,
and this is normally the case for most matrixes when we research in the full text skip range in the full text.

I have described the exceptional broken diagonal form for matrixes where the text lines are a bit longer
than the matrix lines; in such matrixes and in plain text matrixes we can normally have (pretty) many other
occurrences (seldom or never used in findings). Also when the difference between text and matrix line
length is a bit bigger, we can normally have other occurrences.

First, a good formula for the full number of expected chance occurrences of a term in the full text.

Etext = |Smax| x Nfirst x Fsecond x ... x Flast

Etext = The number of expected chance occurrences of a term.

|Smax| = The maximal theoretical skip for the term with positive sign.

Nfirst = The number of occurrences of the first letter (our starting letters).

Fsecond = The relative frequency of the second letter.

Flast = The relative frequency of the last letter.

A relative letter frequency is the number of occurrences of one letter divided by the full number of letters.

I will give an almost precise explanation of the formula.

To calculate expected chance occurrences of a term in the text, we can use the full number of possible
skips (positive and negative) times the average probability for a chance occurrence at the single
possible skips.

This average is the average number of possible starting letters (for the term at the possible skips)
times the relative letter frequencies. And the average number of possible starting letters is here the
full number divided by 2. But since the formula uses only positive skips, we should also multiply by 2.
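
As a minimal sketch, the formula above can be computed like this; the letter counts below are hypothetical, not real Torah figures:

```python
# Etext = |Smax| x Nfirst x Fsecond x ... x Flast
# A minimal sketch with hypothetical letter counts (not real Torah figures).

def expected_text_occurrences(s_max, letter_counts, term, text_length):
    n_first = letter_counts[term[0]]        # occurrences of the first letter
    e = abs(s_max) * n_first                # |Smax| x Nfirst
    for letter in term[1:]:
        # multiply by the relative frequency of each later letter
        e *= letter_counts[letter] / text_length
    return e

# Hypothetical 3-letter term "ABC" in a 100,000-letter text
counts = {"A": 8000, "B": 5000, "C": 4000}
e_text = expected_text_occurrences(2000, counts, "ABC", 100_000)
```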

Second, the formula for expected text occurrences above can be modified to replace the square conversion: we can then use |Smax| (here matrix skips), Nfirst and relative letter frequencies for the matrix. Just as with the square conversion, this calculation normally gives too high values for the search area, but the results will normally be roughly correct in the cropped matrix, since it normally is smaller than the search area.
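
The same kind of sketch can be reused with matrix-level figures (matrix skips, matrix starting letters, matrix letter frequencies); again, all numbers are hypothetical:

```python
# The Etext formula applied with matrix-level values:
# matrix skips, matrix starting letters and matrix letter frequencies.
# All figures below are hypothetical.

def expected_matrix_occurrences(s_max_matrix, n_first, matrix_counts,
                                term, matrix_letters):
    e = abs(s_max_matrix) * n_first
    for letter in term[1:]:
        e *= matrix_counts[letter] / matrix_letters
    return e

# Hypothetical term "ABC" in a 500-letter matrix with 3 starting letters
e_matrix = expected_matrix_occurrences(40, 3, {"B": 20, "C": 15},
                                       "ABC", 500)
```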

Third, a simple (not perfect) formula for skip compensation in the matrix, based on possible text skip starting points for subsequent terms. The formula is intended for subsequent terms, except intentional plain text terms, when all text skips in the matrix are within the skip range we research in.

C = Ecm x 0.75 x ( Ns / ( Nt / ( Su / 2 ) ) )

C = The skip compensation, not a constant.

Ecm = The figure of expected occurrences for the term in the matrix.

Ns = The number of possible text skip starting points at plus and minus the occurrence skip in the matrix.

Nt = The total number of possible text skip starting points for the term in the matrix.

We can count starting points for each possible text skip separately (in both directions), and sum up.

Su = Skips used, the total number of possible plus and minus text skips for the term in the matrix.

The number of possible text skip starting points for the term at plus and minus the occurrence skip in the matrix is compared to an average number of starting points for possible plus and minus text skips in the matrix (for the term). A skip may be possible in only one direction from a starting letter, and the kind of average used takes this into account. The factor 0.75 is an adjustment that corresponds to the factor 1.5 in my pure skip compensation. We should increase the factor 0.75 to 1 when we have only one starting letter for a term in the matrix, and we should (normally) restrict the formula to starting points for vertical, horizontal and diagonal occurrences. When we have two starting letters we can use the factor 0.85 to limit the maximal fault. The formula is pretty useless without a computer program to calculate it.

In this appendix I have chosen to consider one starting letter as more than one starting point when it can be a starting letter for more than one possible text skip occurrence.
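
A sketch of the skip compensation formula, with the factor choice (0.75, 0.85 or 1) following the rules stated above; the input figures in the example are hypothetical:

```python
# C = Ecm x factor x ( Ns / ( Nt / ( Su / 2 ) ) )
# Sketch of the skip compensation formula; example figures are hypothetical.

def skip_compensation(e_cm, n_s, n_t, s_u, starting_letters):
    # Factor per the text: 1 for one starting letter, 0.85 for two,
    # otherwise the standard adjustment 0.75.
    if starting_letters == 1:
        factor = 1.0
    elif starting_letters == 2:
        factor = 0.85
    else:
        factor = 0.75
    return e_cm * factor * (n_s / (n_t / (s_u / 2)))

c = skip_compensation(e_cm=2.0, n_s=6, n_t=30, s_u=10, starting_letters=3)
```

A real implementation would additionally count Ns, Nt and Su from the matrix itself, restricted to vertical, horizontal and diagonal starting points as the text prescribes.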

Fourth, a rough formula for my maximal C for the first term in the cropped matrix at normal row skips.

The formula holds for the normal case, when we have vertical possible first term occurrences without row splitting.
It also holds when the possible first term occurrences at the row skip don't exceed the final cropped matrix width.
In other cases I suggest the compensation below to be omitted.

Cmax = 1 + ( 0.5 x ( first term height in letters / cropped matrix height in letters ) )
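
As a one-line sketch of the formula, with hypothetical heights:

```python
# Cmax = 1 + ( 0.5 x ( first term height / cropped matrix height ) )
# The heights in the example are hypothetical.

def c_max(first_term_height, matrix_height):
    return 1 + 0.5 * (first_term_height / matrix_height)

cmax = c_max(4, 20)   # a 4-letter-high term in a 20-letter-high matrix
```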

Mathematicians' Statement on the Bible Codes

Solving the Bible Code Puzzle

The main text is well worth reading.

Torah Codes

Perhaps the leading believer homepage.

torahcode.us

A believer homepage that contains a lot.

Skeptic's Dictionary

Describes various disputable teachings.

Cult Leaders

They often attract you with light, but what do you get along with it?

Hammond Organ Music

If you like Hammond organ, you may like this music of mine.