Entropy

Definitions

“The entropy source produces random bitstrings to be used by an RBG.” (RBG: Random Bit Generator; RNG: Random Number Generator)
“The three main components of a cryptographic RBG are a source of random bits (an entropy source), an algorithm for accumulating and providing random bits to the consuming applications, and a way to combine the first two components appropriately for cryptographic applications.”
Entropy is used as the source of the RBG (Random Bit Generator, also known as an RNG: Random Number Generator), which is the cornerstone of Digital Signature, Public-Key Encryption, and Key-Establishment schemes. Moreover, it applies to the sampling and modeling of probability distributions, which is essential for data analysis and machine learning.
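To make these three components concrete, here is a minimal TypeScript sketch of the structure described above. It is illustrative only, not EntroBeam's or NIST's actual design; the entropySource stub and the SHA-256-based accumulator are assumptions chosen for brevity.

```typescript
import { createHash, randomBytes } from "node:crypto";

// Component 1 — entropy source: stubbed here with the OS CSPRNG; a real design
// would draw from a dedicated noise source.
function entropySource(bytes: number): Buffer {
  return randomBytes(bytes);
}

// Component 2 — accumulator: conditions each new input into a running pool.
class EntropyAccumulator {
  private pool: Buffer = Buffer.alloc(32);

  absorb(input: Buffer): void {
    this.pool = createHash("sha256").update(this.pool).update(input).digest();
  }

  // Component 3 — combining step: derive output bits for a consuming application.
  output(): Buffer {
    return createHash("sha256").update(this.pool).digest();
  }
}

const accumulator = new EntropyAccumulator();
accumulator.absorb(entropySource(32));
console.log(accumulator.output().toString("hex")); // 256 random-looking bits
```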

What's more

The entropy of a random variable $X$ is a mathematical measure of the expected amount of information provided by an observation of $X$. As such, entropy is always relative to an observer and his or her knowledge prior to an observation.
In information theory, the entropy of a random variable is the average level of "information", "surprise", or "uncertainty" inherent in the variable's possible outcomes. The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. As an example, consider a biased coin with probability $p$ of landing on heads and probability $1-p$ of landing on tails. The maximum surprise is for $p = 1/2$, when there is no reason to expect one outcome over another. In this case a coin flip has an entropy of one bit. The minimum surprise is for $p = 0$ or $p = 1$, when the event is known and the entropy is zero bits. Other values of $p$ give different entropies between zero and one bits.
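As a quick check of the coin example, a short TypeScript sketch of the binary entropy function reproduces the values quoted above:

```typescript
// Binary entropy (in bits) of a biased coin with probability p of heads.
function binaryEntropy(p: number): number {
  if (p === 0 || p === 1) return 0; // the outcome is certain, so there is no surprise
  return -p * Math.log2(p) - (1 - p) * Math.log2(1 - p);
}

console.log(binaryEntropy(0.5)); // 1 bit: maximum surprise
console.log(binaryEntropy(0.9)); // ≈ 0.469 bits
console.log(binaryEntropy(1));   // 0 bits: the event is known
```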
Given a discrete random variable $X$, with possible outcomes $x_1, \ldots, x_n$, which occur with probability $P(x_1), \ldots, P(x_n)$, the entropy of $X$ is formally defined as:

$$H(X) = -\sum_{i=1}^{n} P(x_i) \log P(x_i)$$
The choice of base for $\log$ varies for different applications. Here $\sum$ denotes the sum over the variable's possible values, and in EntroBeam the number of terms $n$ corresponds to the length of the Entropy Chain.
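The formula translates directly into code. The sketch below is a generic discrete-entropy calculation (the base parameter reflects the note above about the choice of logarithm base); it is an illustration, not part of the EntroBeam contracts.

```typescript
// Shannon entropy H(X) = -Σ P(x_i) · log P(x_i) for a discrete distribution.
// Base 2 gives bits, base e gives nats.
function shannonEntropy(probabilities: number[], base: number = 2): number {
  return -probabilities
    .filter((p) => p > 0) // 0 · log 0 is taken as 0 by convention
    .reduce((sum, p) => sum + p * (Math.log(p) / Math.log(base)), 0);
}

console.log(shannonEntropy([0.25, 0.25, 0.25, 0.25])); // 2 bits: four equally likely outcomes
console.log(shannonEntropy([0.7, 0.1, 0.1, 0.1]));     // ≈ 1.357 bits: a skewed distribution
```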
The length of the Entropy Chain is variable (FormationNumber). EntroBeam increases the length according to the frequency of transactions in the Entropy Registry.
As noted in the EntroBeam white paper, the probability $p$ that secure entropy is derived from a single Entropy Registry transaction may be equal to the length of the entropy chain. Therefore, if that length is not sufficient for what the user desires, the user must add multiple transactions to accumulate secure entropy.
When the length of the entropy chain is $p$ and the number of transactions is $n$, the probability of secure entropy that can come out of a single transaction remains $p$, but multiple transactions have a probability of $p^n$.
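A tiny numeric illustration of the relation just stated; the value of p used here is arbitrary and purely for demonstration.

```typescript
// One transaction is tied to a probability p; n independent transactions compound to p^n.
function combinedProbability(p: number, n: number): number {
  return Math.pow(p, n);
}

const p = 0.001; // illustrative value only, not a real EntroBeam parameter
console.log(combinedProbability(p, 1)); // 0.001
console.log(combinedProbability(p, 3)); // 1e-9: shrinks exponentially as n grows
```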
The length of the entropy chain is stored in the uint256 public FormationNumber variable, so users can quickly check it through a block explorer.
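For example, the variable can also be read off-chain with a standard library such as ethers.js. The sketch below assumes ethers v6; the contract address and RPC endpoint are placeholders, not real values.

```typescript
import { Contract, JsonRpcProvider } from "ethers";

const ENTROBEAM_ADDRESS = "0x0000000000000000000000000000000000000000"; // placeholder address
const abi = ["function FormationNumber() view returns (uint256)"];

async function readFormationNumber(): Promise<bigint> {
  const provider = new JsonRpcProvider("https://rpc.example.org"); // placeholder RPC endpoint
  const entrobeam = new Contract(ENTROBEAM_ADDRESS, abi, provider);
  // Solidity auto-generates a getter for the public FormationNumber variable.
  return entrobeam.FormationNumber();
}

readFormationNumber().then((length) => console.log(`Entropy Chain length: ${length}`));
```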
Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such as combinatorics. The definition can be derived from a set of axioms establishing that entropy should be a measure of how "surprising" the average outcome of a variable is.

Information theory

When viewed in terms of information theory, the entropy state function is the amount of information in the system that is needed to fully specify the microstate of the system. Entropy is the measure of the amount of missing information before reception. Often called Shannon entropy, it was originally devised by Claude Shannon in 1948 to study the size of information of a transmitted message. The definition of information entropy is expressed in terms of a discrete set of probabilities $p_i$ so that

$$H(X) = -\sum_{i=1}^{n} P(x_i) \log P(x_i)$$
In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average size of information of a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of binary questions needed to determine the content of the message.
A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (with two equally likely outcomes) provides less information (lower entropy) than specifying the outcome from a roll of a die (with six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, and finite block-length information theory.
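The coin-versus-die comparison in numbers (a uniform choice among k equally likely outcomes carries log2(k) bits):

```typescript
// Entropy of a uniform choice among k equally likely outcomes is log2(k) bits,
// i.e. the average number of binary questions needed to pin down the outcome.
const coinEntropy = Math.log2(2); // 1 bit for a fair coin flip
const dieEntropy = Math.log2(6);  // ≈ 2.585 bits for a fair six-sided die
console.log(coinEntropy, dieEntropy); // the die outcome carries more information
```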

In cryptography

In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average) $2^{127}$ guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly. Instead, a measure called guesswork can be used to measure the effort required for a brute force attack.
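The "on average" figure follows because a brute-force attacker searches roughly half the keyspace before hitting the right key; a one-line check:

```typescript
// Average brute-force guesses for a uniformly random k-bit key is about 2^(k - 1).
function averageGuesses(bits: number): bigint {
  return 1n << BigInt(bits - 1);
}

console.log(averageGuesses(128)); // 170141183460469231731687303715884105728n, i.e. 2^127
```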
Other problems may arise from non-uniform distributions used in cryptography. Consider, for example, a 1,000,000-digit binary one-time pad using exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy), it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all.
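A toy version of the pad scenario, shrunk from a million bits to four for readability; it only illustrates the leak described above.

```typescript
// One-time pad over bits: ciphertext = plaintext XOR pad. If a pad bit is fixed
// rather than random, the corresponding ciphertext bit leaks the plaintext bit.
function xorPad(plaintext: number[], pad: number[]): number[] {
  return plaintext.map((bit, i) => bit ^ pad[i]);
}

const plaintext = [1, 0, 1, 1];
const goodPad = [0, 1, 1, 0]; // every bit random: ciphertext reveals nothing
const weakPad = [0, 1, 0, 1]; // suppose the first pad bit is always 0

console.log(xorPad(plaintext, goodPad));                     // [1, 1, 0, 1]
console.log(xorPad(plaintext, weakPad)[0] === plaintext[0]); // true: first bit not encrypted
```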
When EntroBeam users need secure entropy with cryptographically unpredictable probability, they must accumulate secure entropy over multiple transactions. As it accumulates, the unpredictability becomes exponentially more robust. Even if a user does not directly generate transactions, referring to the revealed secure entropy of other future transactions has a similar effect.