Blog Profile / The Endeavour


URL: http://www.johndcook.com/blog/
Filed Under: Academics
Posts on Regator: 1410
Posts / Week: 4.7
Archived Since: April 26, 2011

Blog Post Archive

Probability of secure hash collisions

A hash function maps arbitrarily long input strings to fixed-length outputs. For example, SHA-256 maps its input to a string of 256 bits. A cryptographically secure hash function h is a one-way function, i.e. given a message m it’s easy to compute h(m) but it’s not practical to go the other way, to learn anything about m from h(m). Secure hash functions are useful for […]
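
For a rough feel of the numbers involved (this sketch is mine, not code from the post), Python's hashlib produces the 256-bit digest, and the standard birthday-bound approximation shows how unlikely a collision is even after an enormous number of hashes:

    import hashlib
    from math import expm1

    # SHA-256 maps any input to a 256-bit digest.
    digest = hashlib.sha256(b"hello world").hexdigest()
    print(len(digest) * 4)   # 256 bits (64 hex characters)

    # Birthday-bound approximation: with k random hashes drawn from a space of
    # N = 2**256 values, P(at least one collision) ~ 1 - exp(-k*(k-1) / (2*N)).
    def collision_probability(k, bits=256):
        n = 2 ** bits
        return -expm1(-k * (k - 1) / (2 * n))   # = 1 - exp(...), accurate for tiny values

    print(collision_probability(2 ** 80))   # ~ 6e-30, still negligible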

Three proofs that 2017 is prime

Aaron Meurer asked on Twitter whether there’s a proof that 2017 is prime that would fit into 140 characters. My first reply was this: sqrt(2017) < 45. 2017 not divisible by 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, or 43. Ergo prime. I’m not sure that’s what he had […]
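
The tweet's argument is easy to check mechanically; here is a small trial-division sketch of my own, not from the post:

    from math import isqrt

    n = 2017
    # Any composite n has a prime factor no larger than sqrt(n),
    # and isqrt(2017) = 44, so checking primes up to 43 suffices.
    small_primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43]
    print(all(n % p != 0 for p in small_primes))   # True, so 2017 is prime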

Cover time of a graph: cliques, chains, and lollipops

Cover time. The cover time of a graph is the expected time it takes a simple random walk on the graph to visit every node. A simple random walk starts at some node, then at each step chooses with equal probability one of the adjacent nodes. The cover time is defined to be the maximum […]
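
As an illustration (mine, not code from the post), one can estimate the expected cover time from a fixed starting node by simulating the walk; the cover time proper then takes the maximum of this quantity over starting nodes. The toy graph below is a 5-node chain chosen just for the example:

    import random

    def expected_cover_time(adj, start, trials=2000):
        """Estimate the expected number of steps for a simple random walk
        started at `start` to visit every node of the graph `adj`."""
        n = len(adj)
        total = 0
        for _ in range(trials):
            visited = {start}
            node = start
            steps = 0
            while len(visited) < n:
                node = random.choice(adj[node])   # move to a uniformly chosen neighbor
                visited.add(node)
                steps += 1
            total += steps
        return total / trials

    # A 5-node chain (path graph) as a toy example; cover time would take the
    # maximum of this estimate over all starting nodes.
    chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    print(expected_cover_time(chain, start=0))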

Normal hazard continued fraction

The hazard function of a probability distribution is the instantaneous probability density of an event given that it hasn’t happened yet. This works out to be the ratio of the PDF (probability density function) to the CCDF (complementary cumulative distribution function). For the standard normal distribution, the hazard function is φ(x) / (1 − Φ(x)) and has a surprisingly simple […]
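
As a quick numerical check (my own sketch, not from the post), the hazard can be computed directly from scipy's PDF and survival function:

    from scipy.stats import norm

    def normal_hazard(x):
        # hazard = PDF / CCDF; norm.sf is the survival function 1 - CDF,
        # i.e. the complementary CDF of the standard normal.
        return norm.pdf(x) / norm.sf(x)

    print(normal_hazard(0.0))   # phi(0) / 0.5 = 2*phi(0) ~ 0.7979
    print(normal_hazard(3.0))   # grows roughly like x for large x; here ~ 3.28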

Category Theory and Facebook

From Drew Armstrong’s notes on adjoint functors: Once upon a time, my opinion of category theory was the same as my opinion of Facebook: if I ignore it for long enough, hopefully it will go away. It is now my educated opinion that category theory will not go away, and in fact the language of […]

Infinite primes via Fibonacci numbers

A well-known result about Fibonacci numbers says gcd(F_m, F_n) = F_gcd(m, n). In words, the greatest common divisor of the mth and nth Fibonacci numbers is the gth Fibonacci number, where g is the greatest common divisor of m and n. You can find a proof here. M. Wunderlich used this identity to create a short, clever proof that there are infinitely many […]
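
The identity is easy to confirm numerically; a short sketch of my own, not from the post:

    from math import gcd

    def fib(n):
        # Iterative Fibonacci with F_0 = 0, F_1 = 1, F_2 = 1, ...
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # Check gcd(F_m, F_n) == F_gcd(m, n) for a range of indices.
    ok = all(gcd(fib(m), fib(n)) == fib(gcd(m, n))
             for m in range(1, 30) for n in range(1, 30))
    print(ok)   # True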

Safe primes, Sylow theorems, and Cryptography

A logarithm is the inverse of an exponential, and so we can generalize the idea of a logarithm wherever we can generalize the idea of an exponential. In particular, we can raise elements to exponents in a discrete group, and that leads to the definition of a discrete logarithm. Diffie-Hellman public key cryptography is based […]
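
As a toy illustration of discrete exponentiation (not the post's code, and with a modulus far too small for real use), the basic Diffie-Hellman exchange looks like this:

    # Toy Diffie-Hellman over the multiplicative group mod a small prime.
    # Real implementations use primes with thousands of bits; 23 is only
    # for illustration, and the exponents here are arbitrary choices.
    p, g = 23, 5          # public modulus and generator
    a, b = 6, 15          # private exponents chosen by the two parties

    A = pow(g, a, p)      # one side publishes g^a mod p
    B = pow(g, b, p)      # the other publishes g^b mod p

    # Each side raises the other's public value to its own private exponent.
    shared_a = pow(B, a, p)
    shared_b = pow(A, b, p)
    print(shared_a == shared_b)   # True: both recover g^(ab) mod p

Recovering a or b from the public values would require computing a discrete logarithm, which is exactly the problem assumed to be hard.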

Automated theorem proving

When I first heard of automated theorem proving, I imagined computers being programmed to search for mathematical theorems interesting to a wide audience. Maybe that’s what a few of the pioneers in the area had in mind too, but that’s not how things developed. The biggest uses for automated theorem proving have been highly specialized […]

Monthly highlights

If you enjoy reading the articles here, you might like a monthly review of the most popular posts. I send out a newsletter at the end of each month. I’ve sent out around 20 so far. They all have two parts: a review of the most popular posts of the month, and a few words […]

Changing names

I’ve just started reading Laurus, an English translation of a contemporary Russian novel. The book opens with this paragraph. He had four names at various times. A person’s life is heterogeneous, so this could be seen as an advantage. Life’s parts sometimes have little in common, so little that it might appear that various people […]

Bernoulli numbers, Riemann zeta, and strange sums

In the previous post, we looked at sums of the first n consecutive powers, i.e. sums of the form 1^p + 2^p + ⋯ + n^p, where p was a positive integer. Here we look at what happens when we let p be a negative integer and we let n go to infinity. We’ll learn more about Bernoulli numbers and we’ll see what […]
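
As a concrete instance of the negative-power case (my own example, not from the post), taking p = −2 and letting n grow gives partial sums that approach ζ(2) = π²/6:

    from math import pi

    # Partial sums of k^(-2) approach the Riemann zeta value zeta(2) = pi^2 / 6.
    n = 1_000_000
    partial = sum(k ** -2 for k in range(1, n + 1))
    print(partial, pi ** 2 / 6)   # the tail is about 1/n, so they agree to ~6 digits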

Sums of consecutive powers

There’s a well-known formula for the sum of the first n positive integers: 1 + 2 + 3 + … + n = n(n + 1) / 2. There’s also a formula for the sum of the first n squares: 1^2 + 2^2 + 3^2 + … + n^2 = n(n + 1)(2n + 1) / 6 […]
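
Both closed forms are easy to verify numerically; a quick check of my own:

    # Check the closed forms for the first n integers and the first n squares.
    for n in range(1, 50):
        assert sum(range(1, n + 1)) == n * (n + 1) // 2
        assert sum(k * k for k in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
    print("formulas check out for n = 1..49")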

Rapidly mixing random walks on graphs

Random walks mix faster on some graphs than on others. Rapid mixing is one of the ways to characterize expander graphs. By rapid mixing we mean that a random walk approaches its limiting (stationary) probability distribution quickly relative to random walks on other graphs. Here’s a graph that supports rapid mixing. Pick a prime p and label nodes 0, 1, 2, 3, […]
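
The specific construction is cut off above, but the general idea of mixing can be illustrated with a sketch of my own (not the post's code): push an initial distribution through the walk's transition matrix and count how many steps it takes to get close to the stationary distribution. A complete graph mixes far faster than a cycle of the same size:

    import numpy as np

    def mixing_steps(adj, tol=1e-6):
        """Steps until the walk's distribution is within `tol` (total variation)
        of the stationary distribution, starting from node 0."""
        n = len(adj)
        # Transition matrix of the simple random walk: equal probability per neighbor.
        P = np.zeros((n, n))
        for i, nbrs in adj.items():
            P[i, nbrs] = 1.0 / len(nbrs)
        # Stationary distribution is proportional to node degrees.
        degrees = np.array([len(adj[i]) for i in range(n)])
        stationary = degrees / degrees.sum()
        dist = np.zeros(n)
        dist[0] = 1.0
        steps = 0
        while 0.5 * np.abs(dist - stationary).sum() > tol:
            dist = dist @ P
            steps += 1
        return steps

    # Complete graph vs. cycle on 11 nodes (odd length keeps the walk aperiodic).
    complete = {i: [j for j in range(11) if j != i] for i in range(11)}
    cycle = {i: [(i - 1) % 11, (i + 1) % 11] for i in range(11)}
    print(mixing_steps(complete), mixing_steps(cycle))   # a handful of steps vs. hundreds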

Top three posts of 2016

These were the most popular posts this year:
Random number generator seed mistakes
Unusual proof that there are infinitely many primes
Literate programming: presenting code in human order

Branch cuts and Common Lisp

“Nearly everything is really interesting if you go into it deeply enough.” — Richard Feynman If you thumb through Guy Steele’s book Common Lisp: The Language, 2nd Edition, you might be surprised how much space is devoted to defining familiar functions: square root, log, arcsine, etc. He gives some history of how these functions were […]
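
To get a feel for why those definitions need so much care (this example is mine and uses Python's cmath rather than Common Lisp), the principal square root and logarithm jump discontinuously across their branch cut on the negative real axis:

    import cmath

    # The principal square root has a branch cut along the negative real axis:
    # values just above and just below the cut differ in sign.
    print(cmath.sqrt(complex(-4.0, 1e-12)))    # approximately  2j
    print(cmath.sqrt(complex(-4.0, -1e-12)))   # approximately -2j

    # The principal log has the same cut: its imaginary part jumps from pi to -pi.
    print(cmath.log(complex(-1.0, 1e-12)).imag)    # ~  pi
    print(cmath.log(complex(-1.0, -1e-12)).imag)   # ~ -pi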

Subjectivity in statistics

Andrew Gelman on subjectivity in statistics: Bayesian methods are often characterized as “subjective” because the user must choose a prior distribution, that is, a mathematical expression of prior information. The prior distribution requires information and user input, that’s for sure, but I don’t see this as being any more “subjective” than other aspects of a […]

Most mathematical problem statement

Every so often college students will ask me for advice about going into applied math. I tell them that the first step in an application, and often the hardest step, is formulating a problem, not solving it. People don’t approach you with mathematical problems per se, but with problems that can be turned into mathematical problems. Nobody is going […]
