I like this definition of knowledge as information adapted to a purpose. For the most part it fits really well with my current understanding of what knowledge is, but I don't know how it applies to pure mathematics. The concepts/theorems of pure math are definitely knowledge, but it's difficult to nail down what their purpose is.

Math is at least mostly connected to the real world. It was developed to solve real world problems (purposes) such as counting things, tracking things (do I have the same number as this morning? how many more or fewer?), or trading with a multiple (e.g. 3 of these per 1 of those). Math helps with problems like knowing if you have enough food for the winter, predicting that in advance, or predicting eclipses. Math has lots of connections to physics and other science. In general, more abstract math builds on earlier stuff to solve more problems that came up (e.g. complex numbers were developed in response to a problem, not made up for no reason). Do you have math examples that you can't connect to any purposes?

edit: not happy with what I wrote, deleting it.

Some context to help direct your replies: I am doing my PhD in mathematical physics, so math is my area of expertise.

There are *a lot* of concepts in pure math where I have a difficult time connecting them to concrete purposes. The simplest one I can think of is this: What is the purpose to which the concept of *prime numbers* is adapted?

I'm sure that natural numbers served a useful purpose for e.g. ancient merchants, but I'm skeptical that *prime numbers* was actually a useful concept for anything practical until much more recent times. The stuff on the history of prime numbers article on Wikipedia does not seem to contradict my skepticism. I don't understand the motivations of the ancient Greeks who came up with primes, so instead I'll ask:

How did I, LMF, ever come to care about primes? At some point when I was a kid I learned the definition of primes in class, and I don't remember what motivation the teacher gave, but I didn't care about them. I think my interest in primes started a few years later when I struggled but failed to find a *pattern* that the prime numbers follow (in the language I'd now use, I couldn't find an algorithm that was more efficient than just brute-force checking each odd number one by one to see if it was prime). I was extremely intrigued by this (though I'd be hard-pressed to explain *why* I found it so intriguing).
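That brute-force approach can be sketched in a few lines. This is a hypothetical Python illustration (the function name is mine, not from the discussion):

```python
def is_prime(n):
    """Brute-force primality check by trial division: try every
    candidate divisor d with d * d <= n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Checking numbers one by one is the "no better algorithm" situation
# described above: the primes that turn up show no obvious pattern.
primes = [n for n in range(2, 30) if is_prime(n)]
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

(Checking only odd candidates, as described above, roughly halves the work but doesn't change the brute-force character.)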

The confusing part here is that my knowledge of prime numbers is *prior to* the problem that I needed the concept of primes in order to solve (the question of how primes are distributed). Applying your definition of knowledge to this leaves me in a kind of confusing circularity: my knowledge of primes is adapted to... help me formulate questions about how primes are distributed / help me understand the partial solutions that people have found to those questions.

edit: changed last paragraph a bit for clarity

I will stop editing it, sorry!

The basic purpose of primes is to understand and use numbers better. E.g. prime factorization is helpful to understand for people who care about arithmetic.

I do think primes are useful and practical. But also, people can adapt information to a purpose which is not very useful or practical. People don't actually do things totally arbitrarily or randomly, but they do do things to satisfy intellectual curiosity, or try to understand something, without knowing what use it will be. Information can be adapted to purposes like that and be knowledge.

Some people learn about primes initially with purposes like "pass this test" or "satisfy my teacher" or "learn whatever is in this book on the assumption that the authors knew what they were doing" without understanding how primes are useful. Those purposes are not suitable for creating knowledge of primes initially, but can work some when knowledge of primes is already available. It's generally better if people do know the purpose of what they're learning, though, rather than only having this sort of meta-purpose.

Prime numbers are really useful and should be explicitly taught as part of basic math, before algebra.

Being able to use & understand prime factorization is helpful with basic arithmetic (multiplication & division). And understanding and being able to use prime numbers helps a lot with understanding divisibility rules and using them efficiently. Primes, prime factorizations, and divisibility rules all help *a lot* with working with basic fractions (e.g. simplification and arithmetic with fractions), which helps with understanding decimals, ratios, and percentages, which helps with understanding statistics and a lot of real-world math. Being able to do prime factorizations & cancellations is also really helpful with algebra. And mental math in general would be a lot easier if people understood prime numbers and prime factorizations better. When I have recommended that people go back and learn math better, *this* (prime numbers & prime factorization) is the kind of stuff I think they should be focussing on.
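As a concrete illustration of how prime factorization helps with simplifying fractions, here is a minimal Python sketch (the helper names are my own invention, not from the discussion):

```python
from collections import Counter

def prime_factors(n):
    """Prime factorization by trial division, e.g. 48 -> {2: 4, 3: 1}."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def simplify(num, den):
    """Cancel the shared prime factors of numerator and denominator."""
    shared = prime_factors(num) & prime_factors(den)  # min of each exponent
    for p, k in shared.items():
        num //= p ** k
        den //= p ** k
    return num, den

print(simplify(48, 36))  # (4, 3): both share 2*2*3 = 12
```

This is the same cancellation people do mentally when reducing fractions; spotting the shared primes quickly is the skill being advocated above.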

Prime numbers are also useful for basic counting problems that would have existed before modern math: it is useful to know that there are some quantities that you are **not** going to be able to break up into equally sized groups or arrange into rectangles (except groups of one or rectangles by one). It is useful to know that there are plenty of ways to break up 48 potatoes equally, but **no** ways to break up 47 potatoes equally (without cutting some of the potatoes). Or that you can arrange 48 people into a rectangle, but you **cannot** arrange 47. (Arranging people into rectangles is currently used for things like desks in a classroom, and in the past was used for arranging military troops.) These are useful things that people could have noticed just trying to solve real-life problems.
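The rectangle observation can be checked directly. A small sketch (function name assumed, not from the text):

```python
def rectangles(n):
    """All ways to arrange n items into an a-by-b rectangle (a <= b)."""
    return [(a, n // a) for a in range(1, int(n ** 0.5) + 1) if n % a == 0]

# 48 has many rectangular arrangements; 47, being prime, has only
# the trivial single-file row.
print(rectangles(48))  # [(1, 48), (2, 24), (3, 16), (4, 12), (6, 8)]
print(rectangles(47))  # [(1, 47)]
```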

You're right. I don't know how I was able to blank out the fact that primes are essential for all these basic algorithms of arithmetic.

[...] people can adapt information to a purpose which is not very useful or practical. People don't actually do things totally arbitrarily or randomly, but they do do things to satisfy intellectual curiosity, or try to understand something, without knowing what use it will be. Information can be adapted to purposes like that and be knowledge.

Some people learn about primes initially with purposes like "pass this test" or "satisfy my teacher" or "learn whatever is in this book on the assumption that the authors knew what they were doing" without understanding how primes are useful. Those purposes are not suitable for creating knowledge of primes initially, but can work some when knowledge of primes is already available. It's generally better if people do know the purpose of what they're learning, though, rather than only having this sort of meta-purpose.

My takeaway from this is that you meant something by "purpose" which is broader than what I initially had in mind. You seem to mean that information adapted to any purpose at all – no matter how arbitrary the purpose is – is knowledge.

Under that definition, it's kind of obvious that all math – no matter how abstract/useless – is knowledge, since none of it is created completely arbitrarily. All math was always at least a bit interesting to *someone*, or counting towards *someone's* tenure, or whatever, and so it serves a purpose. My original issue seems to be resolved.

If I generate a random string of digits, is that still something you'd call knowledge, since it's adapted to the purpose of satisfying my desire to see a random string of digits?

Tired, writing quickly:

Generally we only consider things knowledge if they were actually created by an evolutionary process, not if they hypothetically could have been. Otherwise any information would qualify as "knowledge", which would defeat the purpose of the word.

This also relates to one of DD's takes on knowledge in FoR: large multiversal structures. Those happen when there is a causal mechanism favoring (selecting) particular information over other information (adapting the information to be good at something). There's some force causing similarities across multiple universes (which is that the information fits the niche well).

Knowledge can also be thought of as an opinionated concept related to what purposes we consider worthwhile or useful. What do we respect enough to acknowledge as knowledge? Even things we have less respect for – perhaps some pure math or the details of some dumb hobby – are still, from our perspective, in the top 0.00001% of all logically possible purposes.

This is related to IGC stuff. For any idea at all, you can define some goal it succeeds at. We have to pick what goals we care about. Even goals we view as pretty bad, if anyone has ever actually proposed them, tend to be far far far better than random.

As our situation/context changes, our opinions of goals, purposes, and knowledge change. But despite some change, they're fairly consistent over time. There are strong patterns rather than wild swings.

This was actually where I was maybe going to go next depending on how you answered.

Randomly generated digits of a bit string, if they are generated by measuring a bunch of qubits, will look completely different in every branch of the multiverse, and so they are very different from genes (which, as DD explains in FoR, will look the same in branches which are moderately close, unlike junk DNA). The random strings don't have the same physical property that knowledge is supposed to have.

I don't think your comment about hypothetically vs actually created information applies here, because all of the possible bit strings are *actually* created in the multiverse in the procedure I have in mind (for concreteness, I'm imagining a really simple procedure like: prepare (|0> + |1>)/sqrt(2), measure it in the {|0>, |1>} basis, repeat N times). And all these bit strings supposedly count as knowledge, because they are all adapted to the purpose of satisfying my desire to see a random string of N digits.
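For concreteness, that procedure could be mocked up classically like this. A sketch only: it simulates the measurement statistics with a pseudorandom generator rather than computing anything quantum, and the function name is mine:

```python
import random

def measure_random_bits(n, seed=None):
    """Simulate preparing (|0> + |1>)/sqrt(2) and measuring in the
    {|0>, |1>} basis n times: each outcome is 0 or 1 with probability
    1/2, so each run (each 'branch') sees a different length-n string."""
    rng = random.Random(seed)
    return "".join(str(rng.randint(0, 1)) for _ in range(n))

# Two runs almost certainly differ bit by bit, yet both share the
# same non-fungible structure: the string length N.
a = measure_random_bits(9)
b = measure_random_bits(9)
print(a, b, len(a) == len(b) == 9)
```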

What's breaking down in my reasoning?

I think the answer is that since any randomly generated N bit string would suffice just as well as any other, maybe it's not the randomly-generated bit string itself that should be thought of as the knowledge in this situation, but rather some less fungible auxiliary stuff, like my knowledge of the string length N or my knowledge of the procedure that created it.

This sounds interesting & related to CF stuff, but what does the acronym IGC stand for? I've never heard of it.

IGC stands for idea, goal, context. An idea should be judged by whether it fulfils its goal in the context in which it is being considered:

I think you missed my second qualifier (I bolded it above).

Also, to count as knowledge generally something has to be pretty well adapted (judgment call for how much is enough). And the adaptation has to be for the evolutionary process that created it, not some other actual or hypothetical evolutionary process.

What if nature creates many stones? They are not evolutionarily created. But then I find a stone of just the right size and shape and use it as a part in a machine. There's knowledge in that stone selection. And once I place it in the machine, there's a multiversal structure there (many similar stones in other universes). The knowledge to find and choose that particular stone (or one adequately similar) was evolutionarily created in my mind. The knowledge to design and create the machine was evolutionarily created in some people's minds.

But the stone itself was transplanted into a niche that it's adapted to. The process of finding the stone involved selection – I look at a bunch of stones and find the right one – but not replication or variation (though there are many variant stones).

So that's a kinda interesting case – finding things in nature that fit a niche well. It usually only works for fairly simple niches. I don't think it's a major problem. It looks like some details that could be worked out a bit more clearly.

The important thing is that evolutionarily-created knowledge is controlling what happens. Similar to if I think of a design for a piece of wood and then carve it. The carving itself is not an evolutionary process but the ideas that guide my hand were evolved.

This is a summary:

Knowledge is information adapted to a purpose. All knowledge is created by evolutionary processes: by information being copied and undergoing variation and selection. If you get stuck on mistakes, you will stop making progress in the areas where you're stuck, so you have to correct errors. There should be no limits on what ideas and criticisms you're willing to consider, because otherwise you can miss a problem or a solution to a problem. Non-CF epistemology says an idea can have degrees of goodness or badness, and that scale is usually analog. In CF, an idea has to be categorised as either solving or failing to solve a problem, so assessment of ideas is actually digital, not analog.

Okay, I think I get it now. I actually remember seeing your second qualifier, but at the time I think I thought that the procedure I described must count as an evolutionary process, since the bit string was created for a purpose by thinking, reasoning human beings. But really, the object that people actually wanted to create – the object actually created by an evolutionary process – wasn't "011011101" specifically but rather "a random bit string of length 9 drawn from a uniform distribution" or something, and the latter object is what has the multiversal structure (e.g. in every branch it will have 9 digits).