Non-linear weightings, like linear weightings, are a type of conversion between dimensions. The point/meaning of weighting coefficients or functions is to change dimensions. You’re re-scaling the values (mapping them to new values) because the values/numbers in the before and after dimensions are different. (Weightings of 1 are rare, though you could easily get them on purpose with specific choices of units/dimensions.)
Also, if you multiply speed by distance, neither of those is a weighting, and you won’t get a duration as a result (you get units of m²/s, not seconds).
(Note: I don’t disagree with your conclusion here (units sometimes cancel and it’s NBD), but I think the way I analyse this is interesting/useful, so I think it’s worth posting.)
I don’t think you end up with a unitless number – it’s just that there’s context that gets dropped. That said, provided physicists don’t use the numbers in the wrong way, calculations won’t go wrong. So WRT physics, it’s not a big issue to drop those units if they’re being used in conventional ways.
The reason I think that the number isn’t unitless is b/c there’s hidden context that isn’t expressed in the units you mentioned. That hidden context means that using the values in certain ways doesn’t work (garbage out). But if you make the context explicit then it’s clear which values can and can’t be compared, summed, etc.
From Multi-Factor Decision Making Math:
Factors can often be combined when they’re in the same dimension. A dimension is a type, kind or category of factor, such as length, weight, time, cuteness or color. When two factors are the same kind of thing, you can combine them with addition.
If the units (including context) are identical then you can sum values meaningfully. If the units are different (including context), then a sum doesn’t make sense. There’s no generic and universal way to convert between different units (except when the units are just scaled versions of one another, e.g., inches ↔ meters): conversion requires a suitable goal and context (note, this is Elliot’s idea).
Based on this idea about summing (and when it works), wavelength and distance travelled have different units – even if both are measured in meters, summing them doesn’t make sense, so they can’t be the same units.
If we note that there’s some extra context, e.g., we say a wavelength value has units “wavelength meters”[1] and distance travelled has units “distance meters”, then it’s clear we can’t sum them. But also, we can reconsider the original maths.
I’ll show the process in 3 columns: an expression, its units, and the operation done or a comment. In each step we’ll do one algebraic operation or some simplification. Starting with 2π:
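Here’s a minimal sketch of the kind of steps I mean, using the angular wavenumber expression 2π/λ and reading the wavelength as wavelength-meters per cycle (that reading is my own assumption, for illustration):

| Expression | Units | Operation / comment |
|---|---|---|
| $2\pi/\lambda$ | $\dfrac{\text{radians}/\text{cycle}}{\text{wavelength meters}/\text{cycle}}$ | treat $2\pi$ as a conversion constant: radians per cycle |
| $2\pi/\lambda$ | $\dfrac{\text{radians}}{\text{wavelength meters}}$ | the cycles cancel |

So the result comes out in radians per wavelength-meter; drop the 2π and you’d be left with cycles per wavelength-meter instead.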
Without 2π the units would end up as “cycles”, which makes sense. 2π is actually a constant of conversion between cycles or ‘rotations’ and a specific angle of rotation. (Note: this constant of conversion is not universal! It only works in the context of Euclidean space. Or, at least, if you were to use it in non-Euclidean space you’d need a way to define rotation that I don’t know about.)
Also, I googled ‘wave number’ and the wiki page does include the extra units:
In the physical sciences, the wavenumber (also wave number or repetency[1]) is the spatial frequency of a wave, measured in cycles per unit distance (ordinary wavenumber) or radians per unit distance (angular wavenumber).
I didn’t use that to do the above maths, though – the technique is one I used to resolve a problem I was having with something at work back in October last year. It’s nice that I get the same units, though.
I put spaces between these to not confuse the idea with the product of units (e.g., watt-hours). But you could look at “wavelength-meters” as the product of distance and a ~magic unit of “wavelength” that only has a single valid value: 1. That ~magic unit is like a context unit: you introduce it to mark a value as context laden, and it needs to be ‘canceled out’ to use that value outside of the relevant context, probably via a constant of conversion. ↩︎
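As a rough illustration of the context-unit idea, here’s a minimal Python sketch (the `Quantity` class and unit names are hypothetical, just for illustration): it refuses to sum values whose units (including context tags) differ, and cancels matching units on division.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """A number plus context-tagged units, e.g. (('cycle', -1), ('wavelength_meter', 1))."""
    value: float
    units: tuple  # sorted (unit_name, power) pairs

    @staticmethod
    def of(value, **units):
        return Quantity(value, tuple(sorted(units.items())))

    def __add__(self, other):
        # Summing only makes sense when the units, including context, are identical.
        if self.units != other.units:
            raise ValueError(f"can't sum {self.units} with {other.units}")
        return Quantity(self.value + other.value, self.units)

    def __truediv__(self, other):
        # Division subtracts unit powers; matching units (with their context tags) cancel.
        powers = Counter(dict(self.units))
        for name, power in other.units:
            powers[name] -= power
        remaining = tuple(sorted((n, p) for n, p in powers.items() if p != 0))
        return Quantity(self.value / other.value, remaining)

# Hypothetical example: angular wavenumber as (radians per cycle) / (wavelength meters per cycle).
two_pi = Quantity.of(6.283185307179586, radian=1, cycle=-1)
wavelength = Quantity.of(0.5, wavelength_meter=1, cycle=-1)
print((two_pi / wavelength).units)  # (('radian', 1), ('wavelength_meter', -1))

distance = Quantity.of(3.0, distance_meter=1)
# wavelength + distance  # would raise ValueError: the context tags differ, so the sum is meaningless
```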
If the right answer is obvious to you when you pay conscious attention, but not automatic/intuitive while writing, that means you need more intentional practice/automatization/mastery. You aren’t done learning it.
I added this paragraph to the article (second to last):
A major reason for developing this approach is that it applies to rational debate. When people try to add up strong and weak arguments and evidence, they are adding linearly weighted factors. Each argument or piece of evidence is a factor. Evidence and arguments are judged for strength, which means assigning them weights even if no numbers are used. Criticisms have negative weights. However, arguments are in different dimensions so they can’t be combined by adding. While studying philosophy, I rejected this approach to debate, in favor of evaluating ideas as refuted or non-refuted, and evaluating arguments as decisive or indecisive. Translating from epistemology to math provided another way to understand and explain the issue.
This book was first published in 1922 by a Harvard physics professor. It says:
The growing use of the methods of dimensional analysis in technical physics, as well as the importance of the method in theoretical investigations, makes it desirable that every physicist should have this method of analysis at his command. There is, however, nowhere a systematic exposition of the principles of the method. Perhaps the reason for this lack is the feeling that the subject is so simple that any formal presentation is superfluous. There do, nevertheless, exist important misconceptions as to the fundamental character of the method and the details of its use. These misconceptions are so widespread, and have so profoundly influenced the character of many speculations, as I shall try to show by many illustrative examples, that I have thought an attempt to remove the misconceptions well worth the effort.
I think he’s politely saying: y’all are getting basic stuff wrong, which is foundational to the work you do, and no one even tried to work out and write down the details in a systematic way, you overconfident fools.
I’m looking at this book because dimensional analysis seems relevant to the stuff about dimensions I say in my CF math article. Also, looking at older books and papers is one of my search strategies.
I like the book so far. I’m on page 9 of the PDF and wondering if @alanforr @lmf (physicists) or others know this stuff about the dimensional analysis of a pendulum, and if so where they learned it (including specifics like book cites). You can start reading on page 7 (beginning of chapter 1) and continue to the end of the pendulum example (2.5 pages of total reading).
I finished chapter 1. I like the book. The questions the book raises at the end of ch. 1 look good to me. The author died in 1961. This was his first book of many. He won a Nobel Prize (for the physics of high pressures) and various other awards.
we have treated the dimensional formula as if it expressed operations actually performed on physical entities, as if we took a certain number of feet and divided them by a certain number of seconds. Of course, we actually do nothing of the sort. It is meaningless to talk of dividing a length by a time; what we actually do is to operate with numbers which are the measure of these quantities. We may, however, use this shorthand method of statement, if we like, with great advantage in treating problems of this sort, but we must not think that we are therefore actually operating with the physical things in any other than a symbolical way.[1]
He goes on to mention this more exact formulation of velocity:
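In my own notation (an illustrative rendering of the idea, not his exact formulation):

$$\text{velocity} = \frac{n_{\text{ft}}}{n_{\text{sec}}}\,\frac{\text{ft}}{\text{sec}}$$

where $n_{\text{ft}}$ and $n_{\text{sec}}$ are the numbers produced by the measuring operations; the division is performed on those numbers, and the $\text{ft}/\text{sec}$ part is a symbolic reminder of the rules of operation used in measuring.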
He prefers, though, to focus on how numbers and measurements differ from actual physical quantities, and he wants us to remember the rules of operation used in measuring.
I reviewed some modern material on dimensional analysis:
I think these are all terrible. I would never have understood dimensional analysis from all these sources combined without a lot of work. The 1922 book is far better and made it easy for me to understand the initial concepts. These people really want to get lost in math details and teach how to do calculations following specific methods, without dealing with concepts about what’s going on. The 1922 book wasn’t heavy on explanation, but at least it had enough for me to understand, and it wasn’t focused on teaching confused students to do rote calculations. And the 1922 book has a much harder task: it’s communicating 100 years into the future. Some terminology has changed – having to figure out some terminology stuff was noticeable with the old book but not the modern sources. The old book also used some math I don’t know how to do, and in various ways seemed aimed at more advanced readers (the newer stuff seems aimed at undergrads, I think), yet it was still easier to learn from.
I definitely know this stuff. I can’t provide a citation because I learned it before I started reading books or papers about physics. I learned about units and unit conversions in a 7th grade physics class, and I think I learned about dimensional analysis in a 1st-year university physics class (though it was never taught to me systematically; I think I figured it out on my own after seeing it used in a few examples).
I like reading Tofallis. He actually explains stuff including fairly basic/foundational conceptual issues instead of only talking about local details while assuming some undiscussed premises. And he’s unusually readable and clear, like a good teacher/explainer, for a modern academic.
Since weightings now appear as powers (exponents) of the criteria, their interpretation is now in terms of percentages: A weighting of W means that a 1% improvement in a performance indicator will lead to a W% change in the score.
This is mathematically false. How did it get through peer review!?
The weights are interpreted in percentage terms and can allow for diminishing returns. If the score on an attribute is given a weight (exponent) of w, this means that a 1% change in the attribute gives a w% change to the overall score. See Appendix A for a derivation.
Again, this is false.
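A quick check of the claim (my own working, assuming the score is a product of the attributes raised to their weights, which is how these multiplicative models are set up): holding the other attributes fixed,

$$S = c\,x^{W} \quad\Rightarrow\quad S' = c\,(1.01x)^{W} = 1.01^{W}\,S,$$

so a 1% improvement in the attribute changes the score by $1.01^{W} - 1$, which is only approximately $W\%$. For $W = 5$ that’s $1.01^{5} - 1 \approx 5.10\%$, and the discrepancy grows quickly for larger percentage changes in the attribute.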
Tofallis knows it’s false. In Appendix A I see that his math agrees with my math. Then he writes (my emphasis):
This is *approximately* a change of W1%.
Why didn’t he say “approximately” in the main text of either article? He just stated it as a truth, not a simplified approximation. And he compared it with an explanation for additive weights that is exact, not approximate.
Is it a good approximation? Not really.
E.g. a 10% improvement and a weight of 5 gives “approximately” a 50% improvement but actually a 61% improvement. (1.1^5 = 1.61051). The first example I considered was 10% and a weight of 2, which results in 20% vs. 21%.
It works better with small (near 1 or less) positive exponents and small percentage increases. (I don’t know if that’s a complete statement about the limitations; I didn’t fully mathematically work things out, so e.g. there could be a problem with tiny numbers that I don’t know about.) In the appendix, Tofallis indirectly acknowledges that it works poorly with large percentage changes:
To see the effect of large percentage changes, it is best not to rely on the approximation
Interestingly, he continues:
especially because the exact calculation is so easy to accomplish
If the exact calculation is so easy, then why present a dumbed-down approximation with no warning that it’s an approximation, nor any warning about the circumstances in which it’s a poor approximation?
Let’s look at examples to see what happens with larger percentage increases. Consider a 20% increase with a 0.05 weight. The approximation says that will result in a 1% increase overall, but it’s actually 0.92% (I rounded). That might look close enough because the numbers are small, but the true value is about 8% below what the approximation claims – actually worse, relatively, than the 20% vs 21% example from earlier (where the approximate value is 4.8% lower than the true value). A 90% increase with a 0.05 weight gives a 4.5% increase according to the approximation, but actually gives a 3.26% increase, which is 27.6% below what the approximation claims. So even with very small exponents, a 20% increase made the approximation mediocre and a 90% increase made it poor.
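Here’s a small Python check of these figures (just my own verification script, nothing from the papers), comparing the exact multiplicative change $(1+p)^w - 1$ with the $w \cdot p$ approximation:

```python
# Compare the exact effect of a fractional change p in an attribute raised to
# weight w with the "w% per 1%" approximation: exact = (1+p)**w - 1, approx = w*p.
cases = [
    (0.10, 2),     # 10% increase, weight 2    -> 21% exact vs 20% approx
    (0.10, 5),     # 10% increase, weight 5    -> ~61% exact vs 50% approx
    (0.20, 0.05),  # 20% increase, weight 0.05 -> ~0.92% exact vs 1% approx
    (0.90, 0.05),  # 90% increase, weight 0.05 -> ~3.26% exact vs 4.5% approx
]

for p, w in cases:
    exact = (1 + p) ** w - 1
    approx = w * p
    print(f"p={p:.0%}, w={w}: exact={exact:.2%}, approx={approx:.2%}")
```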
Tofallis’ general recommendation is to use weights/exponents below 1, and in particular to have all the weights add up to 1. So if you have 5 factors, the average exponent would be 0.2.
I had trouble believing that Tofallis would make such a bad/basic math error (and that it’d get through peer review). He seemed to have more than enough math skill to get this right. I noticed the issue in the earlier paper, which has no appendix with details. I double and triple checked what I was doing. I consulted a textbook on this topic by another author. I asked a friend. Then I checked his later paper to see if he said something similar there, and I found that I hadn’t made a mistake.
And I was right that he wouldn’t make this math error. I doubted that “he just got the math wrong” could be the explanation, and it wasn’t. He made a judgment, communication and/or integrity error but did know the math. It’s also concerning that it got through peer review, especially for the earlier paper with no appendix. And why wouldn’t he just add the one word “approximately” in the main text of either paper?
My only mistake seems to have been having overly-high expectations about quality and correctness.
I understand wanting to offer simplified or dumbed-down versions of ideas to make them more accessible to lay people. But you ought to say when you’re doing that and differentiate it from when you’re giving real or exact statements, especially in academic papers. Academic papers are meant to be read by experts, so they’re the most appropriate place to write the non-dumbed-down version. Using an approximation in a pamphlet for laymen, without warning that it’s an approximation, would be more reasonable than doing the same thing in an academic paper. Tofallis confused me because I thought I was reading expert-level material, not e.g. a sloppy blog post aimed at the general public.