Monday, December 27, 2010

Helicopter flight school

Excerpt from a toy helicopter's directions: "In if the flight does not have the impetus to change the operating lever, but the helicopter still in airborne spun, by now might adjust in your hand on remote control's vernier adjustment knob, balanced does not spin until the helicopter."

My translation: "Fly the helicopter to a steady height with the Throttle stick. Then adjust the Vernier adjustment knob so that the helicopter does not spin left or right."

Note that the diagram shows the Throttle stick and the Vernier adjustment knob.

Note to companies: I charge $100 per hour for translations. That's real cheap. This entire manual could be fixed in under an hour. Then your company won't be so embarrassed, and more of your customers will be able to figure out how to use their Holiday Gifts. Lowered childhood frustration might lead to world peace.

Sunday, December 12, 2010

Neil Armstrong sent Robert Krulwich, of Radiolab on National Public Radio fame, a letter. And he talks a bit about what he and Buzz did on the Moon in '69. And, I agree with everything he said. I'd love to hear Neil make an appearance on Radiolab.

And, i haven't changed my opinion about what NASA is currently doing. Going to the Moon was dangerous. And worth it. As far as the space program goes, i'm not risk averse. While one should do everything one can think of to limit risk, some risk is required to make progress.

The Shuttle accident rate is 2%. That's the worst of any vehicle that has carried anyone into space. In my opinion, that's unwarranted risk. Use more reliable rockets. Further, the Shuttle program promised that it would be cheaper, through reuse of systems, than other vehicles. But it hasn't delivered on the cost promise. It's easily twice as expensive. Don't get me wrong - the Shuttle is amazing. While the solid rocket booster issue that the Challenger disaster exposed seems to have been solved, the wing problem that the Columbia disaster exposed was not solved. The program should have been terminated. I understand that the long-canceled National AeroSpace Plane did solve the wing problem. But that's not something that could be retrofitted into the Shuttle.

So, the recent success that NASA has had with SpaceX is encouraging news. I was hoping that when the Constellation program was canceled, something like this would emerge.

Friday, December 10, 2010


Over at The Seanachai, Patrick has a new book - Unkillable. I've listened to the first seven chapters. If you loved How To Succeed In Evil, you'll love Unkillable. If you haven't yet succeeded in evil, you should do yourself a favor now. Then, do me a favor. Send him a few bucks. Maybe he'll write some more.

Wednesday, December 01, 2010

Be Unreasonable

The reasonable man adapts himself to his environment.
The unreasonable man adapts his environment to himself.
Therefore, all progress is due to unreasonable men.

In this case, NASA is taking the reasonable approach. If the astronauts experience bone loss, radiation damage, etc., then change their diet, give them drugs, or whatever to get them through it.

The unreasonable approach is to notice that astronauts do not have millions of years of evolution in near zero G conditions. So, provide them with artificial gravity. This can be achieved by spinning. Radiation shielding is expensive, but possible. 10 meters of water all around should do it. Since this weighs as much as a battleship, one should consider electrostatic and magnetic deflection as lighter and therefore cheaper alternatives. And one should test these technologies. Half a trillion dollars has been spent on the International Space Station; some of that could have gone toward testing them.

Going to the Moon was unreasonable. Audacious aeronautical research, like the X-15, was unreasonable. That's what NASA was created for. NASA is being reasonable when being unreasonable is called for.

Wednesday, November 17, 2010

Podcast file name convention

What's in a name? If you're publishing a podcast, the filename used makes a difference. File names must be unique, otherwise your subscribers will overwrite older shows that they may not have listened to yet. The file name should also distinguish the show from other subscriptions the subscriber may have, for much the same reason. So, don't use 'episode'. Someone else may do that. Better is something derived from the show's name. All In The Mind becomes aim, for example. And this show-unique bit should come first in the name, so the subscriber, using a sorted list, can see all your shows together.

After the show title part of the name should come a numeric part that makes each episode unique. One way to do that is to number them. The first show could be '1', and the second could be '2'. But, such numbers should have leading zeros so that in a sorted list, the lexicographic sort order also is a sequence sort order. So, use '01', at least, so that the first nine episodes sort properly with the tenth. It may be arrogance or optimism to use '001' or '0001' for your first show, suggesting that the expectation is over a hundred or thousand shows. But there are plenty of shows out there with more than one hundred episodes already. And some monthly shows are getting close.

Another way to do this is to encode the date. Some shows use a 2 digit year, 2 digit month, and 2 digit day. For example, 100823 is 2010, August 23rd. This has the advantage that the lexicographic sorting is also the date order. And the sequence won't break for another 90 years. Of course, a 4 digit year such as 20100823 also sorts properly, and won't break sort order for nearly eight thousand years. Either is fine. But i find that the four digit year is easier for a human to read. That is, while one hopes that it's a date, and one hopes that it's in the form of year, month, day for sorting, one must still guess that 10 is the year and not October. Dates come in all the permutations of order, in different cultures. IMO, the military gets it right with YYYYMMDD. While 2010Aug23 may be easier for a human to read, it fails the sort order requirement, and is therefore unacceptable.
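Both conventions can be sanity-checked in a few lines. This sketch (using the hypothetical aim prefix from above) shows why unpadded episode numbers break sort order, while zero-padded numbers and YYYYMMDD dates keep lexicographic order and sequence order the same:

```python
# Unpadded episode numbers: lexicographic sort breaks sequence order,
# because 'aim10' and 'aim11' sort before 'aim2'.
unpadded = [f"aim{n}.mp3" for n in (1, 2, 9, 10, 11)]
print(sorted(unpadded))

# Zero-padded numbers are fixed width, so they sort in sequence order.
padded = [f"aim{n:03d}.mp3" for n in (1, 2, 9, 10, 11)]
assert sorted(padded) == padded

# YYYYMMDD dates also sort correctly: every field is fixed width
# and ordered most-significant first (year, then month, then day).
dated = ["aim20100823.mp3", "aim20100830.mp3", "aim20101101.mp3"]
assert sorted(dated) == dated
```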

Underscores are optional in filenames. aim100823.mp3 is OK. But they must be consistent. You can't use aim100823.mp3 one week and aim_100830.mp3 the next week. This error breaks the sorting order. Best to name these things with a script. Does it matter if the file name is the recording date or publish date? Probably not. There should be a publishing script that gets all the RSS details right. If there is, it could get the file name right as one of those details.

Speaking of underscores, are there characters that should not go into file names? Yes. No colons (:), no slashes (/), and no backslashes (\), because these characters are directory separators on various operating systems. But really, one should stick to alphanumerics, hyphen (-), and underscore (_). In command line environments, parentheses (()), dots (.), quotes ("'`), brackets (<{[]}>), pipes (|) and so on (~!@#$%^&*+=;?) can all be interpreted, making it difficult (but almost never impossible) to cope. Simply avoid these.
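The whitelist approach above is easy to script. Here's a minimal sketch (the function name is mine, not a standard library call): keep alphanumerics, hyphen, and underscore, preserve the final dot before the extension, and replace everything else with an underscore.

```python
import re

def safe_filename(name):
    """Keep alphanumerics, hyphen, underscore, and the extension dot;
    replace every other character with an underscore."""
    stem, dot, ext = name.rpartition(".")
    if not dot:  # no extension at all
        return re.sub(r"[^A-Za-z0-9_-]", "_", name)
    stem = re.sub(r"[^A-Za-z0-9_-]", "_", stem)
    ext = re.sub(r"[^A-Za-z0-9_-]", "_", ext)
    return stem + "." + ext

print(safe_filename("aim 100823 (final).mp3"))  # aim_100823__final_.mp3
print(safe_filename("aim100823.mp3"))           # unchanged
```

A publishing script that already builds the RSS entry could call something like this on the file name as one of those details it gets right.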

After the sequence number or date, a very brief description of the show may be included. This information can very easily be included in id3 tags within the file - and they should be there. But one or two words will often help the subscriber. Don't make it too long. Windows may have long filenames but DOS does not. And, like it or not, there are mp3 players out there that have 8.3 filenames. So long file names show up as micros~1.mp3 on these players.

What can be included as text within mp3 files? Some of the shows i listen to have complete transcripts. It's incredible.

What if you got it wrong? Should one rename old shows? Absolutely not. Once you've made an error, changing an old filename risks having thousands of podcasting software suites download these old shows again.

This podcast filename convention should also work for any other RSS published material, such as a blog. However, for blogs, the file name length does not have to observe the 8.3 convention. Short file names have mostly gone the way of the dinosaurs. You do use Rock Ridge extensions on your CDs, right?

Monday, November 08, 2010

Binocular spam

I got some spam by email recently.

The Optic 1050 binoculars, with up to 1000x magnification will allow you to see objects up to 35 miles away! The lightweight, rugged and durable Optic 1050 binoculars are only $19.98 and just $7.95 P&H. These super lightweight binoculars easily adjust to your eyes, are shock resistant with shatterproof lenses and feature wide-angle viewing.

Plus, with each pair of binoculars you order, youll also receive the bonus Pocket Spyscope. Its less than 6 inches long with 24x magnification. Thats a $50 value, yours FREE! You just pay $4.95 to cover shipping and handling. The Pocket Spyscope is lightweight and portable. You can see objects up to 7 miles away and it doubles as a magnifying glass for close up use.

National TV Bargains Power Binoculars...

Ignore the missing apostrophe in youll. Presumably, when they say 1050 binoculars, they're talking about 10x50 binoculars. Read this as "ten by fifty". But it's 10x - ten times larger than your eyes normally see. The big end of the binoculars is 50 millimeters. That's about two inches.

They say up to 1000x magnification. No. They're 10 times magnification. 1000 times magnification with binoculars that have a 2 inch big end would produce a useless, grainy image. For 1000x magnification, the big end would have to be about 20 inches across. And that would be pushing it. I'd really want the big end to be 40 inches across for 1000x. That's how optics work. Naturally, such an instrument would be more expensive, and less portable.
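The arithmetic behind "that's how optics work" is the exit pupil: aperture divided by magnification gives the diameter of the beam of light that reaches your eye. A common rule of thumb caps useful magnification at roughly 2x per millimeter of aperture. A sketch (the function names and the rule-of-thumb constant are mine, and the constant is a rough guide, not a law):

```python
def exit_pupil_mm(aperture_mm, magnification):
    """Diameter of the light beam leaving the eyepiece, in millimeters."""
    return aperture_mm / magnification

def max_useful_magnification(aperture_mm, per_mm=2.0):
    """Rule of thumb: about 2x magnification per millimeter of aperture."""
    return aperture_mm * per_mm

# 10x50 binoculars: a bright 5 mm exit pupil, well matched to the eye.
print(exit_pupil_mm(50, 10))    # 5.0

# The same 50 mm aperture at the claimed 1000x: a dim 0.05 mm beam.
print(exit_pupil_mm(50, 1000))  # 0.05

# Aperture needed for 1000x by the rule of thumb: 500 mm,
# which is about 20 inches.
print(1000 / 2.0 / 25.4)
```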

will allow you to see objects up to 35 miles away! How disappointing. I've seen the Andromeda Galaxy without optical aid. That's about two and a half million light years away. One light year is about 6 trillion miles. So Andromeda is more than 15 quintillion miles away. If i can only see 35 miles with these binoculars, but can see 15 quintillion miles without them, there must be something wrong with them.

shatterproof lenses. They must come from Krypton, like Superman.

feature wide-angle viewing. I suppose anything is relative. They likely offer wider-angle viewing than my telescope at low power. But they're not very wide compared to the naked eye. But notice that they don't say what they're relative to. Nor do they give any measure of how wide an angle you can see with them. Normally, binoculars are sold with such a reference. It might be five degrees or seven degrees.

Pocket Spyscope. Its less than 6 inches long with 24x magnification. The length isn't that important. If this Spyscope has a diameter of less than an inch, then the views through it will be grainy. For 24x magnification, it will likely have to be two inches in diameter to be able to produce any kind of decent image. If it's six inches long, it's unlikely to be even an inch in diameter. This is not a $50 value. It's so mis-designed that $5 is too much.

You can see objects up to 7 miles away. So, it's not even as good as the binoculars? Well, that's the truth.

doubles as a magnifying glass for close up use. I think this says something about the way the optics work. I don't think it means anything good for use with distant objects.

I have a pair of 10x50 binoculars. They're really good. However, if i want binoculars for hand held use (no tripod), then I find that 8x, or eight times magnification, is about as much as I can handle. With more magnification, the view isn't as steady. But I have a nice tripod. And my 10x50 binoculars work really well on it. I also have a small scope - much more than 6 inches long, and 2 inches in diameter. And it can magnify to something like 24x. But it is totally useless without a tripod. So, my best guess is that this 24x Spyscope is useless.

$19.98 + $7.95 + $4.95 = $32.88. That's the amount you can save by reading this post.

Everything breaks

An odd series of failures seems to have happened all at once. Seals on the oil pump for one car, the water pump, and now it seems, the thermostat of the other car, a printer, and the CPU fan for a computer. The phone. Having redundant hardware isn't enough. Well, nothing lasts forever. But the CPU fan is only maybe five months old. I had expected better. The worst failure, though, is a long lasting cold. Well, there's no fever. But it's been far too long.

This blog has been idle for quite a bit. Lots going on. There are about a dozen posts in the queue. Just have to get time to upload them. Also, i've pretty much abandoned my livejournal blog. The ads that livejournal has added are invasive. I don't like my own site. So, my astronomy stuff will start getting posted here as well in the near future.

Thursday, August 05, 2010

English as a first language

I admit it. I hated English in school. Well, perhaps hate is too strong a word. It was more that there were other subjects that i preferred. I don't hate chocolate ice cream. I simply prefer vanilla. But i was talking to someone who majored in English in college the other day, and it got me thinking.

So why was English less than my favorite subject? It's probably the way grammar and spelling are taught. The way grammar is often taught is to explain the rules of grammar, with heavy emphasis on this is a noun and all sentences have at least a noun and a verb. It largely ignores the simple fact that English has no rules at all that aren't regularly broken. By the time the average American child is ten years old, they've learned 10,000 words of vocabulary, but also 10,000 rules of grammar. This is as large a vocabulary as adults master for most other languages. And really, come on. A rule for every word is pretty much the same as an exception for every rule. That's like saying that there are no rules at all. And American children don't learn these things by having to remember either the rules or the names of the rules. They do it by usage. And usage is how the language is defined. Really. Dictionaries are written by examining published material. That's why congress critters and others can routinely verb words. (The word verb is, of course, a noun). And, of course, learning just exactly what words are nouns, verbs, adverbs, adjectives, pronouns, prepositions, and so on, and what, exactly, the rules are for these things is roughly irrelevant to the English speaking child.

Here's an example. As a former child, i remember these, and swore i'd never torture my kids with them, should i ever have any. As a parent, i enjoy torturing my kids with them. There are few other perks to being a parent, so one must enjoy the opportunities available. Johnny and me went to the park. The correct phrasing is Johnny and I went to the park. Please don't explain what rule this breaks. The correct way to teach this is as follows. One must drop the Johnny and bit and see if it still sounds right. So, Me went to the Park doesn't scan as well as I went to the Park. And yet, when i was a kid, i didn't respond, even to the full grammar lesson with what amounts to technojargon words and rules, by saying whatever in a sing-song voice, as i routinely get now, even with my methods. So, there are at least two things to note here. The education problem is much harder these days, now that we don't demand so much respect from kids. And, it's likely that we're attempting to teach kids the full grammar rules before they're mentally equipped to deal with them.

I did hear a Johnny and me reference on the radio recently. It was correct. Which is to say, it passed my test. I can't remember when (if ever) i've encountered anyone doing it right. Maybe the simpler rule is to simply always use Johnny and I. It may not always be right, but it may be right so much more often as to not make any difference. You heard it first here.

Spelling is even easier to teach. Tell the kids to write lots of stuff, and demand that they use spelling checkers. Have them turn off the word processor feature that corrects words as you type. This feature doesn't teach anything. Have the word processor mark anything it doesn't understand, and allow it to offer suggestions. This is how i learned to spell. Learning vocabulary words by rote was irritating. It also wasn't nearly as effective at expanding vocabulary as simply reading challenging books.

This correct-as-you-type feature is pretty evil. IMCO (In My Considered Opinion), the feature should be removed from all software. It often "corrects" words that have been typed correctly. That is, it often introduces errors. I often use a specialized vocabulary, for computers, engineering, or some other business. I end up having to type the same word correctly half a dozen times in the course of editing. I often have to come up with unique tricks to get what i need, such as writing some longer word, and deleting bits of it to get the right spelling. I even have to correct broken capitalizations.

I went to an Engineering school. And i work with engineers. With few exceptions, these people are brilliant, well rounded people. But they often have poor English skills. They leave out articles, make references using pronouns without clearly establishing what the references are for, and so on. And many of these people only know one language. It's often far worse when English isn't their first language. From a strictly business point of view, one might say, who cares? The work is getting done, right? And these people are brilliant, right? But poor documentation, especially unclear and ambiguous documentation can lead to needless rework and worse. Worse is documentation that misleads. One could call it anti-documentation. You're actually better off without anything. It has negative value.

Now, when i went to school for engineering, there was considerable opportunity to write. There were lab reports and other assignments. There were requirements to do work outside of your chosen major. Usually, these were not graded on English grammar or spelling, however. There were projects that one needed to do in groups. Since i had advanced computer editing, formatting and typing skills, i generally typed up the group projects. And, it was somewhat of a surprise to me that my ability to compose prose was generally superior to that of other students. After all, these kids were all off-the-wall brilliant. I completely ignore students for whom English was not their first language here. I'm talking about native English speakers. And yet, as far as i recall, the school did not offer an English course of any kind. But one really needs a firm understanding of English to achieve technical excellence. And technical excellence was clearly the primary goal of the school. So, while it isn't my opinion that engineering schools need to have an English department capable of granting an English degree, they should offer English courses as an option. Otherwise, all students are stuck with whatever they happened to learn in high school.

How large is your vocabulary? It's worse than impractical to try to list all the words you know and count them. Humans are terrible at listing things, especially when the list is long, such as when there are more than about three items. And yet, it turns out that there is a fairly quick and simple way to find out. Get a dictionary that brags about the number of words it contains. Many college dictionaries boast half a million words. Get a blank piece of paper and a pencil. A pen will do. Make two columns, Right and Wrong. Open the dictionary to a random page. Jam your finger down the left edge without looking. Then slowly move your finger down until a new word is exposed. Examine the word. Do you know what it means? Can you use it in a sentence? Try it. Then read the definition. If you were right, make a mark in the Right column. Otherwise, make a mark in the Wrong column. Do this exactly thirty times. Now the math. Take the total number of words in the dictionary, multiply it by the number of words you got right and divide that by thirty. That is, multiply the total number by the fraction you got right. That's an estimate of the number of words you know. If you don't believe it, you can always use a larger sample than thirty, or repeat the experiment, or change dictionaries.
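The estimate above is just a sampled fraction scaled up to the whole dictionary. A minimal sketch (the function name and numbers are illustrative, not from any real dictionary):

```python
def vocabulary_estimate(dictionary_words, right, samples=30):
    """Scale the fraction of sampled words known up to the whole dictionary."""
    return dictionary_words * right / samples

# A half-million-word college dictionary, with 24 of 30 sampled
# words known, suggests a vocabulary of about 400,000 words.
print(vocabulary_estimate(500_000, 24))  # 400000.0
```

A larger sample than thirty just means replacing the samples argument, and it shrinks the error of the estimate.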

So, my spelling is now good enough that i frequently argue with my spelling checker. But there is a word that i always spell wrong. Everyone has one. For me, the word is wrong, which i always spell w r o n g.

Sunday, August 01, 2010

Practical hyperthreading

I recently read some inconsistent material concerning Intel's CPUs. It had to do with hyperthreading.

The idea behind hyperthreading is that you have more than one set of CPU registers (including hidden registers) so that it is very quick for the CPU to switch from one process to another. In fact, it can be done between every instruction. That is, if there are two processes currently runnable, the CPU can execute instructions from alternating processes.

There are a couple of reasons one might want to do this. One might want to have separate state for operating system kernel instructions and user level instructions. One might have separate state so that interrupt routines would run quickly. No need to save the state, just use registers dedicated to running interrupt service routines. This was done for Digital's PDP-10 computer back in the 1970s.

But there is a problem for modern machines that's a little different. It's the memory wall. Eventually, the bottleneck for von Neumann architecture CPUs is the communication of data between the CPU and main memory. One can delay this bottleneck for a while, and this has been done, but it will eventually come up and smack you in the face. And these days, CPUs are much faster than main memory. So, while the CPU may execute instructions at three billion per second, main memory takes at least several nanoseconds to respond to a request. OK, so most memory references only go as far as the on-CPU chip cache. These requests may be satisfied in as little as a single cycle. But to go all the way out to main memory can take what seems like forever. At least, forever if you're a fast CPU. It can be over a hundred cycles.

So the idea is, have more than one process running. When an instruction is executed that fetches data from main memory, the CPU might have to wait for the result before the next instruction is executed. However, if the CPU switches to an entirely different process, then that process's next instruction can't be waiting for this result. There's a better chance that it can proceed without waiting at all. If the CPU is idle less often, then it is doing more useful work per unit time. It's faster. For Intel, this is usually about 20% faster. That is, you get an extra 20% more cycles per unit time.

However. Let's say you have two processes. Each process will get about half of the available cycles. If the total is 120%, then each process will run at about 60% of the original speed. Yes, that's right, the total throughput is higher, but a single processor will run a single process faster. But consider that without hyperthreading, a single processor will run two processes at 50% each, rather than 60% each. Still, people worried about speed often want their single process to run as fast as possible. Can one get the best of both worlds?

Yes. Often, there is inherent parallelism available within an application. The operating system supports something called threads. An application can have two threads running at the same time. Both threads have access to all the memory of the application. And in a hyperthreading environment, both threads can contribute to the performance of the single application. Therefore, a single application can get the speed boost offered by hyperthreading. It requires more effort on the part of the programmer. The result is usually more complicated, and can be more difficult to debug (get right). But it can, and often is, done.

When hyperthreading became available, i fired up a benchmark, timed a run of one copy. Then timed a run of two copies at the same time. Then, i went into the BIOS, turned on hyperthreading, and reran both tests. With hyperthreading turned off, the results were 100% speed with one process, and 50% for each with two simultaneous processes. With hyperthreading turned on, the results were 100% speed with one process and 60% for each with two processes. There was no additional gain to be had in total bandwidth for more than two processes with my simple benchmark. The benchmarks perform a fixed amount of work. So by 50%, i mean that this work load takes twice as long (wall clock) to execute. By 60% speed, i mean that this work load takes 1.66 times as long measured by the wall clock (1 / 1.66 = 0.60). Very simple.
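The bookkeeping in that benchmark is simple enough to write down. A sketch of the conversion between per-process speed and wall-clock time (the function names are mine, and the 120% total is the figure measured above, not a guarantee):

```python
def per_process_speed(total_throughput, nprocs):
    """Each of nprocs equal processes gets an equal share of throughput,
    expressed as a fraction of one dedicated CPU."""
    return total_throughput / nprocs

def wall_clock_factor(speed):
    """How much longer a fixed workload takes at the given speed."""
    return 1.0 / speed

# Hyperthreading off: 100% total, so two processes run at 50% each,
# and the fixed workload takes twice as long.
print(per_process_speed(1.00, 2))   # 0.5
print(wall_clock_factor(0.50))      # 2.0

# Hyperthreading on: about 120% total, so two processes run at 60%
# each, and the workload takes about 1.67x as long by the wall clock.
print(per_process_speed(1.20, 2))   # 0.6
print(round(wall_clock_factor(0.60), 2))
```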

But i started this article talking about confusion i've seen. One of the things i've heard stated is that if you turn on hyperthreading, your speed is immediately cut in half. This may be due to the way that the tools report your performance. We pretty much have an idea what 100% means if there is a single CPU with no hyperthreading. 100% use means that the CPU is totally consumed. But with hyperthreading turned on, some tools report 100% if two threads are executing the entire time. And if only one process is running, these tools often report 50%. However, in this latter case, the CPU isn't idle. It's getting 83% (100 / 120) as much work done as is possible with this CPU. But this is exactly as much total work as the CPU would have done if hyperthreading were turned off.

And it gets worse. Some tools report 200% instead of 100%, as above. That's on the same running system. With some tools reporting 100% and others reporting 200%, it's a royal pain to compare results. And those reporting up to 100% often end up reporting 102% from time to time.

And it gets worse still. The operating system reports the CPU time that a process uses based on the runnable time and the number of processes that were runnable at the time. But the performance during that time can vary by 20%. So, CPU time doesn't measure total cycles delivered very accurately or repeatably. Well, with demand paging, this has been true for a while anyway. Page replacement interrupts, TLB replacement interrupts, and even I/O interrupts all take their toll on accounting. So, IMO, it's not that much of a loss.

My new 4 core AMD Phenom II does not appear to support hyperthreading. I wish it did. But it still suffers a bit from poor accounting. My operating system tools sometimes report up to 400% CPU utilization, and sometimes report up to 100% CPU utilization.

And yet, there is a downside to hyperthreading. It has to do with priority. I often run a very long running background process, with the priority set as poor as possible. And Unix (or Linux) will typically give this process nearly 100% of the CPU when nothing else is running. And if there is a normal priority process running, then the background process will get 5%, with the foreground process getting 95%.

But with hyperthreading turned on, two processes may run at full speed because the operating system treats threads as CPUs. Since there are two CPUs (there aren't, really), the operating system lets them both run at full speed. That means that each process gets 60% of the single CPU speed. That's much less than 95% for the normal priority process, and much more than 5% for the low priority process. And there are times when i'm impatient enough to want that extra 35%. In fact, it's been a while, but at one time i ran a Unix variant that would give the normal priority process 100%, with no cycles at all going to the low priority process. I miss those days. They were nice, or is it not so nice?

Wednesday, April 07, 2010

Of Course

I just completed a three day course at work. That's one point of view. The course used to be four days or five days, but now it's three days. Same material. It takes time for the material to sink in. And the way that this might happen is if i read the 500+ page textbook from cover to cover, and do all the exercises in the workbook. And soon. But, in principle, i know it all now.

Early in the course, there was an example where there was a pin that was measured to be a certain diameter, and also, there was a hole that was measured to be the same diameter. The question was, will the pin fit into the hole? My response was to ask how big the hammer was allowed to be.

One can, indeed, jam that much information into one's brain in such a short period of time. However, one must be prepared to use a really big hammer. But it's not free. I now feel like i've been hit by a truck, and then run over by a steam roller. We have chosen to describe the result of the incident as survival.

Wednesday, February 17, 2010

iPod Shuffle - Not Dead Yet

My original iPod Shuffle classic isn't dead yet. Fortunately, the ring came off while i was at my desk, and i quickly saw where it landed. I doubt i could get another ring. It appears that the ring was simply glued on in such a way that the bubble buttons underneath can be pressed with it. So, all i should have to do is glue it back on. There are a couple of protrusions on the back side of the ring that allow the ring to be aligned properly so the symbols match up.

I fully expect the device to die eventually. After all, there's a non-replaceable rechargeable battery inside. It won't last forever. And, the cover cap for the USB port doesn't have any kind of permanent connection, so one expects to lose this eventually. It came with a second cap that has a cord attached. I never use it. It's around somewhere (i never throw anything out), and could be used as a spare, i suppose.

I have three other mp3 players. They all have more memory than the iPod. From twice as much to four times as much. And they all have more features, for example, a display so you can see what track you're listening to. But i end up using the iPod the most. And that's because i use it while commuting to work. You see, since it has no display, it is designed to be operated without looking at it. The other units are way more complex, and really can't be conveniently operated without looking at them. I've seen an mp3 player without a display since my iPod, but by comparison, it sucks. It's much cheaper, and for example, suffers from having too short a battery life per charge. So my iPod may be irreplaceable.

There are bugs and limitations in the iPod Shuffle. It's not perfect. If you miss the last bit of a track, you can't simply "rewind" into it. And it can be quite painful to "fast forward" from the beginning, if the track is long. And this happens often. I listen to mostly talk shows, and have missed the final punch line of an hour show due to traffic, etc. Another issue is that, from time to time, it fails to turn itself off. So, the battery is dead, even though it was just charged to full. And speaking of charging to full, all you get is a green, yellow, or red light for a battery indication. This isn't really enough information to tell whether it's fully charged. And, finally, sometimes when you pause the playback, it will turn itself off, but when it starts back up, it has forgotten where it was. I've run into a couple other bugs, not worth mentioning. Essentially no software is perfect. But simpler devices tend to have fewer issues.