The audio of C25K is working after a late session last night. Listening to myself talking really makes me want to find someone else to record the audio track!
I thought I felt good enough to do some more work this morning, before going for a swim with Robi. I’m sitting down in the quiet kitchen. I’ve drunk my coffee, and I’ve had my breakfast, but the cogs are turning very slowly this morning.
I’m pretty pleased with how the code worked out. As you are jogging along, the C25K app talks to you, telling you what to do at each point (probably while you’ve also got music playing). It’s also going to have a simple on-screen display that will show you what you should be doing, and that will give you a quick précis of your progress with the run.
I’ve got a RunPlayer object that spools through the data for a run, firing audio samples at the appropriate points and allowing the run to be paused and resumed. I think it will be relatively easy to extend this object to also fire events out to external listeners. These events can drive updates of the in-run display.
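The listener idea could look something like this — a minimal Ruby sketch (the app itself is Objective-C, and all the names and the segment data here are hypothetical, just to show the shape of it):

```ruby
# Hypothetical sketch: a player spools through timed run segments and
# notifies registered listeners at each segment boundary, so the same
# events can drive both audio cues and an on-screen display.
class RunPlayer
  def initialize(segments)
    @segments = segments   # e.g. [["warm-up walk", 300], ["jog", 60], ...]
    @listeners = []
  end

  def add_listener(&block)
    @listeners << block
  end

  # Walk the run data, firing an event at the start of each segment.
  # A real player would schedule these against a clock and support
  # pause/resume; here we just iterate.
  def play
    elapsed = 0
    @segments.each do |name, duration|
      @listeners.each { |l| l.call(name, elapsed) }
      elapsed += duration
    end
    elapsed   # total run length in seconds
  end
end

player = RunPlayer.new([["warm-up walk", 300], ["jog", 60], ["walk", 90]])
events = []
player.add_listener { |name, at| events << [name, at] }
player.play
# events now holds each segment name with its start offset in seconds
```

The nice thing about this shape is that the audio playback and the display are just two listeners on the same stream of events, so neither needs to know about the other.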
The other side of the audio work I did is support for generating the data files that describe a run’s structure. I’ve got a simple scheme for naming the audio files so that they encode where in a run they occur. I’ve written a Ruby script that runs through all these files and builds up XML data describing how the runs should play out. I guess I could have done this in Objective-C, but it’s really very easy in Ruby.
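The script is roughly along these lines — a sketch only, since my actual naming scheme differs; the `w<week>d<day>_<offset>_<cue>` pattern below is just illustrative:

```ruby
# Hypothetical sketch: each audio filename encodes which run it belongs
# to and where in that run it fires; the script groups files by run and
# emits XML describing the playback schedule.
require 'rexml/document'

def build_run_xml(filenames)
  runs = Hash.new { |h, k| h[k] = [] }
  filenames.each do |f|
    # e.g. "w1d1_0090_jog.m4a" -> run "w1d1", fires at 90s, cue "jog"
    if f =~ /\A(w\dd\d)_(\d+)_(\w+)\./
      runs[$1] << { offset: $2.to_i, cue: $3, file: f }
    end
  end

  doc = REXML::Document.new
  doc.add_element('runs')
  runs.sort.each do |run_id, cues|
    run_el = doc.root.add_element('run', 'id' => run_id)
    cues.sort_by { |c| c[:offset] }.each do |c|
      run_el.add_element('cue', 'at' => c[:offset].to_s,
                                'name' => c[:cue], 'file' => c[:file])
    end
  end
  doc
end

xml = build_run_xml(['w1d1_0000_warmup.m4a', 'w1d1_0300_jog.m4a',
                     'w1d2_0000_warmup.m4a'])
```

Because the schedule lives entirely in the filenames, adding a new cue to a run is just a matter of dropping another audio file in the right place and re-running the script.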
An advantage of this approach is that I think it will make the application easy to localise if I get into that, and it will allow me to supply more than one commentary track, which would be a nice feature. I guess it might even be possible to have user-submitted audio tracks.