I’ve been working with multiple displays since at least 2004. Back in those days I had a 17″ PowerBook G4 with a PCMCIA card that provided a second external DVI output. It was slow, but it worked, and for the programming I was doing, the limited performance wasn’t an issue.

For the last few months, I’ve been using a Sabrent USB 2.0 adapter to provide a second video output for my mid-2011 MacBook Air, giving me two external displays plus the laptop’s own screen, for three total. The problem is that this MacBook Air doesn’t have USB 3.0, and USB 2.0 just doesn’t have enough throughput to drive a large display, so there was a lot of lag—too much to really be acceptable.

I’d been looking around for solutions. The problem is that all of the Thunderbolt docks can only really drive a single external display unless one of your monitors is itself a Thunderbolt display, which can work via Thunderbolt pass-through. But recently there have been more and more USB 3.0-based docks that support Mac OS X.

So I picked up a Kanex KTU10 Thunderbolt to eSATA Plus USB 3.0 adapter, and an Etekcity USB 3.0 dual monitor dock for my mid-2011 MacBook Air. The dock has two USB 3.0 ports, four USB 2.0 ports, and two display outputs (HDMI and DVI), and the Kanex adapter should in theory provide the USB 3.0 port the dock needs.

After a little dance with installing the latest DisplayLink driver (2.3 beta), they totally work.

So for about $200 (only a tad more than the cost of one of the single-display Thunderbolt docks), I’m running with three screens again, and the performance is perfectly acceptable for most things I will ever need to do.

Plus I’ve got gigabit Ethernet, more USB 3.0 ports, and an eSATA port which will be great for backing up my machine to an external drive.

Overall I’m very pleased.

Categories: Uncategorized

It may take some time for the DNS change to propagate, and there are certainly going to be some broken incoming links, but I just finished the bulk of the work to port my Radio UserLand site here.

All of the posts from jake.userland.com are in a new Jake’s Radio ‘Blog category, in addition to preserving their original categories (some of which overlap with ones that were already here).

That leaves just one site to port in before my entire blogging life will live here at jakesavin.com.

(Of course I have a bunch of other sites too, and I have yet to decide what to do with each of them.)

Categories: Blogging, Jake's Radio 'Blog, WordPress

We took the verticals that FounderDating Network Cofounder members (those members who have indicated that they are interested in finding cofounders) selected as markets they are interested in starting a company in, and compared the last six months with the same six months one year ago…

- via FounderDating

It comes as little surprise to me that the verticals that seem to remain pretty stable include commerce, small business, advertising, cloud services, and enterprise. To my mind, this is reflective of how our economy intersects with technology in a fairly general sense. Of course mobile is still big, and I believe most investment in mobile is driven by commerce (including advertising) and business needs, with cloud services serving a supporting role. It is interesting though that mobile startup investment seems to be reaching a plateau rather than growing or declining.

It’s also no surprise to me that the wearable and smart home verticals are on the rise, given the buzz around “Internet of Things”, health-data scenarios, and clean energy over the last few years. Interest in these verticals has existed for a long time, but investment is happening now for two reasons: maturing new technologies are finally enabling them, and our social norms are changing. It of course remains to be seen whether there will be a bubble in either wearables or smart home startups, but for the moment there’s a scramble to deliver new products and services in both spaces, and there’s a lot of room for growth over the next few years.

The consumer electronics rise is probably related in part to wearables and smart home, though it’s interesting to contemplate what might be happening if some portion of that rise is independent. (I’m not going to do that here though.)

To me, the most interesting stand-out in the FounderDating verticals report is an apparent decline in interest in doing startups in the data & analytics space.

Is Big Data investment waning?

I see more and more job listings these days, in all sorts of technology disciplines, that call for “a passion for big data” or “proven ability to analyze data for customer insights”. At least in part, [big] data analytics seems to be getting absorbed into the broader technology toolbox—more and more, “Big Data” is seen as a core competency, or from another point of view, just another part of the “cost of doing business.”

Simultaneously, the idea of Big Data driving markets in and of itself seems to be dwindling. And I think this is a good thing.

Data by itself is just data, even if it’s Big

I’ve felt for a few years now that there’s been an over-emphasis on data for its own sake, at least the way it’s been marketed so far: More data, more types of data, more sources of data, more users contributing data, etc.

There’s certainly been a huge rise in data warehousing and reporting capability across the many industries touched by high-tech. And many companies have made at times extravagant claims about how Big Data will revolutionize all aspects of your business (technology or otherwise).

It’s true that we can now store, search, and retrieve information with a capacity and speed that were unimaginable even two or three years ago. But for the most part, the availability and cost-effectiveness of data collection and reporting have not by themselves (so far) revolutionized our lives or our businesses, except in a few niches—web search and social networks being two of the most visible.

It’s the analysis, stupid!

Take Facebook and Twitter in the social space, Google in search, or 23andMe in consumer DNA analysis. For at least these verticals there’s also been a correspondingly large investment in data analysis—probably, in nearly all cases, a much larger investment.

We need to understand that good data analysis requires a lot of creativity, long-term investment in tools and algorithms, and an iterative development process—all of which is far from free. The data by itself is just bits on a disk somewhere.

Access to vast amounts of data has indeed been a fantastic aid that has driven broad, albeit often incremental improvements in decision making, product design, and operational efficiency. More rarely it’s enabled completely new product spaces, though without a real data analysis component, most of the new markets that have opened up have been related to data warehousing. The mere availability of lots of data has not so far been a panacea. And it may never be.

It’s certainly true that we take for granted today that we have comprehensive map data at our fingertips.

Ultimately though, the most interesting Big Data scenarios require that we aggregate and correlate vast data-sets in ways that ask specifically designed questions, and which report results that can be interpreted as effective, meaningful, actionable answers to those questions. (Remember Douglas Adams’ 42?)

And so far asking the right questions is still nearly completely in the domain of human beings.

Categories: Uncategorized

Hi all—here’s an update following my previous post asking for some WordPress advice: I pulled the trigger, and now all of the content from Jake.EditThisPage.com is ported over to JakeSavin.com. Amazingly it worked right the first time! When does that ever happen?

Most, if not all, of the links into the old site now redirect to the right place here. For the moment they’re temporary redirects, but after a bit more testing I’ll make them permanent so they’ll get picked up by search engines and the like. And contrary to my initial fears, the problem with pages living at multiple URLs was easily resolved by redirecting via mod_rewrite rules in my .htaccess file.
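For the curious, the rules take roughly this shape (a sketch only; my real rules map each old URL pattern to its new WordPress permalink):

    RewriteEngine On
    # Only touch requests that arrive via the old host name
    RewriteCond %{HTTP_HOST} ^jake\.editthispage\.com$ [NC]
    # R=302 (temporary) while testing; switch to R=301 (permanent)
    # once verified, so search engines pick up the move
    RewriteRule ^(.*)$ http://jakesavin.com/$1 [R=302,L]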

404: Just say no!

The content of that site spans the period from December 22, 1999 to March 11, 2003, and all of the posts from that site are in their own category to make them easy to find.

Next I’m going to write some code to export my Radio UserLand site to WXR (WordPress eXtended RSS) format, so I can merge that content in too. I know a lot more now than I did when I started this work with Manila, so it should be quite a bit easier. After that, a one-off exporter for my custom WebsiteFramework site, Jspace.org. That one goes way back to 1997!

Categories: Blogging, Jake's Brainpan, WordPress


I have a plan for something I want to do with my site, and could use some advice from experienced WordPress people.

I have two legacy sites that I want to merge into my current WordPress site. Content in this site already consists of the imported content from one of these sites, plus posts I’ve made since switching over.

The other site I want to merge in has conflicting post IDs. In order to redirect old URLs to their new homes in WordPress, I need a way to resolve this conflict in a predictable fashion that can be addressed with mod_rewrite (or something comparably simple).

So I decided to apply an offset of 10,000 as I export the content from that site, so:

  • ID 15 becomes ID 10015.
  • ID 1243 becomes ID 11243.

This guarantees that there will be no conflict with any IDs in the current site.

And since the old IDs can be transformed relatively easily with regex into the new ones, I can create some mod_rewrite rules that are conditional on requests coming to the old host name, which redirect from the old URLs to the new ones. (I’ve already tested this, and it appears to work.)
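Here’s a sketch of the shape of those rules. The old-site URL pattern here is hypothetical, and since mod_rewrite can’t do arithmetic, there is one rule per ID length; each rule fakes the +10,000 offset by prepending the right digits:

    RewriteEngine On
    # e.g. old /stories/15 redirects to new post ID 10015
    RewriteCond %{HTTP_HOST} ^old\.example\.com$ [NC]
    RewriteRule ^stories/(\d)$ http://jakesavin.com/?p=1000$1 [R=302,L]
    RewriteCond %{HTTP_HOST} ^old\.example\.com$ [NC]
    RewriteRule ^stories/(\d{2})$ http://jakesavin.com/?p=100$1 [R=302,L]
    RewriteCond %{HTTP_HOST} ^old\.example\.com$ [NC]
    RewriteRule ^stories/(\d{3})$ http://jakesavin.com/?p=10$1 [R=302,L]
    RewriteCond %{HTTP_HOST} ^old\.example\.com$ [NC]
    RewriteRule ^stories/(\d{4})$ http://jakesavin.com/?p=1$1 [R=302,L]
    # (each RewriteCond applies only to the RewriteRule that follows it)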

So basically what I want to know is this:

Is there some reason I should not do this?

Am I painting myself into a corner?

Will the jump from ID ~2000 to ID 10001 cause any issues?

Any gotchas (SEO or otherwise) with my next post after the import starting at roughly ID 12000?

Any comments in favor or against are much appreciated! :-)

Update: @octothorpe replies on Twitter, “@jsavin That should work, although having a lot of mod_rewrite can add serious latency. Also make the redirects 301s.” — I’m doin’ this thang…

Categories: Uncategorized

Dave Winer:

As Walter Isaacson points out, innovators need to be both humanitarians and scientists: we have to touch the human spirit, and be masters of the scientific method. In the bootstrap of blogging it was enormously important that I was both a writer and a programmer. We had to learn to write for this new medium, and we had to figure out how the software worked.

I was lucky in 1994 that I was completely free to explore, and that the world was ready to make this leap. So I began a trip that led to something wonderful, every bit as big as I thought it might be back then.

Read the whole thing.

Categories: Blogging


Alex King posted an interesting rebuttal of Santiago Valdarrama’s missive explaining why he’s building his own blog engine.

Taken together, these posts pretty much sum up the reasons why I went with self-hosted WordPress, rather than try to roll my own solution, or continue to lope along indefinitely with Manila.

A couple of Alex’s points in particular stuck out for me:

Santiago: There’s always a learning curve. Every platform is different, especially when you want to fine-tune your layout and deviate from the provided templates.

Alex: This one strikes me as a bit silly. There is a learning curve when building your own system too – especially if you haven’t written your layout/templating system yet.

Then:

Santiago: You’ll never get to experience the satisfaction of engaging in a conversation about how you developed your own platform from scratch.

Alex: … if what you want is engagement then joining a bountiful and vibrant community of developers is a much bigger opportunity than the potential for a conversation with another NIH hacker.

Santiago finished his post with:

It takes a few evenings of work to get it done. It’s that simple.

Honestly I doubt it. Although I’m an experienced web developer, if I were to attempt to roll my own solution from scratch, it would be a huge undertaking, fraught with many potentially fatal problems:

  • First I’d have to choose a programming language and platform, with very little in the way of criteria with which to make the right decision—at least not without doing a lot of research first.
  • I’d need to decide what features I really need and what I could do without.
  • I’d have to write (and debug) the code—probably a lot of code.
  • If I wanted to be able to use a native app to post to my blog, I’d have to implement a well-known API, with a dialect that the app understands. (Mo code, mo problems.)
  • I wouldn’t be able to take advantage of the vast universe of WordPress plugins: if I wanted a feature a plugin implemented, I’d have to write it myself. (Mo code, mo problems.)
  • And so on…

And after all that, I’d still have to find a way to export the content from my current site, and import it into the new one, which was something I was going to have to do anyway. :-(

Plus, as Alex hints at by pointing out the vibrancy of the WordPress community, I wouldn’t be able to leverage the experience to actually learn WordPress (and some PHP, and some optimization, and some Apache config, and…).

Update: Santiago has a follow-up post:

“I’d never ask someone to do this. Rolling your own engine means a lot of work, and unless you are really on the nerd side (like I am and Brent Simmons is), it will be a waste of your time.”

Update: More dialog on Twitter

P.S. In the end Brent decided to stick with the self-built engine he’s been using for years, and write an iOS app for himself to post to it remotely. Moral of this story: Stick with what you know?

Categories: Blogging, WordPress

This June, I was one of the lucky ones who’d won the lottery, and was able to attend WWDC in San Francisco. While I was at the conference, it was awesome to have the whole schedule at my fingertips via the WWDC app on my iPhone. With CocoaConf Seattle just around the corner, I found myself wishing there were a CocoaConf app. No such luck.

Then I remembered iCal feeds are a thing, so I went to check the CocoaConf website for a subscribable calendar feed for the Seattle event, but that also didn’t seem to exist.

So as a public service to my fellow nerds who are attending CocoaConf 2014 in Seattle, I created a public iCal feed using Google Calendar that you can subscribe to for the schedule of all the sessions, including the Thursday workshops. It should work on iOS devices, Google Calendar, Calendar.app on the Mac, BusyCal, and others. Here’s the link:

https://www.google.com/calendar/ical/f7oecob86640vd63qtmu57r92c%40group.calendar.google.com/public/basic.ics

If you’re looking at this post on your iPhone, iPad, or Mac you should be able to just click the link to subscribe to the calendar.

Hope to see you at the conference!

Categories: Uncategorized


Today is Dave Winer’s 20th anniversary of blogging, which began with this DaveNet piece in 1994. Dave wrote a great piece about the occasion here: 20 years of blogging. As I read it, a few things came to mind. Oddly the number 19 seems to be a theme…

Another life on another continent

Twenty years ago today, I had just turned 25. I was living in Amsterdam, and touring as the bassist with an indie rock band called Painting Over Picasso. We had just released our first album. My life has changed a lot since then—enough so that today it feels like the “me” in Amsterdam might have been a different person altogether.

I started reading Dave’s writing online and getting into programming with Frontier some time in 1995, while I was still living abroad. Dave’s writing and programming in Frontier are among the few threads in my life that cross the K-T boundary between my music and tech careers.

The band stayed together for a little over four years altogether. I had been an aspiring musician for about five years before that, and continued performing in public with various bands until 2006, though not professionally.

Reflecting on 20 years of… anything

In total my music career lasted about 19 years, only seven of which were really serious.

19 years is the same amount of time from when I first started programming in Frontier to now.

19 years is also the time between writing my very first programs on the Apple II at my middle school, and joining UserLand as a developer in 2000.

I’d be pretty hard pressed to think of any single thing I’ve done more or less continuously for 20 years.

Starting a second career

I returned to the United States in 1996, with no job, no real prospects, a music composition degree, and a strong sense that I belonged in the software industry. Though I had been an amateur programmer off-and-on since 1980 or so and had created large-ish projects of my own, I had zero real work experience in technology.

Through a college friend, I managed to bully my way into an entry-level software test engineer job at Sonic Solutions in Marin County, which brought me to the Bay Area for my first tech job. My first day of work was my birthday, October 1, 1996, just over 18 years ago. (Huzzah for testers!)

I met my friend Vance, who introduced me to Sonic, in the fall of 1987 at Reed College in Portland, OR—the same college that Steve Jobs famously dropped out of. (Not that this fact has anything at all to do with me.) So I’ve known Vance for 27 years.

The number of people outside of my family that I would call close friends for 20 years or more is… 4.

Doing anything or knowing anyone for more than 20 years is rare in my experience, but as it turns out my true friends have lasted longer than my career tracks to date.

And this comes as no surprise. ;-)

A few quotes from Dave’s piece

These passages in Dave’s piece today resonated with me:

… You should create stuff because you enjoy being creative, because you have the creative impulse. Not because you expect to be loved for it. #

That’s how I’ve always felt about my own creative work, whether the sculpture and painting I did in high school, making music, creating software, or writing (online or off).

It really is great to be admired for one’s accomplishments. But that’s never been the most important reason I’ve worked to create anything. People may admire someone or their work, but admiration doesn’t equal love. It’s an easy mistake to make, especially when you’re young.

This part about Aaron Swartz also struck a chord for me:

… I did him the honor he asked for, and treated him as a responsible person. One of the great things about the Internet is that our bodies are the same size here, and if you want to play with the adults, there’s nothing stopping a young person from doing so… #

I remember how surprised, and then delighted I was when I first learned about Aaron’s youth, as he began to engage with the RSS community. It was refreshing to see (much of) the community accept him as a person with ideas, and worthy of being listened to. Especially since I often didn’t fit in easily when I was younger, and I’d wished that being bright and engaged were enough to gain acceptance in the so-called “real world”.

Dave did me the same honor when I approached him and UserLand in 2000, and asked if I could work with them. At that point I had few accomplishments to prove my worth as a developer, other than some hacked together Frontier scripts that ran my own blog, a bit of incomplete online writing, and the willingness to ask for their trust.

The move to UserLand, and having worked with Dave and others there have had a very positive long-term effect for me, which is difficult to quantify:

  • I grew from a tinkerer into a real software developer working in Frontier, with Dave, Brent, and André as mentors.
  • Relationships I made at UserLand, and work I did both with and for Dave, continue to open professional doors for me even today.
  • My UserLand experience proved to me in a personal way that a few smart, dedicated people can have a big impact, if they stick to it over time.

Thanks, Dave!

So on your 20th blog-versary, I’d like to say “Thanks, Dave!” for sharing your thoughts and writing with us all, and for narrating your work on so many things, some of which now seem obvious—even taken for granted.

And on a personal note, thank you for the great opportunity and experience of working with you and the folks at UserLand. It was a great experience for me, and in no small part it was reading your writing, starting about 19 years ago, that made me want to work with you. :-)

Categories: Blogging



In the last post on this topic, I discussed some of the differences between Manila and WordPress, and how understanding those differences teased out some of the requirements for this project.

In this post I’m going to talk about the design and implementation of a ManilaToWXR Tool, some more requirements that were revealed through the process of building it, and a few of the tricky edge cases I had to deal with.

A little history first…

Among the more interesting things I did while I was a developer at UserLand was to build a framework we called the Tools Framework, which brought together many different points of extensibility and made it easy for developers to customize the environment.

In Frontier, Radio UserLand, and the OPML Editor, a Tool is a collection of code and data in a database, which extends or overrides some platform- or application-level functionality. It’s sort of analogous to a Plugin in the WordPress universe, but Tools can also do things like run code periodically (or continuously) in the background, or implement entirely new web applications, or even customize Frontier’s native UI.

For example, you could implement a Tool that hooks into the windowTypes framework and File menu callbacks to implement a new document type corresponding to a WordPress post. Commands in the File menu call the WordPress API, and present a native interface for editing your blog—probably in an outline. Radio UserLand did exactly this for Manila sites, and it was fantastic. (More on that later.)

Another example of a Tool is one that implements some new XML-RPC endpoints (RPC handlers in Frontier) to provide a programmatic API for accessing some content in a database on your server.

For my purposes, I’m not doing anything nearly so complicated. The main thing I wanted comes from the Tools > New Tool… menu command. This creates a new database and pre-populates it with a bunch of placeholders for things like its menu, a table for data and preferences, and of course a table where my code will live.

It gives me an easy, standard way to create a database with the right structure, and the hooks into the menu bar that I wanted to make my exporter easy to use.

Code Components

Now some of this may sound pedantic to the developer-types who are reading this, but please bear with me on behalf of our non-nerd cohorts.

Any time you need to write a lot of code, it makes sense to break the work down into small, bite-sized problems. By solving each of those problems one at a time, sometimes in layers, you eventually work your way towards a complete solution.

Each little piece should be simple enough that you can compartmentalize it and separate it from the other pieces. This is called factoring, and it’s good for lots of reasons including readability, maintainability, debug-ability, and reuse. And if you miss something, make a mistake in your design, or discover that some part of your system doesn’t perform well, it’s far easier to rewrite just one or a couple of parts than it is to de-spaghettify a big, monolithic mess.

Components and sub-components should have simple and consistent interfaces so that other code that talks to them can in turn be made simple and consistent. Components should also have minimal or no side-effects, meaning that they don’t change data that some other code depends on. And components should usually perform one or a very small number of tasks in a predictable way, to keep them small, and make them easy to test and debug. If you find yourself writing hundreds of lines of code in one place, you probably need to break the problem down into smaller components.

So with these concepts in mind, I set about coming up with a component-level design for my Tool. I initially came up with four types of components that I would need, and each type of component may have a specific version depending on the type of object it knows about.

Iterators

First, I’m going to need an easy way to iterate across posts, stories, pictures, and other objects. As my code iterates over the objects in my site, the tool will create a fragment of XML for each one, which will go into a WXR file on disk.

By separating the iteration from everything else, I can easily change the order in which objects are exported, apply filters for specific object types, or only export objects in a given date or ID range. (It turned out that ranges and filters were useful for debugging later on.)

Manila stores most content in its #discussionGroup in a sub-table named messages. User information is in #membershipGroup, and there’s some other data scattered around too. But the most important content—posts, pages, pictures, and comments—is all in the #discussionGroup.

Initially I’d planned to make multiple passes over the data, with one pass for each type of data I wanted to export. So first export all the posts, next the pages, next pictures, etc. As it turned out however, in both Manila and WordPress, a post, a page, and a picture have more in common than not in terms of how they’re stored and the data that comes along with them. Therefore it actually made more sense to do just one pass, and export all the data at one time.

There was one exception, however: In WordPress, unlike Manila, comments are stored in a separate table from other first-class site content, and they appear in a WXR file as children of an <item> rather than as their own <item> under the <channel> element.
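Schematically, the relevant part of a WXR file looks like this (values made up, trimmed to the relevant elements):

    <channel>
      <item>
        <title>A blog post</title>
        <wp:post_id>1243</wp:post_id>
        <wp:post_type>post</wp:post_type>
        <wp:comment>
          <wp:comment_id>1</wp:comment_id>
          <wp:comment_parent>0</wp:comment_parent>
          <wp:comment_content><![CDATA[A comment on the post]]></wp:comment_content>
        </wp:comment>
      </item>
    </channel>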

In the end I decided to write two iterators. Each of them would take the address of the site (so they can find other required metadata about a person for instance), and the address of a function to call for each object as it goes along:

wxr.visit.messages – iterates over all of the messages in my site’s #discussionGroup, skipping over deleted items and comments, since they won’t be exported as an <item> in my WXR file.

wxr.visit.comments – recurses over responses to a message to generate threaded comment information.

It turned out later on that I needed two more iterators—one for categories, and one for “Gems” (non-picture files), but the two above were a great starting point that would give my code easy access to the bulk of the content.
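In skeletal form, the messages visitor looks something like this in UserTalk (a sketch; the flDeleted and inResponseTo field names are my best recollection of Manila’s message structure, so treat them as assumptions):

    on visitMessages (adrSite, adrCallback)
        local (adrMsgs = @adrSite^.["#discussionGroup"].messages)
        for i = 1 to sizeOf (adrMsgs^)
            local (adrMsg = @adrMsgs^[i])
            local (flDeleted = defined (adrMsg^.flDeleted) and adrMsg^.flDeleted) « deleted items are skipped
            local (flComment = defined (adrMsg^.inResponseTo)) « responses are comments; wxr.visit.comments handles them
            if not (flDeleted or flComment)
                adrCallback^ (adrMsg) « hand each live, top-level message to the callback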

Data Extractors

Next I needed some data extractors. These are type-specific components that pull the data for a post, picture, comment, etc. out of the database, and normalize it to a native data structure that can then easily be output to XML for my WXR file.

The most important data extractor is wxr.post.data, which takes the address of a message containing a blog post that’s in my site’s #discussionGroup—and returns a table (struct) that has all of the data elements that will go into an <item> in the exported WXR file.

Because the WordPress importer expects the comments as <wp:comment> sub-elements of <item>, the post data extractor will also call into another data extractor that generates normalized data representing a comment.

For other types of objects I’ll need code that extracts data for that type as well. So I’ll need code to extract data for a picture, code to extract data for a page (story), and code to extract data for a gem (file).

Here’s part of the code that grabs the data for a comment:
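(A sketch: the Manila-side field names and the wxr.member.name helper are stand-ins, while wxr.comment.parent and wxr.string.processMacros are the real calls discussed below.)

    on commentData (adrSite, adrComment)
        local (t)
        new (tableType, @t)
        t.["wp:comment_date"] = adrComment^.postTime
        t.["wp:comment_author"] = wxr.member.name (adrSite, adrComment^.member) « hypothetical helper: username to display name
        t.["wp:comment_parent"] = wxr.comment.parent (adrComment) « the parent comment's ID, to preserve threading
        t.["wp:comment_content"] = wxr.string.processMacros (adrSite, adrComment^.body) « expand macros and glossary references
        t.["wp:comment_approved"] = "1" « the real code exports the actual state; content is kept either way
        return (t)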

There are a few interesting things to point out here:

  1. I chose to capture comment content even if it’s not approved. Better to keep the content than lose it, just in case I decide to approve it later.
  2. The call to wxr.comment.parent gets the ID of the comment’s parent. This preserves the threaded nature of the conversation, even if I decide not to have threaded comments in my WordPress site later on. It turns out that supporting both threaded and unthreaded comments was the source of some pain that I’ll explain in a future post.
  3. The call to wxr.string.processMacros is especially important. This call emulates what Manila, mainResponder, and the Frontier website framework do when a page is rendered to HTML. Without this capability, Frontier macro source code would leak through into my WordPress site, and many internal links generated from #glossary items would be broken. Getting this working was another source of pain that took a while to work through—again, more in a future post.
  4. All sub-items in the table that gets returned have names that start with “wp:”, which I’ll explain below…

Encoders

Once I had some structured data, I was going to need to use it to encode some XML. It turns out that this component could be done in a very generic way that would work with any of my data extractors.

Frontier actually does have fairly comprehensive XML capabilities. But the way they’re implemented requires very verbose code that I really didn’t want to write. I had done quite enough of that in a past life. ;-)

So I decided to write a much simpler one-way XML-izer that I could easily integrate with my data extractors.

The solution I came up with was to recurse over the data structure that an extractor passed to it, and generate an XML tree whose element names match the sub-items’ names, and whose element contents come from the contents of each sub-item.

There were three features I needed to add in order to make this work well:

Namespaces: Many elements in a WXR file are in a non-default namespace—either wp: for the WordPress-specific data, or dc: for the Dublin Core extension. This feature was easy to deal with by just naming sub-items with the namespace prefix, i.e. an element named parent in the wp: namespace would simply be called wp:parent when returned by the data extractor.

Multiple elements: Often I needed to create multiple elements at a given level in the XML file that all have the same name. <wp:comment> is a good example. The solution I came up with here is similar to the one Frontier implements in its native XML verbs.

A compiled XML table in Frontier has sub-items representing elements, which have a number, a tab character, and the element’s name. The Frontier GUI hides the number and the tab character when you view the table, so you can see multiple same-named elements in the table editor. When you click an item’s name, the number and tab character are revealed, and you can edit them if you want. That said, you’re supposed to use the XML verbs, xml.addTable or xml.addValue to add elements.

Most of this is not particularly well documented, and personally I don’t think it was the most elegant solution, but it was effective at working around Frontier’s limitation that items in tables had to have unique names, whereas in XML they don’t.

I wanted something simpler, so I decided instead to simply strip anything after a comma character from the sub-item’s name. This way whenever my data extractor is adding an item, it can just use table.uniqueName with a prefix ending in a comma character, and then add the item at that address. Two lines of code, or one if we get just a little bit fancy:
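(Stand-in names for the locals; the comma-suffixed prefix is the scheme just described.)

    « two lines:
    local (name = table.uniqueName ("wp:comment,", @itemTable))
    itemTable.[name] = commentTable

    « ...or one, if we get just a little bit fancy:
    itemTable.[table.uniqueName ("wp:comment,", @itemTable)] = commentTable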

XML attributes: The last problem to solve was generating attributes on XML elements, for example <guid isPermaLink="false">...</guid>. It turns out that if there were an xml.addAttributeValue in Frontier, it could have handled this pretty easily, but that was never implemented. Instead I’d have to add an /atts sub-table, and add the attribute manually—which takes multiple lines of code just to set a single attribute. Of course I could implement xml.addAttributeValue, but I don’t have a way to distribute it, so nobody else could use it! :-(

In addition, I really didn’t want big, deeply-nested data structures flying around my call-stack, since I’m going to be creating thousands of tables at run-time, and I was concerned about memory and performance.

In the end I decided to do a hack: By using the | character to delimit attribute/value pairs in the name of table sub-elements, I could include the attributes and their values in the element name itself. So the <guid isPermaLink="false"> element would come from a sub-item named guid|isPermaLink=false.
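So, with stand-in values:

    t.["guid|isPermaLink=false"] = "http://jakesavin.com/?p=10015"
    « ...which the encoder turns into:
    « <guid isPermaLink="false">http://jakesavin.com/?p=10015</guid>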

Normally I would avoid doing something like this since hacks have a tendency to be fragile, but in this case I know in advance what all of the output needs to look like, so I don’t need a robust widely-applicable solution, and the time I save with the hacky version is worth it.

Utility Functions

Then there’s a bunch of miscellany:

  • A way to easily wrap the body of a post with <![CDATA[...]]> tokens, and properly handle the edge case where the body actually contains those tokens. (There’s a sketch of this one after the list.)
  • A non-buggy way to encode entities in text destined for XML. (xml.entityEncode has had some bugs forever, which weren’t fixed because of Rule 1.)
  • Code to deal with encoding various date formats, and converting to GMT.
  • Code to convert non-printable characters into the appropriate HTML entities (which in turn get encoded in XML).
  • Other utility functions dealing with URLs, calculating permalinks, getting people’s names from their usernames, etc.
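The CDATA wrapper from the first bullet relies on the standard escaping trick: a literal ]]> inside the body would terminate the CDATA section early, so it gets split across two sections. A sketch:

    on wrapCdata (s)
        « split any embedded "]]>" so it can't close the section early
        s = string.replaceAll (s, "]]>", "]]]]><![CDATA[>")
        return ("<![CDATA[" + s + "]]>")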

The Elephants in the Room

At this point there were a few more things I knew I would need to address. I’ll talk about these along with handling media objects in my next post. In the meantime, here’s a teaser:

  1. Lots of stuff in Manila just doesn’t work at all unless you actually install the site, with Manila’s source code available.
  2. The macro and glossary processors aren’t easy to get working unless the code is running in the context of a real web request.
  3. What should I do about all the incoming links to my site? Are they all going to simply break?

I’ll talk about how I dealt with these and other issues in the next post.

More soon…

Categories: Development, Manila, WordPress
