To me, it feels like there’s some kind of inflection point being reached, but I base this on not much more than my own subjective, albeit at least somewhat informed experience.

The obviously important recent SCOTUS decisions are out there of course, but we have multiple justices over age 80 going into a presidential election with a big potential for a backlash, which could lead to appointments that reverse a lot of positive progress.

We had a huge financial meltdown, and now, 6+ years later, a lot more of the general public is well aware that real justice has yet to be served to many of those responsible. Some of the changes we’re seeing may be a result of this.

Some Evangelicals are aligning more and more with liberals and progressives on protecting the Earth and the environment, and are deeply concerned about limiting the impending damage that will be caused by climate change.

And we have generational changes in social norms coming to a head around the world, at a time when mass communication over most of the globe has never been more accessible, in spite of corporate and government attempts to control or curtail it—at least so far.

Look at how many videos are going online all over the country and the world, of police misconduct, racism, and brutality. That this is happening is far from new. That neither the media nor many governments can really control who knows about it is new. The information has been becoming more available for decades, but the visceral reality in these videos has only been widely visible for the last few years.

In Arthur C. Clarke’s world of 2010, wars between nations ended after the abolition of long distance phone charges, which led to many average people having friends all over the world. You can’t attack a country filled with so many people that are loved by your own citizens—that was the thinking. While it’s not working out in quite the way that Clarke envisioned, there is still huge potential in making information from primary sources available globally, at massive scale, and for such little cost.

At the same time the Internet has also led some (many perhaps) into isolated enclaves—information deserts (borrowing from the idea of food deserts in American urban areas), where the only ideas that flow freely are the ones that a clique agrees with, along with a few refrains that they abhor and can use as foils and straw men, to “argue” about how wrong or even evil the other side is.

I for one am cautiously optimistic.

Ps. This post is in response to an online discussion about a Kevin Garcia piece on bedlammag.com.

Uncategorized

My friend Brent Simmons has recently written a series of blog posts—seven parts so far—on How Not to Crash, for Cocoa and iOS developers. Brent is an experienced and thoughtful programmer, and these are well worth a read. Most are probably useful even to programmers working in other languages.

Check them out!

How Not to Crash #1: KVO and Manual Bindings
How Not to Crash #2: Mutation Exceptions
How Not to Crash #3: NSNotification
How Not to Crash #4: Threading
How Not to Crash #5: Threading, part 2
How Not to Crash #6: Properties and Accessors
How Not to Crash #7: Dealing with Nothing

Update: Brent added two more How Not to Crash posts since I originally wrote this:

How Not to Crash #8: Infrastructure
How Not to Crash #9: Mindset

… and wrapped them all up in this post on inessential.com.

CocoaDev Development Uncategorized

Simon Wardley: Evolution, diffusion, hype cycle and early failures:

“I looked at many techniques to measure change and found all of them wanting. I spent years finding out that lots of things weren’t useful for describing evolution. This is why I spent so long in the British Library cataloguing many thousands of publications. There was no effective means of describing the process of evolution until I’d done this work and found a process that seemed to work.”

See also: On mapping and the evolution axis

Uncategorized

Matt Mullenweg writes in “How Paul Graham Is Wrong”:

If 95% of great programmers aren’t in the US, and an even higher percentage not in the Bay Area, set up your company to take advantage of that fact as a strength, not a weakness.

I have heard recently, and first-hand, that some investors don’t like to invest in virtual companies, or in companies where any of the important team members are remote.

This makes me sad.

It’s especially disheartening in light of the continued and sustained explosion in communication tools and capabilities, and the fantastic reduction in cost of communicating with remote people. At the same time I have friends who are experts with very deep experience who are having trouble finding work.

In my now over 18-year technology career, I’ve spent nearly half of it working with or for remote teams or at virtual companies, and a large portion of my best and most important work has happened while working “remote” from home.

Companies (and people) that don’t figure out how to do this are already at a significant disadvantage against those that do, and have been since at least the early 2000’s. And this disadvantage is more than likely to continue to grow as communication and coordination tools continue to get better and cheaper.

Similarly, investors who don’t understand this fact artificially limit their potential upside.

When companies open up to the possibility of remote work, they vastly expand the pool of talent they can draw upon. When people live and work in less expensive locations, they may cost less, or they may be more loyal because you can afford to pay them relatively more. Remote workers may be happier and more productive because they can tailor their work environment to maximize their own, personal productivity needs.

And when the whole company is virtual, you can decrease operational expenses: There may be no need for an office. The cost and time lost to commuting disappears. Perks that are common in our industry, like free food, on-site massages, and high-end office decor are unnecessary, and the savings can be passed on to employees to use in ways that better fit their personal needs.

Early-stage startups can leverage the savings for a longer “runway”. And for established or so-called “growth” companies, you can use the balance to pay for better people, support travel for company-wide meetings, sponsor related trade shows, or a multitude of other things.

Ps. Bonus link: I Am Not a Child by Emma Plumb

Remote Work

Today I needed to start figuring out how to install an open source analytics package on my dev machine. It’s implemented in Java, and needs Tomcat. I groaned. “Great. Another complicated dependency to install,” I thought.

Turns out that installing Tomcat on a Mac is actually pretty easy. I ended up following Wolf Paulus’ tutorial here.

Nice write-up, Wolf. Thanks!
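
The gist, in outline (a rough sketch rather than Wolf’s exact steps; the Tomcat version and install location here are just examples):

    # Grab a binary release tarball from tomcat.apache.org, then unpack it
    # into your home Library folder (8.0.53 is just an example version).
    tar -xzf ~/Downloads/apache-tomcat-8.0.53.tar.gz -C ~/Library
    mv ~/Library/apache-tomcat-8.0.53 ~/Library/Tomcat

    # Make the control scripts executable, then start the server.
    chmod +x ~/Library/Tomcat/bin/*.sh
    ~/Library/Tomcat/bin/startup.sh

    # Tomcat should now be answering at http://localhost:8080/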

Development

I just finished installing an SSL certificate on JakeSavin.com. The main reason was to prevent impersonation and man-in-the-middle attacks while I'm editing or administering my site. I was using SSL to connect to my WordPress admin interface already, but with a self-signed certificate that produces warnings in the browser (in addition to not being as secure as it should be). Now that I have a CA-backed certificate, the warnings go away.

There are some additional benefits to this:

  1. API clients like dedicated blog editing apps that validate SSL certs (as they all should when connecting securely) should now work, though I have yet to test this.
  2. Anyone who visits my site can request the secure URL, and get an encrypted connection to protect their privacy. They can also be reasonably sure that they're actually visiting my real site and not an imposter—not that I'm actually worried about imposters.
  3. Google (at least) has started ranking sites that fully support SSL higher in their searches. Not that I'm really big on SEO for my site, but it's a “nice-to-have” feature.

See also: Embracing HTTPS (Konigsburg, Pant and Kvochko)
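
If you want to double-check a certificate from the command line, something along these lines works (the hostname here is mine; substitute your own):

    # Fetch the server's certificate and print its issuer and validity dates.
    openssl s_client -connect jakesavin.com:443 -servername jakesavin.com </dev/null \
        | openssl x509 -noout -issuer -dates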

If you see any problems, please let me know via a comment, tweet or some-such.

 

Security

A couple of weeks ago, I started running River4 on my Synology DS412+ NAS device. At the moment I’m generating just one river, which you can see on river.jakesavin.com.

Since I needed River4 to run all the time, and I didn’t want to have to kick it off by hand every time the NAS boots up, I decided to write an init.d script to make River4 start automatically.

If you have a Synology NAS or other Linux-based machine that uses init.d to start daemon processes, you can use or adapt my script to run River4 on your machine.

How to

  1. Install node.js and River4 on your machine.
    • I installed River4 under /opt/share/river4/ since that’s where optional software usually goes on Synology systems, but yours can be wherever you want.
  2. Follow Dave’s instructions here in order to set up River4 with your data, and test that it’s working.
  3. Download the init.d shell script.
  4. Unzip, and copy the script to  /opt/etc/init.d/S58river4 on your NAS/server.
  5. Make the script executable on your NAS/server with:  chmod 755 S58river4
  6. Edit the variables near the top of the script to correspond to the correct paths on your local system.
    • If you’re using a Synology NAS, and the SynoCommunity release of node.js, then the only paths you should need to change are RIVER4_EXEC and RIVER4_FSPATH, which are the path to river4.js and your web-accessible data folder (river4data in Dave’s instructions).
  7. Run River4 using  /opt/etc/init.d/S58river4 start

At this point, River4 should be running.

If your firewall is enabled and you want access to the dashboard, you’ll need to add a firewall rule to allow incoming TCP traffic on port 1337. I recommend you only do this for your local network, and not for the Internet at large, since River4 doesn’t currently require authentication.
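
On a Synology NAS you’d normally add the rule in DSM’s firewall settings. On a generic Linux server, an iptables rule along these lines does the job (the subnet is just an example; use your own LAN’s range):

    # Allow dashboard connections on TCP port 1337, but only from the
    # local network (192.168.1.0/24 is an example subnet).
    iptables -A INPUT -p tcp --dport 1337 -s 192.168.1.0/24 -j ACCEPT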

Once your firewall has been configured, you should be able to access the dashboard via:

http://myserver:1337/dashboard

Notes

The script assumes you’re going to be generating your river of news using the local filesystem, per Dave’s instructions for using River4 with file system storage. I haven’t used it with S3 yet, but you should be able to simply comment out the line in my script that says export fspath, and get the S3 behavior.

There is no watcher, so if River4 crashes or is killed, or if node itself exits, then you’ll need to restart River4 manually. (It should restart automatically if you reboot your NAS.)

Questions, Problems, Caveats

I did this on my own, and it is likely to break in the future if River4 changes substantially. I can’t make any guarantees about support or updates.

If you have problems, for now please post a comment on this post, and I’ll do what I can to help.

Please don’t bug Dave. 😉

Source code

Here’s the source code of the init.d script. (The downloadable version also contains instructions at the top of the file.)
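
In outline it looks something like this (a trimmed-down sketch; the node, data, log, and pid paths assume the SynoCommunity node.js package and the locations from the steps above, so adjust them for your system):

    #!/bin/sh
    # Start/stop script for River4, intended for /opt/etc/init.d/S58river4
    # on a Synology NAS (or any Linux box that launches daemons via init.d).

    # Edit these paths for your system. The values shown are examples that
    # assume the SynoCommunity node.js package and a River4 install under
    # /opt/share/river4/, with file-system storage under the web share.
    NODE_EXEC="/usr/local/bin/node"
    RIVER4_EXEC="/opt/share/river4/river4.js"
    RIVER4_FSPATH="/volume1/web/river4data/"
    RIVER4_LOG="/opt/var/log/river4.log"
    PIDFILE="/var/run/river4.pid"

    start() {
        echo "Starting River4..."
        # Tell River4 to use local file-system storage instead of S3.
        # Comment this line out to get the S3 behavior.
        export fspath="$RIVER4_FSPATH"
        cd "$(dirname "$RIVER4_EXEC")"
        "$NODE_EXEC" "$RIVER4_EXEC" >> "$RIVER4_LOG" 2>&1 &
        echo $! > "$PIDFILE"
    }

    stop() {
        echo "Stopping River4..."
        if [ -f "$PIDFILE" ]; then
            kill "$(cat "$PIDFILE")"
            rm -f "$PIDFILE"
        fi
    }

    case "$1" in
        start)
            start
            ;;
        stop)
            stop
            ;;
        restart)
            stop
            sleep 1
            start
            ;;
        *)
            echo "Usage: $0 {start|stop|restart}"
            exit 1
            ;;
    esac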

Uncategorized

Version control and I go back a long way.

Back in the late 1990’s, I was working in the QA team at Sonic Solutions, and was asked to look into our build scripts and source code control system, to investigate what it would take to get us to a cross-platform development environment—one that didn’t suck.

At the time, we were running our own build scripts implemented in the MPW Shell (which was weird, but didn’t suck), and our version control system was Projector (which did suck). I ended up evaluating and benchmarking several systems including CVS, SourceSafe (which Microsoft had just acquired), and Perforce.

In the end we landed on Perforce because it was far and away the fastest and most flexible system, and we knew from talking to folks at big companies (Adobe in particular) that it could scale.

Recently I’ve been reading about some of the advantages and disadvantages of Git versus Mercurial, and I realized I haven’t seen any discussion about a feature we had in the Perforce world called change lists.

Atomic commits, and why they’re good

In Perforce, as in Git and Mercurial, changes are always committed atomically, meaning that for a given commit to the repository, all the changes are made at once or not at all. If anything goes wrong during the commit process, nothing happens.

For example, if there are any conflicting changes between your local working copy and the destination repository, the system will force you to resolve the conflict first, before any of the changes are allowed to go forward.

Atomic commits give you two things:

First, you’re forced to only commit changes that are compatible with the current state of the destination repo.

Second, and more important, it’s impossible (or very difficult) to accidentally put the repo into an inconsistent state by committing a partial set of changes, whether you’re stopped in the middle by a conflicting change, or by a network or power outage, etc.

Multiple change lists?

In Git and Mercurial, as far as I can tell there is only one set of working changes that you can potentially commit on a given working copy. (In Git this is called the index, or sometimes the staging area.)

In Perforce, however, you can have multiple sets of changes in your local working copy, and commit them one at a time. There’s a default change list that’s analogous to Git’s index, but you can create as many additional change lists as you want to locally, each with its own set of files that will be committed atomically when you submit.

You can move files back and forth between change lists before you commit them. You can even author change notes as you go by updating the description of an in-progress change list, without having to commit the change set to the repository.

Having multiple change lists makes it possible, for example, to quickly fix a bug by updating one or two files locally and committing just those files, without having to do anything to isolate the quick fix from other sets of changes you may be working on at the time.

Each change list in Perforce is like its own separate staging area.
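
For example, working in a second change list might look roughly like this (the change list number and file name are placeholders):

    # Create a new, empty change list. Perforce opens an editor so you can
    # write the description as you go.
    p4 change

    # Move a file you already have open from the default change list into
    # the new change list (1234 stands in for the real number).
    p4 reopen -c 1234 QuickFix.c

    # Submit only that change list; everything else stays pending.
    p4 submit -c 1234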

So what’s the corresponding DVCS workflow?

While it’s possible with some hand-waving to make isolated changes using Git or Mercurial, it seems like it would be easy to commit files unintentionally, unless you create a separate local branch for each set of changes.
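
The closest equivalent I’ve found for the quick-fix case, without making a branch, is to stash everything else and commit just the files involved (a sketch; the file name is a placeholder):

    # Set aside everything else that's in progress.
    git stash

    # Stage and commit only the files involved in the quick fix.
    git add QuickFix.m
    git commit -m "Fix crash on empty input"

    # Bring the rest of the in-progress work back.
    git stash pop

It works, but it doesn’t give you multiple named, in-progress change lists the way Perforce does, and it’s easy to forget what you’ve stashed.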

I understand that one of the philosophical advantages people see in distributed version control systems is that they encourage frequent local commits by decoupling version control from a central authority.

But creating lots of local branches seems like a pretty heavy operation to me, in the case where you just need to make a small, quick change, or where you have multiple change sets you’re working on concurrently, but don’t want to have to keep multiple separate local branches in sync with a remote repo.

In the former case, cloning the repo to a branch just to make a small change isn’t particularly agile, especially if the repo is large.

In the latter case, if you’re working on multiple change lists at the same time, keeping more than one local branch in sync with the remote repo creates more and possibly redundant work. And more work means you’re more likely to make mistakes, or to get lazy and take riskier shortcuts.

But maybe I’m missing something.

What do you do?

In this situation, what’s the recommended workflow for Git and Mercurial? Any experts care to comment?

Development

Dave and I released some updates to Manila.root, the version of Manila that runs as a Tool inside the OPML Editor.

Instructions and notes are on the Frontier News site:

If you’ve been following me for the last couple of months, you may have noticed that I’ve been spending some time looking at Manila again.

Recently, I completed a set of updates to bring Manila up to speed when running in the OPML Editor, and with Dave Winer’s help, that work is now released as a set of updates to the Manila.root Tool.

If you’re one of the people who still runs websites with Manila, I’d love to hear from you. Leave a comment here and say “Hi!”, or if you run into any problems with Manila as a Tool in the OPML Editor, please ask on the frontier-user mail list / Google group. :-)

Development Manila

Catching up on my RSS feeds today, I realized I’d missed Ben Thompson’s Two Microsofts piece a few days ago on stratēchery:

On the consumer side, Microsoft hopes to make money from devices and advertising: they sell Surfaces, Lumias, and Xboxes with differentiated OS’s, hardware, and services, and they have ad-supported services like Bing and Outlook. The enterprise side is the exact opposite: here the focus is 100% on services, especially Azure and Office 365 (to use the Office iPad apps for business still requires a subscription).

This actually makes all kinds of sense: enterprise and consumer markets not only require different business models, but by extension require very different companies with different priorities, different sales cycles, different marketing, so on and so forth. Everything that makes Office 365 a great idea for the enterprise didn’t necessarily make it the best idea for consumers, just as the model for selling Xbox’s hardly translates to big business. From this perspective, I love the idea of Office on iOS and Android being free for consumers: get people into the Microsoft ecosystem even as you keep them in the Office orbit.

I see things a little bit differently—maybe two Microsofts isn’t going far enough. It may well be insufficient for Microsoft to apply only a consumer/enterprise split broadly across so many existing product lines. Microsoft’s products serve such a diverse set of scenarios across such a broad range of markets that I find myself wondering whether a multi-faceted approach wouldn’t be more effective.

Take Xbox for example. Making Xbox work requires everything from media streaming to user identity to cloud storage to 3D rendering. User identity applies to any product that has a services component, and cuts across almost everything Microsoft is doing right now. On the other hand, media streaming is more specific to consumer scenarios, while 3D technologies cross multiple markets from business presentations, to CAD, to video games, albeit at the moment with a heavy emphasis on consumer scenarios.

It’s easy both from the perspective of outside analysis, and behind-the-curtain business management, to try to simplify the story and reduce it to a small number of core truths that indicate a single strategic approach to success across multiple markets.

But all of these markets are shifting more and more rapidly, in different ways, with different vectors, and with huge differences in their respective competitive environments. Setting one or two long-term (or even medium-term) strategies to apply to all of them is unlikely to result in success for more than one or two, and will probably lead some efforts straight into the ditch.

In my (very humble) opinion, a more optimized approach would be to separate into three or more largely separate lines of business, and within each appoint captains of specific products, who would have a very large degree of autonomy. At least three of these groups would be:

  • Core technology (Developer tools, Windows, Azure, SQL, .NET, Visual Studio)
  • Enterprise (Office and Office 365, Sharepoint, Dynamics, cloud services, etc)
  • Consumer (Xbox, mobile, accessory devices)

The core technology group would function mostly autonomously as a provider of platform-level components that form the foundation of the other lines of business. These components would have documented interfaces that anyone can plug into, even outside of Microsoft. (Perhaps we’re seeing the beginning of this strategy in the recent announcement about .NET going open source.) Success here is twofold: Core technology enables multiple other lines of business as a platform, and by being open Microsoft can start to win back developers who have flocked to mobile and open source platforms.

The Enterprise group would focus on productivity and back-office scenarios, while driving requirements into the core tech group where appropriate, so that they can be leveraged across multiple products, and feed into consumer products. As in the past, revenue in enterprise is driven by software licensing, support contracts, and subscription services. Building on core tech also helps prove that the platform is robust and comprehensive enough for developers to bet on it, as they are today on iOS, Android, ruby, node.js, etc.

The Consumer group would focus on inexpensive or free software, with devices being the primary lever for market differentiation and driving revenue. The consumer group would drive common UX and UI frameworks into the core technology platform where they can be leveraged by 3rd parties as well as by Enterprise and developer tools. Make most or all of the software and services free (or as close to free as possible), and shoot for the best possible user experiences with the most functionality, with the value of software actually appearing as revenue via wider adoption of core technologies, and the improvements reaped by the Enterprise group.

Within each group, let the products stand or fail on their own merits. Give the owners of each product as much autonomy as they need in order to innovate and differentiate themselves in the market as a whole. Goals for product owners in priority order should be: Great products, wide adoption, cross-feeding into other groups, and finally revenue.

I’ve seen many great ideas from Microsoft turn into successful but short-lived products, only to be killed because they didn’t fit well into the grand strategy of the day, or because the way that they did fit wasn’t recognized by leadership at some level. Sometimes nearly identical products were even released over and over by different teams, with different names and separate implementations. After all, a good idea is a good idea is a good idea. But without the backing of the Microsoft management machine, there was no way for these products to get the support they needed in order to continue to exist.

Many people who are smarter, more educated, and more experienced than I am have spent years thinking and working on this. And I’m sure I don’t have any answers that haven’t been thought of before, probably by many of those people.

But from my vantage point, Microsoft has a tendency to go “all-in” on very expensive but often only marginally successful grand-unified strategies, while at the same time ignoring or even eschewing potentially great but more narrowly focused products when they don’t fit the vision du jour.

My hope is that Microsoft can continue to evolve in a direction that allows for the kind of innovation it wants to be creating for the world, even (or perhaps especially) when that innovation doesn’t quite fit the mold.

 

Microsoft