A couple of weeks ago, I started running River4 on my Synology DS412+ NAS device. At the moment I’m generating just one river, which you can see on river.jakesavin.com.

Since I needed River4 to run all the time, and I didn’t want to have to kick it off by hand every time the NAS boots up, I decided to write an init.d script to make River4 start automatically.

If you have a Synology NAS or other Linux-based machine that uses init.d to start daemon processes, you can use or adapt my script to run River4 on your machine.

How to

  1. Install node.js and River4 on your machine.
    • I installed River4 under /opt/share/river4/ since that’s where optional software usually goes on Synology systems, but yours can be wherever you want.
  2. Follow Dave’s instructions here in order to set up River4 with your data, and test that it’s working.
  3. Download the init.d shell script.
  4. Unzip, and copy the script to  /opt/etc/init.d/S58river4 on your NAS/server.
  5. Make the script executable on your NAS/server with:  chmod 755 S58river4
  6. Edit the variables near the top of the script to correspond to the correct paths on your local system.
    • If you’re using a Synology NAS, and the SynoCommunity release of node.js, then the only paths you should need to change are RIVER4_EXEC and RIVER4_FSPATH, which are the path to river4.js and your web-accessible data folder (river4data in Dave’s instructions).
  7. Run River4 using  /opt/etc/init.d/S58river4 start

At this point, River4 should be running.
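For reference, steps 4 through 7 boil down to a few shell commands on the NAS, once you’ve downloaded and unzipped the script (paths as in the steps above; adjust them for your setup):

    cp S58river4 /opt/etc/init.d/S58river4
    chmod 755 /opt/etc/init.d/S58river4

    # Edit RIVER4_EXEC, RIVER4_FSPATH, etc. near the top of the script, then:
    /opt/etc/init.d/S58river4 start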

If your firewall is enabled and you want access to the dashboard, you’ll need to add a firewall rule to allow incoming TCP traffic on port 1337. I recommend you only do this for your local network, and not for the Internet at large, since River4 doesn’t currently require authentication.

Once your firewall has been configured, you should be able to access the dashboard via:

http://myserver:1337/dashboard
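You can also check from a shell on the NAS itself, which doesn’t require the firewall rule since the request never leaves the machine (a sketch):

    # Any HTML response here means River4 is up and listening on port 1337
    curl -s http://localhost:1337/dashboard | head -n 5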

Notes

The script assumes you’re going to be generating your river of news using the local filesystem, per Dave’s instructions for using River4 with file system storage. I haven’t used it with S3 yet, but you should be able to simply comment out the line in my script that says export fspath, and get the S3 behavior.

There is no watcher, so if River4 crashes or is killed, or if node itself exits, then you’ll need to restart River4 manually. (It should restart automatically if you reboot your NAS.)

Questions, Problems, Caveats

I did this on my own, and it is likely to break in the future if River4 changes substantially. I can’t make any guarantees about support or updates.

If you have problems, for now please post a comment on this post, and I’ll do what I can to help.

Please don’t bug Dave. ;-)

Source code

Here’s the source code of the init.d script. (The downloadable version also contains instructions at the top of the file.)
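In outline it looks something like the sketch below. This is a minimal sketch rather than the full script: the variable names follow the conventions described above, the default paths are examples only, and the downloadable version is the one to actually use.

    #!/bin/sh
    # S58river4 -- minimal sketch of an init.d wrapper for River4

    # Adjust these for your system (see step 6 above); the values below are examples.
    NODE_EXEC="/opt/bin/node"
    RIVER4_EXEC="/opt/share/river4/river4.js"
    RIVER4_FSPATH="/volume1/web/river4data/"
    PIDFILE="/var/run/river4.pid"
    LOGFILE="/var/log/river4.log"

    start() {
        echo "Starting River4..."
        # River4 uses file system storage when fspath is set in the environment;
        # comment out the export to get the S3 behavior instead.
        export fspath="$RIVER4_FSPATH"
        cd "$(dirname "$RIVER4_EXEC")"
        "$NODE_EXEC" "$RIVER4_EXEC" >> "$LOGFILE" 2>&1 &
        echo $! > "$PIDFILE"
    }

    stop() {
        echo "Stopping River4..."
        [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    }

    case "$1" in
        start)   start ;;
        stop)    stop ;;
        restart) stop; sleep 1; start ;;
        *)       echo "Usage: $0 {start|stop|restart}" ;;
    esac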

Uncategorized

Version control and I go back a long way.

Back in the late 1990s, I was working in the QA team at Sonic Solutions, and was asked to look into our build scripts and source code control system, to investigate what it would take to get us to a cross-platform development environment—one that didn’t suck.

At the time, we were running our own build scripts implemented in the MPW Shell (which was weird, but didn’t suck), and our version control system was Projector (which did suck). I ended up evaluating and benchmarking several systems including CVS, SourceSafe (which Microsoft had just acquired), and Perforce.

In the end we landed on Perforce because it was far and away the fastest and most flexible system, and we knew from talking to folks at big companies (Adobe in particular) that it could scale.

Recently I’ve been reading about some of the advantages and disadvantages of Git versus Mercurial, and I realized I haven’t seen any discussion about a feature we had in the Perforce world called change lists.

Atomic commits, and why they’re good

In Perforce, as in Git and Mercurial, changes are always committed atomically, meaning that for a given commit to the repository, all the changes are made at once or not at all. If anything goes wrong during the commit process, nothing happens.

For example, if there are any conflicting changes between your local working copy and the destination repository, the system will force you to resolve the conflict first, before any of the changes are allowed to go forward.

Atomic commits give you two things:

First, you’re forced to only commit changes that are compatible with the current state of the destination repo.

Second, and more important, it’s impossible (or very difficult) to accidentally put the repo into an inconsistent state by committing a partial set of changes, whether you’re stopped in the middle by a conflicting change, or by a network or power outage, etc.
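In Git, for example, you see the same guarantee when you try to push to a remote that has moved on: nothing lands until you reconcile (a sketch; the branch name and output are abbreviated examples):

    git push origin master
    # ! [rejected]    master -> master (fetch first)
    # error: failed to push some refs to 'origin'

    # Bring the remote changes in, resolve any conflicts locally, then push again
    git pull --rebase origin master
    git push origin master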

Multiple change lists?

In Git and Mercurial, as far as I can tell there is only one set of working changes that you can potentially commit on a given working copy. (In Git this is called the index, or sometimes the staging area.)

In Perforce, however, you can have multiple sets of changes in your local working copy, and commit them one at a time. There’s a default change list that’s analogous to Git’s index, but you can create as many additional change lists as you want to locally, each with its own set of files that will be committed atomically when you submit.

You can move files back and forth between change lists before you commit them. You can even author change notes as you go by updating the description of an in-progress change list, without having to commit the change set to the repository.

Having multiple change lists makes it possible, for example, to quickly fix a bug by updating one or two files locally and committing just those files, without having to do anything to isolate the quick fix from other sets of changes you may be working on at the time.

Each change list in Perforce is like its own separate staging area.
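For the command-line minded, the mechanics look roughly like this (the change number and file names are made up for illustration):

    p4 change                          # opens your editor; saving creates a new pending change list, say 1234
    p4 reopen -c 1234 db/schema.sql    # move just the quick-fix files into it
    p4 reopen -c 1234 db/upgrade.sql
    p4 submit -c 1234                  # submit that change list atomically; the default change list stays pending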

So what’s the corresponding DVCS workflow?

While it’s possible with some hand-waving to make isolated changes using Git or Mercurial, it seems like it would be easier to commit files unintentionally than it is in Perforce, unless you create a separate local branch for each set of changes.
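The hand-waving version in Git looks something like the following (a sketch; file and branch names are made up):

    # Stage and commit only the two files for the quick fix
    # (assuming nothing else is already staged)
    git add src/feed.js src/feed-test.js
    git commit -m "Fix crash on feeds with no pubDate"

    # Or park the other in-progress work and put the fix on its own local branch
    git stash
    git checkout -b quickfix
    # ...make and commit the fix...
    git checkout master
    git stash pop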

I understand that one of the philosophical advantages people see in distributed version control systems is that they encourage frequent local commits by decoupling version control from a central authority.

But creating lots of local branches seems like a pretty heavy operation to me, in the case where you just need to make a small, quick change, or where you have multiple change sets you’re working on concurrently, but don’t want to have to keep multiple separate local branches in sync with a remote repo.

In the former case, cloning the repo to a branch just to make a small change isn’t particularly agile, especially if the repo is large.

In the latter case, if you’re working on multiple change lists at the same time, keeping more than one local branch in sync with the remote repo creates more and possibly redundant work. And more work means you’re more likely to make mistakes, or to get lazy and take riskier shortcuts.

But maybe I’m missing something.

What do you do?

In this situation, what’s the recommended workflow for Git and Mercurial? Any experts care to comment?

Development

Dave and I released some updates to Manila.root, the version of Manila that runs as a Tool inside the OPML Editor.

Instructions and notes are on the Frontier News site:

If you’ve been following me for the last couple of months, you may have noticed that I’ve been spending some time looking at Manila again.

Recently, I completed a set of updates to bring Manila up to speed when running in the OPML Editor, and with Dave Winer’s help, that work is now released as a set of updates to the Manila.root Tool.

If you’re one of the people who still runs websites with Manila, I’d love to hear from you. Leave a comment here and say “Hi!”, or if you run into any problems with Manila as a Tool in the OPML Editor, please ask on the frontier-user mail list / Google group. :-)

Development Manila

Catching up on my RSS feeds today, I realized I’d missed Ben Thompson’s Two Microsofts piece a few days ago on stratēchery:

On the consumer side, Microsoft hopes to make money from devices and advertising: they sell Surfaces, Lumias, and Xboxes with differentiated OS’s, hardware, and services, and they have ad-supported services like Bing and Outlook. The enterprise side is the exact opposite: here the focus is 100% on services, especially Azure and Office 365 (to use the Office iPad apps for business still requires a subscription).

This actually makes all kinds of sense: enterprise and consumer markets not only require different business models, but by extension require very different companies with different priorities, different sales cycles, different marketing, so on and so forth. Everything that makes Office 365 a great idea for the enterprise didn’t necessarily make it the best idea for consumers, just as the model for selling Xbox’s hardly translates to big business. From this perspective, I love the idea of Office on iOS and Android being free for consumers: get people into the Microsoft ecosystem even as you keep them in the Office orbit.

I see things a little bit differently—maybe two Microsofts isn’t going far enough. It may well be insufficient for Microsoft to apply only a consumer/enterprise split broadly across so many existing product lines. Microsoft’s products serve such a diverse set of scenarios across such a broad range of markets that I find myself wondering whether a multi-faceted approach wouldn’t be more effective.

Take Xbox for example. Making Xbox work requires everything from media streaming to user identity to cloud storage to 3D rendering. User identity applies to any product that has a services component, and cuts across almost everything Microsoft is doing right now. On the other hand, media streaming is more specific to consumer scenarios, while 3D technologies cross multiple markets from business presentations, to CAD, to video games, albeit at the moment with a heavy emphasis on consumer scenarios.

It’s easy, both from the perspective of outside analysis and of behind-the-curtain business management, to try to simplify the story and reduce it to a small number of core truths that indicate a single strategic approach to success across multiple markets.

But all of these markets are shifting more and more rapidly, in different ways, with different vectors, and huge differences in their respective competitive environments. Setting one or two long-term (or even medium-term) strategies that apply to all of them is unlikely to result in success for more than one or two, and probably leads some efforts straight into the ditch.

In my (very humble) opinion, a more optimized approach would be to split into three or more largely independent lines of business, and within each appoint captains of specific products, who would have a very large degree of autonomy. At least three of these groups would be:

  • Core technology (Developer tools, Windows, Azure, SQL, .NET, Visual Studio)
  • Enterprise (Office and Office 365, Sharepoint, Dynamics, cloud services, etc)
  • Consumer (Xbox, mobile, accessory devices)

The core technology group would function mostly autonomously as a provider of platform-level components that form the foundation of the other lines of business. These components would have documented interfaces that anyone can plug into, even outside of Microsoft. (Perhaps we’re seeing the beginning of this strategy in the recent announcement about .NET going open source.) Success here is twofold: Core technology enables multiple other lines of business as a platform, and by being open Microsoft can start to win back developers who have flocked to mobile and open source platforms.

The Enterprise group would focus on productivity and back-office scenarios, while driving requirements into the core tech group where appropriate, so that they can be leveraged across multiple products, and feed into consumer products. As in the past, revenue in enterprise is driven by software licensing, support contracts, and subscription services. Building on core tech also helps prove that the platform is robust and comprehensive enough for developers to bet on it, as they are today on iOS, Android, ruby, node.js, etc.

The Consumer group would focus on inexpensive or free software, with devices being the primary lever for market differentiation and driving revenue. The consumer group would drive common UX and UI frameworks into the core technology platform where they can be leveraged by 3rd parties as well as by Enterprise and developer tools. Make most or all of the software and services free (or as close to free as possible), and shoot for the best possible user experiences with the most functionality, with the value of software actually appearing as revenue via wider adoption of core technologies, and the improvements reaped by the Enterprise group.

Within each group, let the products stand or fail on their own merits. Give the owners of each product as much autonomy as they need in order to innovate and differentiate themselves in the market as a whole. Goals for product owners in priority order should be: Great products, wide adoption, cross-feeding into other groups, and finally revenue.

I’ve seen many great ideas from Microsoft turn into successful but short-lived products, only to be killed because they didn’t fit well into the grand strategy of the day, or sometimes the way that they did fit wasn’t recognized by leadership at some level. Sometimes, nearly identical products were even released over and over by different teams, with different names, and separate implementations—after all a good idea is a good idea is a good idea. But without the support of the Microsoft management machine, there was no way for these products to get the backing they needed in order to continue to exist.

Many people who are smarter, more educated, and more experienced than I am have spent years thinking and working on this. And I’m sure I don’t have any answers that haven’t been thought of before, probably by many of those people.

But from my vantage point, Microsoft has a tendency to go “all-in” on very expensive but often only marginally successful grand-unified strategies, while at the same time ignoring or even eschewing potentially great but more narrowly-focused products, when they don’t fit the vision du jour.

My hope is that Microsoft can continue to evolve in a direction that allows for the kind of innovation it wants to be creating for the world, even (or perhaps especially) when that innovation doesn’t quite fit the mold.

Microsoft

I love this:

Just because you’re building a new product with limited resources, doesn’t mean you can ignore design, usability, or reliability.

(Thanks to Santiago Valdarrama for the retweet.)

Uncategorized

Woody Leonhard in InfoWorld: Brummel bails: Another member of the Microsoft old guard to leave.

“Lisa Brummel who, rightly or wrongly, was associated in many employees’ minds with the detested stack ranking system, leaves at the end of the year. With 25 years at Microsoft and almost a decade leading the Human Resources organization, she’s one of the last of the Ballmer inner circle to hit the trail…

“Opinions vary as to whether Brummel created Microsoft’s version of the stack ranking system or merely enforced it with an iron hand. But legions of Microsoft employees will remember her for the system that forced co-workers to compete, not cooperate.”

Microsoft

Just before the midterm election, I wrote a pretty misinformed post about how broken the US health insurance system is. While my level of due diligence when it comes to the protections afforded by the Affordable Care Act was pretty lacking, I did then, and still do, have some serious concerns about whether the pre-existing condition protections will continue to stand in the future.

On election night, I saw TX Senator Ted Cruz on ABC News say the following:

“The Obama economy isn’t working… People want leadership… Now that the Republicans have won the majority, it’s incumbent on us to stand up and lead.”

(“Uh oh, here we go,” I thought.)

When asked, “And what happens to Obamacare?” Cruz answered: (emphasis mine)

“I think Republicans should do everything humanly possible to stop Obamacare…

“I think we need to follow through on the promises that the Republicans made on the campaign trail. We need to start by using reconciliation to pass legislation repealing Obamacare, and then if President Obama vetoes that we should systematically pass legislation addressing the greatest harms from Obamacare. For example, passing legislation saying that you can’t have your healthcare cancelled, you can’t lose your doctor because of Obamacare. Passing legislation saying you can’t be forced into part-time work because of Obamacare, like so many people have been, especially single moms have been hammered by Obamacare on that. Passing legislation saying, ‘No insurance company bailouts under Obamacare.’ And teeing those up one at a time, and forcing the President to come to a decision: Will he listen to the overwhelming views of the American people, or will he simply try to veto them one after the other, after the other? If he does the latter, that’ll be a real mistake, and I very much hope he doesn’t.”

Basically what I hear in this is that the Republicans have a game plan for attacking Obama politically, and it’s centered around the Affordable Care Act. Specifically:

  1. Try to repeal it wholesale. Knowing that this will never happen, move to step 2:
  2. Initiate a massive campaign to “inform” people of what they’re “losing” because of the ACA (a.k.a. Obamacare).
  3. Systematically misrepresent protections as causing hardship for middle-class swing voters in blocks that the GOP needs to win back (“single moms” for example), so they’ll hopefully swing to the Republicans in 2016.

The theory represented in step 3, as teed up by steps 1 and 2, is that the ACA is unbearably expensive for small businesses and insurance companies, and that therefore small businesses are forcing people into part-time work (so they don’t have to pay for insurance), and that insurance companies are going to go out of business and need a government bailout, with the implication that tax payers will have to foot the bill like they did for the (enormously unpopular) bank bailouts.

I don’t know that much about the impact to small businesses, so I can’t speak to that angle in a very fact-based way.

But I can tell you that the insurance companies are not in trouble; quite the contrary. The ACA, via mandates to make insurance available and the government health insurance marketplace (Healthcare.gov), made the addressable market for health insurance much larger than it was previously, and most experts agree that under the ACA insurers will do better than they did before healthcare reform.

But if Ted Cruz and his colleagues can sell this sham to the American public, and force through limitations on the protections of the ACA, then we’re all in trouble. Especially those of us with pre-existing conditions, who are safe for the moment, but by no means free of risk going forward.

But the real agenda is to discredit Obama and the Democrats, using healthcare reform as a lever to force Obama to wield his veto power. Cruz basically said as much on national television, on election night. And that’s steps 4 and 5:

  4. Make grand overtures about working with the Democrats. A new era of cooperation! This has already started, and at least so far the Democrats are falling for it, based on cross-party meetings and public statements we’ve seen up to this point.
  5. Meanwhile, now that we (the GOP) control both houses, we can force Obama to veto our nonsense legislation, so we can claim that he and the Democrats are stonewalling and breaking promises, while we appear to be reasonable adults.

Here’s Senator Cruz in his own words:

Politics

I’m looking at various Mac options for JavaScript / Node.js IDEs, and decided to try out the Eclipse-based Aptana Studio 3 (now part of Appcelerator). But I ran into a problem when trying to run it—I kept getting an error saying:

The JVM shared library “/Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk” does not contain the JNI_CreateJavaVM symbol

After much searching and reading of Stack Overflow posts, I decided, after reading this, to completely uninstall the JDK and the browser plugin from my machine, and start fresh with a clean install of Java for Yosemite.
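The uninstall itself boils down to deleting the JDK and the Java browser plug-in by hand, roughly like this (paths for the 1.8.0_20 JDK named in the error above; double-check what’s actually installed before deleting anything):

    sudo rm -rf /Library/Java/JavaVirtualMachines/jdk1.8.0_20.jdk
    sudo rm -rf "/Library/Internet Plug-Ins/JavaAppletPlugin.plugin"
    sudo rm -rf /Library/PreferencePanes/JavaControlPanel.prefPane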

Now I don’t know if it’s just me, but oddly the page on the Apple Support site comes back blank. (I’ll assume there’s no conspiracy for the moment, and that this is just a bug.)

So I plugged the URL to the page into Google and loaded up the cached version. There I found the direct download link to the installer here: http://support.apple.com/downloads/DL1572/en_US/JavaForOSX2014-001.dmg

After running that installer, Aptana Studio (and also Eclipse) now launch just fine. Phew.

I’m posting this here to help others who are running into the JNI_CreateJavaVM error, and so I can find it again the next time I need to set up a new machine with Eclipse or Aptana.

P.S. See also the comment thread on Facebook.

Development

Updated: Apparently I’m living in the past, and hadn’t yet educated myself on the implications of the Affordable Care Act. While I knew that the ACA protected people from being denied coverage based on pre-existing conditions, I didn’t know that this was universal across all providers (not just the ones available on HealthCare.gov), and that it also prevents insurers from hiking rates based on them. Of course this is all true only as long as the ACA isn’t repealed or whittled away, so if you care about this issue make sure to vote!

In case you needed it, here’s some more evidence of just how broken Health Insurance really is in the United States.

On Friday, I received a letter in the mail from Regence’s “Condition Manager Program” with enclosed materials that “may be helpful to you.”

Dear Participant:

Enclosed are materials from the Regence Condition Manager Program that may be helpful to you. Please look them over and we encourage you to discuss them with your doctor. You can call us toll free at 1 (800) 267-6729 with any questions.

The Regence Condition Manager Program is a free health management program sponsored by Regence BlueShield.

The program provides information, education and support to help you learn more about your condition. You should still see your doctor. If you have any concerns about your health, you should contact your doctor. In an emergency, call 911 or your local emergency services number.

Thank you for allowing us to be a part of your healthcare management team. We look forward to speaking to you again soon.

Sincerely,

Your Regence Condition Manager Team

Enclosure(s)

And the condition that the letter refers to? Diabetes.

Here’s the thing: I have never had diabetes. I’ve never been diagnosed with diabetes. I’ve never been treated for diabetes. I was on a medication a couple of years ago that could cause an increase in blood sugar levels, but I’m no longer on the medication, and never needed any intervention.

So my health insurance provider decided unilaterally that I have a pre-existing condition that I don’t actually have.

Presumably they did this based on a badly designed and poorly tested matching algorithm running against their customer database. (I refuse to use the word “patient” since I am not one, and even if I were, I’m not theirs.)

I can’t tell you how messed up this is

For those of you who live outside the United States, or if you’re young enough or fortunate enough to have never had to contend with the pre-existing condition rules we have here, here’s the skinny:

If you have been diagnosed with a condition, and your health insurance ever lapses for more than 30 days (90 in some states including Washington), you are now in a situation where no insurance provider will pay for care related to that condition. Ever. Period. End of story. You’re screwed.

Even worse, some insurance companies are even in the habit of denying coverage as a matter of course if you have a pre-existing condition, and hoping that you won’t contest the decision—which can often be prohibitively costly for the patient. You’re screwed again.

This practice should be illegal

There should be no way for an insurer to attribute a medical condition to someone without an explicit diagnosis and provision of covered care for that specific condition.

Just because you have some mediocre DBA on your cost-control team who had the bright idea that they could mine your customer database to find people who need help managing chronic conditions, doesn’t mean that you now get to decide they have some health issue that they don’t have.

Diagnosis is done by doctors, not database analysts. And certainly not computer systems.

Hey Regence: Stick to what you know

Health insurance companies should stick to the business they understand: Forcing sick people to sell their homes and declare bankruptcy in order to pay for unjustifiable profits for their shareholders and investors.

Oh wait. Did I just say that?

We fixed the problem

I did call them, and the customer service representative was “happy to fix this for you.”

When I told him why I was so concerned, and suggested that they really need to look into fixing the bug in their computer system, he said, “Oh, I know—we’ve been getting complaints, and they’re working on a fix.”

Uncategorized

I’ve been working with multiple displays since at least 2004. Back in those days I had a 17″ PowerBook G4 with a PCMCIA card that provided a second external DVI output. It was slow, but it worked, and for the programming I was doing, the limited performance wasn’t an issue.

For the last few months, I’ve been using a Sabrent USB 2.0 adapter to provide a second video output for my mid-2011 MacBook Air, so I had two external displays, and the laptop itself for three total. The problem is that this MacBook Air doesn’t have USB 3.0, and USB 2.0 just doesn’t have enough throughput to drive a large display, so it had a lot of lag—too much to really be acceptable.

I’d been looking around for solutions. The problem is that all of the Thunderbolt docks can only really drive a single external display unless one of your monitors is itself a Thunderbolt display, which can work via Thunderbolt pass-through. But recently there have been more and more USB 3.0-based docks that support Mac OS X.

So I picked up a Kanex KTU10 Thunderbolt to eSATA Plus USB 3.0 adapter, and an Etekcity USB 3.0 dual monitor dock for my late-2011 MacBook Air. This dock has two USB 3.0 ports, 4 USB 2.0 ports, and two display outputs (HDMI and DVI), and the Kanex adapter should in theory provide a USB 3.0 port which the dock needs.

After a little dance with installing the latest DisplayLink driver (2.3 beta), they totally work.

So for about $200 (only a tad more than the cost of one of the single-display Thunderbolt docks), I’m running with three screens again, and the performance is perfectly acceptable for most things I will ever need to do.

Plus I’ve got gigabit Ethernet, more USB 3.0 ports, and an eSATA port which will be great for backing up my machine to an external drive.

Overall I’m very pleased.

Uncategorized