Thursday, September 25, 2008

If Solaris dies, will Linux stagnate?


A story on the New York Times site titled "Is Sun Solaris on its deathbed?"
presents a rather one-sided view of Linux vs. Solaris. The casual reader might be
inclined to agree that Solaris is in trouble, but if it is, what does that mean for
Linux?



By and large, most open source projects exist to provide a free alternative to some
commercial product that you would otherwise have to pay for. Linux started out as a free Unix-like operating system at a time when you had to buy Solaris, never mind whether or not it ran on a PC.
If you look at the length and breadth of open source software, it is incredibly hard to
find something that was done first there, or where open source innovation led the commercial
space.



Let's analyse this for a bit. In the commercial sector, you need to come up with new ideas and new features to woo the customer into paying for something new, or to convince the customer that your product is better than the other one.
In the open source space, many of the contributors work on something that they first
saw in a commercial product - for example, a Linux equivalent of Solaris' DTrace.
If Solaris hadn't brought the world DTrace, would Linux have?



If I stop and think about the flow of ideas between Linux and Solaris, it is hard to
see anything new that Linux is doing that OpenSolaris wants to follow.
The best that seems to happen is that someone in Linux comes up with a better way of doing
X. If I expand the set of operating systems to include AIX and HP-UX, there may
indeed be very, very little innovation in Linux. And that should scare Linux.



And that leads me to the title of this blog entry: if Solaris and the other Unix-like operating systems die, who will Linux be left to copy? If Linux is thereafter left to innovate on its own (something it hasn't seemed able to do
in 15 years of existence so far), will that innovation happen? Or will it simply flounder and stagnate because the real innovation it has relied on copying has disappeared?

Thursday, September 11, 2008

SNMP trap sending added to IPFilter


Late last night (or early this morning, or was it yesterday morning?), I finished adding support for sending SNMP traps in response to logging events to ipmon, the daemon that performs logging for IP Filter.



This feature is only present in IPFilter 5.0 and won't be backported to the 4.1 series. The configuration allows matching on the same data to send both v1 and v2 traps, if that's what is desired. The configuration options for enabling the sending of traps look like this:



match { logtag = 10000 }
do { send-trap v1 community public 192.168.1.239 };
#
match { logtag = 10000 }
do { send-trap v2 community read 192.168.1.239 };
#
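To check that traps are actually making it off the firewall, a throwaway listener on the manager host is enough. Below is a minimal sketch (Python, and not part of ipmon or IPFilter in any way) that just binds the standard trap port and hex-dumps whatever turns up; run it on the 192.168.1.239 host from the configuration above.

import socket

# Throwaway trap listener: bind the standard SNMP trap port (UDP/162,
# which normally needs root) and hex-dump whatever arrives. This is
# purely a debugging aid for the configuration above, not part of ipmon.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 162))
print("waiting for traps on UDP/162 ...")

while True:
    data, (src, port) = sock.recvfrom(4096)
    print(f"trap from {src}:{port}, {len(data)} bytes")
    print(" ".join(f"{b:02x}" for b in data))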


Of course, it goes without saying that for this to work you will need to allow SNMP traps to be sent out of the firewall. There are a few issues that still need to be discussed and resolved (the PDU fields in question are sketched after the list):



  • what address (given that a firewall can have many) should be included in the trap message, and how should it be configured - or should it just be left as 0?

  • what should "uptime" be reported as? The time since IPFilter was last enabled, the current time, or something else?

  • there's a request_id in SNMPv2 and some error numbers in both v1 and v2. Does it make sense for these to all be 0, or should they be something else - and if so, what?
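For reference, here is roughly where those values live in the two trap formats, per RFC 1157 (v1) and RFC 3416 (v2c). This is only an illustration of the fields being debated; the zeros reflect the "just leave it as 0" option, not a statement about what ipmon actually emits.

# Rough shape of the trap fields the questions above are about.
# Illustrative only; not ipmon code.

v1_trap_fields = {
    "agent-addr":    "0.0.0.0",  # question 1: which firewall address to report, or just 0?
    "generic-trap":  6,          # enterpriseSpecific
    "specific-trap": 0,
    "time-stamp":    0,          # question 2: time since IPFilter was enabled? wall clock?
}

v2_trap_fields = {
    "request-id":   0,           # question 3: any meaningful value, or just 0?
    "error-status": 0,           # question 3 again: error numbers in a notification
    "error-index":  0,
    # v2 carries uptime as the first varbind (sysUpTime.0) rather than a header field
    "varbinds":     [("1.3.6.1.2.1.1.3.0", 0)],
}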



So the hard work (creating the trap messages!) is done; now there are just some gaps to fill in.

Wednesday, September 3, 2008

A disaster waiting to happen...

To follow up on my earlier post: not only have logins been centralised between blogs.sun.com and other parts of Sun's Internet-facing web pages, but the login names are derived from publicly available data, and the passwords ... I'm not sure if I should mention what our passwords are; suffice to say that if someone managed to hack any of the sun.com web pages used for logins and captured passwords, then a lot of Sun employees might need to change their password. (And that's the rosy side of a successful attack. The dark side is that everyone inside sun.com would need to.)

A couple of decades ago, we would have had those concerns for mail software (and perhaps we still should), but whatever problems there are with email now are dwarfed by those with web pages. Maybe in a couple of decades, using sensitive passwords on external web pages will be considered "ok" or "safe" by many, but for now such designs leave me aghast.