Waterways of the Information Age


The obvious restated at risk

A blog post by P.T. Withington sparked a water-cooler conversation with Bret Simister and Sarah Allen about the notion that access to the Internet is analogous to proximity to a waterway in prior ages.

Once upon a time, locating on a river or coast was crucial for access to a steady flow of information, goods and services. Owners of such sites enjoyed enduring prosperity. The Internet serves this role today. And the faster the connection to this digital waterway, the greater the flow of information, goods and services. These notions seem rather obvious, given the general recognition of the importance of broadband and the common use of geographic metaphors when discussing the Internet.

Yet some deployers of 802.11 WiFi access points essentially want to improve their own land, and convince strangers to pay for it. Most of us would only co-invest in property improvements in exchange for a share of the return. Otherwise, property owners should be content if their efforts increase foot traffic, a common measure of retail property lease value.

The merchants of Newbury Street in Boston, wittingly or unwittingly, endorse a new variation on the first rule of real estate (‘location, location, location’). They cannot relocate alongside a river or coast, but they can offer a substitute with similar virtues. By bringing complimentary WiFi coverage to their street, they provide in reality what rivers today only imply — ready access for all to information, goods and services. This simple act accomplishes for them what it has done for all trade centers — it makes their vicinity a better place to do business and to live life. Will they be surprised if greater prosperity follows?


Knowledge, search engines and blogs


Comments regarding the impact of blogs on the propagation of knowledge

Some Data Points

Over the last couple of months, I have twice found more reliable information from blogs than from ‘official’ sites on the Web:

Case 1: a problem with Python XML parsing on Mac OS X 10.2.

Case 2: a problem with a ‘hijacked’ Microsoft IE Web browser

Case 1 is rather arcane, involving an issue only a very small percentage of OS X users would be concerned with. But since the apple.com site does not permit users to post entries, there is in fact no way for Apple customers to share information via the Apple site. An email to Apple providing this information would probably languish in a low-priority queue, because the issue does not affect a significant number of people. Instead, this knowledge must reach the world via various blogs indexed by search engines.

Case 2 constitutes a growing problem, but is addressed in an overly complex manner on the Microsoft support site, with no mention of easier solutions available elsewhere. A Web search on ‘IE Internet Options missing tabs’ yields a confusing laundry list of sites with no clear solution. However, a Web search on ‘blog IE Internet Options missing tabs’ yields a complete personal account of someone’s experience with this problem plus her recommendations of resources to help fix the issue, including invaluable referrals to www.spywareinfo.com and a program called HijackThis, which in combination fix the compromised Web browser efficiently.

What’s Happening?

Numerous colleagues report anecdotally that blogs often provide more reliable information than official sources. On reflection, this makes sense. In both of the above cases, the pre-Internet method of propagating information involves passing first-hand knowledge through intermediate filters. One of the occasional side-effects is that the explanation from an expert is written or re-written by others with less domain knowledge. This pre-Internet ‘work-flow’ is obviously streamlined by Web logs. Information can now come straight from the source.

So far, I sense only the benefit — knowledge is propagating faster, without the delays and occasional dilution introduced by formal publishing processes. I suppose the opposite is also possible — the propagation of lies, without the protection of editorial review. The unfolding of the blog phenomenon may thus serve as another portrait of human nature. On balance, I expect the portrait to be flattering.

Encounter with Web Browser ‘Hijacking’


It’s a relatively benign nuisance in comparison to other types of ‘hijackings’, so the label is perhaps a bit hyperbolic, but there is a spreading phenomenon known as a Web browser hijacking. By visiting an unscrupulous URL or clicking a hyperlink in a spam email, you can actually lose control of your web browser. Most of these attacks victimize users of Microsoft’s Internet Explorer. The attacking code exploits security holes in the browser to reset your preferred home page, add links to your Favorites list and, most dramatically, remove tabs from your IE Internet Options panel. With that last step, the attacker prevents you from resetting your browser options — a very effective technique to force a few extra page views to their site, until you reinstall your system software in desperation or discover a simpler solution.

So this actually happened to me yesterday — the hijacking attack changed my default home page to an ad-supported portal page, and removed the General tab under my IE browser’s Internet Options, thus preventing me from resetting my homepage back to the original URL (‘blank’, in my case).

After a few hours of fumbling around the Web, trying to figure out what happened to my computer and how to describe it for a search query, I converged on the following explanations:




Oddly, the best solution came not from Microsoft’s support site, but from SpywareInfo, and their amazing online forum, combined with a shareware program provocatively named HijackThis. Volunteers on the SpywareInfo forum have assisted thousands of individuals across the Internet to combat a dizzying array of Web-related programmatic attacks which fall outside the realm of ‘viruses’ per se.

Following the recommendations of SpywareInfo, I repaired my IE web browser as follows:

[1] Downloaded and ran Spybot Search & Destroy

[2] Downloaded and ran HijackThis

Following the HijackThis instructions, I saved the resulting hijackthis.log report and posted it to SpywareInfo’s online support forums for analysis by their forum monitors. These individuals inspect the log reports to identify improper Windows OS registry settings introduced by the hijacking attack. You can see the daily action at:


A forum monitor named Tony Klein responded within 30 minutes identifying the two fixes I needed to apply via the HijackThis application. After completing the fixes and rebooting, my IE browser was restored to health. I felt like I was just saved by a ‘firefighter of the Internet’.
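For the curious, the mechanics behind the missing tab are simple: the hijacker plants policy values in the Windows registry that IE honors as administrative restrictions. A sketch of the repair, assuming the common case where the attack set the General tab restriction (the key and value names below reflect the usual IE policy location, not something verified against every variant of the attack — and always back up the registry before editing):

```
Windows Registry Editor Version 5.00

; Clears the policy restriction that hides the General tab
; (0 = tab visible). A real attack may have set additional
; values under this key as well, so inspect the whole key.
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Internet Explorer\Control Panel]
"GeneralTab"=dword:00000000
```

Tools like HijackThis automate exactly this kind of inspection, which is why the forum monitors work from its log reports.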

Life Lessons:

Though I hope never to encounter this nuisance again, the experience has been enlightening.

1. The best information and support on the browser hijacking problem came from non-professional sources, lacking financial compensation perhaps, but not lacking integrity and commitment.

2. In the same sense that ‘bio-diversity’ makes ecosystems more resilient, ‘IO Diversity’ may help protect our global information systems. The ‘hijacking’ attacks referred to above target vulnerabilities in the code base of Internet Explorer. Other Web browsers with different code bases are immune to these specific attacks. Perhaps variety in the code bases of browsers and all underlying software systems may be beneficial for reasons beyond maintaining economic competition.

Grappling with Python XML parsing on OS X 10.2


Run-time errors encountered trying Google Hack #57: Python and the GoogleAPI

After struggling with various Python programming examples related to XML parsing, I discovered that the “Python No Parsers Found” run-time error is common on multiple platforms, and reported in numerous places on the Web. A thread describing the issue and suggesting a solution can be found at:


Jean-Yves Stervinou, the solution provider, also reported the issue at:


But the most complete thread on the issue and the solution I finally used was found at:


Like me, these individuals were using the default Python installation on Mac OS X 10.2 (Jaguar), and all encountered run-time errors when attempting to parse XML in Python. The solution that worked for me was installing PyXML 0.8.1, as suggested on the last blog listed above. I was then able to successfully invoke Google Hack #57 from the UNIX command line on my server. Another hack that accessed an XML stock quote via a Python CGI also worked successfully.
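A quick way to check whether a given Python installation suffers from the missing-parser problem is to parse a trivial document; this generic sketch uses the standard library (the stock-quote XML below is made up for illustration, not the actual feed from the hack):

```python
# Verify that Python's XML machinery works by parsing a small document.
# On a broken installation (the "no parsers found" problem), the import
# or the parse raises an exception instead of returning data.
from xml.dom.minidom import parseString

doc = parseString("<quote symbol='XYZ'><price>14.30</price></quote>")
price = doc.getElementsByTagName("price")[0].firstChild.data
print(price)  # a healthy installation prints 14.30
```

If this fails where the same lines succeed elsewhere, the fix is an updated XML package (PyXML, in my case) rather than anything wrong with your own code.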

I will update my own post at hacks.oreilly.com re: Google Hacks #57 (a Python-based example) to assist others encountering this problem:


Ahhh…the joys and challenges of hacking code. I must admit this whole experience confirmed the amazing value of the Web, blogs and search engines in sharing and advancing knowledge. None of the above material could be found at the Apple Computer Web site or online support knowledge base, at least around the date of this posting.

A Layman’s Attempt to Digest “XML Web Services”


I rediscovered this note written during the spring of 2002, and decided to add it to my blog. The note was originally drafted to help me internalize readings on emerging Web Services. Maybe others will find this a helpful read.

What’s a web service?

There’s an incredible amount of hype on this topic at present (and has been since 2001). Numerous books were published in early 2002, including a series of O’Reilly titles, all dedicated to Web Services on the Sun J2EE platform (Java 2 Enterprise Edition) and the Microsoft .NET Framework. In the simplest terms, the emerging Web Services standards promise to simplify the sharing of data between software information systems over the Internet, using open rather than proprietary standards.

Integrated Software Experiences

From a lay perspective, we know that it’s very useful for distributed information systems, and even desktop applications, to be able to exchange data easily. It’s great from a user’s perspective to work with a well-integrated software system. Imagine if you could automatically extract financial information from all of your financial service providers (credit cards, banks, brokers) into your favorite financial planning software. This is not yet reality, because the scope of the human and software-based information systems involved is so large. Strong integration has occurred on the personal computer desktop, since that’s a more tractable domain, where the disparate systems involved all run on a single computer and are most often used and administered by one person. The Microsoft Office suite is a good example: you can easily share data between a word processor, a spreadsheet, a presentation tool and even a web browser.

Two Paths to Integration

The benefits of an integrated software system can only be realized when the back-end systems are able to communicate with each other. This back-end communication must be standardized in some way to enable large numbers of software developers to write applications that can interoperate.

In the past, systems were integrated using proprietary, vendor-specific APIs. This meant vendor “lock-in” for customers. For example, if you wanted a spreadsheet that interoperated well with a word-processor, it helped to buy both products from the same vendor. Developing integrated software is much easier when all the engineers work in close contact under unified leadership.

In the world of networked information systems, as opposed to isolated desktop applications, it also eased interoperability to buy solutions from a single vendor. This mode of industry practice helped software integration in the short-run, but seemed to stifle software innovation in the long run.

The information systems industry has since evolved toward open standards. These industry-wide standards enable companies to compete in the creation of software products without requiring monopoly power to provide interoperability. A pervasive global information system has evolved rapidly since the rise of the commercial Internet. To enable integrated software experiences on this scale, vendors do not entertain the notion of one solution provider winning over all others. Instead, they hope to agree on methods to enable these systems to communicate with each other. Hence we have the current landscape surrounding the standards for communication between distributed systems over the Internet, the standards intended to enable a future filled with “web services”.
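To make the abstraction concrete, a SOAP-style request is just a well-formed XML document that any party can generate and parse with open tooling. The sketch below builds and re-parses a minimal envelope; the GetQuote operation and the example.com namespace are invented for illustration:

```python
# Illustrate that "web services" boil down to exchanging structured,
# well-formed XML documents over the network, rather than calling a
# vendor-specific API. The operation and namespace are hypothetical.
import xml.etree.ElementTree as ET

envelope = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/stocks">
      <symbol>XYZ</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

# Any receiver, on any platform, can extract the payload with
# standards-based tooling -- no shared vendor code required.
root = ET.fromstring(envelope)
symbol = root.find(".//{http://example.com/stocks}symbol").text
print(symbol)  # XYZ
```

The point is not the particular tags but the fact that sender and receiver need only agree on the open formats, not on a common vendor.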

Impact of a Consulting Assignment with Laszlo Systems


Notes on exposure to the ‘state-of-the-art’ of information technology during the last few months

I am feeling more connected with the ‘state-of-the-art’ on the Internet, partly as a result of the last half-year of consulting with Laszlo Systems on the product launch of the Laszlo Presentation Server. Besides developing the original White Paper, various pieces of marketing collateral, and a product review guide, I also orchestrated the unveiling of the platform through a series of technology conferences and trade shows, including Demo 2003, the O’Reilly 2003 Emerging Technology Conference and JavaOne 2003. The O’Reilly conference was particularly eye-opening, with a parade of IT industry luminaries speaking and in attendance. During this consulting stint, the LPS received a 2003 Webby Award nomination for technical achievement, alongside fellow nominees Google, Linux, Apache and phpBB. This nomination stunned the Laszlo engineering team, given the cult status of the other technologies recognized by the Webby committee. But the most rewarding part of the experience for me was the exposure to recent trends in programming languages, open standards and Web application development. I have been surprised by the variety of uses for XML, from declarative application programming languages to server configuration files. In old age, I will look back feeling privileged to have participated in the early commercial development of the Internet.

Project Kontak: term project for my IT sabbatical


The end result of my Post-ATHM information technology sabbatical was my first complete, publicly deployed web application: Kontak, a personal contact application that permits a visitor on my Homepage to pop open a small browser window and send email messages to my Web-email account or my cell phone, without knowing my actual account address information. In this manner, I shield those accounts from spammers, because I never give them out! Kontak also logs all messaging activity, so I can see who sent me what, from where and when.

Kontak is based on the following set of technologies:

[1] The HTML forms, server-side ‘business logic’ and control of an SMTP mail server are all implemented in the PHP scripting language, running under the Apache Web Server on a Mac OS X 10.2 server.

[2] The message activity log is maintained in a MySQL database, running under Red Hat Linux 6.2 on a second server (in my dining room data center ;-). PHP provides simple APIs to access MySQL databases.

[3] The Kontak app, or really the Apache Web Server behind it, is exposed on the open Web by mapping the dynamic IP address provided by my dial-up ISP to a static hostname using the service at dyndns.org.

[4] As a security measure, the OS X server is shielded from the Internet behind a NAT router (Network Address Translation), which in turn provides the Internet connection via an integrated dial-up modem. The two servers powering Kontak are assigned private IP addresses within my development Intranet. The Web server’s private IP and port number (80) are then mapped by the router to my public dynamic IP address, so that requests from the open Web reach the server.
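The core relay-and-log flow is compact. Kontak itself is written in PHP with MySQL, but the same logic can be sketched in a few self-contained lines of Python with SQLite; the addresses, table name and function names here are illustrative, not the actual Kontak code:

```python
# Sketch of the Kontak flow: compose a message addressed to the hidden
# account (which the visitor never sees), and log who sent what, when.
# The real app uses PHP + MySQL; this demo uses Python's stdlib.
import sqlite3
from email.message import EmailMessage

REAL_ADDRESS = "hidden@example.com"  # never revealed to visitors

def build_relay(visitor_name, visitor_addr, body):
    """Compose the message that the SMTP server would deliver."""
    msg = EmailMessage()
    msg["To"] = REAL_ADDRESS
    msg["From"] = "kontak@example.com"
    msg["Subject"] = f"Kontak message from {visitor_name}"
    msg.set_content(f"Reply to: {visitor_addr}\n\n{body}")
    return msg

def log_message(db, visitor_name, visitor_addr):
    """Record the messaging activity (who, from where, and when)."""
    db.execute("""CREATE TABLE IF NOT EXISTS log
                  (who TEXT, addr TEXT, ts TEXT DEFAULT CURRENT_TIMESTAMP)""")
    db.execute("INSERT INTO log (who, addr) VALUES (?, ?)",
               (visitor_name, visitor_addr))

db = sqlite3.connect(":memory:")
msg = build_relay("Alice", "alice@example.org", "Hello!")
log_message(db, "Alice", "alice@example.org")
print(msg["To"])  # the visitor never learns this address
```

The design point is the indirection: the form posts to the server, and only server-side code knows the real destination address, so it never appears in any page a spammer could harvest.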

Kontak represents a grand tour of contemporary Web application technologies, and provides a hands-on understanding of how modern application developers make things happen on the Internet. All in all, a very rewarding journey for a software technology product-marketing person!

Monkeying with my Personal Homepage


I learned something new from the blogger.com site today… the FAQ notes that our personal blogs can incorporate 3rd-party search engines. I had heard good things about AtomZ’s free site-search trial service, checked it out, and now have it integrated into my personal home page (linked off this blog). One virtue of this idle period is that I’ve been able to immerse myself in the boundless capabilities offered by various web application developers. There is so much functionality out there that very few of us really know anything about… it seems a lifetime could be spent in cyberspace just discovering all of it and trying to figure out what to do with it.

Post-Internet-Bubble Rant


As it turns out, I, and many cohorts far brighter than I, spent much of the last decade on economic activities that the Invisible Hand has deemed worthless (or at least not very worthy :-). So while the world continued to grapple with hunger, disease and want of material goods like TVs, kitchen appliances and ever-larger SUVs… I did little to help in those great causes.

So what next? In my case, a period of immersion in contemporary Information Technology, to catch up with the developments that led to the Web boom in the first place.