Organizations Learning to Contribute to FOSS “The Right Way”

A couple of weeks ago I wrote that I would be attending the 4th Annual Linux Foundation Collaboration Summit. I wrote about much of my experience there, and at the Open Source Business Conference back in March, over in my personal blog: “Lessons from Open Source Business Conference and the Linux Foundation Collaboration Summit”.

However, I also wanted to make a post here to circle back to some of what I learned from the Collaboration Summit in relation to my March 30th post about contributing, “How and why contributing to FOSS can benefit your organization”. In that post I discussed using community tools, getting involved in the community, and what steps you could take to get there. That advice was based upon several years of my own direct involvement in the FOSS (Free/Open Source Software) community and, now, my experience working for a company which makes FOSS contributions.

The talks at the Collaboration Summit strengthened my resolve and clarified my understanding of the right way for a company to go about contributing to FOSS. At this conference there were multiple talks from major companies and figures within the FOSS business world which drove home the need to work with the community. All of these companies had stories about how they had tried to contribute to FOSS and struggled because they approached it as a walled-off company rather than participating like any other contributor and using the same tools the rest of the community uses.

A keynote which really stood out and succinctly covered all of this was Dan Frye’s talk, “10+ Years of Linux at IBM” (video). The first half of the keynote discusses the progress of Linux within IBM, but then he moves on to contributing itself. Among IBM’s take-aways: they needed to get involved directly with small contributions, do away with closed-door meetings and canned corporate responses, and empower IBM employees to become community members. They needed to learn to collaborate with the community to develop higher quality solutions than they could have in-house, and to start those discussions with the community early in the brainstorming process. Related to collaboration, he also discusses control: a company does not have it within a community and needs to learn to accept that; what a company should strive for instead is influence within a project, which it can use to help guide direction and priorities. He also suggests never creating a new project. Instead he encourages companies to join an existing project that is close to what they need and work with it to take it in a direction that benefits everyone, reaches their goals, and scratches their itches.

What struck me most at the conference regarding the subject of contributing is that these companies are all reaching the same conclusions about the proper way to contribute successfully. In the end, they learned that they must collaborate fully and openly throughout development with the open source communities they work with.

Posted by Elizabeth Krumbach in Conference, FOSS Community, 0 comments

Attending the Linux Foundation Collaboration Summit 2010

On the heels of the 5th Annual Emerging Technologies for the Enterprise Conference (ETE 2010) in Philadelphia that CJ attended last week, I’ll be attending the 4th Annual Linux Foundation Collaboration Summit tomorrow through Friday in San Francisco.

The Linux Foundation Collaboration Summit is an exclusive, invitation-only summit gathering core kernel developers, distribution maintainers, ISVs, end users, system vendors and other community organizations for plenary sessions and workgroup meetings to meet face-to-face to tackle and solve the most pressing issues facing Linux today.

My attendance will be in my capacity as a member of the Ubuntu Community Council as well as my role as a Debian Systems Administrator. As such, my attention will be split at the summit between community and governance interests, like the FOSSBazaar Workgroup and Josh Berkus’ “How to Prevent Community: Making Sure Your Pond Stays Small”, and talks and panels like “Does Open Source Mean Open Cloud?”, where Ubuntu founder Mark Shuttleworth will be a panelist, and the Linux Standard Base Workgroup and Virtualization discussions.

It’s shaping up to be an exciting summit. If you are also attending, be sure to say “Hello”!

Posted by Elizabeth Krumbach in Conference, Debian, FOSS Community, News, Ubuntu, 0 comments

Anticipating the Emerging Technologies for the Enterprise (ETE 2010) Event

I will be attending the 5th Annual Emerging Technologies for the Enterprise Conference (ETE 2010) this Thursday and Friday, April 8-9, 2010. The event is billed for “developers, architects, and IT executives” and attempts to provide a dynamic forum for “emerging technology and Open Source”.

I look forward to seeing Robert C. (Uncle Bob) Martin’s keynote on “Bad Code, Craftsmanship, Engineering, and Certification”, a panel discussion on “Open source is a commercial enterprise”, another panel on “Social Media: Why should I care?”, a second Bob Martin presentation on “Agility and Architecture”, Mary Poppendieck on “Cost Center Disease”, Bonnie Aumann on managing developers, Michael Coté’s keynote on “The Pragmatic Cloud”, Geir Magnusson Jr. on “Project Voldemort”, and Brian McCallister on “Failure Happens” (one of the very few talks on systems administration). Then there’s an interesting panel, “Battle of the Frameworks II”; its predecessor, the ETE 2008 “Web Framework Shootout”, is on-line in two parts: part I (here) and part II (here). Hopefully this year people will respect each other’s frameworks more and have a mature discussion about the tradeoffs that each incurs. I was impressed with Marjan Bace, the moderator, for helping facilitate some reasonable comments amidst too much hyperbole and for bringing the discussion to an effective conclusion. Finally, I think I’ll attend the presentations by Molly Holzschlag on “Demystifying HTML5”, David A. Black on some CS (computer science) precepts, and Audrey Troutt on “Influencing your way to agile”.

It looks like it will be an engaging two-day event. I’m looking forward to meeting many leaders in the local Philly and broader FOSS (Free and Open Source Software) technology community and getting to downtown Philly for some out-of-the-office learning and networking.

While I’m mentioning events, for those who do not know, I moderate the Q&A for the first Wednesday of the month meeting of the Philadelphia Linux User’s Group (PLUG) which will be on “Functional Programming Using Haskell” this month. It is going to be a busy week! If you plan to attend either event, I’ll see you there.

Posted by CJ Fearnley in FOSS Community, News, 2 comments

How and why contributing to FOSS can benefit your organization

At first glance, the ecosystem in the Free and Open Source Software (FOSS) world can seem a bit complicated. There are several ways to get software: you can download it directly from a project’s website, install it with a software management tool that your Linux distribution provides, or even install a Linux distribution that includes everything you need right out of the box! Once you understand this ecosystem, you can find where your contributions would be most useful, and why contributing is beneficial to both your organization and the FOSS community.

So, where does this all begin? FOSS often originates with a project which maintains the source code for the software and provides its own development and support infrastructure.

A Linux distribution is a carefully curated collection of software from these upstream projects which makes up a complete operating system and even includes a lot of application software. This collection of software is tested and prepared so that it runs securely and maintainably as a whole. Debian is built upon this model.

Some Linux distributions use Debian itself as their upstream source. There are a number of Linux distributions based on Debian, including the popular KNOPPIX and Ubuntu distributions. Being “based on Debian” can mean several things, but it primarily means they draw from Debian’s software repository at some point in their release cycle and they use the Advanced Packaging Tool (apt) to manage this software. In these cases Debian is an intermediary between the original FOSS projects and these “child” distributions, which may also pull directly from the original software projects to expand upon what Debian provides and target their particular focus.

So where in this software ecosystem should your organization contribute? Why would your organization choose to contribute to Debian rather than to the original project (“upstream” of Debian) or a project like Ubuntu (“downstream” of Debian)? It really depends on your goals.

If your organization is interested in using FOSS in a way which requires rapid development, new and diverse features released quickly, or specializations that a distribution may not easily support, you will probably want to work directly on the upstream project. Frequently this requires programming experience, but many projects need other kinds of help, such as bug reports and feature requests which they may be able to satisfy in later releases. In these cases, contributing to these projects directly is the best way to meet your needs in using and building upon the software.

If your organization needs to use FOSS in a stable, maintainable and secure way, you should probably work directly with Debian. The primary duty of most developers within the Debian community is working on the “packages” which make up the operating system: creating, updating and patching them, tracking their security, handling bugs, and forwarding details and patches to the upstream projects when applicable. This is what maintains the solid, core operating system that makes up not only Debian, but also the child distributions which depend on it and which could not exist without it. By contributing to Debian you’re also contributing to Ubuntu, Knoppix, and dozens more, improving the tool shelf for everyone (related: Given 250,000 tools on the shelf, how do you manage them?). Contributing to Debian also helps the upstream projects: it takes the burden of providing installation documentation and Debian-specific support off of them and places it on you, and it makes their software more readily available to users through a simple search of the Debian repository.

If the focus of one of Debian’s children better meets needs that your organization cannot achieve through Debian directly, then by all means contribute directly to it. Child distributions already exist which focus on everything from being an Open Source LiveCD toolbox (like KNOPPIX) to being a polished desktop operating system (like Ubuntu). Even within Ubuntu’s family there are targeted projects, like Edubuntu, which focuses on education by packaging and shipping a collection of educational software, and Mythbuntu, a project devoted to turning your computer into a PVR like TiVo, which works with the MythTV project to deliver their software easily on a single platform. Contributing to projects like these also expands the open source ecosystem and may be the preferred method to reach your organization’s goals.

Understanding the way in which these projects and distributions work together and selecting a place in the workflow for your organization to contribute is the first step. But perhaps a more important question is why you’d want to work on a FOSS project instead of doing in-house development. The benefits for the FOSS community are obvious: it gains your expertise and your packages, in Debian and beyond. But are there benefits for your organization?

I believe there are big benefits, which include:

  • Peer review of packages and software now and in the future
  • Processes for asking the community for assistance
  • Bug reporting infrastructure, which may include patches submitted by community members
  • Procedures to become informed about security problems and policy changes
  • Free collaborative development resources provided for FOSS projects (Alioth for Debian, SourceForge, Launchpad, the Apache Foundation, etc.), including development mailing lists and hosted revision control systems like git, bazaar, and svn
  • Opportunity to learn key FOSS development strategies and industry “best practices” via freely available documentation, chat rooms, forums and mailing lists

In short, by putting the time into releasing software, packaging for Debian, or working in child distributions, you are not only doing good for the FOSS community, you also get to take advantage of the plethora of tools, resources and people available to assist in the development process.

Posted by Elizabeth Krumbach in Debian, FOSS Community, Ubuntu, 0 comments

The Nature and Importance of Source Code and Learning Programming with Python

Last year a client asked us for advice on getting started with programming. So I thought I’d share some thoughts about programming, its relationship with FOSS (Free and Open Source Software) management, and why Python is a good language for learning programming, along with some great on-line resources. But first I want to make sure our business-oriented readers understand the nature and importance of source code.

The “source” aka “the code” provides a language in which computer users can create or change software. One does not have to be a programmer to work on the code. In fact, every computer user is, ipso facto, a programmer! Menus, web interfaces, and graphical user interfaces (GUIs) are some of the more facile “languages” for computer programming that everyone, even children, can readily learn and use. Of course, building complex software systems requires a more expressive specification language than a web form, for instance, can provide.

Although all computer software is specified with source code, FOSS systems are unique in that the source code is made available with the software. In contradistinction, software lock-in or vendor lock-in describes the unfortunately all-too-common practice whereby many organizations block access to their source code.

Having access to the source code provides huge operational benefits. For one, the source can be used to understand how the software works: it is a form of software documentation (indeed, it is the most definitive form of software documentation possible!). Also, code can be easily changed to add diagnostics or to test a possible solution to a problem or to modify or add functionality. In addition, the source is a language both for specifying features to the computer and for discussing computing with others. So most mature FOSS languages have vibrant support communities in which one can participate, learn and get help.

The source is a tool: a powerful, multi-purpose, critically important tool.

Since LinuxForce focuses on FOSS, we are able to take full advantage of the availability of the code. We are always working with the source! Since most of our work is systems administration, we usually “program” configuration files. However, we also write systems software and scripts and we support software developers extensively, so we have a persistent, deep, and productive relationship with code.

But what to suggest to someone like our customer who wants to learn programming?

I remembered seeing a blurb in Linux Journal referencing an article they published in May 2000 by Eric Raymond entitled “Why Python” which argues persuasively for the virtues of the programming language Python. I had often felt that Perl’s idiosyncrasies made it difficult to use, so Eric’s critique of Perl and accolades for Python were convincing to me. In addition, I follow FOSS mathematics software and I was aware that Sage is a Python “glue” to more than fifty FOSS math libraries; I had been meaning to look into Python so I could use Sage. Another pull comes from my work at LinuxForce, where we use a lot of Python-based software including mailman, fail2ban, Plone, and several tools used for virtual machine management such as kvm, virtinst and xen-tools. Python has a huge software repository and community, so one is likely to find good libraries to build upon (thus avoiding the extra learning curve of building everything from scratch). Python is also an interpreted language, which makes it easier to debug and use, so the learning process is smoother.

To finish the recommendation, I just needed to find some on-line resources. First, Kirby Urner suggested these two: Wikieducator’s Python Tutorials and "Mathematics for the Digital Age and Programming in Python". Then, I checked out the Massachusetts Institute of Technology’s (MIT) OpenCourseWare which provides extensive course materials for many of their classes (I’ve already watched the full video set for a couple of MIT’s courses including the legendary Walter Lewin’s "Classical Mechanics" and have been very impressed by the quality and content of their materials). After nearly 30 years of introducing students to programming with Scheme, MIT switched to Python in 2008! The materials for their introductory Python-based course "6.00 Introduction to Computer Science and Programming" are very thorough, accessible and helpful. Their free on-line materials include the full video lectures of the class plus assignments, sample test problems, class handouts, and an excellent Readings section with references to "the Python Tutorial" and a very good free on-line textbook "How to Think Like a Computer Scientist: Learning with Python".

In conclusion, if you or anyone you know wants to learn how to program computers, I recommend starting with Python using MIT’s on-line course materials supplemented with the other on-line resources mentioned above (and summarized in the table below). I’ve now watched more than half of the videos from the MIT 6.00 course and I’ve worked through several of their assignments: this is a great course! Even with nearly three decades of programming experience, including a couple of college-level courses in the 1980s, I’m finding the class is more than just good review for me: I’ve learned a few new things (in particular, dynamic programming and the knapsack problem; a small sketch follows below). Python’s clean syntax and elegant design will help as one delves into writing code for the first time. Its extensive libraries and repositories will support the application of one’s newly acquired computing skills to solve problems in the area of one’s special interests, whatever they may be … and that’s the way we learn best: by doing something that we personally care about!
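
For the curious, here is a minimal sketch of the 0/1 knapsack problem solved with dynamic programming, the sort of exercise the 6.00 materials work through. This is my own toy version rather than anything taken from the course, and the item values, weights and capacity are made up purely for illustration:

    # A toy 0/1 knapsack solver using dynamic programming.
    # The item data below is invented purely for illustration.
    def knapsack(values, weights, capacity):
        """Return the best total value achievable without exceeding capacity."""
        # best[c] holds the best value achievable so far with capacity c.
        best = [0] * (capacity + 1)
        for value, weight in zip(values, weights):
            # Walk capacities downward so each item is used at most once.
            for c in range(capacity, weight - 1, -1):
                best[c] = max(best[c], best[c - weight] + value)
        return best[capacity]

    if __name__ == "__main__":
        values = [60, 100, 120]               # hypothetical item values
        weights = [10, 20, 30]                # hypothetical item weights
        print(knapsack(values, weights, 50))  # prints 220

Even a small example like this shows off Python’s readability: the algorithm reads almost like the pseudocode one would sketch on a whiteboard.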

Summary of On-Line Resources for Learning Python

Posted by CJ Fearnley in Programming, 0 comments

Some thoughts on best practices for SMTP blocking of e-mail spam

Blocking e-mail spam at the time of SMTP (Simple Mail Transfer Protocol) transfer has become a best practice. There is no point wasting precious bandwidth and disk space, and spending time browsing a huge spambox, when most of the incoming flow is clearly spam. At LinuxForce our e-mail hygiene service, LinuxForceMail, makes extensive use of SMTP blocking techniques (using free and open source software such as Exim, ClamAV, SpamAssassin and policyd-weight). But we are extremely careful to only block sites and e-mails that are so “spammy” that we are justified in blocking them. That doesn’t prevent false positives, but it keeps them to a minimum.

Recently we investigated an incident where one of our users had their e-mail blocked by another company’s anti-spam system. In investigating the problem, we learned that some vendors support an option to block e-mail when an IP address in one of its Received headers appears on a blacklist (in our case the product was from Barracuda, but other vendors are also guilty). Let me be blunt: this is boneheaded, but the reason is subtle, so I can understand how the mistake might be made.

First, blocking senders whose connecting IP address appears on a blacklist at SMTP time is good practice. But to understand why blocking on Received headers at SMTP time is bad, it is important to understand how e-mail transport works. The sending system opens a TCP/IP connection from a particular IP address. That IP address should be checked against blacklists, and other tests on the envelope can help identify spam. But the message headers, including the Received headers, are not so definite: as we shall see, even a blacklisted IP in those headers may belong to a legitimate message. So blocking such e-mail incurs unnecessary risks.
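
To make the distinction concrete, here is a minimal sketch, in Python and purely for illustration, of the kind of check that is appropriate at SMTP time: looking up the IP address of the connecting peer on a DNS blacklist. The blacklist zone and the addresses shown are examples only; a production mail server would do this inside its MTA or policy daemon rather than in a standalone script:

    # Illustrative DNSBL lookup of the IP address that opened the SMTP
    # connection. The zone below is just an example; use whichever DNSBL
    # your policy actually trusts.
    import socket

    def is_listed(ip, zone="zen.spamhaus.org"):
        """Return True if the IPv4 address appears on the given DNSBL zone."""
        # DNSBLs are queried by reversing the octets and appending the zone,
        # e.g. 192.0.2.1 becomes 1.2.0.192.zen.spamhaus.org
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)   # any A record means the IP is listed
            return True
        except socket.gaierror:           # no record: the IP is not listed
            return False

    if __name__ == "__main__":
        # 127.0.0.2 is the conventional "always listed" DNSBL test address.
        print(is_listed("127.0.0.2"))

The point is that this check runs against the IP address that actually opened the SMTP connection, not against addresses scraped out of the message’s Received headers.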

The problem occurs because when a user of an ISP (Internet Service Provider) sends an e-mail from home, they are typically using a transient, “dynamic” IP address. Indeed, it is possible that their IP address has just changed. Since the new address may have been previously used by someone infected with a virus sending out spam, this “new” IP address may be on the blacklists. So, through no fault of your own, you have a blacklisted IP address (I will suppress my urge to rant for IPv6, when everyone can finally have their own IP address and be responsible for its security).

Now, when you send an e-mail through your ISP’s mail server, it records your (blacklisted) IP in the first Received header. So your (presumably secure) system sending a legitimate message through your ISP’s legitimate, authenticating mail server is blocked by your recipient’s overambitious anti-spam system. Ouch. That is why blocking such an e-mail is just wrong. This kind of blocking creates annoying, unnecessary complications for the users and admins on both sides. Using e-mail filtering to put such e-mails into a spam folder would be a reasonable way to handle the situation: filtering is able to handle false positives, whereas blocking generates unrecoverable errors.

Do not block e-mail based on the Received header!

Posted by CJ Fearnley in Security, Systems Management, Tech Notes, Ubuntu, 0 comments

Given 250,000 tools on the shelf, how do you manage them?

Although I haven’t seen a thoroughly researched study, I figure there must be at least 250,000 FOSS (Free and Open Source Software) tools available to every systems administrator on the planet (230,000 at SourceForge + 15,000 at Launchpad + 12,000 at CodePlex + 5,000 at Google Code and that doesn’t count the Linux kernel or any of the myriad other self-hosted projects). These 250,000+ resources comprise the full “toolbox” that admins can use for building solutions with FOSS; they represent the FOSS equivalent of COTS (Commercial Off-The-Shelf). Of course, if you add open source but non-free or commercial tools, the problem explodes combinatorially.

How can a systems administrator support the largest possible subset of these “on the shelf” resources to best service the next need from a stakeholder (like the boss or a new client)?

First let me emphasize the difficulty of the task with a list of items that systems administrators and systems management firms like LinuxForce are expected to do whenever a stakeholder presents a software need:

  • Find and Evaluate software that can meet the need:
    • Identify several candidate applications that might meet the business requirements for a given project, function, or need
    • Research the options to assess their ability to meet the requirements (in fact we, the systems administrators of the world, are expected to know which tool is “best of breed” just from our past experience. The false assumption is that if it isn’t well known, it must not be any good. The long tail applies to the 250,000+ FOSS tools too!). In our experience such research is essential; unfortunately, there is rarely enough budget to carefully explore the options.
    • Install the tool(s) in a “sandbox” to allow the stakeholder to “try it out”
    • Select a tool to use or look for more options
  • Put the tool into production
    • Read the docs to identify best practices for the software’s configuration
    • Prepare an installation plan that will address (as best as possible) any upgrade glitches (yes, you have to anticipate them now or suffer the consequences later!) so that you’re prepared for when a security advisory is released (or when the stakeholder starts begging for features from a new release)
    • Figure out a support plan to handle the inevitable questions that will arise during operations
    • Integrate these considerations into the process of either installing a package or using the “configure, make, make install” steps that most FOSS tools provide for installation
    • Carefully document the “as built” configuration including all assumptions and anticipated glitches to help yourself or future admins during the maintenance phase
  • On-Going Maintenance
    • Monitor the software
    • Subscribe to any relevant security mailing lists for the software so that you are apprised when a security (or other major) problem is detected
    • Track general trends relating to the software and its alternatives so that you are ready to respond if the project goes dormant or is eclipsed by newer, superior technology.
    • Upgrade routinely

About 15 years ago I noticed that the explosion of ready-to-use FOSS tools, plus the trend toward general purpose tools and away from custom software, was leading to a combinatorial crisis in software maintenance. I saw that it was the systems administrator’s responsibility to address the situation.

It became apparent to me that the solution would require the use of conventions, standards and policy to reduce the complexity of the problem to manageable proportions. I searched for the most “standardized”, convention- and policy-enforcing environment that would also provide the most flexible access to the most FOSS tools. The solution I found is Debian GNU/Linux, the universal operating system (although Ubuntu and other Debian derivatives provide most of these benefits as well).

Debian simplifies the software evaluation process (apt-cache [search|show]). Debian simplifies installation (apt-get install) and security and new-version upgrades (apt-get [upgrade|dist-upgrade]). Debian uses conventions and packages to simplify identifying best practices for administering the software (/usr/share/doc/[package]/, /var/lib/dpkg/info/[package].postinst, and wikis, mailing lists, bug reports, etc.). But the key benefit for managing the combinatorial explosion of FOSS tools is the Debian community’s commitment to configuring each package to automatically support the most common use cases while also providing support for unusual configurations (so you save tons of time in configuring the software).
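
As an aside, the same package metadata that the apt tools expose on the command line can also be queried programmatically. Here is a minimal, illustrative sketch using the python-apt bindings; it assumes the python-apt package is installed, and the keyword and package name it queries are purely examples:

    # Illustrative use of the python-apt bindings to inspect the package
    # cache. Assumes python-apt is installed; the keyword and the package
    # name queried at the end are examples only.
    import apt

    cache = apt.Cache()

    # Rough programmatic analogue of "apt-cache search monitoring".
    for pkg in cache:
        candidate = pkg.candidate
        if candidate and "monitoring" in candidate.summary.lower():
            print("%s - %s" % (pkg.name, candidate.summary))

    # Rough programmatic analogue of "apt-cache show nagios3" (example name).
    nagios = cache["nagios3"]
    print("%s %s installed: %s"
          % (nagios.name, nagios.candidate.version, nagios.is_installed))

The command-line tools are what you would normally reach for; the bindings simply illustrate that the package database itself is an open, scriptable resource.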

In summary, the Debian GNU/Linux system provides the infrastructure needed to manage the combinatorial explosion of off-the-shelf FOSS tools cost effectively. If you have to service a lot of users, customers, or clients with challenging, diverse needs, I think Debian is the most cost-effective way to meet their needs and deliver quality maintenance on an on-going basis, year after year after year.

Posted by CJ Fearnley in Debian, Systems Management, 1 comment

A FOSS Perspective On Richard Schaeffer’s Three Tactics For Computer Security

Federal Computer Week published a great, succinct quote from Richard Schaeffer Jr., the NSA’s (National Security Agency) information assurance director, on three approaches that are effective in protecting systems from security attacks:

“We believe that if one institutes best practices, proper configurations [and] good network monitoring that a system ought to be able to withstand about 80 percent of the commonly known attack mechanisms against systems today,” Schaeffer said in his testimony. “You can actually harden your network environment to raise the bar such that the adversary has to resort to much, much more sophisticated means, thereby raising the risk of detection.”

Taking Schaeffer’s three tactics as our lead, here is a FOSS perspective on these protection mechanisms:

Best practices imply community effort: discussing, sharing and collectively building understanding and techniques for managing systems and their software components. FOSS (Free and Open Source Software) communities develop, discuss and share these best practices in their project support and development forums. Debian’s package management system implements some of these best practices in the operating system itself, thereby allowing users who do not participate in the development and support communities to realize the benefits of best practices without understanding or even knowing that they exist. This is one of the important benefits of policy- and package-based operating systems like Debian and Ubuntu.

Proper configuration is the tactical implementation of best practices. Audit is a critical element here. Debian packages can use their postinst scripts (which are run after a package is installed, upgraded, or re-installed) to audit and sometimes even automatically fix configuration problems. Right now, attentive, diligent systems administrators, i.e., human beings, are required to ensure proper configuration as no vendor — not even Debian — has managed to automate the validation let alone automatically fix bad configurations. I think this is an area where the FOSS community can lead by considering and adopting innovations for ensuring proper configuration of software.
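
To give a flavor of what such automation might look like, here is a minimal, illustrative sketch of a configuration audit in Python. The file path and the directives checked are assumptions chosen only for the example; a real audit would be driven by your own documented best practices:

    # A toy configuration audit: check a few sshd_config directives against
    # expected values. The file path and directives are illustrative
    # assumptions, not a universal recommendation.
    import re
    import sys

    EXPECTED = {
        "PermitRootLogin": "no",
        "PasswordAuthentication": "no",
    }

    def audit_sshd_config(path="/etc/ssh/sshd_config"):
        """Return a list of human-readable problems found in the config file."""
        problems = []
        with open(path) as config:
            text = config.read()
        for directive, wanted in EXPECTED.items():
            # Find the first non-commented occurrence of the directive.
            match = re.search(r"(?m)^\s*%s\s+(\S+)" % directive, text)
            if match is None:
                problems.append("%s is not set; the compiled-in default applies" % directive)
            elif match.group(1).lower() != wanted:
                problems.append("%s is '%s', expected '%s'"
                                % (directive, match.group(1), wanted))
        return problems

    if __name__ == "__main__":
        issues = audit_sshd_config()
        for issue in issues:
            print("WARNING: %s" % issue)
        sys.exit(1 if issues else 0)

Scaled up across hundreds of packages and policies, this is exactly the kind of validation that today still depends on attentive human beings rather than on the operating system itself.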

Good network monitoring invokes the discipline of knowing what services are running and investigating when service interruptions occur. Monitoring can contribute to configuration auditing and can help focus one’s efforts on any best practices that should be considered. That is, monitoring helps by engaging critical thinking and building a tactile awareness of the network: what it does and what is exposed to the activities of a frequently malicious Internet. So, like proper configuration, monitoring requires diligent, attentive systems administrators to maintain security. LinuxForce’s Remote Responder service builds best practices around three essential FOSS tools for good network monitoring: Nagios, Munin, and Logcheck.

Posted by CJ Fearnley in Security, Systems Management, 0 comments

Seven Observations On Software Maintenance And FOSS

The November 2009 issue of Communications of the ACM (CACM) has a very interesting article by Paul Stachour and David Collier-Brown entitled “You Don’t Know Jack About Software Maintenance”. The authors argue energetically for using versioned data structures and “continuous upgrading” to improve the state of the art of software maintenance.

The piece got me thinking about FOSS (Free and Open Source Software) and “continuous upgrading”. Here are seven observations on FOSS software maintenance that occurred to me as I reflected on the CACM article:

  1. FOSS projects “continuously” apply bug fixes and feature enhancements at no additional cost to their users. By applying these improvements “continuously”, the user reaps a steady stream of “interest payments” providing ever-improving security, performance, and functionality.
  2. Since FOSS incurs no licensing or license management costs, upgrading FOSS is not hindered by capital expenses.
  3. Typically support in FOSS projects is focused on the current stable version. Therefore, upgrading to the current stable version is the preferred way to receive the best support from FOSS communities.
  4. One of the key reasons behind Debian’s strong track record of “continuous upgrading” is its way of handling the tricky issues involved with dependent library upgrades (such as libc6, libssl.so.0.9.8, etc.). The chapter on Shared Libraries in the Debian Policy Manual details a proven method to effectively handle library upgrade issues (including its sophisticated handling of versions).
  5. When upgrading is applied routinely and “continuously”, it becomes crucial to support customizations across upgrades which can be one of the biggest obstacles to a smooth upgrade (see my earlier post on customization and upgradeability). One reason for Debian’s effectiveness in this regard is its robust configuration file handling policy.
  6. It is worth noting that the “continuous” implied here is not the one emphasized in dictionaries (which takes its nuances from the mathematical/physics concept of “no interruptions” and the epsilon-delta definition that students of Calculus learn). That concept of “continuous” is impossible in systems administration, which is necessarily discrete, as are all computer operations. The connotation required here is, perhaps, “unending” or “eternal” or some such.
  7. The “right” frequency for “continuous” upgrades is a complex tradeoff between business requirements and upgrade infrastructure maturity. Debian and Ubuntu provide very mature support for “continuous upgrading”. They support the upgrade of production servers through release after release after major release with minimal downtime or risk of a glitch that could affect users. Their current release frequency of about 2 years may be the best we can do given the current state of the art of software maintenance. I hope we can learn to increase the frequency as better engineered upgrade policies are developed.

I prefer the name “eternally regenerative software administration” over “continuous upgrading”. It avoids the philosophical problems with the word “continuous” and emphasizes the active, “ecological” approach needed to envision the engineering of “regenerativity” in software. By that I mean software maintenance should involve building the system so each new version enables installation of the next while facilitating management of any customizations and integration with other software (including libraries and other “helper” applications). Regenerativity is the process of growth and change used by Nature itself. Software maintenance needs to follow similar principles.

Posted by CJ Fearnley in Debian, Eternally Regenerative Software Administration, Ubuntu, 0 comments