Akademy Redux: Release Team Members Propose New Development Process

At Akademy 2008, KDE Release Team members Sebastian Kügler and Dirk Müller discussed the future of KDE's development process. Describing the challenges KDE faces and proposing some solutions, they spawned a lot of discussion. Read on for a summary of what has been said and done around this topic at Akademy.

Our current development model has served us for over 10 years now. We did a transition to Subversion some years ago, and we now use CMake, but basically we still work like we did a long time ago: only some tools have changed slightly. But times are changing. Just have a look at the numbers:

  • KDE 0.0 to 3.5 took 420,000 new revisions in 8 years
  • KDE 3.5 to 4.0 took 300,000 new revisions in 2 years

Also, this year's Akademy was the largest KDE event ever held, with more than 350 visitors from every continent of the world.

This enormous growth creates issues for the wider community, for developers, and for the release team. Patches have to be reviewed and their status has to be tracked - tasks that become progressively harder as the size of the project balloons the way it currently does. The centralized development system in Subversion's trunk doesn't support team-based development very well, and our 6-month release cycle - while theoretically allowing 4 months of development and 2 months of stabilizing - often leaves barely 50% of the available time suitable for new feature development.

KDE's current revision control system doesn't allow for offline commits, making life harder for people without a stable internet connection. Furthermore we're still looking for more contributors, so lowering the barrier for entry is another important concern.

Changing requirements

We will have to allow for more diversity and we must be able to accommodate individual workflows. Not everyone is happy with a 6-month schedule, and not everyone prefers Subversion. Companies have their own schedules and obligations, and what is stable for one user or developer is unsuitable for another. Meanwhile, new development tools have surfaced, such as the much-praised distributed revision control tool Git. Together with new tools for collaborating, new development models are emerging. KDE is in the process of adopting a much wider range of hardware devices, operating systems (OpenSolaris, Windows, Mac OS) and mobile platforms such as Maemo. And we have an increased need for flexible and efficient collaboration with third parties and other Free Software projects.
Sebastian and Dirk believe it is time for a new way of working. In their view, KDE's development process should be agile and distributed, and trunk freezes should be avoided when possible. While there are still a lot of open questions in their proposal, KDE needs to get ready for the future and further growth.

Agile Development

The most fundamental idea behind Agile Development is "power to the people". Policies are there to avoid chaos and to guide people, not to force them.

What is Agile Development supposed to offer us?

  • Shorter time-to-market, in other words, less time between the development of a feature and the time users can actually use it
  • More cooperation and shortened feedback cycles between users and developers
  • Faster and more efficient development by eliminating some current limitations in team-based development processes.
  • Simplicity. Not only good in its own right, but it also makes it easier to understand and thus contribute to KDE development.

How can we do this?

To achieve this, we have to reflect upon our experiences as developers and share our thoughts with each other. We should be consciously aware of our process. Sebastian and Dirk talked about a specific lesson they have learned: plans rarely work out. As a Free Software project, we don't have fixed resources, and even if we did, the world changes too fast to allow us to reliably predict and plan anything. We have to let go. We should set up a process aimed at adaptation and flexibility, a process optimized for unplanned change.

This needs to be done in one area in particular: our release cycle. Currently, our release cycle is limiting, up to the point of almost strangling our development cycle. So Dirk and Sebastian propose a solution:

"Always Summer in Trunk"

Our current release process, depicted in the graphic below, can be described as using technical limitations to fix what is essentially a social issue: getting people into "release mode". For 4 months, we develop features, then enter a 2-month freeze period in which increasingly strict rules apply to what can be committed to trunk. This essentially forces developers to work on stabilizing trunk before a release. Furthermore, developers need to keep track of trunk's current status, which changes depending on where in the release cycle KDE currently is, without taking into account the diverse time schedules of both upstream and downstream entities. At the same time, many developers complain that Subversion makes it hard to maintain "work branches" (branches of the code that are used to develop and stabilize new features or larger changes in the code), making subsequent code merges a time-consuming and error-prone process.

The proposal would essentially remove these limitations, relying instead on discipline in the community to get everyone on the same page and focused on stability. To facilitate this change, we need the users to help us: a testing team establishing a feedback cycle to the developers about quality and bugs. Using a more distributed development model would allow for more flexibility in working in branches, until they are stabilized enough to be merged back into trunk. Trunk, therefore, has to become more stable and predictable, to allow for branching at essentially any point in time. A set of rules and a common understanding of the new role of trunk are needed. Also, the switch to a distributed version control system (which is pretty much mandatory in this development model) is not as trivial as our previous change in revision control systems, from CVS to Subversion: good documentation, best practice guides, and the right infrastructure are needed. The need for better support for tools such as Git in KDE's development process does not only come from the ideas for a new development model, though. Developers are already moving towards these tools, and ignoring such a trend would mean that KDE's development process becomes fragmented and ultimately harder to control.
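The stable-trunk-plus-work-branches flow described here can be sketched with Git commands. This is a minimal illustration under invented names (repository, branch, and file names are all made up), not KDE's actual setup:

```shell
set -e
repo=$(mktemp -d)                         # throwaway repo for the sketch
cd "$repo"
git init -q
git config user.name "dev"; git config user.email "dev@example.com"

echo "stable code" > app.c                # trunk is always releasable
git add app.c && git commit -qm "stable trunk"
git branch -M trunk

git checkout -qb feature/new-dialog       # develop in a work branch
echo "new dialog" >> app.c
git commit -qam "WIP: new dialog"
echo "new dialog, polished" > app.c
git commit -qam "stabilized new dialog"

git checkout -q trunk                     # merge only once it is stable
git merge -q --no-ff -m "merge stabilized feature" feature/new-dialog
```

Until the merge, the work-in-progress commits never touch trunk, so trunk can be branched for a release at any time.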

In Sebastian and Dirk's vision, KDE's current system of alpha, beta and release candidate releases will be replaced by a system which has three milestones:

The Publish Milestone

This is the moment we ask all developers to publish the branches they want to get merged into trunk before the release. Of course, it is important to have a good overview of the different branches at all times to prevent people from duplicating work and to allow testers to help stabilize things. But the "Publish Milestone" is the moment to have a final look at what will be merged, solve issues, give feedback and finally decide what will go in and what will not. The publish milestone is essentially the cut-off date for new features that are planned for the next release.

The Branch Milestone

This is the moment we branch from trunk, creating a tree which will be stabilized over the next couple of months until it is ready for release. Developers will be responsible for their own code, just as they used to be, but they might continue using trunk for development of new features. To accommodate those developers who do not want to switch between branches, we could have a tree which replicates the classic development model. Developers are encouraged and expected to help test and stabilize the next-release branch.

The Tested Milestone

The "tested" milestone represents the final cut-off: features that do not meet the criteria at this point will be excluded from the release. The resulting codebase will be released as KDE 4.x.0 and subsequently updated with 4.x.1, 4.x.2, etc. It might be a good idea to appoint someone as the maintainer for this release, ensuring timely regular bugfix releases and coordinating backports of fixes that go into trunk.


A prerequisite for this new development model would be a proper distributed source code management system. Git has already stolen the hearts of many KDE developers, but there are other options out there which should be seriously assessed. Furthermore we need tools to support easy working with the branches and infrastructure for publishing them. Getting fellow developers to review code has always been a challenge, and we should make this as easy as possible. We also need to make it easy for testers to contribute, so having regularly updated packages for specific branches would be an additional bonus. Trunk always needs to be stable and compilable, so it might be a good idea to use some automated testing framework.

Under discussion are ideas like having some kind of "KDE-next" tree containing the branches which will be merged with trunk soon; or maybe having such trees for each sub-project in KDE. Another question is which criteria branches have to meet to get merged into the "new" trunk. Especially in kdelibs, we want to ensure the code is stable already, to keep trunk usable. Criteria for merges into the various modules have to be made clear. What happens if bad code ends up in trunk? We need clear rules of engagement here. How can we make it as easy as possible to merge and unmerge (in case code that has been merged is not ready in time for a release)?

Having a page on TechBase advertising the different branches (including a short explanation of their purpose and information about who's responsible for the work) will go a long way in ensuring discoverability of the now-distributed source trees. A solution also needs to be found for the workload around managing trunk. Especially if we have tight, time-based releases, a whole team of release managers needs to take responsibility. KDE's current release team has come a long way in finding module coordinators for various parts shipped with KDE, but currently not every module has a maintainer.

While there are still a lot of questions open, we'd like to work them out in collaboration with the KDE community. KDE's future revision control system is discussed on the scm-interest mailing list. Discussion on a higher level can be held on the Release Team's mailing list, and naturally KDE's main developer forum, kde-core-devel.

With the release of KDE 4.0, the KDE community has entered the future technologically. Though timescales for the above changes have not yet been decided upon, Dirk and Sebastian took the talk as an opportunity to start discussing and refining these ideas: it's time that KDE's primary processes are made more future-proof and ready for the new phase of growth we have entered.


by markc (not verified)

Point taken but if the release cycles are short enough then the parts of the system that take more work can skip a release and not be stuck with becoming too stale by the time that new functionality does get released to the public.

Say a developer, or team, is working on a large subsection using Git; they can keep working amongst themselves during any bugs-only freeze and sync up with trunk after only a dozen or so days of isolated work. They can then hook up with the next release cycle instead of holding back the current one, if the release cycle is short enough.

by Sebastian Kuegler (not verified)

You could translate the "always summer in trunk" idea to "the last stable version of my code is in trunk".

I think for developers, it's a great asset to know that the features in trunk can actually be expected to not be work in progress anymore. Less uncertainty when you run into the unavoidable question "is this my code that's doing something wrong, or has this just not been implemented yet". If you got it from trunk, it should work. Otherwise, get the developer in question to fix it (or fix it yourself).

by Paul Gideon Dann (not verified)

Sebastian really knows what he's talking about :) The whole purpose of the master (or trunk) being always stable is that you can tag and release master at pretty much any arbitrary time without concern for instability or bugs.

With the suggested model, a release can be made whenever the release team wants, without affecting the developers at all. Developers continue to produce bug-fixes and features, which are only merged to master when they're stable enough for release. This model really works, and it's wonderfully elegant.

by David Johnson (not verified)

I'm still not sold on the idea that we need a new revision control tool every two years. Yeah, some people don't like subversion. So what? Some people don't like git! You will never find one tool everyone likes, but I guarantee you that switching to a new tool every two years isn't going to please everyone either. Let's just skip git and jump directly to 2010's tool!

I understand the desire for a distributed development process, but I'm not sure changing the way everyone works is the only way to get it.

by markc (not verified)

The current DVCS options did not exist 2 years ago when the (right at the time) choice was made to adopt Subversion. Now there are 3 or more good contenders for fully distributed version control systems available, and more importantly, the concept of how to use a distributed system is now widespread whereas it wasn't "back then".

There is a reasonable upgrade path towards adopting Git, in particular by using git-svn as a bridge for a year until it's decided to, hopefully, adopt Git as the central repo as well. There is nothing stopping developers using Git right now and taking advantage of fast branching for themselves, pushing and pulling between other developers also using Git, then eventually merging with the central master Subversion repo. If, after ~12 months of developers using this hybrid approach, it was decided that Git should also replace the central master repo, then you can be sure it would be the right decision... and not just a fad because the kernel guys are doing it.
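The push/pull-between-developers part of this hybrid approach can be sketched with plain Git. A real git-svn bridge needs a Subversion server, so this sketch uses a bare Git repository to stand in for the central repo; all directory and branch names are invented:

```shell
set -e
work=$(mktemp -d)
git init -q --bare "$work/central.git"         # stands in for the central repo

git clone -q "$work/central.git" "$work/alice"
cd "$work/alice"
git config user.name alice; git config user.email alice@example.com
git checkout -qb work                          # fixed branch name for the sketch
echo "feature" > feature.txt
git add feature.txt
git commit -qm "local commit, no server round-trip needed"
git push -q origin work                        # publish to the shared repo

# point the shared repo's default branch at what alice pushed
git --git-dir="$work/central.git" symbolic-ref HEAD refs/heads/work

git clone -q "$work/central.git" "$work/bob"   # a second developer pulls it all
```

The commit happens entirely offline; only the push needs the central repository, which is the disconnected-operation property the article asks for.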

by Paul (not verified)

You got it backwards.

The objective is not to change the way everyone works to get git. The objective is to adopt git so that they can change the way everyone works.

As TFA says, they are considering other DVCS that support the "always summer on trunk" development model. Git just happens to be the one everyone is familiar with because of the integration with subversion.

by Michael "+1" Howell (not verified)

I can agree with that. It's also a matter of the fact that I particularly LIKE the model we use. I know it's got its downs, but it's got its ups too.

The advantage it has is also its disadvantage: it's centralized. That means it's easier to keep all developers on the same page. Because of this, it's also easier to test the whole code-base. If everything is going on in different branches, it's extremely difficult to get the testing that you get when all the development happens in one place.

There is also the pain of merging. Sometimes, a development branch may diverge from the original code-base too much to merge it with the other branches. Depending on the extent of the divergence, you may end up picking a branch, and basically rewriting all the changes implemented in the other branches on it. Naturally, such merging is going to cause serious problems with the large number of people working on KDE (no, don't cite the Linux kernel: it follows a model with only a few select people having commit rights who can easily collaborate to ensure merging works easily).

by Paul Gideon Dann (not verified)

Hmm. I think you're still thinking too centralised. With a decentralised model, there would be a hierarchy of maintainers rather than a central codebase. For instance, if you want to work on KDE-Games, there would be a maintainer for each (maintained) game, and an overall KDE-Games maintainer. If you create a patch for a game, you send it to that game's maintainer, who merges it. The KDE-Games maintainer will pull changes from each game maintainer, and a global KDE maintainer will pull changes from him. The patch is propagated up the chain in this way. The global KDE maintainer will probably also be responsible for release.

You'll find that merge conflicts are pretty rare in this model. I think this is largely because the emphasis on frequent incremental merging (both up and down the hierarchy) prevents any branch from becoming too separated from its parent. In SVN, the emphasis is on large branches that are only merged right at the end, and that is certainly tough.
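The frequent incremental merging Paul describes can be sketched as follows: a hypothetical feature branch periodically merges in its parent, so the two never drift far apart (repo, branch, and file names invented):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name dev; git config user.email dev@example.com
echo "base" > lib.c
git add lib.c && git commit -qm "base"
git branch -M trunk

git checkout -qb feature                  # long-lived work branch
echo "feature work" > feature.c
git add feature.c && git commit -qm "feature work"

git checkout -q trunk                     # trunk moves on in the meantime
echo "trunk fix" >> lib.c
git commit -qam "trunk fix"

git checkout -q feature                   # frequent small merges keep the
git merge -q -m "sync with trunk" trunk   # branch close to its parent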

by Ian Monroe (not verified)

Is this actually the proposal?

by Thiago Macieira (not verified)


The proposal for KDE is to still maintain a central repository.

by Carlo (not verified)

This is the way I see such a system working. Having maintainers of different modules _actively_ maintain them - and this wasn't thoroughly the case with KDE 3 - would do KDE good, because if such a maintainer is slacking or doing bad work, it'll be noticed rather quickly, since developers will complain when stuff doesn't get merged, and merging or code quality issues fall directly back on the maintainer of a module branch. It's more hierarchical in the sense that there's more responsibility on different levels, and should help to improve overall code quality.

by Paul Gideon Dann (not verified)

Hmm; I guess this wasn't touched on specifically in the article. It's the model the Linux kernel uses, and generally seems to be the most logical and elegant model for large projects. I'd be really disappointed if KDE switched to DVCS only to keep the SVN-Wiki model of repository management.

by sebas (not verified)

The changes you describe would need to be done in a branch anyway. With SVN, you'll quickly tear your hair out when you need to merge multiple times to keep up with trunk.

by Aaron Seigo (not verified)

i don't think we're actually talking about stepping away completely from a centralized model here, but making development in branches or decentralized prior to merging easier.

we 'fake' this fairly well with plasma right now with feature branches and playground, but it's rather painful (to say the least).

> it's extremely difficult to get the testing that you get when all the
> development happens in one place

for plasma, where we do use branches to keep some of the more invasive or experimental changes away from trunk for safety purposes, it's actually the opposite. those branches get nearly no testing except by the involved developers until merging due to the cost and annoyance of trying other branches, and more people stick with the last stable branch versus trunk because of all the raw devel happening there.

i'm hoping we can have a small number of hot branches folding code at stabilization points back to a warm trunk on a regular basis. so everyone can follow trunk and be at most a couple weeks behind the bleeding edge where the developers live. the developers can find and fix the problems they run into, and the rest of those who follow kde's mainline can test the results of that process a few weeks behind.

> a development branch may diverge from the original code-base

well ... don't let them diverge so much then. it's a matter of coordination, and we'll have the main shared repo to help with that. that said, svn gives merging branches a really bad name, worse than it actually deserves.

by markc (not verified)

Some good arguments for DVCS in general, and Git in particular, are made in this article by Assaf from apache.org, who is currently "stuck" with a centralized Subversion repo. A similarly large project in a similar situation...


"Always summer in trunk" is the important principle, regardless of DVCS, and which toolset best enables that principle is debatable; but Git, IMHO, has the edge and probably the best future inertia. To me, where Git best aligns with KDE over the long term is its significant C code base (other contenders being Python-based), which could become the core of a future C++ Qt/KDE development frontend, perhaps as part of KDevelop, and provide the best KDE-specific workflows. If a Python-based solution is adopted, then KDE is forever locked into having a significant part of its infrastructure depend on Python, which is fine by some and not so by others; but those Python parts will probably never be translated to C/C++ libraries that would make for the most efficient backends to KDE's C++ frontend. There may be some short-term gains with another DVCS, but over the long term (5+ years), as various native C++ frontends develop for whatever DVCS is deployed, I believe Git would provide the most powerful backend bindings for a C++ project like KDE.

There is also a GitTorrent project underway and that would help with a large multi-gigabyte project like KDE, and also illustrates the adoption range of Git.

by Al (not verified)

Having to choose your VCS based on the language it is implemented in is debatable, to say the least.

The only program _remotely_ affected would be KDevelop, and seeing the good Mercurial support available for Eclipse and Netbeans, I doubt even that...

If you want to make a well thought decision look at the work some people did when trying to decide what to use.


PS: People seem to jump on the bandwagon of "this is good because Linus made it". Please, let's make informed decisions based on facts, not on philias and phobias.

by Aaron Seigo (not verified)

> People seem to jump on the bandwagon

that's not what we've been doing; KDE rejected dvcs as an option 3-4 years back, and have arrived at git only after looking at all the options and the skills and manpower we have available to us.

Linus, or any other bandwagon chase, has bugger all to do with this.

by Michael "Distri... (not verified)

GitTorrent? You can't get much more distributed than that ;)!

by Ian Monroe (not verified)

Git isn't a library. Judging a VCS by language choice is ludicrous. This is actually a point in Mercurial's favor, which actually has an API, I believe. And Python libraries can easily be accessed in KDE with Kross.

by Aaron Seigo (not verified)

yes, hg and bzr both have good APIs. there is a git library, however, and i believe that qgit uses it even? it was a SoC project last year.

but yes, the language the vcs itself is written in isn't really important. (though it can impact things that are, like performance)

by Paul Gideon Dann (not verified)

I've done a bit of hacking on QGit and I'm afraid it uses output parsing just like everything else. There was a little work done on a libgit, but the general consensus is that maintaining such a library would be costly, especially as it would make internal restructuring of Git code difficult. Anyway, no official libgit on the immediate horizon as far as I'm aware, but the command-line interface is very parsable (if that's a word), and works fine as an API generally.

There are plenty of Git tools around. All of them use command-line parsing, and none of them has any real problems with it.

by T. J. Brumfield (not verified)

I think it will help a great deal.

Right now, with the open trunk model, everyone who is using trunk is using and testing development versions of every application they use. So if I find a bug in, say, Kate, I can report it via Bugzilla, or fix it myself, or fix it cooperatively with the Kate developers, etc. The same goes for other people using code I worked on --- just in the last few days I've gotten 4 or 5 very helpful khtml bug reports from fellow developers. OTOH, if there was a "kate-development-branch" or something, I'd have to remember to use that, plus "plasma-development-branch", plus "kwin-development-branch", etc.
Yes, you mention user-volunteer testers, but how will they deal with a mess of branches?

And really, I'd say lack of such testing was the biggest reason for KDE 4.0.0 being so disappointing a release. Most developers started using KDE4 way too late in the cycle, and so a lot of bugs weren't seen. I know for my stuff there were a whole bunch of easy-to-fix but high-impact bugs that I was told about shortly after 4.0.0 was tagged. If more people had made the jump to 4 earlier, those bugs wouldn't have been in 4.0.0 at all.

Further, I think you're severely underestimating the traditional stability of trunk. I don't remember when I started using it daily --- I think late in the 2.x cycle as a user, and in the alphas of 3.0 --- but as a general rule, it's almost always perfectly usable for every-day work. Developers are already generally using work branches for highly invasive changes, anyway.

Yes, the "traditional" concept of using trunk for development and tagging stable branches has been drifting towards the opposite approach for some time. One of the problems with branches is that they (the target URLs) keep changing over time, and it requires extra meta-information somewhere to help developers stay on the same page, or the same code. If the tendency towards a stable trunk is actively pushed, then there is only, or mainly, a single long-term persistent target URL that any developer, or packager, needs to know about... HEAD, and the more folks that develop with and build against that single target, the more it improves its visibility and therefore its quality.

by Michael "+1" Howell (not verified)

Agree 100%. ++

by Paul Gideon Dann (not verified)

It's a valid concern, but I don't think it'll be a problem in practice. Because of the hierarchical nature of DVCS, developers won't clone from the global KDE maintainer's master branch. They'll clone from, for example, the KDE-Games' maintainer's master branch, which will contain all the latest goodies that aren't yet considered stable enough for general release. This way, KDE-Games developers will be testing KDE-Games daily, but using stable versions of all the other modules. Each module's developers will be testing their module just as before, without it breaking for everyone else. As it's deemed stable, it'll be merged further and further upstream (contributor's private repo => KBounce maintainer's repo => KDE-Games' maintainer's repo => KDE maintainer's repo). By the time it reaches the root KDE repo, it's pretty rock-solid.

I imagine there will be KDE-next snapshots made available as well, which would include unstable features that aren't ready for master yet.

> This way, KDE-Games developers will be testing KDE-Games daily, but using
> stable versions of all the other modules

And that's EXACTLY what I consider the problem. The rest of KDE just lost a large bunch of highly skilled developers as part of its testing pool.

Plus, the idea of a Linux-like hierarchical development model, with a tree-like structure converging at one point, simply doesn't match how the KDE community works.

> And that's EXACTLY what I consider the problem. The rest of KDE just lost a large bunch of highly skilled developers as part of its testing pool.

"testing pool" or victims of the development model?! Nothing stops anyone from creating a testing branch, comparable to the current trunk, and inviting developers to use it - but no one is forced to use it anymore.

at least in Plasma, the plan is to use trunk as a continual warm branch that people can use just as they do trunk now but which will lag behind the hot branches by a number of weeks. we'll still get the same testing coverage we do now (or even more, perhaps), but on code that we've used and tested ourselves for a short while. this means people following mainline will still be testing pre-release code, but not "just committed five minutes ago, not complete yet" features.

In my view, this becomes easier because you can be sure that if you encounter a bug in trunk, it should actually be reported. No "ow, don't report bugs yet, I'm not done" -- that would be a sign that it shouldn't be in trunk/ already. The status of trunk simply doesn't change anymore: It's always the tip of development that should be bug-free.

If you want more unstable, more bleeding edge stuff, then there could be kde-next, a tree that has development versions that are not really done yet, and that will be merged into trunk once they're better. That's basically the plasma and kate and whatnot development branches lumped together.

So the stability of trunk right now is not the main problem, it's the fluctuation and the constantly changing meaning of trunk that is. It makes it harder for people to stay on top of what's going on.

We also need to be careful with "it's worked for years", we're growing faster than we ever did, and the signs say we will for some time. We need to get ourselves and our development process ready for this growth. One of the important aspects of this new development model is scalability of the KDE developer community.

>In my view, this becomes easier because you can be sure that if you encounter
> a bug in trunk, it should actually be reported

Yes, you would not have to worry about in-progress stuff being broken, but
the regressions won't be in trunk.

And I understand your concern about not having to coordinate with 5 zillions of developers, but in my view, quality requires more coordination, not less.

> Yes, you would not have to worry about in-progress stuff being broken,
> but the regressions won't be in trunk.

You have to merge at one point ;)
If there are not going to be any regressions in trunk, then we just got perfect. But I doubt that will happen.

There are two extremes; on one side you have committing to trunk as soon as it compiles (that's actually how some people work right now).
And on the other side is the concept of only committing something when it's perfect.

Neither of these extremes is comfortable, and I'm afraid we are working too close to the extreme where everything always goes to trunk.
The proposal makes the point of moving more to the middle ground by doing much more stabilization of your features before merging them into trunk.

Nobody is suggesting that merging can only be done when the code is perfect, though.

Bottom line; we don't have to go from one extreme to another. There is sufficient gray area to use.

by Paul Gideon Dann (not verified)

Well, taking a look at the Linux kernel, we see that it's extremely rare for regressions to make it to the master branch of Linus's tree. Of course this is more important for the kernel, and it's unlikely that patches will be scrutinised so carefully in KDE, but I think it's very reasonable to expect there to never be any regressions in the master branch if we truly embrace DVCS.

The point is that the choice is still up to the developers. Lots of regression tests help there and the kernel probably doesn't get as many testers.

So, KDE devs might integrate into trunk when they feel it's ready, not waiting till it's perfect.

by markc (not verified)

I'm excited by the possibility that KDE might choose Git as its official DVCS, and general overall Git adoption and documentation is one reason to consider it. As a simple and rudimentary test, check the number of results for git, bzr, bazaar, mercurial and hg at Google... git returns 10+ times more than any of the others. Also, when it comes time for developer adoption, sites like http://gitcasts.com/ certainly help, and there are at least 2 books available for Git (Pragmatic Version Control Using Git & Git Internals), whereas I'm not aware of any printed books for either Mercurial or Bazaar (they may exist, but I couldn't easily find any, which is my point).

by Malte (not verified)

Regarding http://www.englishbreakfastnetwork.org/

Are there actually policies to review modules before release, so that these minor issues that affect code quality get fixed?

Or are there any attempts to continue and expand this testing?

"KDE's current release team has come a long way in finding module coordinators for various parts shipped with KDE, but currently not every module has a maintainer."

Where do I find a list of orphaned modules?

by Allen Winter (not verified)

> Regarding http://www.englishbreakfastnetwork.org/
> Are there actually policies to review modules before release that these minor issues that affect code quality get fixed?

Sorta. New code that goes into kdereview should be clean, but there are no other policies.

> Or are there any attempts to continue and expand this testing?

I'm always adding new tests and refining existing tests.
We have lots of ideas, but little manpower.

> "KDE's current release team has come a long way in finding module coordinators for various parts shipped with KDE, but currently not every module has a maintainer." Where do I find a list of orphaned modules?

The list of module coordinators can be found on TechBase on the Release Team project page.

by XCG (not verified)

I have to admit that I haven't used Git so far, but to me it sounds like the Git workflow is about the same as having a feature branch in Subversion for every developer. Making a feature branch in SVN is easy too.
Merging a feature branch back to trunk, however, has not felt so easy so far. With SVN 1.5 they introduced merge tracking, which is supposed to simplify this process. Does anyone have experience with SVN 1.5's merge tracking? Couldn't it help here?

by Thiago Macieira (not verified)

It's not.

You're thinking that every developer with a DVCS tool will get one branch where he can play as much as he wants. And, given proper merging abilities (which SVN lacked until 1.5), you can easily merge code from other branches back and forth.

That is correct, but that's not the whole picture.

First and foremost, you forgot the disconnected part. You can't commit to Subversion unless you can reach the repository, which is often on a server over the Internet.

Also, each developer isn't restricted to one branch; he very often has a lot of them. Right now I have 28 separate branches of Qt on my workstation: they range from previous stable releases of Qt (to test regressions and fixes against), to branches I created for working on fixing tasks, to research projects.

And that's just my private branches. When I am collaborating with other people in projects, I have more branches. For one project right now in Qt, we are tracking 4 or 5 different branches, each with a different "theme": optimisations, new features, animations, etc. And there's an extra branch which is the merger of all those "theme branches", so that we can get a feel of what it will be when it's done.
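As a rough sketch of what such a setup looks like in practice (Git assumed installed; all branch, file, and repository names here are invented for illustration, not taken from the actual Qt work described above):

```shell
# One clone, several "theme" branches, plus an integration branch that
# merges them all together with a single octopus merge.
git init --quiet qt-work
git -C qt-work config user.email dev@example.org
git -C qt-work config user.name Dev
echo base > qt-work/code.txt
git -C qt-work add code.txt
git -C qt-work commit --quiet -m "base"
main=$(git -C qt-work symbolic-ref --short HEAD)   # master or main
for theme in optimisations new-features animations; do
  git -C qt-work checkout --quiet -b "$theme" "$main"
  echo "$theme" > "qt-work/$theme.txt"
  git -C qt-work add "$theme.txt"
  git -C qt-work commit --quiet -m "work on $theme"
done
# the "merger" branch: combine all theme branches at once
git -C qt-work checkout --quiet -b integration "$main"
git -C qt-work merge --quiet -m "merge all themes" \
    optimisations new-features animations
git -C qt-work branch --list
```

Each branch stays cheap and local until someone decides to publish it, which is what makes keeping dozens of them around practical.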

Finally, you're also forgetting the ability to undo, redo, and modify your work. Once you commit to Subversion, it's there for life. Removing something from the repository means dumping and reloading it. With Git, you can undo your commits, change them, and squash them together without problems. (Technically you can do that after you've published them too, but you shouldn't.)
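For illustration, squashing a series of local work-in-progress commits into one clean commit before publishing might look like this (a sketch; the repository and messages are invented, and `git reset --soft` is used as a non-interactive stand-in for the usual `git rebase -i`):

```shell
git init --quiet lab
git -C lab config user.email dev@example.org
git -C lab config user.name Dev
echo one   > lab/f.txt; git -C lab add f.txt; git -C lab commit --quiet -m "wip: start"
echo two   > lab/f.txt; git -C lab commit --quiet -am "wip: more"
echo three > lab/f.txt; git -C lab commit --quiet -am "wip: fix typo"
# rewrite: collapse all three "wip" commits into a single clean one --
# safe precisely because nothing has been pushed yet
git -C lab reset --quiet --soft HEAD~2
git -C lab commit --quiet --amend -m "feature: done"
git -C lab log --oneline   # now shows a single commit
```

The file's final contents are untouched; only the recorded history changes.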

This is actually something where Git is better than Mercurial: in Mercurial, changing the history isn't that simple.

So, no, SVN 1.5's merge tracking isn't the solution. It definitely helps with the current problems, but it's not the full solution. If you want an analogy, it's like covering the sun with a sieve.

by sven (not verified)

For people who are curious about Mercurial (hg), this is a good source, and this chapter explains how to roll back commits: http://hgbook.red-bean.com/hgbookch9.html

by Kevin Kofler (not verified)

All that stuff is completely unrelated to the "always summer in trunk" proposal which can be implemented just as well with SVN.

by Thomas Zander (not verified)

It's actually not; you can't have 'always summer' if everyone just commits his latest thought experiments and every todo item he just came up with for the world to see.
That just doesn't make for a trunk that's usable, given the number of people coming into our growing community.

So if you read the article more closely, you will see that the basis is that people are asked to commit stuff to trunk only when they are done with it, which for refactorings and bigger things means it may be a week or more before you can commit.

And due to that requirement, Thiago's post becomes very relevant. Those tools are essential to a scalable workflow.

by Kevin Kofler (not verified)

> It's actually not; you can't have 'always summer' if everyone just commits his latest thought
> experiments and every todo item he just came up with for the world to see.

I don't see why. The current trunk is working fine; the only difference would be that trunk development could continue during a release freeze, because the release would be branched earlier. I don't see why that would prevent working on trunk the same way as before.

> Which for refactors and bigger things means it may be a week or more before you can commit it.

That's what branches/work is for. Keeping features on a developer's HDD is a very bad idea: it means zero testing, no way for other developers to coordinate their changes, no way to help implement the feature, etc. Basically it's a return to the "big code drop" antipattern which SCMs were designed to solve, and getting the history added after the fact as part of the big push is only a mild consolation (and not even that is guaranteed, because Git can "flatten" history, and that "feature" has been touted as an advantage).

by Kevin Kofler (not verified)

Oh, and to reply to your actual arguments:

> First and foremost, you forgot the disconnected part. You can't commit to Subversion unless you can reach the repository, which is often on a server over the Internet.

And that's a good thing, as it enforces open development, i.e. people can always test the current development version, nobody has to wait for the developer to "push" his/her changes. Committing to the central repository as often as possible is just a part of "release early, release often" best practices. And the nice thing is that you don't even have to release, just commit and it's automatically public, unlike with a DVCS where you have to explicitly push.

> Finally, you're also forgetting the ability to undo, redo, and modify your work. Once you commit to Subversion, it's there for life. Removing something from the repository means dumping and reloading it.

That's also a good thing, history should not be altered, and people should be able to see the progress of development, it's part of openness.

> With a Git, you can undo your commits, change them, squash them together without problems. (You can do that after you've published them, technically, but you shouldn't)

And that's just evil. And it's also moot because people should be pushing after each commit (remember: release early, release often) unless they have a good reason not to (e.g. being in an airplane).

DVCSes encourage development behind closed doors, and that's a bad thing, because it goes against the community development model which has made the success of KDE and many other Free Software projects. (According to the "Open Source" folks, it's even the main advantage of "Open Source" software in the first place.)

by L. (not verified)

Complete agreement...

Besides, for the few contributors who can't do without disconnected features, either occasionally or because it better matches their way of working, there is already git-svn.

So what are the arguments for forcing git's horribly steep learning curve on every contributor?

Has the significant raising of the barrier to entry for the average contributor even been considered?

by Aaron Seigo (not verified)

> nobody has to wait for the developer to "push" his/her changes

no, you just have to wait for them to commit their changes. and making branch management easy encourages people to work in published branches and to switch between them more.

> And it's also moot because people should be pushing after each commit

i couldn't agree less.

when working on a git repository, i often do a bunch of much smaller commits than i normally would with svn. i can use these as nice "save points" in my local repository without inflicting my work in progress on others in the same branch, while gaining the benefits of being able to save my work as often as i want even when it isn't done.

i end up pushing about as often as i would commit with svn, but i commit far more often with git and it's a great boost to productivity.

combined with squashing, i can make 50 commits in a day to finish one bigger feature and then squash it down to just the three or four important events before publishing to everyone else. or.. i can just push the whole lot if i'm lazy =)
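A small sketch of that rhythm (all names invented; a bare repository stands in for the shared server): commits pile up locally as save points, and nothing becomes public until the explicit push:

```shell
git init --quiet --bare central.git           # stands in for the shared server
git clone --quiet central.git work 2>/dev/null
git -C work config user.email dev@example.org
git -C work config user.name Dev
for i in 1 2 3 4; do                          # frequent local "save points"
  echo "step $i" >> work/feature.txt
  git -C work add feature.txt
  git -C work commit --quiet -m "save point $i"
done
# nothing is visible to anyone else until this single push
git -C work push --quiet origin HEAD
git -C central.git rev-list --count --all
```

Before the push, `central.git` has no commits at all; afterwards it has all four, so the choice of what (and when) to publish stays with the developer.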

> DVCSes encourage development behind closed doors

that's at least partly why people want to keep the centralized model, even when using git. it's not the tool that encourages closed development, it's the workflow used. with svn there's basically only one possible workflow, with a dvcs there are several including ones that give similar openness benefits.

as for making points moot, i'm not sure if you are aware of how many git-svn users there are out there with all their completely unpublished, unsearchable and un-clonable repositories. i'd much prefer to see these branches publishable to a central location so people don't have to choose between git and sharing, which is exactly what's happening already.

so ... the problem is already here. i'd like to see us deal with it, and improve our workflow in general in the process =)

by Jordi Polo (not verified)

If Git is to be used, I __strongly__ recommend taking a look at GitHub.

It is a Git hosting service with social networking bits on it.
Also, contributing and forking one's own private branch is as easy as clicking the "fork" button. It makes improving software not just interesting but something one can't stop doing!

by Thomas Zander (not verified)

Looks neat :)

Is it open source? I liked http://gitorious.org/ as well, which is open source (and so easy to extend and not dependent on a company)

by Jordi Polo (not verified)

Cool. Gitorious seems to be the same idea.
Yeah, I just assumed that GitHub was open source, but it may not be.
In that case I guess Gitorious is better.

by Lisz (not verified)

Github is not open source.