New Developer Mailing List for Optimizing KDE

This past week a new mailing list called kde-optimize was created (archive). The list is for developers who are willing to actively work on optimizing KDE or KDE applications, those who have profiled KDE, or those who have the knowledge to help others optimize KDE. Two initial documents have already been created as part of this list. The first teaches users how they can improve the performance of their system. The second gives developers a starting point when looking to optimize applications.

Comments

optimize is not a good name.

It's for developers actively optimizing KDE, not for users discussing performance.

by mark (not verified)

Check that the sysctl hw.ata.ata_dma is set to 1. You also want to enable hw.ata.wc, and, if you have a drive that supports it, hw.ata.tags.

by protoman (not verified)

At least now they are thinking about the optimization subject.
You see, I think taking a while for KDE to start, or 3-5 seconds for a browser to open, is OK, but not for a text editor or a file manager; those two are so essential that they should open in less than a second.

by Debian User (not verified)

You can say what you want, but KDE developers have been optimizing KDE - all the time! - since the days of the KDE 2.0 betas.

Every release of KDE has gotten faster.

The traitor, so to speak, is the toolchain. This is the compiler and linker on GNU/BSD systems, gcc and ld, which were designed to work well for C. And they do, very well. C++, as used by the KDE project, has long been kind of a step-child. Does anybody remember how people even forked GNU gcc into egcs, which in the end became gcc 2.95, because C++ development just wasn't making significant advances?

Only the current gcc 3.2.x achieves a stable C++ binary interface (an ABI, by the way; something C has had for more than 10 years), and only now is it beginning to do half-decent optimization of C++ code. So gcc is probably on the way.

What still sucks badly is the combination of the linker and C++. You see, with all the inheritance stuff, C++ does a lot of replacement of pointers to member functions. These are the so-called relocations. For every method your base class has, every derived class's objects need special care, reserved memory and all, whether they use that method or not.

These relocations, being the exception in C, have been implemented in a way that is bound to be slow. And when an application does many of them, its startup gets very slow. So if you look at the page of optimization hints, the best hints are the "don't start" kind...
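
To make that concrete, here's a minimal C++ sketch (my own illustration, not KDE code; the class names are made up):

    #include <memory>

    // Every virtual method occupies a slot in the vtable of every class
    // that inherits it, and in a shared library each slot is a function
    // pointer the dynamic linker must patch (relocate) at load time.
    class Widget {
    public:
        virtual ~Widget() {}
        virtual void paint() {}
        virtual void resize(int, int) {}
        // ...a real toolkit base class has dozens more
    };

    class Button : public Widget {
    public:
        void paint() override {}  // one more pointer in Button's vtable
        // resize() is not overridden, but Button's vtable still carries
        // a slot for it that must be relocated when the library loads
    };

    int main() {
        std::unique_ptr<Widget> w(new Button);
        w->paint();  // dispatched through the (relocated) vtable
    }

Multiply that by every class in kdelibs and you get the load-time cost described above.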

I personally am quite OK with these slow startup times, because I don't terminate KDE, or its programs, very often. And once running, Konqueror, e.g., is very fast; see the successful adoption of its engine by Safari.

So what we really want someday is a toolset that handles code in a way optimized for how C++ does things. We want to locate the points where the user experience could be snappier. And of course, have more threading. One thing BeOS did right was having a thread per window, per menu, per UI element. Probably KDE 4.0 material...

Yours, Kay

by Stefan Heimers (not verified)

> We want to locate the points where the user experience could be snappier.
> And of course, have more threading.

If you have too many threads, you get a lot of context switches, which take time and make the CPU cache less efficient. MPlayer, for example, deliberately avoids threading in order to optimize performance. See http://mplayer.lgb.hu/index-en.html.

So only use threads, or create new processes, where necessary.

Threads are useful to keep the user interface responsive while there is some background activity (e.g. loading/saving a file, playing a sound, ...), but don't put every little thing in its own thread just because threading is cool.

Threading is not our great saviour. We don't need more threading, we need threading in the right places.
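
As a minimal sketch of that "background activity" case, in modern C++ (std::thread; the file name is made up):

    #include <atomic>
    #include <chrono>
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <thread>

    std::atomic<bool> done{false};
    std::string contents;  // written by the worker, read after join()

    void loadFile(const std::string& path) {
        std::ifstream in(path);
        std::ostringstream buf;
        buf << in.rdbuf();     // the slow, blocking part
        contents = buf.str();
        done = true;
    }

    int main() {
        std::thread worker(loadFile, std::string("big.txt"));
        while (!done) {
            // the "user interface" keeps responding while the worker blocks
            std::cout << '.' << std::flush;
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
        worker.join();
        std::cout << "\nloaded " << contents.size() << " bytes\n";
    }

One worker thread for the one blocking operation, and nothing else: exactly the "right places" rule.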

by Debian User (not verified)

The overhead of context switches in Linux is quickly approaching zero, although of course it will never reach it. I have to admit I wouldn't know about FreeBSD, e.g., but Linux 2.6 will be more than good enough at it, I guess.

And as for the MPlayer example: what if you have 2 CPUs? How are you going to take advantage of them for faster encoding and decoding if you don't use threads?

Dividing work into independent tasks that are then merged is the key to optimizing for multiple CPUs.

But that's off-topic for KDE, no? This will happen in some library anyway.

But just like kicker is not part of kdesktop, you want to have ways to make everything keep working all the time. The downside is probably the synchronisation and locking nightmare you can get into.

I personally want to be able to use a program's menu no matter whether it's doing a long repaint, (auto)saving, some calculation, whatever. The menu should have a defined state, be accessible to me as a user, and allow me to do all useful things.

But let's see; I have strong enough faith in those who made KDE 3.1 that they will know better than me anyway.

Yours, Kay

by Stefan Heimers (not verified)

You didn't read the mplayer text, did you?

- MPlayer doesn't need two CPUs; it works just fine with one.
- Nobody (except some geeks) has two or more CPUs in his/her desktop computer anyway.
- If you have two CPUs, fine! You will be glad the other is free for some other application, or you can have two MPlayers running simultaneously ;-)

For KDE it makes sense to have multiple threads, so it will take advantage of two CPUs where they exist. As I previously stated, threads are good in some circumstances, but should not be used to excess.

Context switching is a non-issue on very fast computers, but not everybody can afford one of those. My server still runs at 33MHz (i486), my fastest computer is an iBook at 300MHz, and I don't intend to buy one of those energy-wasting 2GHz Intel or AMD microwave ovens anytime soon ;-) .

BTW, KDE 3.x is great fun in an Xvnc on the poor old 486 ;-) It is slow, but not as slow as you might think. If you are patient, you could even do some meaningful work with it.

by Debian User (not verified)

Hi,

I really didn't read it. But I know it well. MPlayer runs my home cinema on a Celeron 500 that "normally" can't do the trick. I don't think a dual system would improve it.

But my example was about encoding. You can never be fast enough at that, can you? And KDE can be faster (more responsive) on SMP systems if the "active" application runs on several CPUs simultaneously. Especially if you assume the system is loaded at (number of CPUs) * 100%.

But maybe somebody needs to prove that; I am not at all certain it really works out like this under load.

Context switches introduce latencies. With the bad threading support of older Linux kernels, those hurt so badly that Java was unusable under Linux. Not that this is a particular loss by itself, but it's the technique, threads, that is limited.

Performance is a non-issue on fast computers, right. So don't optimize?? Luckily, most people care about efficient use of their computer, not of their time. So KDE will get optimized to run 5% faster in situations nobody really cares about, day by day :-)

As for VNC: I did it via 14k upstream DSL. You need to low-color everything, avoid repaints with animations, etc... and then you often won't notice it.

Yours, Kay

by protoman (not verified)

This is a great example of how putting the guilt on just one side is easy, but not always true.
One of the major KDE developers (sorry, can't remember the name) once said in an interview that he wanted the linker problem fixed just so KDE developers couldn't hide behind that excuse.

I've heard that gcc 3 would fix the linker problem, then it was going to be 3.1, and now not even 3.2 satisfies KDE programmers :(

Anyway, I'm not saying that KDE hasn't been getting better since 2.0, but 2.0 was SO SLOW that this isn't enough. They just should organize optimization work in their programs. You see, most programmers don't even know how to optimize a program well; that's why I think KDE wasn't taking much care about it, and now they are starting.

Yes, KDE is faster now, but if it were good enough, no one would complain about speed. Just compare GNOME and KDE.

Anyway, I'm not saying the KDE team is bad or anything like that, just that they thought features should come first, then speed. Some users think they could wait another 6 months for a new release in exchange for more speed ;)

by Aaron J. Seigo (not verified)

i think you got a few things right in your posting, and a few things not-so-right =)

the truth is that the gcc toolchain does have problems, and this is known by everyone including those on the gcc devel team. this doesn't help matters, and it's frustrating to know that good speed improvements could be had if a tool you depend on was better (or even half-way decent).

on the other hand, there is a lot of room for optimization in KDE. such things are getting more attention now probably because there are so many features in place. a rule in programming is to get it right first, then optimize it second. now that the base libraries in KDE are featureful and quite stable, the business of optimizing them can begin, and has.

that said, when you say "compare gnome and kde", what exactly are you comparing? start up times or run time tasks? the truly meaningful comparison is when it comes to run time tasks (displaying web pages, editing large text files, listing directories, filtering email, searching in files, changing fonts in a document, etc.), since not only are those things fully in the hands of the programmers working on the apps, they are also what you spend 99% of your time on your computer doing. keep track of how much time you spend starting apps versus using them.

so i'm very happy to see all sorts of run-time optimizations going into KDE since those have the most impact on my experience as a user.

by protoman (not verified)

And I agree with you :)

About gnome/kde, I was comparing startup times and resource usage.

And there is one thing that few people mention that I think is an important factor in KDE speed, maybe the biggest one: Qt.
Even some small Qt apps I've written are kind of slow.

by m@n (not verified)

> And there is one thing that few people mention that I think is an important factor in KDE speed, maybe the biggest one: Qt.

But I don't think Qt itself is slow at all... just check out things like Opera...

by Justin (not verified)

See also Qt/Windows. I have a P4 1.6GHz system with Gentoo Linux and gcc 3.2, and a P2 400MHz with Windows XP and MSVC++ 6. I can compile my Qt application on the Windows box in half the time it takes to compile it on Linux. My program also starts faster on Windows.

I don't think Qt is slow. I place blame on the underlying system (g++/ld.so).

by C++ Lover ... (not verified)

Well don't worry ... by the time gcc gets to version 4.0 it will beat or equal the MS compilers ...

Oh wait ... by then it will be 2030 and everything C++ will be obsolete

by ObjectiveC Willrule (not verified)

Want a language that is as fast as C, cleaner and *more* object-oriented than C++, and *simple*, simple, simple? A syntax that lets you "drop to C" easy as pie, and that is the basis of Mac OS X ... NeXT, with the smallest number of engineers of any company (fewer than Be), produced a breakthrough GUI for machines that ran at 16MHz - using Objective-C.

The only reason Apple uses C++ is that their gcc allows mixing Objective-C and C++ together in one project, and that there are craploads of C++ libraries out there. But there are craploads of Java (and Python!) libraries out there too ... and gcc can compile those too ...

The way to fix the slowness of C++ is to eliminate it almost everywhere. KDE developers are enormously productive (thanks to Qt), but for a look at what 4-6 part-time coders can accomplish with Objective-C, take a look at GNUstep. After 4 years the GNUstep libraries are nearly complete and the GUI is now starting. Soon there'll be standard widgets galore and a portable GUI/development framework. At that point it will take only 6-12 GNUstep coders to reproduce a KDE-quality environment - and it will only take about 6 months ... With Mac coders using gcc and GNUstep in droves, it will soon have a core development group closer to the size of KDE's.

Then it will rule ... because, people, it is FAST. GNUstep-coded apps truly are usable on a Pentium 100.

Sincerely,

Objc Willrule

ps: Insert cool "Switch" advert here to convince C++ coderz of their folly ...

pps: because C++ sux and everyone knows it!

by Rayiner Hashem (not verified)

Oh god, Mac people are so creepy. First, Objective-C is slower than C++. Its compilers aren't as good, and its more dynamic nature incurs more runtime overhead. Second, those who deride C++ do so because they don't understand it. Sure it has its share of faults, but so does every language, and C++ has far fewer faults than many other languages. Overall, C++ is an extremely lightweight, fast, and, most importantly, *expressive* language. It blends performance and clean design more than most other contemporary languages. In doing so, it makes certain compromises and incurs a good deal of complexity. But hey, English is an order of magnitude more complex than even C++ and people speak that passably!

PS> I've been using C++ as my primary language for more than six years. It wasn't until I had been using the language for a couple of years that I learned its full power. Nobody claims C++ is easy. Still, I get irritated at all those people who've used C++ for six months and start spouting ("C++ sux, rulez!"). Now, if you're an experienced C++ coder and still think it sucks, well, that's a defensible difference of opinion. Otherwise, you're just spouting.

PPS> Language wars are stupid. In reality, each language has its strengths and weaknesses. Everyone should know and use at least several. I happen to like C++ best, believe it is a good language, and believe that a statement like "C++ sux" is just plain stupid, but I don't claim it is the be all end-all of languages.

by Jason Thomas (not verified)

Claiming that Objective-C is slower than C++, or that you'd simply have to be ignorant of C++ to "deride" it, is making two incredibly general (and very likely untrue) statements (I think you should probably insert the word "usually"). I've personally programmed professionally in C++ for over seven years, and I have to say quite frankly that I have a 'lot' of grievances with it. It isn't that C++ is difficult to learn, or really even conceptually difficult to deal with: it's actually very simple compared to many other languages (conceptually). It's that for the deficiency of concepts it really can express, the syntax is just terrible. Yes, it's powerful; yes, there is a large existing source base to be used; but it's, honestly, pretty damn ugly.

Stating that C++ has far fewer faults than 'many' other programming languages may be a true statement, but Objective-C is not one of those languages (and probably no other modernly used language is, save Perl and VB). Objective-C is a true extension of C, and it isn't inherently any more 'dynamic' than C++. In C++ you can create non-virtual objects, which technically are no different from sets of methods taking structures as parameters. If an object can't be extended and can't act polymorphically, there really isn't much of a reason for it to be an object to begin with. Objective-C can use the majority of the C compiler's optimizations, so I'm really not quite certain how the performance of compiled Objective-C, C, and C++ could be much different (this is bordering on the argument people used against using C++ instead of C; the differences really are insignificant).

You're right about one thing, language "wars" are stupid. Honestly comparing two programming languages, however, is not. C++ definitely doesn't suck, but Objective C is a very viable alternative.

by tom popovich (not verified)

A prof of mine at Stanford, some time ago, said the following. This was back before the open-source / GNU movement, when you picked a commercial company for your dev tools...

Long ago there was a language war:
camp a: Brad Cox and his noble Obj-C, a simple & elegant & pure hybrid that takes the elegance of Smalltalk and hooks it up to C.
camp b: AT&T and its C++, layered onto C.

[For language elegance & pure OO, Obj-C is the winner.]
[For "money - bet your biz on it" plus C-like speed, C++ will win.]
If you had a company [don't count NeXT now], would you bet on a single company's compiler from a 10-person-ish shop, or on the might of AT&T, when you have to pick something you will use for the next 10 years?
His summary: it's no contest. C++ will win.

Hey, C++ was designed with certain constraints: (a) it had to be a superset of C, (b) it had to have speed, as its primary customer was the "phone code", you know, the stuff that services your long-distance calls, which was running, at the time, on 1-MIPS PDPs or AT&T mini-computers.

Every language has +s and -s. C++ is faster [machine-wise]. Obj-C might be more efficient human-wise. Heck, a language like Self might be even faster, people-wise. [Prototype-based object languages might be easier than the class-based ones you see in the mainstream.]

The reason Obj-C is viable now is that GNU gcc can do C/Obj-C/C++.

by Kevin Puetz (not verified)

gcc3 fixed gcc's part of it - not enough by itself, but the code it writes is prelinkable.
I'm not sure which binutils version fixed their part, but it's ready - the things it links are prelinkable.
glibc 2.3 fixed glibc's part of it - ld.so knows how to load a prelinked binary if it finds one.

So the remaining piece is the 'prelink' tool itself, so that these prelinkable binaries can become prelinked binaries. It is approaching a usable stage now... if you want to download a snapshot, be my guest. I'm working with it some. It's not 100% stable, but I haven't nuked myself too badly yet.

linking is *every bit* as bad an issue as kde developers make it sound. fully 50% of the cpu time during a kde login (on my system; probably closer to 30% on yours - I have some local work not yet committed in fontconfig that really speeds that part up, raising the percentages for everything else) is spent in the linker, before ever reaching main().

We could have frozen feature work while we waited for the improved toolchain to be ready, but then we'd still have kde 2.0 now instead of 3.1, and the prelinker wouldn't be any closer to ready.

by cbcbcb (not verified)

The linker problem is utterly irrelevant

It takes my laptop 1 second to execute kedit --help. It takes 7 seconds to load kedit and put the window up.

The linking stage is the same in both cases, and linking clearly takes at most 1/7th (~14%) of the time required to start kedit. Therefore 6/7ths (~86%) of the time is spent doing something else.

Would any KDE developers explain where this idea comes from that linking time dominates KDE startup time, when my experiments clearly demonstrate that this is not the case on my system?

A bit of profiling suggests that it spends nearly 50% of the startup time doing malloc(!)

by Kevin Puetz (not verified)

the linking stage is not the same: ld.so is a lazy linker, resolving symbols only as they are needed. So you have to resolve more symbols when you start the whole app.

if you're seeing that much malloc though... is your kdelibs built with --enable-fast-malloc=full?

Instead of "time kedit --help", try "time LD_BIND_NOW=1 kedit --help".

On my machine "kedit --help" takes 0.551s without LD_BIND_NOW, and 0.946s with it, almost double.

Running kedit so that it pops up onto the screen without LD_BIND_NOW takes 1.757s, and with it 2.041s. Those numbers are both higher than the true values though, because I wait for kedit to pop up on the screen and hit alt-f4 as fast as I can.

So it looks to me like almost half the startup time is taken up by the linker.

by tschortsch (not verified)

well, i'm currently playing around with a glibc-2.3-based prelinked gentoo system. i can tell you prelink makes a _huge_ difference.
it takes 9s to start x and kde from the console. when i start konq with kdeinit_wrapper konqueror it takes ~0s. (that's on an athlon xp 1800)
surprisingly, it's a lot slower to start apps from kicker. but i read of a patch in HEAD that should fix that. (i'm currently using 3.1rc5)

> It takes my laptop 1 second to execute kedit --help. It takes 7 seconds to load kedit and put the window up.

Have you heard about how linking is actually done? Not *all* symbols are resolved at once.

by Rayiner Hashem (not verified)

Actually, threading on Linux is extremely cheap. At this point, encouraging more extensive threading would probably improve performance more than hurt it.

by itsch (not verified)

Threading can never improve raw performance on a uniprocessor system (which is at least 95% of all systems running KDE). Multi-threaded programs also use the processor cache less efficiently.

Threads often make programs more complicated, and so bugs are harder to find.

by Another Anon-person (not verified)

Only in the land of the theoretical. In the real world where we live, threading can yield enormous improvements by the sheer fact that it can make some problems easier to solve with our intellectual-resource-constrained noggins.

by Dieter Nützel (not verified)

Try a preemptible kernel (2.4.20 plus a patch, e.g. -ck, or 2.5.xx) without "heavy threading" and be surprised. The next step would be glibc-2.3.1 with the new NGPT.

Greetings,
Dieter

by Rayiner Hashem (not verified)

Threading almost always improves user-perceived performance. There are two options for good perceived performance: do events and I/O asynchronously, or break them out into threads. Most developers are too lazy to do the former (because it's freaking HARD!), so unless they use threads, they end up doing massive computations and I/O while wrecking responsiveness. See most Linux software for blatant examples (and compare it to a well-threaded base of software like BeOS's). Besides, with HyperThreading and cheap SMP becoming more prevalent, who cares about a (very minor, with today's or even yesterday's systems) penalty in the uniprocessor case.
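
For reference, the "async" option he mentions looks roughly like this poll()-based sketch (stdin standing in for a network socket):

    #include <poll.h>
    #include <unistd.h>
    #include <cstdio>

    // one thread, one event loop: only touch a descriptor when it is
    // actually readable, so the loop never blocks on slow I/O
    int main() {
        struct pollfd fds[1];
        fds[0].fd = 0;          // stdin, standing in for a socket
        fds[0].events = POLLIN;

        for (;;) {
            int n = poll(fds, 1, 100);  // wait at most 100 ms
            if (n < 0) break;           // error
            if (n == 0) continue;       // timeout: repaint the UI here
            char buf[256];
            ssize_t got = read(fds[0].fd, buf, sizeof buf);
            if (got <= 0) break;        // EOF or error
            fwrite(buf, 1, (size_t)got, stdout);  // handle the event
        }
    }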

by renoX (not verified)

You're right that threading is not necessarily CPU-time efficient, but it is user-time efficient.

I really hate it when I do a slow operation and the whole application freezes instead of just the part that is doing the work.

An example with Mozilla and tab opening: let's suppose I open a new tab on a busy website. The tab "freezes", which is OK, but the whole browser freezes too, and I can't instantly go back to an active tab.
From the user's point of view, it feels really slow. Even if a browser can render pages "instantaneously", if it has these kinds of user-interface issues it will always be felt as slow by users.

I hope that Konqueror with tabs won't have this kind of issue.

BeOS was really good at this; it felt smooth: a window could freeze because it was doing something complicated, but you could instantly switch to another window, without any of the slowdown felt in both Windows and X (KDE or GNOME).
Oh, and BeOS booted really fast!
While that doesn't really matter, it gives the user a good feeling about the OS; end-user feel is much more important than benchmark numbers, IMHO.

by Tot (not verified)

In X, windows don't freeze: even if an application hangs, you can switch to another window or even resize the window. If you feel slowdowns when switching apps it means there isn't enough RAM on your computer.

by cbcbcb (not verified)

> If you feel slowdowns when switching apps it means there isn't
> enough RAM on your computer.

Or your apps use too much RAM... :)

For example, should a web browser really use 60MB of RAM? That is an awful lot. What does Konqueror do with it all?

by Dieter Nützel (not verified)

Nothing 'cause it's mostly shared:

nuetzel 13680 2.4 2.5 109268 26688 ? S 02:29 0:09 kdeinit: konqueror --silent

/home/nuetzel> pmap 13680
kdeinit: konqueror --silent(13680)
08048000 (32 KB) r-xp (08:08 417105) /opt/kde3/bin/kdeinit
08050000 (8 KB) rw-p (08:08 417105) /opt/kde3/bin/kdeinit
08052000 (6940 KB) rwxp (00:00 0)
40000000 (76 KB) r-xp (08:03 9424) /lib/ld-2.2.4.so
40013000 (4 KB) rw-p (08:03 9424) /lib/ld-2.2.4.so
40014000 (4 KB) rw-p (00:00 0)
40015000 (188 KB) r-xp (08:08 307406) /opt/kde3/lib/libkparts.so.2.1.0
40044000 (12 KB) rw-p (08:08 307406) /opt/kde3/lib/libkparts.so.2.1.0

[snip]

46312000 (152 KB) r--p (08:07 1731481) /usr/X11R6/lib/X11/fonts/truetype/verdanai.ttf
46338000 (272 KB) r--p (08:07 1731455) /usr/X11R6/lib/X11/fonts/truetype/arial.ttf
4637c000 (280 KB) r--p (08:07 1731486) /usr/X11R6/lib/X11/fonts/truetype/arialbd.ttf
463c2000 (56 KB) r-xp (08:08 475465) /opt/kde3/lib/kde3/libkuickplugin.so
463d0000 (8 KB) rw-p (08:08 475465) /opt/kde3/lib/kde3/libkuickplugin.so
bffeb000 (84 KB) rwxp (00:00 0)
mapped: 109268 KB writable/private: 9324 KB shared: 73812 KB

=> 9.3 MB (private) + 73.8 MB (shared)

So what is your problem?

Regards,
Dieter

by cbcbcb (not verified)

Thanks for the hint about the pmap tool.

> So what is your problem?

Firstly, I don't run KDE on my laptop (it's too slow); I run Konqueror under Window Maker, so (most of) the 'shared' memory isn't really shared. And you still have to account for shared memory - it isn't free.

Secondly, I have a 30MB private anonymous mapping in the current konqueror process which has been running for some time. My experience is that konqueror gradually uses more and more memory but I haven't been able to reproduce it under test conditions (although general web browsing always triggers it). Eventually the Konqueror process grows big enough that it hits swap and everything grinds to a halt and I have to restart it.

A newly started konqueror doesn't have this huge mapping. I guess this is a bug in Konqueror/KDE's memory management somewhere.

by David Faure (not verified)

Yes, we just found and fixed some cyclic reference-counting problems in the JavaScript support, which led to memory leaks.
Hopefully KDE 3.1 will have those fixes.

by cbcbcb (not verified)

Are the fixes on the 3.1 branch currently, or just on the head?

Should KDE just move to using libgc and become fully garbage-collected? That should reduce the number of memory leaks.

by David Faure (not verified)

Just backported to 3.1 branch.

Fully garbage collected? Surely not. It's the garbage-collected nature of JavaScript which introduces those leaks (when you end up with a cyclic reference chain, nothing gets deleted anymore). This is definitely NOT a solution for the rest of KDE. A plain, simple "delete foobar" is much easier to write and debug than garbage collection with refcounts and possibly cyclic reference chains.
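
To illustrate, a tiny sketch of such a cycle, using std::shared_ptr as a stand-in for kjs's refcounting (not the actual kjs API):

    #include <cstdio>
    #include <memory>

    struct Node {
        std::shared_ptr<Node> other;         // refcounted reference
        ~Node() { std::puts("destroyed"); }  // never prints below
    };

    int main() {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->other = b;  // a keeps b alive
        b->other = a;  // b keeps a alive: the cycle
        // a and b go out of scope here, but each object still holds a
        // reference to the other, so neither count reaches zero and
        // neither destructor ever runs: the leak described above
    }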

by cbcbcb (not verified)

I'm talking about real garbage collection, not reference counting! libgc implements a generational mark and sweep collector, so it works properly when you've got cycles. Refcounts also have horrible performance implications.

Anyway, looking at the kdecore/malloc stuff it looks pretty trivial to add new memory allocation strategies so you might get a patch when I get round to it.

by David Faure (not verified)

Ah yes. The reason I'm talking about refcounting for kjs is that it has a DOM-like API, so it needs both refcounting _and_ garbage collection with mark & sweep. The dependencies on the DOM nodes (which are refcounted) are what makes the problem a bit difficult.

About a generic malloc implementation that can garbage-collect: hmm - how will it know the dependencies between the objects, i.e. which one should mark which other one?

by cbcbcb (not verified)

I'm not sure why you need both refcounting and mark & sweep - surely if you have the correct set of roots then the mark & sweep strategy will do everything? (I've not looked at the kjs code, so I could be missing something.)

The homepage for the libgc collector is at http://www.hpl.hp.com/personal/Hans_Boehm/gc/ Most distributions should have a libgc package.

Essentially, it's a conservative collector: if a malloc-ed region contains a value which appears to be a pointer (i.e. it has a value within the bounds of the heap, and there is an object on the heap at that location), then it is treated as a reference. The roots of the collection are the static data area and the stack.
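
Using it looks roughly like this (a sketch; assumes the libgc headers are installed, link with -lgc):

    #include <gc.h>     // the Boehm collector's C API
    #include <cstdio>

    int main() {
        GC_INIT();
        for (int i = 0; i < 1000000; ++i) {
            // allocate and never free: the collector scans the stack
            // and static data for apparent pointers and reclaims
            // whatever is unreachable, cycles included
            void* p = GC_MALLOC(64);
            (void)p;
        }
        std::printf("heap grew to %lu bytes\n",
                    (unsigned long)GC_get_heap_size());
        return 0;
    }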

by David Faure (not verified)

We need refcounting because, as I said, the KJS objects (the only ones which are garbage collected) have dependencies on the DOM objects (which are not), and vice versa.
Since the DOM objects are not known to the garbage collector, they will never be asked to mark(), and therefore the related KJS objects would be destroyed at the wrong time.

Anyway, I'm not much of an expert on fully mark & sweep solutions (this one was implemented by Harri, and the refcounting was added by Peter) - I can describe what kjs does and why, but not whether something else could have worked.

Hmm, the idea of a collector trying to guess pointers from raw memory sounds like a good opportunity for strange behaviour to me :) OK, it must be quite rare that binary data (pixmaps, strings, etc.) contains a 4-byte sequence matching one of those precise locations where data was allocated, but when it happens, strange things will follow.

You see, the main reason we can't easily switch to a GC is that in C++, destructors can have code. And they often do. We rely on object A doing this and that when being destroyed. If it's not destroyed anymore, then stuff will break. Moving the code from the destructor to a "destroy()" method changes nothing here - if it's not called, bugs will happen. Therefore a GC wouldn't help. Fixing the code does :)
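
A small sketch of that last point (illustration only; the class is hypothetical):

    #include <cstdio>

    // C++ code routinely relies on the destructor running at a known
    // point (RAII); a collector that finalizes objects "eventually",
    // or never, silently drops this kind of cleanup.
    class TempFile {
        std::FILE* f;
        const char* path;
    public:
        explicit TempFile(const char* p)
            : f(std::fopen(p, "w")), path(p) {}
        ~TempFile() {
            if (f) std::fclose(f);
            std::remove(path);  // cleanup that MUST happen
        }
    };

    void work() {
        TempFile t("example.tmp");
        // ...write to the file...
    }   // ~TempFile runs right here, deterministically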

by Jonathan Miller (not verified)

I may be wrong, but doesn't Perl use reference counting? Perl seems pretty darn fast to me, too. If Perl can do it quickly, why can't a JavaScript implementation do it quickly as well?

by Dieter Nützel (not verified)

In short: which KDE/Konqueror version?

Would you "all" be so kind as to send full system specs and software versions when you report trouble?

Thanks,
Dieter

by GNOME user (not verified)

KDE is written in C++. While this is not necessarily a problem, it can be when Visual Basic reject programmers (which the KDE project is overrun with) do not know enough to avoid important pitfalls that plague C++ software projects; KDE suffers badly from stupid use of auto-incrementing operators and iteration with C++ objects, and from masses of unnecessary allocations and deallocations of memory -- two of the most common problems in C++ software.

Perhaps the most cretinous of all problems is blaming the extremely slow startup times of KDE apps on GCC. The GNOME 1.x releases were hardly svelt (2.x fixes many of these issues), but GNOME is a fashion cat-walk superwaif when compared to KDE's 500lb fat-momma cheese-burger scoffing trailer trash.

One need only look at the fuss over ugly KDE hacks (such as prelinking) used to bandage up the design and coding flaws in the decrepit KDE architecture to see the truth.

by Anonymous (not verified)

Word for word the same flame, based on the same wrong facts, as last week. You were corrected; you didn't learn. Go away!

by Andrew (not verified)

Not only that, but he spelled svelte wrong both times! :-)

Yeah! And C64 BASIC loads really fast on a 1MHz machine. It's written in assembly. Let's use that!!

by CPH (not verified)

LOL!! Very funny.
This troll is quite amusing. :))

by yep (not verified)

It's good to see all the lamers still living in 1990 come out of the woodwork. KDE runs fine on my Athlon XP 2200+. GNOME does too. I know GNOME 3.0 and KDE 4.0 may not run as well, but that's fine... I'll upgrade. KDE 3.0 and GNOME 2.0 don't run at any acceptable speed on my PII 233 router box (which was once my main system)... but KDE 1.0 and GNOME 1.0-1.2 did (and GNOME 1.4 without the hog that is Slow^H^H^H^HNautilus).