XFree86 and KDE

The XFree86 project, one of the major window systems hosting the KDE/Qt platform, released version 4.3.0 almost a month ago. This version incorporates many frequently requested features, such as fully graphical mouse cursors with support for shadows and animation, and on-the-fly screen resolution changes. In other news, amidst some criticism of XFree86's current development model, the project has gained a much-needed public bug tracking system as well as a new forum inviting public comment on the direction of the project.

The latter forum is already very active, and arguably a success, having spawned an hourly updated XFree86 CVS changelog. Unfortunately, in a political shuffle, much-loved X developer Keith Packard, involved with FontConfig, Xft and the X RENDER extension, found himself ousted from the XFree86 Core Team.


Update: 03/24 18:50 by N: A group of KDE and GNOME developers have released an open statement on the matter. The statement doesn't say much that hasn't already been said, though it lays out some key issues in point form.


Update: 03/25 00:36 by N: X.Org chairperson Steve Swales speaks up on behalf of the X.Org members.


Comments

by Rayiner Hashem (not verified)

Unfortunately, Quartz isn't actually OpenGL accelerated. OpenGL is only used to accelerate compositing of windows. Everything *inside* the windows is drawn in software.

by FooBar (not verified)

Which is a good thing to do. OpenGL sucks at 2D operations! Drawing 2D in OpenGL may actually slow things down.

by FooBar (not verified)

"When you open your eyes and look around you'll see two big players on the desktopmarket. Apple has done a great job with their latest OpenGL accelerated userinterface."

Oh no not this overrated stuff again...
Quartz is *slow*. It *feels* fast, but if you compare framerates then Quartz is *slower* than XFree86.
Just using OpenGL doesn't automatically give you drop shadows or uber-l33t performance or whatever. OpenGL is meant for 3D. 2D operations in OpenGL are *slow*. Blitting in OpenGL is *very* slow. Imagine what happens when a high-framerate 2D app tries to blit everything through OpenGL. It would kill performance rather than improving it.
The usage of OpenGL only accelerates drawing in certain cases, but also *slows down* things in other cases.

"Really slick, and one of the major reasons OSX became intresting for the desktop."

While still being stuck at 2% market share...
Windows XP users don't seem to care about "3D-accelerated GUIs".

"I realize that not everyone has a state-of-the-art graphicscard in their system, but that problem could be solved by just letting the OpenGL renderer run in software mode"

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Try running Tux Racer in non-accelerated Mesa (software OpenGL rendering). Be happy if you can manage to exit Tux Racer in 10 minutes using the menu.
Do you want your windows to draw at 0.5 fps too?

"So the question may be: "Is it wise to stop innovation in fear of releasing the past?""

It is not wise to implement hyped designs just because they're hyped. OpenGL is not the ultimate solution to everything!

by Rayiner Hashem (not verified)

Quartz is *slow*.
>>>>>>>
Yes, it is. It's also all done in software. In OS X, Quartz Extreme (the OpenGL acceleration) only comes into play when all the drawing is done and it's time to composite all those transparent, drop-shadowed windows together. All Quartz2D calls are still rendered via the CPU. The big reason Quartz Extreme was such a performance boost for the OS X folks is that, before QE, the CPU was doing all the drawing *and* all the compositing.

Blitting in OpenGL is *very* slow. Imagine what happens when a high-framerate 2D app tries to blit everything through OpenGL. It would kill performance rather than improving it.
>>>>>>>>
Actually, the speed of 2D operations depends on the implementation. On workstation-level implementations, 2D operations are very fast. On many consumer-level implementations, the 2D subset (glCopyPixels, etc.) is (or was) rather slow. I haven't benchmarked 2D operations on modern NVIDIA (or ATI) hardware, but since NVIDIA's OpenGL implementation is good enough to be in SGI workstations (all the Intel VPro chips are modified Quadros), I'd guess 2D operations would be quite fast. Of course, all of this is rather irrelevant: the modern way to do 2D blits is with textured quads. Both the ATI and NVIDIA implementations allow you to use arbitrary texture sizes, so no scaling is involved and the end result is exactly equivalent to blitting, and it is *very* fast.

It should also be noted that modern consumer OpenGL implementations are very good. In the past, consumer-level cards struggled if more than one OpenGL window was in use at a time. On my GeForce4 Go, however, six glxgears windows show only a nominal drop in total framerate over the one-window case, and with three windows there is actually an *increase* in total framerate.
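
For concreteness, a textured-quad "blit" in plain fixed-function OpenGL looks roughly like the sketch below. The function name and parameters are only illustrative; it assumes a current GL context whose viewport matches the window and, for simplicity, a power-of-two image (the arbitrary sizes mentioned above come from extensions such as GL_NV_texture_rectangle).

/* Minimal sketch: draw an RGBA image 1:1 at (x, y) as a textured quad.
 * A real application would upload the texture once and reuse it instead
 * of creating and deleting it on every call. */
#include <GL/gl.h>

void blit_rgba(const void *pixels, int w, int h, int x, int y,
               int win_w, int win_h)
{
    GLuint tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Pixel-for-pixel orthographic projection: no scaling, no filtering,
     * so the result matches a plain 2D blit. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, win_w, win_h, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2i(x,     y);
    glTexCoord2f(1, 0); glVertex2i(x + w, y);
    glTexCoord2f(1, 1); glVertex2i(x + w, y + h);
    glTexCoord2f(0, 1); glVertex2i(x,     y + h);
    glEnd();
    glDisable(GL_TEXTURE_2D);

    glDeleteTextures(1, &tex);
}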

The usage of OpenGL only accelerates drawing in certain cases, but also *slows down* things in other cases.
>>>>>>>>>>
Depends on the implementation, but that shouldn't be the case for any modern implementation. From what I've read at the SVGL and AmayaGL sites, most of SVG can be accelerated by OpenGL, including gradients and filters.

Windows XP users don't seem to care about "3D-accelerated GUIs".
>>>>>>>>>>
Hello? That's one of the huge new parts of Windows Longhorn. It's supposed to have a fully Direct3D accelerated GUI.

Blitting in OpenGL is *very* slow. Imagine what happens when a high-framerate 2D app tries to blit everything through OpenGL. It would kill performance rather than improving it.

Try running Tux Racer in non-accelerated Mesa (software OpenGL rendering). Be happy if you can manage to exit Tux Racer in 10 minutes using the menu.
Do you want your windows to draw at 0.5 fps too?
>>>>>>>>
Now that's the kicker. Software-rendered GL is *very* slow. This is where the "leaving the past behind and daring to innovate" bit comes into play. You can have an optimized software-rendering fallback, as EVAS does, but it won't really run apps that take full advantage of the increased rendering power.

by SadEagle (not verified)

I tried glDrawPixels a while ago on a TNT2 with NVidia drivers. It was horrid, taking nearly a second to blit something like a 640x480 rectangle. Even using XPutImage and synching the GL and 2D streams seemed faster, IIRC!

Textured quads are an option, but they're also a /hack/. Unextended GL only gives you power-of-2 textures - other sizes are vendor extensions - and an implementation only has to support something as small as 64x64, so the user has to do tile cache management. It's probably a big win in many cases when the HW accel is there, but it's also quite possible that the dedicated 2D accel may be more usable on some hardware.

And I think yes, the real kicker is that you can't really find out what will be accelerated and what won't be. That's because OpenGL is the wrong layer, really, IMHO. What we need is something closer to the driver inside the GL implementation, which provides only the accelerated primitives, plus information on what it supports, etc. Most of the upper layers are useless for this, and provide rendering that's too complex for the 2D case. Do we need the T&L pipeline (which is done in S/W for older accelerators anyway)? No. Support for a controllable Z-buffer, etc.? Nope.
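
To make the power-of-2 point concrete, here is a rough sketch of the usual padding workaround (function names and parameters are made up for illustration): round the image up to the next power-of-two texture and draw only the real pixels by adjusting the texture coordinates. The tiling needed for implementations that only guarantee 64x64 is left out.

/* Sketch: upload a non-power-of-2 RGBA image into the next larger
 * power-of-2 texture and report which texture coordinates cover the
 * real pixels.  Illustrative only; assumes a current GL context. */
#include <GL/gl.h>
#include <stdlib.h>
#include <string.h>

static int next_pow2(int n)
{
    int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* Returns the texture id; *su and *sv receive the texture coordinates of
 * the right and bottom edges of the real image. */
GLuint upload_padded(const unsigned char *rgba, int w, int h,
                     float *su, float *sv)
{
    int tw = next_pow2(w), th = next_pow2(h);
    unsigned char *buf = calloc((size_t)tw * th, 4);
    GLuint tex;
    int y;

    /* Copy the image into the top-left corner of the padded buffer. */
    for (y = 0; y < h; y++)
        memcpy(buf + (size_t)y * tw * 4, rgba + (size_t)y * w * 4,
               (size_t)w * 4);

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tw, th, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, buf);
    free(buf);

    *su = (float)w / tw;   /* e.g. a 640x480 image in a 1024x512 texture   */
    *sv = (float)h / th;   /* is drawn with texcoords up to (0.625, 0.9375) */
    return tex;
}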

by Alex (not verified)

How long ago did you try doing that test? I've got a TNT2 as well and a while back I tried using the OpenGL video output from mplayer and it crawled. It was worse than the plain x11 driver, and many, many times worse than xv. A few days ago I tried the gl driver again and this time it worked fine. As far as I could tell, the performance was the same as what xv provided.

by Sad Eagle (not verified)

Gosh, that could have been 1.5-2 years ago, true. At any rate, current versions of mplayer use textured quads, not glDrawPixels, and they have for at least 2 years if WebCVS isn't lying. The difference you saw could be an improvement in the driver, of course -- video obviously uploads lots of data, so it has to load new textures a lot; perhaps NVidia optimized anonymous texture upload a bit. On the other hand, it could also be an improvement to the software YUV -> RGB conversion it needs. In any case, I think it'd be hard for OpenGL to compete with Xv here -- Xv is very much designed for good, fast video, and it can handle YUV natively.
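
As an aside, the software YUV -> RGB step mentioned here boils down to something like the per-pixel conversion below, using the common full-range BT.601 coefficients in fixed point. This is only a sketch: real video is usually limited-range planar YV12, and mplayer's converters use lookup tables and SIMD rather than anything this naive.

/* Per-pixel YUV (YCbCr) -> RGB conversion, full-range BT.601 coefficients
 * scaled by 256.  Illustrative only. */
static unsigned char clamp255(int v)
{
    return (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
}

void yuv_to_rgb(unsigned char y, unsigned char u, unsigned char v,
                unsigned char *r, unsigned char *g, unsigned char *b)
{
    int c = y;          /* luma                 */
    int d = u - 128;    /* Cb, centred on zero  */
    int e = v - 128;    /* Cr, centred on zero  */

    /* R = Y + 1.402 Cr;  G = Y - 0.344 Cb - 0.714 Cr;  B = Y + 1.772 Cb */
    *r = clamp255(c + (359 * e) / 256);
    *g = clamp255(c - (88 * d + 183 * e) / 256);
    *b = clamp255(c + (454 * d) / 256);
}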

by FooBar (not verified)

"Hello? That's one of the huge new parts of Windows Longhorn. It's supposed to have a fully Direct3D accelerated GUI."

Microsoft != its users.
*Microsoft* cares. Most of its *users* do not. Try convincing my grandmother to switch to Longhorn just because it has this "exciting new 3D GUI".

by Idimmu (not verified)

Users content to use legacy hardware should also be content to use legacy software, and stop ruining it for those of us who want something good!

I have a h/w 3D card, and if that would make X really spiffy then I say, so be it!

by mpaque (not verified)

FooBar wrote:
> Quartz is *slow*. It *feels* fast, but if you compare framerates then
> Quartz is *slower* than XFree86.

The Quartz display acceleration system normally limits framerates to the display refresh rate when an app issues multiple sequential flushes. Trying to go faster than that produces no visible effect, as the intermediate frames simply won't appear on screen, and is effectively a waste of bus bandwidth.

Try flushing a window as fast as you can. The system will report a flush rate remarkably close to the refresh rate.

An app will get the best results and highest overall performance in either Quartz or Quartz Extreme by doing needed computations and I/O, then drawing, then issuing a flush. This will interleave GPU and CPU operations.
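
A minimal sketch of that ordering (the function bodies here are placeholder stubs, not real Quartz calls; only the structure matters):

/* Per-frame ordering suggested above: do the CPU work first, then draw,
 * then flush, so the GPU composites one frame while the CPU prepares the
 * next.  The stubs stand in for application and Quartz drawing code. */
static void do_computation_and_io(void) { /* app work for the next frame  */ }
static void draw_window_contents(void)  { /* Quartz2D drawing (CPU-bound) */ }
static void flush_window(void)          { /* hand the finished frame to the
                                             compositor; effectively capped
                                             at the display refresh rate  */ }

int main(void)
{
    int frame;
    for (frame = 0; frame < 100; frame++) {
        do_computation_and_io();
        draw_window_contents();
        flush_window();
    }
    return 0;
}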