GObject design pattern: attached class extension

I wanted to share one recurrent API design that I’ve implemented several times and found useful. I’ve coined it the “attached class extension”. It is not a complete description like the design patterns documented in the Gang of Four book (I didn’t want to write 10 pages on the subject); it is more of a draft. Also, the most difficult part is coming up with good names, so comments are welcome 😉

Intent

Add a GObject property or signal to an existing class, when modifying that class is not possible (because it is part of another module) and creating a subclass is not desirable.

Also Unknown As

“One-to-one class extension”, or simply “class extension”, or “extending class”, or “Siamese class”.

Motivation

First example: in the gspell library, we would like to extend the GtkTextView class to add spell-checking. We need to create a boolean property to enable/disable the feature. Subclassing GtkTextView is not desirable, because the GtkSourceView library already has a subclass, and it should be possible in an application to use both GtkSourceView and gspell at the same time. (Why not implement spell-checking in GtkSourceView, then? Because gspell also supports GtkEntry.)

Before describing the “attached class extension” design pattern, another solution is described, to provide some contrast and thus better understand the design pattern.

Since subclassing is not desirable in our case, as always with Object-Oriented Programming: composition to the rescue! A possible solution is to create a direct subclass of GObject that takes, by composition, a GtkTextView reference (with a construct-only property). But this has a small disadvantage: the application needs to create and store an additional object. One example of such a pattern in the wild is the GtkSourceSearchContext class, which takes a GtkSourceBuffer reference to extend it with search capability. Note that there is a one-to-many relationship between GtkSourceBuffer and GtkSourceSearchContext, since it is possible to create several SearchContext instances for the same Buffer. And note that the application needs to store both the Buffer and the SearchContext objects. This pattern could be named “one-to-many class extension”, or “detached class extension”.

The solution with the “attached class extension” or “one-to-one class extension” design pattern also uses composition, but in the reverse direction: see the implementation of the gspell_text_view_get_from_gtk_text_view() function; it speaks for itself. It uses g_object_get_data() and g_object_set_data_full() to store the GspellTextView object in the GtkTextView. So the GtkTextView object has a strong reference to the GspellTextView; when the GtkTextView is destroyed, so is the GspellTextView. The nice thing about this design pattern is that an application wanting to enable spell-checking doesn’t need to store any additional object: the application normally already has a reference to the GtkTextView, so, by extension, it also has access to the GspellTextView. With this implementation, a GtkTextView can store only one GspellTextView, so it is a one-to-one relationship.
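
To illustrate, here is a minimal sketch of such a getter. It shows the general shape of the pattern, not the exact gspell code; the data key string and the “view” construct-only property name are assumptions:

GspellTextView *
gspell_text_view_get_from_gtk_text_view (GtkTextView *gtk_view)
{
  GspellTextView *gspell_view;

  g_return_val_if_fail (GTK_IS_TEXT_VIEW (gtk_view), NULL);

  gspell_view = g_object_get_data (G_OBJECT (gtk_view), "gspell-text-view");

  if (gspell_view == NULL)
    {
      gspell_view = g_object_new (GSPELL_TYPE_TEXT_VIEW,
                                  "view", gtk_view,
                                  NULL);

      /* The GtkTextView now owns the GspellTextView: when the
       * GtkTextView is destroyed, the GspellTextView is unreffed
       * along with it.
       */
      g_object_set_data_full (G_OBJECT (gtk_view),
                              "gspell-text-view",
                              gspell_view,
                              g_object_unref);
    }

  return gspell_view;
}

Calling the function a second time returns the same instance, which is what makes the relationship one-to-one.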

Other Examples

I’ve applied this design pattern several other times in gspell, Amtk and Tepl. To give a few other examples:

  • GspellEntry: adding spell-checking to GtkEntry. GspellEntry is not a subclass of GtkEntry because there is already GtkSearchEntry.
  • AmtkMenuShell, which extends GtkMenuShell to add convenience signals. GtkMenuShell is the abstract base class of GtkMenu and GtkMenuBar. The convenience signals must work with any GtkMenuShell subclass.
  • AmtkApplicationWindow, an extension of GtkApplicationWindow to add a statusbar property. Subclassing GtkApplicationWindow in a library is not desirable, because several libraries might want to extend GtkApplicationWindow and an application needs to be able to use all those extensions at the same time (the same applies to GtkApplication).

Re: Consider the maintainer

I’ve read this LWN article: Consider the maintainer. It was a great read, and I want to share my thoughts, based on my experience being a maintainer (or helping with the maintenance) of several GNOME modules.

GNOME has a lot of existing code, but let’s face it, it also has a lot of bugs (just look at Bugzilla; the code also contains a lot of not-yet-reported bugs). For a piece of software to be successful, I’m convinced that it has to be stable, mostly bug-free. Stability is not the only property of successful software, but without it the software has far less chance of being successful in the long run (after the hype wave is gone and the harsh reality resurfaces).

There is a big difference between (a) writing a feature that is in reality full of bugs, and (b) writing the same feature “mostly bug-free” (targeting bug-free code). The latter certainly takes twice the time, probably more. The last 10% towards perfection are the most difficult.

Paolo Borelli likes to explain that there are two kinds of developers: the maintainers, and the developers who prefer to write crazy-new-experimental features (with a gray scale in-between). It is similar to the difference between useful tasks and interesting tasks that I talked about in a previous blog post: some useful tasks, like writing unit tests, are not terribly interesting to do, but I think that in general a maintainer-kind-of-developer writes more tests. And Paolo said that I’m clearly on the maintainer side, caring a lot about code quality, stability, documentation, tests, bug triaging, etc.

Reducing complexity

The key, with a lot of existing but not perfect code, is to reduce complexity:

  • Improving the coding style for better readability;
  • Doing lots of small (or less-small) refactorings;
  • Writing utility classes;
  • Extracting from a big class a set of smaller classes so that the initial class delegates some of its work;
  • Writing re-usable code, by writing a library, and documenting the classes with GTK-Doc;
  • Etc.

Even for an application, it is useful to write most of the code as an internal library, documented with GTK-Doc. For new contributors, browsing the classes in Devhelp is such a nice way to discover and understand the codebase (even if the contributor already has a lot of experience with GLib/GTK+).

Another “maintainer task” that I’ve often done: when I start contributing to a certain class, I read the whole code of that class, trying to understand every line, doing lots of small refactorings along the way, simplifying the code, using new GLib or GTK+ APIs, etc. When doing this exercise, I have often discovered (and fixed) bugs that were not reported in the bug tracker. Then, to achieve what I wanted to do initially, with a much better knowledge of the code, I know how to do it properly, not with the kind of quick hack doing the minimal amount of change that I sometimes see passing. As a result the code has fewer bugs, there is less chance of introducing new bugs, and the code is easier to understand and thus more maintainable. There are no secrets: it takes more time to do that, but the result is better.

Some books were also very useful to me; of course I didn’t read them all at once, practicing is also important. Nowadays I read approximately one computer science book per year.

About new contributors and code reviews

When I started contributing to GtkSourceView several years ago, I had already developed a complete LaTeX editor based on GtkSourceView (by myself), had read several of those books (most importantly Code Complete) and applied what I had learned, and I already had a lot of experience with GTK+. So starting to contribute to GtkSourceView was easy: my patches were accepted easily, and I think it was not too much work for the reviewers. I then became a co-maintainer.

Contrast this with all the newbies wanting to contribute to GNOME for the first time, without any experience with GLib/GTK+. They don’t even know how to contribute or how to compile the code, and they probably don’t know the command line or git well, etc. So if a maintainer wants to help those newcomers, it takes a lot of time. I think this is partly a problem of documentation (which I’m trying to solve with this guide on GLib/GTK+). But even with good documentation, if the new contributor needs to learn GTK+ for the first time, it will require too much time from the maintainer. What I would suggest is for newcomers to start by writing a new application on their own; for that, a list of ideas for missing applications would be helpful.

This is maybe a little controversial, but the talk Consider the maintainer was also controversial, suggesting for instance: “Maintainers should be able to say that a project is simply not accepting contributions, or to limit contributors to a small, known group of developers.”

When a company wants to hire a developer, it can choose the best candidate, or, if no candidate fits, keep the existing team as-is. In Free Software, anyone can send a patch; sometimes it takes a lot of time to explain everything, and then after a short while the contributor never comes back. Remember also the well-known fact that adding people to a late project makes it later (usually; there are exceptions).

Another interesting excerpt, from Hackers and Painters (Paul Graham):

I think this is the right model for collaboration in software too. Don’t push it too far. When a piece of code is being hacked by three or four different people, no one of whom really owns it, it will end up being like a common-room. It will tend to feel bleak and abandoned, and accumulate cruft. The right way to collaborate, I think, is to divide projects into sharply defined modules, each with a definite owner, and with interfaces between them that are as carefully designed and, if possible, as articulated as programming languages.

Other topics

I could talk about other topics, such as the lack of statistics (I don’t even know how many people run the code I write!) or avoiding sources of endless maintenance burden (example: the GtkSourceView syntax highlighting definition files; their maintenance could clearly be better distributed, with one maintainer per *.lang file). But this blog post is already quite long, so I’ll not expand on those topics.

In short, there is clearly food for thought, and room for improvement in how we work to get more things done.

Smooth transition to new major versions of a set of libraries

With GTK+ 4 in development, it is a good time to reflect on some best practices for handling API breaks in a library, while providing a smooth transition for the developers who will want to port their code.

But this is not just about one library breaking its API. It’s about a set of related libraries all breaking their API at the same time, like what will happen in the near future with (at least a subset of) the GNOME libraries, in addition to GTK+.

Smooth transition, you say?

What am I implying by “smooth transition”, exactly? If you know the principles behind code refactoring, the goal should be obvious: making small changes in the code, one step at a time, and – more importantly – being able to compile and test the code after each step. Not one huge commit, or a branch with a lot of un-testable commits.

So, how to achieve that?

Reducing API breaks to the minimum

When developing a non-trivial feature in a library, designing a good API is a hard problem. Quite often, once an API is released and marked as stable, we see possible improvements several years later. So what is usually done is to add a new API (e.g. a new function) and deprecate an old one. For the next major version of the library, all the deprecated APIs are removed, to simplify the code. So far so good.

Note that a deprecated API still needs to work as advertised. In a lot of cases, we can just leave the code as-is. But in some other cases, the deprecated API needs to be re-implemented in terms of the new API, usually for a stateful API where the state is stored only in terms of the new API.
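
A hypothetical sketch of what such a re-implementation can look like (all names are invented for the example):

/* Deprecated setter, re-implemented in terms of the new API so that
 * the state is stored in only one place.
 */
void
foo_view_set_show_margin (FooView  *view,
                          gboolean  show_margin)
{
  foo_view_set_margin_mode (view,
                            show_margin ? FOO_MARGIN_MODE_FIXED
                                        : FOO_MARGIN_MODE_NONE);
}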

And this is one case where library developers may be tempted to introduce the new API only in the new major version of the library, removing the old API at the same time to avoid having to adapt the old API’s implementation. But please, if possible, don’t do that! It would force an application to migrate to the new API at the same time as dealing with the other API breaks, which is what we want to avoid.

So, ideally, a new major version of a library should only remove the deprecated API, without other API breaks. Or, at least, the list of other, “real” API breaks should be reduced to the minimum.

Let’s look at another example: what if you want to change the signature of a function, for example adding or removing a parameter? This is an API break, right? So you might be tempted to defer that API break to the next major version. But there is another solution! Just add a new function, with a different name, and deprecate the first one. Coming up with a good name for the new function can be hard, but it should just be seen as the function “version 2”. So why not simply append a “2” to the function name, like some Linux system calls: umount() -> umount2(), renameat() -> renameat2(), etc.? I admit such names are a little ugly, but a developer can port a piece of code to the new function with one (or several) small, testable commit(s). The new major version of the library can rename the v2 function back to the original name, since the function with the original name was deprecated and thus removed. It’s a small API break, but trivial to handle: it’s just a function rename (git grep or the compiler is your friend).
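
In a C header, such a deprecation can look like the following sketch. foo_frobnicate() and its parameters are invented for the example; G_DEPRECATED_FOR() is an existing GLib macro:

/* Old function, kept working for the whole major version. */
G_DEPRECATED_FOR (foo_frobnicate2)
void    foo_frobnicate  (FooObject *object);

/* "Version 2" of the function, here with an additional parameter.
 * In the next major version it can be renamed back to foo_frobnicate().
 */
void    foo_frobnicate2 (FooObject *object,
                         gboolean   recurse);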

GTK+ timing and relation to other GNOME libraries

GTK+ 3.22, as the latest GTK+ 3 version, came as a bit of a surprise. It was announced quite late in the GTK+/GNOME 3.20 -> 3.22 development cycle. I don’t criticize the GTK+ project for that, the maintainers have good reasons behind the decision (experimenting with GSK, among other things). But – if we don’t pay attention – this could have a subtle negative fallout on higher-level GNOME libraries.

Those higher-level libraries will need to be ported to GTK+ 4, which will require a fair amount of code changes and might in turn force them to break their own API. So a new major version will also be released for those libraries, removing their own share of deprecated API and doing other API breaks. Nothing abnormal so far.

If you are a maintainer of one of those higher-level libraries, you might have a list of things you want to improve in the API: some corners that you find a little ugly, but for which you never took the time to add a better API. So you think “now is a good time”, since you’ll release a new major version. This is where it can become problematic.

Let’s say you released libfoo 3.22 in September. If you follow the new GTK+ numbering scheme, you’ll release libfoo 3.90 in March (if everything goes well). But remember: porting an application to libfoo 3.90/4.0 should be as smooth as possible. So instead of introducing the new API directly in libfoo 3.90 (and removing the old, ugly API at the same time), you should release one more version based on GTK+ 3 – libfoo 3.24 – to reduce the API delta between libfoo-3 and libfoo-4.

So the unusual thing about this development cycle is that, for some libraries, there will be two new versions in March (excluding the micro/patch versions) or, alternatively, one new version released in the middle of the development cycle. That’s what will be done for GtkSourceView, at least (the first option), and I encourage other library developers to do the same if they are in the same situation (wanting to get rid of APIs that were not yet marked as deprecated in GNOME 3.22).

Porting, one library at a time

If each library maintainer has reduced the real API breaks to the minimum, this greatly eases the work of porting an application (or higher-level library).

But consider the case where (1) multiple libraries all break their API at the same time, (2) they are all based on the same main library (in our case GTK+), and (3) the new major versions of those other libraries all depend on the new major version of the main library (in our case, libfoo 3.90/4.0 can be used only with GTK+ 3.90/4.0, not with GTK+ 3.22). Then porting an application is again a mess – except with the following good practice!

The process is simple, but must be done in a well-defined order. So imagine that libfoo 3.24 is ready to be released (you can either release it directly, or create a branch and wait until March to do the release, to follow the GNOME release schedule). What are the next steps?

  • Do not port libfoo to GTK+ 3.89/3.90 directly, stay at GTK+ 3.22.
  • Bump the major version of libfoo, making it parallel-installable with previous major versions (see the build sketch after this list).
  • Remove the deprecated API and then release libfoo 3.89.1 (development version). With a git tag and a tarball.
  • Do the (hopefully few) other API breaks and then release libfoo 3.89.2. If there are many API breaks, more than one release can be done for this step.
  • Port to GTK+ 3.89/3.90 for the subsequent releases (which may force other API breaks in libfoo).
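
As announced in the list above, here is a rough sketch of what parallel installability means in practice with the Autotools: the API version is part of every installed name. All the names below are invented for the example:

# Makefile.am (simplified): the "-4" suffix in each installed name
# lets libfoo-4 coexist with an installed libfoo-3.
lib_LTLIBRARIES = libfoo-4.la
libfoo_4_la_SOURCES = foo-view.c foo-utils.c

libfoo_4_includedir = $(includedir)/libfoo-4
libfoo_4_include_HEADERS = foo.h

pkgconfigdir = $(libdir)/pkgconfig
pkgconfig_DATA = libfoo-4.pc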

The same for libbar.

Then, to port an application:

  • Make sure that the application doesn’t use any deprecated API (look at compilation warnings).
  • Test against libfoo 3.89.1.
  • Port to libfoo 3.89.2.
  • Test against libbar 3.89.1.
  • Port to libbar 3.89.2.
  • […]
  • Port to GTK+ 3.89/3.90/…/4.0.

This results in smaller, testable commits. You can compile the code, run the unit tests, run other small interactive/GUI tests, and run the final executable – all of that in finer-grained steps. It is not hard to do, provided that each library maintainer has followed the above steps in the right order, with the git tags and tarballs available so that application developers can compile the intermediate versions. Alongside a comprehensive (and comprehensible) porting guide, of course.

For a practical example, see how it is done in GtkSourceView: Transition to GtkSourceView 4. (It is slightly more complicated, because we will change the namespace of the code from GtkSource to Gsv, to stop stomping on the Gtk namespace).

And you, what is your list of library development best-practices when it comes to API breaks?

PS: This blog post doesn’t really touch on the subject of how to design a good API in the first place, to avoid the need to break it. Maybe that will be the subject of a future blog post. In the meantime, this LWN article (Designing better kernel ABIs) is interesting. For a user-space library there is more freedom: making a new major version parallel-installable (even every six months if needed, as is done in the Gtef library, which can serve as an incubator for GtkSourceView); writing small copylibs/git submodules before integrating the feature into a shared library; and a few other ways. Plus a list of book references that help with designing object-oriented code and APIs.

Thoughts on the Linux Mint X-Apps forks

You may be aware that Linux Mint has forked several GNOME applications, either directly from GNOME (Totem -> Xplayer, Evince -> Xreader, Eye of GNOME -> Xviewer), or indirectly via MATE (gedit -> pluma -> XEd).

GNOME is like the Debian of the Linux desktops: a base from which others derive and fork. But is that a good thing? In the current state of the code, I don’t think so, and I’ll explain why, along with a solution: creating more shared libraries.

At the end of the day, it’s just a matter of design and usability concerns. We can safely say that the main reason behind the forks is that the Linux Mint developers don’t like the new design of GNOME applications with a GtkHeaderBar.

And there are perfectly valid reasons to not like headerbars. For gedit for example, see the list of usability regressions at this wiki page.

Currently the trend is the GtkHeaderBar, but what will it be in 5 or 10 years? Let’s face it: here, GNOME is just following the new trend that came with smartphones and tablets.

So, a GNOME application developer needs to know that:

  • A GUI is an ever-changing thing; just like the clothes you bought last year are already obsolete, right?
  • When the GUI changes too much, other developers don’t like it and fork the project. For valid reasons or not, this doesn’t matter.

The four X-Apps forks account for roughly 200k lines of code. In the short term it works: Linux Mint has apps with a traditional UI. But porting the code to GTK+ 4 will be another beast, because the four X-Apps still use the deprecated GtkUIManager and GtkAction APIs, among other things.

But when we look at the codebase, there is a lot of code that could be shared between a GNOME app and its fork(s). So there is a solution: creating more shared libraries. The shared libraries would contain the backend code, of course, but also some basic blocks for the UI. The application would just need to glue things together: assembling objects, binding GObject properties to GSettings, creating the main GtkWindow and a few other things.
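
As an illustration of that glue code, binding a GObject property to GSettings is a one-liner with g_settings_bind(). The schema ID, key and property names below are invented for the example:

GSettings *settings;

settings = g_settings_new ("org.example.TextEditor");

/* Keep the view's "show-line-numbers" boolean property and the
 * corresponding GSettings key synchronized, in both directions.
 */
g_settings_bind (settings, "show-line-numbers",
                 view, "show-line-numbers",
                 G_SETTINGS_BIND_DEFAULT);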

The difference would be that instead of forking 200k lines of code, it would be forking maybe 20k lines, which is more manageable to maintain in the long term.

In the case of gedit, making its code more re-usable is exactly what I have been doing for several years, but for another reason: to be able to easily create specialized text editors or small IDEs.

Besides avoiding code duplication, creating a shared library has the nice side effect that the code is (usually) much better documented, and with an API browser like Devhelp it’s a breeze to discover and understand a new codebase; it gives a nice overview of the classes. It’s of course possible to have such documentation for application code too, but in practice few developers do it, although it would be a big step towards lowering the barrier to entry for newcomers.

When untangling some code from an application and putting it in a shared library, it is also easier to make the code unit testable (and unit tested!), or at least to write a mini interactive test in the case of frontend code. This makes the code more stable, closer to bug-free, and thus the software more successful.

Developing a shared library doesn’t necessarily mean providing backward compatibility for 10 years. Nothing prevents you from bumping the major version of the library every 6 months if needed, making the new version parallel-installable with the previous major versions, so that applications are not forced to update their code as soon as there is an API break.

Creating a library is more difficult, and API design is hard, but in my opinion it is worth it. GNOME is not only a desktop environment with an application suite, it is also a development platform.

Doing things that scale

In the software world, and with the internet, we can do a lot of things that scale.

Answering a user question on IRC doesn’t scale: only one person and a few lurkers will benefit from it. Answering a user question on a mailing list scales a little better, since the answer is archived and can be searched for. What really scales is improving the reference manual instead. Yes, you know, documentation. That way, normally, the question will not surface again. You write the documentation once and N readers benefit from it, regardless of N. It scales!

Another example: writing an application versus writing a library. Writing an application or end-user software already scales, in the sense that the programmer writes the code once, but it can be copied and installed N times for almost the same cost as one time. Writing a library scales more, since it can benefit N programs, and thus – loosely speaking, if each program has on the order of N users – N² users. It’s no longer linear, it’s quadratic! The lower-level the library, the more it scales.

Providing your work under a free/libre license scales even more, since it can be re-distributed freely on the internet. There are start-ups in the world that reinvent the wheel by writing proprietary software and providing technical support in their region only. By writing free software instead, other start-ups can pop up in other regions of the world and start contributing to your codebase (if it’s a copyleft license, that is). For the advancement of humankind, free software is better, since it makes it easier to avoid reinventing the wheel. Provided that it’s high-quality software.

This can go on and on. You get the picture. But doing something that scales is generally more difficult. It takes more time in the short term, but in the long term everybody wins.

API vs ABI

I repeatedly see people making this mistake, so a little reminder doesn’t hurt.

  • API: Application Programming Interface
  • ABI: Application Binary Interface

The difference can be easily explained by knowing what to do with some code when the API or ABI of a library that the code depends on breaks:

  • If only the ABI breaks, then you just need to re-compile the code.
  • If the API breaks, you need to adapt the code.
  • When the code doesn’t compile anymore, you need to adapt it, so it’s both an API and ABI break.
  • When the code still compiles fine but you need to adapt it, it’s only an API break.

Example of an ABI break only: when the size of a public struct changes.

Example of an ABI and API break: when a function is renamed.

Example of an API break only: CSS in GTK+ 3.20, or when a function doesn’t do the same as what was documented before (in an incompatible way).
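
To make the struct example concrete, here is a small hypothetical sketch:

/* An ABI break that is not an API break: a field is appended to a
 * public struct. sizeof (struct _FooFrame) changes, so binaries
 * compiled against the old header must be re-compiled; but the
 * source code of those programs still compiles unchanged.
 */
struct _FooFrame
{
  int x;
  int y;
  int width;
  int height;
  int border_width; /* field appended in a later version */
};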

That’s it.

Libtool convenience library to unit test private functions

Only the public API of a library is exported, so when a program is dynamically linked against the DSO (Dynamic Shared Object), it can use only the public functions.

So how do you unit test the private classes of your library?

There is a simple solution: have a Libtool convenience library that contains the whole code of the library, but without any -export-symbols* linker flag, so that all functions remain accessible. You then link that *.la file both to your unit tests and to the real library, where you add the appropriate -export flag.

You can see that in action in GtkSourceView and gspell. The setup looks roughly like the following sketch (simplified, with invented file names, not the exact gspell build files):
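
# In foo/Makefile.am:

# Convenience library (not installed): nothing is hidden, so the unit
# tests can call private functions.
noinst_LTLIBRARIES = libfoo-core.la
libfoo_core_la_SOURCES = foo-view.c foo-private-utils.c

# The real, installed library: it links the convenience library and
# exports only the public symbols.
lib_LTLIBRARIES = libfoo-1.0.la
libfoo_1_0_la_SOURCES =
libfoo_1_0_la_LIBADD = libfoo-core.la
libfoo_1_0_la_LDFLAGS = -export-symbols-regex "^foo_.*"

# In testsuite/Makefile.am, each test links the convenience library:
test_foo_view_LDADD = $(top_builddir)/foo/libfoo-core.la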

When you run make in verbose mode (make V=1), you can see that the libtool commands statically link the convenience library (a file with the *.a extension) into the unit test programs and into the DSO.

That way the *.c files are compiled only once, and adding a unit test to the testsuite/Makefile.am is straightforward.

Changing quickly between a final and derivable GObject class

When doing code refactorings, it is sometimes desirable to change between a final and a derivable GObject class. So it is important that this operation can be done quickly, like every other small refactoring that you usually do in your code.

In fact, when refactoring some code, you sometimes want to quickly try something to see if the result is better. But if a certain refactoring task takes a lot of manual and boring steps, (1) it’s not a pleasure to do, and (2) if the result (after other refactorings) is not what you want, you’ve wasted your time.

To evolve in the right direction, code needs to be easily modifiable.

With the convention of putting GObject instance variables in the Private struct for derivable types, and in the object struct for final types, changing between the two is currently quite burdensome (because there are no tools to do it quickly):

/* For a final type */
gint
my_object_get_foo (MyObject *object)
{
  return object->foo;
}

/* For a derivable type */
gint
my_object_get_foo (MyObject *object)
{
  MyObjectPrivate *priv = my_object_get_instance_private (object);

  return priv->foo;
}

Maybe in a library like GTK+, changing between a final and a derivable type is not a common operation. But in an application it can be more common. For example, when you want to move a class from an application to a library (internal or external), one way to do it is to move the re-usable code to the library and keep the app-specific code in a subclass. In that case, the class in the library needs to be derivable!

There is, though, another convention that can be used with GObject: always use a Private struct, even for final types.

But calling my_object_get_instance_private() in each function is quite cumbersome… With the old convention of having a 'priv' field in the object struct to access the Private struct, you don’t have that problem. For example:

/* Old convention: have a 'priv' field in the object struct. */
gint
my_object_get_foo (MyObject *object)
{
  return object->priv->foo;
}
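
For completeness, the 'priv' field of the old convention is typically set up like this (a sketch, assuming G_DEFINE_TYPE_WITH_PRIVATE is used):

struct _MyObject
{
  GObject parent;

  /* Assigned once in init; cheap to access afterwards. */
  MyObjectPrivate *priv;
};

static void
my_object_init (MyObject *object)
{
  object->priv = my_object_get_instance_private (object);
}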

Although for a library it’s better not to have the 'priv' field, I tend to prefer the latter code.

TL;DR: writing and refactoring GObject code in C can be quite painful; we need more tools. In the meantime, just following a different convention can make a big difference.

Edit: if you’ve read this blog post, you might think “just use Vala or Python or JavaScript! Problem solved!”, but I still prefer the C language for GNOME development, for reasons explained in this document. And for developing a library, C is the de facto language in GNOME.

Object-oriented design best practices

Here is another book review, this time about object-oriented design.

Programming Best Practices

We can learn from our own experience, we can use our common sense, and we can learn from more experienced developers when contributing to an existing project. But to speed things up, reading a good old book is also a good idea.

For programming in general, not specifically for object-oriented design, see the blog post About code quality and maintainability.

Object-Oriented Design Heuristics

For OO design, the well-known Design Patterns book is often recommended. But most of the design patterns are not applied that often, or are useful only for big applications and frameworks. How often do you create an Abstract Factory or a Facade?

On the other hand, the book Object-Oriented Design Heuristics, by Arthur Riel, discusses in detail more than 60 guidelines – or heuristics – that can be applied to all projects, even a small codebase with three classes.

Example of a heuristic:
Keep related data and behavior in one place.

If a getter method is present in a class, why is the data used outside the class? The behavior implemented outside the class can maybe be implemented directly in the class instead, so the getter can be replaced by a higher-level method.
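
Here is a small, self-contained illustration of that heuristic (plain C, with invented names):

#include <stdio.h>
#include <stdbool.h>

typedef struct
{
  bool modified;
  const char *filename;
} Document;

/* Before: with getters for 'modified' and 'filename', every caller
 * re-implements the "should we save?" logic:
 *
 *   if (document_get_modified (doc) && document_get_filename (doc) != NULL)
 *     document_save (doc);
 *
 * After: the behavior lives next to the data, as a higher-level method.
 */
static void
document_save_if_needed (Document *doc)
{
  if (doc->modified && doc->filename != NULL)
    {
      printf ("Saving %s\n", doc->filename);
      doc->modified = false;
    }
}

int
main (void)
{
  Document doc = { true, "notes.txt" };
  document_save_if_needed (&doc);
  return 0;
}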

Another example:
Distribute system intelligence horizontally as uniformly as possible, that is, the top-level classes in a design should share the work uniformly.

Following this heuristic, god classes should be avoided. An example of a class that can be split in two is one where some methods use only attributes A and B, while other methods use only attributes C and D. In other words, if a class has non-communicating behavior – that is, methods that operate on a proper subset of the data members of the class – then the class can be split.

Not all the heuristics can be followed all the time; that’s why they are called “heuristics”. But when a heuristic is violated, there should be good reasons behind that decision, and the developer should know what he or she is doing.

Event-driven programming

One missing concept in the book is event-driven programming.

With GObject, we are lucky to have a powerful signal system. It is a great way to decouple classes: when an object emits a signal, its class doesn’t know who receives it.
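
A minimal sketch with a real GTK+ signal: GtkTextBuffer simply emits "modified-changed" and knows nothing about the connected callback.

static void
on_modified_changed (GtkTextBuffer *buffer,
                     gpointer       user_data)
{
  /* React to the change here. GtkTextBuffer has no knowledge
   * of this code.
   */
}

/* Somewhere during setup: */
g_signal_connect (buffer, "modified-changed",
                  G_CALLBACK (on_modified_changed), NULL);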

Conclusion

Programming best practices are useful for maintaining good code quality. Object-Oriented Design Heuristics will give you the knowledge to write better object-oriented code, whether for small projects, big projects or API design in a library.

{specialized, general-purpose} × {text editors, IDEs}

Some thoughts about text editors, IDEs, specialized or general-purpose applications.

                              Several languages   Other tasks   Plugins
Specialized text editor       No                  No            ?
General-purpose text editor   Yes                 No            Yes
Specialized IDE               No                  Yes           ?
General-purpose IDE           Yes                 Yes           Yes

The above table summarizes how I view the text editors and IDEs field.

Several languages

A specialized application focuses on only one programming language, or several languages used together. A general-purpose application can support completely different programming languages and platforms.

Other tasks

A text editor focuses only on editing text or source code, while making the writing of that code as easy as possible. An IDE integrates other features: building the source code, running it, writing commits, an integrated debugger, and so forth.

Plugins

For obvious reasons, a general-purpose application should have a plugin system. For a specialized application, the need for plugins is less obvious.

Two extremes

The general-purpose IDE with plugins is one extreme. The user interface must be generic enough to suit plugins and different programming languages, and it takes more time to learn and configure the application.

The specialized text editor without plugins is the other extreme. The developer has full control over the user interface, so the application has potentially better usability. The application requires less time to learn and configure. Ideally, it should Just Work.

Not reinventing the wheel for specialized applications

Write libraries for the common features! Or at least ensure that your code is reusable. This is not limited to text editors and IDEs; I’m sure you can apply the same principles to your favorite field.