Re: Consider the maintainer

I’ve read this LWN article: Consider the maintainer. It was a great read, and I want to share my thoughts, based on my experience of maintaining (or helping to maintain) several GNOME modules.

GNOME has a lot of existing code, but let’s face it, it also has a lot of bugs (just look at Bugzilla, and the code also contains a lot of not-yet-reported bugs). For a piece of software to be successful, I’m convinced that it has to be stable, mostly bug-free. Stability is not the only property of successful software, but without it the software has far less chance of being successful in the long run (after the hype wave is gone and the harsh reality resurfaces).

There is a big difference between (a) writing a feature that is in reality full of bugs, and (b) writing the same feature, but “mostly bug-free” (targeting bug-free code). The latter certainly takes twice the time, probably more. The last 10% towards perfection are the most difficult.

Paolo Borelli likes to explain that there are two kinds of developers: the maintainers and the developers who prefer to write crazy-new-experimental features (with a gray scale in between). It is similar to the difference between useful tasks and interesting tasks that I talked about in a previous blog post: some useful tasks like writing unit tests are not terribly interesting to do, but I think that in general a maintainer-kind-of-developer writes more tests. And Paolo said that I’m clearly on the maintainer side, caring a lot about code quality, stability, documentation, tests, bug triaging, etc.

Reducing complexity

The key, with a lot of existing but not perfect code, is to reduce complexity:

  • Improving the coding style for better readability;
  • Doing lots of small (or less-small) refactorings;
  • Writing utility classes;
  • Extracting from a big class a set of smaller classes, so that the initial class delegates some of its work (see the sketch after this list);
  • Writing re-usable code, by writing a library, and documenting the classes with GTK-Doc;
  • Etc.
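
To illustrate the delegation point above, here is a minimal sketch in C (the types and functions are purely hypothetical, not taken from an actual module): a big Document type delegates word counting to a small, separately testable helper.

#include <glib.h>

/* Hypothetical example: the "big" Document type delegates word counting
 * to a small WordCounter helper that can be tested on its own. */

typedef struct {
    guint count;
} WordCounter;

static void
word_counter_scan (WordCounter *counter, const gchar *text)
{
    gchar **words = g_strsplit_set (text, " \t\n", -1);
    for (gchar **w = words; *w != NULL; w++) {
        if (**w != '\0')
            counter->count++;
    }
    g_strfreev (words);
}

typedef struct {
    gchar *text;
    WordCounter counter;   /* the extracted helper */
} Document;

/* The Document simply delegates the work to the helper. */
static guint
document_get_word_count (Document *doc)
{
    doc->counter.count = 0;
    word_counter_scan (&doc->counter, doc->text);
    return doc->counter.count;
}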

Even for an application, it is useful to write most of the code as an internal library, documented with GTK-Doc. Browsing the classes in Devhelp is a nice way for new contributors to discover and understand the codebase (even if the contributor already has a lot of experience with GLib/GTK+).
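
As an illustration, here is what a GTK-Doc comment could look like for a hypothetical internal function (the name is made up); Devhelp then renders it as browsable documentation:

/**
 * my_app_utils_strip_comments:
 * @text: the text to process.
 *
 * Strips the LaTeX comments from @text.
 *
 * Returns: (transfer full): a newly-allocated string. Free with g_free().
 */
gchar *
my_app_utils_strip_comments (const gchar *text);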

Another “maintainer task” that I’ve often done: when I start contributing to a certain class, I read the whole code of that class, trying to understand every line, doing lots of small refactorings along the way, simplifying the code, using new GLib or GTK+ APIs, etc. When doing this exercise, I have often discovered (and fixed) bugs that were not reported in the bug tracker. Then, with a much better knowledge of the code, I know how to achieve what I initially wanted to do properly, not with the kind of quick hack making the minimal amount of change that I sometimes see passing. As a result the code has fewer bugs, there is less chance of introducing new bugs, and the code is easier to understand and thus more maintainable. There is no secret, it takes more time to do that, but the result is better.

Some books that were very useful to me:

Of course I didn’t read all those books at once; practicing is also important. Nowadays I read approximately one computer science book per year.

About new contributors and code reviews

When I started to contribute to GtkSourceView several years ago, I had already developed a complete LaTeX editor based on GtkSourceView (by myself), read several of the above books (most importantly Code Complete) and applied what I learned. I already had a lot of experience with GTK+. So starting to contribute to GtkSourceView was easy, my patches were accepted easily, and I think it was not too much work for the reviewers. I then became a co-maintainer.

Contrast this with all the newbies wanting to contribute to GNOME for the first time, without any experience with GLib/GTK+. They don’t even know how to contribute or how to compile the code, and they probably don’t know the command line or git well, etc. So if a maintainer wants to help those newcomers, it takes a lot of time. I think this is partly a documentation problem (one that I’m trying to solve with this guide on GLib/GTK+). But even with good documentation, if the new contributor needs to learn GTK+ for the first time, it will require too much of the maintainer’s time. What I would suggest is for newcomers to start by writing a new application on their own; for that, a list of ideas of missing applications would be helpful.

This is maybe a little controversial, but the talk Consider the maintainer was also controversial, by suggesting for instance: “Maintainers should be able to say that a project is simply not accepting contributions, or to limit contributors to a small, known group of developers.”

When a company wants to hire a developer, they can choose the best candidate, or if no candidate fits they can also choose to keep the existing team as-is. In Free Software, anyone can send a patch; sometimes it takes a lot of time to explain everything, and then after a short while the contributor never comes back. Remember also the well-known fact that adding people to a late project makes it later (usually; there are exceptions).

Another interesting excerpt, from Hackers and Painters (Paul Graham):

I think this is the right model for collaboration in software too. Don’t push it too far. When a piece of code is being hacked by three or four different people, no one of whom really owns it, it will end up being like a common-room. It will tend to feel bleak and abandoned, and accumulate cruft. The right way to collaborate, I think, is to divide projects into sharply defined modules, each with a definite owner, and with interfaces between them that are as carefully designed and, if possible, as articulated as programming languages.

Other topics

I could talk about other topics, such as the lack of statistics (I don’t even know how many people run the code I write!) or trying to avoid sources of endless maintenance burden (example: the GtkSourceView syntax highlighting definition files, whose maintenance could clearly be better distributed, with one maintainer per *.lang file). But this blog post is already quite long, so I won’t expand on those topics.

In short, there is clearly food for thought and room for improvement in how we work, to get more things done.

gspell and LaTeXila – progress report

In September I launched two small fundraisers for gspell and LaTeXila. Both goals have now been reached, thanks!

I’ve started working on those two projects; here is a progress report.

gspell – adding GtkEntry support

The basic infrastructure is there, and red wavy underlines are inserted to highlight misspelled words. The code is written in such a way that it is unit-testable, and unit tests have been written.

Statistics so far:

21 files changed, 1921 insertions(+), 14 deletions(-)

Next steps:

  • The context menu (right-click menu).
  • Do not check the word currently typed.
  • Better word boundaries, to take into account apostrophes and dashes.

For all those next steps, the idea is to have common code between the GtkTextView and GtkEntry support.

LaTeXila – port to GAction/GMenu?

This progress report is a bit more technical.

To create the menus and toolbars, LaTeXila still uses the deprecated GtkUIManager and GtkAction. What I’ve done this week is investigation work, to figure out whether it is possible to use GMenu (the new way menus are usually created nowadays). GAction is great and will be used, but for GMenu it is more complicated.

The conclusion is that it will not be possible to use GMenu in LaTeXila, because in LaTeXila, when hovering over a menu item, a longer description is displayed in the status bar – which makes the user interface self-discoverable – and this is not possible with GMenu (see this comment in Bugzilla for more details). Thankfully GTK+ has a more basic way to create menus, with GtkMenuBar, GtkMenuItem, etc. So that’s what LaTeXila will use.
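
To give an idea, here is a minimal sketch of how that can be done with the basic widgets (hypothetical menu item and callbacks, not the actual LaTeXila code), using the "select" and "deselect" signals of GtkMenuItem to push/pop a message on a GtkStatusbar:

#include <gtk/gtk.h>

static void
menu_item_selected_cb (GtkMenuItem  *menu_item,
                       GtkStatusbar *statusbar)
{
    const gchar *long_description;
    guint context_id;

    /* The long description was attached to the menu item when the menu was built. */
    long_description = g_object_get_data (G_OBJECT (menu_item), "long-description");
    context_id = gtk_statusbar_get_context_id (statusbar, "menu-hint");
    gtk_statusbar_push (statusbar, context_id, long_description);
}

static void
menu_item_deselected_cb (GtkMenuItem  *menu_item,
                         GtkStatusbar *statusbar)
{
    guint context_id = gtk_statusbar_get_context_id (statusbar, "menu-hint");
    gtk_statusbar_pop (statusbar, context_id);
}

static void
add_build_menu_item (GtkMenuShell *menu_shell,
                     GtkStatusbar *statusbar)
{
    GtkWidget *menu_item = gtk_menu_item_new_with_mnemonic ("_Build");

    g_object_set_data (G_OBJECT (menu_item), "long-description",
                       (gpointer) "Build the current document");

    g_signal_connect (menu_item, "select",
                      G_CALLBACK (menu_item_selected_cb), statusbar);
    g_signal_connect (menu_item, "deselect",
                      G_CALLBACK (menu_item_deselected_cb), statusbar);

    gtk_menu_shell_append (menu_shell, menu_item);
}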

Another reason to create a GtkMenuBar manually is to be able to add a sub-menu listing the recently used files, with GtkRecentChooserMenu. An equivalent doesn’t exist for GMenu; it would need to be implemented.
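
For reference, with the classic widgets a recent-files sub-menu is only a few lines (a sketch; the callback just prints the URI instead of opening the file):

#include <gtk/gtk.h>

static void
recent_item_activated_cb (GtkRecentChooser *chooser,
                          gpointer          user_data)
{
    gchar *uri = gtk_recent_chooser_get_current_uri (chooser);
    g_print ("Open %s\n", uri);  /* the real code would open the file */
    g_free (uri);
}

static void
add_open_recent_submenu (GtkMenuShell *file_menu)
{
    GtkWidget *menu_item = gtk_menu_item_new_with_mnemonic ("Open _Recent");
    GtkWidget *recent_menu = gtk_recent_chooser_menu_new ();

    gtk_menu_item_set_submenu (GTK_MENU_ITEM (menu_item), recent_menu);
    gtk_menu_shell_append (file_menu, menu_item);

    g_signal_connect (recent_menu, "item-activated",
                      G_CALLBACK (recent_item_activated_cb), NULL);
}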

Also, the nice thing about GtkUIManager was that the information was encoded just once for both the menus and the toolbars. For a given menu or toolbar item, the needed information is: the short description, the long description, the icon, the keyboard shortcut and the function to call when the action is activated. With GMenu, some of that information needs to be duplicated to create a toolbar. So that’s another reason not to use GMenu (at least not directly).

So what I will do is create a simplified GtkUIManager based on GAction. In other words, store the information just once, and have a convenient way to create menus and toolbars. To create a menu or toolbar item, all that will be needed is the action name; the required missing information will be fetched from a central store.
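
Here is a minimal sketch of the idea, with purely hypothetical names (this is not a final API): the information is stored once per action in a central table, and creating a menu item then requires only the action name.

#include <gtk/gtk.h>

/* Hypothetical per-action information, stored only once. */
typedef struct {
    const gchar *action_name;        /* e.g. "win.build" */
    const gchar *icon_name;
    const gchar *short_description;  /* label for the menu/toolbar item */
    const gchar *long_description;   /* longer text shown in the statusbar */
    const gchar *accel;              /* e.g. "<Control>b" */
} ActionInfo;

static const ActionInfo action_infos[] = {
    { "win.build", "system-run", "_Build", "Build the current document", "<Control>b" },
    { "win.clean", "edit-clear", "_Clean", "Remove the build files", NULL },
};

static const ActionInfo *
action_info_lookup (const gchar *action_name)
{
    for (guint i = 0; i < G_N_ELEMENTS (action_infos); i++) {
        if (g_strcmp0 (action_infos[i].action_name, action_name) == 0)
            return &action_infos[i];
    }
    return NULL;
}

/* Creating a menu item then requires only the action name. */
static GtkWidget *
create_menu_item (const gchar *action_name)
{
    const ActionInfo *info = action_info_lookup (action_name);
    GtkWidget *menu_item;

    g_return_val_if_fail (info != NULL, NULL);

    menu_item = gtk_menu_item_new_with_mnemonic (info->short_description);
    gtk_actionable_set_action_name (GTK_ACTIONABLE (menu_item), info->action_name);
    /* The icon, accelerator and long description would be handled here too. */
    return menu_item;
}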

gspell and LaTeXila fundraisers – thanks!

The gspell fundraiser has reached its initial goal! So thanks a lot for your support!

Expect GtkEntry support in the next version of gspell, which is planned for March 2017.

I’ve added a second milestone to the gspell fundraiser, because there are a lot of other possible things to do.

The LaTeXila fundraiser is going well too; it is currently at 80%. So thanks to the people who have donated so far!

If you write documents in LaTeX, and care about having a good LaTeX editor, well maintained and well integrated with GNOME, then consider donating to LaTeXila 😉 There will hopefully be other milestones in the future, for example improving the auto-completion (e.g. completing the label argument of \ref commands) or implementing a full-screen mode.

Two Small Fundraisers – on gspell and LaTeXila

We live in a world where it’s increasingly possible to have a part-time job and be paid for other small tasks on the side; think of Uber or Airbnb.

I have a half-time job, and I care about Free Software. So the natural thing to do for me is to find ways to be funded for the contributions I do.

That’s why I was really pleased to hear Alexander Larsson say, at the end of his GUADEC talk about Flatpak:


[…]
And also an interesting thing – I think actually super important – is payment. I want people to be able to pay for Free Software.

This was met with applause. But unfortunately such support in Flatpak and GNOME Software is not going to happen anytime soon – maybe in a few years, with some hope.

In the meantime, I’m launching two small fundraisers!

If this is successful, the idea is to add further milestones, after the work is done on the first one.

gspell

gspell is a library that I created last year to share the spell-checking code between gedit and LaTeXila. The code comes from the gedit spell plugin; as such, only GtkTextView is supported. The goal of the fundraiser is to add support for GtkEntry, a single-line text entry field.

Go to the gspell fundraising page if you’re interested!

LaTeXila

GTK+ 3.22, which will be released this Wednesday, will be the last GTK+ 3 stable version (see this announcement). After that, the deprecated functionality of GTK+ 3 will be removed (in GTK+ 4). LaTeXila is not a new application; it has been developed since 2009. A fundraiser for a new piece of software can concentrate on features. For LaTeXila, what matters more is that it is well maintained, so that it is still relevant in the 2020s. So the fundraiser is about making the code ready for GTK+ 4, at least as a next step.

Code maintenance is not really what we would call interesting, but it is useful. We can see this interesting/useful dichotomy quite often in the computer science world: for a researcher, is proof of correctness interesting? Yes. But in practice it is rarely applied. Is writing unit tests interesting? No, but it is very useful.

Go to the LaTeXila fundraising page if you’re convinced 😉

Introducing gspell, a new spell checking library

As part of the LaTeXila project I’m working on a new spell checking library called gspell.

Some background

At first I wanted to contribute to GtkSpell so that GtkSpell and GtkSourceView work well together, without a dependency on each other. GtkSourceView defines a no-spell-check region; for LaTeX, the region includes the names of LaTeX commands, for example. But GtkSpell didn’t read that region, and the region was available only through the GtkSourceView API. Adding a dependency on GtkSourceView in GtkSpell was not desirable, because many applications use only GtkSpell. Also, a library like GtkSpell could potentially add support for GtkEntry too; with a dependency on GtkSourceView, that wouldn’t be nice for an application that wants only the spell checking for a GtkEntry. The solution was actually really simple: the no-spell-check region is a GtkTextTag. After giving the tag a name and exposing it in the API, it was possible for GtkSpell to look up the tag and read the region.
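
Here is a minimal sketch of what the GtkSpell side can then do (the exact tag name used below is an assumption; the real one is documented in GtkSourceBuffer):

#include <gtk/gtk.h>

static gboolean
word_is_in_no_spell_check_region (GtkTextBuffer     *buffer,
                                  const GtkTextIter *word_start)
{
    GtkTextTagTable *tag_table = gtk_text_buffer_get_tag_table (buffer);

    /* Tag name exposed by GtkSourceView for the "no-spell-check" context
     * class (assumption: check the GtkSourceBuffer documentation). */
    GtkTextTag *tag = gtk_text_tag_table_lookup (tag_table,
        "gtksourceview:context-classes:no-spell-check");

    if (tag == NULL)
        return FALSE;  /* not a GtkSourceBuffer, or no such region */

    return gtk_text_iter_has_tag (word_start, tag);
}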

So the patches for GtkSourceView and GtkSpell were merged, only to notice later that there was a quite annoying text responsiveness problem on long lines (e.g. a wrapped line that takes 5 lines on the screen). And… there was exactly the same problem with the gedit spell plugin: the typed text appeared with a noticeable delay. Fixing that problem was more complicated. The text needs to be spell checked after a timeout, but adding a timeout function means that the remaining region to spell check needs to be tracked, which was the case neither in GtkSpell nor in gedit. Another problem was that GtkSpell needed some code clean-ups anyway. On the other hand the gedit code was in slightly better shape and, more importantly, it had more features. For example gedit has a dialog window to spell check an entire document, one (misspelled) word at a time, whereas GtkSpell only has an “in-line” checker. An in-line checker is not convenient for a very long document with very few misspelled words; in that case the dialog window is more convenient (by the way, it could also be a horizontal bar above or below the text, so that the text is not hidden by the dialog window).
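
A rough sketch of the timeout idea, heavily simplified compared to the real code: the not-yet-checked region is delimited by two GtkTextMarks, and a timeout callback rechecks only that region instead of checking on every keystroke.

#include <gtk/gtk.h>

typedef struct {
    GtkTextBuffer *buffer;
    GtkTextMark *dirty_start;
    GtkTextMark *dirty_end;
    guint timeout_id;
} InlineChecker;

static gboolean
recheck_dirty_region_cb (gpointer user_data)
{
    InlineChecker *checker = user_data;
    GtkTextIter start, end;

    gtk_text_buffer_get_iter_at_mark (checker->buffer, &start, checker->dirty_start);
    gtk_text_buffer_get_iter_at_mark (checker->buffer, &end, checker->dirty_end);

    /* A check_region() function would walk the words in [start, end] and
     * apply or remove the misspelled-word tag. */

    checker->timeout_id = 0;
    return G_SOURCE_REMOVE;
}

/* Called from the "insert-text" / "delete-range" handlers, after extending
 * the dirty region: (re)schedule the check. */
static void
schedule_recheck (InlineChecker *checker)
{
    if (checker->timeout_id == 0)
        checker->timeout_id = g_timeout_add (400, recheck_dirty_region_cb, checker);
}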

So, since the gedit spell plugin’s code had more features, I decided to improve that code base instead. The gedit plugin code also needed some clean-ups, but at least the code architecture was for the most part good. After those refactorings and bug fixes, it was easier to fix the responsiveness problem. Then, after some more “spell shaking” (haha), the code was re-usable by other text editors.

Enter gspell

The gedit spell plugin’s code was then copied into its own repository, called gspell. The library is still under construction, but I hope to get a first version available when GNOME 3.18.0 is released (September 21, according to the schedule). That first version will not have a guaranteed stable API, be warned! It is currently available on my GitHub account, but if things go well, I’ll ask during the next development cycle to get it hosted on gnome.org!

Update: gspell is now hosted on gnome.org, the links above have been updated. The project page is now at https://wiki.gnome.org/Projects/gspell.

What it means for LaTeXila

Basically, the names of LaTeX commands will no longer be highlighted with a red wavy underline. And there will be a dialog window to spell check an entire file. Also, on the LaTeXila side, settings will be stored on a file-by-file basis (to remember the language and whether the in-line checker is activated), and there will be settings in the preferences dialog for the default configuration.

I already have a branch in LaTeXila that uses gspell, and it works pretty well. There is still a bit of work to do, but it should be ready soon. Since it would be a pity not to have it for LaTeXila 3.18, I’ll delay the release of LaTeXila 3.18.0 by a few weeks to give translators time to update the translations.

Thoughts on live previews in LaTeXila

Several years ago I talked about some principles for the user experience of LaTeXila, a GTK+ LaTeX editor for GNU/Linux. The conclusion:

The idea of LaTeXila is to always deal directly with the LaTeX code, while simplifying the writing of this LaTeX code as much as possible. Users don’t need to be LaTeX gurus, but they should understand what happens.

In my opinion this follows the LaTeX philosophy better than programs like LyX. By writing the LaTeX markup directly, you have full control over your document. The idea of LaTeX is to concentrate on the content and the structure of the document, not its layout.

With a live preview, you constantly see the layout… so you’re less focused on the content. As soon as something is wrong in the layout, you’ll want to fix it. This can lead to bad practices, like proceeding by trial and error until the layout is good. LaTeXila tries to avoid that. As in programming, you should understand what you’ve written before the compilation or execution. You must be certain that the code is correct; if you have any doubt, the best thing is to read the documentation, which will save you time when you use the same commands in the future.

Moreover, layout polishing should be done when the content is finished. For instance, it can sometimes happen that a word exceeds the margin, because LaTeX doesn’t know where to place a hyphen to split that word. It is useless to fix this issue when the content isn’t finalized, because if you add or remove some words in the sentence, the problem may well fix itself.

Instead of a live preview, the workflow in LaTeXila is to compile the document from time to time (e.g. when you’ve finished a section) to re-read your text and check that the result is what you expected. A handy feature in that context is the forward and backward search between LaTeXila and Evince, to switch between the *.tex file(s) and the PDF at the corresponding positions, with a simple Ctrl+click.

But there are some special cases where a live preview can be useful, i.e. when more source <-> result cycles are required:

  • A PGF/TikZ figure preview, because in that case the layout is more important.
  • When we do something difficult, like writing a long and tricky math equation. But when it becomes difficult to find our way in the code, an alternative is to improve its readability, by spacing it out, adding comments to separate the sections, etc.

If you have other specific use cases where a live preview is really useful, I would be interested to hear about them. I don’t think “learning LaTeX” requires a live preview; as explained above, this can result in bad practices.

So I think a live preview might be useful for editing one paragraph. A live preview of the whole document is probably less useful. In any case, a live preview should be enabled only temporarily. In LaTeXila we can imagine right-clicking on a paragraph or TikZ figure, selecting the live preview in the context menu, and entering a mode where only that paragraph (or selection) is visible, with the live preview on top/on the right/directly injected into your brain/whatever. Then, when the tricky paragraph is finished, we return to the normal mode with the whole source content.

Search and replace behavior for a text editor

Search and replace is one of those features that have a wide variety of implementations and behaviors across applications. The nice thing with the work I did this summer on GtkSourceView is that you can build whatever behavior you want using the search and replace framework. And that’s exactly what I did for LaTeXila, which has a new behavior for the replace button.

Search and replace in LaTeXila (screenshot)

When we replace an occurrence, we generally have the time; we don’t have a train to catch (or we are already on the train). And it’s better to see how the content looks with the replacement text, to check that everything is fine and make adjustments if required. As a consequence, when you click on the replace button, it just replaces the search match without moving to the next occurrence (which can be far away, in which case we would not be able to see the replaced text).

When you have just replaced an occurrence, no text is selected. If you click a second time on the replace button, it goes to the next occurrence and selects it, so the process can be repeated. Nice, isn’t it?
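
With the GtkSourceView search framework, this behavior boils down to a few lines. A sketch (the exact signatures vary slightly between GtkSourceView versions):

#include <gtksourceview/gtksource.h>

/* Called when the Replace button is clicked. */
static void
replace_button_clicked_cb (GtkSourceSearchContext *search_context,
                           GtkTextBuffer          *buffer,
                           const gchar            *replacement)
{
    GtkTextIter start, end;
    GtkTextIter match_start, match_end;

    gtk_text_buffer_get_selection_bounds (buffer, &start, &end);

    /* If the selection is a search match, replace it in place and stop:
     * gtk_source_search_context_replace() returns FALSE if [start, end]
     * is not an occurrence. */
    if (gtk_source_search_context_replace (search_context, &start, &end,
                                           replacement, -1, NULL))
        return;

    /* Otherwise (e.g. on the second click, when nothing is selected):
     * go to the next occurrence and select it, to repeat the process. */
    if (gtk_source_search_context_forward (search_context, &end,
                                           &match_start, &match_end))
        gtk_text_buffer_select_range (buffer, &match_start, &match_end);
}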

Voilà, that’s all, I just wanted to share this information. It can be interesting for other applications.

My work on GtkSourceView so far

During the GNOME 3.7 development cycle I’ve been busy working on GtkSourceView, a library used by gedit, Anjuta DevStudio, LaTeXila, and other applications.

The main change is that the completion system has been revamped, mainly under the hood.

Completion: user-visible changes

The only user-visible changes are the bug fixes and the new sizing of the calltip window, which is also used for displaying extra information for proposals. In GtkSourceView 2 there was a function for tuning the calltip sizing, but this function was removed when GtkSourceView was ported to GTK+ 3. Now the sizing works correctly without any tuning function: the window simply fits the natural size of its child widget.

Now the user-comprehensible explanation:

Before / After (screenshots of the completion window)

You see that the main completion window is still quite big when there are only a few proposals. For now it’s not possible to change that, but a more compact completion window should be possible in the future.

Completion: developer-visible changes

The completion system is now better documented. A few functions have been deprecated. The code has been simplified (two classes have been almost completely rewritten). There are also some performance improvements, although performance was not really an issue previously. Some statistics on the number of lines:

Before:

$ wc -l gtksourcecompletion*.{c,h} | tail -1
8012 total

After:

$ wc -l gtksourcecompletion*.{c,h} | tail -1
6728 total

Perfection is achieved not when there is nothing more to add, but rather when there is nothing more to take away. (Antoine de Saint-Exupéry)

Other enhancements to GtkSourceView

Paolo and I wrote more unit tests, and code coverage support has been added, to have statistics with nice colors.

Lots of compilation warnings have been fixed, especially for the API documentation. For example, the links to GLib or GTK+ symbols now work in the documentation. Compilation warnings are generally easy to fix; it’s a good way to get involved in a project, I think, if there are no easy bugs to fix.

That’s all folks. Thanks to Paolo, Ignacio and Jesse for their advice and reviews!

Switch from CMake to Autotools

Last week I migrated the build system of LaTeXila from CMake to the Autotools. Here are the reasons.

The GNU Coding Standards

With CMake, some important make targets are missing, for example make uninstall. The main problem is that CMake doesn’t follow the GNU Coding Standards (GCS). The purpose of the GCS, with regard to a build system, is to make a program portable, easy to install, and consistent with the way other software is built and installed.

The GCS has several decades of experience behind it, the standards are well established, and we can trust the GNU hackers to have conceived them well. Those who don’t follow them are doomed to reinvent the wheel: sooner or later they will face the same problems already solved by the GCS and the Autotools…

Following standards is important both for users and packagers. If every piece of software uses a different build system, with different options, etc., it is a nightmare.

Available macros for GNOME applications

Another reason to use the Autotools for a GNOME application is the available macros: for the translations (intltool, ITS Tool), the documentation (yelp), the settings (GSettings), …

Creating a tarball

With the Autotools it is as simple as running make distcheck, and putting some files in EXTRA_DIST or prefixing Automake variables with the dist modifier.

It is more complicated with CMake. CPack can be used, but it is far from automatic. After reading the CPack documentation, I changed my mind and wrote a shell script instead.

Learning the Autotools

The Autotools are not as much of an Autopain as people generally say. The learning curve is maybe steeper, but with a good book, there is no reason to be afraid.

That said, there are certainly problems in LaTeXila, so some tests before the stable release would be more than welcome 😉

LaTeXila: some principles for the user experience

Writing a LaTeX document can be done in different ways. Some people prefer an application like LyX, which hides the LaTeX code and uses sophisticated UIs.

Other people prefer to work directly on the LaTeX code. LaTeXila has chosen this direction. In this context, let’s look, with two examples, at some of the principles behind LaTeXila for offering a good user experience.

Inserting a figure

To insert a figure, an application like LibreOffice uses a wizard, so the user can choose an image, its size, the caption, etc.

For a LaTeX application, we could imagine that the corresponding LaTeX code is generated and inserted in the .tex file. Nice, isn’t it?

There is a little problem though: if the user doesn’t understand the code, how does he modify it afterwards to change an option? A good reaction is to look at the documentation to understand what happens. But a quicker solution is perhaps to re-run the wizard, re-enter the information, and modify the option.

So, to force users to learn LaTeX, LaTeXila avoids wizards!<Esc>dd

A wizard is not a perfect solution. The root of the problem is that looking at the LaTeX documentation can take some time.

A better solution:

  • Good completion of LaTeX commands and their arguments.
  • A way to easily add the required commands for common actions like inserting a figure.

The completion works well in LaTeXila, but can be improved. Also, when we are in a command argument, if no completion is available, a calltip is displayed with the prototype of the LaTeX tag:

Calltip showing the prototype of a LaTeX tag (screenshot)

As for the second point, there is a toolbar and there are menus for common actions. An interesting feature that doesn’t yet exist in LaTeXila is something like the snippets plugin of gedit.

Creating a new document

Creating a new document can also be done via a wizard. We would choose:

  • the document type (an article, a report, slides, …)
  • the title
  • the author(s)

But this has the same problem as figure insertion.

In LaTeXila, creating a new document is done via a template. There are some default basic templates, and personal templates can be created.

The user can, for example, create a big personal template with all the stuff that he could possibly use in a new document. Then, when he creates a new document, he removes or comments out what he doesn’t need.

The UI to create a new document has been improved recently (the window style is different from the GNOME 3 style because I use Xfce; gnome-shell is not supported on my graphics card, and the fallback mode is a bit buggy):

Create a new document from a template (screenshot)

To summarize, the idea of LaTeXila is to always deal directly with the LaTeX code, while simplifying the writing of this LaTeX code as much as possible. Users don’t need to be LaTeX gurus, but they should understand what happens.