OpenBSD Ports - Special Porting Topics



Shared Libraries

Understanding Shared Libraries Number Rules

Shared libraries are a bit tricky for a variety of reasons. You must understand the library naming scheme: libfoo.so.major.minor.

When you link a program, the linker ld(1) embeds that information in the created binary; you can see it with ldd(1). Later, when you run that program, the dynamic linker ld.so(1) uses that information to find the right dynamic library.
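
For instance, assuming a hypothetical binary /usr/local/bin/foo linked against libfoo.so.4.5, the recorded name can also be inspected with objdump(1):
$ objdump -x /usr/local/bin/foo | fgrep NEEDED
  NEEDED      libfoo.so.4.5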

This means that any library with the same major number and an equal or higher minor number must satisfy the binary API that the program expects. If it does not, then your port is broken; specifically, it will break when users try to update their system.

The rules for shared libraries are quite simple: if an update only adds functionality without changing or removing anything, bump the minor number; if any part of the existing API changes or disappears, bump the major number.

Sometimes, it happens that a library is written as several files, and that internal functions happen to be visible to communicate between those files. Those function names traditionally begin with an underscore, and are not part of the API proper.

Tweaking Ports Builds to Achieve the Right Names

Quite a few ports need tweaks to build shared libraries correctly anyway. Remember that building shared libraries should be done with
$ cc -shared -fpic|-fPIC -o libfoo.so.4.5 obj1 obj2
Trying to rename the library after the fact to adjust the version number does not work: ELF libraries use some extra magic to set the library internal name, so you must link it with the correct version the first time.

On the other hand, remember that you can override Makefile variables from the command line, by using MAKE_FLAGS in the port's Makefile. In some cases, the program you're porting will have a simple variable which you can override by setting the library version in MAKE_FLAGS, for example MAKE_FLAGS= SO_VERSION=${LIBfoo_VERSION}. In others, the port will need to be patched to make use of such a variable.
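
As a sketch, this might look like the following in the port's Makefile; the SO_VERSION variable name is an assumption about the upstream build system, and foo is a placeholder, but the SHARED_LIBS line is what defines LIBfoo_VERSION:
SHARED_LIBS +=	foo 4.5
MAKE_FLAGS =	SO_VERSION=${LIBfoo_VERSION}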

The ports infrastructure already handles these details in libtool-based and CMake-based ports. For libtool, the version from the base OS is used by default, but in some cases this is insufficient and USE_LIBTOOL=gnu can be set. CMake is handled by using the cmake.port.mk module: MODULES += devel/cmake. In these cases, most details are handled automatically.

For CMake, the initial build will often produce libraries without a version number (lib/libXXX.so files). In that case, add SHARED_LIBS lines to the Makefile for those libraries set to version 0.0, clean and rebuild the port, and when you regenerate the PLIST you should see that it starts to use the version numbers.
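
For example, if the initial build produced an unversioned lib/libXXX.so, a first iteration would register:
SHARED_LIBS +=	XXX 0.0
After cleaning and rebuilding, the library is built as libXXX.so.0.0 and the regenerated PLIST records the versioned name.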

Avoid DT_SONAME hardcoding

Some ports use the -soname flag of ld(1) to override the library specification in the DT_SONAME field. Setting DT_SONAME is not a bug in itself, but it is usually not desirable on OpenBSD, where ld.so(1) is smart and the ports tree takes care of library versioning. Moreover, a wrong soname can result in unusable binaries that depend on this library, either immediately or after some updates to the port containing the library. To check whether the DT_SONAME field is set, run the following command:
$ objdump -x /path/to/libfoo.so.0.0 | fgrep SONAME
  SONAME      libfoo.so.0.0
As a general rule, code that sets the soname explicitly should be patched out. The only exception is when the right soname is recorded anyway, the soname-related code is hard to patch out, and upstream won't accept such a patch. In that case the soname should fully match the file name (see the example above).
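
As a sketch, such a patch usually just drops the flag from the upstream link command (the file and variable names below are hypothetical):
-	$(CC) -shared -Wl,-soname,libfoo.so.0 -o libfoo.so.$(VERSION) $(OBJS)
+	$(CC) -shared -o libfoo.so.$(VERSION) $(OBJS)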

Try putting all user-visible libraries into /usr/local/lib

As a rule, requesting the user to add directories to their ldconfig(8) path is a very bad idea: all shared libraries that are linked directly to programs should appear in /usr/local/lib. However, it is quite possible to use a symbolic link to the actual library, provided you understand the library lookup rules. For example, let us assume you have two ports that provide two major versions of a given library, say qt.1.45 and qt.2.31. Since both ports can be installed simultaneously, to make sure a given program will link against qt.1, that library is provided as /usr/local/lib/qt/libqt.so.1.45, and programs will be linked using
$ ld -o program program.o -L/usr/local/lib/qt -lqt
Similarly, a program that links with qt.2 will use the /usr/local/lib/qt2/libqt.so.2.31 file with
$ ld -o program program.o -L/usr/local/lib/qt2 -lqt
To resolve those libraries at run time, a link called /usr/local/lib/libqt.so.1.45 and a link called /usr/local/lib/libqt.so.2.31 are provided. This is enough to satisfy ld.so(1).
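
Concretely, those compatibility links look like this (shown by hand for clarity; in a port they would be created by the install target and recorded in the packing-list):
$ ln -s qt/libqt.so.1.45 /usr/local/lib/libqt.so.1.45
$ ln -s qt2/libqt.so.2.31 /usr/local/lib/libqt.so.2.31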

It is an error to link a program using qt1 with

$ ld -o program program.o -L/usr/local/lib -lqt
This only works if qt.2.31 is not installed, which is a wrong assumption.

Such tricks are only necessary in the rare cases of very pervasive libraries where a transition period between major versions must be provided. In general, it is enough to make sure the library appears in /usr/local/lib.

Writing Library Dependencies Correctly

The dependency code needs complete library dependencies. Use make lib-depends-check or make port-lib-depends-check to verify that a port mentions all the libraries it requires. You just write them in LIB_DEPENDS/WANTLIB like this:
LIB_DEPENDS += x11/gtk+
WANTLIB += gtk>=1.2 gdk>=1.2
It is not an error to specify static libraries on a WANTLIB line as well. WANTLIBs are fully evaluated at package build time: the resulting package will have library dependency information embedded as lines for ld.so that hold the actual major.minor number that was used for building, and nothing for static libraries.
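
For instance, the packing-list of the resulting package might end up containing lines along these lines, with whatever major.minor numbers were present at build time (the numbers are hypothetical):
@wantlib gdk.1.2
@wantlib gtk.1.2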

In fact, providing LIB_DEPENDS lines even for static libraries is a good idea. This will simplify port update if a given dependency goes from a static library to a shared library.

WANTLIB lines must specify the same paths that are used for ld. With the same example as above, a standard qt2 depends fragment would say WANTLIB += lib/qt2/qt>=2. This allows the dependency checking code to do the right thing when multiple versions of the same library are encountered.

Updating Ports Correctly

When you update or add a port that involves shared libraries, a few details must be done right.
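
For instance, if an update adds new functions to a library without changing or removing existing ones, the minor number gets bumped in the port's Makefile (library name and numbers hypothetical):
-SHARED_LIBS +=	foo 4.5
+SHARED_LIBS +=	foo 4.6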

GNU autoconf

Autoconf is a GNU tool that is supposed to help in writing portable programs. It is often used together with automake (portable makefiles) and libtool (portable shared libraries).

Those tools do not work all that well, and often create specific challenges in porting software to OpenBSD.

Detecting the Use of autoconf in a Piece of Software

Quite a few software projects have configure scripts, and in most cases, those scripts were generated by autoconf. Such scripts have a line near the top that says:
# Generated automatically using autoconf version 2.13
or something similar. The generation procedure is covered in a following section. Most often, autoconf ports come with the generated scripts, and with the source scripts that generated these. The next section covers the common case where you simply want to run the generated script, not modify it. Make sure you read the section about trojan horses as well.

Running an autoconf Configure Script

This script is normally run during the configure stage of ports building. To invoke the configure script, one only has to set CONFIGURE_STYLE=gnu which will automatically invoke ${WRKSRC}/configure.

If your configure script lies elsewhere, just set CONFIGURE_SCRIPT to the right value.

Configure scripts often take a lot of arguments. The default processing of the ports tree will only pass --prefix and --sysconfdir to these. Very old configure scripts don't understand --sysconfdir; you can set CONFIGURE_STYLE=gnu old in such cases.
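
Extra arguments can be passed through CONFIGURE_ARGS, for instance (the flags themselves are hypothetical and depend on the software):
CONFIGURE_STYLE =	gnu
CONFIGURE_ARGS +=	--without-x \
			--disable-nls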

Similarly, some ports are not aware of DESTDIR. Those ports will often accept setting prefix=${DESTDIR}/usr/local without any issue, which can be done with CONFIGURE_STYLE=gnu dest.

Ports using autoconf and automake will have Makefiles with a specific format that begins with a few standard locations, such as prefix, exec_prefix, bindir, libdir, mandir and sysconfdir.

If the configure script does not allow you to override these, you may still be able to do it later on during the build or fake stage. This does assume, of course, that the only reference to such a directory is within the generated Makefile.

For instance, a neat trick involves switching sysconfdir to ${PREFIX}/share/example/pkgname during the fake stage to get default config files to package (since packages don't normally store files under /etc).

Ports fully using autoconf and automake may support building under a different directory: try setting SEPARATE_BUILD=flavored and see if that works. This would allow you to wipe the build tree without wiping the source tree, by giving you separate ${WRKSRC} and ${WRKBUILD} locations. In a few cases, separate builds may need to use gmake, where the rest of the port is happy with bsd-make, in which case this is not worth it.

Automake will generate a few rules to rebuild all the generated scripts if anything changes. These often get in the way of OpenBSD specific patches. For that reason, as soon as CONFIGURE_STYLE corresponds to autoconf use, post-patch will touch various files in a specific order, so that no automake dependencies get triggered later. The list of dependencies is given in tsort(1) order in a file mentioned in REORDER_DEPENDENCIES (the default is ${PORTSDIR}/infrastructure/mk/automake.dep).

The Mechanics of Configure Checks

The configure script first runs a fixed script called config.guess, which determines which system configure is running on. config.guess does not vary from port to port, so the OpenBSD ports tree replaces it with an up-to-date version that knows about some specific OpenBSD architectures. Since most software packages come with a bundled config.guess, and since some of them are quite old, this is a necessary step. If a software package contains more than one config.guess, you can overwrite them all by setting MODGNU_CONFIG_GUESS_DIRS to the full list of directories to process.

The configure script generated by autoconf then simply checks all functionality on the existing system, by looking for a compiler and running simple test programs through it. Since some of these tests are quite lengthy, the ports tree primes configure with a CONFIG_SITE=config.site file. Configure will look at the contents of that file before running the tests. A few configure scripts may have bugs that prevent them from running correctly in the presence of config.site; setting CONFIG_SITE to empty will weed out these kinds of problems.

Most configure scripts will auto-detect quite a few conditions. It is very important to look at configure's options, at configure's output, and at the generated config.log file: these will tell you which options were found and which were not. This will allow you to find out when configure did not find a package that was installed.

This will also tell you which optional packages configure would find. In the ports tree, those are called hidden dependencies. This is a bad thing: a hidden dependency is some extra package configure will pick up if it happens to be installed. It will then proceed to build a mutant package. In some cases, the build will fail because of OpenBSD peculiarities. In some cases, the package creation will fail, as some files will have different names. In some cases, the resulting package will be incorrect, as it will fail to record any dependency on the optional package. So looking at configure's output is one of the most important duties of a port's maintainer. Watch out for cascading tests: detecting a given feature may lead a configure script to try out and find some dependent feature, so you will not see the second feature in the configure output unless the first feature is triggered.

In case some hidden dependencies are found, some action should be taken. The simplest action is to install the optional package and see what configure does. If it detects the package, one can either disable the detection (by using configure options, or environment variables, or patching the configure script), or verify that the build goes well and add the dependency to the list of dependent packages. A better choice is to figure out a reasonable set of default dependencies, and then add some flavors to cover other common features.
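
A sketch of the two usual outcomes, assuming the configure script has a switch for the optional package (the --without-libfoo flag and the devel/libfoo path are hypothetical):
# either disable the detection outright...
CONFIGURE_ARGS +=	--without-libfoo
# ...or accept it and record the dependency
LIB_DEPENDS +=		devel/libfoo
WANTLIB +=		foo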

Re-generating Configure Scripts

Configure scripts are normally generated from a configure.in file (recent versions of autoconf use a configure.ac file instead). A standard library of definitions is often available in an aclocal.m4.

In most cases, patching configure directly is a bad idea. It is better to patch the configure.in file and get the ports tree to call autoconf. Good porters will endeavor to write configure.in changes that they can feed to software authors.

Different versions of autoconf will produce distinct configure scripts. autoconf-2.13 is special: it was used over a fairly long period, and there have been mutant versions of autoconf-2.13 (actually, betas of a newer autoconf) in wide use. Hence, using autoconf-2.13 will often not produce the exact same configure script.

Since having several autoconf versions around at the same time is useful, the autoconf script actually available in the ports tree is part of a port called metaauto. Which autoconf script actually gets called is controlled through the environment variable AUTOCONF_VERSION. Calling autoconf happens if you set CONFIGURE_STYLE=autoconf, together with setting AUTOCONF_VERSION. In most cases, identify the version of autoconf that was used to generate the distributed configure script (usually obvious when reading the script) and use this same version yourself.
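
In a port's Makefile this typically looks like the following (the version number here is just an example and must match what was used to generate the distributed script):
CONFIGURE_STYLE =	autoconf
AUTOCONF_VERSION =	2.69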

Autoconf relies on the standard Unix preprocessor m4(1). Normally, autoconf relies on some features of the GNU version of m4, gm4. Fortunately, OpenBSD's m4 has enough features to run autoconf as well; it just needs to be invoked with -g. Very seldom, autoconf run with OpenBSD's m4 will produce bogus configure scripts; the OpenBSD developers will fix such an issue.

Trojan Horses

Configure scripts are big generated files. They are an ideal hiding place for trojan horses, and this has indeed already happened in the past. This is the main reason for having most versions of autoconf in the tree: a good porter is expected to check that a generated configure script matches what the ports tree autoconf builds.

Interaction with Other Programs

Autoheader is another program related to autoconf that is normally run to create a config.h.in file. Setting CONFIGURE_STYLE=autoconf will also run autoheader. A few ports don't use autoheader. Setting CONFIGURE_STYLE=autoconf no-autoheader will fix that issue.

libtool has a few specific hooks in configure.in. There is often a libtool.m4 script that goes with it. Getting libtool to do the right thing goes beyond the scope of this documentation.

Configuration Files

Packages should only install files under ${PREFIX}, which is /usr/local by default. On the other hand, the OpenBSD policy is to install most configuration files under ${SYSCONFDIR}, which is /etc by default.

Note that it is perfectly acceptable for a binary package to have both ${PREFIX} and ${SYSCONFDIR} hardcoded: PREFIX and SYSCONFDIR are mostly user settings that influence the build of the package.

@sample Explained

Packing-lists contain a specific @sample mechanism to deal with configuration files: the package installs the actual file under ${PREFIX}/share/examples, and the @sample annotation that follows it tells pkg_add(1) to copy it to the real configuration location if no file exists there yet. On deinstallation, pkg_delete(1) removes the copy again, but only if the user has not modified it.
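
A typical packing-list fragment then looks like this (the file names are hypothetical):
share/examples/foo/foo.conf
@sample ${SYSCONFDIR}/foo.conf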

More @sample Specificities

Contrary to other files in a packing-list, @sample entries can have an absolute path name.

Some big packages will also need their own configuration directory, @sample ${SYSCONFDIR}/directory/ will deal with that.

Using @sample directory/ to create port-specific directories that do not hold any configuration files is perfectly good style. @sample correctly interprets @mode, @owner and @group annotations. This can be a bit cumbersome, because you will often need to switch back and forth between a default mode and a configuration file specific mode.
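
A sketch of such a block, switching the annotations on and resetting them afterwards (the user, group and path are hypothetical):
@owner _foo
@group _foo
@mode 0750
@sample ${SYSCONFDIR}/foo/
@mode
@group
@owner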

Special Tricks

make update-plist knows how to copy @sample annotations over, but it does not know how to create them, so they have to be written in the first place.

Note the distinction between configuration files and example configuration files: the port must be configured to find its files under ${SYSCONFDIR}. It is only the fake installation stage that must put stuff under ${PREFIX}/share/examples. One simple way to handle that is to copy the files over in a post-install.

A neat trick which often works is to look at a program's Makefile, and override the configuration directory in the fake installation stage by using specific FAKE_FLAGS, for instance:

FAKE_FLAGS=	DESTDIR=${WRKINST} \
		sysconfdir=${WRKINST}${TRUEPREFIX}/share/examples/PKGNAME
You just need to watch out for programs that write the configuration directory down in specific files during their install stage.

Examples

Audio Applications

This document currently deals with sampled sounds issues only. Contributions dealing with synthesizers and waveform tables are welcome.

Audio applications tend to be hard to port, as this is a domain where interfaces are not standardized at all, though approaches don't vary much between operating systems.

libsndio

OpenBSD has its own audio layer provided by the sndio library, documented in sio_open(3). Until it is merged into this page, further information about programming for this API can be found in the separate guide with hints on writing and porting audio code. sndio allows user processes to access audio(4) hardware and the sndiod(8) audio server in a uniform way. It supports full-duplex operation, and when used with the sndiod(8) server it supports resampling and format conversions on the fly.
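
A minimal playback sketch using this API (error handling kept short; the requested parameters are only examples, and the values the device actually granted must always be checked):
#include <sndio.h>
#include <err.h>

int
main(void)
{
	struct sio_hdl *hdl;
	struct sio_par par;

	hdl = sio_open(SIO_DEVANY, SIO_PLAY, 0);
	if (hdl == NULL)
		errx(1, "sio_open failed");
	sio_initpar(&par);
	par.bits = 16;
	par.sig = 1;
	par.pchan = 2;
	par.rate = 44100;
	if (!sio_setpar(hdl, &par) || !sio_getpar(hdl, &par))
		errx(1, "sio_setpar/sio_getpar failed");
	/* par now holds what the device actually accepted: check
	 * par.rate, par.bits, par.sig and par.pchan before encoding */
	if (!sio_start(hdl))
		errx(1, "sio_start failed");
	/* ... sio_write(hdl, buf, nbytes) in a loop ... */
	sio_close(hdl);
	return 0;
}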

Hardware Independence

YOU SHOULDN'T ASSUME ANYTHING ABOUT THE AUDIO HARDWARE USED.
Wrong code is code that only checks the a_info.play.precision field against 8 or 16 bits, and assumes unsigned or signed samples based on soundblaster behavior. You should check the sample type explicitly, and code according to that. Simple example:

AUDIO_INITINFO(&a_info);
a_info.play.encoding = AUDIO_ENCODING_SLINEAR;
a_info.play.precision = 16;
a_info.play.sample_rate = 22050;
error = ioctl(audio, AUDIO_SETINFO, &a_info);
if (error)
    /* deal with it */
error = ioctl(audio, AUDIO_GETINFO, &a_info);
switch (a_info.play.encoding) {
case AUDIO_ENCODING_ULINEAR_LE:
case AUDIO_ENCODING_ULINEAR_BE:
    if (a_info.play.precision == 8)
        /* ... */
    else
        /* ... */
    break;
case ...

default:
    /* don't forget to deal with what you don't know!  For instance: */
    fprintf(stderr,
        "Unsupported audio format (%d), ask ports@ about that\n",
        a_info.play.encoding);
}
/* now don't forget to check what sampling frequency you actually got */
This is about the smallest code fragment that will deal with most issues.

16 bit Formats and Endianness

In normal usage, you just ask for an encoding type (e.g., AUDIO_ENCODING_SLINEAR), and you retrieve an encoding with endianness (e.g., AUDIO_ENCODING_SLINEAR_LE). Considering that a soundcard does not have to use the same endianness as your platform, you should be prepared to deal with that. The easiest way is probably to prepare a full audio buffer, and to use swab(3) if an endianness change is required. Dealing with external samples usually amounts to:
  1. Parsing the sample format
  2. Getting the sample in
  3. Swapping endianness if it is not your native format
  4. Computing what you want to output into a buffer
  5. Swapping endianness if the sound card is not in your native format
  6. Playing the buffer
Obviously, you may be able to remove steps 3 and 5 if you are simply playing a sound sample which happens to be in your sound card native format.
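
A sketch of steps 3 and 5, swapping the byte order of a buffer of 16-bit samples with swab(3) (the helper below is hypothetical; source and destination buffers must not overlap):
#include <unistd.h>
#include <stdint.h>

static void
swap16(const int16_t *in, int16_t *out, size_t nsamples)
{
	/* swap each pair of bytes while copying */
	swab(in, out, nsamples * sizeof(int16_t));
}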

Audio Quality

Hardware may have some weird limitations, such as being unable to get over 22050 Hz in stereo, but up to 44100 Hz in mono. In such cases, you should give the user a chance to state his preferences, then try to achieve the best performance possible. For instance, it is stupid to limit the frequency to 22050 Hz because you are outputting stereo. What if the user does not have a stereo sound system connected to his audio card output?

It is also stupid to hardcode soundblaster-like limitations into your program. You should be aware of these, but do try to get over the 22050 Hz/stereo barrier and check the results.

Sampling Frequency

You should definitely check the sampling frequency your card gives you back. A 5% discrepancy already amounts to a half-tone, and some people have much more accurate hearing than that, though most of us won't notice a thing. Your application should be able to perform resampling on the fly, possibly naively, or through devious applications of Shannon's resampling formula if you can.
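
A naive on-the-fly resampler might look like this sketch (mono, 16-bit samples; a real application would at least low-pass filter before downsampling):
#include <stdint.h>
#include <stddef.h>

static size_t
resample(const int16_t *in, size_t n_in, int rate_in,
    int16_t *out, size_t n_out_max, int rate_out)
{
	double step = (double)rate_in / rate_out, pos = 0.0;
	size_t n_out = 0;

	while ((size_t)pos + 1 < n_in && n_out < n_out_max) {
		size_t i = (size_t)pos;
		double frac = pos - i;

		/* linear interpolation between neighbouring input samples */
		out[n_out++] = (int16_t)((1.0 - frac) * in[i] + frac * in[i + 1]);
		pos += step;
	}
	return n_out;
}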

Dynamic Range

Samples don't always use the full range of values they could. First, samples recorded with a low gain will not sound very loud on the machine, forcing the user to turn the volume up. Second, on machines with badly isolated audio, low sound output means you mostly hear your machine's heart-beat, and not the sound you expected. Finally, dumb conversion from 16 bits to 8 bits may leave you with only 4 bits of usable audio, which makes for awfully bad quality.

If possible, the best solution is probably to scan the whole stream you are going to play ahead of time, and to scale it so that it fits the full dynamic range. If you can't afford that, but you can manage to get a bit of look-ahead on what you're going to play, you can adjust the volume boost on the fly; just make sure that the boost factor varies at a low frequency compared to the sound you want to play, and that you get absolutely no overflows -- those will always sound much worse than the improvement you're trying to achieve.
As sound volume perception is logarithmic, using arithmetic shifts is usually enough. If your data is signed, you should explicitly code the shift as a division, as the C >> operator is not portable on signed data.
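
For instance, to halve the amplitude of a signed sample:
/* write the scaling as a division: "sample >> 1" has
 * implementation-defined behavior for negative values in C */
sample = sample / 2;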

If all else fails, you should at least try to provide the user with a volume scaling option.

Audio Performance

Low-end applications usually don't have much to worry about.

Don't forget to run benches. Theoretical optimizations are just that: theoretical. Some hard figures should be collected to check what's a sizeable improvement and what's not.

For high performance audio applications, such as MPEG-1 layer 3 decoding, some points should be taken into account.

A model you may have to follow to get optimal results is to first compile a small test program that enquires about the specific audio hardware available, then proceed to configure your program so that it deals optimally with this hardware. You may reasonably expect people who want good audio performance to recompile your port when they change hardware, provided it makes a difference.

Real Time or synchronized

Considering that OpenBSD is not real time, you may still wish to write audio applications that are mostly real time, for instance games. In such a case, you will have to lower the block size so that the sound effects don't get out of sync with the current game. The problem with this is that the audio device may get starved, which yields horrible results.

In case you simply want audio to be synchronized with some graphics output, but the behavior of your program is predictable, synchronization is easier to achieve. You just play your audio samples, ask the audio device what you are currently playing with AUDIO_GETOOFFS, then use that information to post-synchronize graphics. Provided you ask sufficiently often (say, every tenth of a second), and as long as you have enough horse-power to run your application, you can get very good synchronization that way. You might have to tweak the figures by a constant offset, as there is some lag between what the audio device reports, what's currently playing, and the time it takes for X to display something.
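
A sketch of the post-synchronization query (the frame-size arithmetic assumes a 16-bit stereo stream at 44100 Hz, and audio_fd is a hypothetical descriptor for the audio device):
#include <sys/ioctl.h>
#include <sys/audioio.h>

struct audio_offset ao;
double played_sec;

if (ioctl(audio_fd, AUDIO_GETOOFFS, &ao) == 0) {
	/* ao.samples counts bytes handed to the hardware so far */
	played_sec = (double)ao.samples / (2 * 2 * 44100);
	/* ... pick the matching video frame ... */
}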

Contributing Code Back

In the case of audio applications, working with the original program's author is very important. If their code only works with soundblaster cards, for instance, there is a good chance they will have to cope with other technology soon.

If you don't send your comments to the author, your work will have been useless.

It may also be that the author has already noticed whatever problems you are currently dealing with, and is addressing them in his current development tree. If the patches you are writing amount to more than a handful of lines, cooperation is almost certainly a very good idea.

Manual Pages

This section provides guidelines on how to deal with groff versus mandoc(1) issues in ports, and what to do with non-English manual pages.

Should I check anything?

When creating a new port or updating an existing port, please check whether the port can use mandoc to format its manuals. Both the automatic and the manual checks described below are required. This may make the manuals more usable for the port's users, and it will reduce the port's build time.

If a new port or an existing port not marked with USE_GROFF does not work with mandoc, please report that to schwarze@, who will probably fix mandoc.

Which tools do I need?

No tools are required except the mandoc(1) utility included in the base system. In the very unusual case that you suspect recent changes to mandoc are important for the port, you can easily update mandoc, even without updating the rest of the system:
$ cd /usr/src/usr.bin/mandoc/
$ cvs -q up -Pd
$ make cleandir
$ make obj
$ make
$ doas make install
Optionally, you may also get a copy of the gmdiff utility script that helps to compare groff and mandoc output. The gmdiff script is not strictly required; doing the necessary checks by hand is perfectly acceptable.

How do I report the results?

The following paragraphs ask for sending in reports to the mandoc maintainers in some particular situations. Before sending such reports, please always tick off the following checklist:
  1. Attach the mdoc(7) or man(7) source file in question to the mail. This may either be a file contained in the distribution tarball or a file generated during the build process. In case several files exhibit the problems, choose one that shows all problems. In case different files exhibit different problems you wish to report, attach as many files as necessary. The point is to save the mandoc maintainers the work of downloading distribution tarballs, searching them for source files, sometimes even installing software before being able to start a build, while you have that information readily at hand, anyway.
  2. Briefly describe all the problems you want to report, and where they can be seen in which file. We have spent time wondering what exactly the reporter's point was more than once in the past.
  3. In case your report is related to errors or warnings printed by the mandoc utility, copy the output of mandoc -Tlint (or mandoc -Tlint -Werror when warnings are irrelevant) into the body of your mail. Usually, this is easy to reproduce, but it did happen that it was not, causing unnecessary confusion.
  4. In case the version of the port you are talking about is not yet committed, please attach what is needed to build the uncommitted port: A diff against -current when it is an update, or a tarball of the port directory when it is a completely new port. Very often, the source files will be sufficient to identify the problem; however, in those cases where they are not, mailing back and forth or searching mailing list archives just to get the needed additional information is a waste of time.
  5. Send mail to schwarze@. Unless you are the maintainer of the port, Cc: him or her. Unless you are an OpenBSD developer, in case you regularly work with a developer who is committing your ports and who you know is interested in this port, Cc:ing him or her may be useful as well.

How do I do automatic checking?

To do the automatic part of the check, please run the following command over all mdoc(7) and man(7) manual source files contained in the port:
$ mandoc -Tlint -Werror *
If you get any UNSUPPORTED messages, the respective places of the manual page require careful scrutiny. It is likely that the page will be misformatted with mandoc and the port requires USE_GROFF. If you are sure that all misformattings related to the unsupported features are minor and don't hinder the reader, you may remove USE_GROFF; but in case of doubt, leave USE_GROFF in place when there are UNSUPPORTED messages.

If there are any ERROR messages, they should also be briefly looked at. In the unusual case that they are related to misformatting with mandoc that doesn't happen with groff, that should be reported; the mandoc maintainer might choose to let mandoc issue UNSUPPORTED messages in additional cases or to fix the formatting.

If manual pages look good with groff, never patch them to get rid of mandoc errors. That would merely be a make-work project not helping anyone: It will neither help to improve upstream manuals nor mandoc.

How do I do manual checking?

If there are no errors or the errors are not related to serious misformatting with mandoc, proceed to the manual part of the check. Look at the manuals as formatted by mandoc. Do they look fine? If yes, you do not need USE_GROFF, and there is no need to report anything.

If there are no errors, but mandoc output has serious issues, that is, relevant information is missing or part of the output is garbled, please always report your findings, even if you happen to know it's due to a known issue with mandoc. We do want to know which issues cause serious problems in practice, such that we can address the most pressing issues first.

If mandoc output has serious issues and groff output looks bad as well, then the manuals are probably just broken upstream. In that case, you have the usual options when porting broken software: Abandon the port, ignore the problem, report upstream, and/or patch the bugs away. In case you need help with the latter, talk to schwarze@.

If there are no errors, but mandoc output has minor issues that don't really hinder the user when reading the manual, you are welcome to report these issues as well. In that case, you are even more welcome to first check the mandoc TODO list, to avoid having the same minor issues reported again and again - but in case of doubt, it is always better to report dupes than to let problems go unnoticed.

If there are only very few errors, in particular if you get the impression that mandoc output is just fine all the same, you don't usually need USE_GROFF=Yes. In case of doubt, ask for advice. Such questions often help to improve mandoc error reporting, in particular to identify and remove bogus mandoc error messages.

To speed up the manual checks, in particular if you are often doing mandoc checks on OpenBSD ports, and to reduce the risk of overlooking problems, consider using the gmdiff utility script. It takes the file names of an arbitrary number of manual source files as arguments, runs both groff and mandoc on all the files in turn, and compares the output of both programs. However, bear in mind that you are still doing manual checks with the ultimate goal to judge the quality of mandoc output: all the above points still apply even when you are using the gmdiff script to help your work. Also note that gmdiff will usually find minor formatting differences between both programs, in particular with respect to whitespace. If mandoc output looks good, even if it's slightly different from groff output, USE_GROFF is not needed.

For ease of use, it's possible to call gmdiff from a custom target in mk.conf:

gmdiff:
	@make fake; cd ${WRKINST}${TRUEPREFIX}; find man -type f -path 'man/man*' -print0 | xargs -0r gmdiff | less

What about warnings?

You might wonder about mandoc warnings, as opposed to mandoc errors. In a nutshell, the distinction is that errors may seriously impact the usefulness of the output, while warnings might at the worst cause minor formatting glitches, if at all. If a mandoc warning appears to be related to seriously garbled output, that's probably a bug in mandoc and should always be reported.

That said, it is obvious that warnings are irrelevant for the decision whether to use or not to use mandoc for a given port. They are for manual authors, to help improve manual quality, not for porters.

How can I help upstream?

In case you are one of the port's upstream developers, or know they care about good quality of their manuals and gladly accept patches, it may make sense to use mandoc -Tlint to identify potential formatting issues and to produce patches to be submitted upstream. Usually, there is no need to put such patches into the ports tree.

As with any kind of linting, before changing your mdoc(7) or man(7) source code or sending out patches, first make sure you are chasing real problems in the manuals. The mandoc utility is not perfect. It may produce bogus warnings. We are trying to fix that, but there will always be room for improvement. In case of doubt, report the issue and ask for advice.

Non-English Manual Pages

The following are rules of thumb, not laws set in stone. If you find that your port has special needs, you can set them aside; the goal is to make the port useful for users. Consider telling schwarze@ about it if you do, maybe we can learn something from your port.
  1. If upstream provides non-English manual pages, install them if that is possible without jumping through hoops, and unless there are specific reasons not to. "They are outdated" is not a good reason to exclude them.
  2. Never install any encoding except UTF-8. If upstream provides UTF-8, great. Otherwise, set BUILD_DEPENDS = converters/libiconv and use iconv(1) in the post-build target (see the sketch after this list).
  3. If mandoc copes, which you can check in exactly the same way as with English manuals, simply install the UTF-8 source code to man/language/manN/*.N and do not USE_GROFF.
  4. If mandoc does not cope, the proper order of operations is iconv(1) -t UTF-8, then preconv(1), then nroff(1), never the other way round.
  5. If possible, install to man/language/manN, without any "_" or "@" characters. Never include the encoding in the path name, and make sure the /language/ part never contains "." (a dot).
  6. As an exception, use zh_CN and zh_TW rather than just zh. Also, keep pt and pt_BR as they are upstream, and install both if available.
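
A sketch of the conversion mentioned in item 2, recoding a Latin-1 Russian manual to UTF-8 during post-build (the file name and source encoding are hypothetical):
BUILD_DEPENDS +=	converters/libiconv

post-build:
	iconv -f ISO-8859-1 -t UTF-8 ${WRKSRC}/doc/ru/foo.1 > \
		${WRKSRC}/doc/ru/foo.1.utf8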

If the above is followed, people can do the following with no changes to any part of the default configuration:

$ doas pkg_add mc
$ export LC_CTYPE=en_US.UTF-8
$ alias ruman="man -m /usr/local/man/ru"
$ ruman mc

rc.d(8) Scripts

This section is intended to provide some information on writing and installing rc.d(8) scripts.

Ports that install a daemon benefit greatly from having rc.d(8) scripts. It allows the user to easily check if the daemon is running, as well as providing an easy and consistent way to start and stop it.

Writing rc.d(8) Scripts

Writing an rc.d(8) script is straightforward and simple due to the clean and simple design of the rc.subr(8) system, though there are several things to take into account.
  1. The script has to be placed into ${PKGDIR} with a .rc extension, like mpd.rc. This will allow the package tools to pick it up (see the packing-list fragment after this list).
  2. Be sure to test all the functions of the script, especially the reload function.
  3. Use ${TRUEPREFIX} when writing the path to the daemon.
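
In the packing-list, the installed script then shows up as an @rcscript entry, for example (the name munin_node matches the example script below):
@rcscript ${RCDIR}/munin_node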

Example Script

Below is an example of a typical script.
#!/bin/ksh

daemon="${TRUEPREFIX}/sbin/munin-node"

. /etc/rc.d/rc.subr

pexp="/usr/bin/perl -wT ${daemon}${daemon_flags:+ ${daemon_flags}}"

rc_pre() {
        install -d -o _munin /var/run/munin
}

rc_cmd $1
A template script can also be found in the templates directory of your ports tree.