Shared library file names are of the form libfoo.so.major.minor.
When you link a program, the linker ld embeds that information in the created binary. You can see it with ldd(1). Later, when you run that program, the dynamic linker ld.so(1) uses that information to find the right dynamic library.
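For instance, with a hypothetical binary and library name, the recorded dependency shows up as a NEEDED entry in the dynamic section:

$ objdump -x /usr/local/bin/foo | grep NEEDED
  NEEDED               libfoo.so.1.0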
The rules for shared libraries are quite simple.
To see whether exported functions changed, you can compare the symbols of the old and the new library:

$ nm -g oldlib.so.X.Y | cut -c10- | grep -e^T
and
$ nm -g newlib.so.X.Y | cut -c10- | grep -e^T

This won't show whether function argument types have changed, but at least you'll quickly see whether functions were added and/or removed.
Sometimes a library is built from several files, and internal functions need to be visible so that those files can communicate. Such function names traditionally begin with an underscore, and are not part of the API proper.
The library is linked with a command such as

$ cc -shared -fpic|-fPIC -o libfoo.so.4.5 obj1 obj2

Trying to rename the library after the fact to adjust the version number does not work: ELF libraries use some extra magic to set the library's internal name, so you must link it with the correct version the first time.
On the other hand, remember that you can override Makefile variables from the command line, by using MAKE_FLAGS in the port's Makefile. In some cases, the program you're porting will have a simple variable which you can override by setting the library version in MAKE_FLAGS, for example:

MAKE_FLAGS =	SO_VERSION=${LIBfoo_VERSION}
In others, the port will need to be patched to make use of such a variable.
The ports infrastructure already handles these details in libtool-based and CMake-based ports. For libtool, by default the version from the base OS is used, but in some cases this is insufficient and USE_LIBTOOL=gnu can be set. CMake is handled by using the cmake.port.mk module:

MODULES += devel/cmake
In these cases, most details are handled automatically: SHARED_LIBS is examined and version numbers are automatically replaced. The shared libraries that were actually built are recorded in ${WRKBUILD}/shared_libs.log, which can be directly included in the port's Makefile.
Watch out for libraries that are not yet registered in SHARED_LIBS: they show up in the packing-list as unversioned lib/libXXX.so files. In that case, add SHARED_LIBS lines to the Makefile for those libraries, set to version 0.0, clean and rebuild the port, and when you regenerate the PLIST you should see that it starts to use the version numbers.
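For instance, a minimal fragment for a hypothetical library called libfoo (the name and the starting version are placeholders):

SHARED_LIBS +=	foo	0.0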
Some software explicitly passes the -soname flag to ld(1) to override the library specification in the DT_SONAME field.
Setting DT_SONAME is not a bug in itself, but it is usually not desirable on OpenBSD, where ld.so(1) is smart and the ports tree takes care of library versioning. Moreover, a wrong soname can result in unusable binaries that depend on this library, either always or after some updates to the port containing the library.
To check if the DT_SONAME field is set, run the following command:
$ objdump -x /path/to/libfoo.so.0.0 | fgrep SONAME
  SONAME               libfoo.so.0.0

As a general rule, explicitly setting the soname should be patched out. The only exception is when the right soname is recorded, it is hard to patch the soname-related code out, and upstream won't accept such a patch. In that case the soname should fully match the file name (see the example above).
Libraries are normally installed directly under /usr/local/lib. However, it is quite possible to use a symbolic link to the actual library. You should understand the library lookup rules:
At link time, ld(1) uses -L flags to set up paths in which to look for libraries. It stops looking as soon as it finds a library that matches its requirements.
Consider, for instance, two versions of the qt library: qt.1.45 and qt.2.31. Since both ports can be installed simultaneously, to make sure a given program will link against qt.1, that library is provided as /usr/local/lib/qt/libqt.so.1.45, and programs will be linked using

$ ld -o program program.o -L/usr/local/lib/qt -lqt

Similarly, a program that links with qt.2 will use the /usr/local/lib/qt2/libqt.so.2.31 file with

$ ld -o program program.o -L/usr/local/lib/qt2 -lqt

To resolve those libraries at run time, a link called /usr/local/lib/libqt.so.1.45 and a link called /usr/local/lib/libqt.so.2.31 have been provided. This is enough to satisfy ld.so(1).
It is an error to link a program using qt1 with

$ ld -o program program.o -L/usr/local/lib -lqt

This code assumes that qt.2.31 is not installed, which is a wrong assumption.
Such tricks are only necessary in the rare cases of very pervasive libraries where a transition period between major versions must be provided. In general, it is enough to make sure the library appears in /usr/local/lib.
Use make lib-depends-check or make port-lib-depends-check to verify that a port mentions all the libraries it requires.
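For instance, from the port's directory (x11/gtk+ is used purely as an example path):

$ cd /usr/ports/x11/gtk+
$ make port-lib-depends-check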
You just write them in LIB_DEPENDS/WANTLIB like this:

LIB_DEPENDS +=	x11/gtk+
WANTLIB +=	gtk>=1.2 gdk>=1.2

It is not an error to specify static libraries on a WANTLIB line as well.
WANTLIBs are fully evaluated at package build time: the resulting package will have library dependency information embedded as lines for ld.so that hold the actual major.minor number that was used for building, and nothing for static libraries. In fact, providing LIB_DEPENDS lines even for static libraries is a good idea. This will simplify port updates if a given dependency goes from a static library to a shared library.
WANTLIB lines must specify the same paths that are used for ld. With the same example as above, a standard qt2 depends fragment would say WANTLIB += lib/qt2/qt.=2. This allows the dependency checking code to do the right thing when multiple versions of the same library are encountered.
Keep WANTLIB and LIB_DEPENDS consistent with each other. If you introduce new shared libraries, watch out for BUILD_DEPENDS that need to be turned into LIB_DEPENDS.
Those tools do not work all that well, and often create specific challenges in porting software to OpenBSD.
An autoconf-generated configure script usually starts with a comment such as

# Generated automatically using autoconf version 2.13

or something similar. The generation procedure is covered in a following section. Most often, autoconf ports come with the generated scripts, and with the source scripts that generated these. The next section covers the simple case where you simply want to run the generated script, and not modify it. Make sure you read the section about trojan horses as well.
Running the generated script is done by setting CONFIGURE_STYLE=gnu, which will automatically invoke ${WRKSRC}/configure. If your configure script lies elsewhere, just set CONFIGURE_SCRIPT to the right value.
Configure scripts often take a lot of arguments. The default processing of the ports tree will only pass --prefix and --sysconfdir to these. Very old configure scripts don't understand --sysconfdir; you can set CONFIGURE_STYLE=gnu old in such cases.
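Extra arguments can be passed to the script through CONFIGURE_ARGS; the option below is only an illustration, since the flags a given configure script accepts are specific to that port:

CONFIGURE_STYLE =	gnu
CONFIGURE_ARGS +=	--disable-nls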
Similarly, some ports are not aware of DESTDIR. Those ports will often accept setting prefix=${DESTDIR}/usr/local without any issue, which can be done with CONFIGURE_STYLE=gnu dest.
Ports using autoconf and automake will have Makefiles with a specific format that begins with a few standard locations:

bindir: location for binaries
sysconfdir: location for configuration
includedir: location for include dirs
These locations can usually be overridden during the fake stage. This does assume, of course, that the only reference to such a directory is within the generated Makefile.
For instance, a neat trick involves switching sysconfdir to ${PREFIX}/share/examples/pkgname during the fake stage to get default config files to package (since packages don't normally store files under /etc).
Ports fully using autoconf and automake may support building under a different directory: try setting SEPARATE_BUILD=flavored and see if that works. This would allow you to wipe the build tree without wiping the source tree, by giving you separate ${WRKSRC} and ${WRKBUILD} locations. In a few cases, separate builds may need to use gmake, where the rest of the port is happy with bsd-make, in which case this is not worth it.
Automake will generate a few rules to rebuild all the generated scripts if anything changes. These often get in the way of OpenBSD-specific patches. For that reason, as soon as CONFIGURE_STYLE corresponds to autoconf use, post-patch will touch various files in a specific order, so that no automake dependencies get triggered later. The list of dependencies is given in tsort(1) order in a file mentioned in REORDER_DEPENDENCIES (the default is ${PORTSDIR}/infrastructure/mk/automake.dep).
Configure scripts come with a helper script, config.guess, that will determine which system configure is running on. config.guess does not vary from port to port and is a fixed script, so the OpenBSD ports tree replaces it with a version that knows about some specific OpenBSD architectures. Since most software packages come with a bundled config.guess, and since some of them are quite old, this is a necessary step. If a software package contains more than one config.guess, you can overwrite them all by setting MODGNU_CONFIG_GUESS_DIRS to the full list of directories to process.
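For example, assuming a hypothetical source tree that bundles a second copy under a libltdl subdirectory:

MODGNU_CONFIG_GUESS_DIRS =	${WRKSRC} ${WRKSRC}/libltdl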
The configure script generated by autoconf then simply checks all functionality on the existing system, by looking for a compiler, and running simple test programs through it. Since some of these tests are quite lengthy, the ports tree primes configure with a CONFIG_SITE=config.site file. Configure will look at the contents of that file first before running the tests. A few configure scripts may have bugs that will prevent them from running correctly in the presence of config.site. Setting CONFIG_SITE to empty will weed out these kinds of problems.
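Assuming a plain override in the port's Makefile is enough, that is simply:

CONFIG_SITE =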
Most configure scripts will auto-detect quite a few conditions. It is very important to look at configure's options, at configure's output, and at the generated config.log file: these will tell you which options were found, and which were not. This will allow you to find out when configure did not find a package that was installed.
This will also tell you which optional packages configure would find. In the ports tree, these are called hidden dependencies. This is a bad thing: a hidden dependency is some extra package configure will pick up if it's installed, and it will then proceed to build a mutant package. In some cases, the build will fail because of OpenBSD peculiarities. In some cases, the package creation will fail, as some files will have different names. In some cases, the resulting package will be incorrect, as it will fail to record any dependency on the optional package.

So looking at configure's output is one of the most important duties of a port's maintainer. Watch out for cascading tests: detecting a given feature may lead a configure script to try out and find some dependent feature, so you will not see the second feature in the configure output unless the first feature is triggered.
If hidden dependencies are found, some action should be taken. The simplest action is to install the optional package and see what configure does. If it detects the package, one can either disable the detection (by using configure options, environment variables, or patching the configure script), or verify that the build goes well and add the dependency to the list of dependent packages. A better choice is to figure out a reasonable set of default dependencies, and then add some flavors to cover other common features.
The configure script is generated from a configure.in file (recent versions of autoconf use a configure.ac file instead). A standard library of definitions is often available in an aclocal.m4. In most cases, patching configure directly is a bad idea. It is better to patch the configure.in file and get the ports tree to call autoconf. Good porters will endeavor to write configure.in changes that they can feed to software authors.
Different versions of autoconf will produce distinct configure scripts. autoconf-2.13 is special: it was used over a fairly long period, and there have been mutant versions of autoconf-2.13 (actually, betas of a newer autoconf) in wide use. Hence, using autoconf-2.13 will often not produce the exact same configure script. Since having several autoconf versions around at the same time is useful, the autoconf script actually available in the ports tree is part of a port called metaauto. Which autoconf script actually gets called is controlled through the environment variable AUTOCONF_VERSION.
Calling autoconf happens if you set CONFIGURE_STYLE=autoconf, together with setting AUTOCONF_VERSION. In most cases, identify the version of autoconf that was used to generate the distributed configure script (usually obvious when reading the script) and use this same version yourself.
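For example, for a configure script generated with autoconf 2.13, the port's Makefile would contain:

CONFIGURE_STYLE =	autoconf
AUTOCONF_VERSION =	2.13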
Autoconf relies on the standard unix preprocessor m4(1). Normally, autoconf relies on some features of the GNU version of m4, gm4. Fortunately, OpenBSD's m4 has enough features to run autoconf as well; it just needs to be invoked with -g to handle autoconf. Very seldom, autoconf run with OpenBSD's m4 will produce bogus configure scripts. The OpenBSD developers will fix such issues.
Autoconf is often accompanied by autoheader, which produces a config.h.in file. Setting CONFIGURE_STYLE=autoconf will also run autoheader. A few ports don't use autoheader. Setting CONFIGURE_STYLE=autoconf no-autoheader will fix that issue.
libtool has a few specific hooks in configure.in. There is often a libtool.m4 script that goes with it. Getting libtool to do the right thing goes beyond the scope of this documentation.
Most ports install their files under ${PREFIX}, which is /usr/local by default. On the other hand, the OpenBSD policy is to install most configuration files under ${SYSCONFDIR}, which is /etc by default. Note that it is perfectly acceptable for a binary package to have both ${PREFIX} and ${SYSCONFDIR} hardcoded: PREFIX and SYSCONFDIR are mostly user settings that influence the build of the package.
@sample Explained

Ports use the @sample mechanism to deal with configuration files: the default configuration file is packaged as an example, for instance ${PREFIX}/share/examples/PKGNAME/foo.rc, and the packing-list carries a line

@sample ${SYSCONFDIR}/foo.rc

right under the sample configuration file. At pkg_add(1) time the sample gets copied to the @sample location if no file exists there yet; at pkg_delete(1) time it is removed again if it was not modified.
@sample Specificities

@sample entries can have an absolute path name. Some big packages will also need their own configuration directory; @sample ${SYSCONFDIR}/directory/ will deal with that. Using @sample directory/ to create port-specific directories that do not hold any configuration files is perfectly good style.
@sample correctly interprets the current @mode, @owner and @group annotations. This can be a bit cumbersome, because you will often need to switch back and forth between a default mode and a configuration file specific mode. make update-plist knows how to copy @sample annotations over, but it does not know how to create them, so they have to be written in the first place.
Note the distinction between configuration files and example configuration files: the port must be configured to find its files under ${SYSCONFDIR}. It is only the fake installation stage that must put stuff under ${PREFIX}/share/examples. One simple way to handle that is to copy the files over in a post-install target.
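A minimal sketch of such a target, with made-up package and file names:

post-install:
	${INSTALL_DATA_DIR} ${PREFIX}/share/examples/foo
	${INSTALL_DATA} ${WRKSRC}/foo.conf ${PREFIX}/share/examples/foo/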
A neat trick which often works is to look at a program's Makefile, and override the configuration directory in the fake installation stage by using specific FAKE_FLAGS, for instance:

FAKE_FLAGS =	DESTDIR=${WRKINST} \
		sysconfdir=${WRKINST}${TRUEPREFIX}/share/examples/PKGNAME

You just need to watch out for programs that write the configuration directory down in specific files during their install stage.
For example, the security/integrit port uses a configuration directory with a few files. Its packing-list looks like this:

@comment $OpenBSD$
@bin bin/i-ls
@info info/integrit.info
@man man/man1/i-ls.1
@man man/man1/i-viewdb.1
@man man/man1/integrit.1
@bin sbin/i-viewdb
@bin sbin/integrit
share/doc/integrit/
share/doc/integrit/README
share/doc/integrit/crontab
share/doc/integrit/install_db
share/doc/integrit/integrit_check
share/doc/integrit/viewreport
share/examples/integrit/
@sample ${SYSCONFDIR}/integrit/
share/examples/integrit/root.conf
@sample ${SYSCONFDIR}/integrit/root.conf
share/examples/integrit/src.conf
@sample ${SYSCONFDIR}/integrit/src.conf
share/examples/integrit/usr.conf
@sample ${SYSCONFDIR}/integrit/usr.conf
The mail/dovecot port uses @sample dir/ to create private directories.

...
@extraunexec rm -rf /var/dovecot/*
@sample ${SYSCONFDIR}/dovecot/
@sample ${SYSCONFDIR}/dovecot/conf.d/
@mode 755
@sample /var/dovecot/
@mode 750
@group _dovenull
@sample /var/dovecot/login/
The sysutils/nut port uses a specific owner for its configuration files.

@comment $OpenBSD$
@conflict upsd-*
@newuser ${NUT_USER}:${NUT_ID}:daemon:UPS User:/var/empty:/sbin/nologin
...
share/examples/nut/
@sample ${SYSCONFDIR}/nut/
@owner ${NUT_USER}
share/examples/nut/ups.conf
@sample ${SYSCONFDIR}/nut/ups.conf
share/examples/nut/upsd.conf
@mode 600
@sample ${SYSCONFDIR}/nut/upsd.conf
@mode
share/examples/nut/upsd.users
@mode 600
@sample ${SYSCONFDIR}/nut/upsd.users
@mode
share/examples/nut/upsmon.conf
@mode 600
@sample ${SYSCONFDIR}/nut/upsmon.conf
@mode
share/examples/nut/upssched.conf
@sample ${SYSCONFDIR}/nut/upssched.conf
@mode 700
@sample /var/db/nut/
@mode
@owner
share/ups/
share/ups/cmdvartab
share/ups/driver.list
Audio applications tend to be hard to port, as this is a domain where interfaces are not standardized at all, though approaches don't vary much between operating systems.
OpenBSD has its own audio layer provided by the sndio library, documented in sio_open(3). Until it's merged into this page, further information about programming for this API can be found in the guide on hints for writing and porting audio code. sndio allows user processes to access audio(4) hardware and the sndiod(8) audio server in a uniform way. It supports full-duplex operation, and when used with the sndiod(8) server it supports resampling and format conversions on the fly.
YOU SHOULDN'T ASSUME ANYTHING ABOUT THE AUDIO HARDWARE USED.
Wrong code is code that only checks the a_info.play.precision
field against 8 or 16 bits, and assumes unsigned or signed samples based
on soundblaster behavior.
You should check the sample type explicitly, and code according to that.
Simple example:
AUDIO_INIT_INFO(&a_info);
a_info.play.encoding = AUDIO_ENCODING_SLINEAR;
a_info.play.precision = 16;
a_info.play.sample_rate = 22050;
error = ioctl(audio, AUDIO_SETINFO, &a_info);
if (error)
	/* deal with it */
error = ioctl(audio, AUDIO_GETINFO, &a_info);
switch (a_info.play.encoding) {
case AUDIO_ENCODING_ULINEAR_LE:
case AUDIO_ENCODING_ULINEAR_BE:
	if (a_info.play.precision == 8)
		/* ... */
	else
		/* ... */
	break;
case ...
default:
	/* don't forget to deal with what you don't know !!! For instance, */
	fprintf(stderr,
	    "Unsupported audio format (%d), ask ports@ about that\n",
	    a_info.play.encoding);
}
/* now don't forget to check what sampling frequency you actually got */

This is about the smallest code fragment that will deal with most issues.
Note that you ask for an encoding without endianness (AUDIO_ENCODING_SLINEAR), and you retrieve an encoding with endianness (e.g., AUDIO_ENCODING_SLINEAR_LE). Considering that a soundcard does not have to use the same endianness as your platform, you should be prepared to deal with that. The easiest way is probably to prepare a full audio buffer, and to use swab(3) if an endianness change is required.
Dealing with external samples usually amounts to:
It is also stupid to hardcode soundblaster-like limitations into your program. You should be aware of these, but do try to get over the 22050 Hz/stereo barrier and check the results.
If possible, the best solution is probably to scan the whole stream you are
going to play ahead of time, and to scale it so that it fits the full dynamic
range.
If you can't afford that, but you can manage to get a bit of look-ahead on what you're going to play, you can adjust the volume boost on the fly; you just have to make sure that the boost factor changes at a low frequency compared to the sound you want to play, and that you get absolutely no overflows -- those will always sound much worse than the improvement you're trying to achieve. As sound volume perception is logarithmic, using arithmetic shifts is usually enough. If your data is signed, you should explicitly code the shift as a division, as the C >> operator is not portable on signed data.
If all else fails, you should at least try to provide the user with a volume scaling option.
Don't forget to run benches. Theoretical optimizations are just that: theoretical. Some hard figures should be collected to check what's a sizeable improvement and what's not.
For high performance audio applications, such as MPEG-1 layer 3 players, some points should be taken into account:

write, as a system call, incurs a high cost compared to internal audio processing.
The AUDIO_GETENC ioctl should be used to retrieve all formats that the audio device provides. Be especially aware of the AUDIO_ENCODINGFLAG_EMULATED flag. If your application is already able to output all kinds of weird formats, and is reasonably optimized for that, try to use a native format at all costs. On the other hand, the emulation code present in the audio device can be assumed to be reasonably optimal, so don't replace it with quickly hacked up code.
A model you may have to follow to get optimal results is to first compile a small test program that enquires about the specific audio hardware available, then proceed to configure your program so that it deals optimally with this hardware. You may reasonably expect people who want good audio performance to recompile your port when they change hardware, provided it makes a difference.
In case you simply want audio to be synchronized with some graphics output,
but the behavior of your program is predictable, synchronization is easier
to achieve.
You just play your audio samples, and ask the audio device what you are currently playing with AUDIO_GETOOFFS, then use that information to post-synchronize graphics.
Provided you ask sufficiently often (say, every tenth of a second), and as
long as you have enough horse-power to run your application, you can get very
good synchronization that way.
You might have to tweak the figures by a constant offset, as there is some lag
between what the audio reports, what's currently playing, and the time it takes
for X Window to display something.
If you don't send your comments to the author, your work will have been useless.
It may also be that the author has already noticed whatever problems you are currently dealing with, and is addressing them in his current development tree. If the patches you are writing amount to more than a handful of lines, cooperation is almost certainly a very good idea.
If a new port or an existing port not marked with USE_GROFF
does not work with mandoc, please report that to schwarze@, who
will probably fix mandoc.
Before checking, make sure your mandoc is up to date:

$ cd /usr/src/usr.bin/mandoc/
$ cvs -q up -Pd
$ make cleandir
$ make obj
$ make
$ doas make install

Optionally, you may also get a copy of the gmdiff utility script that helps to compare groff and mandoc output. The gmdiff script is not strictly required; doing the necessary checks by hand is perfectly acceptable.
When reporting, paste the complete output of mandoc -Tlint (or mandoc -Tlint -Werror when warnings are irrelevant) into the body of your mail. Usually, this is easy to reproduce, but it did happen that it was not, causing unnecessary confusion.

$ mandoc -Tlint -Werror *

If you get any UNSUPPORTED messages, the respective places of the manual page require careful scrutiny.
It is likely that the page will be misformatted with mandoc and the port requires USE_GROFF. If you are sure that all misformattings related to the unsupported features are minor and don't hinder the reader, you may remove USE_GROFF; but in case of doubt, leave USE_GROFF in place when there are UNSUPPORTED messages.
If there are any ERROR messages, they should also be briefly looked at. In the unusual case that they are related to misformatting with mandoc that doesn't happen with groff, that should be reported; the mandoc maintainer might choose to let mandoc issue UNSUPPORTED messages in additional cases or to fix the formatting.
If manual pages look good with groff, never patch them to get rid of mandoc errors. That would merely be a make-work project not helping anyone: It will neither help to improve upstream manuals nor mandoc.
If there are no errors and the mandoc output looks good, the port does not need USE_GROFF, and there is no need to report anything.
If there are no errors, but mandoc output has serious issues, that is, relevant information is missing or part of the output is garbled, please always report your findings, even if you happen to know it's due to a known issue with mandoc. We do want to know which issues cause serious problems in practice, such that we can address the most pressing issues first.
If mandoc output has serious issues and groff output looks bad as well, then the manuals are probably just broken upstream. In that case, you have the usual options when porting broken software: Abandon the port, ignore the problem, report upstream, and/or patch the bugs away. In case you need help with the latter, talk to schwarze@.
If there are no errors, but mandoc output has minor issues that don't really hinder the user when reading the manual, you are welcome to report these issues as well. In that case, you are even more welcome to first check the mandoc TODO list, to avoid having the same minor issues reported again and again - but in case of doubt, it is always better to report dupes than to let problems go unnoticed.
If there are only very few errors, in particular if you get the impression that mandoc output is just fine all the same, you don't usually need USE_GROFF=Yes. In case of doubt, ask for advice. Such questions often help to improve mandoc error reporting, in particular to identify and remove bogus mandoc error messages.
To speed up the manual checks, in particular if you are often doing mandoc checks on OpenBSD ports, and to reduce the risk of overlooking problems, consider using the gmdiff utility script. It takes the file names of an arbitrary number of manual source files as arguments, runs both groff and mandoc on all the files in turn, and compares the output of both programs.

However, bear in mind that you are still doing manual checks with the ultimate goal of judging the quality of mandoc output: all the above points still apply even when you are using the gmdiff script to help your work. Also note that gmdiff will usually find minor formatting differences between both programs, in particular with respect to whitespace. If mandoc output looks good, even if it's slightly different from groff output, USE_GROFF is not needed.
For ease of use, it's possible to call gmdiff from a custom target in mk.conf:
gmdiff:
	@make fake; cd ${WRKINST}${TRUEPREFIX}; find man -type f -path 'man/man*' -print0 | xargs -0r gmdiff | less
That said, it is obvious that warnings are irrelevant for the decision whether to use or not to use mandoc for a given port. They are for manual authors, to help improve manual quality, not for porters.
You can also run mandoc -Tlint to identify potential formatting issues and to produce patches to be submitted upstream. Usually, there is no need to put such patches into the ports tree.
As with any kind of linting, before changing your mdoc(7) or man(7) source code or sending out patches, first make sure you are chasing real problems in the manuals. The mandoc utility is not perfect. It may produce bogus warnings. We are trying to fix that, but there will always be room for improvement. In case of doubt, report the issue and ask for advice.
If the manual pages need to be converted to UTF-8, add BUILD_DEPENDS = converters/libiconv and use iconv(1) in the post-build target; a sketch follows below.

Install the pages as man/language/manN/*.N and do not set USE_GROFF.

Keep the directory layout as man/language/manN, without any "_" or "@" characters. Never include the encoding in the path name, and make sure the /language/ part never contains "." (a dot).

As exceptions, use zh_CN and zh_TW rather than just zh. Also, keep pt and pt_BR as they are upstream, and install both if available.
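A minimal sketch of such a conversion, with a made-up source encoding and file name (matching the mc example below):

post-build:
	iconv -f KOI8-R -t UTF-8 ${WRKSRC}/man/ru/mc.1 > ${WRKSRC}/man/ru/mc.1.utf8
	mv ${WRKSRC}/man/ru/mc.1.utf8 ${WRKSRC}/man/ru/mc.1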
If the above is followed, people can do the following with no changes to any part of the default configuration:
$ doas pkg_add mc
$ export LC_CTYPE=en_US.UTF-8
$ alias ruman="man -m /usr/local/man/ru"
$ ruman mc
Ports that install a daemon benefit greatly from having rc.d(8) scripts. It allows the user to easily check if the daemon is running, as well as providing an easy and consistent way to start and stop it.
Place the script in ${PKGDIR} with a .rc extension, like mpd.rc. This will allow the package tools to pick it up. Use ${TRUEPREFIX} when writing the path to the daemon.
#!/bin/ksh

daemon="${TRUEPREFIX}/sbin/munin-node"

. /etc/rc.d/rc.subr

pexp="/usr/bin/perl -wT ${daemon}${daemon_flags:+ ${daemon_flags}}"

rc_pre() {
	install -d -o _munin /var/run/munin
}

rc_cmd $1

A template script can also be found in the templates directory of your ports tree.