
GXemul: Technical details

Back to the index.

This page describes some of the internals of GXemul.

Speed and emulation modes

So, how fast is GXemul? There is no short answer to this. There is especially no answer to the question What is the slowdown factor?, because the host architecture and emulated architecture can usually not be compared just like that.

Performance depends on several factors, including (but not limited to) host architecture, target architecture, host clock speed, which compiler and compiler flags were used to build the emulator, what the workload is, what additional runtime flags are given to the emulator, and so on.

Devices are generally not timing-accurate: for example, if an emulated operating system tries to read a block from disk, from its point of view the read was instantaneous (no waiting). So 1 MIPS in an emulated OS might have taken more than one million instructions on a real machine.

Also, if the emulator says it has executed 1 million instructions, and the CPU family in question was capable of scalar execution (i.e. one cycle per instruction), it might still have taken more than 1 million cycles on a real machine because of cache misses and similar micro-architectural penalties that are not simulated by GXemul.

Because of these issues, it is in my opinion best to measure performance as the actual (real-world) time it takes to perform a task with the emulator, e.g.:

  • "How long does it take to install NetBSD onto a disk image?"
  • "How long does it take to compile XYZ inside NetBSD in the emulator?".

So, how fast is it? :-)   Answer: it varies.


Networking

NOTE/TODO: This section is very old.

Running an entire operating system under emulation is very interesting in itself, but for several reasons, running a modern OS without access to TCP/IP networking is a bit awkward. Hence, I feel the need to implement TCP/IP (networking) support in the emulator.

As far as I have understood it, there seem to be two different ways to go:

  1. Forward ethernet packets from the emulated ethernet controller to the host machine's ethernet controller, and capture incoming packets on the host's controller, giving them back to the emulated OS. Characteristics are:
    • Requires direct access to the host's NIC, which means on most platforms that the emulator cannot be run as a normal user!
    • Reduced portability, as not every host operating system uses the same programming interface for dealing with hardware ethernet controllers directly.
    • When run on a switched network, it might be problematic to connect from the emulated OS to the OS running on the host, as packets sent out on the host's NIC are not received by the host itself. (?)
    • All specific networking protocols will be handled by the physical network.


  2. Whenever the emulated ethernet controller wishes to send a packet, the emulator looks at the packet and creates a response. Packets that can have an immediate response never go outside the emulator, other packet types have to be converted into suitable other connection types (UDP, TCP, etc). Characteristics:
    • Each packet type sent out on the emulated NIC must be handled. This means that I have to do a lot of coding. (I like this, because it gives me an opportunity to learn about networking protocols.)
    • By not relying on access to the host's NIC directly, portability is maintained. (It would be sad if the networking portion of a portable emulator isn't as portable as the rest of the emulator.)
    • The emulator can be run as a normal user process and does not require root privileges.
    • Connecting from the emulated OS to the host's OS should not be problematic.
    • The emulated OS will experience the network just as a single machine behind a NAT gateway/firewall would. The emulated OS is thus automatically protected from the outside world.

Some emulators/simulators use the first approach, while others use the second. I think that SIMH and QEMU are examples of emulators using the first and second approach, respectively.
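The first step of the second approach, deciding what to do with each frame the emulated NIC sends out, can be sketched roughly as follows. This is a minimal illustration, not GXemul's actual code; the function and enum names are made up, although the ethertype values are the standard ones.

```c
#include <stddef.h>
#include <stdint.h>

/*  Standard ethertype values, found at offset 12 of an ethernet frame
    (after the 6-byte destination and 6-byte source MAC addresses):  */
#define	ETHERTYPE_IPV4	0x0800
#define	ETHERTYPE_ARP	0x0806

enum frame_class { FRAME_ARP, FRAME_IPV4, FRAME_IGNORED };

/*  Classify a raw ethernet frame sent out by the emulated NIC:  */
enum frame_class classify_frame(const uint8_t *frame, size_t len)
{
	uint16_t ethertype;

	if (len < 14)
		return FRAME_IGNORED;	/*  too short to be a valid frame  */

	ethertype = (uint16_t)((frame[12] << 8) | frame[13]);

	switch (ethertype) {
	case ETHERTYPE_ARP:
		return FRAME_ARP;	/*  answered locally by the emulator  */
	case ETHERTYPE_IPV4:
		return FRAME_IPV4;	/*  ICMP/UDP/TCP, handled per protocol  */
	default:
		return FRAME_IGNORED;	/*  anything else is dropped  */
	}
}
```

An IPv4 frame would then be dispatched further on the protocol field of its IP header, to per-protocol handlers for ICMP, UDP and TCP.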

Since I have chosen the second kind of implementation, I have to write support explicitly for each network protocol that should be supported. As of 2004-07-09, the following have been implemented and seem to work under at least NetBSD/pmax and OpenBSD/pmax under DECstation 5000/200 emulation (-E dec -e 3max):

  • ARP requests sent out from the emulated NIC are interpreted, and converted to ARP responses. (This is used by the emulated OS to find out the MAC address of the gateway.)
  • ICMP echo requests (that is the kind of packet produced by the ping program) are interpreted and converted to ICMP echo replies, regardless of the IP address. This means that running ping from within the emulated OS will always receive a response. The ping packets never leave the emulated environment.
  • UDP packets are interpreted and passed along to the outside world. If the emulator receives a UDP packet from the outside world, it is converted into a UDP packet for the emulated OS. (This is not implemented very well yet, but seems to be enough for nameserver lookups, tftp file transfers, and NFS mounts using UDP.)
  • TCP packets are interpreted one at a time, similar to how UDP packets are handled (but more state is kept for each connection). NOTE: Much of the TCP handling code is very ugly and hardcoded.
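As an illustration of the kind of packet rewriting involved, the ICMP echo conversion mentioned above can be sketched like this. It is only a sketch operating directly on the raw ICMP message (the part of the packet after the IP header), not GXemul's actual implementation; the function names are made up.

```c
#include <stddef.h>
#include <stdint.h>

/*  Standard internet checksum (ones' complement sum) over a buffer:  */
uint16_t inet_checksum(const uint8_t *buf, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)((buf[i] << 8) | buf[i+1]);
	if (len & 1)
		sum += (uint32_t)(buf[len-1] << 8);
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

/*  Convert an ICMP echo request (type 8, code 0) into an echo reply
    (type 0) in place; id, sequence number and payload are kept as-is.
    Returns 1 if the message was converted, 0 otherwise:  */
int icmp_echo_to_reply(uint8_t *icmp, size_t len)
{
	uint16_t csum;

	if (len < 8 || icmp[0] != 8 || icmp[1] != 0)
		return 0;

	icmp[0] = 0;			/*  type: echo reply  */
	icmp[2] = icmp[3] = 0;		/*  clear the old checksum  */
	csum = inet_checksum(icmp, len);
	icmp[2] = (uint8_t)(csum >> 8);
	icmp[3] = (uint8_t)(csum & 0xff);
	return 1;
}
```

Since the reply is built entirely from the request, the ping packets never need to leave the emulated environment, which is exactly the behaviour described above.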

The gateway machine, which is the only "other" machine that the emulated OS sees on its emulated network, works as a NAT-style firewall/gateway. It usually has a fixed IPv4 address of An OS running in the emulator would usually have an address of the form 10.x.x.x; a typical choice would be

Inside emulated NetBSD/pmax or OpenBSD/pmax, running the following commands should configure the emulated NIC:

	# ifconfig le0
	# route add default
	add net default: gateway

If you want nameserver lookups to work, you need a valid /etc/resolv.conf as well:

	# echo nameserver > /etc/resolv.conf
(But replace it with the actual real-world IP address of your nearest nameserver.)

Now, host lookups should work:

	# host -a www.netbsd.org
	Trying null domain
	rcode = 0 (Success), ancount=2
	The following answer is not authoritative:
	The following answer is not verified as authentic by the server:
	www.netbsd.org  86400 IN        AAAA    2001:4f8:4:7:290:27ff:feab:19a7
	www.netbsd.org  86400 IN        A
	For authoritative answers, see:
	netbsd.org      83627 IN        NS      uucp-gw-2.pa.dec.com
	netbsd.org      83627 IN        NS      ns.netbsd.org
	netbsd.org      83627 IN        NS      adns1.berkeley.edu
	netbsd.org      83627 IN        NS      adns2.berkeley.edu
	netbsd.org      83627 IN        NS      uucp-gw-1.pa.dec.com
	Additional information:
	ns.netbsd.org   83627 IN        A
	uucp-gw-1.pa.dec.com	172799 IN	A
	uucp-gw-2.pa.dec.com	172799 IN	A

At this point, UDP and TCP should (mostly) work.
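The UDP handling described above requires the emulator to remember which emulated connection corresponds to which host-side socket, so that packets arriving from the outside world can be converted back into packets for the emulated OS. A rough sketch of such bookkeeping might look like this (the table size, names and layout are invented for illustration; GXemul's actual code differs):

```c
#include <stdint.h>
#include <string.h>

#define	MAX_UDP_MAPPINGS	64

struct udp_mapping {
	int		in_use;
	uint32_t	guest_ip, remote_ip;
	uint16_t	guest_port, remote_port;
	int		host_socket;	/*  fd of a UDP socket on the host  */
};

static struct udp_mapping udp_map[MAX_UDP_MAPPINGS];

/*  Find an existing mapping for this connection tuple, or claim a free
    slot for a new one. Returns the slot index, or -1 if the table is
    full:  */
int udp_mapping_lookup(uint32_t guest_ip, uint16_t guest_port,
	uint32_t remote_ip, uint16_t remote_port)
{
	int i, free_slot = -1;

	for (i = 0; i < MAX_UDP_MAPPINGS; i++) {
		struct udp_mapping *m = &udp_map[i];
		if (!m->in_use) {
			if (free_slot < 0)
				free_slot = i;
			continue;
		}
		if (m->guest_ip == guest_ip && m->guest_port == guest_port &&
		    m->remote_ip == remote_ip && m->remote_port == remote_port)
			return i;
	}

	if (free_slot >= 0) {
		struct udp_mapping *m = &udp_map[free_slot];
		memset(m, 0, sizeof(*m));
		m->in_use = 1;
		m->guest_ip = guest_ip;
		m->guest_port = guest_port;
		m->remote_ip = remote_ip;
		m->remote_port = remote_port;
		m->host_socket = -1;	/*  a real implementation would
					    open a host socket here  */
	}
	return free_slot;
}
```

When a reply arrives on one of the host sockets, the corresponding slot identifies the guest address and port to which the converted packet should be delivered.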

Here is an example of how to configure a server machine and an emulated client machine for sharing files via NFS:

(This is very useful if you want to share entire directory trees between the emulated environment and another machine. These instructions will work for FreeBSD; if you are running something else, use your imagination to modify them.)

  • On the server, add a line to your /etc/exports file, exporting the files you wish to use in the emulator:
    	/tftpboot -mapall=nobody -ro
    where is the IP address of the machine running the emulator process, as seen from the outside world.

  • Then start up the programs needed to serve NFS via UDP. Note the -n argument to mountd: it tells mountd to accept requests from unprivileged ports, which is necessary because the emulator runs as a normal user process and thus cannot use privileged source ports.
    	# portmap
    	# nfsd -u       <--- u for UDP
    	# mountd -n
  • In the guest OS in the emulator, once you have ethernet and IPv4 configured so that you can use UDP, mounting the filesystem should now be possible: (this example is for NetBSD/pmax or OpenBSD/pmax)
    	# mount -o ro,-r=1024,-w=1024,-U,-3 my.server.com:/tftpboot /mnt
    or simply:
    	# mount my.server.com:/tftpboot /mnt
    If you don't supply the read and write sizes, there is a risk that the default values are too large. The emulator currently does not handle fragmentation/defragmentation of outgoing packets, so going above the ethernet frame size (1518) is a very bad idea. Incoming packets (reading from nfs) should work, though, for example during an NFS install.
The example above uses read-only mounts. That is enough for things like letting NetBSD/pmax or OpenBSD/pmax install via NFS, without the need for a CDROM ISO image. You can use a read-write mount if you wish to share files in both directions, but then you should be aware of the fragmentation issue mentioned above.

Emulation of hardware devices

NOTE/TODO: This section is very old, and is about the legacy framework. New machines and devices should be implemented using the new Component framework.

Each file called dev_*.c in the src/devices/ directory is responsible for one hardware device. These are used from src/machines/machine_*.c, when initializing which hardware a particular machine model will be using, or when adding devices to a machine using the device() command in configuration files.

(I'll be using the name "foo" as the name of the device in all these examples. This is pseudo code; it may need some modification to actually compile and run.)

Each device should have the following:

  • A devinit function in src/devices/dev_foo.c. It would typically look something like this:
    	DEVINIT(foo)
    	{
    		struct foo_data *d;

    		CHECK_ALLOCATION(d = malloc(sizeof(struct foo_data)));
    		memset(d, 0, sizeof(struct foo_data));

    		/*
    		 *  Set up stuff here, for example fill d with useful
    		 *  data. devinit contains settings like address, irq path,
    		 *  and other things.
    		 *  ...
    		 */

    		INTERRUPT_CONNECT(devinit->interrupt_path, d->irq);

    		memory_device_register(devinit->machine->memory, devinit->name,
    		    devinit->addr, DEV_FOO_LENGTH,
    		    dev_foo_access, (void *)d, DM_DEFAULT, NULL);

    		/*  This should only be here if the device
    		    has a tick function:  */
    		machine_add_tickfunction(devinit->machine, dev_foo_tick, d,
    		    FOO_TICKSHIFT);

    		/*  Return 1 if the device was successfully added.  */
    		return 1;
    	}

    DEVINIT(foo) is defined as int devinit_foo(struct devinit *devinit), and the devinit argument contains everything that the device driver's initialization function needs.

  • At the top of dev_foo.c, the foo_data struct should be defined.
    	struct foo_data {
    		struct interrupt	irq;
    		/*  ...  */
    	};

    (There is an exception to this rule; some legacy code and other ugly hacks have their device structs defined in src/include/devices.h instead of dev_foo.c. New code should not add stuff to devices.h.)

  • If foo has a tick function (that is, something that needs to be run at regular intervals), then FOO_TICKSHIFT and a tick function need to be defined as well:
    	#define FOO_TICKSHIFT		14

    	static void dev_foo_tick(struct cpu *cpu, void *extra)
    	{
    		struct foo_data *d = extra;

    		if (.....)
    			INTERRUPT_ASSERT(d->irq);
    	}

  • Does this device belong to a standard bus?
    • If this device should be detectable as a PCI device, then glue code should be added to src/devices/bus_pci.c.
    • If this is a legacy ISA device which should be usable by any machine which has an ISA bus, then the device should be added to src/devices/bus_isa.c.

  • And last but not least, the device should have an access function. The access function is called whenever there is a load or store to an address within the device's memory-mapped region. To simplify things a little, the macro DEVICE_ACCESS(x) expands into
    	int dev_x_access(struct cpu *cpu, struct memory *mem,
    	    uint64_t relative_addr, unsigned char *data, size_t len,
    	    int writeflag, void *extra)
    The access function can look like this:
    	DEVICE_ACCESS(foo)
    	{
    		struct foo_data *d = extra;
    		uint64_t idata = 0, odata = 0;

    		if (writeflag == MEM_WRITE)
    			idata = memory_readmax64(cpu, data, len);

    		switch (relative_addr) {

    		/*  Handle accesses to individual addresses within
    		    the device here.  */

    		/*  ...  */

    		}

    		if (writeflag == MEM_READ)
    			memory_writemax64(cpu, data, len, odata);

    		/*  Perhaps interrupts need to be asserted or
    		    deasserted:  */
    		dev_foo_tick(cpu, extra);

    		/*  Return successfully.  */
    		return 1;
    	}

The return value of the access function was, until 2004-07-02, a true/false value: 1 for success, 0 for device access failure. A device access failure (on MIPS) results in a DBE exception.

Some devices have been converted to support arbitrary memory latency values. For these, the return value is the number of cycles that the read or write access took: a value of 1 means one cycle, a value of 10 means ten cycles. Negative values indicate device access failures, and the absolute value then gives the number of cycles; a value of -5 means that the access failed, and took 5 cycles.

To be compatible with pre-20040702 devices, a return value of 0 is treated by the caller (in src/memory_rw.c) as a value of -1.
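Putting the two conventions together, a caller might interpret the return value of an access function along these lines. This is an illustrative sketch, not the actual code in src/memory_rw.c; the struct and function names are invented.

```c
#include <stdlib.h>

struct access_result {
	int success;	/*  nonzero if the access succeeded  */
	int cycles;	/*  how many cycles the access took  */
};

/*  Interpret a device access function's return value, handling both
    the old convention (0 = failure, treated as -1) and the newer
    cycle-count convention:  */
struct access_result interpret_access_retval(int retval)
{
	struct access_result r;

	if (retval == 0)
		retval = -1;	/*  pre-20040702 style failure  */

	r.success = retval > 0;
	r.cycles = abs(retval);
	return r;
}
```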