NetWall virtual firewall creation under KVM on ARM

Last modified on 26 May, 2021. Revision 81
Up to date for
cOS Core 13.10.00 TP
Supported since
cOS Core 13.10.00 TP
Status OK




1. Upload the disk image

Start by uploading the cOS Core qcow2 disk image for ARM to the path /var/lib/libvirt/images on the target system. The most recent disk image for ARM can be downloaded from the cOS Core downloads page on MyClavister at https://my.clavister.com/downloads/?sid=1 (MyClavister login required).
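Once the image is in place, a quick sanity check can confirm the file transferred intact. The helper below is illustrative (is_qcow2 is not a Clavister or libvirt tool); it simply checks that the file begins with the qcow2 magic bytes, which are the ASCII characters "QFI" followed by the byte 0xFB:

```shell
# is_qcow2 is an illustrative helper, not a standard tool: it succeeds if the
# given file starts with the qcow2 magic bytes "QFI" followed by 0xFB.
is_qcow2() {
  [ "$(head -c 3 "$1" 2>/dev/null)" = "QFI" ]
}

# Example (adjust the file name to match the image you downloaded):
# is_qcow2 /var/lib/libvirt/images/disk_image_name.qcow2 && echo "looks like qcow2"
```

If the qemu-utils package is installed, qemu-img info <file> gives a fuller report, including the image's virtual disk size.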

2. Create a definition XML file

Create a new .xml text file and paste the XML template below into it. This file will be used to define the KVM virtual machine. Changes to it will likely be needed before it is imported into KVM; these are described in the next step.


A definition XML file template

<domain type='kvm'>
  <name>NetWall_ARM</name>
  <memory unit='KiB'>1953125</memory>
  <currentMemory unit='KiB'>1953125</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='aarch64' machine='virt-2.10'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='host-passthrough' check='none'/>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-aarch64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/disk_image_name.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='8'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <interface type='direct'>
      <source dev='enP2p1s0' mode='bridge'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='system-serial' port='0'>
        <model name='pl011'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
  </devices>
</domain>
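A note on the template's memory setting: the <memory> and <currentMemory> elements are given in KiB, and the value 1953125 corresponds to exactly 2 GB (decimal). To assign a different amount of RAM, update both elements together; the KiB value is simply the target size in bytes divided by 1024:

```shell
# 1953125 KiB * 1024 bytes/KiB = 2,000,000,000 bytes, i.e. 2 GB (decimal)
echo $(( 1953125 * 1024 ))                  # 2000000000

# KiB value to put in <memory>/<currentMemory> for 4 GB of RAM:
echo $(( 4 * 1000 * 1000 * 1000 / 1024 ))   # 3906250
```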

3. Modify the definition XML file

Modifications can now be made to the XML template above to match the target ARM platform.

First, specify how many vCPUs to assign. The minimum requirement is 1 but the recommended value is 2.

<vcpu placement='static'>1</vcpu>
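For the recommended two vCPUs, the line would read:

```xml
<vcpu placement='static'>2</vcpu>
```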

Make sure the architecture and machine type are correct.

<type arch='aarch64' machine='virt-2.10'>hvm</type>

The command virsh capabilities can be used to list the available guest parameter values; an example of this command's output is shown below. If aarch64 is not listed as a guest in the output, the required QEMU support must first be installed; see https://wiki.ubuntu.com/ARM64/QEMU for instructions.

<guest>
    <os_type>hvm</os_type>
    <arch name='aarch64'>
      <wordsize>64</wordsize>
      <emulator>/usr/bin/qemu-system-aarch64</emulator>
      <machine maxCpus='255'>virt-2.10</machine>
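As a quick check, the capabilities output can be filtered for the aarch64 guest entry. The helper name below is illustrative, not a standard tool; when used for real it assumes virsh is installed on the KVM host:

```shell
# has_aarch64_guest reads libvirt capabilities XML on stdin and succeeds if
# an aarch64 guest architecture is listed. (Illustrative helper.)
has_aarch64_guest() {
  grep -q "<arch name='aarch64'>"
}

# Typical use on the KVM host:
# virsh capabilities | has_aarch64_guest && echo "aarch64 guests supported"
```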


Specify the path to the loader firmware. If it is missing, it should be available as a package for your Linux distribution and will need to be installed.

<loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>

Also verify that the path to the QEMU emulator binary is correct.

<emulator>/usr/bin/qemu-system-aarch64</emulator>


Specify the name of the cOS Core .qcow2 disk image that was placed earlier in the folder /var/lib/libvirt/images.

<source file='/var/lib/libvirt/images/disk_image_name.qcow2'/>


Set up the virtual network interface exposed to the firewall, which is bound to a physical network interface. Below, the value enP2p1s0 is set as the physical network interface.

<interface type='direct'>
  <source dev='enP2p1s0' mode='bridge'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

The value enP2p1s0 should be changed to the network interface that matches your system.

It is possible to add more network interfaces to the firewall by repeating this section and specifying a different physical network interface. If doing this, make sure to assign unique PCI addresses: the bus/slot/function combination must be unique for each network interface.
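For example, a second interface could use the free pcie-root-port at bus 0x04 (the root ports defined in the template correspond to guest buses 0x01 through 0x05, of which 0x01, 0x02, 0x03 and 0x05 are already occupied). The physical device name enP2p2s0 below is a placeholder and must be changed to match an interface on your system:

```xml
<interface type='direct'>
  <source dev='enP2p2s0' mode='bridge'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</interface>
```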

4. Import the definition XML file

The definition file should now be imported using the command: virsh define <name_of_your_file>.xml

Any errors found in the XML file during the import will be displayed and must be corrected before the import can complete.

5. Start the virtual NetWall firewall

When the definition file has been imported successfully, the virtual firewall can be started using the command: virsh start NetWall_ARM

Note that the name NetWall_ARM used in this example is the domain name specified at the beginning of the definition XML file and can be changed to any name.

6. Accessing the management interface

The firewall has a DHCP client enabled by default on the first network interface which is also the default management interface. To see the assigned IP address, first execute the command: virsh console NetWall_ARM

This opens a virtual serial console which acts as the firewall's local console, and any cOS Core CLI command can be entered through it. Enter the ifstat command to see the management interface's assigned IP address. Pressing Ctrl-5 exits the cOS Core console (this key sequence may differ with some keyboard layouts).

To now access cOS Core via its web interface, open a standard web browser and enter: https://<assigned-IP-address>

If the firewall refuses browser access, it may be necessary to change the remote management rules in the cOS Core default configuration so that the relevant source network is allowed. Refer to the cOS Core Administration Guide PDF for more information about these management access rules. The NetWall KVM Getting Started Guide PDF also provides comprehensive information about the initial cOS Core configuration setup. SSH console access is also possible via the assigned IP address.
