Enclave is designed to be installed directly onto every client, server, cloud instance, virtual machine and container in your organisation. That way, Enclave can apply Zero Trust Network Access principles and controls between systems, fully enforce policy and provide end-to-end encryption.
However, in some situations, you can’t or might not want to install Enclave on all devices or systems:
- On domain controllers where two or more network interfaces can be problematic
- On networks where the physical infrastructure is not allowed to be changed
- On embedded systems, like firewalls, webcams or printers which prohibit external software
- When accessing legacy systems which are too old to run the agent, or are incompatible with it
- When accessing cloud native services like AWS RDS, which don't run third party software
- With large numbers of devices in a single subnet, like a single AWS VPC
In these cases, you can set up an Enclave Gateway to provide access to devices and systems which don't, or can't, run Enclave.
An Enclave Gateway allows you to route traffic from systems running Enclave to systems and devices not running Enclave (like RDS databases, webcams, printers and IoT sensors) in subnets the Enclave Gateway can reach.
Before you begin this guide, you’ll need an Enclave account set up with at least two devices enrolled. Read our getting started guide if you need help with this.
In order to enable an enrolled system to act as an Enclave Gateway, Enclave must be installed onto a Linux operating system.
An Enclave Gateway is just a normal installation of Enclave which has been set as "Allowed to act as a Gateway" in the Portal. The following steps will show you how to configure an Enclave Gateway and create Policies which allow other Enclave systems to use it.
Enclave Gateway is not currently compatible with Apple iOS devices. We're working on this and will update the documentation when we add support for iOS.
Step 1: Install and Enrol
To get started, download and install Enclave onto the System which you want to use as an Enclave Gateway, provide an enrolment key and join it to your account. We support a variety of Linux distros.
Step 2: Enable the Gateway
Once enrolled, you’ll need to "Allow the system to act as a Gateway" in the Portal. To do this, log into the Enclave Portal and open the Systems page. Find your enrolled system in the list and click on it to open the details pane.
Next, edit the system using the edit (pen) icon, find the “Allow this system to act as a gateway” section and switch the toggle to enable this system to act as a Gateway.
Once enabled, Enclave will automatically configure that system to act as a gateway.
When a system is enabled to act as a gateway, the Enclave agent running on that system will automatically enable IP forwarding in the Linux kernel and create an iptables chain named ENCLAVE to hold the relevant source NAT rules.
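To illustrate what that source NAT rule does conceptually, here is a minimal Python sketch (our illustration, not Enclave's implementation; the gateway address 172.26.0.3 and local subnet 172.26.0.0/20 are assumed example values taken from later in this guide): packets arriving from the Enclave virtual network and heading into the local subnet leave the gateway with the gateway's own local source address.

```python
from ipaddress import ip_address, ip_network

# Assumed example values: the Enclave virtual range, and the gateway's
# local (non-Enclave) address used as the SNAT source.
ENCLAVE_RANGE = ip_network("100.64.0.0/10")
LOCAL_SUBNET = ip_network("172.26.0.0/20")
GATEWAY_LOCAL_IP = "172.26.0.3"

def snat_source(src: str, dst: str) -> str:
    """Return the source address a packet leaves the gateway with.

    Mirrors the effect of the SNAT rule in the ENCLAVE chain: traffic
    from the Enclave virtual network into the local subnet is rewritten
    to appear to come from the gateway itself.
    """
    if ip_address(src) in ENCLAVE_RANGE and ip_address(dst) in LOCAL_SUBNET:
        return GATEWAY_LOCAL_IP
    return src

print(snat_source("100.64.0.70", "172.26.0.250"))  # rewritten to 172.26.0.3
print(snat_source("192.168.1.5", "172.26.0.250"))  # unchanged
```

Because the source is rewritten, devices in the local subnet reply to the gateway, which forwards the replies back over the Enclave network.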
Step 3: Configure subnets
An Enclave Gateway can act as a route to one or more subnets for other Enclave peers. It might provide access to a single subnet, for example its local 192.168.1.0/24 network, or the Enclave Gateway may be running in AWS or Azure and provide access to a number of subnets. You may even decide to route all traffic for the entire Internet via your Enclave Gateway by configuring it to advertise a default route (0.0.0.0/0).
Enclave needs to know which subnets can be accessed via the new gateway. For convenience, Enclave will auto-discover all local ethernet or wireless network interfaces on the new gateway with IPv4 unicast addresses and automatically show them in the Portal (where the adapters are up and their assigned IP addresses are not broadcast, multicast or auto-configured addresses).
In the example below, Enclave has discovered there is a network interface with an IP address in the subnet 172.19.64.0/20 on this system. That means this system can be used as a gateway to access other devices not running Enclave in that subnet.
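The auto-discovery rules described above can be sketched roughly in Python (an assumed approximation, not Enclave's actual logic; the example addresses are hypothetical):

```python
from ipaddress import IPv4Interface

def eligible_for_discovery(cidr: str, is_up: bool) -> bool:
    """Rough sketch of the discovery rules: the adapter must be up, and
    its IPv4 address must be a normal unicast address -- not multicast,
    not the subnet broadcast address, and not an auto-configured
    link-local (169.254.0.0/16) address."""
    if not is_up:
        return False
    iface = IPv4Interface(cidr)
    addr = iface.ip
    if addr.is_multicast or addr.is_link_local:
        return False
    if addr == iface.network.broadcast_address:
        return False
    return True

print(eligible_for_discovery("172.19.64.5/20", True))   # eligible
print(eligible_for_discovery("169.254.10.1/16", True))  # auto-configured, skipped
```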
It’s important to note that on this screen the administrator is defining all of the possible subnets available via this gateway, but not actually determining which systems can (or cannot) route traffic to those subnets. Controlling which Systems can reach which subnets behind specific Enclave Gateways is done later by creating one or more Gateway Access Policies. Use the remove button to delete any unwanted subnets from the gateway's configuration.
In some scenarios, you may want an Enclave Gateway to forward traffic towards a subnet it can reach but is not directly connected to (like 0.0.0.0/0). In these situations, use the "Add subnet" link to manually define additional subnets that this gateway is capable of routing traffic into.
Step 4: Create an access policy
Now that you have one or more configured Enclave Gateways, create one or more Gateway Access Policies to determine which of your enrolled systems can route traffic to which subnets.
Gateway Access Policies use Tags in exactly the same way as Direct Access Policies, but only on the sender side of the policy. Each policy enables connectivity between tagged systems and destination subnets via one or more selected gateways. Traffic can flow from Senders to destination IP addresses (subject to policy-based access controls), but devices on the routed subnets cannot initiate connectivity back to the sender systems.
In practice, this means that if you use Enclave together with Enclave Gateways to route traffic to webcams or printers (for example), any tagged Enclave systems will be able to send traffic to those devices, but those devices won’t be able to send unsolicited traffic back to the Systems inside your Enclave network.
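A toy connection-tracking sketch captures this one-way initiation rule (our illustration only, assuming simple flow-state tracking; Enclave's actual mechanism may differ): devices behind the gateway can only answer flows a sender has already opened.

```python
# Flows a sender has initiated, keyed by (sender, device).
established = set()

def sender_sends(sender: str, device: str) -> bool:
    """Senders may always initiate traffic (subject to policy); doing so
    records flow state so replies can come back."""
    established.add((sender, device))
    return True

def device_sends(device: str, sender: str) -> bool:
    """Devices behind the gateway may only send reply traffic for an
    existing flow -- unsolicited traffic towards senders is dropped."""
    return (sender, device) in established

print(device_sends("printer", "laptop"))  # False: unsolicited, dropped
sender_sends("laptop", "printer")
print(device_sends("printer", "laptop"))  # True: reply to an open flow
```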
To create a Gateway Access Policy, select one or more Tags on the Senders side of the policy and select one or more subnets on the Gateway / Subnet side of the policy.
Once a policy is created, Enclave automatically configures the routing table for appropriate sender systems according to Tag membership and Trust Requirements.
Subnets can be large. In some situations, you may not want to provide access to an entire subnet and instead prefer to allow partial access to a smaller set of IP addresses, devices or systems in each subnet. In such cases, see the Subnet Filtering section below.
With at least one subnet attached to a Gateway Access Policy it is possible to enable and configure Subnet Filtering on that policy.
Subnet Filtering allows administrators to avoid providing full access to entire subnets and instead restrict Senders' access to a limited set of IP addresses.
For example, you may have an Enclave Gateway which provides access to 10.0.0.0/24 but only want to allow systems on the Senders side of the policy to access particular servers in that subnet, such as 10.0.0.9. By adding a Subnet Filter to the policy you can restrict access to the wider subnet and provide only the required access.
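The effect of a Subnet Filter can be sketched as a two-part check (a conceptual illustration, not Enclave's code; the subnet and filter values are the example ones above): a destination must fall inside an advertised subnet AND, when a filter is enabled, appear in the filter's allow list.

```python
from ipaddress import ip_address, ip_network

# Example policy: the gateway advertises 10.0.0.0/24, but a Subnet
# Filter limits Senders to specific hosts within it.
ADVERTISED = ip_network("10.0.0.0/24")
FILTER = {ip_address("10.0.0.9")}  # addresses named in the Subnet Filter

def sender_may_reach(dst: str) -> bool:
    """A destination is reachable only if it is inside the advertised
    subnet and listed in the Subnet Filter's allow list."""
    d = ip_address(dst)
    return d in ADVERTISED and d in FILTER

print(sender_may_reach("10.0.0.9"))   # allowed by the filter
print(sender_may_reach("10.0.0.42"))  # in the subnet, but filtered out
```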
In the example below the policy grants the users tag access to the 172.16.1.0/24 subnet advertised by the Enclave Gateway named Menlo Park Office. Having then enabled subnet filtering, a rule has been added which restricts Senders' access to only the IP addresses of the two domain controllers in the Menlo Park Office, one of which is 172.16.1.11.
Gateways as the default route
Enclave Gateways can also route all non-Enclave traffic through specific systems on your network.
Normally Enclave doesn’t interact with traffic destined for the public Internet, instead defaulting to a split-tunnel overlay network and only routing traffic between systems running Enclave.
However, there may be times when you do want Enclave to route your public Internet traffic; for example, to ensure a predictable, static IP address is used to access trusted SaaS services such as Office 365 or Salesforce.
To set up an Enclave Gateway as a default route on your network you should:
- Enrol a Linux system to act as your Enclave Gateway
- In the Portal, allow that enrolled system to act as a Gateway
- Manually add the subnet 0.0.0.0/0 to that system in the Portal
- Create a Gateway Access Policy which includes the Gateway system
Once set up, all public Internet traffic for systems on the Senders side of the policy will be routed via that Enclave Gateway.
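Because route selection uses longest-prefix matching, a 0.0.0.0/0 route only catches traffic that no more specific route covers. A small sketch (assumed, illustrative routing table with hypothetical gateway names):

```python
from ipaddress import ip_address, ip_network

# Hypothetical sender routing table: a specific office subnet via one
# gateway, plus a default route via another.
ROUTES = {
    ip_network("172.26.0.0/20"): "office-gateway",
    ip_network("0.0.0.0/0"): "internet-gateway",
}

def pick_gateway(dst: str) -> str:
    """Longest-prefix match: among the routes covering the destination,
    the one with the longest (most specific) prefix wins."""
    matches = [(net, gw) for net, gw in ROUTES.items() if ip_address(dst) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(pick_gateway("172.26.0.250"))  # office-gateway (the /20 is more specific)
print(pick_gateway("8.8.8.8"))       # internet-gateway (only 0.0.0.0/0 matches)
```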
Failover and redundancy
It’s possible to have several Enclave Gateways all advertising the same logical subnets to the same set of Senders. In such cases Enclave will automatically pick one of the available Enclave Gateways for each logical subnet.
When multiple gateways are available, the active gateway which Enclave will use is whichever gateway a connection can be established with first.
For example, an administrator might allow two systems in the Menlo Park Office to act as Gateways, both advertising access to the same set of subnets.
If a sender system in the policy loses its connection to one of the Enclave Gateways in the Menlo Park office, connectivity will automatically failover to the alternative Enclave Gateway.
In this way, by running multiple Enclave Gateways to provide access to the same group of logical subnets, Enclave will automatically enable multi-path failover and increase redundancy.
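The selection behaviour described above amounts to trying each candidate gateway and using the first one a connection can be established with. A minimal sketch (our illustration, with hypothetical gateway names):

```python
def pick_active_gateway(gateways, can_connect):
    """Return the first gateway for a subnet that a connection can be
    established with, or None if every candidate is unreachable."""
    for gw in gateways:
        if can_connect(gw):
            return gw
    return None

# Simulate the first Menlo Park gateway being offline: traffic fails
# over to the second gateway advertising the same subnets.
reachable = {"gw-1": False, "gw-2": True}
print(pick_active_gateway(["gw-1", "gw-2"], lambda g: reachable[g]))  # gw-2
```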
Gateway Access Policies obey trust requirements and ACLs the same way as regular Direct Access Policies, so you can easily apply Zero Trust requirements, like multi-factor authentication, to the Senders connecting into your subnets.
Check that you can ping your new Enclave Gateway from your Sender policy machine, using the Enclave Gateway's Enclave Virtual address (i.e. 100.64.0.70 in the example below). You can find the Enclave Virtual address of your Enclave Gateway by running enclave status on either system once connected by policy.
You should also notice in the output of the enclave status CLI command on the sender-side systems that any routes advertised via your Enclave Gateway are listed under the appropriate peer, as shown below under Gateway for in the Menlo Park Office peer entry.
Peer: WZL94 (Menlo Park Office)
  Peer state. . . . . : Up
  Certificate . . . . : CN=WZL94 Expires=Never (Perpetual Issue)
  Endpoint. . . . . . : Udp/10.1.10.115:40161
  Last activity . . . : 0.75 seconds ago
  Transfer. . . . . . : 3.462 KB received, 4.071 KB sent, link rtt 0.59 ms
  Virtual network . . : 100.64.0.0/10 (255.192.0.0)
  Virtual address . . : 100.64.0.70
  Gateway for . . . . : 172.26.0.0/20
  Dns . . . . . . . . : wzl94.id.enclave, ubuntu-dev.enclave
  ACLs. . . . . . . . : allow [icmp] from peer -> local, allow [any] from local -> peer
If you can ping the Enclave Gateway (i.e. ping 100.64.0.70 in this example) via Enclave and your local system is showing 172.26.0.0/20 (or your equivalent subnet) as the Gateway for in the CLI output for the correct peer, you should be able to start sending traffic directly to non-Enclave devices in that subnet.
Knowing that the Enclave Gateway itself is reachable, try sending some traffic past the Enclave Gateway to a device in the subnet behind it. If the host-based firewalls on the target systems (which are not running Enclave) allow it, you may be able to send pings to test end-to-end connectivity.
In the example below, we've successfully sent a ping from a Windows laptop in a coffee shop, via the Enclave Gateway at 100.64.0.70 in the office, out to a printer with an IP address of 172.26.0.250 in the Gateway's local subnet.
C:\> ping 172.26.0.250

Pinging 172.26.0.250 with 32 bytes of data:
Reply from 172.26.0.250: bytes=32 time=29ms TTL=64
Reply from 172.26.0.250: bytes=32 time=30ms TTL=64
Reply from 172.26.0.250: bytes=32 time=29ms TTL=64
Reply from 172.26.0.250: bytes=32 time=38ms TTL=64

Ping statistics for 172.26.0.250:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 29ms, Maximum = 38ms, Average = 31ms
If you find an Enclave Gateway isn't working as expected, here's a simple troubleshooting checklist:
- Check your systems (sender systems and gateways) are all enrolled, connected and approved in the Portal.
- Check that the sender system(s) can ping the Gateway using the Gateway's Enclave address.
- Check the subnets which the Gateway is advertising.
- Check the output of enclave status on all systems shows the correct Gateway for entries.
- Check that the Gateway itself can reach (i.e. ping) other devices on its local subnet.
- Check the routing table has been correctly configured on the relevant Sender systems of the policy.
The routing table is configured automatically by Enclave, so it's unlikely to be the source of a problem unless there are other conflicting routes already in place. The Interface address is the client's local Enclave IP address.
C:\> route print | findstr 172.26.0.0

IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway         Interface  Metric
        172.26.0.0    255.255.240.0          On-link    100.119.20.243      26
Check that iptables on the Enclave Gateway system is correctly configured: sudo iptables -t nat -L -n -v. In particular, pay attention to the to: field on the POSTROUTING chain, which should be the local (non-Enclave) IP address of your Enclave Gateway.
$ sudo iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 35 packets, 10016 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 13 packets, 1071 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 13 packets, 1071 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 SNAT       all  --  *      *       100.64.0.0/10        172.26.0.0/20        to:172.26.0.3
Check that the iptables bytes counters are incrementing. If they're not, the iptables configuration may be incorrect, or the routing table on the sender system may not be correct.
Try running Enclave as a foreground process with high log verbosity.
Run enclave directly with sudo enclave run -v 5 to inspect traffic flows on the sender and the Enclave Gateway.
Try running tcpdump on your Enclave Gateway. Capture from the interface connected to your local subnet (in our case, that's eth0) and capture traffic to and from the host you're trying to communicate with using the Enclave Gateway, which in our case is a printer at 172.26.0.250.
Below, you can see our ping originating from a sender but exiting from the eth0 interface on the Gateway (172.26.0.3) as an ICMP echo request to the printer, followed by a returned ICMP echo reply.
$ sudo tcpdump -ni eth0 host 172.26.0.250
11:28:12.444590 IP 172.26.0.3 > 172.26.0.250: ICMP echo request, id 1, seq 4208, length 40
11:28:12.444995 IP 172.26.0.250 > 172.26.0.3: ICMP echo reply, id 1, seq 4208, length 40
Last updated Nov 21, 2022