Hi!
I often read suggestions to use something like Tailscale to create a tunnel between a home server and a VPS because it is allegedly safer than opening a port for WireGuard (WG) or Nginx on my router and connecting to my home network that way.
However, if my VPS is compromised, wouldn’t the attacker still be able to access my local network? How does using an extra layer (the VPS) make it safer?
I often read suggestions to use something like Tailscale (…) safer than opening a port for WireGuard (WG)
I guess someone is trying really hard to upsell Tailscale there. But anyway, it all comes down to how you configure things. Tailscale might come with more sensible defaults and/or help inexperienced users get things working in a safe way. It also makes it easier to deal with the dynamic address at home, reconnects, and whatnot.
Specifically about WireGuard: don’t be afraid to expose its port, because if someone tries to connect without authenticating with the right key, the server will silently drop the packets. An attacker won’t even know there’s something listening on that port; it will be invisible to typical IP scans and will ignore any traffic that isn’t properly encrypted with your keys.
If my VPS is compromised, wouldn’t the attacker still be able to access my local network? How does using an extra layer (the VPS) make it safer?
The extra layer does a couple of things. The most important might be hiding your home network’s IP address: your domains resolve to the VPS’s public IP, and the VPS tunnels the traffic to your network. Since your home IP isn’t public, nobody can DDoS your home network directly or track your approximate location from the IP. Most VPS providers also apply security checks to incoming traffic, such as DDoS detection and automatic rate limiting of requests from certain geographies, measures your ISP doesn’t bother with.
Besides that, it depends on how you set things up.
You should NOT have a WG tunnel from the home network to the VPS with fully unrestricted access to everything. There should be firewall rules in place, on your home router / local server side, to restrict what the VPS can access. First configure the router / local VPN peer to drop all incoming traffic from the VPN interface, then add exceptions as needed. Imagine you’re hosting a website on the local machine 10.0.0.50: incoming traffic from the VPN interface should only be allowed to reach 10.0.0.50 port 80 and nothing else. This makes everything much more secure than blunt access to your network, and if the VPS gets compromised you’ll still be mostly protected.
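As a rough sketch of that deny-by-default idea using nftables, assuming the tunnel interface is named wg0 and the web host is 10.0.0.50 (both names are examples, not from the original post):

```shell
# New table/chain filtering forwarded traffic (policy stays accept for LAN traffic)
nft add table inet vpnfilter
nft add chain inet vpnfilter forward '{ type filter hook forward priority 0 ; policy accept ; }'

# Allow replies to connections the home side initiated over the tunnel
nft add rule inet vpnfilter forward iifname "wg0" ct state established,related accept

# Only the web port on one host is reachable from the tunnel...
nft add rule inet vpnfilter forward iifname "wg0" ip daddr 10.0.0.50 tcp dport 80 accept

# ...everything else coming in over the tunnel is dropped
nft add rule inet vpnfilter forward iifname "wg0" drop
```

The same policy can be expressed with iptables or in an OPNsense/pfSense UI; the principle is simply deny-by-default on the tunnel interface with narrow exceptions.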
You should NOT have a WG tunnel from the home network to the VPS with fully unrestricted access to everything.
This is what I came here to make sure was said. Use your firewall to severely restrict access from your public endpoint. Your WireGuard tunnel is effectively a DMZ, so firewall it off accordingly.
I completely disagree with recommending exposing a port to someone who’s asking this very question about the relative risks.
If they lack the expertise to understand the risk differences, then they very much lack the expertise to securely expose a port.
How can you ever learn the risks of exposing ports if all answers are “if you don’t know you shouldn’t do it”?
The post explicitly recommends ONLY exposing the WireGuard port, not 80/443/22, which one should usually not do anyway. Very different things!
Yes, and to be fair the OP doesn’t even need to expose a port on his home network. He can do the opposite: expose the port on the VPS and have the local router / server connect out to the VPS endpoint instead. This also removes the issues caused by having a dynamic IP at home.
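In that arrangement only the VPS listens publicly and the home side dials out. A minimal WireGuard config for the home peer might look like this (all keys, addresses, and the hostname are placeholders):

```ini
# /etc/wireguard/wg0.conf on the home router/server (example values)
[Interface]
PrivateKey = <home-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820   # only the VPS has an open port
AllowedIPs = 10.8.0.1/32
PersistentKeepalive = 25           # keeps the NAT mapping open from behind the home router
```

Because the home peer initiates the connection and PersistentKeepalive holds it open, the VPS can reach back through the tunnel without any inbound port on the home router.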
And that’s a different animal (moving the goalposts, which is an excellent idea, but OP didn’t even think of doing this).
OP asked about exposing a local port, which is a Bad Idea 99.9% of the time, especially for someone asking why it’s a risk.
Using a VPS with reverse proxy is an excellent approach to adding a layer between the real resource and the public internet.
By learning before you take on the risk.
It’s not like this isn’t well documented.
If OP is asking this question, he’s nowhere near knowledgeable enough to take on this risk.
Hell, I’ve been Cisco certified as an instructor since 1998 and I wouldn’t expose a port. Fuck that.
I could open a port today, and within minutes I’ll be getting hammered with port scans.
I did this about 10 years ago as a demonstration, and was immediately getting thousands of scans per second, eventually causing performance issues on the consumer-grade router.
I self host because I do not trust companies. I will not even consider giving Tailscale the keys to my kingdom.
The company Tailscale is a giant target and has a much higher risk of getting compromised than my VPN or even my accessible services.
Understand the technology that you use and assess your use case and threat model.
The company Tailscale is a giant target and has a much higher risk of getting compromised than my VPN or even my accessible services.
One must be careful about this mindset. A bunch of smart lightbulbs that are individually operated aren’t a particularly appealing target either. However, in aggregate… If someone can write a script that abuses security flaws in them or their default configuration … even though you’re not part of a big centralized target, you are part of a class that can be targeted automatically at scale.
Self hosting only yields better security when you are willing to take steps to adequately secure your self hosted services and implement a disaster recovery strategy.
To add to this, self-hosting is also best when you minimize everything - limited service, with limited functionality, on dedicated hardware that doesn’t share access to your internal network or storage. Folks who use point-and-click apps to install a half dozen unauthenticated docker containers, all open to the internet, running on the same PC they store the only copy of their family photos and music/movie collection on… make me crazy.
It’s mainly about managing risk, but also not all ISPs allow residential accounts to host services on their IP addresses.
Opening a port to the internet exposes the service to the whole internet, which means you need to secure the service with strong credentials, set up SSL, manage the certificate, and keep software up to date. You incur a lot of extra work, and also extra risk not only to your self-hosted service, but to any other services you host that “trust” your service.
All that work requires extra knowledge and experience to get right which, let’s just be honest here: we’ve all probably followed that one How-To blog post, and maybe not understood every step along the way to get past that one pesky error.
Running a secure VPN overlay like Tailscale has much less overhead. You generate some keys, and configure your lighthouse server so the enrolled devices can find each other. It effectively extends your LAN environment to trusted hosts wherever they might be without exposing any of the services to the Internet.
Overall, Tailscale is simpler and much less work for individuals to set up and maintain than to secure multiple services against casual or targeted intrusion.
Tailscale also has the benefit of being a “client” in the view of the ISP, which sees your IP address reach out to initiate the tunnel, and not the other way around. If there’s any CGNAT going on, Tailscale will tunnel through it.
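For reference, bringing a device into a Tailscale overlay is roughly this (the subnet in the last command is an example, and subnet routes still need to be approved in the admin console):

```shell
# Install the client and authenticate this device into your tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# List peers and their 100.x.y.z overlay addresses
tailscale status

# Optionally advertise a LAN subnet through this node
sudo tailscale up --advertise-routes=192.168.1.0/24
```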
This is a pretty good summary. In enterprise networking, it’s common to have the ‘DMZ’, the network for servers exposed to the internet, firewalled off from the rest of the system.
If you have a webserver, you would need two sets of ports open, often on two separate firewalls. On the WAN firewall, you would open ports 80/443 pointing to the webserver. On the system firewall, between the DMZ and LAN, you would open specific ports between the webserver and whatever internal resources it needs; a database server for example.
This helps limit the damage if a malicious actor hacks into your webserver by making sure they don’t also have unrestricted access to other parts of your system. It’s called a layered security approach.
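As an illustrative sketch of those two rule sets (interface names and addresses are hypothetical: wan0/dmz0, webserver 192.168.50.10, database 10.0.0.20 on port 5432):

```shell
# WAN firewall: only 80/443 may be forwarded to the webserver in the DMZ
iptables -A FORWARD -i wan0 -d 192.168.50.10 -p tcp -m multiport --dports 80,443 -j ACCEPT
iptables -A FORWARD -i wan0 -j DROP

# System firewall (DMZ -> LAN): the webserver may reach only the database port
iptables -A FORWARD -i dmz0 -s 192.168.50.10 -d 10.0.0.20 -p tcp --dport 5432 -j ACCEPT
iptables -A FORWARD -i dmz0 -j DROP
```

Even if the webserver is fully compromised, the second firewall limits the attacker to one database port rather than the whole LAN.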
However, someone self hosting may not have the expertise or even the hardware to set up their system like this. A VPS for public facing services, as long as it’s configured properly, can be a good alternative. It also helps if you have a dynamic WAN IP address and/or are behind CG-NAT.
Edit: maybe good to mention that securing your local network behind a VPN, even one hosted on your local network, is more secure than allowing public facing services. Yes, it means you still have to open a port. But that’s useless to a malicious actor without the encryption keys. Whereas, if you have a webserver exposed publicly, malicious actors already have some level of access to your system. More than they would if that service didn’t exist, anyway. That’s not inherently bad. It comes with the territory when you’re hosting public services. It is more risky, though. And, if the exposed server is compromised, it can potentially open up the rest of your system to compromise as well. Like the original commenter said, it’s about managing risk, and different network configurations have different levels of risk.
It’s very hard to compromise a VPN, they’re designed specifically to prevent that.
A random service being exposed to the entire Internet may not be secure, and could provide a way in to your network for someone.
VPNs also have their attack surface: https://www.paloaltonetworks.com/cyberpedia/ivanti-VPN-vulnerability-what-you-need-to-know
Both can be true: a hardened service with strict segmentation and authorization can be harder to compromise than a loosely maintained VPN appliance.
Even when designing secure software, appliances and protocols, they can have their flaws.
I would say there is no definite answer for the question, it’s still on a case-by-case basis.
True, but at least you only have 1 VPN exposed that you need to secure and keep up to date, vs a bunch of random services that may or may not be built well for security.
Adding on one aspect to things others have mentioned here.
I personally have both ports/URLs opened and VPN-only services.
IMHO, it also depends on the exposure tolerance the software has or risk of what could get compromised if an attacker were to find the password.
Start by thinking of the VPN itself (Tailscale, WireGuard, OpenVPN, IPsec/IKEv2, ZeroTier) as a service, just like the service you’re considering exposing.
Almost all (working on the all part lol) of my external services require TOTP/2FA and are required to be directly exposed - i.e. VPN gateway, jump host, file server (nextcloud), git server, PBX, music reflector I used for D&D, game servers shared with friends. Those ones I either absolutely need to be external (VPN, jump) or are external so that I don’t have to deal with the complicated networking of per-user firewalls so my friends don’t need to VPN to me to get something done.
The second part for me is tolerance to be external and what risk it is if it got popped. I have a LOT of things I just don’t want on the web - my VM control panels (proxmox, vSphere, XCP), my UPS/PDU, my NAS control panel, my monitoring server, my SMB/RDP sessions, etc. That kind of stuff is super high risk - there’s a lot of damage that someone could do with that, a LOT of attack surface area, and, especially in the case of embedded firmware like the UPSs and PDUs, potentially software that the vendor hasn’t updated in years with who-knows-what bugs lurking in it.
So there’s not really a one size fits all kind of situation. You have to address the needs of each service you host on a case by case basis. Some potential questions to ask yourself (but obviously a non-exhaustive list):
- does this service support native encryption?
- does the encryption support reasonably modern algorithms?
- can I disable insecure/broken encryption types?
- if it does not natively support encryption, can I place it behind a reverse proxy (such as nginx or haproxy) to mitigate this?
- does this service support strong AAA (Authentication, Authorization, Auditing)?
- how does it log attempts, successful and failed?
- does it support strong credentials, such as appropriately complex passwords, client certificate, SSH key, etc?
- if I use an external authenticator (such as AD/LDAP), does it support my existing authenticator?
- does it support 2FA?
- does the service appear to be resilient to internet traffic?
- does the vendor/provider indicate that it is safe to expose?
- are there well known un-patched vulnerabilities or other forum/social media indicators that hosting even with sane configuration is a problem?
- how frequently does the vendor release regular patches (too few and too many can be a problem)?
- how fast does the vendor/provider respond to past security threats/incidents (if information is available)?
- is this service required to be exposed?
- what do I gain/lose by not exposing it?
- what type of data/network access risk would an attacker gain if they compromised this service?
- can I mitigate a risk to it by placing a well understood proxy between the internet and it? (for example, a well configured nginx or haproxy could mitigate some problems like a TCP SYN DoS or an intermediate proxy that enforces independent user authentication if it doesn’t have all the authentication bells and whistles)
- what VLAN/network is the service running on? (*if you have several VLANs you can place services on and each have different access classes)
- do I have an appropriate alternative means to access this service remotely than exposing it? (Is VPN the right option? some services may have alternative connection methods)
So, as you can see, it’s not just cut and dry. You have to think about each service you host and what it does.
Larger well known products - such as Guacamole, Nextcloud, Owncloud, strongswan, OpenVPN, Wireguard - are known to behave well under these circumstances. That’s going to factor in to this too. Many times the right answer will be to expose a port - the most important thing is to make an active decision to do so.
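For the reverse-proxy questions above, here is a minimal nginx sketch of fronting a service that lacks TLS or strong authentication of its own (hostname, certificate paths, and backend port are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/ssl/certs/app.pem;
    ssl_certificate_key /etc/ssl/private/app.key;

    # Independent authentication enforced in front of a weakly authenticated backend
    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/htpasswd;

    location / {
        # The unencrypted backend is bound to localhost and never exposed directly
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```

The proxy terminates TLS, absorbs malformed traffic, and adds a credential check the backend never sees, which is exactly the mitigation the checklist points at.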
However, if my VPS is compromised, wouldn’t the attacker still be able to access my local network?
That depends on your setup. I terminate my WireGuard tunnels on my OPNsense router, where I have explicit firewall rules for what the VPS hosts can talk to.
Wireguard is a VPN.
A VPN is preferable because what’s safer, making one hole, or making six holes? One, obviously.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
CGNAT: Carrier-Grade NAT
Git: Popular version control system, primarily for code
HTTP: Hypertext Transfer Protocol, the Web
IP: Internet Protocol
NAS: Network-Attached Storage
NAT: Network Address Translation
SMB: Server Message Block protocol for file and printer sharing; Windows-native
SSH: Secure Shell for remote terminal access
SSL: Secure Sockets Layer, for transparent encryption
TCP: Transmission Control Protocol, most often over IP
UDP: User Datagram Protocol, for real-time communications
VPN: Virtual Private Network
VPS: Virtual Private Server (opposed to shared hosting)
nginx: Popular HTTP server
14 acronyms in this thread; the most compressed thread commented on today has 17 acronyms.
[Thread #765 for this sub, first seen 27th May 2024, 12:45] [FAQ] [Full list] [Contact] [Source code]
I guess the VPN adds an additional layer of authentication. And of course everything is encrypted by default which might otherwise not be the case.
Plus, with a VPN you can access multiple services.
Yep, a VPN makes it easier to access different services: connect once, then open anything you want on the local network. But I can set up WG without a VPS; to me that extra layer seems unnecessary.
Yeah, any additional VPN is unnecessary.
That’s what I was thinking too. Thank you!
The thing about something like Tailscale or ZeroTier or Nebula is that it’s dynamic. These all behave similarly to a multiplayer game … a use case every residential firewall should “just get.”
The ports that are “opened” can change regularly, they’re not some standard port that can just be checked to see if it’s open (typically).
Compare that to the average novice opening port 51820 for WireGuard or 22 for SSH, and you start to see the difference. With those ports, you’ve got a pretty good idea what’s on the other side, and it might even be willing to talk to you and give you error messages or TCP ACK packets to confirm it’s there (e.g. SSH).
This advice is as you can probably imagine more relevant to things like OpenVPN that are notoriously hard to correctly configure or application protocols like SSH or HTTP.
With these mesh VPNs you also don’t have to worry about your home dynamic IP changing and breaking your connection at inopportune times… And that’s a huge benefit (IMO). It’s also very easy to tie in new devices to the network.
A lot of it is about outsourcing labor to programs that know how to set up a VPN and make management of it easy. That ties into security because … a LOT of security issues boil down to misconfiguration.
Wireguard doesn’t send anything back if the key is not correct.
Because of this, Tailscale port swapping is inconsequential vs wireguard here.
Tailscale transfers trust of your VPN subnet to a third party, which is a real security concern.
I agree SSH service will be attacked if they are plainly exposed, out of date and allow login challenges.
Also agree that under- or misconfiguration is a massive cause of security issues.
Yes, WireGuard was designed to fix a lot of these issues. It does change the equation quite a bit. I agree with you on that (I kind of hinted at it but didn’t spell that out I suppose).
That said, WireGuard AFAIK still only works well with static IPs; it becomes a PITA once dynamic IPs are in play. I think some of that is mitigated if the device being connected to has a static IP (even if the device being connected from doesn’t). However, that doesn’t cover a lot of self hosting use cases.
Tailscale/ZeroTier/Nebula etc. do transfer some control (Nebula can actually be used with fully internal control, and ZeroTier can be used that way as well, though you’re going to have to put in more work with ZeroTier … I don’t know about Tailscale’s offering here).
Though doing things yourself also (in most cases) means transferring some level of control to a cloud/traditional server hosting provider anyways (e.g, AWS, DigitalOcean, NFO, etc).
Using something like ZeroTier can cut out a cloud provider/VPS entirely in favor of a professionally managed SaaS for a lot of folks.
A lot of this just depends on who you trust more – yourself, or the team running the service(s) you’re relying on – and how much time you can practically devote to maintenance. There’s no “one size fits all” answer, but … I think most people are better off using a SaaS to form an internal mesh network and running whatever services they’re interested in inside of that network. It’s a nice tradeoff.
You can still set up device firewalls, SSH key-only authentication, fail2ban, and things of that ilk as a precaution in case their networks do get compromised. These are all things you should do if you’re self hosting anyway … but hobbyists/novices will probably stumble through them or get something wrong, which IMO is more okay in the SaaS case because you’ve got a professional security team keeping an eye on things.
Is it for security? I think it’s mostly recommended because your home router is likely to have a dynamic address.
This is in regards to opening a port for WG vs a tunnel to a VPS. Of course directly exposing nginx on your router is bad.
Quite often I see replies like “don’t open ports, use tailscale”. Maybe they mix different reasons and solutions, confusing people like me :D
The really nice thing about Tailscale for accessing your hosted services is that absolutely nothing can connect without authenticating through professionally hosted, standard authentication, and there are no public ports for script kiddies to scan for, spot, and start hammering on. There are thousands of bots that do nothing but scan the internet for hosted services and then try to compromise them, so not even showing up on those scans is a good thing.
For example, I have tailscale on my Minecraft server and connect to it via tailscale when away from home. If a buddy wants to join I just send a link sharing the machine to them and they can install tailscale and connect to it normally. If for some reason buddy needs to be cut off, I can just stop sharing to that account on Tailscale and they can no longer access the machine.
The biggest challenge of Tailscale is also its biggest benefit: nothing can connect without going through the Tailscale client, so if my buddy can’t/won’t install Tailscale, they can’t join my Minecraft server.
WG uses UDP, so as long as your firewall is configured correctly it should be impossible to scan the open port. Any packet hitting the open port that isn’t valid or doesn’t have a valid key is just dropped, same as any ports that are closed.
Most modern firewalls default to dropping packets, so you won’t be showing up in scans even with an open WG port.
Checking nmap’s documentation, it looks like it’s perfectly possible to detect open UDP ports.
Yes, but only if your firewall is set to reject instead of drop. The documentation you linked mentions this; that’s why open ports are listed as open|filtered: any port that’s “open” might actually be being filtered (dropped). On a modern firewall, an nmap scan will show every port as open|filtered, regardless of whether it’s open or not.
Edit: Here’s the relevant bit from the documentation:
The most curious element of this table may be the open|filtered state. It is a symptom of the biggest challenges with UDP scanning: open ports rarely respond to empty probes. Those ports for which Nmap has a protocol-specific payload are more likely to get a response and be marked open, but for the rest, the target TCP/IP stack simply passes the empty packet up to a listening application, which usually discards it immediately as invalid. If ports in all other states would respond, then open ports could all be deduced by elimination. Unfortunately, firewalls and filtering devices are also known to drop packets without responding. So when Nmap receives no response after several attempts, it cannot determine whether the port is open or filtered. When Nmap was released, filtering devices were rare enough that Nmap could (and did) simply assume that the port was open. The Internet is better guarded now, so Nmap changed in 2004 (version 3.70) to report non-responsive UDP ports as open|filtered instead.
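Concretely, a UDP probe against a hypothetical WireGuard endpoint typically comes back like this when the firewall silently drops unsolicited packets; a closed-and-dropped port reports the same state, so the scan proves nothing about whether WireGuard is there:

```shell
nmap -sU -p 51820 vpn.example.com
# Typical result for a dropped probe (same whether WireGuard is listening or not):
#   PORT      STATE         SERVICE
#   51820/udp open|filtered unknown
```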
Huh! Thank you very much for the detailed answer that’s extremely interesting!
I think you misunderstood the advice. If your goal is to open your services to the internet then any of the approaches can let in an attacker. It would depend on whether any of the things you expose to the internet has a remote exploitable vulnerability.
Long-standing software like SSH or WG that everybody relies on and everybody checks all the time will have fewer vulnerabilities than a service made by one person that you expose over a reverse proxy; but neither is 100% foolproof.
The Tailscale advice is about connecting your devices privately, on a private mesh network that is never exposed to the internet.
If you’re behind CGNAT and use a VPS to open up to the internet then any method you use to tunnel traffic from the VPS into your LAN will have the same risk because it’s the service inside that’s the most vulnerable not the tunnel itself.
WireGuard (which is what Tailscale is built on) doesn’t even require you to open ports on both sides.
Set up WireGuard on a VPS first, where it is accessible, then set it up from within your network. It’ll traverse NAT and everything, and you don’t have to open a port on your network.
Tailscale is the exact same thing, just easier because it does everything for you (key generation, routing, …). Their service replaces your VPS; it’s up to you whether you think that’s acceptable. IMHO, WireGuard is worth learning at least. I eventually (partially) switched to Tailscale because I’m lazy, and all the services I host have authentication anyway, with the VPN just being a second layer.