I’ve spent some time searching this question, but I have yet to find a satisfying answer. The majority of answers that I have seen state something along the lines of the following:
1. “It’s just good security practice.”
2. “You need it if you are running a server.”
3. “You need it if you don’t trust the other devices on the network.”
4. “You need it if you are not behind a NAT.”
5. “You need it if you don’t trust the software running on your computer.”
The only answer that makes any sense to me is #5. #1 leaves a lot to be desired, as it advocates for doing something without thinking about why you’re doing it – it is essentially a non-answer. #2 is strange – why does it matter? If one is hosting a webserver on port 80, for example, they are going to poke a hole in their router’s NAT at port 80 to open that server’s port to the public. What difference does it make to then have another firewall where that same port has to be opened? #3 is a strange one – what sort of malicious behaviour could even be done to a device with no firewall? If you have no applications listening on any port, then there’s nothing to access. #4 feels like an extension of #3 – only, in this case, it is most likely a larger group that the device is exposed to. #5 is the only one that makes some sense; if you install a program that you do not trust (you don’t know how it works), you don’t want it to be able to readily communicate with the outside world unless you explicitly grant it permission to do so. Such an unknown program could be the door to get into your device, or a spy on your device’s actions.
If anything, a firewall only seems to provide extra precautions against mistakes made by the user, rather than actively preventing bad actors from getting in. People seem to treat it as if it’s acting like the front door to a house, but this analogy doesn’t make much sense to me – without a house (a service listening on a port), what good is a door?
It seems that the consensus from all the comments is that you do in fact need a firewall. So my question is how does that look exactly? A hardware firewall device directly between modem and router? Is using the software firewall on the router enough? Or, additionally, having a software firewall installed on all capable devices on the network? A combination of the above?
And like most things related to Linux on the internet, the consensus is generally incorrect. For a typical home user who isn’t opening ports or taking a development laptop to places with unsecured wifi networks, you don’t really need a firewall. It’s completely superfluous. Anything you do to your PC that causes you genuine discomfort will more than likely be your own fault rather than an explicit vulnerability. And if you’re opening ports on your home network to do self-hosting, you’re already inviting trouble and a firewall is, in that scenario, a bandaid on a sucking chest wound you self-inflicted.
For a typical home user who isn’t opening ports or taking a development laptop to places with unsecured wifi networks, you don’t really need a firewall. It’s completely superfluous.
A “typical” home user, who I assume is less knowledgeable about technology, is probably the person who would benefit the most from strict firewalls installed on their device. Such an individual presumably doesn’t have the prerequisite knowledge or awareness required to adequately gauge the threats on their network.
Anything you do to your PC that causes you genuine discomfort will more than likely be your own fault rather than an explicit vulnerability.
Would this not be adequate rationale for having contingencies, i.e. firewalls? A risk/threat needn’t only be an external malicious actor. One’s own mistakes could certainly be interpreted as a potential threat, and are, therefore, worthy of mitigation.
And if you’re opening ports on your home network to do self-hosting, you’re already inviting trouble and a firewall is, in that scenario, a bandaid on a sucking chest wound you self-inflicted.
Well, no, not necessarily. It’s important to understand what the purpose of the firewall is. If a device can potentially become an attack vector, it’s important to take precautions against that – you’d want to secure other devices on the network in the off chance that it does become compromised, or secure that very device to limit the potential damage that it could inflict.
A “typical” home user, who I assume is less knowledgeable about technology, is probably the person who would benefit the most from strict firewalls installed on their device. Such an individual presumably doesn’t have the prerequisite knowledge or awareness required to adequately gauge the threats on their network.
They also would not realistically be doing anything that would cause open ports on their machine to serve data to some external application. It’s not like someone can just “hack” your computer by picking a random port and weaseling their way in. They have to have some exploitable mechanism on the machine that serves data in a way that’s insecure.
Would this not be adequate rationale for having contingencies, i.e. firewalls? A risk/threat needn’t only be an external malicious actor. One’s own mistakes could certainly be interpreted as a potential threat, and are, therefore, worthy of mitigation.
I am assuming that there’s a hierarchy of needs in terms of maintaining any Linux system. Whenever you learn how to use something (and you would have to learn how to use a firewall), you are sacrificing time and energy that would be spent learning something else. Knowing how your package manager works, or how to use systemctl, or understanding your file system structure, or any number of pieces of fundamental Linux knowledge is, for a less technically sophisticated user, going to do comparatively more to guarantee the longevity and health of their system than learning how to use a firewall, which is something capable of severely negatively impacting your user experience if you misconfigure it. In other words: don’t mess around with a firewall if you don’t know what you’re doing. Use your time learning other things first if you’re not a technically sophisticated user. I also don’t exactly know what “mistakes” you’d be mitigating by installing a firewall if you aren’t binding processes to those ports (something a novice user should not be doing anyway).
Well, no, not necessarily. It’s important to understand what the purpose of the firewall is. If a device can potentially become an attack vector, it’s important to take precautions against that – you’d want to secure other devices on the network in the off chance that it does become compromised, or secure that very device to limit the potential damage that it could inflict.
You just wrote that “One’s own mistakes could certainly be interpreted as a potential threat, and are, therefore, worthy of mitigation.” The best way of mitigating mistakes is by not making them in the first place, or by not creating a scenario in which you could potentially make them. Prevention is always better than cure. You should never open ports on your local network. Ever. I don’t care if you have firewalls on everything down to your smart thermostat - if you need to expose locally hosted services you should be maintaining a cloud VM or similar cloud based service that forwards connections to the desired service on your internal network via a VPN like Tailscale. Or, even better, just put Tailscale’s service on whatever machine you’re using that needs access to your personal network. And, yes, if you’re doing things like that, you would also want robust firewall protections everywhere. But the firewall simply isn’t ever “enough.”
Anyway, just my 2 cents. The more you know and do, the greater steps you should take to protect yourself. For someone who knows very little, the most important thing that can help them is knowing more, and there is a hierarchy of learning that will take them from “knowing little” to “knowing much,” but they shouldn’t/don’t need to concern themselves with certain mechanisms before they know enough to reliably use them or mitigate their own mistakes. That said, if you are a new user, you’re probably installing a linux distro that already comes with its own preconfigured firewall that’s already running and you just don’t know about it. In which case, moot point. If you’re not, though, I’m assuming your goal is learning linux stuff, in which case, I’ve gone into that.
They also would not realistically be doing anything that would cause open ports on their machine to serve data to some external application.
They may not explicitly do it, no, but I could certainly see the possibility of the software that they use having such a vulnerability, or even a malicious bit of software inadvertently being installed on their device.
In other words: don’t mess around with a firewall if you don’t know what you’re doing. Use your time learning other things first if you’re not a technically sophisticated user. I also don’t exactly know what “mistakes” you’d be mitigating by installing a firewall if you aren’t binding processes to those ports (something a novice user should not be doing anyway).
This sort of skirts around answering the question.
The best way of mitigating mistakes is by not making them in the first place
But mistakes will be made all the same.
Prevention is always better than cure.
This is exactly the point that I am trying to make. Having contingencies in place on the off chance that something doesn’t go as expected could certainly be interpreted as “prevention”.
You should never open ports on your local network. Ever.
What would be the rationale for this statement?
if you need to expose locally hosted services you should be maintaining a cloud VM or similar cloud based service that forwards connections to the desired service on your internal network via a VPN like Tailscale.
I’m not sure that I understand what issue this would solve. Would the malicious connections not still be forwarded through the VPN to the service? I am quite lacking in knowledge on Tailscale, and how related infrastructure is used in production, so please pardon my ignorance.
Depends on your setup. I got a network-level firewall+router setup between my modem and my LAN. But I also have firewalld (a friendly wrapper on iptables) on every Linux device I care about, because I don’t want to unintentionally expose something to the network. Hm, guess maybe I should find something for Android and my Windows boxes.
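Roughly, the per-device part looks like this with firewall-cmd (the service name is just an example; newer firewalld versions use an nftables backend underneath):

firewall-cmd --get-active-zones                  # which zone each interface landed in
firewall-cmd --list-all                          # what the current zone actually allows
firewall-cmd --permanent --remove-service=mdns   # e.g. stop accepting mDNS if you don’t need it
firewall-cmd --reload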
(friendly wrapper on iptables)
iptables is deprecated, so it’s better to label it as a wrapper for nftables.
I think it’s better to have one but you probably don’t need multiple layers. When I’m setting up servers nowadays, it’s typically in the cloud and AWS and the like typically have firewalls. So, I don’t really do much on those machines besides change ports to non-standard things. (Like the SSH port should be a random one instead of 22.)
But you should use one if you don’t have an ecosystem where ports can be blocked or forwarded. If nothing else, the constant login attempts from bots will fill up your logs. I disable password logins on web servers and if I don’t change the port, I get a zillion attempts to ssh using “admin” and some common password on port 22. No one gets in but it still requires more compute than just blocking port 22 and making your SSH port something else.
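For reference, the sshd side of that usually amounts to something like the following (the port number is just an example, and the service name to reload differs per distro):

# /etc/ssh/sshd_config
Port 2222                    # something non-standard instead of 22
PasswordAuthentication no    # keys only, so the bot guesses are pointless anyway
PermitRootLogin no
# then: systemctl reload sshd   (or ssh.service, depending on the distro)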
If nothing else, the constant login attempts from bots will fill up your logs.
Yeah, this is definitely a scenario that I hadn’t considered.
As i see it, the term “firewall” was originally the neat name for an overall security concept for your system’s privacy/integrity/security. Thus physical security can be part of a firewall concept just as much as, say, training of users. The keys to your server room’s door could be part of that concept too.
In general you only “need” to secure something that actually is there; you won’t build a safe into the wall and hide it with an old painting without something to put in it, or - could be part of the concept - an alarm sensor that triggers when that old painting is moved, thus creating sort of a honeypot.
whether and what types of security you want is up to you (so don’t blame others if you made bad decisions).
but as a general rule from practice i would say it is wise to always have two layers of defence, and to always try to prepare for one “error” at a time and solve it quickly when it happens.
example: if you want an rsync server on an internet facing machine to only be accessible for some subnets, i would suggest you add iptables rules as tight as possible and also configure the service to reject access from all addresses other than the wanted ones. also consider monitoring both, maybe using two different approaches: monitor that the config stays as defined, and set up an access-check from one of the unwanted, excluded addresses that fires an alarm when access becomes possible.
this would not only prevent that unwanted access from happening but also prevent accidental opening or breaking of the config from going unnoticed.
here it’s the same: whether you want monitoring is also up to you and your concept of security, as it is with redundancy.
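a rough sketch of that double layer for the rsync example (the subnet is just a placeholder):

# packet filter layer: only the wanted subnet may reach rsyncd (tcp 873)
iptables -A INPUT -p tcp --dport 873 -s 192.0.2.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 873 -j DROP

# service layer: the same restriction again in rsyncd.conf
hosts allow = 192.0.2.0/24
hosts deny  = *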
In general i would suggest setting up an ip filtering “firewall” if you have ip forwarding activated for some reason. a rather tight filtering would maybe only allow what you really need, while DROPping all other requests, but sometimes icmp comes in handy, so maybe you want ping or MTU discovery to actually work. it always depends on what you have, how strongly you want to protect it, from what, and with what effort. a generic ip filter to only allow outgoing connections on a single workstation may be a good idea as a second layer of “defence” in case your router has hidden vendor backdoors that either the vendor sold or someone else simply discovered. disallowing all those might-be-usable-for-some-users, on-by-default protocols like avahi & co in some distros would probably help a bit then.
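a minimal nftables sketch of such a workstation filter (adapt to your own setup, this is just the idea):

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept
        ct state established,related accept    # also lets icmp errors for existing flows through (MTU discovery)
        icmp type echo-request accept          # keep ping working
        icmpv6 type { echo-request, nd-neighbor-solicit, nd-neighbor-advert, nd-router-advert } accept
    }
    chain output {
        type filter hook output priority 0; policy accept;    # outgoing stays open here, tighten if wanted
    }
}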
so there is no generic fault-proof rule of thumb…
to number 5: what sort of “not trusting” the software? it might have, has, or “will” have:
a. security flaws in code
b. insecurity by design
c. backdoors by gov, vendor or distributor
d. spy functionality
e. annoying ads as soon as it has an internet connection
f. all of the above (now guess the likely vendors for this one)
for c, d and e one might also want to filter some outgoing connections…
one could also use an ip filtering firewall to keep logs small by disallowing those who obviously have intentions you dislike (e.g. with fail2ban)
so maybe create a concept first and ask how to achieve the desired precautions then. or just start with your idea of the firewall and dig into some of the appearing rabbit holes afterwards ;-)
regards
for c, d and e one might also want to filter some outgoing connections…
Is there any way to reliably do this in practice? There’s no way of really knowing what outgoing source ports are being used, as they are chosen at random when the connection is made, and if the device is to be practically used at all, some outgoing destination ports must be allowed as well e.g. DNS, HTTP, HTTPS, etc. What other methods are there to filter malicious connections originating from the device using a packet filtering firewall? There is the option of using a layer 7 firewall like OpenSnitch, but, for the purpose of this post, I’m mostly curious about packet filtering firewalls.
one could also use an ip filtering firewall to keep logs small by disallowing those who obviously have intentions you dislike (e.g. with fail2ban)
This is a fair point! I hadn’t considered that.
you do not need to know the source ports for filtering outgoing connections.
(i usually use “shorewall” as a nice and handy wrapper around iptables, with a “reject everything else” policy once i have configured everything as i wanted. so i only occasionally use iptables directly; if my examples don’t work, i simply might be wrong with the exact syntax)
something like:
iptables -I OUTPUT -p tcp --dport 22 -j REJECT
should prevent all new tcp connections TO ssh ports on other servers when initiated locally (the forward chain is again another story)
so … one could run an http/s proxy under a specific user account, block all outgoing connections except those of that proxy (i.e. squid) then every program that wants to connect somewhere using direct ip connections would have to use that proxy.
better try this first on a VM on your workstation, not your server in a datacenter:
iptables -I OUTPUT -j REJECT
iptables -I OUTPUT -p tcp -m owner --uid-owner squiduser -j ACCEPT
“-I” inserts at the beginning, so that the second -I actually becomes the first rule in that chain allowing tcp for the linux user named “squiduser” while the very next would be the reject everything rule.
here i also assume “squiduser” exists, and hope i recall the syntax for owner match correctly.
then create user accounts within squid for all applications (that support using proxies) with precise acl’s to where (the fqdn’s) these squid-users are allowed to connect to.
there are possibilities to intercept regular tcp/http connections and “force” them to go through the http proxy, but if it comes to https and not-already-known domains the programs would connect to, things become way more complicated (search for “ssl interception”) like the client program/system needs to trust “your own” CA first.
so the concept is to disallow everything by iptables, then allow more fine-grained access via the http proxy, where the proxy users would have to authenticate first. this way your weather desktop applet may connect to w.foreca.st if configured, but not e.vili.sh, as that would not be included in its user’s acl.
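a rough squid.conf sketch of that per-user acl idea (the auth helper path differs per distro, names are just examples):

# basic auth so each application gets its own squid “user”
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic realm proxy

acl weatherapp proxy_auth weatherapp     # the weather applet’s proxy account
acl weatherdst dstdomain .foreca.st      # the only destination it legitimately needs

http_access allow weatherapp weatherdst
http_access deny all                     # everything else, including e.vili.sh, is refused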
this setup would not prevent everything applications could do to connect to the outside world: a locally configured email server could probably be abused, or even DNS would still be available to evil applications to “transmit” data to their home servers, but that’s a different story and an abuse of your resolver or forwarder, not the tcp stack. there exists a library to tunnel tcp streams through dns requests and their answers, a bit creepy, but possible and already prepared. and using an http-only proxy does not prevent tcp streams like ssh; i think a simple tcp-through-http-proxy-tunnel software was called “corkscrew” or similar and would go straight through an http proxy, but would need the other end of the tunnel software to be up and running.
much could be abused by malicious software if it gets executed on your computer, but in general preventing simple outgoing connections is possible and more or less easy depending on what you want to achieve
should prevent all new tcp connections TO ssh ports on other servers when initiated locally (the forward chain is again another story)
But the point that I was trying to make was that that would then also block you from using SSH. If you want to connect to any external service, you need to open a port for it, and if there’s an open port, then there’s an opening for unintended escape.
so … one could run an http/s proxy under a specific user account, block all outgoing connections except those of that proxy (i.e. squid) then every program that wants to connect somewhere using direct ip connections would have to use that proxy.
I don’t fully understand what this is trying to accomplish.
But the point that I was trying to make was that that would then also block you from using SSH. If you want to connect to any external service, you need to open a port for it, and if there’s an open port, then there’s an opening for unintended escape.
now i have the feeling as if there might be a misunderstanding of what “ports” are and what an “open” port actually is. Or i just don’t get what you want. i am not on your server/workstation, thus i cannot even try to connect TO an external service “from” your machine. i can do so from MY machine to other machines as i like and if those allow me, but you cannot do anything against that unless that other machine happens to be actually yours (or you own a router that happens to be on my path to where i connect to)
let’s try something. your machine A has an ssh service running, my machine B has ssh, and another machine C has ssh.
users on the machines are a, b, c, the machine letters but in lowercase. what should be possible and what not? like: “a can connect to B using ssh”, “a can not connect to C using ssh (forbidden by A)”, “a can not connect to C using ssh (forbidden by C)” […]
so what is your scenario? what do you want to prevent?
I don’t fully understand what this is trying to accomplish.
accomplish control (allow/block/report) over who or what on my machine can connect to the outside world (using http/s) and to exactly where, independent of ip addresses, using domains instead to allow or deny on a per user/application + domain combination, while not having to update ip based rules that could quickly become outdated anyway.
now i have the feeling as if there might be a misunderstanding of what “ports” are and what an “open” port actually is. Or i just don’t get what you want. i am not on your server/workstation, thus i cannot even try to connect TO an external service “from” your machine.
This is most likely a result of my original post being too vague – which is, of course, entirely my fault. I was intending it to refer to a firewall running on a specific device. For example, a desktop computer with a firewall, which is behind a NAT router.
so what is your scenario? what do you want to prevent?
What is your example in response to? Or perhaps I don’t understand what it is attempting to clarify. I don’t necessarily have any confusion regarding setting up rules for known and discrete connections like SSH.
accomplish control (allow/block/report) over who or what on my machine can connect to the outside world (using http/s) and to exactly where, independent of ip addresses, using domains instead to allow or deny on a per user/application + domain combination, while not having to update ip based rules that could quickly become outdated anyway.
Are you referring to an application layer firewall like, for example, OpenSnitch?
so here are some reasons for having a firewall on a computer that i did not read in the thread (could have missed them). i had already written this but then lost the text again before it was saved :( so here’s a compact version:
- having a second layer of defence, to prevent some of the direct impact of e.g. supply chain attacks like “upgrading” to a maliciously manipulated version.
- control things tightly and report strange behaviour as an early warning sign ‘if’ something happens, no matter whether attacks or bugs.
- learn how to tighten security and know better what to do in case you need it some day.
- sleep more comfortably knowing what you have done or prevented
- compliance with some laws or customers’ buzzword-matching wishes
- the fun of doing it because you can
- getting in touch with real-life side quests that you would never be aware of if you did not actively practice by hardening your system.
one side quest example i stumbled upon: imagine an attacker has compromised the vendor of some software you use on your machine. this software connects to some port eventually, but pings the target first before doing so (whatever! you say). from time to time the ping does not go to the correct 11.22.33.44 of the service (weather app maybe) but to 0.11.22.33. looks like a bug, you say, never mind.
could be something different. pinging an IP that does not exist ensures that the connection tracking of your router keeps the entry until it expires, opening a time window that is much easier to hit even if clocks are a bit out of sync.
also, the attacker knows the IP that gets pinged (but it’s an outbound connection to an unreachable IP, you say, what could go wrong?)
let’s assume the attacker knows the external IP of your router by other means (e.g. you’ve sent an email to the attacker and your freemail provider hands over your external router address to him inside an email Received header, or the manipulated software updates a dyndns address, or the attacker just guesses your router has an address in your provider’s dial-up range, no matter what.)
so the attacker knows when and from where (or from what range) you will ping an unreachable IP address, and in exactly what timeframe (the software running from cron, or in user space, pinging the “buggy” IP address at exact intervals). then within that timeframe the attacker sends an icmp unreachable packet to your router’s external address and puts the known buggy IP in the payload as the address that is unreachable. the router matches the payload of the packet, recognizes it is related to the known connection tracking entry and forwards the icmp unreachable to your workstation, which in turn gives your application the information that the attacker’s IP address is telling you that the buggy IP 0.11.22.33 cannot be reached by him.

as the source IP of that packet is the IP of the attacker, that software can then open a TCP connection to that IP on port 443 and follow the instructions the attacker sends to it. sure, the attacker needs that backdoor to already exist and run on your workstation, and to know or guess your external IP address, but the actual behaviour of the software looks normal, a bit buggy maybe, and there is exactly no information within the software about where the command and control server would be, only that it would respond to the icmp unreachable packet it would eventually receive.

all connections are outgoing, but the attacker “connects” to his backdoor on your workstation through your NAT “firewall” as if it did not exist, while hiding the backdoor behind an occasional ping to an address that does not respond, either because the IP does not exist, or because it cannot respond due to a DDoS attack on the 100% sane IP that actually belongs to the service the app legitimately connects to, or due to a maintenance window the provider of the manipulated software officially announces. the attacker just needs the IP to not respond, or to respond slooowly, to increase the timeframe for connecting to his backdoor on your workstation before your router deletes the connection tracking entry of that unlucky ping.
if you don’t understand how that example works, that is absolutely normal and i might be bad at explaining too. thinking out of the box, around corners that only sometimes are corners to think around, and only under very specific circumstances that could happen by chance or could be directly or indirectly under control of the attacker, while only revealing the attacker’s location in the exact moment of connection, is not an easy task and can really destroy the feeling of achievable security (aka the belief of having some “control”). but this is not a common attack vector, only maybe an advanced one.
sometimes side quests can be more “informative” than the main course ;-) so i would put that (“learn more”, not the example above) as the main good reason to install a firewall and other security measures on your pc even if you’d think you’re okay without it.
This is most likely a result of my original post being too vague – which is, of course, entirely my fault.
Never mind, and i got distracted and carried away a bit from your question by the course the messages had taken
What is your example in response to?
i thought it could possibly help clarifying something, sort of it did i guess.
Are you referring to an application layer firewall like, for example, OpenSnitch?
no, i do not consider a proxy like squid to be an “application level firewall” (but i don’t know opensnitch however), i would just limit outbound connections to some fqdns per authenticated client and ensure the connection only goes to where the fqdns actually point to. like an attacker could create a weather applet that “needs” https access to f.oreca.st, but implements a backdoor that silently connects to a static ip using https. with such a proxy, f.oreca.st would be available to the applet, but the other ip not, as it is not included in the acl, neither as fqdn nor as an ip. if you like to say this is an application layer firewall, ok, but i don’t think so; it’s just a proxy with acls to me, one that only checks for allowed destinations and whether the response has some http headers (like 200 ok), but not really more. yet it can make it harder for some attackers to gain the control they are after ;-)
Firewall for incoming traffic:
- If you are a home user with your computer or laptop inside a LAN, you would not really need a firewall, unless you start to use applications which expose their ports on 0.0.0.0 rather than 127.0.0.1 (I believe the Redis server software did this a few years ago; see the sketch after this list) and you do not trust other users or devices (smart home devices, phones, tablets, modems, switches and so on) inside your LAN.
- If you are running a server with just a few services, for example ssh, smtp, https, some hosting company people I knew argue that no firewall is needed. I am not sure, my knowledge is lacking.
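For the 0.0.0.0 case the fix is often in the software’s own configuration rather than in a firewall. If it is something like Redis, a minimal sketch (directives from its standard config file, 6379 is its default port):

# redis.conf: keep the service on loopback instead of all interfaces
bind 127.0.0.1 ::1
protected-mode yes

# verify what is actually listening afterwards
ss -tlnp | grep 6379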
Application firewalls, watching also outgoing traffic:
If you compare Linux with some other operating systems you will see that on Linux, for years, an application firewall was non-existent. But there is a choice now: opensnitch. This can be useful if you run desktop applications that you do not fully trust, or want more control.
If you are a home user with your computer or laptop inside a LAN, you would not really need a firewall, unless you start to use applications which expose their ports on 0.0.0.0 rather than 127.0.0.1
Interestingly, on one of my devices, running
# ss -utpnl
shows quite a number of Spotify and Steam sockets listening on 0.0.0.0. I looked up some of the ports, and, for example, one of the Steam ones was a socket for Remote Play.
But there is a choice now: opensnitch.
This is really cool! Thank you so much for this recommendation! This pretty much solves what was bugging me about outgoing connections in a layer 3/4 firewall like nftables.
You always need a firewall, no other answers.
Why do you think Windows and most Linux distributions come packaged with one?
You always need a firewall, no other answers.
Okay, but why? That’s kind of the point of why I made this post, as is stated in the post’s body.
To keep your system secure no matter what, you open up only the ports you absolutely need.
People will always make a mistake while configuring software; a firewall is there to make sure that error is caught. With more advanced firewalls you can even make sure only certain apps have access to the internet, so that only what you absolutely need to connect to the internet does.
In general it’s for security, but can also be privacy related depending on how deep you want to get into it.
EDIT: It isn’t about not trusting other devices on your network, or software you run, or whether you are running a server. It’s about general security of your system.
With more advanced firewalls you can even make sure only certain apps have access to the internet, so that only what you absolutely need to connect to the internet does.
This sounds very interesting. This would have to be some form of additional layer 7 firewall, right (as in, it would have to interact with system processes, rather than filtering by network packet at layers 3 and 4)? Does this type of firewall have a specific name, or do you perhaps have some examples? I don’t think it would be possible with something like nftables, but I could certainly be wrong.
I honestly only know of a windows one called simplewall.
I used to use it to outright block windows telemetry, microsoft services, apps, …
It also helped me to save a lot of bandwidth in regards to windows and all the stuff that comes preinstalled with it.
I haven’t searched for one for linux, mostly because 90% of apps I run are cli tools that don’t require an internet connection, but I’m sure there is probably one that exists.
OpenSnitch was recommended to me in this comment. I’ve set it up, and it seems to be working quite well. While doing some research on the topic, I also came across Portmaster, but, while it does look nice, some of its features are locked behind a paywall, so I’m not interested – OpenSnitch works just fine!
I personally use a firewall for containing the local services I am running on my non-server PC, ex. Tiny Tiny RSS. If I am only using Tiny Tiny RSS locally, it’s just potentially dangerous to make this service visible and accessible for every client in my local network, which in my case, isn’t populated by my own personal devices, as I live in a dormitory. Other than that, you can block the well-known ports of commonly exploited protocols such as UPnP. That’s not because someone will “break into your device” with UPnP, but rather as a matter of digital autonomy, to control the mode of network communication done by the software on your device.
For me, it’s primarily #5: I want to know which apps are accessing the network and when, and have control over what I allow and what I don’t. I’ve caught lots of daemons for software that I hadn’t noticed was running and random telemetry activity that way, and it’s helped me sort-of sandbox software that IMO does not need access to the network.
Not much to say about the other reasons, other than #2 makes more sense in the context of working with other people: If your policy is “this is meant to be an HTTPS-only machine,” then you might want to enforce that at the firewall level to prevent some careless developer from serving the app on port 80 (HTTP), or exposing the database port while they’re throwing spaghetti at the wall wrestling with some bug. That careless developer could be future-you, of course. Then once you have a policy you like, it’s also easier to copy a firewall config around to multiple machines (which may be running different apps), instead of just making sure to get it consistently right on a server-by-server basis.
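A crude sketch of that kind of policy with ufw, for example (keeping 22 open for admin access; the ports are examples):

ufw default deny incoming
ufw allow 443/tcp     # HTTPS only
ufw allow 22/tcp      # keep ssh for admins
ufw enable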
So… Necessary? Not for any reason I can think of. But useful, especially as systems and teams grow.
I’ve caught lots of daemons for software that I hadn’t noticed was running and random telemetry activity that way
I did the exact same thing recently when I installed OpenSnitch – it was quite interesting to see all the requests that were being made.
If your policy is “this is meant to be an HTTPS-only machine,” then you might want to enforce that at the firewall level to prevent some careless developer from serving the app on port 80 (HTTP), or exposing the database port while they’re throwing spaghetti at the wall wrestling with some bug. That careless developer could be future-you, of course.
That’s a fair point!
You’re right. If you don’t open up ports on the machines, you don’t need a firewall to drop the packets to ports that are closed and will drop them anyways. So you just need it if your software opens ports that shouldn’t be available to the internet. Or you don’t trust the software to handle things correctly. Or things might change and you or your users install additional software and forget about the consequences.
However, a firewall does other things. For example forwarding traffic. Or in conjunction with fail2ban: blocking people who try to guess ssh passwords and connect to your server multiple times a second.
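For the fail2ban part, a minimal jail sketch (the values are just examples):

# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5      # failed attempts before a ban
findtime = 10m    # counted within this window
bantime  = 1h     # how long the offending IP stays blocked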
Edit:
- “It’s just good security practice.” => nearly every time I’ve heard that, people followed up with silly recommendations or were selling snake-oil.
- “You [just] need it if you are running a server.” => I’d say it’s more like the opposite. A server is much more of a controlled environment than, let’s say, a home network with random devices and people installing random stuff.
- “You need it if you don’t trust the other devices on the network.” => True, I could for example switch on and off your smarthome lights or disable the alarm and burgle your home. Or print 500 pages.
- “You need it if you are not behind a NAT.” => Common fallacy, If A then B doesn’t mean If B then A. Truth is, if you have a NAT, it does some of the jobs a firewall does. (Dropping incoming traffic.)
- “You need it if you don’t trust the software running on your computer.” => True
True, I could for example switch on and off your smarthome lights or disable the alarm and burgle your home. Or print 500 pages.
How would the firewall on one device prevent other devices from abusing the rest of the network? Perhaps you misunderstood the original intent of my post. I certainly wouldn’t blame you if that is the case, though – when I made my post I was far too vague in my intent – perhaps I simply didn’t think through my question enough, but the more likely answer is that I simply wasn’t knowledgeable enough on the topic to accurately pose the question that I wanted to ask.
Common fallacy, If A then B doesn’t mean If B then A. Truth is, if you have a NAT, it does some of the jobs a firewall does. (Dropping incoming traffic.)
Fair point!
“You need it if you don’t trust the software running on your computer.” => True
For this, though, the only solution to it would be an application layer firewall like OpenSnitch, correct?
How would the firewall on one device prevent other devices from abusing the rest of the network?
Sure. I’m not exactly sure any more what I was trying to convey. I think I was going for the firewall as a means of perimeter security. Usually devices are just configured to allow access to devices from the same Local Area Network. This is the case for lots of consumer electronics (and some enterprises also rely on securing the perimeter; once you get into their internal network, you can exploit that.) My printer lets everyone print and scan, no password setup required while installing the drivers. The wifi smart plugs I use to turn on and off the mood light in the living room also by default accept everyone in the WiFi. And lots of security cameras also have no password on them, or people don’t change the default since they’re the only ones able to connect to the home WiFi. This works, since usually there is a WiFi router that connects to the internet and also does NAT, which I’d argue is the same concept as a firewall that discards incoming connections. And while wifi protocols have/had vulnerabilities, it’s fairly uncommon that people go wardriving or close to your house to crack the wifi password. However, since you mentioned mixing devices you trust and devices you don’t trust… That can have bad consequences in a network setup like this. You either do it properly, or you need some other means to secure your stuff. That may be isolating the cheap chinese consumer electronics with god knows which bugs and spying tech from the rest of the network. And/or shielding the devices you can’t set up a password on.
the only solution to it would be an application layer firewall like OpenSnitch, correct?
I don’t think you can make an absolute statement in this case. It depends on the scenario, as it always does with security. If you have broken web software with known and unpatched vulnerabilities, a Web Application Firewall might filter out malicious requests. An Application Firewall if other software is susceptible to attacks or might become the attacker itself (I’m not entirely sure what they do.) But you might also be able to use a conventional firewall (or a VPN) to restrict access to that software to trusted users only. For example drop all packets if it’s not you interacting with that piece of software. And you can also combine several measures.
I think I was going for the firewall as a means of perimeter security.
Are you referring to the firewall on the router?
it’s fairly uncommon that people go wardriving
Interesting. I hadn’t heard of this.
That may be isolating the cheap chinese consumer electronics with god knows which bugs and spying tech from the rest of the network.
As in blocking or restricting their communication with the rest of the lan in the router’s firewall, for example? Or, perhaps, putting them behind their own dedicated firewall (this is probably superfluous given the firewall in the router, though).
But you might also be able to use a conventional firewall (or a VPN) to restrict access to that software to trusted users only
For clarity’s sake, would you be able to provide an example of how this could be implemented? It’s not immediately clear to me exactly what you are referring to when combining “user” with network related topics.
Are you referring to the firewall on the router?
Yes. At home this will run on your (wifi) router. But the standard rules on that are pretty simple: Discard everything incoming, allow everything outgoing. Companies might have a dedicated machine, something like a pfSense in a server rack at each of their subsidiaries and draw a perimeter line around what they deem fit, the office building, a department, or separate the whole company’s internal network from the internet. (Or a combination of those.) You just have one point at home where two network segments interconnect: your router.
I think it is important to distinguish between this kind of firewall and something that runs on a desktop computer. I’d call that a personal firewall or desktop firewall. It does different things: like detect what kind of network you’re connected to. Enable access when you’re at your workplace but inhibit the Windows network share when you’re at the airport wifi. It adds a bit of protection to the software running on the computer, and can also filter packets from the LAN. And it’s often configured to be easygoing in order not to get in the way of the user. But it is not an independent entity, since it runs on the same machine that it is protecting. If that computer gets compromised for example, so is the personal firewall. A dedicated firewall however runs on a dedicated and secure machine, one where there is no user software installed that could interfere with it. And at a different location, it filters traffic between network segments, so it might be physically at some network interconnect. There are lots of different ways to do it, and people apply things in different ways. Such a firewall might not be able to entirely protect you or stop malicious activity spread within the attached network at all. And of course you need the correct policy and type in the rules that allow people at the company to be able to work, but inhibit everything else. Perfection is more a theoretical concept here and nothing that can be achieved in reality.
[isolating the cheap chinese consumer electronics] As in blocking or restricting their communication with the rest of the lan in the router’s firewall, for example?
Yes, you’d need to separate them from the rest of the network so your router sits in between them. Lots of wifi routers can open an additional guest network, or run several independent WiFis. For cables there is VLAN. For example: You configure 4 independent networks, get your computers on one network, your IoT devices on another network, your TV and NAS storage on a third and your guests and visitors on yet another. You tell your router the IoT devices can’t be messed with by guests and they can only connect to their respective update servers on the internet and your smarthome. Your guests can only connect to the internet but not to your other devices or each other. The TV is blocked from sending your behavior tracking data to arbitrary companies; it can only access your NAS and update servers. The devices you trust go on the network that is easygoing with the restrictions. You can make it arbitrarily complex or easy. This would be configured with the firewall of the router.
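On an nftables-based router the forwarding part of that could look roughly like this (the interface names and the port-based shortcut for “update servers” are assumptions):

table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        iifname "lan0"   oifname "wan0" accept                        # trusted network may reach the internet
        iifname "iot0"   oifname "wan0" tcp dport { 80, 443 } accept  # IoT only gets http/https out, nothing towards the LAN
        iifname "guest0" oifname "wan0" accept                        # guests only get the internet
    }
}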
But an approach like this isn’t perfect by any means. The IoT devices can still mess with each other. Everything is a hassle to set up. And the WiFi is a single point of failure. If there are any security vulnerabilities in the WiFi stack of the router, attackers are probably just as likely to get into the guest wifi as they’d get into your secured wifi. And then the whole setup and separating things was an exercise in futility.
would you be able to provide an example of how this [use a conventional firewall (or a VPN) to restrict access to that software to trusted users only] could be implemented? It’s not immediately clear to me exactly what you are referring to when combining “user” with network related topics.
I mean something like: You have a network drive that you use to upload your vacation pictures to in case your camera/phone gets stolen. You can now immediately block everyone from all countries except France, since you’re traveling there. This would be kind of a crude example, but akin to what we sometimes do with our credit cards. You can also set up a VPN that connects specifically you to your home-network or services. Your Nextcloud server can’t be reached or hacked from the internet, unless you also have the VPN credentials to connect to it in the first place. You obviously need some means of mapping the concept ‘user’ to something that is distinguishable from a network perspective. If you know in advance what IP addresses you’re going to use to connect, this is easy. If you don’t, you have to use something like a VPN to accomplish that, make just your phone be able to dial in to your home network. (Or compromise, like in the France example.)
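A crude sketch of the “only from the range I expect” idea, assuming an nftables input chain is already in place and using a placeholder address range (a whole country would need bigger address sets, but the principle is the same):

nft add rule inet filter input ip saddr 203.0.113.0/24 tcp dport 443 accept
nft add rule inet filter input tcp dport 443 drop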
Enable access when you’re at your workplace but inhibit the Windows network share when you’re at the airport wifi.
How would something like this be normally accomplished? I know that Firewalld has the ability to select a zone based on the connection, but, if I understand correctly, I think this is decided by the Firewalld daemon, rather than the packet filtering firewall itself (e.g. nftables). I don’t think an application layer firewall would be able to differentiate networks, so I don’t think something like OpenSnitch would be able to control this, for example.
But an approach like this isn’t perfect by any means. The IoT devices can still mess with each other. Everything is a hassle to set up. And the WiFi is a single point of failure.
What would be a better alternative that you would suggest?
You can also set up a VPN that connects specifically you to your home-network or services. Your Nextcloud server can’t be reached or hacked from the internet, unless you also have the VPN credentials to connect to it in the first place.
The unfortunate thing about this – and I have encountered this personally – is that some networks may block VPN related traffic. You can take measures to attempt to obfuscate the VPN traffic from the network, but it is still a potential headache that could lock you out of using your service.
I think this is decided by the Firewalld daemon, rather than the packet filtering firewall itself
Mmh, I probably was way too vague with that. This is done by something like FirewallD or whatever Windows or MacOS uses for this. AFAIK it then uses packet filtering to accomplish the task. It seems FirewallD includes the packet filtering too and doesn’t just tie into nftables and hand the filtering task to that. I don’t think OpenSnitch does things like that. I’m really not an expert on firewalls. I could be wrong. If you read the Wikipedia article (which isn’t that good) you’ll see there are at least 3 main types of firewall, probably more sub-types and a plethora of different implementations. Some software does more than one of the things. And everything kinda overlaps. Depending on the use-case you might need more than just one concept like packet-filtering. Or connect different software, for example detect which network was connected to and re-configure the packet filter. Or like fail2ban: read the logfiles with one piece of software and hand the results to the packet filter firewall and ban the hackers.
I don’t really know how the network connection detection is accomplished and how it manages the firewall. Either something pops up and I click on it, or it doesn’t. My laptop has just 3 ports open: ssh, ipp (printing) and mdns. I haven’t felt the need to address that and care about a firewall on that machine. But I’ve made mistakes. I had MDNS or Bonjour or whatever it is that automatically shows who is on the network and which services they offer activated, and it showed some of the Apple devices at work, and I didn’t intend to show up in anyone’s chat with my laptop or anything. And at one point I forgot to deactivate a webserver on my laptop. I had used that to design a website and then forgotten about it. Everyone in the local networks I’ve connected to in that time could have accessed that, and depending on where I was that could have made me mildly embarrassed. But no-one did and I eventually deleted the webserver.

I think I’ve been living alright without caring about a firewall on my private laptop. I could have prevented that hypothetical scenario by using a firewall that detects where I’m at, but far more embarrassing stuff happens to other people. Like people changing their name and then Airdropping silly stuff to people who are just holding a lecture, or Skype popping up while their screen is mirrored to the projector in front of a large audience. But that has nothing to do with firewalls. Also, in the old days every Windows and network share was displayed on the whole network anyways. Nothing ever happened to me. And while I think that is not a good argument at all, I feel protected enough by using the free software I do and roughly knowing how to use a computer. I don’t see a need to install a firewall just to feel better. Maybe that changes once my laptop is cluttered and I lose track of what software opens new ports.
On my server I use nftables. Drop everything and specifically allow the ports that I want to be open. In case I forget about an experiment or configure something entirely wrong (which also has happened) it adds a layer of protection there. I handle things differently because the server is directly connected to the internet and targeted, and my laptop is behind some router or firewall all the time. Additionally, I configured fail2ban and configured every service so it isn’t susceptible to brute-forcing the passwords. I’m currently learning about Web Application Firewalls. Maybe I’ll put ModSecurity in front of my Nextcloud. But it should be alright on its own, I keep it updated and followed best practices when setting it up.
[IoT devices] What would be a better alternative that you would suggest?
I really don’t have a good answer to that. Separating your various assortment of IoT devices from the rest of the network is probably a good idea. I personally would stop at that. I wouldn’t install cameras inside of my house, and I wouldn’t buy an Alexa. I have a few smart lightbulbs and 2 thermostats; they communicate via Zigbee (and not Wifi), so that’s my separate network. And I indeed have a few Wifi IoT devices, a few plugs and an LED-strip. I took care to buy ones where I could hack the firmware and flash Tasmota or Esphome on them. So they run free software now and don’t connect to some manufacturer’s cloud. And I can keep them updated and hopefully without security vulnerabilities indefinitely, despite them originally being really cheap no-name stuff from China.
You can also set up a guest Wifi (for your guests) if you want to. I recently did, but didn’t bother to do it for many years. I feel I can trust my guests, we’re old enough now and outgrew the time when it was funny to mess with other people’s stuff, set an alarm to 3am or change the language to arabic. And all they can do is use my printer anyways. So I usually just give my wifi password to anyone who asks.
However, what I do might not be good advice for other people. I know people who don’t like to give their wifi credentials to anyone, since it could be used to do illegal stuff over the internet connection. That would backfire on who owns the internet connection and they’d face the legal troubles. That will also happen if it’s a guest wifi. I’m personally not a friend of that kind of legislation. If somebody uses my tools to commit a crime, I don’t think I should be held responsible for that. So I don’t participate in that fearmongering and just share my tools and internet connection anyways.
(And you don’t absolutely need to put in all of that effort at home. Companies need to do it, since sending all the employees home and then paying 6 figures to another company to analyze the attack and restore the data is very expensive. At home you’re somewhat unlikely to get targeted directly. You’ll just be probed by all the stuff that scans for vulnerable and old IoT devices, open RDP connections, SSH, insecure webservers and badly configured telephony boxes. Your home wifi router will do the bare minimum and the NAT on it will filter that out for you. Do Backups, though.)
some networks may block VPN related traffic
That’s a bummer. There is not much you can do except obfuscate your traffic. Use something that runs on port 443 and looks like https (I think that’d be a TCP connection) or some other means of obfuscating the traffic. I think there are several approaches available.
for example detect which network was connected to and re-configure the packet filter.
Firewalld is capable of this – it can switch zones depending on the current connection.
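For reference, with NetworkManager the zone can be pinned per connection, something like this (the connection names are placeholders):

nmcli connection modify "HomeWifi" connection.zone home
nmcli connection modify "AirportWifi" connection.zone public
firewall-cmd --get-active-zones                            # check which zone is live
firewall-cmd --permanent --zone=home --add-service=samba   # e.g. shares only at home
firewall-cmd --reload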
And while I think that is not a good argument at all, I feel protected enough by using the free software I do and roughly knowing how to use a computer. I don’t see a need to install a firewall just to feel better. Maybe that changes once my laptop is cluttered and I lose track of what software opens new ports.
There does still exist the risk of a vulnerability being pushed to whatever software you use – such a vulnerability would be essentially out of your control, and it could be used as a potential attack vector if all ports are available.
I’m currently learning about Web Application Firewalls. Maybe I’ll put ModSecurity in front of my Nextcloud.
Interesting! I haven’t heard of this. Side note, out of curiosity, how did you go about installing your Nextcloud instance? Manual install? AIO? Snap?
I’m personally not a friend of that kind of legislation. If somebody uses my tools to commit a crime, I don’t think I should be held responsible for that.
It would be a rather difficult thing to prove – one could certainly just make the argument that you did, in that someone else that was on the guest network did something illegal. I would argue that it is most likely difficult to prove otherwise.
You’re right. If you don’t open up ports on the machines, you don’t need a firewall to drop the packets to ports that are closed and will drop them anyways.
Sorry, hard disagree.
I assume you are assuming:
1.) You know about all open ports at all times, which is usually not the case
2.) There are no bugs/errors in the network stacks or services with open ports (e.g. you assume a port is only available to localhost)
3.) That there are no timing attacks, which can easily be mitigated by a firewall
4.) That software one uses does not trigger/start other services transitively which then open ports you are not even aware of w/o constant port scanning
I agree with your point, that a server is a more controlled environment. Even then, as you pointed out, you want to rate limit bad login attempts via firewall/fail2ban etc. for the simple reason, that even a fully updated ssh server might use a weak key (because of errors/bugs in software/hardware during key generation) and to prevent timing attacks etc.
In summary: IMHO it is bad advice to tell people they don’t need a firewall, because it is demonstrably wrong and just confuses people like OP.
Sure, maybe I’ve worded things too factually and not differentiated between theory and practice. But,
- “you know everything”: I’ve said that. Configurations might change or you don’t pay enough attention: A firewall adds an extra layer of security. In practice people make mistakes and things are complex. In theory where everything is perfect, blocking an already closed port doesn’t add anything.
- “There are no bugs in the network stack”: Same applies to the firewall. It also has a network stack and an operating system and it’s connected to your private network. Depends on how crappy network stacks you’re running and how the network stack of the firewall compares against that. Might even be the same as on my VPS where Linux runs a firewall and the services. So this isn’t an argument alone, it depends.
- Who mitigates timing attacks? I don’t think this is included in the default setup of any of the commonly used firewalls.
- “open ports you are not even aware of”: You open ports then. And your software isn’t doing what you think it does. We agree that this is a use-case for a firewall. That is what I was trying to convey with the previous argument no 5.
Regarding the summary: I don’t think I want to advise people not to use a firewall. I thought this was a theoretical discussion about single arguments. And it’s complicated and confusing anyways. Which firewall do you run? The default Windows firewall is a completely different thing and setup than nftables on a Linux server that closes everything and only opens ports you specifically allow. Next question: How do you configure it? And where do you even run it? On a separate host? Do you always rent 2 VPS? Do you only do perimeter security for your LAN network and run a single firewall? Do you additionally run firewalls on all the connected computers in the network? Does that replace the firewall in front of them? What other means of security protection did you implement? As we said, a firewall won’t necessarily protect against weak passwords and keys. And it might not be connected to the software that gets brute-forced and thus just forward the attack. In practice it’s really complicated and it always depends on the exact context. It is good practice to not allow everything by default, but take the approach to block everything and explicitly configure exceptions like a firewall does. It’s not the firewall but this concept behind it that helps.
I think I get you and the ‘theory vs. practice’ point you make is very valid. ;-) I mean, in theory my OS has software w/o bugs, is always up-to-date and 0-days do not exist. (Might even be true in practice for a default OpenBSD installation regarding remote vulnerabilities. :-P)
Who mitigates timing attacks? I don’t think this is included in the default setup of any commonly used firewall.
fail2ban absolutely mitigates a subset of timing attacks in its default setup. ;-)
LIMIT is a high-level concept that can easily be applied with ufw; I don’t know about the default setups of other commonly used firewalls.
If someone exposes something like SSH (or anything else) without fail2ban/LIMIT, IMHO that is grossly incompetent.
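To make that concrete, here is a minimal sketch in Python (standard library only; the window and threshold are made-up values for illustration, not fail2ban’s or ufw’s actual defaults) of the core mechanism behind fail2ban and ufw’s LIMIT: count recent failures per source address and ban the source once it crosses the threshold.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 600   # made-up look-back window for illustration
    MAX_FAILURES = 5       # made-up failure budget inside the window

    failures = defaultdict(deque)   # source IP -> timestamps of recent failures
    banned = set()

    def record_failure(src_ip):
        """Call this whenever a login attempt from src_ip fails."""
        now = time.monotonic()
        recent = failures[src_ip]
        recent.append(now)
        # Forget failures that have aged out of the window.
        while recent and now - recent[0] > WINDOW_SECONDS:
            recent.popleft()
        if len(recent) > MAX_FAILURES:
            banned.add(src_ip)
            # A real tool would insert a firewall rule here instead of
            # just remembering the address in a Python set.

    def is_allowed(src_ip):
        return src_ip not in banned

fail2ban does essentially this by watching log files and inserting firewall rules for the offending addresses; ufw’s LIMIT applies a similar per-source rate limit inside the kernel’s packet filter.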
You are totally right, of course firewalls have bugs/errors/misconfigurations… BUT… if you are using a Linux firewall, chances are good that the firewall has been reviewed/attacked/pen-tested more often and more thoroughly than almost all other services reachable from the internet. So, if I have to choose between a potential attacker first hitting well-tested and well-maintained firewall software or a MySQL server that got no love from Oracle and lives in my distribution as an outdated package, I’ll put my money on the firewall every single time. ;-)
Thank you for pointing out that my arguments don’t necessarily apply to reality. Sometimes I answer questions too directly. And the question wasn’t “should I use a firewall”, or I would have answered “probably yes.”
I think I have to make a few slight corrections: I think we use the term “timing attack” differently. To me, a timing attack is something that relies on the exact order or spacing with which packets arrive. I was thinking of something like what Tor does, where it shuffles packets around, waits a few milliseconds, merges them, or pads them so they all have the same size. Brute-forcing something isn’t exploiting the exact time at which a certain packet arrives; it’s just sending many of them while the other side lets the attacker try an unlimited number of passwords. I wouldn’t put that in the same category as timing attacks.
Firewall vs MySQL: I don’t think that is a valid comparison. The firewall doesn’t necessarily look into the packets and detect that someone is running a SQL injection; the two do very different jobs. If the firewall doesn’t do deep packet inspection or rate limiting or something similar, it just forwards the attack to the service, and it passes through anyway. And MySQL probably isn’t a good example, since it should rarely be exposed to the internet in the first place. I’ve configured MariaDB to listen only on the internal interface and not accept packets from other computers. Additionally, I didn’t open the port in the firewall, but MariaDB isn’t listening on the external interface anyway.
Maybe a better comparison would be a webserver with HTTPS. The firewall can’t look into the packets because the traffic is encrypted; it can’t tell an attack apart from a legitimate request and just forwards both to the webserver. Now it’s the same with or without a firewall. Or you terminate the encrypted traffic at the firewall and do packet inspection or complicated heuristics. But that shifts the complexity (including potential security vulnerabilities in complex code) from the webserver to the firewall, and it’s a niche setup that also isn’t well tested. And you need to predict the attacks: if your software has known vulnerabilities that won’t get fixed, this is a valid approach, but you can’t know future attacks.
Having a return channel from the webserver/software to the firewall so the application can report an attack and order the firewall to block the traffic is a good thing. That’s what fail2ban is for. I think it should be included by default wherever possible.
I think there is no way around using well-written software if you expose it to the internet (like a webserver or a service that is used by other people). If it doesn’t need to be exposed to the internet, don’t expose it; any means of ensuring that is alright. For crappy software that is exposed and needs to be exposed, a firewall doesn’t do much. The correct tools for that are virtualization, containers, VPNs, and replacing that software… Maybe also the firewall, if it can tell good and bad actors apart by some means, but most of the time that’s impossible for the firewall to do.
I agree. You absolutely need to do something about security if you run services on the internet. I do, and I have run a few services. Webservers (especially if you have a WordPress install or some other commonly attacked CMS), SSH, and Voice-over-IP servers get bombarded with automated attacks; the same goes for Remote Desktop, Windows network shares, and IoT devices. If I disable fail2ban, the attackers ramp up the traffic and I can watch attacks scroll through the logfiles all day.
I think a good approach is:
- Choose safe passwords and keys.
- Don’t allow people to brute-force your login credentials.
- If you don’t need a service, deactivate it entirely and remove the software.
- If you just need a service internally, don’t expose it to the internet. A firewall will help, and most software I use can also be configured to either accept external requests or not. Configure your software to listen only on localhost (127.0.0.1), or only on the LAN that contains the other things that tie into it (see the sketch after this list). Doing it at two distinct layers helps if you make a mistake, something happens by accident, complexity gets the better of you, or a security vulnerability arises. (Or you’re simply not in complete control of everything and every possibility.)
- If only some people need a service, either make it as secure as a public service or hide it behind a VPN.
- Perimeter security isn’t the answer to everything. The subject is complex and we have to look at the context. Generally, though, it adds something.
- If you run a public service, do it right. Follow state of the art security practices. It’s always complicated and depends on your setup and your attackers. There are entire books written about it, people dedicate their whole career to it. For every specific piece of software and combination, there are best practices and specific methods to follow and implement. Lots of things aren’t obvious.
- Do updates and backups.
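Regarding the “listen only on localhost” bullet above, here is a minimal sketch in Python (standard library; ports 8080 and 8081 are arbitrary example values) of the difference between binding a service to the loopback interface and binding it to all interfaces:

    import socket

    # Reachable only from the machine itself: other hosts on the LAN or the
    # internet cannot connect to this socket, firewall or no firewall.
    local_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    local_only.bind(("127.0.0.1", 8080))
    local_only.listen()

    # Reachable on every interface the machine has; whether anyone outside
    # can actually connect is then up to routing, NAT and the firewall.
    everywhere = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    everywhere.bind(("0.0.0.0", 8081))
    everywhere.listen()

Most server software exposes the same choice as a bind/listen address in its configuration, so you get one layer of protection from the application and an independent second layer from the firewall.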
#1 leaves a lot to be desired, as it advocates for doing something without thinking about why you’re doing it – it is essentially a non-answer.
Agreed. That’s mostly BS from people who make commissions from some vendor.
#2 is strange – why does it matter? If one is hosting a webserver on port 80, for example, they are going to poke a hole in their router’s NAT at port 80 to open that server’s port to the public. What difference does it make to then have another firewall that needs to be port forwarded?
A firewall might be more advanced than just NAT/poking a hole; it may do intrusion detection (whatever that means) and DDoS protection.
#3 is a strange one – what sort of malicious behaviour could even be done to a device with no firewall? If you have no applications listening on any port, then there’s nothing to access.
Maybe you have a bunch of IoT devices on your network that are sold by a Chinese company (or any IoT devices at all, lol), and you don’t want them to be able to access the internet, because they’ll establish connections to shady places and might be used to access your network and the other devices inside it.
#5 is the only one that makes some sense;
Essentially the same answer as in #3.
If we’re talking about your home setup and/or homelab, just don’t get a hardware firewall; those are overpriced and won’t add much value. You’re better off buying an OpenWRT-compatible router and ditching your ISP router. OpenWRT does NAT and has a firewall that is easy to manage, and you can set up whatever policies you might need to restrict specific devices. You’ll also be able to set up things such as DoH/DoT for your entire network, set up a quick WireGuard VPN to access your local services from the outside in a safe way, and maybe use it to host a couple of network shares. Much more value for most people, and way cheaper.
A firewall might be more advanced than just NAT/poking a hole; it may do intrusion detection (whatever that means) and DDoS protection.
I mean, sure, but the original question of why there’s a need for a second firewall still exists.
Maybe you have a bunch of IoT devices on your network that are sold by a Chinese company (or any IoT devices at all, lol), and you don’t want them to be able to access the internet, because they’ll establish connections to shady places and might be used to access your network and the other devices inside it.
This doesn’t really answer the question. The device without a firewall would still be on the same network as the “sketchy IoT devices”. The question wasn’t about whether or not you should have outgoing rules on the router preventing some devices from making contact with the outside world, but instead was about what risk there is to a device that doesn’t have a firewall if it doesn’t have any services listening.
Essentially the same answer as in #3.
Somewhat, only I would solve it using an application layer firewall rather than a packet filtering firewall (if it’s even possible to practically solve that with a packet filtering firewall without just dropping all outgoing packets, that is).
just don’t get a hardware firewall
What is the purpose of these devices? Is it because enterprise routers don’t contain a firewall within them, so you need a dedicated device that offers that functionality?
I don’t know what else there is to answer about the purpose of a hardware firewall.
Hardware firewalls have their use cases. They’re mostly overkill for homelabs and even most companies, but they have specific features you may want that are hard or impossible to get in other ways.
A hardware firewall may do the following things:
- Run DPI and effectively block machines on the network from accessing certain protocols, websites, or hosts, or detect when a user is about to download malware and block it;
- Run stats and alert sysadmins to suspicious behavior, like a user sending a large amount of confidential data to the outside;
- Have “smart” AI features that will detect threats even when they aren’t known yet;
- Provide VPN endpoints and site-to-site connections. This is very common in brands like WatchGuard;
- Deliver higher throughput than your router while doing all of the operations above;
- Provide better isolation.
The point of an isolated device is that you can then play around with your routers without having to think about security as much: you may break them or mess up some config, but you can be sure the firewall is still in place and doing its job. The firewall becomes a virtual, physical, and psychological barrier between your network and the outside; there’s less risk of plugging a wire into the wrong spot or applying a configuration and suddenly having your entire network exposed.
Sure, you may be able to set something up on OpenWRT to cover most of the things I listed before, but how much time will you spend on that? Will it be as reliable? What about support? A Pi-hole is another common solution for some of those problems, and it may work until a specific machine decides to ignore its DNS server and go straight to the router / outside.
You can even argue that you can virtualize something like pfSense or OPNsense on some host that also virtualizes your router and a bunch of other stuff. However, is it wise? Most likely not. Virtualization is mostly secure, but from time to time we’ve seen cases where a compromised VM can be used to gain access to the host or to other VMs; in this setup the firewall could be hacked to access the entirety of your network.
When you have to manage larger networks, let’s say 50+ devices, I believe it becomes easier to see how a hardware firewall can be useful. You can’t simply trust all those machines, their users, and the software policies on them to ensure that things are secure.
Have “smart” AI features that will detect threats even when they aren’t known yet;
This is a crazy one – pattern recognition of traffic.
Deliver higher throughput than your router while doing all of the operations above;
Fair point! I hadn’t considered that one.
You can even argue that you can virtualize something like pfSense or OPNsense on some host
This is an intriguing idea. I hadn’t heard of it before.
also virtualizes your router
How would one virtualize a router…? That sounds strange, to say the least.
[virtualized router/firewall] This is an intriguing idea. I hadn’t heard of it before.
- https://forum.opnsense.org/index.php?topic=31809.0
- https://docs.netgate.com/pfsense/en/latest/virtualization/index.html
- https://www.sdxcentral.com/networking/nfv/definitions/whats-network-functions-virtualization-nfv/nfv-elements-overview/whats-a-virtual-router-vrouter/
- https://hometechhacker.com/how-why-i-built-virtual-router/
- https://netshopisp.medium.com/pros-and-cons-of-installing-pfsense-as-virtual-vs-dedicated-server-9e12d39c4cfd
- https://openwrt.org/docs/guide-user/virtualization/qemu
Virtualized routers and firewalls are more common than you might think, especially in large datacenters and other deployments that require a lot of flexibility / SDN.
Other people just like the convenience of having a single machine / mini PC or whatever that runs everything from their router/firewall to their NAS and the VMs they use to self-host stuff.
But… at the end of the day, virtualization is only mostly secure, and we’ve seen cases where a compromised VM can be used to gain access to the host or other VMs; in this case the firewall could be hacked to access the entirety of your network.
You always need it, and you actually already use it. The smarter question is when you need to customize its settings. Defaults are robust enough, so unless you know what you need to change and why, you don’t.
Defaults are robust enough
Would you mind defining what “defaults” are?
Defaults are the default settings of your firewall (netfilter on Linux).
Is netfilter not just the API through which you can make firewall rules (e.g. nftables) for the networking stack?
When you are attacked. OK, so when are you attacked? As soon as you connect to the outside. So unless you are air-gapped, you need a firewall.
Would you mind defining what you mean by “attacked”?
Scanned for vulnerabilities and exploited if any are found.
I’m not sure I perfectly understand the scenario you are describing, but it appears you are describing a situation in which one has a device behind a NAT that is running a server which is port-forwarded. In this scenario, the attack vector would depend on the security of the server itself, and is essentially independent of the existence of a firewall. One could potentially drop packets based on source IP, or on some other metadata or behaviour that identifies the connections as malicious, but, generally, unless the firewall drops all incoming connections (essentially creating an offline device), a packet-filtering firewall will make no difference in thwarting such exploits.
TempleOS doesn’t need one
Out of curiosity, why do you claim that? I know very little about TempleOS’s functionality – I’m essentially only aware of its existence and some of its history.
It doesn’t have networking
Haha, yeah, a device without networking capabilities would be rather well protected from attacks originating from a network 😜
This question reads a bit to me like someone asking, “Why do trapeze artists perform above nets? If they were good at what they did they shouldn’t fall off and need to be caught.”
Do you really need a firewall? Well, are you intimately familiar with every smidgeon of software on your machine, not just userland ones but also system ones, and you understand perfectly under which and only which circumstances any of them open any ports, and have declared that only the specific ports you want open actually are at every moment in time? Yes? You’re that much of a sysadmin god? Then no, I guess you don’t need a firewall.
If instead you happen to be mortal like the rest of us who don’t read and internalize the behaviors of every piddly program that runs or will ever possibly run on our systems, you can always do what we do for every other problem that is too intensive to do manually: script that shit. Tell the computer explicitly which ports it can and cannot open.
Luckily, you don’t even have to start from scratch with a solution like that. There are prefab programs that are ready to do this for you. They’re called firewalls.
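As a rough illustration of the “script that shit” idea, here is a minimal sketch in Python (standard library; the 1-1024 range and the 0.2-second timeout are arbitrary choices) that asks which TCP ports on the local machine currently accept connections via loopback. It only sees services reachable on 127.0.0.1, so a daemon bound exclusively to a LAN address won’t show up, but it demonstrates how little effort keeping that inventory takes:

    import socket

    def open_tcp_ports(host="127.0.0.1", ports=range(1, 1025)):
        """Return the ports in `ports` that accept a TCP connection on `host`."""
        found = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.2)
                if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                    found.append(port)
        return found

    if __name__ == "__main__":
        print("TCP ports accepting connections on loopback:", open_tcp_ports())

A firewall takes the same idea one step further: instead of auditing the list after the fact, it declares up front which entries are allowed to exist.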
Tell the computer explicitly which ports it can and cannot open.
Isn’t this all rather moot if there is even one open port, though? Say, for example, that you want to mitigate outgoing connections from potential malware that gets installed onto your device. You set a policy to drop all outgoing packets in your firewall; however, you want to still use your device for browsing the web, so you then allow outgoing connections to DNS (UDP, and TCP port 53), HTTP (TCP port 80), and HTTPS (TCP port 443). What if the malware on your device simply pipes its connections through one of those open ports? Is there anything stopping it from siphoning data from your PC to a remote server over HTTP?
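(To make the concern concrete: to a port-based egress filter, the sketch below, a minimal Python standard-library example in which example.com stands in for some attacker-controlled server, is indistinguishable from a browser fetching a web page. It is just TLS traffic to port 443.)

    import socket
    import ssl

    # Any process allowed to reach port 443 can put arbitrary data inside the
    # request; a layer-3/4 filter only sees "TCP to port 443 carrying TLS".
    payload = b"anything the process wants to send out"
    context = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as raw:
        with context.wrap_socket(raw, server_hostname="example.com") as tls:
            tls.sendall(
                b"POST /upload HTTP/1.1\r\n"
                b"Host: example.com\r\n"
                b"Content-Length: " + str(len(payload)).encode() + b"\r\n"
                b"Connection: close\r\n"
                b"\r\n" + payload
            )
            print(tls.recv(200))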
The point of the firewall is not to make your computer an impenetrable fortress. It’s to block any implicit port openings you didn’t explicitly ask for.
Say you install a piece of software that, without your knowledge, decides to spin up an SSH server and start listening on port 22. Now you have that port open as a vector for malware to get in, and you are implicitly relying on that software to fend it off. If you instead have a firewall, and port 22 is not one of your allowed ports, the rogue software will hopefully take the hint and not spin up that server.
Generally you only want to open ports for specific processes that you want to transmit or listen on them. Once a port is bound to a process, it’s taken. Malware can’t just latch on without hijacking the program that already has it bound. And if that’s your fear, then you probably have a lot of way scarier theoretical attack vectors to sweat over in addition to this.
Yes, if you just leave a port wide open with nothing bound to it (instead of actually having the port reserved by a process, or tying the firewall rule to the process that’s allowed to use it), and you happened to get a piece of actual malware that scanned every port looking for an opening to sneak through, then sure, it could. To my understanding, that’s not typically what you’re trying to stop with a firewall.
In some regards a firewall is like a padlock. It keeps out honest criminals. A determined criminal who really wants in will probably circumvent it. But many opportunistic criminals just looking for stuff not nailed down will probably leave it alone. Is the fact that people who know how to pick locks exist an excuse to stop locking things because “it’s all pointless anyway”?
Once a port is bound to a process, it’s taken. Malware can’t just latch on without hijacking the program that already has it bound.
Is this because the kernel assigns that port to that specific process, so that all traffic at that port is associated with only that process? For example, if you have an SSH server listening on 22, and another malicious program decides to start listening on 22, all traffic sent to 22 will only be sent to the SSH server, and not the malicious program?
EDIT (2024-01-31T01:20Z): While writing this, I came across this Stack Overflow answer, which states that when a socket is created, it calls a bind() function that attaches it to a port. This makes me wonder how difficult it would be for malware to steal the bound port.
Is this because the kernel assigns that port to that specific process, so that all traffic at that port is associated with only that process?
Yes, that’s what ports do. They split your IP connection into 65,536 separate communication lines; that’s the main thing. But those are specifically 65,536 one-on-one lines, not party lines. When a process on your PC reserves port 80, that’s it; it’s taken. Short of hacking the kernel itself, it cannot be reassigned or stolen until the bound process frees it.
The SO answer you found is interesting; I was not aware that the Linux kernel had a feature that allowed two or more processes to willingly share a single port. But the answer explains that this is an opt-in parameter that the first binding process has to explicitly allow. And even then, traffic is not duplicated to all listening processes. It sounds like it’s more of a “first come, first served” arrangement, where whichever of the processes is free to read the incoming message at the time it arrives gets it, making it more of a load-balancing feature than a useful vector for eavesdropping.
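For the curious, both behaviours are easy to observe with a minimal sketch in Python (standard library, on Linux; loopback and port 8080 are arbitrary choices): a second ordinary bind() to an occupied port is refused, and SO_REUSEPORT only shares the port if every participant, including the first binder, opted in before binding.

    import socket

    # First socket takes the port the normal way.
    first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    first.bind(("127.0.0.1", 8080))
    first.listen()

    # A second ordinary bind to the same address is refused immediately.
    second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        second.bind(("127.0.0.1", 8080))
    except OSError as err:
        print("second bind refused:", err)   # EADDRINUSE

    # SO_REUSEPORT sharing only works when every socket sets the option
    # before binding, so a latecomer cannot force its way onto a port that
    # was bound the normal way.
    latecomer = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    latecomer.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    try:
        latecomer.bind(("127.0.0.1", 8080))
    except OSError as err:
        print("opting in after the fact doesn't help either:", err)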
Always, as others have said.
Do you have any supporting arguments/rationale for that claim?
Seriously, unless you are extremely specialized and know exactly what you are doing, IMHO the answer is: Always (and even being extremely specialized, I would still enable a firewall. :-P)
Operating systems nowadays are extremely complex, with a lot of moving parts. There are security-relevant bugs in your network stack and in all the applications you are running. There might be open ports on your computer you did not even think about, and unless you are monitoring your local open ports 24/7, you don’t know what is open.
First of all, you can never trust other devices on a network; there is no way to know whether they are compromised. You can also never trust the software running on your own computer: just look at CVEs. Even without malicious intent, your software is not secure and never will be.
As soon as you are part of a network, your computer is exposed, no matter whether it is a desktop or a laptop, and there are a lot of drive-by attacks against Linux happening 24/7.
Your need for firewalls mostly depends on your threat model, but simply refusing incoming connections by default is trivial and increases your security by a great margin. Further, setting a rate limit on failed connection attempts for open ports like SSH, if you use such services, is another big improvement to security (… and of course disabling password authentication, yada yada).
That said, security obviously has to be seen in context. The only snake oil that I know of is virus scanners, but that’s another story.
People who claim you don’t need a firewall make at least one of the following wrong assumptions:
- Your software is secure: demonstrably wrong, as proven by CVEs.
- You know exactly what is running/reachable on your computer: this might be correct for very small, specialized embedded systems, but even for them one must always assume security-relevant bugs in software/hardware/drivers.
Security is a game, and no usable system can be absolutely secure. With firewalls, you can (hopefully) increase the price for successful attacks, and that is important.
Seriously, unless you are extremely specialized and know exactly what you are doing, IMHO the answer is: Always
In what capacity, though? I see potential issues with both server firewalls and client firewalls. Unless one wants their devices to be offline, there will always be at least one open port (for example, inbound on a server, and outbound on a client) which can be used as an attack vector.
Perhaps I don’t understand your point. If you mean that firewalls also have issues and that usable systems always have attack vectors, I fully agree with your remark. My point is simply that, as a rule of thumb, a firewall usually mitigates a lot of attack vectors (see my remark about LIMIT for SSH ports elsewhere). Especially for client systems, a firewall that blocks all incoming traffic by default is IMHO a high payoff for almost no effort.
My point is simply that, as a rule of thumb, a firewall usually mitigates a lot of attack vectors
The only quibble I have with your statement is that I would word it as it “mitigates a lot of potential attack vectors”, but, other than that, I completely agree with what you said.
You may also want to check up on regulations and laws of your country.
In Belgium, for instance, I am responsible for any and all attacks originating from my PC. If you were hacked and said hackers used your computer to stage an attack, the burden of proof is upon you. So instead of hiring very expensive people to trace the real source of an attack originating from your own PC, enabling a firewall just makes sense, besides making it harder on hackers…
That’s a strange law. That’s like saying one should be held responsible for a thief stealing their car and then running over someone with it (well, perhaps an argument could be made for that, but I would disagree with it).
Even if you do trust the software running on your computer, did you actually fuzz it for vulnerabilities? Heartbleed could steal your passwords even if you ran ostensibly trustworthy software.
So unless you harden the software and prove it’s completely exploit-free, then you can’t trust it.
Heartbleed could steal your passwords even if you ran ostensibly trustworthy software.
Heartbleed is independent of a firewall, though – it was a vulnerability in a specific library (OpenSSL’s implementation of the TLS heartbeat extension) that has since been patched – so this feels somewhat like a strawman argument.
So unless you harden the software and prove it’s completely exploit-free, then you can’t trust it.
The type of “firewall” that I am referring to operates at layer 3/4. From what I understand, you seem to be describing exploits closer to the application layer.
I’m not saying there would be a Heartbleed 2.0 that you need a firewall against.
I’m saying unless you read the code you’re running, including the firmware and the kernel, how can you trust there isn’t a remote execution exploit?
At work I showed a trivial remote execution using an upload form. If we didn’t run PHP, it wouldn’t happen. If the folder had a proper .htaccess, it wouldn’t happen. If we didn’t trust the uploader’s MIME type, it wouldn’t happen.
There’s something to be said for defense in depth. Even if you have some kind of bug or exploit, the firewall just blocking everything might save you.
I’m saying unless you read the code you’re running, including the firmware and the kernel, how can you trust there isn’t a remote execution exploit?
A packet-filtering firewall isn’t able to protect against server or protocol exploits directly. Sure, if you know that connections originating from a specific IP are malicious, then you can drop connections from that IP, but it will not be able to directly protect against application-layer exploits.
Application-layer firewalls do exist (an example of which, opensnitch, was pointed out to me here), but those are outside the scope of this post.