There are definitely people running ZFS on systems with less than 1 GB of RAM. I'm not sure if anyone is running ZFS on a system with less than 512 MB of RAM.
(FWIW, at one point the amount of address space was far more important than the amount of RAM -- amd64 systems with 1 GB of RAM would run better than i386 systems with 2 GB. I'm not sure if this is still the case.)
Back in the days of OpenSolaris, I was absolutely running it on 512MB, including a Gnome 2 desktop. I won't claim it won any records for speed, but it absolutely worked, and the data survived a rather nasty intermittent disk controller failure.
No first-hand experience with ZFS on BSD, but I believe in the early days of the ZFS port there were issues where if the system came under sudden memory pressure, ZFS might not hand its RAM back to the kernel fast enough, leading to (I guess) a panic. So this is a fit-and-finish issue with a specific port, not an inherent ZFS issue, but it seems to have fed into the whole notion of "ZFS needs bucketloads of RAM".
Compression in ZFS doesn't use RAM; dedup tables and the ARC and L2ARC mapping tables are all that matter with regards to memory usage with ZFS. L2ARC is where a lot of newbies end up shooting themselves in the foot: they toss in a mirrored pair of 512GB SSDs for their L2ARC on a system with 32GB of RAM and wonder why performance got worse or the box crashed, or they enable dedup on 24TB of data with 64GB of RAM and wonder why it crashed and their zpool is beyond recovery.
If you are using ZFS without an L2ARC or dedup enabled, even a measly 1GB of RAM is "sufficient"; you just won't get the best performance, since ZFS will constantly have to fetch from disk depending on your active data set (which is no different a problem than OS-level file caching in any other operating system).
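If you're wondering whether your data and RAM could actually handle dedup, you can estimate the table size before flipping the switch. A minimal sketch - the pool name "tank" is a placeholder, and the ~320 bytes per unique block is the commonly cited rule of thumb, not an exact figure:

    # Simulate dedup on an existing pool WITHOUT enabling it;
    # this walks the whole pool and prints a DDT size histogram,
    # so it can take a while on big pools
    zdb -S tank

    # Rough math: (total unique blocks) x ~320 bytes = RAM the dedup
    # table wants in order to stay resident in ARC; if that doesn't
    # fit comfortably, expect the kind of meltdown described above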
Ah crap. The advice I got was just about the opposite - I wanted an L2ARC device in my system for performance.
Atom C2758, 32GB ECC RAM, 4x4TB in RAIDZ2 (losing that much storage was painful, and the $1200AUD upgrade path to 4x8TB even more so) with a 128GB SSD as L2ARC.
I found out some time later that I probably wanted an SLOG device instead, but I'm really too afraid to touch my config. FWIW it's pretty stable on FreeBSD 10.3 with around 5GB RAM in use.
Your L2ARC shouldn't be bigger than about 5x your system memory. ZFS has to keep mapping tables in memory to determine where data on the L2ARC is stored, so the bigger the L2ARC gets, the more memory you take away from your in-memory ARC. That can lead to worse performance than before, since more data has to come from your much higher-latency, lower-bandwidth L2ARC, or straight from spinning disks.
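If you want to see what those L2ARC headers are actually costing you, FreeBSD exposes the ARC counters as sysctls. A quick sketch - these kstat names come from FreeBSD's arcstats and may differ slightly between releases:

    # RAM consumed by L2ARC header/mapping entries
    sysctl kstat.zfs.misc.arcstats.l2_hdr_size

    # Current ARC size and its cap, for comparison
    sysctl kstat.zfs.misc.arcstats.size
    sysctl kstat.zfs.misc.arcstats.c_max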
RAIDZ is a performance killer right off the bat; I would switch to mirrored vdevs instead if throughput is an issue for you. Parity calculation kills write speeds, especially on slower CPUs like a C2758, and RAIDZ doesn't give you any extra throughput on reads since there's only one usable copy of each stripe to read from. I have 2x4TB disks in my pool as a mirrored vdev (with an additional 2x3TB coming after I get my 128GB flash drive for XenServer to free up my second 3TB disk again) and I get reads over 500MB/sec using 10Gig-E to my XenServer host, and equally fast writes. Disks are fairly cheap, and RAIDZ is not a good solution if you need performance. If you are using four disks in RAIDZ2, you would lose the same amount of storage with two mirrored vdevs anyway (and you avoid the increased risk of a rebuild failing due to multiple disk failures, which is increasingly common on higher-capacity drives).
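For reference, a four-disk pool laid out as two mirrored vdevs looks something like this - "tank" and the da0-da3 device names are placeholders for whatever your system has:

    # Two mirrored vdevs; ZFS stripes writes across both mirrors,
    # so you get two disks of capacity and read IOPS from all four
    zpool create tank mirror da0 da1 mirror da2 da3

    # Capacity can also grow online later by adding another mirror
    zpool add tank mirror da4 da5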
If writes are the bigger problem for you, then, yes, a SLOG device will help - but any random SSD is not going to do. If you are using a SLOG, ZFS expects that it will not corrupt in-flight data following a power loss, and even a "high-end" consumer SSD like a Samsung 850 Pro or a Crucial MX200 will lose data in the event of a power failure; you need an SSD with proper power-failure protection, like the Intel DC series. You also don't want a SLOG that isn't mirrored: if your SLOG is corrupted you just lost your entire pool. Also, large writes (>64KB) skip the SLOG entirely, so if you are bandwidth-bound (either at the network or to disk) it is not going to help; if you are IOPS-bound it can help tremendously.
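Adding a proper SLOG after the fact is a one-liner. A sketch, assuming a pool named tank and two power-loss-protected SSDs showing up as ada1 and ada2 (all placeholders):

    # Attach a mirrored log (SLOG) vdev to an existing pool
    zpool add tank log mirror ada1 ada2

    # Log vdevs can be removed again if you change your mind;
    # the mirror-N name to pass is the one shown by zpool status
    # zpool remove tank mirror-1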
Also, you mention 4x4TB and 4x8TB; I assume your issue is the available drive bays in your system? I personally ignore high-capacity drives as they are far too expensive, and the HP ML10 that I run FreeNAS on only has 1 internal drive bay (with a $50 add-on I can buy to install an extra 3). Instead of dealing with internal drives and limited capacity, I bought a cheap DAS array (a Lenovo SA120; here's a picture of mine and my cheap TP-Link layer 3 switch: http://i.imgur.com/eEvtP6Z.jpg) that cost me $200USD and connected it with an equally cheap LSI 9200-8e SAS HBA ($40). I now have 12 hot-swap drive bays and can replace my FreeNAS box without worrying about how many bays it has; if I need more, I just buy an additional enclosure and daisy-chain it off the first. This isn't a dirt-cheap solution by any means (my homelab gear is easily worth over $1500 at this point, including the Lenovo TD340 acting as my XenServer host), but it will save you money in the long run by letting you buy many cheaper drives instead of fewer, more expensive high-capacity ones.
DAS arrays are extremely easy to understand; they're just "expanders", and SAS has special support built in for this. In servers you typically have the front drive bays connected to a single SAS port on the board instead of individual cables running to each drive; a DAS is the same thing, just as an external enclosure with separate power. You connect it to a basic SAS HBA with an external SAS port (SFF-8088), like the LSI 9200-8e I mentioned, and it just shows up as a bunch of drives to your operating system. Depending on the model, the enclosure also exposes some control services through a protocol called SCSI Enclosure Services (SES); my Lenovo SA120 uses this to manage things like fan speeds (very important, because every time it loses power it resets to HIGH, which is VERY LOUD - I have a boot script on my FreeNAS box to set this back down to LOW, which is much more manageable).
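On FreeBSD/FreeNAS it's easy to see the enclosure sitting next to the disks it exposes. A quick look - device names are whatever your system assigns:

    # The DAS enclosure shows up as a ses(4) device alongside the da disks
    camcontrol devlist

    # On FreeBSD 11 and later, sesutil(8) can map enclosure slots to disks
    sesutil map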
This seems like a lot, but you can ignore most of the technical details I just posted: buy a SAS HBA with external ports, buy a SAS DAS array, plug it in, and you see a bunch of drives - no fuss. Since SAS controllers also support SATA drives, you can save yourself $10-20 a drive and buy normal SATA disks, or you can get some added reliability (multi-pathing and error handling) and buy near-line SAS drives for a small premium (I don't bother personally, but I only have one controller installed in my SA120, so I have no second path for data to travel in the event of a failure anyway).
Feel free to hit me up anytime, I'm /u/snuxoll on reddit (pretty active on /r/homelab) and you can email me at stefan [a] nuxoll.me.
> if your SLOG is corrupted you just lost your entire pool
As far as I'm aware, SLOG loss does not jeopardise the pool, only the transactions that are in the SLOG. The pool may violate synchronous write guarantees in that supposedly committed writes that were in the SLOG effectively get rolled back (or rather, never get applied to the underlying pool), but that's about it.
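The tooling reflects this, for what it's worth: a pool whose log device has died can still be imported, minus whatever sync writes were only in the SLOG ("tank" is a placeholder):

    # Import a pool with a missing or failed log device; only the
    # transactions still sitting in the SLOG are lost
    zpool import -m tank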
Compression does not use a significant amount of memory - it's deduplication that uses a lot of memory. In fact, compression can make things run faster, since there is less data to read from and write to disk.
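For what it's worth, turning it on and checking what you got is cheap (the dataset name is a placeholder):

    # lz4 is effectively free on modern CPUs and the usual recommendation
    zfs set compression=lz4 tank/data

    # Only data written after this point gets compressed; check the ratio
    zfs get compression,compressratio tank/data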