fallocate vs dd for swap file creation

I recently ran across this helpful DigitalOcean community answer about creating a swap file at droplet creation time. So I decided to test how long my old method (dd) takes to run vs fallocate. Here’s how long it takes to run fallocate on a fresh 40GB droplet: root@ubuntu:/# rm swapfile && …
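
For reference, the two approaches look roughly like this (a sketch with an illustrative 4G size and path, not the exact commands from the post):

    # dd writes out every block, so runtime grows with the file size
    dd if=/dev/zero of=/swapfile bs=1M count=4096
    # fallocate just reserves the blocks, so it returns almost instantly
    fallocate -l 4G /swapfile
    # either way, the rest of the setup is the same
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
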
Continue reading fallocate vs dd for swap file creation

putting owncloud 8 on a subdomain instead of a subdirectory on centos 7

After moving to a new server, I wanted to finally get ownCloud up and running (over SSL, of course) on it. And I like subdomains for different services, so I wanted to put it at sub.domain.tld. This turns out to be not as straightforward as one might otherwise hope, sadly – ownCloud expects to be …
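
The general shape of the fix is a dedicated Apache VirtualHost for the subdomain; a rough sketch, assuming Apache with mod_ssl on CentOS 7 (domain, paths, and certificate locations are illustrative):

    cat > /etc/httpd/conf.d/owncloud.conf <<'EOF'
    <VirtualHost *:443>
        ServerName sub.domain.tld
        DocumentRoot /var/www/html/owncloud
        SSLEngine on
        SSLCertificateFile    /etc/pki/tls/certs/sub.domain.tld.crt
        SSLCertificateKeyFile /etc/pki/tls/private/sub.domain.tld.key
    </VirtualHost>
    EOF
    systemctl restart httpd

ownCloud also needs the new hostname added to the trusted_domains array in its config.php.
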
Continue reading putting owncloud 8 on a subdomain instead of a subdirectory on centos 7

above the cloud storage

Who wants to go into business with me? I’ve got a super-cool storage company idea. Load up a metric buttload of cubesats with radiation-hardened SSD storage, solar power, and [relatively] simple communication stacks (secured by SSH or SSL, of course), and launch them into orbit. You think cloud storage is cool? What about above-the-cloud storage? Pros: …
Continue reading above the cloud storage

owncloud vs pydio – more diy cloud storage

Last week I wrote a how-to on using Pydio as a front-end to a MooseFS distributed data storage cluster. The big complaint I had while writing that was that I wanted to use ownCloud, but it doesn’t Just Work™ on CentOS 6*. After finishing the tutorial, I decided to do some more digging – because …
Continue reading owncloud vs pydio – more diy cloud storage

create your own clustered cloud storage system with moosefs and pydio

This started off as a how-to on installing ownCloud. But their own installation procedures don’t work for the 8.0x release and CentOS 6. Most of you know I’ve been interested in distributed / cloud storage for quite some time. And that I find MooseFS to be fascinating. As of 2.0, MooseFS comes in two flavors – the …
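
At a high level the moving parts are a metadata master, some chunkservers, and clients that mount the whole pool as one filesystem; roughly (hostnames and mount points are illustrative, and config paths may vary by packaging):

    # on the master (holds all filesystem metadata)
    mfsmaster start
    # on each chunkserver, after listing its data disks in /etc/mfs/mfshdd.cfg
    mfschunkserver start
    # on a client (e.g. the Pydio web host), mount the cluster as a single filesystem
    mfsmount /mnt/mfs -H mfsmaster.example.com
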
Continue reading create your own clustered cloud storage system with moosefs and pydio

on-demand, secure, distributed storage

In follow-up to a friend’s blog post on TrueCrypt, and in light of some previous investigation and interests of mine, I am wondering how difficult it would be to run a tool like MooseFS in conjunction with TrueCrypt to provide a Wuala-like service as a plausibly-deniable data haven a la Cryptonomicon.

testing hardware performance differences

I’ve been attempting to understand how hard disk cache sizes affect performance recently (and whether it’s worth shelling out about twice as much for a drive with 128MB of cache vs one with just 64MB). What would be the best way to personally investigate the performance differences to help determine which is better (if there’s even a noticeable difference)? …
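
For a quick first pass, the stock tools already get you pretty far; something like this (device names and file paths are illustrative):

    # sequential read timing: -T measures cached reads, -t measures reads from the device
    hdparm -tT /dev/sda
    # direct-I/O write test that bypasses the page cache
    dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=1024 oflag=direct
    # random-read workload, where on-drive cache size is more likely to matter
    fio --name=randread --filename=/mnt/test/fiofile --rw=randread --bs=4k --size=1G --direct=1
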
Continue reading testing hardware performance differences

#moosefs @smartfile – distributed, redundant file management (#olf2013 talk)

As promised, some follow-up to OLF. Chris from SmartFile gave a great talk at OLF this year on MooseFS and how SmartFile leverages it to handle their rapidly-growing storage infrastructure. Specifically, he compared it to Ceph and GlusterFS. In short, MooseFS provides better configurability than either Ceph or GlusterFS, runs with lower overhead, and provides …
Continue reading #moosefs @smartfile – distributed, redundant file management (#olf2013 talk)

p2p cloud storage

I have yet to find a peer-to-peer file storage system. You’d think that with all the p2p and cloud services out there, there’d be a way of dropping files into a virtual folder and having them show up around the network (encrypted, of course) – replicated on some kind of RAID-over-WAN methodology. I’m thinking Cryptonomicon’s …
Continue reading p2p cloud storage

digital preservation

I have been an active member on the Stack Exchange family of sites [nearly] since StackOverflow started a few years ago. Recently a new proposal has been made for Digital Preservation. Many of the proposed questions are interesting (including one of mine) – and I would strongly encourage anyone interested in the topic to check …
Continue reading digital preservation

storage strategies – part 4

Last time I talked about storage robustifiers. In the context of a couple applications with which I am familiar, I want to discuss ways to approach balancing storage types and allocations. Storage overview: the core requirements of any modern server, from a storage standpoint, are the following: RAM, swap, base OS storage, OS/application log storage, application …
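
In practice that usually means carving out a separate filesystem (or at least a separate logical volume) per bucket; an illustrative sketch with made-up sizes and names, not the allocations from the post:

    lvcreate -L 4G   -n lv_swap vg00 && mkswap /dev/vg00/lv_swap && swapon /dev/vg00/lv_swap
    lvcreate -L 20G  -n lv_log  vg00 && mkfs.xfs /dev/vg00/lv_log
    lvcreate -L 100G -n lv_app  vg00 && mkfs.xfs /dev/vg00/lv_app
    mount /dev/vg00/lv_log /var/log
    mount /dev/vg00/lv_app /opt/app
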
Continue reading storage strategies – part 4

storage strategies – part 3

In part 2, I introduced SAN/NAS devices, and in part 1, I looked at the more basic storage type, DAS. Today we’ll look at redundancy and bundling/clustering of storage as a start of a robust storage approach. Before I go any further, please note I am not a “storage admin” – I have a pretty broad …
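
The most basic form of that redundancy/bundling is a software RAID mirror; a minimal sketch with mdadm (device names and mount point are illustrative):

    # mirror two disks into a single redundant block device
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.xfs /dev/md0
    mount /dev/md0 /srv/data
    # watch the initial sync and ongoing health
    cat /proc/mdstat
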
Continue reading storage strategies – part 3

storage strategies – part 2

Continuing my series on storage strategies and options (see part 1), today I want to briefly look at SAN and NAS options. First, storage area networks. A SAN is a “dedicated network that provides access to consolidated, block level data storage”. Storage presented to a target server appears to the machine as if it is a “real” …
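
The block-vs-file distinction shows up in how a client attaches to each; roughly (iSCSI standing in for SAN block storage, NFS for NAS, with illustrative addresses):

    # SAN-style block storage: after login, the LUN appears as a local disk (/dev/sdX)
    iscsiadm -m discovery -t sendtargets -p 10.0.0.50
    iscsiadm -m node --login
    # NAS-style file storage: the server exports a filesystem you mount over the network
    mount -t nfs nas.example.com:/export/data /mnt/data
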
Continue reading storage strategies – part 2

storage strategies – part 1

In follow-up to my previous article about bind mounts, here is the first in a series on storage strategies (while everything contained in this series is applicable to desktops and laptops, the main thrust will be towards servers). Today we’ll look at local/simple storage options (DAS – both the spinning and solid-state varieties). The most …
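
For DAS the workflow is about as simple as storage gets: partition, format, mount; a sketch with an illustrative device name:

    lsblk                                        # list the locally attached disks
    parted /dev/sdb mklabel gpt
    parted /dev/sdb mkpart primary xfs 0% 100%
    mkfs.xfs /dev/sdb1
    mount /dev/sdb1 /srv/local
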
Continue reading storage strategies – part 1

binding your mounts

Over the past several years, I have grown quite fond of the ability to do bind mounts on Linux. First, a little background. Most applications have specific directory structure requirements. Many have wildly varying space requirements depending on how a given install is utilized. For example, HPSA can use anywhere from 40 to 400 to 4000 gigabytes of space …
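
A bind mount simply re-exposes an existing directory at a second path, which is what makes this kind of relocation work; a minimal sketch (paths are illustrative, not the HPSA layout from the post):

    # keep the bulky data on the big filesystem, expose it where the application expects it
    mkdir -p /data/app /opt/app
    mount --bind /data/app /opt/app
    # make it persistent across reboots
    echo '/data/app  /opt/app  none  bind  0 0' >> /etc/fstab
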
Continue reading binding your mounts