Analog Synthesizer Heaven/Hell

I have been seriously pondering buying another synthesizer. I originally was eyeing a Virus TI but am now thinking about trying to fix my Virus Classic (it has a few quirks) and buying an analog synthesizer of some sort. Seems like there has been a bit of a resurgence there, both in the form of analog modular and more integrated synths. The former I find very interesting but wildly, unrealistically expensive. Plus, as cool as those are, they seem to be for supporting sounds rather than, say, for pads, which is really what I am after.

I go back and forth on it, in part due to the varying prices of things, but so far my pick is a DSI Prophet ’08. I am still debating between the rack/desktop and the keyboard version. The latter I would have to find used, as the new price is out of my price range. It’s pretty glorious, though, I think. I have also considered a DSI Tetra, or going vintage with something like an Oberheim OB-6 or a Juno.

I tend to keep going back to the ’08 for its modern feel with a vintage sound. My only gripes are the lack of an audio input (for running external sounds through the filter) and the lack of a high-pass filter. The Prophet 6 has an HPF, but it’s quite expensive and, apart from using VCOs instead of DCOs and maybe being easier to use, I’m not sure I see the draw. Likewise, going vintage gives me tons of options, many of which are affordable, but many also lack control surfaces. The lack of audio inputs on a lot of these isn’t terrible, though. I still have SSM2044 VCF chips and still want to build a sort of MIDIbox effect box with them.

On that note, I still want to build a rackmount MBSID. The main problem is the cost of the control surface boards (if I have them fabbed) and the panel, and to a lesser degree a chassis to fit it all in. The SID chip has a unique sound that I really like, but it isn’t an analog synth, and I have a working little-brother SID synth anyway (the SammichSID).

There is now a rather epic thread over at GearSlutz about all this. Trouble is, these things cost enough that it’s worth spending some time figuring out which one I think I might like the most. It’s rather nice to see that lots of these analog synthesizers sound different from each other, and I think perhaps that is part of the point of getting an analog synth. They seem to have quite a bit of character, which is really hard to find in the world of virtual-analog synths…

Posted in Music | Comments Off on Analog Synthesizer Heaven/Hell

Of Plex and hardlinks

Until relatively recently, I thought iTunes was Apple’s best product. It, plus a few other things Apple has done with the user experience that I am not a fan of, has caused me to no longer think that. In fact, I’m now running Linux a majority of the time (and how freeing it is! Oh Tux, how I’ve missed you!). Linux Mint in my case. The hardest part of the switch was trying to escape Apple’s crumbling iTunes ecosystem. I didn’t want to go back to searching for music by way of the file-system, though. That’s what iTunes did so well: finding music by file is a nightmare, whereas in iTunes I could search by artist, album, song, genre, BPM, grouping, rating, time played, etc. I dare you to try that with a file-system.

It took a lot of searching, but I finally ended up with something that I’m super happy with – Plex! It handles both music and video, on mobile, desktop, Roku, and the Pi. It’s a glorious unity that frees me from the chains of Apple. And, yes, I know, these days people use Spotify and Pandora. Those are great for some things, but I have an eclectic mix of music, much of which is not found on any of those platforms, as well as my personal vinyl record recordings, EDM tracks, you name it. Plus, I like owning my music. There are some *-as-a-service things that are great, but I don’t feel that music should be like air. In any case, I want to be the curator, and Plex lets me do that!

It does have a few shortcomings. Genre is based on the album, not the song, which works a majority of the time, but not for some EDM albums. Likewise, sub-genres are not really a thing, nor is grouping, which was a feature I used quite a bit in iTunes. To solve the grouping part, I finally figured out a great use for hardlinking. Outside of rsync and Time Machine backups, I had never quite figured out a good use case for it; well, now I have! As you can imagine, I have a huge number of songs in iTunes, having used it for over 10 years (yikes). It’s a lot of data, and I made the mistake of letting iTunes manage the file structure, so it’s a bit of a mess. Rather than copying music files to folders and wasting disk space, I simply create hard links. I keep my iTunes collection as well as well-curated collections in Plex, and, in the worst case, I can bust out my Mac and open iTunes if need be.
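The trick itself is tiny. Here’s a minimal sketch using scratch paths (the folder and file names are made up for the demo, not my actual library layout):

```shell
# Demo in a scratch directory; in real life these would be the
# iTunes-managed folder and a Plex library folder.
demo=$(mktemp -d)
mkdir -p "$demo/iTunes/Tower of Heaven" "$demo/Plex/Game"
echo "fake audio" > "$demo/iTunes/Tower of Heaven/01 Title Screen.mp3"

# ln without -s makes a hard link: a second name for the same inode,
# so the song shows up in the Plex folder without duplicating any data.
ln "$demo/iTunes/Tower of Heaven/01 Title Screen.mp3" \
   "$demo/Plex/Game/01 Title Screen.mp3"

# Link count is now 2; both names point at the same bytes on disk,
# and deleting either name leaves the other (and the data) intact.
stat -c %h "$demo/Plex/Game/01 Title Screen.mp3"   # prints 2
```

The one gotcha is that hard links only work within a single file-system, which is fine here since everything lives on the same disk.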

A good example from just this morning: I linked flashgoodness’ Tower of Heaven soundtrack into my Game collection in Plex. So now if I want to listen to video game music, I just go to the Game library in Plex. But that album is also a chiptune album, so I may end up creating a Chiptune library in Plex as well. Chiptune is a pretty well-defined genre, but there are still some crossover and hybrid artists (such as my band, of course), such that a single genre may not be useful enough. BAM! Libraries and hard-linking to the rescue!

I honestly probably didn’t explain why that is so cool very well, but it works super well in Plex, at least so far, and means that, one day, I may rid myself entirely of iTunes without having a disk-space nightmare on my hands. Hate iTunes but like metadata and searching music the way it should be searched? Try Plex!

Posted in Computers | Tagged , | Comments Off on Of Plex and hardlinks

Trying Out Linux Containers (LXC), Plus LOOONG Delay In Posting

Wow, it shows how busy I’ve been that I haven’t posted here in over a year. In fact, right at the year mark I added a new RouterBoard CCR1009 to my setup, having had such good luck with the CRS109. It’s quite fantastic in my book!

But on to the topic at hand. Today I managed to finally make some forward progress with using LXC containers on Ubuntu. I had been testing them a bit while trying to deploy OpenStack on my workstation, but ended up using standard virtualization (KVM in my case) to get that working. Today I had a really good use case to try out LXC containers on a small scale. I have been testing some Linux-based NVR software to capture recordings from a simple IP camera we have set up. The camera is there to see if we want to invest in buying nicer security cameras and mounting them to our house. I won’t get into why I’m not getting a Nest camera or a Ring doorbell camera. If you know me, you know why I don’t like the idea of those.

Anyways, I’m using DigitalWatchdog’s Spectrum software. Before Open Source advocates bust me on it: yes, I tried ZoneMinder, and I liked it, but it is just way too resource-hungry for what it does. It likes to crush my Ubuntu NAS. By contrast, Spectrum runs extremely well and can capture full video (as opposed to the JPEG captures ZoneMinder seems to prefer). I’m all about Open Source, but in this case Spectrum works really well, and I needed something solid since it is for home security, after all.

As good as Spectrum seems to be, there are a few big gotchas. The service likes to run as root and likes to create its storage directories in really weird places. No doubt it is designed to run primarily on a dedicated server or VM. I may do that at some point, but for our tiny test setup it didn’t make any sense. I certainly don’t want it running as root on my NAS, though. Originally I simply modified the init script to run as a non-root user and, after some permission adjustments, despite what DW has said, it seems to run just fine. But I soon realized that containers can help me avoid that sort of customization and improve my security even more without adding a lot of overhead. If my NAS were more powerful, I might look at a fully virtualized setup, but given the space requirements, among other things, containers seem like a great fit!

I did have to end up making a small change to Spectrum’s init script: it wants to adjust umask settings, which LXC does not currently support in unprivileged containers. I also had to install some packages apt-get didn’t catch for some reason, which caused some confusion. And I set up the container to use macvlan instead of bridged mode, which let me assign an IP address to the container directly and vastly simplified the networking aspect of the whole thing. I also learned that /var/log/upstart exists – who knew! Overall it’s working quite well and using nearly the same resources it did when it was running directly on my NAS.
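For reference, the macvlan part boils down to a few lines of container config. This is a sketch in the LXC 1.x config syntax; the interface name and MAC address are placeholders rather than my actual setup:

```
# In the container's config (e.g. /var/lib/lxc/spectrum/config):
lxc.network.type = macvlan
lxc.network.macvlan.mode = bridge
lxc.network.link = eth0              # host NIC the container rides on
lxc.network.name = eth0              # interface name inside the container
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
```

The container then gets its own MAC and IP on the LAN (via DHCP or static config inside the container). One caveat of macvlan bridge mode worth knowing: the host itself generally cannot reach the container directly over that interface, though everything else on the LAN can.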

In short, I’m a big fan, and I plan on setting up containers for Dropbox and Plex as well.

I realize the post is light on details so here’s some information to get started with containers:

  • https://linuxcontainers.org/lxc/getting-started/
  • https://help.ubuntu.com/lts/serverguide/lxc.html
Posted in Uncategorized | Tagged , | Comments Off on Trying Out Linux Containers (LXC), Plus LOOONG Delay In Posting

Finally Gave RouterBoard a Try

I’ve been wanting to try out a RouterBoard for a long while. I ran into them many moons ago in Linux Journal and just could never quite justify one until now. RouterBoards are, as the name implies, routers which run a variant of Linux (RouterOS, which is also available on its own). You can either build your own router by assembling their semi-modular components or grab an off-the-shelf solution. You can sort of think of them as a router running DD-WRT, but on steroids. In any case, I finally took the plunge and bought a CRS109. I’m still tweaking things, but wow, it’s a glorious piece of hardware!

At least for my use case. The 109 has plenty of power to keep up with my U-verse FTTP 45/5 connection. It would not be the best choice for trying to keep up with something like Google Fiber, though RouterBoard has solutions for those cases too. I didn’t need that sort of horsepower, and the 109 was very affordable as a result. The 109 is powered by a MIPS-based Atheros AR9344, a platform used in other consumer routers. It adds to this a dedicated switch chip which is capable of some QoS, VLANs, and other fun things. In my setup, it replaced my Apple Time Capsule and D-Link GigE router and, in doing so, greatly improved my network setup.

RouterOS supports IPv6 in various incarnations. It took a lot of work, but I do have IPv6 via AT&T (using their tunnel, though I don’t have to fuss with that) and set up my 109 to handle most of my IPv6 delegation. I have a basic set of firewall rules for both IPv4 and IPv6, along with the switch-based QoS and software QoS running on the WAN port. The rest of the switch ports are all part of my (currently) single-segment LAN and are also tied to the WiFi. Since the 109 has a dedicated switch chip, communication on the LAN between the ports is at wire speed and doesn’t burden the main CPU of the router. Instead, the CPU is focused on the firewall, QoS, and some other periphery services like DNS, DHCP, NTP, etc. I also have SNMP set up, which I’m using via Cacti running on my file-server to graph switch port and CPU usage.
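For flavor, here’s roughly what a basic stateful firewall looks like in the RouterOS CLI. This is a hand-written sketch, not an export from my router, and the interface name is a placeholder:

```
# Hypothetical WAN interface "ether1"; accept established traffic,
# drop everything else that arrives on the WAN destined for the router.
/ip firewall filter
add chain=input connection-state=established action=accept
add chain=input connection-state=related action=accept
add chain=input in-interface=ether1 action=drop
```

The QoS side is similar in spirit: queue types get attached per interface, which is how the ‘default-small’ and ‘RED Default’ queues mentioned below come into play.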

I have to say, QoS is glorious. For outbound traffic on the WAN I’m using the ‘default-small’ queue, and for incoming I’m using ‘RED Default’. The latter is a bit more difficult to test at the moment, though so far, so good. Uploads, however, are dramatically improved: I can upload a file via SSH while also using SSH for interactive sessions and hardly notice any impact. Even when I rolled my own QoS directly via iptables, I couldn’t get SSH to behave under that use case. Likewise, the impact to gaming while uploading a file is minimal as well.

Long story short, I’m a believer! As a Linux sysadmin and somewhat of a network admin, I love this thing. It does what I need, gives me plenty of info to make informed decisions, runs well, and wasn’t expensive. It isn’t for your average consumer who just wants to plug in a magic box and have it work. For me, though? It’s just about perfect!

Posted in Computers | Tagged | Comments Off on Finally Gave RouterBoard a Try

Fully online RAID1 to RAID10 migration using Btrfs

I have actually been running Btrfs on our home NAS for a while. At first I was sort of turned off by the name and a bit by the folks behind it, but a nice article over at Ars Technica convinced me to give it a go. Previously, I was using MDRAID + LVM + XFS, a setup that is still solid. Btrfs can replace that entire stack, but really only if you use Btrfs exclusively (at least on that particular set of drives). In some cases that is not ideal – I would say MySQL on XFS is probably still the best fit there. In multi-filesystem cases, Btrfs can live happily on MDRAID and/or LVM, but one will miss some of the really neat features that way.

Regardless of those cases, Btrfs seemed like something worth trying out on my NAS, where data integrity is rather important. Setting it up is definitely different. I won’t cover that here (the Btrfs wiki should get most folks going), but it is easy, if a tad bit alien. Instead, I thought I’d share my experience with going from a RAID1 of 2x 1TB drives to a monster (for me) RAID10 of 4x 4TB drives. I had been wanting to do that for a while, particularly since the new line of consumer NAS drives came out. I opted to go with Seagate’s NAS drives, having been a long-time Seagate fan. Other hard drive vendors make similar products these days.

As a bit of an aside: while I opted for RAID10, Btrfs has a pretty neat approach to mirroring – it will generally figure out the optimal way to store the data while keeping two copies of each chunk on different drives. That means configurations of more than 2 drives, or even drives of different sizes, are possible while keeping a balance between data integrity and space usage. I didn’t go with this option because I had 4 of the exact same drive, and my hunch is that RAID10 will still be faster (it usually is with classic RAID).
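For illustration, building that kind of flexible mirror is a one-liner at mkfs time. The device names here are hypothetical, and the command destroys whatever is on the disks, so treat this as a sketch rather than something to paste blindly:

```
# Three hypothetical empty disks of possibly different sizes.
# "-d raid1 -m raid1" mirrors both data and metadata: btrfs keeps two
# copies of every chunk, placed on whichever two drives make sense.
# mkfs.btrfs -d raid1 -m raid1 /dev/sdx /dev/sdy /dev/sdz
```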

So how did I make all this happen? I started from a healthy RAID1 setup, so the first step was to add 2 of the 4TB drives and set up a mirror. I only have 4 bays available, so I wasn’t able to just chuck all 4 of the new drives in, although conceivably that would have saved a few steps if I could have. My system supports SATA hotplug, so it was just a matter of putting the drives in caddies and shoving them in, making sure the system saw the new drives, and then adding them to Btrfs. In brief, that was more or less the following:

# echo "- - -" > /sys/class/scsi_host/host0/scan
# partprobe
# mount /mnt/btrfs
# btrfs device add /dev/sdd /mnt/btrfs
# btrfs device add /dev/sde /mnt/btrfs
# btrfs filesystem df /mnt/btrfs/

Note that host0 may be different depending on your configuration (I actually had two buses in my case); you can figure that out with an ls -l /sys/class/scsi_host/. For those not familiar with Btrfs subvolumes, I use them extensively but normally do not keep the root Btrfs file-system mounted (as there is no reason to during normal operations). Also note that I’m not partitioning the drives, though that certainly can be done if preferred. I got in the habit of not doing that when dealing with LVM and SSDs, where alignment was a headache with old-style partitions. It just seemed easier to use the entire drive unless I have a need to partition, and Btrfs reduces that need since it supports subvolumes anyway. Anyways, after doing the above, the next step was to remove the old drives.

That is where I ran into a minor snag. Turns out, removing drives requires mounting the file-system in degraded mode. Otherwise you get a somewhat cryptic error message. So removing the drives required:

# mount -o remount,degraded /mnt/btrfs
# btrfs device delete /dev/sdb /mnt/btrfs
# btrfs filesystem df /mnt/btrfs/
(wait for migration to complete)
# btrfs device delete /dev/sdc /mnt/btrfs

According to the documentation, you can remove more than one device at once, so I could have removed both 1TB drives together; either way, Btrfs migrates the data over to the remaining drives for you. In my case, I opted to do it one at a time. Once that was done, I yanked the old drives out, added the remaining two 4TB drives, and then converted over to RAID10:

# echo "- - -" > /sys/class/scsi_host/host0/scan
# partprobe
# btrfs device add /dev/sdb /dev/sdc /mnt/btrfs
# btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/btrfs

I didn’t think to grab screenshots of the btrfs output from the previous steps, but I do have what the conversion to RAID10 looks like:

# btrfs filesystem df /mnt/btrfs/
Data, RAID10: total=220.00GiB, used=109.49GiB
Data, RAID1: total=818.00GiB, used=817.99GiB
System, RAID10: total=64.00MiB, used=148.00KiB
Metadata, RAID10: total=1.00GiB, used=294.68MiB
Metadata, RAID1: total=2.00GiB, used=1.54GiB

Once that is done, I’ll have around 8TB (not accounting for compression) of usable space! A minor quirk with Btrfs is that conventional tools like df don’t give you the whole story, so df actually looks more impressive than it is:

root@filedawg:~# df -hT /mnt/btrfs/
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/sdc       btrfs   15T  1.9T   12T  14% /mnt/btrfs

This is both a good and a bad thing. Given the way Btrfs works, alongside compression and de-duplication, providing a simple answer to how much space is available is no longer trivial. In my opinion, that, plus the other minor quirks of Btrfs, is totally worth it given how easy it was to convert from RAID1 to RAID10 without any downtime. I could have done the same thing using MDRAID + LVM, but it would have involved more steps. I don’t think Btrfs is a one-size-fits-all file-system – at least not yet – but for my NAS it has done a pretty good job and I’ve been quite happy with it!
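For a truer picture than df gives, the btrfs tools themselves are the way to go. A quick sketch of the two commands I lean on (run as root against the mounted file-system):

```
# Per-profile totals: the RAID1 vs RAID10 breakdown shown earlier.
# btrfs filesystem df /mnt/btrfs

# Per-device view: lists each drive and how much btrfs has allocated on it,
# which is what df can't tell you about a multi-device file-system.
# btrfs filesystem show /mnt/btrfs
```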

All that said, Btrfs is still experimental, and I take backups of my important data (in my case to an XFS file-system running on an ioSafe), but *knock on wood*, it has served me very well so far!

Posted in Computers | Tagged | Comments Off on Fully online RAID1 to RAID10 migration using Btrfs

Watched the New Cosmos

And it was absolutely glorious! I really hope it was a ratings success as well, as this show deserves to be on TV and in prime-time. So good!

Posted in Ramblings | Comments Off on Watched the New Cosmos

Merry Christmas!

Christmas was lovely this year. The usual material possessions were exchanged with some nice surprises. My mom gave me a dive knife, for instance, which was a total surprise! Likewise, Camden got some neat things.

But I think the thing I will remember the most is what Camden said when we finally buried Chaucer’s ashes. Yeah, I know, doing it on Christmas might seem odd, but that’s not important. When Paul asked if anyone wanted to say anything, Camden spoke up. He said “I love you, Chaucer…”

That was very special.

Posted in Uncategorized | Tagged , | Comments Off on Merry Christmas!

Chaucer, 1997-2013

Chaucer

I can’t imagine there ever being a dog quite like Chaucer. He had a good life and certainly made mine ever so special. I’m gonna miss you, good buddy. See you again someday.

Posted in Personal | Tagged | Comments Off on Chaucer, 1997-2013

Love at First Sight, Live at Artslam!

A song about the hero, early in his leveling up, falling in love with the princess. Performed live at Artslam!

Posted in Music, Songs, artslam! | Tagged | Comments Off on Love at First Sight, Live at Artslam!

Hurt, Live at Artslam!

Our cover of Hurt performed live at Artslam! with a cleaned up mix. The mix isn’t perfect but it certainly has a live sound to it! Perfection is boring, anyway.

Posted in Music, Songs, artslam! | Tagged | Comments Off on Hurt, Live at Artslam!