KOOLER

    Posts

    • RE: Windows Tape Library Emulator

      @thwr said in Windows Tape Library Emulator:

      I'm looking for a tape library and drive emulator for Windows, preferably one that emulates a SAS/SCSI interface. There used to be a tool called VMCE (Virtual Media Changer Emulator), but it doesn't seem to be available anymore.

      I'm not looking for any VTL solution, but for something that just fakes a robotic autoloader like the Overland Neo series (which sells under various names, Tandberg for example). Windows is a hard requirement in this case.

      I'm planning to write a little how-to where a tape lib is involved and I don't want to use my real one at work 😉

      As others pointed out (thanks for the ref, @Danp!), we can help. VTL isn't free, but I can manage an NFR license for you. If you have something to offer, we can even give away an SDK so you can write your own or modify an existing VTL plug-in for StarWind to do something. Do you remember C/C++? 😉

      P.S. SyncSort did exactly this as they OEM our VTL and iSCSI engine for tapes.

      posted in IT Discussion
      KOOLER
    • RE: What is a Linux Distro

      @thwr said in What is a Linux Distro:

      Good write-up.

      About OpenSuSE: It's very popular, that's true, but it's not a base distro. OpenSuSE was derived from SuSE, which was derived from Slackware, which in fact is one of the three large base distros.

      There's a wonderful diagram over at Wikimedia:
      https://commons.wikimedia.org/wiki/File:Linux_Distribution_Timeline.svg

      Good one, but the info is a bit outdated IMHO.

      posted in IT Discussion
      KOOLER
    • RE: Converting to a virtual environment

      @dafyre said in Converting to a virtual environment:

      Their Endpoint Recovery Free would work for backups in this scenario as well. And with that one, you do get file-level restores.

      Unfortunately, it'd have to be loaded per VM, and is Windows only at the moment (Linux version just hit beta).

      They are improving greatly and we'll get releases and fully commercially supported versions soon. VERY soon 😉

      posted in IT Discussion
      KOOLER
    • RE: 2.5" or 3.5" drives

      @NETS said in 2.5" or 3.5" drives:

      We are getting ready to build a backup server based on an R720. Should we look for a 2.5" chassis or a 3.5" chassis?

      I'm thinking the density of the 2.5" chassis would be beneficial for future expansion.

      1.8" or 2.5" SSDs for the performance tier

      3.5" HDDs (slow spin) for the on-site capacity tier

      posted in IT Discussion
      KOOLER
    • RE: Rebuild Time on a 96 Drive RAID 7 Array (RAIDZ3)

      Great job! Thanks for bringing it here. There are so many misconceptions and so much black magic assumed about RAID-Z3...

      posted in IT Discussion
      KOOLER
    • RE: Cross Posting - Storage Spaces Conundrum

      @DustinB3403 said in Cross Posting - Storage Spaces Conundrum:

      Hello Spiceheads (from here),

      I am currently looking at implementing a large file server. I have a Lenovo server with 70x 1.8 TB 10k SAS drives attached via DAS. This server will be used as a file server, serving up 80% small files (1-2 MB) and 20% large files (10 GB+).

      > What I am not sure about is how to provision the drives. Do I use RAID? Should I use storage spaces? Or should I go with something else like ScaleIO, OpenIO, Starwinds etc..?

      I am looking for a solution that is scalable, so I could increase the volume if needed. I was also thinking about a little future-proofing, setting this up so I could scale it out if I wanted to.

      This does need to be resilient, with a quick turnaround should a disk go down, and it also needs to be scalable.

      Looking forward to hearing your views.

      StarWind assumes you use some local RAID (hardware or software). We do replication, and per-node redundancy is handled by RAID. So we do RAID61 (RAID1-over-RAID6) for SSDs and HDDs, RAID51 (RAID1-over-RAID5) for SSDs, RAID01 (RAID1-over-RAID0) for SSDs and HDDs (3-way replication is recommended), and RAID101 (RAID1-over-RAID10) for HDDs and SSDs. It's very close to what SimpliVity does, if you care. ScaleIO does 2-way replication at a smaller block level and needs no local RAID (but they take one node away from the capacity equation, so from 3 raw nodes you'll get [(3-1)/2] = 1 node you can really use). OpenIO is something I'd never seen before you posted, so I dunno what they do.
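
      To make the capacity math above concrete, here's a rough Python sketch of both layouts (back-of-the-envelope only; the node counts and disk sizes are made up, and real products add spares, metadata and formatting overhead):

```python
# Usable-capacity math for "replication over local RAID" (StarWind-style) and for
# ScaleIO-style "(N-1) with 2-way replication", as described in the post above.

def nested_raid_usable_tb(nodes, disks_per_node, disk_tb, local_raid, replicas):
    """Local RAID inside every node, then N-way replication across nodes."""
    local_factor = {
        "raid0": 1.0,                                    # no local redundancy
        "raid5": (disks_per_node - 1) / disks_per_node,  # one disk's worth of parity
        "raid6": (disks_per_node - 2) / disks_per_node,  # two disks' worth of parity
        "raid10": 0.5,                                   # local mirroring
    }[local_raid]
    per_node = disks_per_node * disk_tb * local_factor
    # every group of `replicas` nodes stores one copy of the data
    return nodes / replicas * per_node

def scaleio_usable_tb(nodes, raw_per_node_tb, replicas=2):
    """Per the post: one node is taken out of the equation, then replication divides the rest."""
    return (nodes - 1) * raw_per_node_tb / replicas

# RAID61 example: 2 nodes, 12 x 4 TB disks each, RAID6 locally, 2-way replication -> 40.0 TB usable
print(nested_raid_usable_tb(nodes=2, disks_per_node=12, disk_tb=4, local_raid="raid6", replicas=2))

# ScaleIO 3-node starter kit with, say, 10 TB raw per node -> (3-1) * 10 / 2 = 10.0 TB usable
print(scaleio_usable_tb(nodes=3, raw_per_node_tb=10))
```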

      posted in IT Discussion
      KOOLER
    • RE: EMC ScaleIO Available for Free for Non-Production Use

      @dafyre said in EMC ScaleIO Available for Free:

      This looks pretty interesting. Would it be a competitor to, say, Starwind & co?

      EMC a competitor to StarWind? That's a compliment, Sir, you made my day!!! There's a 100x gap in revenue between us, so that's great.

      Technically the positioning is very different: ScaleIO needs many, many nodes to get reasonable performance (<10 is a joke), and small deployments are expensive in terms of the data witness. I mean, they do have (N-1) capacity, so with 2-way replication on 3 nodes (the starter kit) you'll get 1 node (1/3 of the cluster) of usable capacity. That's expensive for NVMe storage! Expensive and slow. 2-way replica, no dedupe, etc.

      My own IMHO: Ceph is running circles around ScaleIO, and as soon as somebody bothers to finalize a healthy ecosystem around Ceph (we need more companies, not just InkTank), ScaleIO will have a VERY strong competitor. Competitor = somebody targeting MASSIVE many-node deployments (SMB is a few nodes, which is where StarWind plays, so we aren't really a competitor).

      posted in IT Discussion
      KOOLER
    • RE: 4 PB on USB3 Drives Single Storage Array

      @thwr said in 4 PB on USB3 Drives Single Storage Array:

      @KOOLER said in 4 PB on USB3 Drives Single Storage Array:

      @thwr said in 4 PB on USB3 Drives Single Storage Array:

      @scottalanmiller said in 4 PB on USB3 Drives Single Storage Array:

      https://community.spiceworks.com/topic/1874784-whole-hard-drive-manufacturerer

      I kid you not, this is how the day of posting is going. Never saw what the OP's real goal is. But... what?

      That's... interesting. Somehow. I've seen things before, like a guy who created a ZFS pool on 20-something USB thumbdrives to demonstrate ZFS failure handling when unplugging a member, but a production array with 4 PB on USB3? Probably not a good idea.

      Why not get a few cheap SAS enclosures, toploaders with 40+ disks for example? Backup and RAID are both big issues with arrays that large; hope that guy keeps that in mind.

      Physical dimensions? 😉

      vs IKEA bookshelves? 😉

      U-boot water closet? Servers sitting on top of [you know yourself]?

      posted in IT Discussion
      KOOLER
    • RE: Setup 3 node cluster

      @scottalanmiller said in Setup 3 node cluster:

      @matthewaroth35 said in Setup 3 node cluster:

      OK. I have 3 HP G7s, same model. I want to set them all up as a cluster, the same way Scale does.

      What Scale does is highly unique. What you have is a modified hypervisor that knows about the storage, a custom storage layer that is very advanced and designed specifically for this use, and then a custom web management layer that controls it all for you. That's not something you can build at home.

      You can use VMware + VSAN (very expensive) or Hyper-V + StarWind (not cheap) to do this without some crazy complication. But I'm assuming that for an "old cluster" you aren't looking for an investment. If you were, you'd just buy more Scale gear.

      So you are going to be looking at free. And probably Xen or KVM + CEPH is going to do that and that's not trivial, at all.

      We'll try to fix that immediately after our V8 R5 release after NY, giving away unrestricted StarWind vSAN for free. Switching to the appliance business ;))

      posted in IT Discussion
      KOOLER
    • RE: Announcing the Death of RAID

      @scottalanmiller said in Announcing the Death of RAID:

      @coliver said in Announcing the Death of RAID:

      Is StarWinds vSAN considered RAIN?

      We'd have to dig in under the hood. I think that they are mostly focused on network RAID, just really advanced.

      StarWind uses local reconstruction codes (for now, stand-alone software or hardware RAID on every node; it can be RAID0, 1, 5, 6 or 10) and inter-node n-way replication between the nodes, which can be considered a network RAID1. There's no network parity RAID like HPE (ex-Left Hand) or Ceph does.

      P.S. We're working on our own local reconstruction codes now, so local protection (SimpliVity style) soon won't be required. FYI.

      posted in IT Discussion
      KOOLER
    • RE: Understanding 3-2-1 backup rule and son/father/grandfather model backups.

      @Dashrender said in Understanding 3-2-1 backup rule and son/father/grandfather model backups.:

      @Net-Runner said in Understanding 3-2-1 backup rule and son/father/grandfather model backups.:

      Here are some good explanations on the rule:
      https://knowledgebase.starwindsoftware.com/explanation/the-3-2-1-backup-rule/
      https://www.veeam.com/blog/the-3-2-1-0-rule-to-high-availability.html

      We have a highly-available cluster based on StarWind https://www.starwindsoftware.com/starwind-virtual-san and thus have 2 copies of data as a synchronous replica and a third copy as an on-site backup (which is part 3 of the rule). Obviously, the cluster is running on primary internal storage and backups are stored on a separate NAS (which is part 2 of the rule). And we have a VTL virtual machine https://azure.microsoft.com/en-us/marketplace/partners/starwind/starwindvtl/ running in Azure that hosts our offsite backups (which is part 1).

      I don't agree that StarWinds provides two copies of data - this is like saying that RAID 1 is two copies of the data. They are in real-time sync, so if one becomes corrupted, so does the other.

      This is true! Unless snapshot-based async replication is configured instead of sync. I mean 1+1 or 2+1 instead of HA(2).
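
      To make the mapping to the rule easier to follow, here's a tiny Python sketch that checks a copy layout against 3-2-1 (the copy list is illustrative only, loosely based on the setup quoted above, and per the reply the synchronous replica is not counted as an extra copy):

```python
# 3-2-1 rule: at least 3 copies of the data, on at least 2 different storage/media
# types, with at least 1 copy kept off-site.

copies = [
    {"where": "production cluster (primary)", "media": "internal storage", "offsite": False},
    {"where": "on-site backup",               "media": "NAS",              "offsite": False},
    {"where": "StarWind VTL in Azure",        "media": "cloud VTL",        "offsite": True},
]

def satisfies_3_2_1(copies):
    enough_copies = len(copies) >= 3                       # "3"
    enough_media = len({c["media"] for c in copies}) >= 2  # "2"
    has_offsite = any(c["offsite"] for c in copies)        # "1"
    return enough_copies and enough_media and has_offsite

print(satisfies_3_2_1(copies))  # True
```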

      posted in IT Discussion
      KOOLER
    • RE: Hyper-V Networking 101 article series

      This is a great one! Hope more Hyper-V related stuff is coming 🙂

      posted in IT Discussion
      KOOLER
    • RE: New StarWind Virtual SAN Free - All restrictions removed!

      @Breffni-Potter said in New StarWind Virtual SAN Free - All restrictions removed!:

      @KOOLER said in New StarWind Virtual SAN Free - All restrictions removed!:

      @Breffni-Potter said in New StarWind Virtual SAN Free - All restrictions removed!:

      I tried to get a copy of Starwind free in the past and they were like "Here's a trial, have fun"

      I don't know how you managed to do that; free and trial were always SEPARATE download options. What exactly did you do to end up with "here's a trial, etc."?

      I filled in the form.
      Spoke to a rep. The rep then said, here's a trial.
      I asked where the free 2 node license was, was told I could not have it.

      Sales reps never contact people who ask for the free stuff, but they do for trials. My guess is a) you initially asked for the trial and not the free version, or b) you didn't mention to the sales rep that you're coming from MangoLassi so you could get more.

      Either way, the licensing policy has changed completely. I'd suggest you PM me your e-mail and I'll put you in touch with somebody from the SEs (no sales!!!) who'll help with a key and a pain-free engagement process. That's what I can do now...

      posted in IT Discussion
      KOOLER
    • RE: New StarWind Virtual SAN Free - All restrictions removed!

      @dafyre said in New StarWind Virtual SAN Free - All restrictions removed!:

      @Tim_G said in New StarWind Virtual SAN Free - All restrictions removed!:

      @scottalanmiller said in New StarWind Virtual SAN Free - All restrictions removed!:

      now. I'm on.

      StarWind has got it going on! They've been doing this stuff before Microsoft finally got around to implementing some of it... such as S2D and other Datacenter features.

      Anyone get ahold of all the slides? I missed it... was super busy at work today.

      Or how about a recording of it?

      We did! We'll share the content around next week.

      posted in IT Discussion
      KOOLER
    • RE: KVM vs XenServer

      @scottalanmiller said in KVM vs XenServer:

      Like your management layer or storage layer. Like if you want DRBD or Starwind, you bring your own. Or if you want a GUI or whatever on top.

      Yup. We might consider bringing in, if not a native Linux version, then something that's KVM or Xen native.

      StarWind Virtual SAN Free

      https://www.starwindsoftware.com/starwind-virtual-san-free

      posted in IT Discussion
      KOOLER
    • RE: Kooler on DFS-R Issues

      Scott, thank you for bringing in this thread! I had actually forgotten about performance. Both source and destination updated 🙂

      1. Performance issues

      DFS-R isn't in-line; it writes the file first, then reads it back to replicate it later. This means there's 100% IOPS (read) overhead on everything you write to a DFS-R enabled share.

      DFS-R always reads from one replica, so there's no performance "boost" from reading data off the second copy either (this is something the active-active clustered guys will do).
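
      A quick back-of-the-envelope sketch of that read overhead (illustrative numbers only, not a DFS-R benchmark):

```python
# Every byte written to a DFS-R enabled share is later read back so it can be
# replicated, so the source volume sees the original writes plus roughly an
# equal amount of replication reads on top.

def source_volume_iops(client_write_iops):
    writes = client_write_iops             # the writes the clients actually issue
    replication_reads = client_write_iops  # DFS-R reads the new data back to ship it
    return writes + replication_reads

print(source_volume_iops(500))  # 1000 -> ~100% overhead vs. the 500 write IOPS the clients generated
```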

      posted in IT Discussion
      KOOLER
    • RE: Kooler on DFS-R Issues

      @dafyre said in Kooler on DFS-R Issues:

      With Starwind's coming Linux release (or has it already been released?)... Would this not be done in a Linux VM? That would eliminate concerns about licensing and such.

      1. The StarWind Linux VSA is released.

      2. There's no problem installing something like us into the parent partition; the question was whether it's OK to use it as a file server with a free version of Windows.

      posted in IT Discussion
      KOOLER
    • RE: Is HyperConvergence Even a Thing

      @dafyre said in Is HyperConvergence Even a Thing:

      @scottalanmiller said in Is HyperConvergence Even a Thing:

      Merging components into a single node doesn't stop you from scaling horizontally to other nodes. But EACH node must have the stack collapsed onto it. That's the key.

      Then what stops a VMware setup with vSAN from being called hyperconverged?

      VMware vSphere with tightly coupled vSAN, and VMware vCenter with vSAN management seamlessly integrated, IS a hyperconverged solution. Lots of HCI vendors will give you FUD, telling you their own GUI is a "single pane of glass hiding all the burdens behind it", but the reality is that most often their GUI is just a bunch of open-source junk compiled by some script kiddie, and what they're actually trying to sell you is a "single pain in the ass".

      P.S. No, I won't give any names. This is a tiny industry, so you know whom I mean.

      posted in IT Discussion
      KOOLER
    • RE: Erasure Coding

      @Tim_G said in Erasure Coding:

      I've no experience with Erasure on VMWare vSAN... but I know that it's production worthy and safe with S2D. It gives you the same resiliency but more efficient capacity. I believe it's nothing more than just an algorithm... so I can't see it being any less safe/efficient when used with a different product.

      I do know that all flash = better efficiency.

      It actually could do much better. For some reason MSFT decided to cut off their own balls and stop at double parity, which is one linear parity sum and one global parity, while it was possible to do N => M erasure coding, the same way Azure and Ceph do.
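
      To illustrate the capacity argument, here's a rough Python sketch (the k/m combinations are picked for illustration, not what S2D, Azure or Ceph actually ship):

```python
# Why general k-data + m-parity erasure coding beats plain replication on capacity.

def efficiency(k, m):
    """Fraction of raw capacity that stays usable with k data + m parity fragments."""
    return k / (k + m)

print(efficiency(1, 1))   # 0.50 -> 2-way mirror
print(efficiency(1, 2))   # 0.33 -> 3-way mirror
print(efficiency(4, 2))   # 0.67 -> 4+2 erasure coding, still survives any 2 lost fragments
print(efficiency(10, 4))  # 0.71 -> wider stripes get even cheaper per protected TB

# The simplest erasure code is a single XOR parity fragment: any one lost data
# fragment can be rebuilt by XOR-ing the parity with the surviving fragments.
data = [b"\x01\x02", b"\x0f\x10", b"\xaa\xbb"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))
rebuilt = bytes(p ^ b ^ c for p, b, c in zip(parity, data[1], data[2]))
assert rebuilt == data[0]
```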

      posted in IT Discussion
      KOOLER
    • RE: Caching Needs and SSDs

      @Tim_G said in Caching Needs and SSDs:

      I WISH there was a built-in way in Windows Server to use RAM as cache. I think it's awesome that StarWind has it.

      Well, it opens a big can of worms if done the wrong way. People don't understand that a block DRAM cache is dangerous without synchronous replication between nodes (to keep multiple copies of the cache coherent between independent "controller" nodes) and some sort of log at the back end. So they install StarWind on a single controller without a UPS, experience a power outage, lose a few GBs of transactions and... come blame us! 😞
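
      A toy write-back cache sketch of that failure mode (purely illustrative; this is not how StarWind is implemented):

```python
# Acknowledged writes live only in DRAM until they are flushed, so a single node
# losing power before the flush silently loses them.

class WriteBackCache:
    def __init__(self, backing_store):
        self.backing = backing_store   # stands in for the disk
        self.dirty = {}                # acknowledged but not yet persisted blocks

    def write(self, block, data):
        self.dirty[block] = data       # ack'd to the application immediately

    def flush(self):
        self.backing.update(self.dirty)  # only now is the data actually safe
        self.dirty.clear()

    def power_failure(self):
        self.dirty.clear()             # DRAM contents are gone

disk = {}
cache = WriteBackCache(disk)
cache.write("blk0", b"important transaction")
cache.power_failure()                  # outage before the flush: no UPS, no second node
print(disk.get("blk0"))                # None -> the acknowledged write is lost

# A synchronously replicated cache keeps a second coherent copy of the dirty
# blocks on another node, which is exactly the point of the post above.
```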

      posted in IT Discussion
      KOOLER