StorageNinja
    • Following 1
    • Followers 10
    • Topics 3
    • Posts 988
    • Groups 1

    Posts

    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      The retraining costs that would be required to fulfill an IT mandate of "can run on any OS or hypervisor" to migrate this platform (or many major EMRs) are staggering.

      Well, no. The cost is literally zero. In fact, it takes cost and effort to not support it. OS and hypervisor are totally different things here. Writing for an OS takes work, because the OS is your application deployment target. The hypervisor is none of the application writer's business. That's below the OS, on the other side of your interface. There is zero effort, literally zero, for the application team.
      So what I see isn't a lack of effort, it's throwing in additional effort to try to place blame elsewhere for problems that aren't there. I've run application development teams; if your team is this clueless, I'm terrified to tell the business that I let you in the door, let alone deployed your products.

      Applications can certainly care what the hypervisor/storage is and integrate with it.

      Writable clones - if you're leveraging the underlying platform for writable clones in test/dev/QA workflows (you see this a lot with Oracle DB applications and in oil and gas, where the application might even directly call NetApp APIs and manage the array for this stuff).

      Backups - some hypervisors have changed block tracking, so a backup takes minutes; others don't, meaning a full pass can take hours (see the sketch at the end of this post). BTW, I hear Hyper-V is getting this in 2016 (Veeam had to write their own for that platform).

      VDI - always leverages so many hypervisor-based integrations that the experience can be wildly different. Citrix and View both need 3D video cards to do certain things, and being locked into a platform that has limited support for that can be a problem.

      Security - guest introspection support to hit compliance needs, micro-segmentation requirements (EPIC has drop-in templates for NSX, possibly HCI at some point). If you want actual micro-segmentation and inspection on containers, there isn't anything on the market that competes with Photon yet. At some point there may be ACI templates, but that will require network hardware lock-in (Nexus 9K) and that's even crazier (applications defining what switch you can buy!).

      Monitoring - app owners want full stack monitoring and this is a gap in a lot of products. Here's an example.

      Applications caring about the hypervisor and hardware will become more pronounced when Apache Pass and Hurley hit the market and applications start being developed to access byte-addressable persistent memory and FPGA co-processors. I don't expect all 4 to have equal support for this on day one, and the persistent memory stuff is going to be game changing (and also a return to the old days, as technology proves itself to be cyclical once again!).
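
      To make the changed block tracking point above concrete, here's a minimal conceptual sketch - a hypothetical model, not any hypervisor's or backup vendor's actual API - of why a CBT-aware incremental touches only dirty blocks while a full pass reads everything:

      ```python
      # Conceptual sketch of changed block tracking (CBT) for backups - a
      # hypothetical model, not any hypervisor's real API. The platform records
      # which block indices were written since the last backup, so the backup
      # tool copies only those instead of scanning the whole virtual disk.

      class TrackedDisk:
          def __init__(self, total_blocks):
              self.total_blocks = total_blocks
              self.changed = set()           # dirty block indices since last backup

          def write(self, block_index, data):
              # ... the actual data write would happen here ...
              self.changed.add(block_index)  # hypervisor-side tracking

          def full_backup(self):
              # Without CBT, every block must be read, even if unchanged.
              return list(range(self.total_blocks))

          def incremental_backup(self):
              # With CBT, only dirty blocks are read and shipped.
              dirty = sorted(self.changed)
              self.changed.clear()
              return dirty

      disk = TrackedDisk(total_blocks=500_000)    # ~500 GB at 1 MiB block granularity
      for i in (10, 42, 99):
          disk.write(i, b"...")
      print(len(disk.full_backup()), "blocks read for a full pass")
      print(len(disk.incremental_backup()), "blocks read with CBT")
      ```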

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      @John-Nicholson said in The Inverted Pyramid of Doom Challenge:

      I agree that you should strongly avoid lock-in where possible, but when 99% of the community is running it on X and you're the 1%, that makes support calls a lot more difficult, and not just because they are not familiar with your toolset.

      If your application isn't working, why are you looking at my hardware? I've never once, ever, seen a company that needed to call their EMR vendor to get their storage working, or their hypervisor. What scenario do you picture this happening in? What's the use case where your application vendor is being asked to support your infrastructure stack? And, where does it end?

      Because performance and availability problems come from the bottom up, not the top down. SQL has storage as a dependency, storage doesn't have SQL as a dependency, and everything rolls downhill...

      If I'm running EPIC and want to understand why a KPI was missed, and whether there was a correlation to something in the infrastructure stack, I can overlay the syslog of the entire stack, as well as the SNMP/SMI-S, API and hypervisor performance stats (application stats) and the EUC environment stats (Hyperspace on either Citrix or View), and see EXACTLY what caused that query to run slow. There are tools for this. These tools, though, are not simple to build, and if they don't have full API access to the storage or hypervisor, with full documentation and this stuff built out (including log clarification), it's an expensive proposition to migrate to a new stack, and something they want to be restrictive on.
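
      As a toy illustration of that overlay idea (my own sketch, not EPIC's or any monitoring vendor's product), correlating a slow query against timestamped events pulled from syslog, SNMP, and hypervisor stats can be as simple as a windowed join on time; the sample events are made up:

      ```python
      # Toy full-stack correlation: given the timestamp of a slow query, list every
      # infrastructure event (syslog, SNMP, hypervisor stats) that landed within a
      # window around it. The sample events are made up for illustration.
      from datetime import datetime, timedelta

      events = [
          ("2016-03-01T10:14:55", "storage",    "controller failover started"),
          ("2016-03-01T10:15:02", "hypervisor", "datastore latency spiked to 45 ms"),
          ("2016-03-01T10:20:00", "network",    "uplink flap on vmnic2"),
      ]

      def correlate(slow_query_ts, events, window_s=120):
          t0 = datetime.fromisoformat(slow_query_ts)
          lo, hi = t0 - timedelta(seconds=window_s), t0 + timedelta(seconds=window_s)
          return [e for e in events if lo <= datetime.fromisoformat(e[0]) <= hi]

      for ts, layer, msg in correlate("2016-03-01T10:15:30", events):
          print(f"{ts} [{layer}] {msg}")
      ```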

      SAP HANA is an incredible pain in the ass to tune and set up, and depending on the underlying disk engine it may have different block sizing or other best practices. This is one of the times where things like NUMA affinity can actually make or break an experience. Getting their support to understand a platform well enough to help customers tune this, their PSO to assist in deployments, and their support to identify known problems with the partner ecosystem means they are incredibly restrictive (hence they are still defining HCI requirements).

      It costs the software vendors money in support to deal with unknown platforms (even if those platforms don't suck). The difference between two vendors arguing and pointing fingers, and two vendors collaborating at the engineering level to understand and solve problems together, is massive in customer experience (and the cost required).

      The amount of effort that goes into things like this and reference architectures is honestly staggering and humbling (these people are a lot smarter than me). I used to assume it was just vendors being cranky, but seeing the effort required of a multi-billion-dollar vendor changed my view.

      At the end of the day EPIC, SAP and other ERP platforms will take the blame (not Scale, or NetApp, or EMC) if the platform delivers an awful experience (doctors or CFOs will just remember that stuff was slow and broke a lot), so being fussy about what you choose to support is STRONGLY in their business interest, balanced against offering enough choice that they do not inflate their costs too high. It's a balancing act.

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      @John-Nicholson said in The Inverted Pyramid of Doom Challenge:

      The other issue with Scale (and this is no offense to them) or anyone in the roll-your-own-hypervisor game right now is that there are a number of vertical applications that the business cannot avoid that REQUIRE specific hypervisors or storage to be certified. To get this certification you have to spend months working with their support staff to validate capabilities (performance, availability, predictability being one that can strike a lot of hybrid or HCI players out) as well as commit to cross-engineering support with them.

      This is always tough and is certainly a challenge to any product. It would be interesting to see a survey of just how often this becomes an issue and how it is addressed in different scenarios. From my perspective, and few companies can do this, it's a good way to vet potential products. Any software vendor that needs to know what is "under the hood" isn't ready for production at all. They might need to specify IOPS or resiliency or whatever, sure. But caring about the RAID level used, whether it is RAID or RAIN, or what hypervisor is underneath the OS that they are given - those are immediate show stoppers; any vendor with those kinds of artificial excuses not to provide support is shown the door. Management should never even know that the company exists, as they are not a viable option and not prepared to support their products. Whether it is because they are incompetent, looking for kickbacks, or just making any excuse to not provide support does not matter; it's something that a business should not be relying on for production.

      This right here makes no sense to me. You are OK with recommending infrastructure that can ONLY be procured from a single vendor for all expansions and gives you zero cost control over support renewal spikes, hardware purchasing, and software purchasing (a proprietary hypervisor only sold with hardware), but you can't buy a piece of software that can run on THOUSANDS of different hardware configurations and more than one bare-metal platform?

      In medicine, for EMRs, Caché effectively controls the database market for anyone with multiple branches with 300+ beds and one of every service. (Yes, there is Allscripts, which runs on MS SQL, and no, it doesn't scale and is only used for clinics and the smallest hospitals, as Philip will tell you.) If you tell the chief of medicine you will only offer him tools that will not scale to his needs, you will (and should) get fired. There are people who try to break out from the stronghold they have (MD Anderson, who has a system that is nothing more than a collection of PDFs), but it's awful, and doctors actively choose not to work in these hospitals because the IT systems are so painful to use (you can't actually look up what medicines someone is on; you have to click through random PDFs attached to them or find a nurse). The retraining costs that would be required to fulfill an IT mandate of "can run on any OS or hypervisor" to migrate this platform (or many major EMRs) are staggering. IT doesn't have this much power even in the enterprise. Sometimes the business driver for a platform outweighs the loss of stack control, or conformity of infrastructure (I'm sure the HFT IT guys had this drilled into their head a long time ago). This is partly the reason many people still quietly have an HP-UX or AS/400 in the corner still churning their ERP.

      I agree that you should strongly avoid lock-in where possible, but when 99% of the community is running it on X and you're the 1%, that makes support calls a lot more difficult, and not just because they are not familiar with your toolset. A lot of these products have cross-engineering escalation directly into the platforms they have certified. We have lock-in on databases for most application stacks (and live with it no matter how many damn yachts we buy Larry). The key things are:

      1. Know the costs going in. Don't act surprised when you buy software for $1 million and discover you need $500K worth of complementary products and hardware to deploy it.

      2. Know what parts you can swap if they fail to deliver (hardware, support, OS, database, hypervisor), and be comfortable with any that come with reduced choice, or no choice. Different people may need different levels of support for each.

      3. Also know what your options are for hosted or OPEX, non-hardware offerings of the platform (i.e., can I replicate to a multi-tenant DR platform to make DR a lower OPEX cost?).

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller 1.8TB drives, if they are 10K, are 2.5"; 6TB drives are 3.5". If they can stuff a 3.5" drive in a 2.5" bay I'd be impressed.

      The reality of 10K drives is that the roadmap is dead. I don't expect to see anything over 1.8TB, and in reality, because those are 512e/4Kn block drives, anyone with legacy OSes ends up stuck with 1.2TB drives more often than not if they don't want weird performance issues.

      (Fun fact: enterprise flash drives are ALL 512e on 4Kn back ends, but it doesn't matter because they have their own write buffers that absorb and re-order the writes to prevent any amplification.)
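
      A rough sketch of why 512e drives (512-byte logical sectors emulated on 4K physical sectors) bite legacy OSes: any write that isn't 4K-aligned forces a read-modify-write of the whole physical sector. The numbers below are just the geometry math, not a benchmark:

      ```python
      # Read-modify-write penalty on a 512e drive (512-byte logical sectors
      # emulated on 4096-byte physical sectors). Unaligned or sub-4K writes
      # force the drive to read the 4K sector, patch it, and write it back.
      LOGICAL, PHYSICAL = 512, 4096

      def physical_io(offset_bytes, length_bytes):
          """Return (bytes_read, bytes_written) the drive actually performs."""
          first = offset_bytes // PHYSICAL
          last = (offset_bytes + length_bytes - 1) // PHYSICAL
          bytes_touched = (last - first + 1) * PHYSICAL
          aligned = offset_bytes % PHYSICAL == 0 and length_bytes % PHYSICAL == 0
          # Aligned full-sector writes need no read; everything else is RMW.
          return (0 if aligned else bytes_touched, bytes_touched)

      print(physical_io(0, 4096))    # (0, 4096)    aligned: no penalty
      print(physical_io(512, 512))   # (4096, 4096) 512-byte write -> 8x amplification
      print(physical_io(3584, 1024)) # (8192, 8192) straddles two physical sectors
      ```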

      Storage nodes are not commonly used, but largely because the vendors effectively charge you the same amount for them (at least the pricing on Nutanix storage-only nodes wasn't that much of a discount). Outside of licensing situations no one would ever buy them up front (they would have right-sized the cluster design from the start). In reality they are something you kinda get forced into buying (you can't add anyone else's storage to the cluster).

      I get the OPEX benefits of CI and HCI appliances, but the fact that you've completely frozen any flexibility on servers and storage comes at a cost, and that's a lack of control by the institution over expansion costs.

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      @John-Nicholson said in The Inverted Pyramid of Doom Challenge:

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      We have this now and we use the same capacity with replicated local disks as you would with a SAN with RAID 10. Are you using RAID 6 or something else to get more capacity from the SAN than you can with RLS? We aren't wasting any capacity having the extra redundancy and reliability of the local disks with RAIN.

      With HDT pools you could have Tier 0 be RAID 5 SSD, RAID 10 for 10Ks in the middle tier, and RAID 6 for NL-SAS, with sub-LUN block tiering across all of that. With replicated local you generally can't do this (or you can't dynamically expand individual tiers). Now, as 10K drives make less sense (hell, magnetic drives make less sense), the cost benefits of a fancy tiering system might make less sense. Then again, I see HDS doing tiering now between their custom FMDs, regular SSDs, and NL-SAS in G series deployments, so there's still value in having a big-ass array that can do HSM.

      We have only two tiers, but they can be dynamically expanded. Any given node can be any mix of all slow tier, all fast tier or a blend. There is a standard just because it's pre-balanced for typical use, but nothing ties it to that.

      The other advantage of having tiers with different RAID levels, etc., is that he can use RAID 6 NL-SAS for ice-cold data, and RAID 5/10 for higher tiers for better performance. Only a few HCI solutions today do true always-on erasure codes in a way that isn't murderous to performance during rebuilds (GridStore, VSAN, ?).

      1. Cost. Mirroring has a 2x/3x overhead for FTT=1/FTT=2, while erasure codes can get that much, much lower (i.e., half as many drives for FTT=2, or potentially less depending on stripe width); see the sketch after this list for the capacity math. As we move to all-flash in HCI (it's coming), the IO/latency overhead for erasure codes and dedupe/compression becomes negligible. This is a competitive gap between several different solutions right now in that space.

      2. When you're adding nodes purely for capacity, this carries other non-visible costs: power draw (a shelf on that HUS draws a lot less than a server), and scale-out systems consume more ports (while this benefits throughput and network ports are a LOT cheaper, it is more structured cabling, more ports to monitor for switch monitoring licensing, etc.).
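
      Here's the capacity math behind point 1, as a minimal sketch; the 3+1 and 4+2 stripe widths are illustrative examples, not any particular product's defaults:

      ```python
      # Raw capacity required for a given usable capacity: mirroring vs. erasure
      # coding. FTT = failures to tolerate. The 3+1 and 4+2 stripe widths are
      # illustrative examples, not any particular product's defaults.
      OVERHEAD = {
          "mirror, FTT=1 (2 copies)": 2.0,
          "mirror, FTT=2 (3 copies)": 3.0,
          "EC 3+1,  FTT=1":           4 / 3,   # ~1.33x
          "EC 4+2,  FTT=2":           6 / 4,   # 1.5x
      }

      usable_tb = 10
      for scheme, factor in OVERHEAD.items():
          print(f"{scheme:26s} -> {usable_tb * factor:5.1f} TB raw for {usable_tb} TB usable")
      ```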

      At small scale none of this matters that much (OPEX benefits in labor and support footprint trump these other costs). At mid/large scale this stuff adds up...

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      @hutchingsp sorry that it was so long for me to address the post. When you first posted it seemed reasonable and we did not have any environment of our own that exactly addressed the scale and needs that you have. But for the past seven months, we've been running on a Scale cluster, first a 2100 and now a 2100/2150 hybrid and that addresses every reason that I feel that you were avoiding RLS and addresses them really well.

      The other issue with Scale (and this is no offense to them) or anyone in the roll-your-own-hypervisor game right now is that there are a number of vertical applications that the business cannot avoid that REQUIRE specific hypervisors or storage to be certified. To get this certification you have to spend months working with their support staff to validate capabilities (performance, availability, predictability being one that can strike a lot of hybrid or HCI players out) as well as commit to cross-engineering support with them. EPIC EMR (and the underlying Caché database), applications for industrial control systems from Honeywell, and all kinds of others.

      This is something that takes time, it takes customers asking for it, it takes money, and it takes market clout. I remember when even basic Sage-based applications refused to support virtualization at all (then Hyper-V). It takes time for market adoption, and even in HCI there are still some barriers (SAP is still dragging their feet on HANA certifications for HCI). At the end of the day customer choice is great, and if you can be a trailblazer and risk support to help push vendors to be more open-minded, that's great, but not everyone can do this.

      There are other advantages to his design over an HCI design. If he has incredibly data-heavy growth in his environment he doesn't have to add hosts. As licensing for Microsoft application stacks (Datacenter, SQL, etc.) becomes tied to CPU cores in the near future, adding hosts to add storage can become rather expensive if you don't account for it properly. Now, you could mount external storage to the cluster to put the growing VMs on, but I'm not sure if Scale supports that? He also, within the HUS, can either grow existing pools, add new pools (maybe a dedicated cold Tier 3), or pin LUNs to a given tier (maybe put a database always in flash). There's a lot of control here over storage costs and performance (if you have the patience to manage it). Sadly, no VVols support is coming to the old HUSes.
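
      To make the core-licensing point concrete, here's a rough sketch. The 8-cores-per-socket and 16-cores-per-server minimums reflect the announced Windows Server 2016 model as I understand it; the per-core prices are placeholders, not price-list figures:

      ```python
      # Rough sketch of the licensing tail when an HCI node is added "just for
      # storage" under per-core licensing. The 8-cores-per-socket and 16-cores-
      # per-server minimums reflect the announced Windows Server 2016 model as I
      # understand it; the per-core prices are placeholders, not list prices.
      CORE_MIN_PER_SOCKET = 8
      CORE_MIN_PER_SERVER = 16

      def licensed_cores(sockets, cores_per_socket):
          per_sockets = max(cores_per_socket, CORE_MIN_PER_SOCKET) * sockets
          return max(per_sockets, CORE_MIN_PER_SERVER)

      def added_license_cost(sockets, cores_per_socket, price_per_core):
          return licensed_cores(sockets, cores_per_socket) * price_per_core

      # Hypothetical extra node: 2 x 14-core, dragging in Datacenter + SQL cores.
      node = (2, 14)
      extra = (added_license_cost(*node, price_per_core=250)      # "Datacenter" placeholder
               + added_license_cost(*node, price_per_core=1_800)) # "SQL" placeholder
      print(f"{licensed_cores(*node)} cores to license, roughly ${extra:,.0f} in licenses")
      ```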

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      @hutchingsp said in The Inverted Pyramid of Doom Challenge:

      In the end I ruled it out because it introduces more complexity than I wanted...

      I'm confused here. How does RLS introduce more complexity? Our RLS system is so simple I couldn't possibly install any enterprise SAN on the market as easily. It takes literally no knowledge or setup at all. We are using Scale's RLS hyperconverged system and I literally cannot fathom it being easier today. Just making a LUN would be more complex than using RLS alone. Just needing to know that you need to make a LUN is more complex. With the RLS that we have, you just "use it". If you want more power, you can choose to manage your performance tiering manually, but there is no need to if you don't want to, as it does it automatically for you out of the box.

      To be fair, in comparison he could have deployed Hitachi Unified Compute (their CI stack) and gotten the same experience (basically someone builds out the solution for you and gives you pretty automated tools to abstract the complexity away, as well as API-driven endpoints to provision against, etc.). This isn't an argument for an architecture (and I like HCI, I REALLY do); this is an argument about build vs. buy that you're making. HCI with VSAs can be VERY damn complicated. HCI can be very simple (especially when it's built by someone else). CI can do this too.

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller His previous RAIN system (HP StoreVirtual) requires local RAID. So you're stuck with nested RAID 5s (awful IO performance) or nested RAID 10 (awful capacity capabilities, but great resiliency). Considering the HDS has five nines already, though, it's kinda moot.

      posted in IT Discussion
      StorageNinja
    • RE: The Inverted Pyramid of Doom Challenge

      @scottalanmiller said in The Inverted Pyramid of Doom Challenge:

      We have this now and we use the same capacity with replicated local disks as you would with a SAN with RAID 10. Are you using RAID 6 or something else to get more capacity from the SAN than you can with RLS? We aren't wasting any capacity having the extra redundancy and reliability of the local disks with RAIN.

      With HDT pools you could have Tier 0 be RAID 5 SSD, RAID 10 for 10Ks in the middle tier, and RAID 6 for NL-SAS, with sub-LUN block tiering across all of that. With replicated local you generally can't do this (or you can't dynamically expand individual tiers). Now, as 10K drives make less sense (hell, magnetic drives make less sense), the cost benefits of a fancy tiering system might make less sense. Then again, I see HDS doing tiering now between their custom FMDs, regular SSDs, and NL-SAS in G series deployments, so there's still value in having a big-ass array that can do HSM.
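
      For what it's worth, here's a toy sketch of the sub-LUN tiering idea - the generic hot/cold extent placement behind HSM, not Hitachi's actual HDT algorithm:

      ```python
      # Toy sub-LUN tiering: a LUN is split into extents, access counts are
      # tracked, and the hottest extents land on the fastest tier that has room.
      # This is the generic HSM idea, not Hitachi's actual HDT algorithm.
      TIERS = [("SSD RAID 5", 4), ("10K RAID 10", 6), ("NL-SAS RAID 6", 1000)]  # capacity in extents

      def place_extents(access_counts):
          """Map extent -> tier name, hottest extents to the fastest tier."""
          ranked = iter(sorted(access_counts, key=access_counts.get, reverse=True))
          placement = {}
          for tier_name, capacity in TIERS:
              for _, extent in zip(range(capacity), ranked):
                  placement[extent] = tier_name
          return placement

      counts = {f"extent{i:02d}": hits
                for i, hits in enumerate([900, 850, 700, 600, 500, 40, 30, 5, 2, 1, 0, 0])}
      for extent, tier in sorted(place_extents(counts).items()):
          print(f"{extent} ({counts[extent]:4d} hits) -> {tier}")
      ```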

      posted in IT Discussion
      StorageNinja
    • RE: It's 10K Day

      said in It's 10K Day:

      And if my calculations are correct, this is thread 10,000 itself!!

      For some reason I was thinking 10-K filing when you said this. (Looks for coffee.)

      posted in Announcements
      StorageNinja
    • RE: The Great NTG Lab Liquidation of 2015

      @scottalanmiller You're making me feel wasteful for scrapping a dozen Gen7s with 6-core procs.

      posted in IT Discussion
      StorageNinja
    • RE: ZFS Based Storage for Medium VMWare Workload

      @donaldlandru said in ZFS Based Storage for Medium VMWare Workload:

      If I can sell them on Office 365 this time around (third time's a charm), but that is for a different thread

      What version of Exchange are you on?

      2010 is in extended support (that's why OWA is broken in Chrome; Microsoft doesn't care). It's time to START budgeting to move to the current version. Show the hardware costs to deploy a 2- or 3-site DAG (this is apples to apples with 365), show the current-version CALs, show the cost for backup software that can handle it if you do not have that, and show the cost of GSLBs to front the DAG cluster.

      Don't show the cost to "keep running your 2007 server into the ground". Present real options, not hobo IT stuff that's cheap.
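
      A scratch-pad version of that apples-to-apples comparison could look like the sketch below; every figure is a placeholder to be replaced with real quotes, not an actual price:

      ```python
      # Scratch-pad comparison of a current-version on-prem Exchange DAG vs.
      # Office 365. Every figure below is a placeholder for a real quote.
      users, years = 300, 3

      on_prem = {
          "DAG servers (2-3 sites)":               45_000,
          "Exchange licenses + current CALs":      30_000,
          "Backup software that understands DAGs": 12_000,
          "GSLB pair in front of the DAG":         15_000,
          "Admin effort over the period":          20_000 * years,
      }
      o365_per_user_month = 8.00   # placeholder plan price

      on_prem_total = sum(on_prem.values())
      o365_total = users * o365_per_user_month * 12 * years
      print(f"on-prem over {years} yrs:    ${on_prem_total:>10,.0f}")
      print(f"Office 365 over {years} yrs: ${o365_total:>10,.0f}")
      ```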

      posted in SAM-SD
      StorageNinja
    • RE: ZFS Based Storage for Medium VMWare Workload

      @donaldlandru

      Edit: I don't always use WD NAS (RED) drives, but when I do I use the WDIDLE tool to fix that problem

      The problem is that it's a 5400 RPM drive that is way too damn slow to use for virtual machines that have any kind of transactional workload. In order to get any functioning amount of IOPS out of the drives you have to wide-stripe across a bunch of them, and then operate at deep queue depth (driving latency through the roof).
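
      To put rough numbers on that (assuming ~75 IOPS per 5400 RPM spindle, which is a ballpark, not a spec), Little's law ties queue depth and latency together: average latency is roughly outstanding IOs divided by delivered IOPS.

      ```python
      # Little's law for a striped set of slow spindles:
      #   average latency ~= outstanding IOs / delivered IOPS.
      # 75 IOPS per 5400 RPM drive is a ballpark assumption, not a datasheet number.
      IOPS_PER_5400RPM_DRIVE = 75

      def avg_latency_ms(drives, queue_depth):
          total_iops = drives * IOPS_PER_5400RPM_DRIVE
          return queue_depth / total_iops * 1000

      for drives, qd in [(4, 32), (12, 32), (12, 128)]:
          print(f"{drives:2d} drives, QD {qd:3d}: ~{drives * IOPS_PER_5400RPM_DRIVE:4d} IOPS, "
                f"~{avg_latency_ms(drives, qd):5.1f} ms average latency")
      ```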

      posted in SAM-SD
      StorageNinja
    • RE: ZFS Based Storage for Medium VMWare Workload

      @scottalanmiller said in ZFS Based Storage for Medium VMWare Workload:

      @John-Nicholson said in ZFS Based Storage for Medium VMWare Workload:

      In an era of 18-core processors, going single-proc isn't actually that limiting. You can still run a TB of RAM on some platforms, as crazy as that sounds...

      I agree, as do a lot of box vendors. Single proc servers should be quite standard. A huge Xeon or Opteron will run insane loads. And that's before we look at Sparc or Power that have come single socket for forever for exactly that reason. We just need licensing to catch up with the hardware designs.

      Are you seeing Opterons? The last survey I saw had them at sub-5% in the enterprise.

      posted in SAM-SD
      StorageNinja
    • RE: ZFS Based Storage for Medium VMWare Workload

      @scottalanmiller said in ZFS Based Storage for Medium VMWare Workload:

      @John-Nicholson said in ZFS Based Storage for Medium VMWare Workload:

      In an era of 18-core processors, going single-proc isn't actually that limiting. You can still run a TB of RAM on some platforms, as crazy as that sounds...

      I agree, as do a lot of box vendors. Single proc servers should be quite standard. A huge Xeon or Opteron will run insane loads. And that's before we look at Sparc or Power that have come single socket for forever for exactly that reason. We just need licensing to catch up with the hardware designs.

      It has, just different vendors have different attitudes. The thing to consider is that Essentials Plus is really a "3 servers with no more than 2 sockets" license. That's what the EULA states (the key technically allows 6 sockets, and as of 4.x you could actually put it on a single 4-way box despite it being a EULA violation).

      Microsoft's progression was to go from charging "not a lot" per socket for Datacenter (2008 R2), to charging "a lot and forcing 2 sockets" (2012 R2), to charging a yuuuuge amount (2016, per core). They want to try to capture the same revenue per virtual machine on the core hypervisor, and for small shops ideally squeeze them into Azure. This is honestly the biggest "risk" to guys like Scale targeting the tiny side of SMB, as email and LOB apps are all going hosted. I've walked into a 50-man office that didn't actually have AD or anything bigger than a NAS because Google Apps handled email, their ERP was hosted by IBM, and everything else they did was SaaS. For people who need "IaaS" I'm seeing the telcos sell/lease this and tie it to service contracts at or below what you could directly lease, in a move to make their contracts stickier.

      VMware is trying to capture the same or more revenue by up-selling other stuff (NSX, vRA, View, vROPS, VSAN). Effectively, if you track Moore's law, Essentials Plus's socket-based licensing has become "cheaper" as time has gone on. If you can run 100 VMs on the license where previously you would have run 10 back in the day, it's 10x cheaper now. VSAN's got the same thing: when people first deployed it 2 years ago, 400GB was a "big" SSD, and now people are deploying 4TB without having to pay anything extra.
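
      That "socket licensing gets cheaper as density rises" point is just this arithmetic (the license price below is a placeholder, not VMware's list price):

      ```python
      # Fixed socket-based license cost spread over a growing VM count.
      # The license price is a placeholder, not a quote from VMware's price list.
      license_cost = 5_000   # hypothetical Essentials Plus kit (3 hosts / 6 sockets)

      for vm_count in (10, 50, 100):
          print(f"{vm_count:3d} VMs -> ${license_cost / vm_count:7.2f} of license per VM")
      ```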

      posted in SAM-SD
      StorageNinja
    • RE: ZFS Based Storage for Medium VMWare Workload

      @scottalanmiller

      The way I look at it is you can still "expand in place" by buying VSAN licenses later.
      I had one customer start with 3 single-socket CPUs on Essentials Plus (needed to run 7 VMs) and only a single disk group in each server, with 1 cache device and 2 capacity devices. They were growing slowly, and then quickly (startup). By the end of the year they had expanded to 4 nodes, doubled the CPUs, added more disk groups, increased cache sizes and were running 100+ VMs. The nice thing was they wanted for nothing along the way, and as they were using SuperMicro they could buy enterprise flash at commodity prices, so flash went from being $2.50 to ~$1 a GB over the course of working with them. Last I checked it's at ~52 cents per GB for Samsung capacity-grade flash, so over a 3-year period the system became cheaper to expand (and they could expand it with single drives if they wanted, rather than needing a full RAID group).

      In an era of 18-core processors, going single-proc isn't actually that limiting. You can still run a TB of RAM on some platforms, as crazy as that sounds...
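
      As a back-of-the-envelope on the expansion path described above (the one-cache-plus-up-to-seven-capacity-devices-per-disk-group limit is my recollection of the VSAN 6.x maximums, and the configs and $/GB points are illustrative, loosely based on the numbers in this post):

      ```python
      # Back-of-the-envelope raw capacity and flash spend as a VSAN cluster grows.
      # Assumes 1 cache device plus up to 7 capacity devices per disk group (my
      # recollection of the 6.x limits); configs and $/GB points are illustrative,
      # loosely based on the numbers in this post.
      def raw_tb(nodes, disk_groups, capacity_drives, drive_tb):
          return nodes * disk_groups * capacity_drives * drive_tb

      snapshots = [
          ("day 1:  3 nodes, 1 DG, 2 x 0.4 TB", raw_tb(3, 1, 2, 0.4), 2.50),
          ("year 1: 4 nodes, 2 DG, 4 x 1.0 TB", raw_tb(4, 2, 4, 1.0), 1.00),
          ("today:  4 nodes, 2 DG, 4 x 4.0 TB", raw_tb(4, 2, 4, 4.0), 0.52),
      ]
      for label, tb, per_gb in snapshots:
          print(f"{label}: {tb:5.1f} TB raw, capacity flash at ${per_gb:.2f}/GB -> ${tb * 1024 * per_gb:,.0f}")
      ```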

      posted in SAM-SD
      StorageNinja
    • RE: ZFS Based Storage for Medium VMWare Workload

      @donaldlandru said in ZFS Based Storage for Medium VMWare Workload:

      @scottalanmiller said:

      @donaldlandru said:

      Back to the original requirements list. HA and FT are not listed as needed for the development environment. This conversation went sideways when we started digging into the operations side (where there should be HA) and I have a weak point, the storage.

      Okay, so we are looking exclusively at the non-production side?

      But production completely lacks HA today - it should be a different thread, but your "actions" say you don't need HA in production even if you feel that you do. Either what you have today isn't good enough and has to be replaced there, or HA isn't needed, since you've happily been without it for so long. This can't be overlooked - you are stuck with either falling short of a need or not being clear on the needs for production.

      Ahh -- there is the detail I missed. Just re-read my post and that doesn't make this clear. Yes, the discussion was supposed to pertain to the non-production side. My apologies.

      I agree we do lack true HA on the production side, as there is a single weak link (one storage array); the solution here depends on our move to Office 365, as that would take most of the operations load off of the network and change the requirements completely.

      We have quasi-HA with the current solution, but now, based on new enlightenment, I would agree it is not fully HA.

      To be clear, Exchange (2010 on) shouldn't be putting that much load. What could/can happen is you're arbitraging disk IO for CPU (CPU threads waiting on disk IO) and memory (cache). If you're running cached mode for your users, a single reasonably sized Exchange server can serve thousands of users....

      posted in SAM-SD
      StorageNinja
    • RE: ZFS Based Storage for Medium VMWare Workload

      @scottalanmiller said in ZFS Based Storage for Medium VMWare Workload:

      @John-Nicholson said in ZFS Based Storage for Medium VMWare Workload:

      @scottalanmiller said in ZFS Based Storage for Medium VMWare Workload:

      @John-Nicholson said in ZFS Based Storage for Medium VMWare Workload:

      @donaldlandru Cuts licensing for VSAN in half (single CPU)

      Can you fill in the background on this comment for the rest of us?

      He said he only had single sockets deployed on one of his clusters. VSAN is licensed by socket (well, among other options but this would be the most common in his case)

      Oh okay, cool. I figured but wanted to be sure. Doesn't that cause VSAN some license disparity with Essentials Plus users, but line up well with Standard users?

      Actually it works fine with Essentials Plus (I've deployed it). Note VSAN includes a vDS license, so you'll get that and NIOC thrown in with it.

      posted in SAM-SD
      StorageNinja
    • RE: ZFS Based Storage for Medium VMWare Workload

      @bhershen said in ZFS Based Storage for Medium VMWare Workload:

      Hi Scott,
      Donald mentioned SM and referenced generic ZFS (could be Oracle, OpenIndiana, FreeBSD, etc.) solutions which have uncoordinated HW, SW and support. Nexenta is packaged to compete with EMC, NetApp, etc. as primary storage in the Commercial market.
      If you would like to get an overview, please feel free to ping me.
      Best.

      Weird, I've seen it packaged as software-only (as a virtual NAS piece to run on top of HCI).

      posted in SAM-SD
      StorageNinja
    • RE: ZFS Based Storage for Medium VMWare Workload

      @scottalanmiller said in ZFS Based Storage for Medium VMWare Workload:

      @John-Nicholson said in ZFS Based Storage for Medium VMWare Workload:

      @donaldlandru Cuts licensing for VSAN in half (single CPU)

      Can you fill in the background on this comment for the rest of us?

      He said he only had single sockets deployed on one of his clusters. VSAN is licensed by socket (well, among other options but this would be the most common in his case)

      posted in SAM-SD
      StorageNinja