
    An ESXi Rebuild and Veeam Backup Job Oddities

    IT Discussion
    • NetworkNerd

      I completely rebuilt an ESXi host this weekend because the jump drive running ESXi had a bootbank issue. The host was running 5.1U1 (all VMs on local storage); it is now running 5.5U1 and has been patched for Heartbleed.

      I back up the VMs on this host (and one other) with Veeam. After the rebuild, I had to re-import all of the host's VMs into inventory, then reconnect the host in Veeam and enter its new root credentials so Veeam could back up its VMs. Once Veeam re-scanned the host, my backup jobs went haywire: the selection lists for the rebuilt host were all wrong (VMs showed up on jobs they didn't belong to), and backing up any VM on that host failed. To fix it, I had to remove every VM from the rebuilt host from its backup job and add it back fresh. After that, it was smooth sailing.

      My guess is this has to do with how Veeam identifies the VMs inventoried on an ESXi host and the fact that they had to be re-imported into inventory after the rebuild. Has anyone else experienced this?
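
      If anyone wants to compare a host's inventory before and after a rebuild like this, here's a minimal sketch (assuming the pyVmomi library; the host name and credentials below are placeholders) that prints each registered VM's name alongside the identifier the host assigns it in inventory:

      import ssl
      from pyVim.connect import SmartConnect, Disconnect
      from pyVmomi import vim

      # Placeholder host and credentials -- point these at your own box.
      ctx = ssl._create_unverified_context()  # lab host with a self-signed cert
      si = SmartConnect(host="esxi01.example.local", user="root",
                        pwd="changeme", sslContext=ctx)
      try:
          content = si.RetrieveContent()
          view = content.viewManager.CreateContainerView(
              content.rootFolder, [vim.VirtualMachine], True)
          for vm in view.view:
              # _moId is the managed object reference ID. The host assigns
              # it when a VM is registered, so unregistering and
              # re-registering a VM (as in a rebuild) hands out a new one.
              print(vm.name + "\t" + vm._moId)
          view.DestroyView()
      finally:
          Disconnect(si)

      Redirect the output to a file before and after the rebuild and you have something to diff.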

    • alexntg @NetworkNerd


        Veeam uses something other than the name to track VMs. I've seen something similar when I moved a VM out of a vSphere datacenter to a mobile host: when I moved the VM back in, Veeam didn't see it until I updated the replica and backup jobs.

        • Vladimir Eremin

          Veeam tracks all VMs by their unique MoRef ID. The rebuild appears to have changed the MoRef IDs, which is why the jobs containing the previously existing VMs failed. Once the VMs were re-added to the jobs, everything worked as expected.

          However, I should mention that an in-place upgrade typically shouldn't change MoRef IDs.

          Thanks.
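
          If you captured the name-to-MoRef mapping before the rebuild (for example, with the sketch earlier in this thread, saved to tab-separated files; the file names here are placeholders), a few lines of Python will show exactly which VMs changed identity:

          def load_mapping(path):
              # One "name<TAB>moref" pair per line.
              mapping = {}
              with open(path) as f:
                  for line in f:
                      name, moid = line.rstrip("\n").split("\t")
                      mapping[name] = moid
              return mapping

          before = load_mapping("morefs-before.txt")
          after = load_mapping("morefs-after.txt")

          for name in sorted(before.keys() & after.keys()):
              if before[name] != after[name]:
                  # Same VM name, new MoRef ID: Veeam treats this as a brand
                  # new VM, so remove it from its job and add it back.
                  print(name + ": " + before[name] + " -> " + after[name])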

          • NetworkNerd @Vladimir Eremin


            Since this was a full rebuild on a newer version of ESXi, that makes complete sense. Thanks for sharing!

            • Vladimir Eremin

              You're welcome. Should any other questions arise, feel free to contact me either here or on the Veeam Community Forums. Thanks.

              • Gabi

                Good replies on here. This should help others out.
