
    Any idea why Debian complains about a start job failure on boot after a new install?

    IT Discussion
    linux debian
    • biggen
      last edited by scottalanmiller

      So installing Debian 10 bare metal today I ran into a problem where upon reboot after a successful install, it complains immediately about a start job it’s waiting for. Looking up the UUID, it’s waiting for my swap partition which I have.

      I thought I screwed up the partitioning step, so I went through it again and tried a re-install: create a primary partition using 95% or so of the drive and assign it as “/”. Then create an extended partition and assign it as “swap”.

      Drive is set to DOS label (256GB SSD). Installer runs fine and upon reboot I’m waiting for a start job again. WTH? Why can’t Debian find the swap partition? What am I missing here on a brand new install?
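      A start job that hangs on a swap UUID usually means /etc/fstab references a UUID that no longer exists on disk. A quick sanity check (a generic sketch; the device name is just an example) is to compare what fstab expects against what the disk actually carries:

```shell
# UUID(s) the installed system expects for swap, per /etc/fstab:
grep -v '^#' /etc/fstab | awk '$3 == "swap" { print $1 }'

# Swap signature(s) actually present on disk, per blkid (run as root):
blkid -t TYPE=swap -o value -s UUID

# If the two differ, re-make the swap area and put the new UUID into
# /etc/fstab, e.g. (assuming the swap partition is /dev/sda5):
#   mkswap /dev/sda5 && swapon /dev/sda5
```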

      • 1337 @biggen
        last edited by 1337

        @biggen
        What happens if you just install it with the default partitioning (everything in one partition)?

        • scottalanmiller @biggen
          last edited by

          @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

          it’s waiting for my swap partition which I have.

          Should not cause any issues, but swap partitions are generally not recommended today. Swap files are the "new" standard for handling that.
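          For reference, a typical swap-file setup on Debian looks like this (size and path are just examples):

```shell
# Create and activate a 2 GB swap file (run as root):
fallocate -l 2G /swapfile       # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile             # swap files must not be world-readable
mkswap /swapfile
swapon /swapfile

# Make it permanent -- a swap file is referenced by path, not UUID,
# so a stale-UUID start job cannot occur:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```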

          • biggen
            last edited by

            I’ve not tried that. I tried to do my normal LVM install and it complained that way as well about failed start job. It’s almost as if there is some old partition info hanging around causing problems.

            I’ve used GParted and dd to wipe the drive.
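            Worth noting: dd over the start of the disk misses metadata stored elsewhere — md superblocks can sit near the end of a partition, and GPT keeps a backup header at the end of the disk. A sketch of a more thorough wipe, assuming the disk is /dev/sda:

```shell
# Remove all known filesystem/RAID/partition-table signatures
# (run as root from a live environment, not the disk being wiped):
wipefs --all /dev/sda

# Clear stale md RAID metadata explicitly, on the device and any partitions:
mdadm --zero-superblock /dev/sda
mdadm --zero-superblock /dev/sda1
```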

            • scottalanmiller @biggen
              last edited by

              @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

              Drive is set to DOS label (256GB SSD). Installer runs fine and upon reboot I’m waiting for a start job again.

              DOS label seems wrong here. Is this a proxy for MPT?

              • black3dynamite
                last edited by

                Never had any issue using either the default partitioning scheme or a manual scheme like /boot, /, and a swap file.

                • biggen @black3dynamite
                  last edited by

                  @black3dynamite Me neither. I normally never use /boot. Just / and swap. Been doing it for years.

                  • biggen @scottalanmiller
                    last edited by

                    @scottalanmiller Not sure what MPT is. I’m reusing this disk from an earlier Debian 9 install a year or so ago. I set it up as an MBR/DOS label back then, so I just didn’t change the disk label for this install.

                    • biggen
                      last edited by biggen

                      I initially tried to use this disk and another identical disk for my standard md -> LVM -> XFS install. But the installer would always fail to install grub near the end. Would say something like “failed to install grub to /dev/mapper”. So when that didn’t work I decided to just cut my losses and install to one drive. Now this issue cropped up.

                      I’ve been working on this for days. I’m thinking there is something up with these two drives.

                      • scottalanmiller @biggen
                        last edited by

                        @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

                         @scottalanmiller Not sure what MPT is. I’m reusing this disk from an earlier Debian 9 install a year or so ago. I set it up as an MBR/DOS label back then, so I just didn’t change the disk label for this install.

                        Oh okay, should be fine.

                        • scottalanmiller @biggen
                          last edited by

                          @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

                          I initially tried to use this disk and another identical disk for my standard md -> LVM -> XFS install.

                          https://askubuntu.com/questions/945337/ubuntu-17-04-will-not-boot-on-uefi-system-with-xfs-system-partition#945397

                          Older info, but might be related. XFS, LVM, Grub, and UEFI can all be culprits here.

                          • 1337
                            last edited by 1337

                             I installed Debian 10 on a VM with LVM and XFS.

                             Debian puts in a swap partition by default and uses ext4 by default, but after changing that I had this:
                             [screenshot: deb10_lvm_xfs.png]

                            This is the block list:
                             [screenshot: deb10_lsblk.png]

                            Only a 10GB disk for the VM but it works fine.

                            Normally I don't mess with the swap partition though.

                            • biggen @1337
                              last edited by

                               @Pete-S Yeah I tested it with a VM yesterday on my desktop in VirtualBox after I had my problems on the physical machine. Worked fine. I even did my md -> LVM -> XFS setup using two VHDs. Installed and fired right up with a nice RAID 1 array.

                               I can’t figure it out. It’s like there is something up with both disks. I’ve blown them out with GParted and dd. I guess I can change the disk label to GPT and see if that makes a difference. I’m at a total loss...

                              • biggen @1337
                                last edited by

                                @Pete-S I’ll try later with the default install that the partitioner wants to do and see if that changes things. Man, this is a real head scratcher.

                                • 1337
                                  last edited by 1337

                                   I gave it another try, setting things up manually in the installer with RAID 1 as well. That also works.

                                   [screenshot: deb10_raid_lvm.png]

                                   If I were you, I would have a look at the UEFI settings in your BIOS.
                                   I usually just disable UEFI so I don't have to deal with any problems, but maybe you need it.

                                   Some BIOSes also have bugs in their UEFI implementation, so maybe upgrade the BIOS as well.
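                                   A quick way to confirm which mode a running system actually booted in (works on any modern distro):

```shell
# /sys/firmware/efi only exists when the kernel was started via UEFI:
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"
else
    echo "booted via legacy BIOS/CSM"
fi
```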

                                  • 1337 @biggen
                                    last edited by

                                    @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

                                     @Pete-S Yeah I tested it with a VM yesterday on my desktop in VirtualBox after I had my problems on the physical machine. Worked fine. I even did my md -> LVM -> XFS setup using two VHDs. Installed and fired right up with a nice RAID 1 array.

                                     I can’t figure it out. It’s like there is something up with both disks. I’ve blown them out with GParted and dd. I guess I can change the disk label to GPT and see if that makes a difference. I’m at a total loss...

                                    Label shouldn't make any difference.

                                     When reusing drives that have been in an md RAID, you can use mdadm --zero-superblock /dev/sdX to wipe the RAID metadata from them.

                                      • biggen @1337
                                      last edited by

                                       @Pete-S Yeah I have the BIOS set to Legacy Boot, which I assume means that UEFI is turned off.

                                      When I say “disk label” I mean partition type. So DOS = MBR. That is how the disk is partitioned now.

                                      I appreciate you testing in a VM. I’ll try it again later with the default installer partitioning. If it fails to work then I don’t know...

                                       I’ve tried to zero the md superblock after the fact, but I’m not sure it works anymore. If I boot into Debian (after waiting for the failed start job) on that disk and run that command, I get “couldn’t open for write. Not zeroing” for drive /dev/sda.

                                      I swear I’ve never had issues with Debian. Very odd indeed.

                                        • 1337 @biggen
                                        last edited by 1337

                                        @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

                                         I’ve tried to zero the md superblock after the fact, but I’m not sure it works anymore. If I boot into Debian (after waiting for the failed start job) on that disk and run that command, I get “couldn’t open for write. Not zeroing” for drive /dev/sda.

                                        If you created the raid on the device, I think you should zero the superblocks on /dev/sda

                                        But if you created the raid on the partition I think you need to zero the superblocks on /dev/sda1
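                                         To see where the stale metadata actually lives before zeroing anything, mdadm can examine both (device names are examples):

```shell
# Print any md superblock found (run as root; read-only, harmless):
mdadm --examine /dev/sda     # array created on the whole device
mdadm --examine /dev/sda1    # array created on a partition

# Whichever one prints a superblock is the one to clear:
#   mdadm --zero-superblock /dev/sdaX
# Note: "couldn't open for write" usually means the device is busy
# (mounted, or swap is active) -- run it from the installer's shell
# or a live USB instead of the installed system.
```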

                                          • 1337 @biggen
                                          last edited by

                                          @biggen said in Any idea why Debian complains about a start job failure on boot after a new install?:

                                          When I say “disk label” I mean partition type. So DOS = MBR. That is how the disk is partitioned now.

                                          What you say is confusing to me. What does fdisk -l look like on the system?
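                                           For reference, the relevant line to look for (a generic check; the device path is an example):

```shell
# Show the partition table type (run as root):
fdisk -l /dev/sda | grep 'Disklabel type'
# "Disklabel type: dos" means MBR; "Disklabel type: gpt" means GPT.
```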

                                            • biggen @1337
                                            last edited by biggen

                                             @Pete-S I’m not there now, but it shows the disklabel as “dos” if I remember correctly, so the partition table should be plain old MBR I believe.
