Opened 12 years ago

Last modified 7 years ago

#8990 new bug

intel partition addon allows creating partitions > 2TB (easy)

Reported by: luroh Owned by: bonefish
Priority: normal Milestone: R1
Component: Partitioning Systems/Intel Version: R1/Development
Keywords: Cc:
Blocked By: Blocking:
Platform: All

Description

Attachments (8)

0001-Intel-add-on-partition-size-greater-than-2-TB-gsoc2015.patch (2.2 KB ) - added by kushalsingh007 9 years ago.
A patch limiting the maximum size at which an intel partition map can be initialized to 2TB. (Also includes small changes for an operator precedence issue.)
Screenshot-after-patch.zip (552.3 KB ) - added by kushalsingh007 9 years ago.
Screenshot displaying the status after adding the patch.
0001-Intel-Partition-Addon-2TB-limit-fix-gsoc2015.patch (2.3 KB ) - added by kushalsingh007 9 years ago.
Made comment more descriptive and made conditions more readable. (Changed the commit message)
0001-Making-code-more-readable-and-removing-extra-parenthesis-gsoc2015.patch (1.7 KB ) - added by kushalsingh007 9 years ago.
Separate patch removing extra parentheses and making the code more readable by breaking it into multiple conditions.
0002-Intel-add-on-partition-size-greater-than-2-TB-gsoc2015.patch (1.4 KB ) - added by kushalsingh007 9 years ago.
Patch with only the changes corresponding to the validation problem.
0001-changes-for-block_size-value-being-sector_size.patch (5.7 KB ) - added by kushalsingh007 9 years ago.
Patch for the problem of block size not being equal to sector size.
0002-partition-offset-check-while-validating.patch (1.5 KB ) - added by kushalsingh007 9 years ago.
Patch for adding offset check for the partition.
0003-Minor-code-cleaning-for-checking-conditions-.patch (2.0 KB ) - added by kushalsingh007 9 years ago.
Making the conditions more readable by changing the "if" conditions and adding comments.


Change History (34)

comment:1 by luroh, 12 years ago

Component: Add-Ons → Partitioning Systems/Intel

comment:2 by pulkomandy, 9 years ago

Summary: intel partition addon allows creating partitions > 2TB → intel partition addon allows creating partitions > 2TB (easy)

comment:3 by kushalsingh007, 9 years ago

I would like to start working on this bug.

by kushalsingh007, 9 years ago

A patch limiting the maximum size at which an intel partition map can be initialized to 2TB. (Also includes small changes for an operator precedence issue.)

comment:4 by kushalsingh007, 9 years ago

patch: 0 → 1

by kushalsingh007, 9 years ago

Attachment: Screenshot-after-patch.zip added

Screenshot displaying the status after adding the patch.

comment:5 by kushalsingh007, 9 years ago

Kindly review the patch and suggest any changes that need to be made.

by kushalsingh007, 9 years ago

Made comment more descriptive and made conditions more readable. (Changed the commit message)

comment:6 by pdziepak, 9 years ago

  1. This patch assumes the sector size to be 512, which may not be true (e.g. 4Kn).
  2. What was wrong with partition->Size() / partition->BlockSize() < UINT32_MAX?
  3. The part with the B_BAD_VALUE checks is unrelated and should be in a separate patch.
  4. Personally, I really hate redundant code, and while (unfortunately) explicit checks like foo == NULL are encouraged by the coding guidelines, there are also way too many parentheses in your patch. What about if (!CanInitialize(partition) || parameters == NULL || parameters[0] == '\0')? Splitting that conditional into three ifs (one for CanInitialize(), one for name and one for parameters) may also be a good idea; see the sketch below.
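A minimal sketch of the restructuring suggested in point 4 (the enclosing function name and signature are assumptions modeled on a partitioning add-on hook; only CanInitialize() and the combined condition come from the comment itself):

    // Hedged sketch only; the actual validation hook in the intel
    // partition add-on may have a different signature.
    status_t
    PartitionMapAddOn::ValidateInitialize(BPartition* partition,
        BString* name, const char* parameters)
    {
        // One combined early-out, without redundant parentheses:
        if (!CanInitialize(partition)
            || parameters == NULL || parameters[0] == '\0')
            return B_BAD_VALUE;

        // ...or, alternatively, three separate ifs, one concern each:
        // if (!CanInitialize(partition)) return B_BAD_VALUE;
        // if (name == NULL) return B_BAD_VALUE;
        // if (parameters == NULL || parameters[0] == '\0') return B_BAD_VALUE;

        return B_OK;
    }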

in reply to:  6 ; comment:7 by phoudoin, 9 years ago

Replying to pdziepak:

  1. This patch assumes the sector size to be 512, which may not be true (e.g. 4Kn).

Well, MBR partitioning assumes 512-byte sectors, real or emulated (512e). AFAICT, support for native 4096-byte sectors requires GPT partitioning.

  2. What was wrong with partition->Size() / partition->BlockSize() < UINT32_MAX?

It was not checking that the partition *offset* was also within the 32-bit limit?

comment:8 by kushalsingh007, 9 years ago

1. Using partition->Size() / partition->BlockSize() < UINT32_MAX causes the condition to fail.

For instance, if BlockSize = 2048 = 2^11, then the check allows partition->Size() < 2^32 * 2^11 = 2^43 bytes, which is 8TB.

Thus the condition clearly fails to enforce the 2TB limit. There can also be an overflow error with the previous condition. Moreover, the offset wasn't being checked anywhere.

2. Yes, I assumed the sector size to be 512; the max limit is UINT32_MAX * SECTOR_SIZE, but since MBR assumes 512-byte sectors, I think that's fine.

3. I'll make a separate patch for the B_BAD_VALUE part and remove the parentheses.

So, apart from the minor code cleanup, are there any other problems?
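For reference, a self-contained sketch of the arithmetic in point 1 (standard C++ only; the 512 and 2048 figures mirror the example above):

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // MBR stores the partition size as a 32-bit sector count, so the
        // byte limit is UINT32_MAX * sector size.
        uint64_t limit512 = uint64_t(UINT32_MAX) * 512;   // ~2TB
        uint64_t limit2048 = uint64_t(UINT32_MAX) * 2048; // ~8TB (~2^43 bytes)
        std::printf("512:  %llu bytes\n2048: %llu bytes\n",
            (unsigned long long)limit512, (unsigned long long)limit2048);
        return 0;
    }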

Last edited 9 years ago by kushalsingh007

in reply to:  7 ; comment:9 by pdziepak, 9 years ago

Replying to phoudoin:

Replying to pdziepak:

  1. This patch assumes the sector size to be 512, which may not be true (e.g. 4Kn).

Well, MBR partitioning assumes 512-byte sectors, real or emulated (512e). AFAICT, support for native 4096-byte sectors requires GPT partitioning.

I don't think that MBR assumes anything. Its data structures just use the sector as a base unit, and the size of the sector isn't really important here.

  2. What was wrong with partition->Size() / partition->BlockSize() < UINT32_MAX?

It was not checking that the partition *offset* was also within the 32-bit limit?

That's not what I was asking about. I understand the problem with the partition offset. My question is, what was wrong with that line of code that it needed to be changed?

in reply to:  9 comment:10 by kushalsingh007, 9 years ago

Replying to pdziepak:

Replying to phoudoin:

Replying to pdziepak:

  1. This patch assumes the sector size to be 512, which may not be true (e.g. 4Kn).

Well, MBR partitioning assumes 512-byte sectors, real or emulated (512e). AFAICT, support for native 4096-byte sectors requires GPT partitioning.

I don't think that MBR assumes anything. Its data structures just use the sector as a base unit, and the size of the sector isn't really important here.

So how should I get the sector size?

  2. What was wrong with partition->Size() / partition->BlockSize() < UINT32_MAX?

It was not checking that the partition *offset* was also within the 32-bit limit?

That's not what I was asking about. I understand the problem with the partition offset. My question is, what was wrong with that line of code that it needed to be changed?

As I explained in my previous comment, if you take a simple example you can see that it fails.

in reply to:  9 comment:11 by phoudoin, 9 years ago

Replying to pdziepak:

I don't think that MBR assumes anything. Its data structures just use the sector as a base unit, and the size of the sector isn't really important here.

Indeed, the MBR itself assumes nothing, as its unit is the sector, not the byte. But software (BIOS, OS, everything using MBR info) does assume a 512-byte sector. Relying on the hope that it will detect when to assume another sector size when interpreting MBR values is too risky and can lead to data corruption.

There is GPT for that; please don't play with fire, or with users' data safety.

  2. What was wrong with partition->Size() / partition->BlockSize() < UINT32_MAX?

It was not checking that the partition *offset* was also within the 32-bit limit?

That's not what I was asking about. I understand the problem with the partition offset. My question is, what was wrong with that line of code that it needed to be changed?

That it's broken *if* BlockSize() is not 512.

in reply to:  8 ; comment:12 by pdziepak, 9 years ago

Replying to kushalsingh007:

1. Using partition->Size() / partition->BlockSize() < UINT32_MAX causes the condition to fail.

For instance, if BlockSize = 2048 = 2^11, then the check allows partition->Size() < 2^32 * 2^11 = 2^43 bytes, which is 8TB.

Thus the condition clearly fails to enforce the 2TB limit. There can also be an overflow error with the previous condition. Moreover, the offset wasn't being checked anywhere.

Well, I am getting a bit confused here. When reading the MBR add-on code I got the impression that BlockSize() is supposed to return the sector size (apparently the original author of the check in question assumed the same). In that case there is nothing wrong in the scenario you presented: the 2TB partition size limit applies only when the sector size is 512, because that limit originates from the 32-bit size of the "partition size in sectors" value. If the sector size is 2kB, nothing prevents MBR from supporting 8TB partitions. Have you actually seen BlockSize() returning something other than 512 on a disk with 512-byte sectors?

2. Yes, I assumed the sector size to be 512; the max limit is UINT32_MAX * SECTOR_SIZE, but since MBR assumes 512-byte sectors, I think that's fine.

If I understand the MBR code correctly, BlockSize() returns the size of the sector. So the only change required is checking whether the offset is < BlockSize() * UINT32_MAX.
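A hedged sketch of that check (the BPartition accessors follow the discussion above; dividing rather than computing BlockSize() * UINT32_MAX sidesteps any overflow concern):

    // Sketch only, assuming Haiku's <SupportDefs.h> integer typedefs and
    // that Size() and Offset() are in bytes while BlockSize() is the
    // sector size.
    static bool
    FitsInPartitionMap(BPartition* partition)
    {
        uint64 blockSize = partition->BlockSize();
        // Divide instead of multiplying, so no intermediate can overflow.
        return uint64(partition->Size()) / blockSize <= UINT32_MAX
            && uint64(partition->Offset()) / blockSize <= UINT32_MAX;
    }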

in reply to:  12 ; comment:13 by kushalsingh007, 9 years ago

Replying to pdziepak:

1. Using partition->Size() / partition->BlockSize() < UINT32_MAX causes the condition to fail.

For instance, if BlockSize = 2048 = 2^11, then the check allows partition->Size() < 2^32 * 2^11 = 2^43 bytes, which is 8TB.

Thus the condition clearly fails to enforce the 2TB limit. There can also be an overflow error with the previous condition. Moreover, the offset wasn't being checked anywhere.

Well, I am getting a bit confused here. When reading the MBR add-on code I got the impression that BlockSize() is supposed to return the sector size (apparently the original author of the check in question assumed the same). In that case there is nothing wrong in the scenario you presented: the 2TB partition size limit applies only when the sector size is 512, because that limit originates from the 32-bit size of the "partition size in sectors" value. If the sector size is 2kB, nothing prevents MBR from supporting 8TB partitions. Have you actually seen BlockSize() returning something other than 512 on a disk with 512-byte sectors?

Yes, while testing I found the block size to be 2048 when the sector size was 512. It depends on the value you initialized it with.

2. Yes, I assumed the sector size to be 512; the max limit is UINT32_MAX * SECTOR_SIZE, but since MBR assumes 512-byte sectors, I think that's fine.

If I understand the MBR code correctly, BlockSize() returns the size of the sector. So the only change required is checking whether the offset is < BlockSize() * UINT32_MAX.

But then again, BlockSize() returns the size of the block (512/2048/4096, etc.), not the size of the sector. Had it returned the size of the sector, your assumption would have been true.

Last edited 9 years ago by kushalsingh007

comment:14 by phoudoin, 9 years ago

If the sector size is 2kB, nothing prevents MBR from supporting 8TB partitions.

But almost all software, from BIOSes to other operating systems and disk tools, will break hard using such an MBR. And eventually corrupt data while doing it. Better to be on the safe side when it comes to users' data.

Even if someone in the future starts to play with this partition add-on's BlockSize() value, the more explicit check will stay on the safe side.

The reality is not that the MBR unit is the sector; the reality is that most software using MBR info assumes a 512-byte sector and could do BAD things if the MBR was written with another size in mind. That's why 4K disks report emulated 512-byte sectors, until everybody uses GPT and/or operating systems supporting native 4K sectors.

Again, it's not about code quality, it's about (your) data safety.

by kushalsingh007, 9 years ago

Separate patch removing extra parentheses and making the code more readable by breaking it into multiple conditions.

in reply to:  13 comment:15 by pdziepak, 9 years ago

Replying to kushalsingh007:

Replying to pdziepak:

1. Using partition->Size() / partition->BlockSize() < UINT32_MAX causes the condition to fail.

For instance, if BlockSize = 2048 = 2^11, then the check allows partition->Size() < 2^32 * 2^11 = 2^43 bytes, which is 8TB.

Thus the condition clearly fails to enforce the 2TB limit. There can also be an overflow error with the previous condition. Moreover, the offset wasn't being checked anywhere.

Well, I am getting a bit confused here. When reading the MBR add-on code I got the impression that BlockSize() is supposed to return the sector size (apparently the original author of the check in question assumed the same). In that case there is nothing wrong in the scenario you presented: the 2TB partition size limit applies only when the sector size is 512, because that limit originates from the 32-bit size of the "partition size in sectors" value. If the sector size is 2kB, nothing prevents MBR from supporting 8TB partitions. Have you actually seen BlockSize() returning something other than 512 on a disk with 512-byte sectors?

Yes, while testing I found the block size to be 2048 when the sector size was 512. It depends on the value you initialized it with.

What is this "block size" then? The only "blocks" at this abstraction level are sectors.

2. Yes, I assumed the sector size to be 512; the max limit is UINT32_MAX * SECTOR_SIZE, but since MBR assumes 512-byte sectors, I think that's fine.

If I understand the MBR code correctly, BlockSize() returns the size of the sector. So the only change required is checking whether the offset is < BlockSize() * UINT32_MAX.

But then again, BlockSize() returns the size of the block (512/2048/4096, etc.), not the size of the sector. Had it returned the size of the sector, your assumption would have been true.

Commit 5f1b31debdf97da331be6e97a6c2bc5899e2423a suggests that my assumption is indeed true and that BlockSize() should always return the size of a sector. If it doesn't, then we probably have another bug.

in reply to:  14 comment:16 by pdziepak, 9 years ago

Replying to phoudoin:

If the sector size is 2kB, nothing prevents MBR from supporting 8TB partitions.

But almost all software, from BIOSes to other operating systems and disk tools, will break hard using such an MBR. And eventually corrupt data while doing it.

Linux and Windows seem to support non-512-byte sectors even when using MBR.

Better to be on the safe side when it comes to users' data.

I can't see how hardcoding 512-byte sectors makes anything safer.

Even if someone in the future starts to play with this partition add-on's BlockSize() value, the more explicit check will stay on the safe side.

The reality is not that the MBR unit is the sector; the reality is that most software using MBR info assumes a 512-byte sector and could do BAD things if the MBR was written with another size in mind. That's why 4K disks report emulated 512-byte sectors, until everybody uses GPT and/or operating systems supporting native 4K sectors.

Again, it's not about code quality, it's about (your) data safety.

I am sorry, but this is silly. You are basically saying that since there is some broken software with hardcoded 512-byte sectors, we should, on purpose, break our code in the same way. If a user has a device with logical 4k sectors and for whatever reason wants to use MBR partitioning on it, why disallow it?

in reply to:  14 ; comment:17 by kushalsingh007, 9 years ago

An example that BlockSize() is indeed not always 512 can be seen when you format the given space as the Be File System (you choose a BlockSize value for it) and then try to initialize an Intel partition map. You will notice that partition->BlockSize() returns the value chosen by you (not 512).

Last edited 9 years ago by kushalsingh007

in reply to:  17 ; comment:18 by pulkomandy, 9 years ago

Replying to kushalsingh007:

An example that BlockSize() is indeed not always 512 can be seen when you format the given space as the Be File System (you choose a BlockSize value for it) and then try to initialize an Intel partition map. You will notice that partition->BlockSize() returns the value chosen by you (not 512).

Then there is a bug there. The intel partition map would erase the BFS volume in that case, and it should use the physical block size of the disk.

I don't know if it's correct to allow MBR on disks which expose 4K sectors (note that most consumer-grade disks don't; they use "512e" and expose 512-byte sectors to the BIOS and OS), and I don't know what sector size the MBR should be using in that case. If we want to be safe, we shouldn't allow use of MBR when sectors are not reported as 512 bytes by the drive, at least until we can find reliable information on how this is handled by the BIOS, our MBR code (I'm fairly sure we don't allocate 4K of RAM for the sector to read there), the stage1 loader and makebootable, as well as the kernel and the ata driver (see http://en.wikipedia.org/wiki/Advanced_Format#512e, and the following section on "4K native drives").

So, I would suggest not allowing use of the MBR on non-512-byte drives until these possible problems have been investigated and we know for sure the system can boot off a drive which exposes 4K sectors.
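A minimal sketch of that conservative policy (its placement in the add-on's validation path is an assumption):

    // Hypothetical guard: refuse MBR initialization on drives not
    // reporting 512-byte sectors, until 4Kn support has been verified
    // through the whole boot stack (loader, kernel, drivers).
    if (partition->BlockSize() != 512)
        return B_BAD_VALUE;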

by kushalsingh007, 9 years ago

Patch with only the changes corresponding to the validation problem.

in reply to:  18 ; comment:19 by pdziepak, 9 years ago

Replying to pulkomandy:

Replying to kushalsingh007:

An example that BlockSize() is indeed not always 512 can be seen when you format the given space as the Be File System (you choose a BlockSize value for it) and then try to initialize an Intel partition map. You will notice that partition->BlockSize() returns the value chosen by you (not 512).

Then there is a bug there. The intel partition map would erase the BFS volume in that case, and it should use the physical block size of the disk.

Yes, it looks like somewhere in the code "file system block size" and "block device block size" (aka sector) are confused.

I don't know if it's correct to allow MBR on disks which expose 4K sectors (note that most consumer-grade disks don't; they use "512e" and expose 512-byte sectors to the BIOS and OS), and I don't know what sector size the MBR should be using in that case. If we want to be safe, we shouldn't allow use of MBR when sectors are not reported as 512 bytes by the drive, at least until we can find reliable information on how this is handled by the BIOS, our MBR code (I'm fairly sure we don't allocate 4K of RAM for the sector to read there), the stage1 loader and makebootable, as well as the kernel and the ata driver (see http://en.wikipedia.org/wiki/Advanced_Format#512e, and the following section on "4K native drives").

So, I would suggest not allowing use of the MBR on non-512-byte drives until these possible problems have been investigated and we know for sure the system can boot off a drive which exposes 4K sectors.

AFAIK you need UEFI to boot from a 4k disk, but that's not really important here. Besides, you could still use it as a secondary disk (assuming that there is no hardcoded sector size anywhere in the OS). Anyway, what I want to say is that what we do with disks with 4kB logical sectors is irrelevant in this discussion. However, when the kernel block device layer and its drivers expose such a disk to the rest of the world (including the MBR code), then it should be handled appropriately.

The real problem with the posted patch was (and still is in v2) that it: 1) breaks the size check by hardcoding a 512-byte sector size, and 2) adds an offset check with a hardcoded 512-byte sector size. The correct way to get the sector size is to use BlockSize(), as is done for the currently used size check. As long as these issues are not corrected in the patch, it has my "nacked-by".

in reply to:  19 comment:20 by kushalsingh007, 9 years ago

Replying to pdziepak:

The real problem with the posted patch was (and still is in v2) that it: 1) breaks the size check by hardcoding a 512-byte sector size, and 2) adds an offset check with a hardcoded 512-byte sector size. The correct way to get the sector size is to use BlockSize(), as is done for the currently used size check. As long as these issues are not corrected in the patch, it has my "nacked-by".

Since BlockSize() right now is returning the wrong values, using it in the patch would be a bad idea. For instance, I have a 2.95TB disk (512-byte sector size); I can initialize it to the Be File System first and then create an intel partition map (which is wrong). A patch fixing the code for BlockSize() should be separate, and can be added later. I agree that this patch is not the final solution to the problem. Once I find the problem with BlockSize(), I'll update the patch.

comment:21 by pdziepak, 9 years ago

Well, the most obvious plan would be to first fix the values returned by BlockSize() and then, in a separate patch, add a sector-size-independent offset check.

comment:22 by pulkomandy, 9 years ago

In Haiku we have a simple policy: "don't hide the bugs, fix them". Here, the bug is in BlockSize, so we need to fix BlockSize. We should not try to avoid using it because it doesn't work.

AFAIK you need UEFI to boot from a 4k disk, but that's not really important here. Besides, you could still use it as a secondary disk (assuming that there is no hardcoded sector size anywhere in the OS). Anyway, what I want to say is that what we do with disks with 4kB logical sectors is irrelevant in this discussion. However, when the kernel block device layer and its drivers expose such a disk to the rest of the world (including the MBR code), then it should be handled appropriately.

Yes, it is a separate issue, and first of all the problems with BlockSize and the missing offset check should be fixed. For the problem of 4K sectors we should make sure we support them in the whole stack, and also that our MBR is interoperable with other systems handling it (it seems Windows won't: systems up to Windows 7 only support disks less than 2TB in size, and Windows 8 needs EFI and does not use MBRs anymore - but Linux may support this in one way or another). I opened a separate ticket for this: #11796

by kushalsingh007, 9 years ago

Patch for the problem of block size not being equal to sector size.

by kushalsingh007, 9 years ago

Patch for adding offset check for the partition.

by kushalsingh007, 9 years ago

Making the conditions more readable by changing the "if" conditions and adding comments.

comment:23 by kushalsingh007, 9 years ago

Kindly have a look at the following three patches: 0001-changes-for-block_size-value-being-sector_size.patch, 0002-partition-offset-check-while-validating.patch and 0003-Minor-code-cleaning-for-checking-conditions-.patch.

Sorry for the multiple patches for the third case (they got uploaded accidentally).

in reply to:  19 comment:24 by kushalsingh007, 9 years ago

Replying to pdziepak:

Yes, it looks like somewhere in the code "file system block size" and "block device block size" (aka sector) are confused.

I think the latest patch takes care of it.

The real problem with the posted patch was (and still is in v2) that it: 1) breaks the size check by hardcoding 512 byte sector size, 2) adds offset check with hardcoded 512 byte sector size. The correct way to get the sector size is to use BlockSize() as it is done for the currently used size check. As long as these issues are not corrected in the patch it has my "nacked-by".

As you pointed out the need for separate patches for unrelated functionality, I have added:

0001-changes-for-block_size-value-being-sector_size.patch for the problem of the block size being reported incorrectly: https://dev.haiku-os.org/attachment/ticket/8990/0001-changes-for-block_size-value-being-sector_size.patch

0002-partition-offset-check-while-validating.patch for the partition offset check: https://dev.haiku-os.org/attachment/ticket/8990/0002-partition-offset-check-while-validating.patch

and 0003-Minor-code-cleaning-for-checking-conditions-.patch for the minor code clean-up that you suggested previously. https://dev.haiku-os.org/attachment/ticket/8990/0003-Minor-code-cleaning-for-checking-conditions-.patch

Kindly have a look at these patches.

comment:25 by vidrep, 7 years ago

Are these patches still valid?

comment:26 by pulkomandy, 7 years ago

IIRC this was discussed on the mailing list, and the outcome is that there seems to be some confusion about the use of the block_size field in the structure, leading to the partition add-ons using the wrong block size when running on an already formatted file system. We should fix the underlying issue and make sure the partitioning system gets the correct info reliably (i.e. the actual disk sector size, no matter what file system is currently on it). The patches do not address the issue properly, and as such they would break the use of MBR on disks with 4K sectors, which is an uncommon but valid use case.

So, the issue is still there, but the patches should be reworked to fix the actual issue (that the block_size in the partition info is overwritten with the block size of the file system inside it), rather than working around it.
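For illustration, a hedged sketch of querying the real disk sector size independently of any file system, using the B_GET_GEOMETRY ioctl from Haiku's driver API (whether the partitioning add-on has a device file descriptor at hand at this point is an assumption):

    #include <unistd.h>
    #include <Drivers.h>  // device_geometry, B_GET_GEOMETRY

    // Returns the sector size reported by the driver, or 0 on failure.
    static uint32
    GetSectorSize(int deviceFD)
    {
        device_geometry geometry;
        if (ioctl(deviceFD, B_GET_GEOMETRY, &geometry,
                sizeof(geometry)) != 0)
            return 0;
        return geometry.bytes_per_sector;
    }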
