Opened 12 years ago
Last modified 8 years ago
#8990 new bug
intel partiton addon allows creating partitions > 2TB (easy)
Reported by: luroh
Owned by: bonefish
Priority: normal
Milestone: R1
Component: Partitioning Systems/Intel
Version: R1/Development
Keywords:
Cc:
Blocked By:
Blocking:
Platform: All
Attachments (8)
Change History (34)
comment:1 by , 12 years ago
Component: Add-Ons → Partitioning Systems/Intel
comment:2 by , 10 years ago
Summary: intel partiton addon allows creating partitions > 2TB → intel partiton addon allows creating partitions > 2TB (easy)
comment:3 by , 10 years ago
by , 10 years ago
Attachment: 0001-Intel-add-on-partition-size-greater-than-2-TB-gsoc2015.patch added
A patch which limits the maximum size for which the intel partition map can be initialized to 2 TB. (It also includes small changes for an operator precedence issue.)
comment:4 by , 10 years ago
patch: 0 → 1
by , 10 years ago
Attachment: Screenshot-after-patch.zip added
Screenshot displaying the status after adding the patch.
comment:5 by , 10 years ago
Kindly review the patch and suggest any changes that may need to be made.
by , 10 years ago
Attachment: 0001-Intel-Partition-Addon-2TB-limit-fix-gsoc2015.patch added
Made the comment more descriptive and the conditions more readable. (Changed the commit message.)
follow-up: 7 comment:6 by , 10 years ago
- This patch assumes the sector size to be 512, which may not be true (e.g. 4Kn).
- What was wrong with `partition->Size() / partition->BlockSize() < UINT32_MAX`?
- The part with the B_BAD_VALUE checks is unrelated and should be in a separate patch.
- Personally, I really hate redundant code, and while (unfortunately) explicit checks like `foo == NULL` are encouraged by the coding guidelines, there are also way too many parentheses in your patch. What about `if (!CanInitialize(partition) || parameters == NULL || parameters[0] == '\0')`? Splitting that conditional into three ifs (one for `CanInitialize()`, one for `name` and one for `parameters`) also may be a good idea.
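The restructuring suggested in the last point could be sketched as below. This is only an illustration of the shape of the check, not the add-on's actual code: `CanInitialize()` is a stand-in for the add-on's own capability hook, and the function returns a plain bool where the real code would return B_BAD_VALUE.

```cpp
#include <cstddef>

// Hypothetical stand-in for the add-on's own capability check.
bool CanInitialize(const void* partition)
{
    return partition != NULL;
}

// Sketch of the validation rewritten as suggested: the heavily
// parenthesized compound condition becomes one flat || chain.
// (It could equally be split into three separate ifs.)
bool ValidateInitializeParameters(const void* partition, const char* parameters)
{
    if (!CanInitialize(partition) || parameters == NULL || parameters[0] == '\0')
        return false;  // the real code would return B_BAD_VALUE here
    return true;
}
```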
follow-up: 9 comment:7 by , 10 years ago
Replying to pdziepak:

> This patch assumes the sector size to be 512, which may not be true (e.g. 4Kn).

Well, MBR partitioning assumes 512-byte sectors, real or emulated (512e). AFAICT, support for native 4096-byte sectors needs GPT partitioning.

> What was wrong with `partition->Size() / partition->BlockSize() < UINT32_MAX`?

It was not checking that the partition *offset* was also within 32-bit limits?
follow-up: 12 comment:8 by , 10 years ago
1. Using `partition->Size() / partition->BlockSize() < UINT32_MAX` causes the condition to fail. For instance, if BlockSize = 2048 = 2^11, then partition->Size() < 2^32 * 2^11 = 2^43, which is 8 TB. Thus clearly the condition fails. Also, there can be an overflow error using the previous condition.
2. Yeah, I assumed the sector size to be 512, which is wrong in some cases. So I think the max limit is `UINT32_MAX * SECTOR_SIZE`; how do I get the sector size directly? Any suggestions?
3. I'll make a separate patch for the B_BAD_VALUE part and remove the parentheses.
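The arithmetic in point 1 can be checked directly. A minimal sketch (not add-on code; `MaxMbrPartitionSize` is a hypothetical helper for illustration):

```cpp
#include <cstdint>

// Maximum partition size representable by an MBR entry: the "size in
// sectors" field is 32 bits, so the cap is UINT32_MAX sectors times the
// sector size. With 512-byte sectors that is just under 2 TiB; with a
// BlockSize() of 2048 it is just under 8 TiB, matching the
// 2^32 * 2^11 = 2^43 arithmetic in the comment above.
uint64_t MaxMbrPartitionSize(uint32_t sectorSize)
{
    return uint64_t(UINT32_MAX) * sectorSize;
}
```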
follow-ups: 10 11 comment:9 by , 10 years ago
Replying to phoudoin:

> > This patch assumes the sector size to be 512, which may not be true (e.g. 4Kn).
>
> Well, MBR partitioning assumes 512-byte sectors, real or emulated (512e). AFAICT, support for native 4096-byte sectors needs GPT partitioning.

I don't think that MBR assumes anything. Its data structures just use the sector as a base unit, and the size of the sector isn't really important here.

> > What was wrong with `partition->Size() / partition->BlockSize() < UINT32_MAX`?
>
> It was not checking that the partition *offset* was also within 32-bit limits?

That's not what I was asking about. I understand the problem with the partition offset. My question is, what was wrong with that line of code that it needed to be changed?
comment:10 by , 10 years ago
Replying to pdziepak:

> > > This patch assumes the sector size to be 512, which may not be true (e.g. 4Kn).
> >
> > Well, MBR partitioning assumes 512-byte sectors, real or emulated (512e). AFAICT, support for native 4096-byte sectors needs GPT partitioning.
>
> I don't think that MBR assumes anything. Its data structures just use the sector as a base unit, and the size of the sector isn't really important here.

So how should I get the sector size?

> > > What was wrong with `partition->Size() / partition->BlockSize() < UINT32_MAX`?
> >
> > It was not checking that the partition *offset* was also within 32-bit limits?
>
> That's not what I was asking about. I understand the problem with the partition offset. My question is, what was wrong with that line of code that it needed to be changed?

As I explained in my previous comment, if you take a simple example you can see it fails.
comment:11 by , 10 years ago
Replying to pdziepak:

> I don't think that MBR assumes anything. Its data structures just use the sector as a base unit, and the size of the sector isn't really important here.

The MBR itself assumes nothing indeed, as its unit is the sector, not the byte. But software (BIOS, OS, everything using MBR info) does assume a 512-byte sector. Relying on the hope that it will be able to detect when to assume another sector size when interpreting MBR values is too risky and can lead to data corruption.

There is GPT for that; please don't play with fire, or with users' data safety.

> > What was wrong with `partition->Size() / partition->BlockSize() < UINT32_MAX`?
> >
> > It was not checking that the partition *offset* was also within 32-bit limits?
>
> That's not what I was asking about. I understand the problem with the partition offset. My question is, what was wrong with that line of code that it needed to be changed?

That it's broken *if* BlockSize() is not 512.
follow-up: 13 comment:12 by , 10 years ago
Replying to kushalsingh007:

> 1. Using `partition->Size() / partition->BlockSize() < UINT32_MAX` causes the condition to fail. For instance, if BlockSize = 2048 = 2^11, then partition->Size() < 2^32 * 2^11 = 2^43, which is 8 TB. Thus clearly the condition fails. Also there can be an overflow error using the previous condition. Moreover, the offset wasn't being checked anywhere.

Well, I am getting a bit confused here. When reading the MBR add-on code I got the impression that BlockSize() is supposed to return the sector size (apparently the original author of the check in question assumed the same). In such a case there is nothing wrong in the scenario you presented. The 2 TB partition size limit applies only when the sector size is 512, because that limit originates from the 32-bit size of the "partition size in sectors" value. If the sector size is 2 kB, nothing prevents MBR from supporting 8 TB partitions. Have you actually seen BlockSize() returning something else than 512 on a disk with 512-byte sectors?

> 2. Yeah, I assumed the sector size to be 512; the max limit is `UINT32_MAX * SECTOR_SIZE`, but then since MBR assumes 512-byte sectors, I think that's fine.

If I understand the MBR code correctly, BlockSize() returns the size of the sector. So the only change required is checking whether the offset is < `BlockSize() * UINT32_MAX`.
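The sector-size-independent validation described here (both size and offset checked against the 32-bit MBR fields, using the sector size from BlockSize() rather than a hardcoded 512) might look like the following sketch. `FitsInMbrFields()` is a hypothetical helper, not the add-on's actual function:

```cpp
#include <cstdint>

// Hypothetical helper: both the partition size and its offset must fit
// in the 32-bit sector-count fields of an MBR entry. Dividing by the
// sector size first avoids 64-bit multiplication overflow in the
// comparison against UINT32_MAX.
bool FitsInMbrFields(uint64_t offset, uint64_t size, uint32_t sectorSize)
{
    if (sectorSize == 0)
        return false;
    return size / sectorSize <= UINT32_MAX
        && offset / sectorSize <= UINT32_MAX;
}
```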
follow-up: 15 comment:13 by , 10 years ago
Replying to pdziepak:

> > 1. Using `partition->Size() / partition->BlockSize() < UINT32_MAX` causes the condition to fail. For instance, if BlockSize = 2048 = 2^11, then partition->Size() < 2^32 * 2^11 = 2^43, which is 8 TB. Thus clearly the condition fails. Also there can be an overflow error using the previous condition. Moreover, the offset wasn't being checked anywhere.
>
> Well, I am getting a bit confused here. When reading the MBR add-on code I got the impression that BlockSize() is supposed to return the sector size (apparently the original author of the check in question assumed the same). In such a case there is nothing wrong in the scenario you presented. The 2 TB partition size limit applies only when the sector size is 512, because that limit originates from the 32-bit size of the "partition size in sectors" value. If the sector size is 2 kB, nothing prevents MBR from supporting 8 TB partitions. Have you actually seen BlockSize() returning something else than 512 on a disk with 512-byte sectors?

Yes, while testing I found the block size to be 2048 when the sector size was 512. It depends on the value that has been initialized by you.

> > 2. Yeah, I assumed the sector size to be 512; the max limit is `UINT32_MAX * SECTOR_SIZE`, but then since MBR assumes 512-byte sectors, I think that's fine.
>
> If I understand the MBR code correctly, BlockSize() returns the size of the sector. So the only change required is checking whether the offset is < `BlockSize() * UINT32_MAX`.

But then again, BlockSize() returns the size of the block (512/2048/4096 etc.), not the size of the sector. Had it returned the size of the sector, your assumption would have been true.
follow-ups: 16 17 comment:14 by , 10 years ago
> If the sector size is 2 kB, nothing prevents MBR from supporting 8 TB partitions.

But almost all software, from the BIOS to other operating systems and disk tools, will break hard using such an MBR, and eventually corrupt data while doing it. Better to be on the safe side when it comes to users' data.

Even if someone in the future starts to play with this partition add-on's BlockSize() value, the more explicit check will stay on the safe side.

The reality is not that the MBR unit is the sector; the reality is that most software using MBR info assumes a 512-byte sector and could do BAD things if the MBR was written with another size in mind. That's why 4K disks report emulated 512-byte sectors, until everybody uses GPT and/or operating systems supporting native 4K sectors.

Again, it's not about code quality, it's about (your) data safety.
by , 10 years ago
Separate patch for removing extra parentheses and making the code more readable by breaking it into multiple conditions.
comment:15 by , 10 years ago
Replying to kushalsingh007:

> Replying to pdziepak:
>
> > > 1. Using `partition->Size() / partition->BlockSize() < UINT32_MAX` causes the condition to fail. For instance, if BlockSize = 2048 = 2^11, then partition->Size() < 2^32 * 2^11 = 2^43, which is 8 TB. Thus clearly the condition fails. Also there can be an overflow error using the previous condition. Moreover, the offset wasn't being checked anywhere.
> >
> > Well, I am getting a bit confused here. When reading the MBR add-on code I got the impression that BlockSize() is supposed to return the sector size (apparently the original author of the check in question assumed the same). In such a case there is nothing wrong in the scenario you presented. The 2 TB partition size limit applies only when the sector size is 512, because that limit originates from the 32-bit size of the "partition size in sectors" value. If the sector size is 2 kB, nothing prevents MBR from supporting 8 TB partitions. Have you actually seen BlockSize() returning something else than 512 on a disk with 512-byte sectors?
>
> Yes, while testing I found the block size to be 2048 when the sector size was 512. It depends on the value that has been initialized by you.

What is this "block size" then? The only "blocks" at this abstraction level are sectors.

> > > 2. Yeah, I assumed the sector size to be 512; the max limit is `UINT32_MAX * SECTOR_SIZE`, but then since MBR assumes 512-byte sectors, I think that's fine.
> >
> > If I understand the MBR code correctly, BlockSize() returns the size of the sector. So the only change required is checking whether the offset is < `BlockSize() * UINT32_MAX`.
>
> But then again, BlockSize() returns the size of the block (512/2048/4096 etc.), not the size of the sector. Had it returned the size of the sector, your assumption would have been true.

Commit 5f1b31debdf97da331be6e97a6c2bc5899e2423a suggests that my assumption is indeed true and BlockSize() should always return the size of a sector. If it doesn't, then we probably have another bug.
comment:16 by , 10 years ago
Replying to phoudoin:

> > If the sector size is 2 kB, nothing prevents MBR from supporting 8 TB partitions.
>
> But almost all software, from the BIOS to other operating systems and disk tools, will break hard using such an MBR. And eventually corrupt data while doing it.

Linux and Windows seem to support non-512-byte sectors even when using MBR.

> Better to be on the safe side when it comes to users' data.

I can't see how not hardcoding 512-byte sectors makes anything safer.

> Even if someone in the future starts to play with this partition add-on's BlockSize() value, the more explicit check will stay on the safe side.
>
> The reality is not that the MBR unit is the sector; the reality is that most software using MBR info assumes a 512-byte sector and could do BAD things if the MBR was written with another size in mind. That's why 4K disks report emulated 512-byte sectors, until everybody uses GPT and/or operating systems supporting native 4K sectors.
>
> Again, it's not about code quality, it's about (your) data safety.

I am sorry, but this is silly. You are basically saying that since there is some broken software with hardcoded 512-byte sectors we should, on purpose, break our code in the same way. If a user has a device with logical 4k sectors and for whatever reason wants to use MBR partitioning on it, why disallow it?
follow-up: 18 comment:17 by , 10 years ago
An example that the block size is indeed not always 512 can be seen when you format the given space with the Be File System (you choose a BlockSize value for it), and then try to initialize an Intel partition map. You will notice that partition->BlockSize() returns the value chosen by you (not 512).
follow-up: 19 comment:18 by , 10 years ago
Replying to kushalsingh007:

> An example that the block size is indeed not always 512 can be seen when you format the given space with the Be File System (you choose a BlockSize value for it), and then try to initialize an Intel partition map. You will notice that partition->BlockSize() returns the value chosen by you (not 512).

Then there is a bug there. The intel partition map would erase the BFS volume in that case, and it should use the physical block size of the disk.

I don't know if it's correct to allow MBR on disks which expose 4K sectors (note that most consumer-grade disks don't; they use "512e" and expose 512-byte sectors to the BIOS and OS), and I don't know what sector size the MBR should be using in that case. If we want to be safe, we shouldn't allow use of MBR when sectors are not reported as 512 bytes by the drive, at least until we can find reliable information on how this is handled by the BIOS, our MBR code (I'm fairly sure we don't allocate 4K of RAM for the sector to read there), the stage1 loader and makebootable, as well as the kernel and the ata driver (see http://en.wikipedia.org/wiki/Advanced_Format#512e , and the following section on "4K native drives").

So, I would suggest not allowing use of the MBR on non-512-byte drives until these possible problems have been investigated and we know for sure the system can boot off a drive which exposes 4K sectors.
by , 10 years ago
Attachment: 0002-Intel-add-on-partition-size-greater-than-2-TB-gsoc2015.patch added
Patch with only the changes corresponding to the validation problem.
follow-ups: 20 24 comment:19 by , 10 years ago
Replying to pulkomandy:

> Replying to kushalsingh007:
>
> > An example that the block size is indeed not always 512 can be seen when you format the given space with the Be File System (you choose a BlockSize value for it), and then try to initialize an Intel partition map. You will notice that partition->BlockSize() returns the value chosen by you (not 512).
>
> Then there is a bug there. The intel partition map would erase the BFS volume in that case, and it should use the physical block size of the disk.

Yes, it looks like somewhere in the code "file system block size" and "block device block size" (aka sector) are confused.

> I don't know if it's correct to allow MBR on disks which expose 4K sectors (note that most consumer-grade disks don't; they use "512e" and expose 512-byte sectors to the BIOS and OS), and I don't know what sector size the MBR should be using in that case. If we want to be safe, we shouldn't allow use of MBR when sectors are not reported as 512 bytes by the drive, at least until we can find reliable information on how this is handled by the BIOS, our MBR code, the stage1 loader and makebootable, as well as the kernel and the ata driver (see http://en.wikipedia.org/wiki/Advanced_Format#512e , and the following section on "4K native drives").
>
> So, I would suggest not allowing use of the MBR on non-512-byte drives until these possible problems have been investigated and we know for sure the system can boot off a drive which exposes 4K sectors.

AFAIK you need UEFI to boot from a 4k disk, but that's not really important here. Besides, you could still use it as a secondary disk (assuming that there is no hardcoded sector size anywhere in the OS). Anyway, what I want to say is that what we do with disks with 4 kB logical sectors is irrelevant in this discussion. However, when the kernel block device layer and its drivers expose such a disk to the rest of the world (including the MBR code), it should be handled appropriately.

The real problem with the posted patch was (and still is in v2) that it: 1) breaks the size check by hardcoding a 512-byte sector size, 2) adds an offset check with a hardcoded 512-byte sector size. The correct way to get the sector size is to use BlockSize(), as is done for the currently used size check. As long as these issues are not corrected in the patch, it has my "nacked-by".
comment:20 by , 10 years ago
Replying to pdziepak:

> The real problem with the posted patch was (and still is in v2) that it: 1) breaks the size check by hardcoding a 512-byte sector size, 2) adds an offset check with a hardcoded 512-byte sector size. The correct way to get the sector size is to use BlockSize(), as is done for the currently used size check. As long as these issues are not corrected in the patch, it has my "nacked-by".

Since BlockSize() right now is returning the wrong values, using it in the patch would be a bad idea. For instance, I have a 2.95 TB (512-byte sector size) disk; I can initialize it with the Be File System first and then create an intel partition map (which is wrong). A patch fixing the code for BlockSize() should be separate, and can be added later. I agree that this patch is not the final solution to the problem. Once I find the problem with BlockSize(), I'll update the patch.
comment:21 by , 10 years ago
Well, the most obvious plan would be to, first, fix the values returned by BlockSize() and then, in a separate patch, add a sector-size-independent offset check.
comment:22 by , 10 years ago
In Haiku we have a simple policy: "don't hide the bugs, fix them". Here, the bug is in BlockSize(), so we need to fix BlockSize(). We should not try to avoid using it because it doesn't work.
> AFAIK you need UEFI to boot from a 4k disk, but that's not really important here. Besides, you could still use it as a secondary disk (assuming that there is no hardcoded sector size anywhere in the OS). Anyway, what I want to say is that what we do with disks with 4 kB logical sectors is irrelevant in this discussion. However, when the kernel block device layer and its drivers expose such a disk to the rest of the world (including the MBR code), it should be handled appropriately.

Yes, it is a separate issue, and first of all the problems with BlockSize() and the missing offset check should be fixed. For the problem of 4K sectors we should make sure we support them in the whole stack, and also that our MBR is interoperable with other systems handling it (it seems Windows won't: systems up to Windows 7 only support disks less than 2 TB in size, and Windows 8 needs EFI and does not use MBRs anymore - but Linux may support this in one way or another). I opened a separate ticket for this: #11796
by , 10 years ago
Attachment: 0001-changes-for-block_size-value-being-sector_size.patch added
Patch for the problem of block size not being equal to sector size.
by , 10 years ago
Attachment: 0002-partition-offset-check-while-validating.patch added
Patch for adding offset check for the partition.
by , 10 years ago
Attachment: 0003-Minor-code-cleaning-for-checking-conditions-.patch added
Making the conditions more readable by changing the "if" conditions and adding comments.
comment:23 by , 10 years ago
Kindly have a look at the following 3 patches:

- 0001-changes-for-block_size-value-being-sector_size.patch
- 0002-partition-offset-check-while-validating.patch
- 0003-Minor-code-cleaning-for-checking-conditions-.patch

Sorry for the multiple patches for the third case (they got uploaded accidentally).
comment:24 by , 10 years ago
Replying to pdziepak:

> Yes, it looks like somewhere in the code "file system block size" and "block device block size" (aka sector) are confused.

I think the latest patch takes care of it.

> The real problem with the posted patch was (and still is in v2) that it: 1) breaks the size check by hardcoding a 512-byte sector size, 2) adds an offset check with a hardcoded 512-byte sector size. The correct way to get the sector size is to use BlockSize(), as is done for the currently used size check. As long as these issues are not corrected in the patch, it has my "nacked-by".

As you pointed out the need for separate patches for unrelated functionality, I have added:

- 0001-changes-for-block_size-value-being-sector_size.patch for the problem of the block size being reported incorrectly: https://dev.haiku-os.org/attachment/ticket/8990/0001-changes-for-block_size-value-being-sector_size.patch
- 0002-partition-offset-check-while-validating.patch for the partition offset check: https://dev.haiku-os.org/attachment/ticket/8990/0002-partition-offset-check-while-validating.patch
- 0003-Minor-code-cleaning-for-checking-conditions-.patch for the minor code clean-up that you suggested previously: https://dev.haiku-os.org/attachment/ticket/8990/0003-Minor-code-cleaning-for-checking-conditions-.patch

Kindly have a look at these patches.
comment:26 by , 8 years ago
IIRC this was discussed on the mailing list, and the outcome is that there seems to be some confusion about the use of the block_size field in the structure, leading to the partition add-ons using the wrong block size when running on an already formatted file system. We should fix the underlying issue and make sure the partitioning system gets the correct info reliably (i.e. the actual disk sector size, no matter what filesystem is currently on it). The patches do not address the issue properly, and as such, they would break use of MBR on disks with 4K sectors, which is an uncommon but valid use case.

So, the issue is still there, but the patches should be reworked to fix the actual issue (that the block_size in the partition info is overwritten with the block size of the file system inside it), rather than working around it.
I would like to start working on this bug.