Accessing a 32-bit variable on a byte-by-byte basis


#1

Hi,

What is the best way to represent a 32-bit value that I need to access on a byte-by-byte basis?

Is it best to use an array like this example…

// myFunc XORs two 4-byte blocks byte by byte and returns the result.
func myFunc(blk1, blk2 [4]uint8) [4]uint8 {
	var blkOut [4]uint8

	blkOut[0] = blk1[0] ^ blk2[0]
	blkOut[1] = blk1[1] ^ blk2[1]
	blkOut[2] = blk1[2] ^ blk2[2]
	blkOut[3] = blk1[3] ^ blk2[3]

	return blkOut
}

…or perhaps a struct of four uint8?
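
For concreteness, the struct version I have in mind would be something like this (just a sketch, the names are arbitrary):

type block struct {
	b0, b1, b2, b3 uint8
}

func myFuncStruct(blk1, blk2 block) block {
	return block{
		b0: blk1.b0 ^ blk2.b0,
		b1: blk1.b1 ^ blk2.b1,
		b2: blk1.b2 ^ blk2.b2,
		b3: blk1.b3 ^ blk2.b3,
	}
}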

How will the tools synthesize array variables? Always as a memory of some type?

thanks,
Mark.


#2

No takers on this one?


#3

Hi Mark,

There isn’t really a difference at this level, since small structs or arrays are converted to concatenated bit vectors rather than RAM. This means that as long as you’re using constant index values there is no additional overhead in accessing the data (variable index values infer more logic, so are best avoided where possible). Reads and disjoint writes to slices of the bit vector can be carried out in parallel, but a read after a write, or consecutive writes, to overlapping slices must preserve sequential ordering, so they carry a time penalty.
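
Your example already shows the constant-index case. Just to illustrate the other side, something like this (purely an illustration, the names are mine) is the variable-index pattern that infers the extra selection logic:

// The byte position i isn't known until runtime, so the tools have to
// infer selection logic (effectively a multiplexer) over the bit vector.
func xorAt(blk1, blk2 [4]uint8, i int) uint8 {
	return blk1[i] ^ blk2[i]
}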

In that particular example, the tools should recognize that blkOut is just an intermediate variable, so after optimization it will effectively disappear and you will be left with a pipeline structure running straight from blk1 and blk2 to the function’s output.

For splitting larger data types into bytes, constant shifts and casts should infer no additional overhead after synthesis, and the same goes for reconstructing larger data types from constant shifts and bitwise OR operations. However, as things stand the tools may insert unnecessary pipeline stages around the bitwise OR operators - something we’re looking into with some new operator optimizations.
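
As a sketch of the split/reconstruct pattern I mean (illustrative code only, not from your post):

// Split a 32-bit word into bytes using constant shifts and casts.
func splitWord(w uint32) [4]uint8 {
	return [4]uint8{
		uint8(w),       // bits 7:0
		uint8(w >> 8),  // bits 15:8
		uint8(w >> 16), // bits 23:16
		uint8(w >> 24), // bits 31:24
	}
}

// Reconstruct the word with constant shifts and bitwise ORs.
func joinWord(b [4]uint8) uint32 {
	return uint32(b[0]) |
		uint32(b[1])<<8 |
		uint32(b[2])<<16 |
		uint32(b[3])<<24
}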

Chris.


#4

Hi Chris,

thanks for the reply. Could you quantify what you mean by “small structs or arrays”? How big do they need to be before they get converted to RAM?

thanks,
Mark


#5

Hi Mark,

At the moment only arrays where the total number of bits stored exceeds 512 bits are converted to RAM - anything less is treated as a bit vector. This isn’t set in stone, since we may adjust the default value as we get more real-world code to test with. It’s also possible we may make this an advanced user option in the future.

Chris.


#6

Thanks for the reply Chris.

My particular use case is for an array that is exactly 512 bits - is that going to be a concatenated bit vector, or a RAM?

What if I defined it as a struct of uint8? Would that guarantee that it gets treated as a vector rather than a RAM?

cheers,
Mark


#7

Hi Mark,

The descriptive text may have been a bit ambiguous, but the test for converting an array to RAM is ‘nbits > 512’ - so your example would be the largest array that is still treated as a vector. If you want to force the vector behaviour for larger types then defining them as a struct would be one way of doing it.
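
By way of illustration (the sizes and names here are just examples, and the struct layout is only one possible way of doing it):

var block [64]uint8 // 64 * 8 = 512 bits: nbits > 512 is false, so still a bit vector
var table [65]uint8 // 65 * 8 = 520 bits: over the limit, converted to RAM

// Wrapping the storage in a struct keeps each field within the limit,
// so the whole thing stays a bit vector rather than becoming a RAM.
type wide struct {
	lo [64]uint8
	hi [64]uint8
}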

Chris.