My assumptions (please correct me if anything in this part is wrong, including any misuse of jargon)
In MIPS, the instruction `add $t1, $t2, $t3` translates to the 32-bit word `00000001010010110100100000100000`. The tokens `add`, `t1`, `t2`, and `t3` are, I suppose, first of all stored as ASCII values at the hardware level. For example, `t1` is the two ASCII codes 116 (`t`) and 49 (`1`) in binary. But then these ASCII values have to map onto the fields of the instruction: per the MIPS opcode specification, `add` corresponds to the last 6 bits, `100000`, `t1` corresponds to the `01001` bits in the middle, and so on.
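
To make my mental model concrete, here is a rough C sketch (hypothetical, not the real assembler; `encode_rtype` is just a name I made up) of how I picture the 32-bit word being packed once every field is already known as a number. The field widths and the `funct` value `100000` (decimal 32) for `add` are taken from the MIPS reference:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: pack the fields of an R-type instruction
 * into a 32-bit word by shifting each field into its position. */
uint32_t encode_rtype(uint32_t rs, uint32_t rt, uint32_t rd,
                      uint32_t shamt, uint32_t funct)
{
    return (0u    << 26)   /* opcode: 000000 for R-type          */
         | (rs    << 21)   /* first source register              */
         | (rt    << 16)   /* second source register             */
         | (rd    << 11)   /* destination register               */
         | (shamt << 6)    /* shift amount (0 for add)           */
         | funct;          /* 100000 (32) selects add            */
}

int main(void)
{
    /* add $t1, $t2, $t3 -> rd = $t1 (9), rs = $t2 (10), rt = $t3 (11) */
    uint32_t word = encode_rtype(10, 11, 9, 0, 32);
    printf("%08x\n", word);  /* prints 014b4820, the binary word above */
    return 0;
}
```

That part I can follow. What I don't see is how the assembler gets from the characters `t1` to the number 9 in the first place.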
Question
Assuming my assumptions are correct, how exactly does the assembler take the ASCII codes 116 and 49 and map them to `01001`? In the nand2tetris course I've been writing an assembler in C, which is really just a text parser. But C itself gets translated into assembly, so the actual assembler can't have relied on it; hence my confusion.
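
What I imagine (and please correct me if this is the wrong picture) is that the assembler never does arithmetic on the ASCII codes at all: it just compares the characters of the register name against a built-in table and uses the index of the matching entry. A minimal C sketch under that assumption (`reg_number` is my own invented helper):

```c
#include <stdio.h>
#include <string.h>

/* Assumed approach: a table of register names in numerical order,
 * so the index of a name is its 5-bit register number. */
static const char *reg_names[32] = {
    "zero", "at", "v0", "v1", "a0", "a1", "a2", "a3",
    "t0", "t1", "t2", "t3", "t4", "t5", "t6", "t7",
    "s0", "s1", "s2", "s3", "s4", "s5", "s6", "s7",
    "t8", "t9", "k0", "k1", "gp", "sp", "fp", "ra"
};

int reg_number(const char *name)
{
    for (int i = 0; i < 32; i++)
        if (strcmp(name, reg_names[i]) == 0)
            return i;      /* "t1" matches index 9, i.e. 01001 */
    return -1;             /* not a register name */
}

int main(void)
{
    printf("%d\n", reg_number("t1"));  /* prints 9 */
    return 0;
}
```

Is that roughly what a real assembler does, and if so, how was the very first assembler able to do this kind of text comparison without already having an assembler (or a compiler) to build it with?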