# Anyone speak Cortex A53?



## Ceegen (Sep 28, 2020)

Okay so, as I understand it, the difference between RISC and CISC processors goes something like this:
- RISC:
   read-do-store
- CISC:
   do_x @ (mem_loc_1) & (mem_loc_2)

And so the assembly instructions for Arm processors differ in how a single cycle for an instruction is timed? So each read-do-store cycle of quickest time for, say, one command plus saving the result somewhere, is considered "one cycle"? Or is it each operation other than load/store ops, like logic and arithmetic operations on registers? And is this "lowest time in one cycle" definition what is known as 'atomic'?

Previous threads directed me to websites like this:
https://wiki.osdev.org/Real_mode_assembly_bare_bones - x86 aside, the concept behind it is understood.
And of course, the manual. 

Just general questions though, still learning, and using the Raspberry Pi 3 B+ has helped quite a bit. The RPi3 seems to operate differently, though: some kind of EEPROM seems to store a startup sequence that reads from a file in a certain format. I was wondering about the conceptual details behind this sequence, or whether it even matters if the processor commands can be known: could you just follow the steps in the osdev.org article and write bits to locations 0x0 to 0x4 for the first 4 bytes loaded from storage "directly" into wherever? Or something like that. Details are hazy.

Others have suggested just learning CISC instead of the Arm stuff. What pros and cons are there to choosing to learn one over the other, especially considering most things on tiny chips are super specialized? Is the idea that you interact with the hardware through memory at an ABI level, as the EFI-ish firmware (loaded from EEPROM on the RasPi) runs on boot to make sure it boots per manufacturer specs? Correct?


----------



## Ceegen (Sep 29, 2020)

More specifically, Arm's T32, or Thumb-2, instructions. I'm having trouble understanding how the 16-bit instructions/data are encoded, especially since the docs I've been reading indicate that the full 64-bit width is used regardless of state, but broken down into "sub-registers", as the processor can step down to 32-bit mode and then down to 16-bit mode. Eight bytes broken into lanes based on the position of the instructions and data.

x x x x _ x x x x  ; where x is a byte
Or represented in hex as
(0-F)(0-F) (0-F)(0-F) (0-F)(0-F) (0-F)(0-F) _ (0-F)(0-F) (0-F)(0-F) (0-F)(0-F) (0-F)(0-F)
And the manuals say encoding is from left to right, most significant bit to 0.
One 16-bit instruction listed in the documentation is "Software interrupt", given as:
15-[11011111][Value8]-0

This probably wouldn't work as a way to interface with a 32- or 64-bit kernel using 16-bit instructions, would it? "int 80" wouldn't possibly work?
"1101_1111_1000_0000" would be int 80 according to the docs, but where does this bit pattern go? The upper or lower bytes?
[instruction][data](i)[d](i)[d]...? (Where each [] is a 16-bit pattern).
Or
[d][d](i)(i)?
Or
(i)(i)[d][d]?
All the way up to 64-bit width or just a portion of that width?
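To make the question concrete, here's how I'd sketch the encoding in Python (assuming I'm reading the diagram right: fixed opcode in bits 15-8, an 8-bit immediate in bits 7-0, and little-endian byte order within each halfword):

```python
# Sketch of the 16-bit T32 "SVC" (software interrupt) encoding:
# bits 15-8 are the fixed opcode 1101_1111, bits 7-0 hold an 8-bit immediate.
def encode_svc(imm8: int) -> bytes:
    assert 0 <= imm8 <= 0xFF, "immediate must fit in 8 bits"
    insn = (0b1101_1111 << 8) | imm8           # 0xDF80 for "svc #0x80"
    # In little-endian memory the low byte of the halfword comes first,
    # so the immediate lands at the lower address and the opcode after it.
    return insn.to_bytes(2, byteorder="little")

print(encode_svc(0x80).hex())   # -> "80df"
```

So the "1101_1111_1000_0000" pattern would sit in memory as the two bytes 0x80, 0xDF, if I have the byte order right.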


----------



## mark_j (Sep 29, 2020)

Ceegen said:


> Just general questions though, still learning, and using the Raspberry Pi 3 B+ has helped quite a bit. The RPi3 seems to operate differently, though: some kind of EEPROM seems to store a startup sequence that reads from a file in a certain format. I was wondering about the conceptual details behind this sequence, or whether it even matters if the processor commands can be known: could you just follow the steps in the osdev.org article and write bits to locations 0x0 to 0x4 for the first 4 bytes loaded from storage "directly" into wherever? Or something like that. Details are hazy.



If you want to know the rationale behind the RPi's boot process, you need to go chat on the official Raspberry Pi forums. They made the decision to boot that way, with closed-source binary blobs. Or maybe take a look at u-boot's approach?
You may also join the freebsd-arm mailing list and ask people directly involved with the entire maze of ARM "standards" (choke... ahem). ARM is a slippery slope of design because of its use within SoCs, where anything goes. I envisage it will soon be closed-source anyway.
Bring on RISC-V!  (Alas, a full board will set you back a LOT: https://www.crowdsupply.com/microchip/hifive-unleashed-expansion-board)


----------



## a6h (Sep 29, 2020)

You are all over the place. Pick up some books on computer architecture and assembly language, preferably Intel/NASM. Study and practice them for a while. Then you will have enough knowledge to switch to, or choose to target, a different arch/uarch/platform.


----------



## Ceegen (Sep 30, 2020)

mark_j said:


> If you want to know the rationale behind the RPi's boot process, you need to go chat on the official Raspberry Pi forums.


I spent a lot of time searching the forums there and found lots of great information. But one of the more interesting posts there was locked, when the person was asking something along the same lines I asked a few posts up: they wanted to know what the hex tables for processor instructions were. Others were trying to direct this person to a development environment and "just uhhh use higher level languages", and tried explaining their rationale, just before it was locked for being irrelevant or something. It's like they want people to learn about computers, but not learn too much about computers.



> They made the decision to boot that way, with closed-source binary blobs Or maybe take a look at u-boot's approach?


An interesting thing about the RasPi 3B+ and its on-board EEPROM is that an internal note in one of the PDFs I was reading (a search yielded some discussion saying it wasn't supposed to be in the PDF) indicates that it can somehow be reprogrammed if you know microcode or something. I have a feeling the "switches" are capacitors that need the same instruction/data written to them 3 or 5 times in a row before the caps trigger and the new info is written to the EEPROM. I guess that involves (quite possibly) microcode and other things that aren't accessible, but it would be neat to play with. If nothing else I could say I bricked a RasPi, which they insist you can't do.



> You may also join the freebsd-arm mailing list and ask people directly involved with the entire maze of ARM "standards" (choke... ahem). ARM is a slippery slope of design because of its use within SoCs, where anything goes. I envisage it will soon be closed-source anyway.
> Bring on RISC-V!


True.


----------



## Ceegen (Sep 30, 2020)

a6h said:


> You are all over the place. Pick up some books on computer architecture and assembly language, preferably Intel/NASM. Study and practice them for a while. Then you will have enough knowledge to switch to, or choose to target, a different arch/uarch/platform.


Honestly, my funds are limited; minimum wage is what it is. So buying one of these books is a big risk/investment. Do you recommend something in particular that genuinely helped you? As long as I can get the information and a bit of hardware to go with it, my studies could accelerate. I am admittedly new to assembly, but not so new that the things I'm researching so far have been beyond understanding. (Maybe a few things, but progress is progress, damnit.)

I was trying to learn Arm because phones are a thing. Two of the phones that were given to me are much like the RasPi (also given to me), in that some things can't feasibly be changed. So getting an unlocked phone to play around with is probably a distant goal if manufacturers keep releasing super-secure, tamper-proof monoliths. (Contemplating a Pinephone.) It's whatever, but I had to start somewhere, and I tried starting with what I had in front of me: two phones and a RasPi, all with Cortex-A53s in them. Guess it will have to remain a dream. Trying to work with scraps and crumbs is slow progress.


----------



## ralphbsz (Sep 30, 2020)

Stop speculating. Read a book. Go to the nearest public library, check out and read (cover to cover) Hennessy and Patterson, "Computer Architecture: A Quantitative Approach". Once you have understood it ...



Ceegen said:


> Okay so, as I understand it, the difference between RISC and CISC processors goes something like this:


Super simplified theory:

- CISC: One instruction can be variable length; instruction streams can contain instructions of various numbers of bytes, which makes for strangeness on a 32- or 64-bit machine. RISC: All instructions are one word (say, 32 bits long), making instruction reading and decoding much easier.
- CISC: Instructions are decoded into internal microcode, which is often VLIW, and that microcode is executed by CPU-internal hardware; microcode can contain loops. RISC: Instructions are executed directly.
- CISC: One single instruction can perform multiple memory accesses: loads and stores, or reads and writes, or even string operations, big memory copies, string translation. And it can simultaneously operate the ALU (meaning do arithmetic and logic), and in some cases even branch. Look at some of the crazy string operations in the Z80, VAX or NS32032 instruction sets to see examples of exceedingly powerful (but complex!) operations. RISC: One instruction will do one load, or one store, or one arithmetic/logic operation, or one branch.
- CISC: Some instructions execute extremely fast (in one clock cycle, just enough to read the instruction); others can take extremely long, in particular string instructions. RISC: Every instruction takes exactly one clock cycle.

The practice today is totally different. We have explicit VLIW processors that you can buy (Itanium). Modern CISC CPUs have many instructions that are extremely fast, often the fastest ones (there is a reason that Intel and AMD with their CISC x86 CPUs dominate the supercomputer market). Modern RISC CPUs have interestingly complex combination instructions, and some that need multiple clock cycles.
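The fixed- versus variable-length point is easy to show in code. A toy sketch in Python (the one-byte opcodes 0x90, 0xB8, 0xC3 are real x86 nop / mov-imm32 / ret, but the three-entry length table is a drastic simplification, not a real decoder):

```python
# Toy illustration of the decode difference between the two styles.

# RISC-style: every instruction is exactly one 4-byte word, so chopping
# the instruction stream into instructions needs no decoding at all.
def split_risc(stream: bytes) -> list:
    return [stream[i:i + 4] for i in range(0, len(stream), 4)]

# CISC-style: the first byte determines how long the instruction is, so
# you must (partially) decode each instruction just to find the next one.
# Hypothetical length table; the opcodes are x86-flavored for familiarity.
LENGTH = {0x90: 1, 0xB8: 5, 0xC3: 1}   # nop, mov eax/imm32, ret

def split_cisc(stream: bytes) -> list:
    out, i = [], 0
    while i < len(stream):
        n = LENGTH[stream[i]]
        out.append(stream[i:i + n])
        i += n
    return out

code = bytes([0x90, 0xB8, 0x01, 0x00, 0x00, 0x00, 0xC3])  # nop; mov eax,1; ret
print(split_cisc(code))   # three instructions of lengths 1, 5, 1
```

A pipelined CPU has to do the CISC version several instructions ahead, every cycle, which is exactly why fixed-width fetch was such a win for the early RISCs.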

Modern CPUs (both RISC and CISC) are deeply pipelined, and are decoding several instructions, executing several instructions, updating memory from instructions executed a while back, and speculatively executing future instructions of which they don't even know whether they will be needed, all at the same time. They often execute a later instruction before finishing an earlier instruction (which leads to fascinating issues when the later instruction needs the result from the earlier one).

Actually, when I say "modern", that's a bit of a lie: real computers (what we today call "mainframes") have used all these techniques since the late 1960s, which is why they were so insanely fast. Look up the CDC 6xxx and the IBM 360/91 sometime. When we went to microprocessors in the late 70s and 80s, a lot of that technology was lost, which is one of the reasons that CISC machines of the 80s and 90s were slow. That in turn led computer architects to pick up an idea pushed as early as the late 60s by one of the greatest computer architects of all time, John Cocke: go to RISC machines, and make the CPU super simple but super fast. Which then led to giant religious wars, and the founding of companies and architectures such as MIPS, SPARC, PowerPC, HP-PA, and finally ARM. And finally VLIW and Transmeta as a (crazy and pointless, but initially promising) idea to bridge the religious divide.

In the end, both sides lost, but they made a big pile of money in the process: today the only thing that's left is CISC on the desktop/server side (nearly completely dominated by x86 and x86-64) and RISC on the handheld side. Although neither is religiously pure today, as I said above.



> Previous threads directed me to websites like this:


To understand that, you first need to read a textbook. Then you should spend about 20 to 50 hours programming in assembly on both sides. Let me think ... when I took my operating systems classes, I did about a dozen homework problems in 360 assembly and about a dozen in Cyber assembly, so the 20 hours is an absolute minimum.

I bet that good instructions for assembly programming can be found online. It's easiest to do today by using completely emulated hardware that has a good debugger built in. Actually, maybe writing some programs on the MMIX hardware might be a good exercise.



> The RPi3 seems to operate differently, though: some kind of EEPROM seems to store a startup sequence ...


At this point, you are about a half dozen layers away from instruction sets. Booting a computer is a very complex operation (on modern complex hardware), and there is a huge semantic gap between instruction set and what really happens when the CPU powers up. I would bet it takes a few dozen pages to describe it in detail. And in the case of the Raspberry Pi, not all of it may be documented in public; I know that the company that makes the SoC keeps quite a few things under NDA, only shared with the firmware / driver / OS / hardware teams.



> Details are hazy.


They are hazy because you are looking at an elephant with an electron microscope. An electron microscope is a great tool if you are interested in something very small and simple, like a bacterium. If you are trying to understand how a computer starts, at the level of bits in memory, addresses, and instructions, you are better off using an 8-bit microprocessor. Take the Z80, for example: it always has (E)PROM (the predecessor of flash) mapped at memory address zero. When the processor is reset (which includes power-up), it starts running whatever instruction is at memory address 0x0000. That is typically hardware setup code. Once the hardware is initialized and whatever OS-like thing it needs has been copied to the correct place in memory (from disk, floppy, or EPROM), it typically unmaps the PROM from address 0, substitutes RAM, and jumps to the "OS" of sorts (which for many microprocessors is an internal monitor program). On a machine of this complexity, it is possible to see how instructions and addresses work with a few hours of effort. The RPi is already too complex as a teaching tool.
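That map-ROM-at-zero, then swap-in-RAM dance can be sketched as a toy model (not a real Z80 emulator; the ROM bytes below happen to be real Z80 opcodes, di followed by jp 0x8000):

```python
# Toy model of the Z80-style reset sequence: (E)PROM shadows address 0
# at power-up, and is later banked out in favor of RAM.
class ToyMachine:
    def __init__(self, rom: bytes, ram_size: int = 0x10000):
        self.rom = rom
        self.ram = bytearray(ram_size)
        self.rom_mapped = True    # after reset, ROM shadows low addresses
        self.pc = 0x0000          # reset vector: execution starts at 0

    def read(self, addr: int) -> int:
        if self.rom_mapped and addr < len(self.rom):
            return self.rom[addr]
        return self.ram[addr]

    def unmap_rom(self):
        # the setup code, once finished, banks the ROM out to expose RAM
        self.rom_mapped = False

m = ToyMachine(rom=bytes([0xF3, 0xC3, 0x00, 0x80]))  # di; jp 0x8000
print(hex(m.read(0x0000)))   # first fetch comes from ROM: 0xf3
m.unmap_rom()
print(hex(m.read(0x0000)))   # the very same address now reads RAM: 0x0
```

The whole trick is that "address 0" means two different chips at two different moments, which is the part that no instruction-set manual will tell you.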



> Others have suggested just to learn cisc instead of arm stuff.


If you want to learn assembly programming, there are two schools of thought. One is: Find the most comfortable, orthogonal, and powerful instruction set, because you can get a lot done with few instructions. For that, the NS32032 would be ideal. Two is: Find the simplest instruction set that is functional, because there is the least cruft to learn, and you learn how to build complexity from small pieces.


----------



## mark_j (Sep 30, 2020)

Ceegen said:


> I spent a lot of time searching the forums there and found lots of great information. But one of the more interesting posts there was locked, when the person was asking something along the same lines I asked a few posts up: they wanted to know what the hex tables for processor instructions were. Others were trying to direct this person to a development environment and "just uhhh use higher level languages", and tried explaining their rationale, just before it was locked for being irrelevant or something. It's like they want people to learn about computers, but not learn too much about computers.


Well, honestly, the best and cheapest means of playing with ARM is the Raspberry Pi. I think, though, you're confused about the architecture versus the implementation. The architecture of ARM is essentially ARM, regardless of the SoC. The SoC (implementation) varies greatly depending on the manufacturer: Broadcom, i.MX, etc. It is just how they implement all the ancillary bits around the CPU, such as USB, SATA, SD, RAM, etc.
Some useful links:

https://iitd-plos.github.io/col718/ref/arm-instructionset.pdf
Documentation – Arm Developer: https://developer.arm.com
However, if you're set on learning the nuts-and-bolts of the ARM systems, then something like the Raspberry Pi is a *BAD CHOICE.* You need a small, easily programmable development board to test out your newly acquired ARM skills. Pick a CPU, Cortex-?? and go develop on it.




Ceegen said:


> An interesting thing about the RasPi 3B+ and its on-board EEPROM is that an internal note in one of the PDFs I was reading (a search yielded some discussion saying it wasn't supposed to be in the PDF) indicates that it can somehow be reprogrammed if you know microcode or something.


It sounds like you're referring to OTP, which is actually PROM, not EEPROM. The acronym means One Time Programmable. BUT, this is not an ARM thing, this is a SoC thing: it is provided by the SoC manufacturer, because they provide all the interfaces into the CPU. The CPU on the Raspberry Pi, for example, is just a dumb Cortex-A??, but the smarts in how it deals with you, the user, are inside the Broadcom chipset. That provides the ability to switch on *PERMANENTLY* the booting of the system via USB rather than SD. https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/bootflow.md



Ceegen said:


> I have a feeling the "switches" are capacitors that need the same instruction/data written to them 3 or 5 times in a row before the caps trigger and the new info is written to the EEPROM. I guess that involves (quite possibly) microcode and other things that aren't accessible, but it would be neat to play with. If nothing else I could say I bricked a RasPi, which they insist you can't do.


A capacitor may indeed be used to quickly discharge and 'burn' the PROM. That is just supposition though on my behalf.


----------

