Compare commits


142 Commits

Author SHA1 Message Date
Berkus Decker 23705cd7ed chore(docs): Fix typo 2023-12-22 14:51:43 +02:00
Berkus Decker fbfcfcff8a chore(docs): Try to generate documentation using tabnine 2023-12-12 15:02:39 +02:00
Berkus Decker bcba5b7a4d wip: chainboot builds! 2023-12-10 04:44:44 +02:00
Berkus Decker 79f859b576 wip: nucleus builds! 2023-12-10 04:44:36 +02:00
Berkus Decker 7c76dbded1 sq: refactor build system 2023-11-20 02:40:17 +02:00
Berkus Decker 6e3e618c12 wip: update lock file 2023-11-20 02:32:25 +02:00
Berkus Decker 1ad51993d0 chore(build): Add emoji to command output 2023-11-18 23:15:03 +02:00
Berkus Decker c6e466e914 wip: refactor build system
Reduce redundancy, make naming more clear.
Add ttt target.
2023-11-18 14:29:39 +02:00
Berkus Decker a1b62fbd54 sq: sorted commands list 2023-11-18 14:29:39 +02:00
Berkus Decker e09214f819 wip: bump deps 2023-11-18 14:29:38 +02:00
Berkus Decker 32dc32ff46 chore: Drop unused fehler dependency 2023-11-18 14:29:38 +02:00
Berkus Decker 9c39cb698e wip: adding ttt 2023-11-18 14:29:17 +02:00
Berkus Decker 2367376ba5 fix: 🐛 Remove unused text 2023-11-12 01:22:54 +02:00
Berkus Decker 90e9390cbc fix: 🐛 Fix chainboot linker script
Part 2: Add linker dependency.
2023-11-12 01:20:18 +02:00
Berkus Decker 4a22e91d77 fix: 🐛 Fix chainboot linker script 2023-11-12 01:15:32 +02:00
Berkus Decker 89943857af fix: 🐛 Update rpi4 target to use virtual MMIO bases 2023-11-12 01:15:32 +02:00
Berkus Decker 90d5d96098 fix: 🐛 Rename RPi4 imports 2023-11-12 01:15:32 +02:00
Berkus Decker bb38addd83 fix: 🐛 Put BOOT_CORE_ID const in platform config 2023-11-12 01:15:32 +02:00
Berkus Decker 8c3b7d3d0f build(deps): 🛠 Bump dependencies 2023-11-12 01:15:32 +02:00
Berkus Decker 2bbf3d4d45 build(deps): 🛠 Bump dependencies 2023-08-21 01:01:55 +03:00
Berkus Decker 84b596b2db refactor: 📦 Prepare for future Mailbox mod
Mailbox mod is disabled for now.
Needs to become a driver.
2023-08-12 03:29:02 +03:00
Berkus Decker c40797ed19 refactor: 📦 Prepare for future Power mod
Power mod is disabled for now.
Needs to become a driver.
2023-08-12 03:29:02 +03:00
Berkus Decker cfa9b61429 feat: Improve GPIO implementation
* Add locking
* Implement Pin control via locked GPIO
2023-08-12 03:29:02 +03:00
Berkus Decker 134d7c530f feat: Update linker script
* Add MMIO remap region
* Move script to appropriate place
2023-08-12 03:29:02 +03:00
Berkus Decker e8a587ea7b fix: 🐛 Don't overflow calculations in align_up 2023-08-12 03:29:02 +03:00
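The overflow named in the `align_up` fix is the classic `(value + align - 1) & !(align - 1)` pattern, which wraps for values near `usize::MAX` even when the aligned result would fit. A minimal sketch of an overflow-safe variant (hypothetical; the repo's actual implementation may differ):

```rust
/// Round `value` up to the next multiple of `align` (a power of two)
/// without spurious overflow: the naive `(value + align - 1) & !mask`
/// wraps whenever `value > usize::MAX - align + 1`, even if `value`
/// is already aligned.
fn align_up(value: usize, align: usize) -> usize {
    debug_assert!(align.is_power_of_two());
    let mask = align - 1;
    if value & mask == 0 {
        value // already aligned, no arithmetic needed
    } else {
        (value | mask) + 1 // overflows only if the result truly cannot fit
    }
}

fn main() {
    assert_eq!(align_up(0x1001, 0x1000), 0x2000);
    // Aligned value near usize::MAX: the naive formula would overflow here.
    assert_eq!(align_up(usize::MAX & !0xFFF, 0x1000), usize::MAX & !0xFFF);
    println!("ok");
}
```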
Berkus Decker a656a9bdd7 feat: Add kernel and MMIO mapping support
Not all the memory is mapped now, only kernel
sections and MMIO remap space
are mapped on the go.
2023-08-12 03:29:02 +03:00
Berkus Decker 028866fdbb test: 🚨 Don't spam QEMU console when testing 2023-08-12 03:29:02 +03:00
Berkus Decker 287d04ea11 chore: ♻️ Improve scope usage 2023-08-12 03:29:02 +03:00
Berkus Decker f3b65fa44c fix: 🐛 Fix Ubuntu LTS suddenly not able to install 2023-08-08 00:44:31 +03:00
Berkus Decker 0d70caa271 feat: Enable interrupts for PL011 UART 2023-08-08 00:44:31 +03:00
Berkus Decker 0ef9ca0dc6 refactor: 📦 Disable MiniUART driver 2023-08-08 00:44:31 +03:00
Berkus Decker decdd0c56d refactor: 📦 Prepare exception handling code 2023-08-08 00:44:31 +03:00
Berkus Decker 0f30bf00aa refactor: 📦 Restructure code
All modules are modified to unified model
(mod.rs file in module directory).
Arch imports use modules from arch/ namespace
explicitly as arch_xxx.
2023-08-08 00:44:31 +03:00
Berkus Decker 577b0b74ee build(deps): 🛠 Bump dependencies 2023-08-08 00:44:31 +03:00
Berkus Decker 7796cfc646 chore: ♻️ Update dividers 2023-08-01 16:59:42 +03:00
Berkus Decker f4e13be125 chore: ♻️ Update snafu features 2023-08-01 16:59:42 +03:00
Berkus Decker 77d04d3d67 refactor(cleanup): 📦 Clean up MiniUART code 2023-08-01 16:59:42 +03:00
Berkus Decker d0e4334afe refactor(cleanup): 📦 Remove unused code 2023-08-01 16:59:42 +03:00
Berkus Decker 2cf5e1dea8 refactor: 📦 Update PL011 UART 2023-08-01 16:59:42 +03:00
Berkus Decker 625fc496ce refactor: 📦 Share ConsoleOps implementation 2023-08-01 16:59:42 +03:00
Berkus Decker 4733c012ad feat: Print panic message with details 2023-08-01 16:59:42 +03:00
Berkus Decker c3f23108b9 feat: Print more boot info
Temporarily play around with time, loop with
1 second delays.
2023-08-01 16:59:42 +03:00
Berkus Decker 9b715f6927 feat: Use actual time for delays in GPIO init 2023-08-01 16:59:42 +03:00
Berkus Decker fe97a116df refactor: 📦 Rename GPIO registers 2023-08-01 16:59:42 +03:00
Berkus Decker fc01f03714 fix: 🐛 Read actual timer frequency 2023-08-01 16:59:42 +03:00
Berkus Decker 0f435d7152 feat: Add info!/warn! to plain println!
These functions additionally log current time.
2023-08-01 16:59:42 +03:00
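A host-side sketch of what such time-prefixed wrappers around `println!` can look like (names assumed; the kernel's real macros would read the architectural timer rather than `std::time`):

```rust
use std::sync::OnceLock;
use std::time::Instant;

// Boot reference point; a real kernel would read the ARM generic timer.
static BOOT: OnceLock<Instant> = OnceLock::new();

// info!/warn_! behave like println! but prefix the elapsed time.
macro_rules! info {
    ($($arg:tt)*) => {{
        let t = BOOT.get_or_init(Instant::now).elapsed();
        println!("[  {:>3}.{:06}] {}", t.as_secs(), t.subsec_micros(),
                 format_args!($($arg)*));
    }};
}

macro_rules! warn_ {
    ($($arg:tt)*) => {{
        let t = BOOT.get_or_init(Instant::now).elapsed();
        println!("[W {:>3}.{:06}] {}", t.as_secs(), t.subsec_micros(),
                 format_args!($($arg)*));
    }};
}

fn main() {
    info!("kernel alive");
    warn_!("time-prefixed output, host sketch only");
}
```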
Berkus Decker 84fbdcc707 feat: Add time support 2023-08-01 16:59:42 +03:00
Berkus Decker 33418e79ab refactor: 📦 Refactor command_prompt 2023-08-01 16:59:42 +03:00
Berkus Decker b1d54d3b44 chore: ♻️ Disable asm output in QEMU runner
But keep it for qemu-gdb.
2023-08-01 16:59:42 +03:00
Berkus Decker 97145d8a8e build(deps): 🛠 Bump dependencies 2023-07-29 04:08:18 +03:00
Berkus Decker 1be3f9e2e0 fix: 🐛 Disable outdated test installers 2023-07-29 04:08:18 +03:00
Berkus Decker ebb73e5cb0 chore: ♻️ Fix rustfmt and clippy complaints 2023-07-29 04:08:18 +03:00
Berkus Decker 7de1af043e fix: 🐛 Add RUST_STD to clippy invocation
Combine both parts of RUST_STD and RUST_STD_FEATURES into a single
option, easier to control, harder to miss.
2023-07-29 04:08:18 +03:00
Berkus Decker ce3b94e86e fix: 🐛 Fix 2/2 for objcopy unaligned sections bug
This one restores rust-objcopy but explicitly aligns
the beginning of each section. This avoids incorrect
binary output (.rodata section was offset 10-12 bytes
because of unaligned section start).
2023-07-29 04:08:18 +03:00
Berkus Decker d2ed7c21ac fix: 🐛 Fix 1/2 for objcopy unaligned sections bug
Due to a bug in llvm-objcopy sections
must be explicitly aligned, see
https://github.com/llvm/llvm-project/issues/58407
and
https://github.com/rust-lang/rust/issues/102983

This fix just replaces rust-objcopy with a GNU
binutils counterpart from `brew install
aarch64-elf-binutils`. Next commit will do a
less intrusive fix.
2023-07-29 04:08:18 +03:00
Berkus Decker 994ea39760 fix: 🐛 Update linker script w/ segment attributes.
Double the size of the kernel (by including all
the necessary sections).
2023-07-29 04:08:18 +03:00
Berkus Decker b8e9617b06 chore: ♻️ Add source dividers template 2023-07-29 04:08:18 +03:00
Berkus Decker 13d6b2a037 chore: ♻️ Add QEMU tracing options for aarch64
Disabled for now, need to try them out.
2023-07-29 04:08:18 +03:00
Berkus Decker 157604d7c9 chore: ♻️ Drop bitcode embedding 2023-07-29 04:08:18 +03:00
Berkus Decker d37495bc01 fix: 🐛 Synchronise used features 2023-07-29 04:08:18 +03:00
Berkus Decker 9710866524 feat: Update panics, exit QEMU on exceptions 2023-07-29 04:08:18 +03:00
Berkus Decker 0e1c6669ac refactor: 📦 Use better code structure
As inspired by andre-richter's tutorials.
2023-07-29 04:08:18 +03:00
Berkus Decker 46d0c4cffc fix: 🐛 Add missing exception vectors start symbol 2023-07-29 04:08:18 +03:00
Berkus Decker 5356de7cbb fix: 🐛 Disable some make tasks
Allows running gdb and hopper tasks.
Enable QEMU task.
2023-07-29 04:08:18 +03:00
Berkus Decker 45e18de842 refactor: 📦 Rearrange kernel_main 2023-07-29 04:08:18 +03:00
Berkus Decker d78bc67d8f fix(build): 🐛 Allow building qemu-gdb target 2023-07-29 04:08:18 +03:00
Berkus Decker 1ca54d9ed6 fix(console): 🐛 Fix unicode character output
(At the expense of about 3kb code size.)
2023-07-29 04:08:18 +03:00
Berkus Decker 2c91e685bd fix(console): 🐛 Fix console I/O on the host side 2023-07-29 04:08:18 +03:00
Berkus Decker fa725c51cb fix: 🐛 Update cargo resolver to version 2 2023-07-29 04:08:18 +03:00
Berkus Decker e77c65632b chore: ♻️ Omit wip commits from the changelog
Add sq commits type for "to squash".
2023-07-29 04:08:18 +03:00
Berkus Decker b1bbdf087a feat: Use gdbgui for debug 2023-07-29 04:08:18 +03:00
Berkus Decker dfbd424bde chore: ♻️ Add sparkly magic 2023-07-29 04:08:18 +03:00
Berkus Decker 94d23a6a47 refactor: 📦 kernel_main should be the main entry point 2023-07-29 04:08:18 +03:00
Berkus Decker d6887bccee refactor(build): 📦 Use single gdb-config command 2023-07-29 04:08:18 +03:00
Berkus Decker 2313b0cf97 fix: 🐛 Make sdeject command more useful 2023-07-29 04:08:18 +03:00
Berkus Decker df135952e9 build(deps): 🛠 Bump dependencies 2023-07-29 04:08:18 +03:00
Berkus Decker 1bcbe3271a refactor: 📦 Replace cortex-a with aarch64-cpu 2023-07-29 04:08:18 +03:00
Berkus Decker b1bf9dc09d fix: 🐛 Restore libmachine tests
To make unit tests work we build libmachine as a
binary with test-runner.
2023-07-29 04:08:18 +03:00
Berkus Decker 78a864c433 refactor(linker): 📦 Share exception handlers 2023-07-29 04:08:18 +03:00
Berkus Decker 4598330506 refactor: 📦 Convert zellij config
Auto-close panes on quit.
2023-07-29 04:08:18 +03:00
Berkus Decker afbb317403 refactor: 📦 Improve boot code structure
Rename sections to not conflict during link.
Update linker script docs to align on PAGE_SIZE.
2023-07-29 04:08:18 +03:00
Berkus Decker 12f51399df feat: Do a Rust-only chainloader! 2023-07-29 04:08:18 +03:00
Berkus Decker 0cc683a50f refactor: 📦 Fix new clippy errors 2023-07-29 04:08:18 +03:00
Berkus Decker 227761c575 build(ci): 🛠 Add new lint task 2023-07-29 04:08:18 +03:00
Berkus Decker a4fea833bb fix: 🐛 Fix zellij layout path argument 2023-07-29 04:08:18 +03:00
Berkus Decker e95b01104a refactor(console): 📦 Improve console code 2023-07-29 04:08:18 +03:00
Berkus Decker e228a1cff4 chore: ♻️ Fix typos 2023-07-29 04:08:18 +03:00
Berkus Decker 4d8048f3d0 refactor(gpio): 📦 Refactor gpio code
Introduce changes to support new tock-registers
and rename the fields finally.
2023-07-29 04:08:18 +03:00
Berkus Decker 9660347688 docs: 📚 Update readme docs 2023-07-29 04:08:18 +03:00
Berkus Decker f964fea4c3 docs: 📚 Update safety docs 2023-07-29 04:08:18 +03:00
Berkus Decker 61762ccbf6 feat(qemu): Print QEMU run options 2023-07-29 04:08:18 +03:00
Berkus Decker 97ef3d355f build(deps): 🛠 Upgrade clap 2023-07-29 04:08:18 +03:00
Berkus Decker 526d9fa46d build(deps): 🛠 Bump dependencies 2023-07-29 04:08:18 +03:00
Berkus Decker dae26262bc feat(boot): Replace r0 dependency
Use pointer provenance to guarantee absence of UBs.
2023-07-29 04:08:18 +03:00
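`r0`-style startup code zeroes `.bss` through raw pointers; the provenance-friendly way is to derive every pointer (including the end bound) from a single base instead of materialising unrelated addresses. A hypothetical host-side sketch of the pattern:

```rust
// Zero a memory range via raw pointers while keeping a single
// provenance chain: `end` is derived from `start` with `add`,
// so the comparison and all writes stay within one allocation's
// provenance. Illustrative only; real .bss zeroing works on
// linker-provided bounds in a no_std context.
fn zero_range(buf: &mut [u8]) {
    let start = buf.as_mut_ptr();
    let len = buf.len();
    let mut p = start;
    let end = unsafe { start.add(len) }; // same provenance as `start`
    while p < end {
        unsafe {
            p.write_volatile(0);
            p = p.add(1);
        }
    }
}

fn main() {
    let mut data = [0xAAu8; 16];
    zero_range(&mut data);
    assert!(data.iter().all(|&b| b == 0));
    println!("zeroed");
}
```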
Berkus Decker 568fdcb649 build(deps): 🛠 Bump dependencies 2023-07-29 04:08:18 +03:00
Berkus Decker 97fc7f6b3d feat(qemu): Generate QEMU logs 2023-07-29 04:08:18 +03:00
Berkus Decker 9b35283ca6 refactor(clippy): 📦 Fix clippy error with matches!() 2023-07-29 04:08:18 +03:00
Berkus Decker 3fd8c16b16 Merge pull-request from metta-systems:misc/updates-and-fixes to develop
Misc updates and fixes
2022-06-11 02:31:02 +03:00
Berkus Decker 9ac097c3cf fix: 🐛 Fix warnings on newer rust toolchain 2022-06-11 01:44:58 +03:00
Berkus Decker 5b0dbbfb8f build(ci): 🛠 Depend test runs on clippy results 2022-06-11 01:42:03 +03:00
Berkus Decker a27e4b0661 build(deps): 🛠 Bump dependencies 2022-06-11 00:25:21 +03:00
Berkus Decker b4fcedc5e0 build(deps): 🛠 Add update-all-dependencies command 2022-06-11 00:25:21 +03:00
Berkus Decker 886cd0a18d fix: 🐛 Allow executing gdb from cargo-make
It was failing before because no tty was available.
2022-06-11 00:25:21 +03:00
Berkus Decker fc90fde4f0 feat: Add qemu-cb-gdb target 2022-06-11 00:25:21 +03:00
Berkus Decker b52c63796c fix: 🐛 Set GDB breakpoints by physical address 2022-06-11 00:25:21 +03:00
Berkus Decker 31d0ed9c57 chore: ♻️ Set release tags prefix 2022-06-11 00:25:21 +03:00
Berkus Decker aa00713049 chore: ♻️ Ignore non-conventional merge commits 2022-06-11 00:12:04 +03:00
Berkus Decker aa1356da43 chore: ♻️ Add wip conventional commit type 2022-06-11 00:07:16 +03:00
Berkus Decker f97e75d3bd Merge pull-request from metta-systems:fix/nm-command to develop
Fix nm invocation
2022-05-13 01:01:24 +03:00
Berkus Decker 67db178c6f fix: 🐛 Invoke nm properly 2022-05-08 23:21:33 +03:00
Berkus Decker 2d5ea676cd Merge pull-request from metta-systems:fix/enable-mmu to develop
Fix MMU enable code
Refactor MMU code structure, add some improvements.

- [x] Build and test on real RPi4.

2022-05-08 21:45:54 +03:00
Berkus Decker bc0cc2d93d fix: 🐛 Allow clippy warning 2022-05-08 21:08:29 +03:00
Berkus Decker ab95de393b fix: 🐛 Map VC memory to make `disp` command work 2022-05-08 12:15:21 +03:00
Berkus Decker ddf6d09136 feat: Switch mailboxes to correct DMA-backed storage by default
Allocate DmaBackedMailboxStorage out of DMA_ALLOCATOR.
Replace DMA bump_allocator with buddy_alloc.
2022-05-08 12:15:21 +03:00
Berkus Decker 07df330b62 feat: Implement MMU based on Andre Richter's tutorial
As per https://github.com/rust-embedded/rust-raspberrypi-OS-tutorials/tree/master/10_virtual_mem_part1_identity_mapping

Bring better separation of abstract, platform and BSP code.

Init MMU and traps after serial output.
2022-05-08 12:15:21 +03:00
Berkus Decker 4a02f5fd2c feat: Upgrade exception trap handler output 2022-05-08 12:11:12 +03:00
Berkus Decker 113b4abbc5 feat: Add UnsafeCell trick
It replaces old "C" style linker symbol references.
2022-05-08 12:11:12 +03:00
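The "UnsafeCell trick" exposes a linker-script symbol as a zero-sized Rust static whose *address* is the value of interest, replacing scattered `extern "C"` symbol declarations. A rough host-side illustration (a plain static stands in for a symbol that would really be defined in the linker script):

```rust
use std::cell::UnsafeCell;

// A zero-sized wrapper: only the symbol's address matters,
// it carries no data of its own.
#[repr(transparent)]
struct LinkerSymbol(UnsafeCell<()>);

// Safe to share: we never read or write through the cell.
unsafe impl Sync for LinkerSymbol {}

impl LinkerSymbol {
    fn addr(&self) -> usize {
        self.0.get() as usize
    }
}

// In the kernel this would be `extern "Rust" { static __bss_start: ... }`
// resolved by the linker script; here a regular static stands in.
static FAKE_BSS_START: LinkerSymbol = LinkerSymbol(UnsafeCell::new(()));

fn main() {
    println!("bss start at {:#x}", FAKE_BSS_START.addr());
    assert_ne!(FAKE_BSS_START.addr(), 0);
}
```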
Berkus Decker 29d61f4bdb refactor: 📦 Rename access flag values 2022-05-08 12:11:12 +03:00
Berkus Decker bb40980419 refactor: 📦 Add formatter for memory::AttributeFields 2022-05-08 11:39:55 +03:00
Berkus Decker 248b17ff54 Merge pull-request from metta-systems:fix/update-deps to develop
build(deps): 🛠 bump dependencies
2022-05-05 22:32:14 +03:00
Berkus Decker cbd6242470 build(deps): 🛠 bump dependencies 2022-05-05 22:04:24 +03:00
Berkus Decker 023ab89a43 Merge pull-request from metta-systems:fix/add-chainboot-emoji to develop
Fix codegen and add chainboot emojis
2022-04-25 00:00:45 +03:00
Berkus Decker 92feb2d982 feat: Add emojis to the chainboot protocol 2022-04-24 22:10:59 +03:00
Berkus Decker 9dcc5b192a fix(codegen): 🐛 Disable FP/NEON features in the target file
This fixes the build warnings for the
new rustc nightly.
2022-04-24 22:10:23 +03:00
Berkus Decker ffc6e50dcf Merge pull-request from metta-systems:feat/ci-deps to develop
Depend all CI steps on check_formatting
2022-03-27 23:00:13 +03:00
Berkus Decker 0464f7d95b build(ci): 🛠 depend all CI steps on check_formatting 2022-03-27 22:25:06 +03:00
Berkus Decker 4c3001ba50 Merge pull-request from metta-systems:fix/update-deps to develop
Bump dependencies
2022-03-27 22:15:56 +03:00
Berkus Decker 7eae2069b6 fix(windows): 🐛 allow scoop installation on CI 2022-03-27 21:34:01 +03:00
Berkus Decker eb4411bc97 fix(rustc): 🐛 stabilise const_fn_fn_ptr_basics
Stable since Rust 1.61.0
2022-03-27 21:34:01 +03:00
Berkus Decker 0b3973f58d build(deps): 🛠 bump dependencies 2022-03-27 21:33:54 +03:00
Berkus Decker c37b44a6f7 Merge pull-request from metta-systems:fix/chainofcommand-corrupted-console to develop
Fix chainofcommand corrupted console
2022-03-01 01:38:30 +02:00
Berkus Decker b4ff5541a8 fix: 🐛 improve chainofcommand expect() fn 2022-03-01 01:14:00 +02:00
Berkus Decker 072a06e7bb fix: 🐛 update serialport-rs
Use version with fixed setup on macos.
2022-03-01 01:13:45 +02:00
Berkus Decker c9f3d68e81 build: 🛠 bump dependencies 2022-03-01 01:12:48 +02:00
Berkus Decker 7ab44c7d15 build: 🛠 allow deprecated code in clippy 2022-03-01 01:12:34 +02:00
Berkus Decker d22eb31d10 build: 🛠 add `chainofcommand` target 2022-03-01 01:11:58 +02:00
Berkus Decker 463ce25bd7 Merge pull-request from metta-systems:fix/update-deps to develop
build: 🛠 Bump anyhow version
2022-02-23 16:23:48 +02:00
Berkus Decker 19d9de4ac2 build: 🛠 Bump anyhow version 2022-02-23 14:25:17 +02:00
Berkus Decker 64ded6652d Merge pull-request from metta-systems:fix/update-deps to develop
Bump dependencies versions
Upgrade clap to new API.

2022-02-23 14:22:29 +02:00
Berkus Decker b40530ea46 build: 🛠 Bump dependencies versions
Upgrade clap to new API.
2022-02-23 12:05:56 +02:00
Berkus Decker fb6be33983 Merge pull-request from metta-systems:fix/license-update to develop
Add a non-military license constraint
2022-02-12 02:01:30 +02:00
Berkus Decker 0746382d06 docs(license): 📚 Add a non-military license constraint 2022-02-12 01:59:18 +02:00
111 changed files with 10067 additions and 3644 deletions


@@ -4,9 +4,9 @@ pipelining = true
[target.aarch64-vesper-metta]
rustflags = [
"-C", "target-feature=-fp-armv8",
"-C", "target-cpu=cortex-a53",
"-C", "embed-bitcode=yes",
"-C", "target-cpu=cortex-a53", # raspi 2 .. 3b+
#"-C", "target-cpu=cortex-a73", # raspi 4
# ^^ how to set this dynamically depending on the features??
"-Z", "macro-backtrace",
]
runner = "cargo make test-runner"


@@ -7,8 +7,42 @@ on:
pull_request:
jobs:
check_formatting:
name: "Check Formatting"
runs-on: ubuntu-latest
timeout-minutes: 2
steps:
- uses: actions/checkout@v1
- run: rustup toolchain install nightly --profile minimal --component rustfmt
- run: cargo +nightly fmt -- --check
clippy:
name: "Clippy"
needs: check_formatting
strategy:
matrix:
features: [
"",
"noserial",
"qemu",
"noserial,qemu",
"jtag",
"noserial,jtag",
# jtag and qemu together don't make much sense
]
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v1
- run: sudo apt update
- run: sudo apt install libudev-dev
- run: rustup toolchain install nightly
- run: cargo install cargo-make
- run: env CLIPPY_FEATURES=${{ matrix.features }} cargo make clippy
test:
name: Test
needs: clippy
strategy:
matrix:
@@ -45,13 +79,6 @@ jobs:
- name: "Install build tools"
run: cargo install cargo-make cargo-binutils
- name: "Prepare packages (Linux)"
run: |
sudo apt install software-properties-common
sudo add-apt-repository ppa:jacob/virtualisation
sudo apt update
if: runner.os == 'Linux'
- name: "Install dev libraries (Linux)"
run: sudo apt install libudev-dev
if: runner.os == 'Linux'
@@ -74,7 +101,8 @@ jobs:
- name: Install QEMU (Linux)
run: |
sudo apt install qemu-system-aarch64
sudo apt-get update
sudo apt-get install --fix-missing qemu-system-aarch64
if: runner.os == 'Linux'
- name: Install QEMU (macOS)
@@ -87,19 +115,20 @@
- name: Install Scoop (Windows)
run: |
Invoke-Expression (New-Object System.Net.WebClient).DownloadString('https://get.scoop.sh')
iwr -useb get.scoop.sh -outfile 'install.ps1'
.\install.ps1 -RunAsAdmin
echo "$HOME\scoop\shims" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
if: runner.os == 'Windows'
shell: pwsh
- name: Add custom Scoop bucket (Windows)
- name: Add custom Scoop bucket for QEMU (Windows)
run: |
scoop bucket add scoop-for-ci https://github.com/metta-systems/scoop-for-ci
if: runner.os == 'Windows'
shell: pwsh
- name: Install QEMU (Windows)
run: scoop install qemu-510
run: scoop install qemu-810
if: runner.os == 'Windows'
shell: pwsh
@@ -109,42 +138,5 @@ jobs:
- name: 'Build kernel'
run: cargo make build
- name: 'Run tests (macOS)'
- name: 'Run tests'
run: cargo make test
if: runner.os == 'macOS'
- name: 'Run tests (other OSes)'
run: env QEMU_MACHINE=raspi3 cargo make test
if: runner.os != 'macOS'
check_formatting:
name: "Check Formatting"
runs-on: ubuntu-latest
timeout-minutes: 2
steps:
- uses: actions/checkout@v1
- run: rustup toolchain install nightly --profile minimal --component rustfmt
- run: cargo +nightly fmt -- --check
clippy:
name: "Clippy"
strategy:
matrix:
features: [
"",
"noserial",
"qemu",
"noserial,qemu",
"jtag",
"noserial,jtag",
# jtag and qemu together don't make much sense
]
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v1
- run: sudo apt update
- run: sudo apt install libudev-dev
- run: rustup toolchain install nightly
- run: cargo install cargo-make
- run: env CLIPPY_FEATURES=${{ matrix.features }} cargo make clippy

.gitignore

@@ -5,3 +5,4 @@
target/
kernel8*
.gdb_history
qemu.log

Cargo.lock
File diff suppressed because it is too large


@@ -1,9 +1,12 @@
[workspace]
members = [
"machine",
"nucleus",
"bin/chainboot",
"bin/chainofcommand"
"bin/chainofcommand",
"tools/ttt"
]
resolver = "2"
[patch.crates-io]
serialport = { git = "https://github.com/metta-systems/serialport-rs", branch = "macos-ENOTTY-fix" }


@@ -1,26 +1,34 @@
_default:
@just --list
# Clean project
clean:
cargo make clean
# Update all dependencies
deps-up:
cargo update
# Build default hw kernel and run chainofcommand to boot this kernel onto the board
boot: chainofcommand
cargo make chainboot
cargo make chainboot # make boot-kernel ?
# Build and run kernel in QEMU with serial port emulation
zellij:
cargo make zellij-nucleus
zellij --layout-path emulation/layout.zellij
zellij --layout emulation/layout.zellij
# Build and run chainboot in QEMU with serial port emulation
zellij-cb:
# Connect to it via chainofcommand to load an actual kernel
# TODO: actually run chainofcommand in a zellij session too
cargo make zellij-cb
zellij --layout-path emulation/layout.zellij
zellij --layout emulation/layout.zellij
# Build chainofcommand serial loader
chainofcommand:
cd bin/chainofcommand
cargo make build
cargo make build # --workspace=bin/chainofcommand
# Build and run kernel in QEMU
qemu:
@@ -35,6 +43,11 @@ qemu-cb:
# Connect to it via chainofcommand to load an actual kernel
cargo make qemu-cb
# Build and run chainboot in QEMU with GDB port enabled
qemu-cb-gdb:
# Connect to it via chainofcommand to load an actual kernel
cargo make qemu-cb-gdb
# Build and write kernel to an SD Card
device:
cargo make sdcard
@@ -51,21 +64,16 @@ cb-eject:
# Build default hw kernel
build:
cargo make build
cargo make kernel-binary
# Clean project
clean:
cargo make clean
# Run clippy checks
clippy:
# TODO: use cargo-hack
cargo make clippy
env CLIPPY_FEATURES=noserial cargo make clippy
env CLIPPY_FEATURES=qemu cargo make clippy
env CLIPPY_FEATURES=noserial,qemu cargo make clippy
env CLIPPY_FEATURES=jtag cargo make clippy
env CLIPPY_FEATURES=noserial,jtag cargo make clippy
cargo make xtool-clippy
env CLIPPY_FEATURES=noserial cargo make xtool-clippy
env CLIPPY_FEATURES=qemu cargo make xtool-clippy
env CLIPPY_FEATURES=noserial,qemu cargo make xtool-clippy
env CLIPPY_FEATURES=jtag cargo make xtool-clippy
env CLIPPY_FEATURES=noserial,jtag cargo make xtool-clippy
# Run tests in QEMU
test:
@@ -75,7 +83,7 @@ alias disasm := hopper
# Build and disassemble kernel
hopper:
cargo make hopper
cargo make xtool-hopper
alias ocd := openocd
@@ -93,19 +101,26 @@ gdb-cb:
# Build and print all symbols in the kernel
nm:
cargo make nm
# Check formatting
fmt-check:
cargo fmt -- --check
cargo make xtool-nm
# Run `cargo expand` on nucleus
expand:
cargo make expand -- nucleus
cargo make xtool-expand-target -- nucleus
# Render modules dependency tree
modules:
cargo make xtool-modules
# Generate and open documentation
doc:
cargo make docs-flow
# Check formatting
fmt-check:
cargo fmt -- --check
# Run lint tasks
lint: clippy fmt-check
# Run CI tasks
ci: clean build test clippy fmt-check
ci: clean build test lint


@@ -53,3 +53,34 @@ No contributor can revoke this license.
without any warranty or condition, and no contributor
will be liable to anyone for any damages related to this
software or this license, under any kind of legal claim.***
---
[Additional restrictions](https://blog.yossarian.net/2020/06/03/You-may-not-use-my-projects-in-a-military-or-law-enforcement-context):
The following terms additionally apply and override any above terms for
applicable parties:
You may not use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software in a military or law enforcement context,
defined as follows:
1. A military context is a professional context where the intended application
of the Software is integration or use with or by military software, tools
(software or hardware), or personnel. This includes contractors and
subcontractors as well as research affiliates of any military organization.
2. A law enforcement context is a professional context where the intended
application of the Software is integration or use with or by law enforcement
software, tools (software or hardware), or personnel. This includes
contractors and subcontractors as well as research affiliates of any law
enforcement organization.
Entities that sell or license to military or law enforcement organizations
may use the Software under the original terms, but only in contexts that do
not assist or supplement the sold or licensed product.
Students and academics who are affiliated with research institutions may use
the Software under the original terms, but only in contexts that do not assist
or supplement collaboration or affiliation with any military or law
enforcement organization.


@@ -3,9 +3,12 @@
#
# Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
#
# Global workspace configuration
#
[config]
min_version = "0.32.0"
default_to_workspace = true
skip_core_tasks = true
[env]
DEFAULT_TARGET = "aarch64-vesper-metta"
@@ -42,13 +45,16 @@ VOLUME = { value = "/Volumes/BOOT", condition = { env_not_set = ["VOLUME"] } }
#
CARGO_MAKE_EXTEND_WORKSPACE_MAKEFILE = true
RUST_LIBS = "-Z build-std=compiler_builtins,core,alloc -Z build-std-features=compiler-builtins-mem"
RUST_STD = "-Zbuild-std=compiler_builtins,core,alloc -Zbuild-std-features=compiler-builtins-mem"
TARGET_JSON = "${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/targets/${TARGET}.json"
PLATFORM_TARGET="--target=${TARGET_JSON} --features=${TARGET_FEATURES} ${RUST_LIBS}"
PLATFORM_TARGET="--target=${TARGET_JSON} --features=${TARGET_FEATURES}"
DEVICE_FEATURES = "noserial"
QEMU_FEATURES = "qemu,rpi3"
# Working objcopy from `brew install aarch64-elf-binutils`
#OBJCOPY = "/opt/homebrew/Cellar/aarch64-elf-binutils/2.40/bin/aarch64-elf-objcopy" # Part of `cargo objcopy` in cargo-binutils
# LLVM's objcopy, usually full of bugs like https://github.com/llvm/llvm-project/issues/58407
OBJCOPY = "rust-objcopy" # Part of `cargo objcopy` in cargo-binutils
OBJCOPY_PARAMS = "--strip-all -O binary"
NM = "rust-nm" # Part of `cargo nm` in cargo-binutils
@@ -61,9 +67,12 @@ QEMU_CONTAINER_CMD = "qemu-system-aarch64"
# Could additionally use -nographic to disable GUI -- this shall be useful for automated tests.
#
# QEMU has renamed the RasPi machines since version 6.2.0, use just `raspi3` for previous versions.
QEMU_OPTS = "-M ${QEMU_MACHINE} -d int -semihosting"
QEMU_DISASM_OPTS = "-d in_asm,unimp,int"
QEMU_SERIAL_OPTS = "-serial pty -serial stdio"
QEMU_OPTS = "-M ${QEMU_MACHINE} -semihosting"
QEMU_ARM_TRACE_OPTS = "arm_gt_cntvoff_write,arm_gt_ctl_write,arm_gt_cval_write,arm_gt_imask_toggle,arm_gt_recalc,arm_gt_recalc_disabled,arm_gt_tval_write,armsse_cpu_pwrctrl_read,armsse_cpu_pwrctrl_write,armsse_cpuid_read,armsse_cpuid_write,armsse_mhu_read,armsse_mhu_write"
QEMU_BCM_TRACE_OPTS = "bcm2835_cprman_read,bcm2835_cprman_write,bcm2835_cprman_write_invalid_magic,bcm2835_ic_set_cpu_irq,bcm2835_ic_set_gpu_irq,bcm2835_mbox_irq,bcm2835_mbox_property,bcm2835_mbox_read,bcm2835_mbox_write,bcm2835_sdhost_edm_change,bcm2835_sdhost_read,bcm2835_sdhost_update_irq,bcm2835_sdhost_write,bcm2835_systmr_irq_ack,bcm2835_systmr_read,bcm2835_systmr_run,bcm2835_systmr_timer_expired,bcm2835_systmr_write"
QEMU_TRACE_OPTS = "trace:${QEMU_ARM_TRACE_OPTS},${QEMU_BCM_TRACE_OPTS}" # @todo trace: prefix for each opt
QEMU_DISASM_OPTS = "-d in_asm,unimp,int,mmu,cpu_reset,guest_errors,nochain,plugin"
QEMU_SERIAL_OPTS = "-serial stdio -serial pty"
QEMU_TESTS_OPTS = "-nographic"
# For gdb connection:
# - if this is set, MUST have gdb attached for SYS_WRITE0 to work, otherwise QEMU will crash.
@@ -78,35 +87,54 @@ KERNEL_BIN = "${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/target/nucleus.bin"
CHAINBOOT_SERIAL = "/dev/tty.SLAB_USBtoUART"
CHAINBOOT_BAUD = 115200
#
# === Base reusable commands ===
#
[tasks.default]
alias = "all"
[tasks.all]
dependencies = ["kernel-binary"]
dependencies = ["kernel-binary", "chainboot", "chainofcommand", "ttt"]
[tasks.modules]
[tasks.xtool-modules]
workspace = false
command = "cargo"
args = ["modules", "tree"]
# Disable build in the root by default.
[tasks.build]
env = { "TARGET_FEATURES" = "${TARGET_BOARD}" }
workspace = false
alias = "empty"
# Run a target build with current platform configuration.
[tasks.build-target]
workspace = false
command = "cargo"
args = ["build", "@@split(PLATFORM_TARGET, )", "--release"]
args = ["build", "@@split(PLATFORM_TARGET, )", "@@split(RUST_STD, )", "--release"]
[tasks.build-device]
workspace = false
env = { "TARGET_FEATURES" = "${TARGET_BOARD}" }
run_task = "build-target"
[tasks.build-qemu]
workspace = false
env = { "TARGET_FEATURES" = "${QEMU_FEATURES}" }
command = "cargo"
args = ["build", "@@split(PLATFORM_TARGET, )", "--release"]
run_task = "build-target"
[tasks.qemu-runner]
workspace = false
dependencies = ["build-qemu", "kernel-binary"]
env = { "TARGET_FEATURES" = "${QEMU_FEATURES}" }
script = [
"echo Run QEMU ${QEMU_OPTS} ${QEMU_RUNNER_OPTS} with ${KERNEL_BIN}",
"${QEMU} ${QEMU_OPTS} ${QEMU_RUNNER_OPTS} -dtb ${TARGET_DTB} -kernel ${KERNEL_BIN}"
"echo 🚜 Run QEMU ${QEMU_OPTS} ${QEMU_RUNNER_OPTS} with ${KERNEL_BIN}\n\n\n",
"rm -f qemu.log",
"${QEMU} ${QEMU_OPTS} ${QEMU_RUNNER_OPTS} -dtb ${TARGET_DTB} -kernel ${KERNEL_BIN} 2>&1 | tee qemu.log",
"echo \n\n"
]
[tasks.expand]
[tasks.xtool-expand-target]
workspace = false
env = { "TARGET_FEATURES" = "" }
command = "cargo"
args = ["expand", "@@split(PLATFORM_TARGET, )", "--release"]
@@ -114,20 +142,24 @@ args = ["expand", "@@split(PLATFORM_TARGET, )", "--release"]
[tasks.test]
env = { "TARGET_FEATURES" = "${QEMU_FEATURES}" }
command = "cargo"
args = ["test", "@@split(PLATFORM_TARGET, )"]
args = ["test", "@@split(PLATFORM_TARGET, )", "@@split(RUST_STD, )"]
[tasks.docs]
env = { "TARGET_FEATURES" = "" }
command = "cargo"
args = ["doc", "--open", "--no-deps", "@@split(PLATFORM_TARGET, )"]
[tasks.clippy]
[tasks.xtool-clippy]
workspace = false
env = { "TARGET_FEATURES" = "rpi3", "CLIPPY_FEATURES" = { value = "--features=${CLIPPY_FEATURES}", condition = { env_set = ["CLIPPY_FEATURES"] } } }
command = "cargo"
args = ["clippy", "@@split(PLATFORM_TARGET, )", "@@remove-empty(CLIPPY_FEATURES)", "--", "-D", "warnings"]
args = ["clippy", "@@split(PLATFORM_TARGET, )", "@@split(RUST_STD, )", "@@remove-empty(CLIPPY_FEATURES)", "--", "--deny", "warnings", "--allow", "deprecated"]
# These tasks are written in cargo-make's own script to make it portable across platforms (no `basename` on Windows)
[tasks.custom-binary]
## Copy and prepare a given ELF file. Convert to binary output format.
[tasks.build-custom-binary]
workspace = false
env = { "BINARY_FILE" = "${BINARY_FILE}" }
script_runner = "@duckscript"
script = [
@@ -137,36 +169,52 @@ script = [
outBin = set ${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/target/${binaryFile}.bin
cp ${BINARY_FILE} ${outElf}
exec --fail-on-error ${OBJCOPY} %{OBJCOPY_PARAMS} ${BINARY_FILE} ${outBin}
echo Copied ${binaryFile} to ${outElf}
echo Converted ${binaryFile} to ${outBin}
elfSize = get_file_size ${outElf}
binSize = get_file_size ${outBin}
echo 🔄 Processing ${BINARY_FILE}:
echo 🔄 Copied ${binaryFile} to ${outElf} (${elfSize} bytes)
echo 💫 Converted ${binaryFile} to ${outBin} (${binSize} bytes)
'''
]
install_crate = { crate_name = "cargo-binutils", binary = "rust-objcopy", test_arg = ["--help"] }
## Copy and prepare binary with tests.
[tasks.test-binary]
workspace = false
env = { "BINARY_FILE" = "${CARGO_MAKE_TASK_ARGS}" }
run_task = "custom-binary"
## Run binary with tests in QEMU.
[tasks.test-runner]
workspace = false
dependencies = ["test-binary"]
script_runner = "@duckscript"
script = [
'''
binaryFile = basename ${CARGO_MAKE_TASK_ARGS}
echo 🏎️ Run QEMU %{QEMU_OPTS} %{QEMU_TESTS_OPTS} with target/${binaryFile}.bin
exec --fail-on-error ${QEMU} %{QEMU_OPTS} %{QEMU_TESTS_OPTS} -dtb ${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/targets/bcm2710-rpi-3-b-plus.dtb -kernel ${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/target/${binaryFile}.bin
'''
]
## Generate GDB startup configuration file.
[tasks.gdb-config]
workspace = false
script_runner = "@duckscript"
script = [
'''
writefile ${GDB_CONNECT_FILE} "target extended-remote :5555\n"
appendfile ${GDB_CONNECT_FILE} "break 0x80000\n"
appendfile ${GDB_CONNECT_FILE} "break *0x80000\n"
appendfile ${GDB_CONNECT_FILE} "break kernel_init\n"
appendfile ${GDB_CONNECT_FILE} "break kernel_main\n"
echo 🖌️ Generated GDB config file
'''
]
#appendfile ${GDB_CONNECT_FILE} "continue\n"
## Generate zellij configuration file.
[tasks.zellij-config]
workspace = false
dependencies = ["build-qemu", "kernel-binary"]
script_runner = "@duckscript"
env = { "ZELLIJ_CONFIG_FILE" = "${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/emulation/zellij-config.sh" }
@ -183,17 +231,89 @@ script = [
install_crate = { crate_name = "zellij", binary = "zellij", test_arg = ["--help"] }
[tasks.openocd]
workspace = false
script = [
"${OPENOCD} -f interface/jlink.cfg -f ../ocd/${TARGET_BOARD}_target.cfg"
]
[tasks.sdeject]
workspace = false
dependencies = ["sdcard"]
script = [
"diskutil unmount ${VOLUME}"
"diskutil ejectAll ${VOLUME}"
]
[tasks.chainboot]
dependencies = ["build", "kernel-binary"]
command = "echo"
args = ["\n***===***\n", "Run the following command in your terminal:\n", " ${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/target/debug/chainofcommand ${CHAINBOOT_SERIAL} ${CHAINBOOT_BAUD} --kernel ${KERNEL_BIN}\n", "***===***\n\n"]
[tasks.qemu]
alias = "empty"
#
# Per-workspace commands, disabled in the root by default.
# TODO: define these only in sub-modules with workspace = false
#
# Tasks for nucleus
#[tasks.build-kernel-binary]
#alias = "empty"
# Tasks for chainboot
#[tasks.chainboot]
#alias = "empty"
# sdeject
#[tasks.cb-eject]
#alias = "empty"
# Tasks for chainofcommand
#[tasks.chainofcommand]
#alias = "empty"
# Tasks for ttt
#[tasks.ttt]
#alias = "empty"
# Other tasks
#[tasks.gdb]
#alias = "empty"
#[tasks.gdb-cb]
#alias = "empty"
#[tasks.sdcard]
#alias = "empty"
#[tasks.qemu-gdb]
#alias = "empty"
#[tasks.qemu-cb]
#alias = "empty"
#[tasks.qemu-cb-gdb]
#alias = "empty"
#[tasks.xtool-hopper]
#alias = "empty"
#
#[tasks.xtool-nm]
#alias = "empty"
#[tasks.zellij-cb]
#alias = "empty"
#[tasks.zellij-cb-gdb]
#alias = "empty"
#[tasks.zellij-nucleus]
#alias = "empty"
## Target dependencies:
#[tasks.kernel-binary]
#alias = "empty"
#[tasks.chainboot-binary]
#alias = "empty"
#
#[tasks.chainofcommand-binary]
#alias = "empty"
#[tasks.ttt-binary]
#alias = "empty"


@ -30,7 +30,9 @@ Vesper has been influenced by the kernels in L4 family, notably seL4. Fawn and N
## Build instructions
Use at least rustc nightly 2020-09-30 with cargo nightly of the same or later date. It adds support for `cargo build --build-std` feature (since 2020-07-15) and support for compiler_builtins memory operations ([since 2020-09-30](https://github.com/rust-lang/rust/pull/77284)).
MSRV: 1.61.0
We require the `cargo build --build-std` feature (since 2020-07-15), compiler_builtins memory operations ([since 2020-09-30](https://github.com/rust-lang/rust/pull/77284)) and the `const_fn_fn_ptr_basics` feature (stable since Rust 1.61.0).
* Install tools: `cargo install just cargo-make`.
* Install qemu (at least version 4.1.1): `brew install qemu`.
@ -78,7 +80,7 @@ just ocd
just gdb
```
If you launch OpenOCD or QEMU before, then gdb shall connect to it and allow you to load the kernel binary directly into memory. Type `load` in gdb to do that.
If you launch OpenOCD or QEMU beforehand (for example, via `just qemu-gdb`), gdb will connect to it and let you load the kernel binary directly into memory. Type `load` in gdb to do that.
### To see kernel symbols and their values
@ -125,6 +127,8 @@ Various references from [OSDev Wiki](https://wiki.osdev.org/Raspberry_Pi_Bare_Bo
![Build](https://github.com/metta-systems/vesper/workflows/Build/badge.svg)
![License](https://raster.shields.io/badge/license-BlueOak%20with%20restrictions-blue.png)
[![Dependency Status](https://deps.rs/repo/github/metta-systems/vesper/status.svg)](https://deps.rs/repo/github/metta-systems/vesper)
[![Gitpod Ready-to-Code](https://img.shields.io/badge/Gitpod-Ready--to--Code-blue?logo=gitpod)](https://gitpod.io/#https://github.com/metta-systems/vesper)


@ -12,28 +12,29 @@ edition = "2021"
maintenance = { status = "experimental" }
[features]
default = ["asm"]
default = []
# Build for running under QEMU with semihosting, so that various halt/reboot options will, for example, quit QEMU instead.
qemu = ["machine/qemu"]
# Build for debugging it over JTAG/SWD connection - halts on first non-startup function start.
jtag = ["machine/jtag"]
# Dummy feature, ignored in this crate.
noserial = []
# Startup relocation code is implemented in assembly
asm = []
# Mutually exclusive features to choose a target board
rpi3 = ["machine/rpi3"]
rpi4 = ["machine/rpi4"]
[dependencies]
machine = { path = "../../machine" }
r0 = "1.0"
cortex-a = "7.0"
tock-registers = "0.7"
aarch64-cpu = "9.4"
tock-registers = "0.8"
ux = { version = "0.1", default-features = false }
usize_conversions = "0.2"
bit_field = "0.10"
bitflags = "1.3"
bitflags = "2.4"
cfg-if = "1.0"
snafu = { version = "0.7", default-features = false }
seahash = "4.1"
[[bin]]
name = "chainboot"
test = false


@ -1,52 +1,76 @@
#
# SPDX-License-Identifier: BlueOak-1.0.0
#
# Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
#
# Build chainboot binary
#
[env]
CHAINBOOT_ELF = "${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/target/${TARGET}/release/chainboot"
CHAINBOOT_BIN = "${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/target/chainboot.bin"
CARGO_MAKE_EXTEND_WORKSPACE_MAKEFILE = true
[tasks.kernel-binary]
[tasks.build]
alias = "chainboot"
[tasks.chainboot]
workspace = false
dependencies = ["build-device", "build-kernel-binary"]
command = "echo"
args = ["\n***===***\n", "🏎️ Run the following command in your terminal:\n", "🏎️ ${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/target/debug/chainofcommand ${CHAINBOOT_SERIAL} ${CHAINBOOT_BAUD} --kernel ${KERNEL_BIN}\n", "***===***\n\n"]
[tasks.build-kernel-binary]
workspace = false
env = { "BINARY_FILE" = "${CHAINBOOT_ELF}" }
run_task = "custom-binary"
[tasks.hopper]
disabled = true
[tasks.zellij-nucleus]
disabled = true
run_task = "build-custom-binary"
[tasks.zellij-cb]
workspace = false
env = { "KERNEL_BIN" = "${CHAINBOOT_BIN}", "QEMU_OPTS" = "${QEMU_OPTS} ${QEMU_DISASM_OPTS}" }
run_task = "zellij-config"
[tasks.zellij-cb-gdb]
workspace = false
env = { "KERNEL_BIN" = "${CHAINBOOT_BIN}", "QEMU_OPTS" = "${QEMU_OPTS} ${QEMU_DISASM_OPTS} ${QEMU_GDB_OPTS}", "TARGET_BOARD" = "rpi3", "TARGET_DTB" = "${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/targets/bcm2710-rpi-3-b-plus.dtb" }
run_task = "zellij-config"
[tasks.qemu]
disabled = true
[tasks.qemu-cb]
workspace = false
env = { "QEMU_RUNNER_OPTS" = "${QEMU_DISASM_OPTS} -serial pty", "KERNEL_BIN" = "${CHAINBOOT_BIN}", "TARGET_DTB" = "${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/targets/bcm2710-rpi-3-b-plus.dtb" }
extend = "qemu-runner"
[tasks.gdb]
disabled = true
[tasks.qemu-cb-gdb]
workspace = false
env = { "QEMU_RUNNER_OPTS" = "${QEMU_DISASM_OPTS} ${QEMU_GDB_OPTS} -serial pty", "KERNEL_BIN" = "${CHAINBOOT_BIN}", "TARGET_DTB" = "${CARGO_MAKE_WORKSPACE_WORKING_DIRECTORY}/targets/bcm2710-rpi-3-b-plus.dtb" }
extend = "qemu-runner"
[tasks.gdb-cb]
dependencies = ["build", "kernel-binary", "gdb-config"]
workspace = false
dependencies = ["build", "build-kernel-binary", "gdb-config"]
env = { "RUST_GDB" = "${GDB}" }
script = [
"rust-gdb -x ${GDB_CONNECT_FILE} ${CHAINBOOT_ELF}"
"exec < /dev/tty && rust-gdb -x ${GDB_CONNECT_FILE} ${CHAINBOOT_ELF}"
]
[tasks.sdcard]
dependencies = ["build", "kernel-binary"]
alias = "sdcard-cb"
[tasks.sdcard-cb]
workspace = false
dependencies = ["build", "build-kernel-binary"]
script_runner = "@duckscript"
script = [
'''
kernelImage = set "chain_boot_rpi4.img"
cp ${CHAINBOOT_BIN} ${VOLUME}/${kernelImage}
echo "Copied chainboot to ${VOLUME}/${kernelImage}"
echo 🔄 Copied chainboot to ${VOLUME}/${kernelImage}
'''
]
[tasks.cb-eject]
dependencies = ["sdeject"]
# Just use sdeject
#[tasks.cb-eject]
#clean = true
#alias = "cb-eject-chainboot"
#
#[tasks.cb-eject-chainboot]
#dependencies = ["sdeject"]


@ -1,6 +1,10 @@
/// This build script is used to link the chainboot binary.
const LINKER_SCRIPT: &str = "bin/chainboot/src/link.ld";
const LINKER_SCRIPT_AUX: &str = "machine/src/arch/aarch64/linker/aarch64-exceptions.ld";
fn main() {
println!("cargo:rerun-if-changed={}", LINKER_SCRIPT);
println!("cargo:rerun-if-changed={}", LINKER_SCRIPT_AUX);
println!("cargo:rustc-link-arg=--script={}", LINKER_SCRIPT);
}


@ -1,68 +1,104 @@
// Assembly counterpart to this file.
#[cfg(feature = "asm")]
core::arch::global_asm!(include_str!("boot.s"));
// This is quite impossible: the linker constants are resolved to fully constant offsets in the asm
// version, but are image-relative symbols in Rust, and I see no way to force it otherwise.
// Make the first function small enough that the compiler doesn't try
// to create a huge stack frame before we have a chance to set SP.
#[no_mangle]
#[link_section = ".text._start"]
#[cfg(not(feature = "asm"))]
#[link_section = ".text.chainboot.entry"]
pub unsafe extern "C" fn _start() -> ! {
use {
cortex_a::registers::{MPIDR_EL1, SP},
machine::endless_sleep,
aarch64_cpu::registers::{MPIDR_EL1, SP},
core::cell::UnsafeCell,
machine::cpu::endless_sleep,
tock_registers::interfaces::{Readable, Writeable},
};
const CORE_0: u64 = 0;
const CORE_MASK: u64 = 0x3;
if CORE_0 == MPIDR_EL1.get() & CORE_MASK {
if CORE_0 != MPIDR_EL1.get() & CORE_MASK {
// if not core0, infinitely wait for events
endless_sleep()
}
extern "Rust" {
// Stack top
static __boot_core_stack_end_exclusive: UnsafeCell<()>;
}
// Set stack pointer.
SP.set(__boot_core_stack_end_exclusive.get() as u64);
reset();
}
#[no_mangle]
#[link_section = ".text.chainboot"]
pub unsafe extern "C" fn reset() -> ! {
use core::{
cell::UnsafeCell,
sync::{atomic, atomic::Ordering},
};
// These are a problem because they are not interpreted as constants here.
// Consequently, this code tries to read values from not-yet-existing data locations.
extern "C" {
extern "Rust" {
// Boundaries of the .bss section, provided by the linker script
static mut __bss_start: u64;
static mut __bss_end_exclusive: u64;
static __BSS_START: UnsafeCell<()>;
static __BSS_SIZE_U64S: UnsafeCell<()>;
// Load address of the kernel binary
static mut __binary_nonzero_lma: u64;
static __binary_nonzero_lma: UnsafeCell<()>;
// Address to relocate to and image size
static mut __binary_nonzero_vma: u64;
static mut __binary_nonzero_vma_end_exclusive: u64;
static __binary_nonzero_vma: UnsafeCell<()>;
static __binary_nonzero_vma_end_exclusive: UnsafeCell<()>;
// Stack top
static mut __boot_core_stack_end_exclusive: u64;
static __boot_core_stack_end_exclusive: UnsafeCell<()>;
}
// Set stack pointer.
SP.set(&mut __boot_core_stack_end_exclusive as *mut u64 as u64);
// This tries to call memcpy() at the wrong link address - the function is in the relocated area!
// Zeroes the .bss section
r0::zero_bss(&mut __bss_start, &mut __bss_end_exclusive);
// Relocate the code
core::ptr::copy_nonoverlapping(
&mut __binary_nonzero_lma as *const u64,
&mut __binary_nonzero_vma as *mut u64,
(&mut __binary_nonzero_vma_end_exclusive as *mut u64 as u64
- &mut __binary_nonzero_vma as *mut u64 as u64) as usize,
// Relocate the code.
// Emulate
// core::ptr::copy_nonoverlapping(
// __binary_nonzero_lma.get() as *const u64,
// __binary_nonzero_vma.get() as *mut u64,
// __binary_nonzero_vma_end_exclusive.get() as usize - __binary_nonzero_vma.get() as usize,
// );
let binary_size =
__binary_nonzero_vma_end_exclusive.get() as usize - __binary_nonzero_vma.get() as usize;
local_memcpy(
__binary_nonzero_vma.get() as *mut u8,
__binary_nonzero_lma.get() as *const u8,
binary_size,
);
_start_rust();
}
// This tries to call memset() at the wrong link address - the function is in the relocated area!
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
// Zeroes the .bss section
// Emulate
// crate::stdmem::local_memset(__bss_start.get() as *mut u8, 0u8, __bss_size.get() as usize);
let bss = core::slice::from_raw_parts_mut(
__BSS_START.get() as *mut u64,
__BSS_SIZE_U64S.get() as usize,
);
for i in bss {
*i = 0;
}
/// The Rust entry of the `kernel` binary.
///
/// The function is called from the assembly `_start` function; keep it to support the "asm" feature.
#[no_mangle]
#[inline(always)]
pub unsafe fn _start_rust(max_kernel_size: u64) -> ! {
// Don't cross this line with loads and stores. The initializations
// done above could be "invisible" to the compiler, because we write to the
// same memory location that is used by statics after this point.
// Additionally, we assume that no statics are accessed before this point.
atomic::compiler_fence(Ordering::SeqCst);
let max_kernel_size =
__binary_nonzero_vma.get() as u64 - __boot_core_stack_end_exclusive.get() as u64;
crate::kernel_init(max_kernel_size)
}
#[inline(always)]
#[link_section = ".text.chainboot"]
unsafe fn local_memcpy(mut dest: *mut u8, mut src: *const u8, n: usize) {
let dest_end = dest.add(n);
while dest < dest_end {
*dest = *src;
dest = dest.add(1);
src = src.add(1);
}
}


@ -1,93 +0,0 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2021 Andre Richter <andre.o.richter@gmail.com>
// Modifications
// Copyright (c) 2021- Berkus <berkus+github@metta.systems>
//--------------------------------------------------------------------------------------------------
// Definitions
//--------------------------------------------------------------------------------------------------
// Load the address of a symbol into a register, PC-relative.
//
// The symbol must lie within +/- 4 GiB of the Program Counter.
//
// # Resources
//
// - https://sourceware.org/binutils/docs-2.36/as/AArch64_002dRelocations.html
.macro ADR_REL register, symbol
adrp \register, \symbol
add \register, \register, #:lo12:\symbol
.endm
// Load the address of a symbol into a register, absolute.
//
// # Resources
//
// - https://sourceware.org/binutils/docs-2.36/as/AArch64_002dRelocations.html
.macro ADR_ABS register, symbol
movz \register, #:abs_g2:\symbol
movk \register, #:abs_g1_nc:\symbol
movk \register, #:abs_g0_nc:\symbol
.endm
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
.section .text._start
//------------------------------------------------------------------------------
// fn _start()
//------------------------------------------------------------------------------
_start:
// Only proceed on the boot core. Park it otherwise.
mrs x1, MPIDR_EL1
and x1, x1, 0b11 // core id mask
cmp x1, 0 // boot core id
b.ne .L_parking_loop
// If execution reaches here, it is the boot core.
// Initialize bss.
ADR_ABS x0, __bss_start
ADR_ABS x1, __bss_end_exclusive
.L_bss_init_loop:
cmp x0, x1
b.eq .L_relocate_binary
stp xzr, xzr, [x0], #16
b .L_bss_init_loop
// Next, relocate the binary.
.L_relocate_binary:
ADR_REL x0, __binary_nonzero_lma // The address the binary got loaded to.
ADR_ABS x1, __binary_nonzero_vma // The address the binary was linked to.
ADR_ABS x2, __binary_nonzero_vma_end_exclusive
sub x4, x1, x0 // Get difference between vma and lma as max size
.L_copy_loop:
ldr x3, [x0], #8
str x3, [x1], #8
cmp x1, x2
b.lo .L_copy_loop
// Prepare the jump to Rust code.
// Set the stack pointer.
ADR_ABS x0, __rpi_phys_binary_load_addr
mov sp, x0
// Pass maximum kernel size as an argument to Rust init function.
mov x0, x4
// Jump to the relocated Rust code.
ADR_ABS x1, _start_rust
br x1
// Infinitely wait for events (aka "park the core").
.L_parking_loop:
wfe
b .L_parking_loop
.size _start, . - _start
.type _start, function
.global _start


@ -51,8 +51,8 @@ SECTIONS
.text :
{
KEEP(*(.text._start))
/* *(text.memcpy) -- only relevant for Rust relocator impl which is currently impossible */
KEEP(*(.text.chainboot.entry))
*(.text.chainboot)
} :segment_start_code
/* Align to 8 bytes, because relocating the binary is done in u64 chunks */
@ -70,9 +70,7 @@ SECTIONS
__binary_nonzero_vma = .;
.text : AT (ADDR(.text) + SIZEOF(.text))
{
*(.text._start_rust) /* The Rust entry point */
/* *(text.memcpy) -- only relevant for Rust relocator impl which is currently impossible */
*(.text*) /* Everything else */
*(.text*) /* The Rust entry point and everything else */
} :segment_code
.rodata : ALIGN(8) { *(.rodata*) } :segment_code
@ -87,12 +85,17 @@ SECTIONS
. = ALIGN(8);
__binary_nonzero_vma_end_exclusive = .;
/* Section is zeroed in pairs of u64. Align start and end to 16 bytes */
/* Section is zeroed in pairs of u64. Align start and end to 16 bytes at least */
.bss (NOLOAD) : ALIGN(16)
{
__bss_start = .;
*(.bss*);
__BSS_START = .;
*(.bss .bss.*)
*(COMMON)
. = ALIGN(16);
__bss_end_exclusive = .;
__BSS_SIZE_U64S = (. - __BSS_START) / 8;
} :segment_data
/DISCARD/ : { *(.comment) *(.gnu*) *(.note*) *(.eh_frame*) *(.text.boot*)}
}
INCLUDE machine/src/arch/aarch64/linker/aarch64-exceptions.ld


@ -5,15 +5,12 @@
#![reexport_test_harness_main = "test_main"]
#![no_main]
#![no_std]
#![no_builtins]
use {
core::{hash::Hasher, panic::PanicInfo},
cortex_a::asm::barrier,
machine::{
devices::SerialOps,
platform::rpi3::{gpio::GPIO, pl011_uart::PL011Uart, BcmHost},
print, println, CONSOLE,
},
aarch64_cpu::asm::barrier,
core::hash::Hasher,
machine::{console::console, platform::raspberrypi::BcmHost, print, println},
seahash::SeaHasher,
};
@ -25,18 +22,16 @@ mod boot;
///
/// - Only a single core must be active and running this function.
/// - The init calls in this function must appear in the correct order.
#[inline(always)]
unsafe fn kernel_init(max_kernel_size: u64) -> ! {
#[cfg(feature = "jtag")]
machine::arch::jtag::wait_debugger();
machine::debug::jtag::wait_debugger();
let gpio = GPIO::default();
let uart = PL011Uart::default();
let uart = uart.prepare(&gpio).expect("What could go wrong?");
CONSOLE.lock(|c| {
// Move uart into the global CONSOLE.
c.replace_with(uart.into());
});
if let Err(x) = machine::platform::drivers::init() {
panic!("Error initializing platform drivers: {}", x);
}
// Initialize all device drivers.
machine::drivers::driver_manager().init_drivers_and_irqs();
// println! is usable from here on.
@ -53,17 +48,15 @@ const LOGO: &str = r#"
"#;
fn read_u64() -> u64 {
CONSOLE.lock(|c| {
let mut val: u64 = u64::from(c.read_byte());
val |= u64::from(c.read_byte()) << 8;
val |= u64::from(c.read_byte()) << 16;
val |= u64::from(c.read_byte()) << 24;
val |= u64::from(c.read_byte()) << 32;
val |= u64::from(c.read_byte()) << 40;
val |= u64::from(c.read_byte()) << 48;
val |= u64::from(c.read_byte()) << 56;
val
})
let mut val: u64 = u64::from(console().read_byte());
val |= u64::from(console().read_byte()) << 8;
val |= u64::from(console().read_byte()) << 16;
val |= u64::from(console().read_byte()) << 24;
val |= u64::from(console().read_byte()) << 32;
val |= u64::from(console().read_byte()) << 40;
val |= u64::from(console().read_byte()) << 48;
val |= u64::from(console().read_byte()) << 56;
val
}
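The byte-by-byte ORing above assembles the size little-endian: the first byte received is the least significant. A minimal stdlib-only sketch of the same decoding (the `decode_u64_le` helper is hypothetical, added here only for illustration):

```rust
// Decode a little-endian u64 the way read_u64 does, byte 0 being the least
// significant; u64::from_le_bytes expresses the whole shift-and-or loop.
fn decode_u64_le(bytes: [u8; 8]) -> u64 {
    u64::from_le_bytes(bytes)
}

fn main() {
    // Bytes arrive LSB-first on the wire: 0x00, 0x02, 0x01 decodes to 0x010200.
    let wire = [0x00, 0x02, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00];
    assert_eq!(decode_u64_le(wire), 66048);
    println!("decoded size: {} bytes", decode_u64_le(wire));
}
```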
/// The main function running after the early init.
@ -74,19 +67,19 @@ fn kernel_main(max_kernel_size: u64) -> ! {
print!("{}", LOGO);
println!("{:>51}\n", BcmHost::board_name());
println!("[<<] Requesting kernel image...");
println!(" Requesting kernel image...");
let kernel_addr: *mut u8 = BcmHost::kernel_load_address() as *mut u8;
loop {
CONSOLE.lock(|c| c.flush());
console().flush();
// Discard any spurious received characters before starting with the loader protocol.
CONSOLE.lock(|c| c.clear_rx());
console().clear_rx();
// Notify `chainofcommand` to send the binary.
for _ in 0..3 {
CONSOLE.lock(|c| c.write_byte(3u8));
console().write_byte(3u8);
}
// Read the binary's size.
@ -94,7 +87,10 @@ fn kernel_main(max_kernel_size: u64) -> ! {
// Check the size to fit RAM
if size > max_kernel_size {
println!("ERR Kernel image too big (over {} bytes)", max_kernel_size);
println!(
"ERR ❌ Kernel image too big (over {} bytes)",
max_kernel_size
);
continue;
}
@ -105,7 +101,7 @@ fn kernel_main(max_kernel_size: u64) -> ! {
// Read the kernel byte by byte.
for i in 0..size {
let val = CONSOLE.lock(|c| c.read_byte());
let val = console().read_byte();
unsafe {
core::ptr::write_volatile(kernel_addr.offset(i as isize), val);
}
@ -118,7 +114,7 @@ fn kernel_main(max_kernel_size: u64) -> ! {
let valid = hasher.finish() == checksum;
if !valid {
println!("ERR Kernel image checksum mismatch");
println!("ERR Kernel image checksum mismatch");
continue;
}
@ -127,16 +123,16 @@ fn kernel_main(max_kernel_size: u64) -> ! {
}
println!(
"[<<] Loaded! Executing the payload now from {:p}\n",
" Loaded! Executing the payload now from {:p}\n",
kernel_addr
);
CONSOLE.lock(|c| c.flush());
console().flush();
// Use black magic to create a function pointer.
let kernel: fn() -> ! = unsafe { core::mem::transmute(kernel_addr) };
// Force everything to complete before we jump.
unsafe { barrier::isb(barrier::SY) };
barrier::isb(barrier::SY);
// Jump to loaded kernel!
kernel()
@ -144,12 +140,20 @@ fn kernel_main(max_kernel_size: u64) -> ! {
#[cfg(not(test))]
#[panic_handler]
fn panicked(info: &PanicInfo) -> ! {
fn panicked(info: &core::panic::PanicInfo) -> ! {
machine::panic::handler(info)
}
#[cfg(test)]
#[panic_handler]
fn panicked(info: &PanicInfo) -> ! {
#[cfg(test)]
fn panicked(info: &core::panic::PanicInfo) -> ! {
machine::panic::handler_for_tests(info)
}
#[cfg(test)]
mod chainboot_tests {
#[test_case]
fn nothing() {
assert_eq!(2 + 2, 4);
}
}
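The loader above recomputes a SeaHash over every received byte and compares it against the trailing u64 sent by `chainofcommand`, refusing to jump on mismatch. A dependency-free sketch of that verify-before-jump pattern, with std's `DefaultHasher` standing in for `seahash::SeaHasher` purely to keep the example self-contained:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Both sides hash the raw kernel bytes and compare the resulting u64.
// (The project uses seahash::SeaHasher; DefaultHasher is a stand-in here.)
fn checksum(bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    hasher.write(bytes);
    hasher.finish()
}

fn main() {
    let kernel_image = b"dummy kernel payload";
    let sent = checksum(kernel_image);     // computed by the host while streaming
    let received = checksum(kernel_image); // recomputed by the loader
    assert_eq!(sent, received, "checksum mismatch: refuse to jump to the kernel");
    println!("checksum ok: {:x}", sent);
}
```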


@ -12,14 +12,15 @@ edition = "2021"
maintenance = { status = "experimental" }
[dependencies]
clap = "3.0"
clap = "4.4"
seahash = "4.1"
anyhow = "1.0"
fehler = "1.0"
crossterm = { version = "0.23", features = ["event-stream"] }
tokio-serial = "5.4"
tokio = { version = "1.16", features = ["full"] }
crossterm = { version = "0.27", features = ["event-stream"] }
futures = "0.3"
futures-util = { version = "0.3", features = ["io"] }
tokio = { version = "1.34", features = ["full"] }
tokio-util = { version = "0.7", features = ["io", "codec", "io"] }
tokio-stream = { version = "0.1" }
tokio-serial = "5.4"
defer = "0.1"
tokio-util = { version = "0.7", features = ["codec"] }
bytes = "1.1"
bytes = "1.5"


@ -1,44 +1,32 @@
#
# SPDX-License-Identifier: BlueOak-1.0.0
#
# Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
#
# Build chainofcommand tool
#
[env]
CARGO_MAKE_EXTEND_WORKSPACE_MAKEFILE = true
[tasks.build]
alias = "build-coc"
[tasks.build-device]
alias = "empty"
[tasks.build-coc]
workspace = false
command = "cargo"
args = ["build"]
[tasks.chainofcommand]
workspace = false
alias = "build-coc"
[tasks.test]
command = "cargo"
args = ["test"]
[tasks.clippy]
[tasks.xtool-clippy] # TODO: should this be module-specific, or just "clippy" as a workspace command?
command = "cargo"
args = ["clippy", "--", "-D", "warnings"]
[tasks.hopper]
disabled = true
[tasks.kernel-binary]
disabled = true
[tasks.zellij-nucleus]
disabled = true
[tasks.zellij-cb]
disabled = true
[tasks.zellij-cb-gdb]
disabled = true
[tasks.qemu]
disabled = true
[tasks.qemu-cb]
disabled = true
[tasks.sdcard]
disabled = true
[tasks.cb-eject]
disabled = true
[tasks.gdb]
disabled = true
[tasks.gdb-cb]
disabled = true


@ -1,9 +1,12 @@
#![feature(trait_alias)]
#![allow(stable_features)]
#![feature(let_else)] // stabilised in 1.65.0
#![feature(slice_take)]
use {
anyhow::{anyhow, Result},
bytes::Bytes,
clap::{App, AppSettings, Arg},
clap::{value_parser, Arg, ArgAction, Command},
crossterm::{
cursor,
event::{Event, EventStream, KeyCode, KeyEvent, KeyModifiers},
@ -11,9 +14,10 @@ use {
tty::IsTty,
},
defer::defer,
futures::{future::FutureExt, StreamExt},
futures::{future::FutureExt, Stream},
seahash::SeaHasher,
std::{
fmt::Formatter,
fs::File,
hash::Hasher,
io::{BufRead, BufReader},
@ -22,32 +26,55 @@ use {
},
tokio::{io::AsyncReadExt, sync::mpsc},
tokio_serial::{SerialPortBuilderExt, SerialStream},
tokio_stream::StreamExt,
};
// mod utf8_codec;
trait Writable = std::io::Write + Send;
trait ThePath = AsRef<Path> + std::fmt::Display + Clone + Sync + Send + 'static;
async fn expect(
to_console2: &mpsc::Sender<Vec<u8>>,
from_serial: &mut mpsc::Receiver<Vec<u8>>,
m: &str,
) -> Result<()> {
if let Some(buf) = from_serial.recv().await {
if buf.len() == m.len() && String::from_utf8_lossy(buf.as_ref()) == m {
return Ok(());
trait FramedStream = Stream<Item = Result<Message, anyhow::Error>> + Unpin;
type Sender = mpsc::Sender<Result<Message>>;
type Receiver = mpsc::Receiver<Result<Message>>;
async fn expect(to_console2: &Sender, from_serial: &mut Receiver, m: &str) -> Result<()> {
let mut s = String::new();
for _x in m.chars() {
let next_char = from_serial.recv().await;
let Some(Ok(c)) = next_char else {
return Err(anyhow!(
"Failed to receive expected value {:?}: got empty buf",
m,
));
};
match c {
Message::Text(payload) => {
s.push_str(&payload);
to_console2.send(Ok(Message::Text(payload))).await?;
}
_ => unreachable!(),
}
to_console2.send(buf).await?;
return Err(anyhow!("Failed to receive expected value"));
}
Err(anyhow!("Failed to receive expected value"))
if s != m {
return Err(anyhow!(
"Failed to receive expected value {:?}: got {:?}",
m,
s
));
}
Ok(())
}
async fn load_kernel<P>(to_console2: &mpsc::Sender<Vec<u8>>, kernel: P) -> Result<(File, u64)>
async fn load_kernel<P>(to_console2: &Sender, kernel: P) -> Result<(File, u64)>
where
P: ThePath,
{
to_console2
.send("[>>] Loading kernel image\n".into())
.send(Ok(Message::Text(" Loading kernel image\n".into())))
.await?;
let kernel_file = match std::fs::File::open(kernel.clone()) {
@ -57,32 +84,37 @@ where
let kernel_size: u64 = kernel_file.metadata()?.len();
to_console2
.send(format!("[>>] .. {} ({} bytes)\n", kernel, kernel_size).into())
.send(Ok(Message::Text(format!(
"⏩ .. {} ({} bytes)\n",
kernel, kernel_size
))))
.await?;
Ok((kernel_file, kernel_size))
}
async fn send_kernel<P>(
to_console2: &mpsc::Sender<Vec<u8>>,
to_serial: &mpsc::Sender<Vec<u8>>,
from_serial: &mut mpsc::Receiver<Vec<u8>>,
async fn send_kernel<P: ThePath>(
to_console2: &Sender,
to_serial: &Sender,
from_serial: &mut Receiver,
kernel: P,
) -> Result<()>
where
P: ThePath,
{
) -> Result<()> {
let (kernel_file, kernel_size) = load_kernel(to_console2, kernel).await?;
to_console2.send("[>>] Sending image size\n".into()).await?;
to_serial.send(kernel_size.to_le_bytes().into()).await?;
to_console2
.send(Ok(Message::Text("⏩ Sending image size\n".into())))
.await?;
to_serial
.send(Ok(Message::Binary(Bytes::copy_from_slice(
&kernel_size.to_le_bytes(),
))))
.await?;
// Wait for OK response
expect(to_console2, from_serial, "OK").await?;
to_console2
.send("[>>] Sending kernel image\n".into())
.send(Ok(Message::Text(" Sending kernel image\n".into())))
.await?;
let mut hasher = SeaHasher::new();
@ -90,7 +122,9 @@ where
loop {
let length = {
let buf = reader.fill_buf()?;
to_serial.send(buf.into()).await?;
to_serial
.send(Ok(Message::Binary(Bytes::copy_from_slice(buf))))
.await?;
hasher.write(buf);
buf.len()
};
@ -102,10 +136,17 @@ where
let hashed_value: u64 = hasher.finish();
to_console2
.send(format!("[>>] Sending image checksum {:x}\n", hashed_value).into())
.send(Ok(Message::Text(format!(
"⏩ Sending image checksum {:x}\n",
hashed_value
))))
.await?;
to_serial.send(hashed_value.to_le_bytes().into()).await?;
to_serial
.send(Ok(Message::Binary(Bytes::copy_from_slice(
&hashed_value.to_le_bytes(),
))))
.await?;
expect(to_console2, from_serial, "OK").await?;
@ -116,8 +157,8 @@ where
async fn serial_loop(
mut port: tokio_serial::SerialStream,
to_console: mpsc::Sender<Vec<u8>>,
mut from_console: mpsc::Receiver<Vec<u8>>,
to_console: Sender,
mut from_console: Receiver,
) -> Result<()> {
let mut buf = [0; 256];
loop {
@ -126,8 +167,13 @@ async fn serial_loop(
Some(msg) = from_console.recv() => {
// debug!("serial write {} bytes", msg.len());
tokio::io::AsyncWriteExt::write_all(&mut port, msg.as_ref()).await?;
}
match msg.unwrap() {
Message::Text(s) => {
tokio::io::AsyncWriteExt::write_all(&mut port, s.as_bytes()).await?;
},
Message::Binary(b) => tokio::io::AsyncWriteExt::write_all(&mut port, b.as_ref()).await?,
}
}
res = port.read(&mut buf) => {
match res {
@ -137,7 +183,9 @@ async fn serial_loop(
}
Ok(n) => {
// debug!("Serial read {n} bytes.");
to_console.send(buf[0..n].to_owned()).await?;
// let codec = Utf8Codec::new(buf);
let s = String::from_utf8_lossy(&buf[0..n]);
to_console.send(Ok(Message::Text(s.to_string()))).await?;
}
Err(e) => {
// if e.kind() == ErrorKind::TimedOut {
@ -154,11 +202,57 @@ async fn serial_loop(
}
}
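`serial_loop` converts each raw read with `String::from_utf8_lossy`; as the commented-out `Utf8Codec` hints, a multi-byte character split across two reads decodes lossily, because neither chunk is valid UTF-8 on its own. A small demonstration of the boundary problem:

```rust
// String::from_utf8_lossy substitutes U+FFFD for bytes that do not form a
// complete UTF-8 sequence, so a character split across two serial reads is
// mangled - the problem a framing codec would solve.
fn main() {
    let emoji = "🦀".as_bytes(); // four bytes: F0 9F A6 80
    let (first, second) = emoji.split_at(2);
    let a = String::from_utf8_lossy(first);
    let b = String::from_utf8_lossy(second);
    assert!(a.contains('\u{FFFD}')); // each half decodes with replacement chars
    assert!(b.contains('\u{FFFD}'));
    let whole = String::from_utf8_lossy(emoji);
    assert_eq!(whole, "🦀"); // only the unsplit buffer decodes cleanly
}
```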
// Always send Binary() to serial
// Convert Text() to bytes and send in serial_loop
// Receive and convert bytes to Text() in serial_loop
#[derive(Clone, Debug)]
enum Message {
Binary(Bytes),
Text(String),
}
// impl Message {
// pub fn len(&self) -> usize {
// match self {
// Message::Binary(b) => b.len(),
// Message::Text(s) => s.len(),
// }
// }
// }
impl std::fmt::Display for Message {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
Message::Binary(b) => {
for c in b {
write!(f, "{} ", c)?;
}
Ok(())
}
Message::Text(s) => write!(f, "{}", s),
}
}
}
// impl Buf for Message {
// fn remaining(&self) -> usize {
// todo!()
// }
//
// fn chunk(&self) -> &[u8] {
// todo!()
// }
//
// fn advance(&mut self, cnt: usize) {
// todo!()
// }
// }
async fn console_loop<P>(
to_console2: mpsc::Sender<Vec<u8>>,
mut from_internal: mpsc::Receiver<Vec<u8>>,
to_serial: mpsc::Sender<Vec<u8>>,
mut from_serial: mpsc::Receiver<Vec<u8>>,
to_console2: Sender,
mut from_internal: Receiver,
to_serial: Sender,
mut from_serial: Receiver,
kernel: P,
) -> Result<()>
where
@ -175,33 +269,39 @@ where
biased;
Some(received) = from_internal.recv() => {
for &x in &received[..] {
execute!(w, style::Print(format!("{}", x as char)))?;
if let Ok(message) = received {
execute!(w, style::Print(message))?;
w.flush()?;
}
w.flush()?;
}
Some(received) = from_serial.recv() => {
// execute!(w, cursor::MoveToNextLine(1), style::Print(format!("[>>] Received {} bytes from serial", from_serial.len())), cursor::MoveToNextLine(1))?;
Some(received) = from_serial.recv() => { // yields Result<Message>
if let Ok(received) = received {
let Message::Text(received) = received else {
unreachable!();
};
execute!(w, cursor::MoveToNextLine(1), style::Print(format!("[>>] Received {} bytes from serial", received.len())), cursor::MoveToNextLine(1))?;
for &x in &received[..] {
if x == 0x3 {
// execute!(w, cursor::MoveToNextLine(1), style::Print("[>>] Received a BREAK"), cursor::MoveToNextLine(1))?;
breaks += 1;
// Wait for 3 consecutive \x03 bytes to start downloading
if breaks == 3 {
// execute!(w, cursor::MoveToNextLine(1), style::Print("[>>] Received 3 BREAKs"), cursor::MoveToNextLine(1))?;
breaks = 0;
send_kernel(&to_console2, &to_serial, &mut from_serial, kernel.clone()).await?;
to_console2.send("[>>] Send successful, pass-through\n".into()).await?;
for x in received.chars() {
if x == 0x3 as char {
// execute!(w, cursor::MoveToNextLine(1), style::Print("[>>] Received a BREAK"), cursor::MoveToNextLine(1))?;
breaks += 1;
// Wait for 3 consecutive \x03 bytes to start downloading
if breaks == 3 {
// execute!(w, cursor::MoveToNextLine(1), style::Print("[>>] Received 3 BREAKs"), cursor::MoveToNextLine(1))?;
breaks = 0;
send_kernel(&to_console2, &to_serial, &mut from_serial, kernel.clone()).await?;
to_console2.send(Ok(Message::Text("🦀 Send successful, pass-through\n".into()))).await?;
}
} else {
while breaks > 0 {
execute!(w, style::Print(format!("{}", 3 as char)))?;
breaks -= 1;
}
// TODO decode buf with Utf8Codec here?
execute!(w, style::Print(format!("{}", x)))?;
w.flush()?;
}
} else {
while breaks > 0 {
execute!(w, style::Print(format!("{}", 3 as char)))?;
breaks -= 1;
}
execute!(w, style::Print(format!("{}", x as char)))?;
w.flush()?;
}
}
}
@ -213,7 +313,7 @@ where
return Ok(());
}
if let Some(key) = handle_key_event(key_event) {
to_serial.send(key.to_vec()).await?;
to_serial.send(Ok(Message::Binary(Bytes::copy_from_slice(&key)))).await?;
// Local echo
execute!(w, style::Print(format!("{:?}", key)))?;
w.flush()?;
@@ -236,8 +336,15 @@ where
P: ThePath,
{
// read from serial -> to_console==>from_serial -> output to console
let (to_console, from_serial) = mpsc::channel(256);
let (to_console2, from_internal) = mpsc::channel(256);
let (to_console, from_serial) = mpsc::channel::<Result<Message>>(256);
let (to_console2, from_internal) = mpsc::channel::<Result<Message>>(256);
// Make a Stream from Receiver
// let stream = ReceiverStream::new(from_serial);
// // Make AsyncRead from Stream
// let async_stream = StreamReader::new(stream);
// // Make FramedRead (Stream+Sink) from AsyncRead
// let from_serial = FramedRead::new(async_stream, Utf8Codec::new());
// read from console -> to_serial==>from_console -> output to serial
let (to_serial, from_console) = mpsc::channel(256);
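The channel naming convention above (sender named for the destination, receiver named for the source) can be sketched with `std::sync::mpsc` instead of tokio so it runs standalone; the payload types are simplified stand-ins:

```rust
use std::sync::mpsc;

fn main() {
    // read from serial -> to_console ==> from_serial -> printed on console
    let (to_console, from_serial) = mpsc::channel::<String>();
    // read from console -> to_serial ==> from_console -> written to serial
    let (to_serial, from_console) = mpsc::channel::<Vec<u8>>();

    to_console.send("hello".into()).unwrap();
    to_serial.send(vec![0x03]).unwrap();

    assert_eq!(from_serial.recv().unwrap(), "hello");
    assert_eq!(from_console.recv().unwrap(), vec![0x03]);
    println!("ok");
}
```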
@@ -278,7 +385,7 @@ fn handle_key_event(key_event: KeyEvent) -> Option<Bytes> {
KeyCode::Char(ch) => {
if key_event.modifiers & KeyModifiers::CONTROL == KeyModifiers::CONTROL {
buf[0] = ch as u8;
if ('a'..='z').contains(&ch) || (ch == ' ') {
if ch.is_ascii_lowercase() || (ch == ' ') {
buf[0] &= 0x1f;
Some(&buf[0..1])
} else if ('4'..='7').contains(&ch) {
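The `& 0x1f` step above relies on the ASCII convention that Ctrl+letter is the letter's code with the top three bits cleared. A minimal sketch (hypothetical `ctrl_code` helper, not part of the tool):

```rust
// Ctrl+<letter> in ASCII is the letter's code masked with 0x1f,
// e.g. Ctrl-C -> 0x03 (ETX), the BREAK byte the chainboot protocol watches for.
fn ctrl_code(ch: char) -> u8 {
    (ch as u8) & 0x1f
}

fn main() {
    assert_eq!(ctrl_code('c'), 0x03);
    assert_eq!(ctrl_code('a'), 0x01);
    assert_eq!(ctrl_code(' '), 0x00); // Ctrl-Space -> NUL
    println!("ok");
}
```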
@@ -304,9 +411,9 @@ fn handle_key_event(key_event: KeyEvent) -> Option<Bytes> {
#[tokio::main]
async fn main() -> Result<()> {
let matches = App::new("ChainOfCommand - command chainboot protocol")
let matches = Command::new("ChainOfCommand - command chainboot protocol")
.about("Use to send freshly built kernel to chainboot-compatible boot loader")
.setting(AppSettings::DisableVersionFlag)
.disable_version_flag(true)
.arg(
Arg::new("port")
.help("The device path to a serial port, e.g. /dev/ttyUSB0")
@@ -315,20 +422,28 @@ async fn main() -> Result<()> {
.arg(
Arg::new("baud")
.help("The baud rate to connect at")
.use_delimiter(false)
.use_value_delimiter(false)
.action(ArgAction::Set)
.value_parser(value_parser!(u32))
.required(true), // .validator(valid_baud),
)
.arg(
Arg::new("kernel")
.long("kernel")
.help("Path of the binary kernel image to send")
.takes_value(true)
.default_value("kernel8.img"),
)
.get_matches();
let port_name = matches.value_of("port").unwrap();
let baud_rate = matches.value_of("baud").unwrap().parse::<u32>().unwrap();
let kernel = matches.value_of("kernel").unwrap().to_owned();
let port_name = matches
.get_one::<String>("port")
.expect("port must be specified");
let baud_rate = matches
.get_one("baud")
.copied()
.expect("baud rate must be an integer");
let kernel = matches
.get_one::<String>("kernel")
.expect("kernel file must be specified");
// Check that STDIN is a proper tty
if !std::io::stdin().is_tty() {
@@ -348,7 +463,7 @@ async fn main() -> Result<()> {
execute!(
stdout,
cursor::RestorePosition,
style::Print("[>>] Opening serial port ")
style::Print(" Opening serial port ")
)?;
// tokio_serial::new() creates a builder with 8N1 setup without flow control by default.
@@ -369,7 +484,7 @@ async fn main() -> Result<()> {
stdout,
cursor::RestorePosition,
style::Print(format!(
"[>>] Waiting for serial port {}\r",
" Waiting for serial port {}\r",
if serial_toggle { "# " } else { " #" }
))
)?;
@@ -377,7 +492,10 @@ async fn main() -> Result<()> {
serial_toggle = !serial_toggle;
if crossterm::event::poll(Duration::from_millis(1000))? {
if let Event::Key(KeyEvent { code, modifiers }) = crossterm::event::read()? {
if let Event::Key(KeyEvent {
code, modifiers, ..
}) = crossterm::event::read()?
{
if code == KeyCode::Char('c') && modifiers == KeyModifiers::CONTROL {
return Ok(());
}
@@ -391,7 +509,7 @@ async fn main() -> Result<()> {
execute!(
stdout,
style::Print("\n[>>] Waiting for handshake, pass-through"),
style::Print("\n✅ Waiting for handshake, pass-through. 🔌 Power the target now."),
)?;
stdout.flush()?;
@ -408,19 +526,11 @@ async fn main() -> Result<()> {
execute!(stdout, style::Print(format!("\nError: {:?}\n", e)))?;
stdout.flush()?;
let cont = match e.downcast_ref::<std::io::Error>() {
Some(e)
if e.kind() == std::io::ErrorKind::NotFound
|| e.kind() == std::io::ErrorKind::PermissionDenied =>
{
true
}
_ => false,
} || matches!(e.downcast_ref::<tokio_serial::Error>(), Some(e) if e.kind == tokio_serial::ErrorKind::NoDevice)
|| matches!(
e.downcast_ref::<tokio::sync::mpsc::error::SendError<Vec<u8>>>(),
Some(_)
);
let cont = matches!(e.downcast_ref::<std::io::Error>(),
Some(e) if e.kind() == std::io::ErrorKind::NotFound || e.kind() == std::io::ErrorKind::PermissionDenied)
|| matches!(e.downcast_ref::<tokio_serial::Error>(), Some(e) if e.kind == tokio_serial::ErrorKind::NoDevice)
|| e.downcast_ref::<tokio::sync::mpsc::error::SendError<Vec<u8>>>()
.is_some();
if !cont {
break;

cog.toml Normal file

@@ -0,0 +1,6 @@
tag_prefix = "R"
ignore_merge_commits = true
[commit_types]
wip = { changelog_title = "Work in progress", omit_from_changelog = true }
sq = { changelog_title = "Squash me later!", omit_from_changelog = true }


@@ -0,0 +1,39 @@
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
//--------------------------------------------------------------------------------------------------
// OS Interface Code
//--------------------------------------------------------------------------------------------------
//--------------------------------------------------------------------------------------------------
// Testing
//--------------------------------------------------------------------------------------------------


@@ -1,39 +1,23 @@
---
template:
direction: Horizontal
parts:
- direction: Vertical
borderless: true
split_size:
Fixed: 1
run:
plugin:
location: "zellij:tab-bar"
- direction: Vertical
body: true
tabs:
- direction: Vertical
parts:
- direction: Horizontal
borderless: true
run:
command:
cmd: "bash"
args: ["-c", "bash emulation/qemu_multi_uart.sh"]
- direction: Horizontal
parts:
- direction: Vertical
split_size:
Percent: 30
run:
command:
cmd: "bash"
args: ["-c", "clear; echo -e \"\\033]0;MiniUart\\007\"; bash /dev/ptmx FIRST=1"]
- direction: Vertical
split_size:
Percent: 70
run:
command:
cmd: "bash"
args: ["-c", "clear; echo -e \"\\033]0;PL011 Uart\\007\"; bash /dev/ptmx SECOND=1"]
layout {
default_tab_template {
pane size=1 borderless=true {
plugin location="zellij:tab-bar"
}
children
}
tab split_direction="Vertical" {
pane split_direction="Vertical" {
pane command="bash" borderless=true close_on_exit=true {
args "-c" "bash emulation/qemu_multi_uart.sh"
}
pane split_direction="Horizontal" {
pane command="bash" size="30%" close_on_exit=true {
args "-c" "clear; echo -e \"\\033]0;MiniUart\\007\"; bash /dev/ptmx FIRST=1"
}
pane command="bash" size="70%" close_on_exit=true {
args "-c" "clear; echo -e \"\\033]0;PL011 Uart\\007\"; bash /dev/ptmx SECOND=1"
}
}
}
}
}


@@ -1,83 +0,0 @@
/*
* SPDX-License-Identifier: MIT OR BlueOak-1.0.0
* Copyright (c) 2018 Andre Richter <andre.o.richter@gmail.com>
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
* Original code distributed under MIT, additional changes are under BlueOak-1.0.0
*/
ENTRY(_boot_cores);
/* Symbols between __BOOT_START and __BOOT_END should be dropped after init is complete.
Symbols between __RO_START and __RO_END are the kernel code.
Symbols between __BSS_START and __BSS_END must be initialized to zero by r0 code in kernel.
*/
SECTIONS
{
. = 0x80000; /* AArch64 boot address is 0x80000, 4K-aligned */
__STACK_START = 0x80000; /* Stack grows from here towards 0x0. */
__BOOT_START = .;
.text :
{
KEEP(*(.text.boot.entry)) // Entry point must go first
*(.text.boot)
. = ALIGN(4096);
*(.data.boot)
. = ALIGN(4096); /* Here boot code ends */
__BOOT_END = .; // __BOOT_END must be 4KiB aligned
__RO_START = .;
*(.text .text.*)
}
.vectors ALIGN(2048):
{
KEEP(*(.vectors))
}
.rodata ALIGN(4):
{
*(.rodata .rodata.*)
FILL(0x00)
}
. = ALIGN(4096); /* Fill up to 4KiB */
__RO_END = .; /* __RO_END must be 4KiB aligned */
__DATA_START = .; /* __DATA_START must be 4KiB aligned */
.data : /* @todo align data to 4K -- it's already aligned up to __RO_END marker now */
{
*(.data .data.*)
FILL(0x00)
}
/* @todo could insert .data.boot here with proper alignment */
.bss ALIGN(8) (NOLOAD):
{
__BSS_START = .;
*(.bss .bss.*)
*(COMMON)
. = ALIGN(4096); /* Align up to 4KiB */
__BSS_END = .;
}
/DISCARD/ : { *(.comment) *(.gnu*) *(.note*) *(.eh_frame*) }
}
PROVIDE(current_el0_synchronous = default_exception_handler);
PROVIDE(current_el0_irq = default_exception_handler);
PROVIDE(current_el0_fiq = default_exception_handler);
PROVIDE(current_el0_serror = default_exception_handler);
PROVIDE(current_elx_synchronous = default_exception_handler);
PROVIDE(current_elx_irq = default_exception_handler);
PROVIDE(current_elx_fiq = default_exception_handler);
PROVIDE(current_elx_serror = default_exception_handler);
PROVIDE(lower_aarch64_synchronous = default_exception_handler);
PROVIDE(lower_aarch64_irq = default_exception_handler);
PROVIDE(lower_aarch64_fiq = default_exception_handler);
PROVIDE(lower_aarch64_serror = default_exception_handler);
PROVIDE(lower_aarch32_synchronous = default_exception_handler);
PROVIDE(lower_aarch32_irq = default_exception_handler);
PROVIDE(lower_aarch32_fiq = default_exception_handler);
PROVIDE(lower_aarch32_serror = default_exception_handler);


@@ -28,13 +28,20 @@ rpi3 = []
rpi4 = []
[dependencies]
r0 = "1.0"
qemu-exit = "3.0"
cortex-a = "7.0"
tock-registers = "0.7"
aarch64-cpu = "9.4"
tock-registers = "0.8"
ux = { version = "0.1", default-features = false }
usize_conversions = "0.2"
bit_field = "0.10"
bitflags = "1.3"
bitflags = "2.4"
cfg-if = "1.0"
snafu = { version = "0.7", default-features = false }
snafu = { version = "0.7", default-features = false, features = ["unstable-core-error"] }
buddy-alloc = { git = "https://github.com/metta-systems/buddy-alloc", version = "0.6.0", branch = "feature/allocator-api" }
once_cell = { version = "1.18", default-features = false, features = ["unstable"] }
[lib]
name = "machine"
test = true
# For proper testing in libmachine, we build it as a test_runner binary!

machine/Makefile.toml Normal file

@@ -0,0 +1,11 @@
#
# SPDX-License-Identifier: BlueOak-1.0.0
#
# Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
#
# Build nucleus library (machine)
#
[env]
CARGO_MAKE_EXTEND_WORKSPACE_MAKEFILE = true
# No special configuration needed.

machine/build.rs Normal file

@@ -0,0 +1,10 @@
/// This build script is used to create lib tests.
const LINKER_SCRIPT: &str = "machine/src/platform/raspberrypi/linker/kernel.ld";
const LINKER_SCRIPT_AUX: &str = "machine/src/arch/aarch64/linker/aarch64-exceptions.ld";
fn main() {
println!("cargo:rerun-if-changed={}", LINKER_SCRIPT);
println!("cargo:rerun-if-changed={}", LINKER_SCRIPT_AUX);
println!("cargo:rustc-link-arg=--script={}", LINKER_SCRIPT);
}


@@ -4,6 +4,8 @@ This directory contains code specific to a certain architecture.
Implementations of arch-specific kernel calls are also placed here.
One of the submodules will be exported conditionally based on target_arch. Currently, code depending on it must import the specific architecture explicitly; there are no default re-exports.
----
For more information please re-read.


@@ -5,18 +5,21 @@
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
*/
//! Low-level boot of the Raspberry's processor
//! Low-level boot of the ARMv8-A processor.
//! <http://infocenter.arm.com/help/topic/com.arm.doc.dai0527a/DAI0527A_baremetal_boot_code_for_ARMv8_A_processors.pdf>
use {
crate::endless_sleep,
cortex_a::{asm, registers::*},
super::endless_sleep,
crate::platform::cpu::BOOT_CORE_ID,
aarch64_cpu::{asm, registers::*},
core::{
cell::UnsafeCell,
slice,
sync::atomic::{self, Ordering},
},
tock_registers::interfaces::{Readable, Writeable},
};
// Stack placed before first executable instruction
const STACK_START: u64 = 0x0008_0000; // Keep in sync with linker script
/// Type check the user-supplied entry function.
#[macro_export]
macro_rules! entry {
@@ -27,59 +30,63 @@ macro_rules! entry {
#[inline(always)]
pub unsafe fn __main() -> ! {
// type check the given path
let f: fn() -> ! = $path;
let f: unsafe fn() -> ! = $path;
f()
}
};
}
/// Reset function.
/// Entrypoint of the processor.
///
/// Initializes the bss section before calling into the user's `main()`.
/// Parks all cores except core0 and checks if we started in EL2/EL3. If
/// so, proceeds with setting up EL1.
///
/// This is invoked from the linker script, does arch-specific init
/// and passes control to the kernel boot function reset().
///
/// Dissection of various RPi core boot stubs is available
/// [here](https://leiradel.github.io/2019/01/20/Raspberry-Pi-Stubs.html).
///
/// # Safety
///
/// Totally unsafe! We're in the hardware land.
#[link_section = ".text.boot"]
unsafe fn reset() -> ! {
extern "C" {
// Boundaries of the .bss section, provided by the linker script
// The type, `u64`, indicates that the memory is 8-byte aligned
static mut __BSS_START: u64;
static mut __BSS_END: u64;
}
// Zeroes the .bss section
r0::zero_bss(&mut __BSS_START, &mut __BSS_END);
/// We assume that no statics are accessed before the transition to `main()` from the `reset()` function.
#[no_mangle]
#[link_section = ".text.main.entry"]
pub unsafe extern "C" fn _boot_cores() -> ! {
// Can't match values with dots in match, so use intermediate consts.
#[cfg(qemu)]
const EL3: u64 = CurrentEL::EL::EL3.value;
const EL2: u64 = CurrentEL::EL::EL2.value;
const EL1: u64 = CurrentEL::EL::EL1.value;
extern "Rust" {
fn main() -> !;
// Stack top
// Stack placed before first executable instruction
static __STACK_TOP: UnsafeCell<()>;
}
// Set stack pointer. Used in case we started in EL1.
SP.set(__STACK_TOP.get() as u64);
shared_setup_and_enter_pre();
if BOOT_CORE_ID == super::smp::core_id() {
match CurrentEL.get() {
#[cfg(qemu)]
EL3 => setup_and_enter_el1_from_el3(),
EL2 => setup_and_enter_el1_from_el2(),
EL1 => reset(),
_ => endless_sleep(),
}
}
main()
// if not core0 or not EL3/EL2/EL1, infinitely wait for events
endless_sleep()
}
// [ARMv6 unaligned data access restrictions](https://developer.arm.com/documentation/ddi0333/h/unaligned-and-mixed-endian-data-access-support/unaligned-access-support/armv6-unaligned-data-access-restrictions?lang=en)
// dictates that compatibility bit U in CP15 must be set to 1 to allow Unaligned accesses while MMU is off.
// (In addition to SCTLR_EL1.A being 0)
// See also [CP15 C1 docs](https://developer.arm.com/documentation/ddi0290/g/system-control-coprocessor/system-control-processor-registers/c1--control-register).
// #[link_section = ".text.boot"]
// #[inline]
// fn enable_armv6_unaligned_access() {
// unsafe {
// core::arch::asm!(
// "mrc p15, 0, {u}, c1, c0, 0",
// "or {u}, {u}, {CR_U}",
// "mcr p15, 0, {u}, c1, c0, 0",
// u = out(reg) _,
// CR_U = const 1 << 22
// );
// }
// }
#[link_section = ".text.boot"]
#[inline]
#[inline(always)]
fn shared_setup_and_enter_pre() {
// Enable timer counter registers for EL1
CNTHCTL_EL2.write(CNTHCTL_EL2::EL1PCEN::SET + CNTHCTL_EL2::EL1PCTEN::SET);
@@ -106,14 +113,21 @@ fn shared_setup_and_enter_pre() {
// Set EL1 execution state to AArch64
// @todo Explain the SWIO bit (SWIO hardwired on Pi3)
HCR_EL2.write(HCR_EL2::RW::EL1IsAarch64 + HCR_EL2::SWIO::SET);
// @todo disable VM bit to prevent stage 2 MMU translations
}
#[link_section = ".text.boot"]
#[inline]
fn shared_setup_and_enter_post() -> ! {
extern "Rust" {
// Stack top
static __STACK_TOP: UnsafeCell<()>;
}
// Set up SP_EL1 (stack pointer), which will be used by EL1 once
// we "return" to it.
SP_EL1.set(STACK_START);
unsafe {
SP_EL1.set(__STACK_TOP.get() as u64);
}
// Use `eret` to "return" to EL1. This will result in execution of
// `reset()` in EL1.
@@ -181,43 +195,52 @@ fn setup_and_enter_el1_from_el3() -> ! {
shared_setup_and_enter_post()
}
/// Entrypoint of the processor.
/// Reset function.
///
/// Parks all cores except core0 and checks if we started in EL2/EL3. If
/// so, proceeds with setting up EL1.
/// Initializes the bss section before calling into the user's `main()`.
///
/// This is invoked from the linker script, does arch-specific init
/// and passes control to the kernel boot function reset().
/// # Safety
///
/// Dissection of various RPi core boot stubs is available
/// [here](https://leiradel.github.io/2019/01/20/Raspberry-Pi-Stubs.html).
/// Totally unsafe! We're in the hardware land.
/// We assume that no statics are accessed before transition to main from this function.
///
#[no_mangle]
#[link_section = ".text.boot.entry"]
pub unsafe extern "C" fn _boot_cores() -> ! {
const CORE_0: u64 = 0;
const CORE_MASK: u64 = 0x3;
// Can't match values with dots in match, so use intermediate consts.
#[cfg(qemu)]
const EL3: u64 = CurrentEL::EL::EL3.value;
const EL2: u64 = CurrentEL::EL::EL2.value;
const EL1: u64 = CurrentEL::EL::EL1.value;
// Set stack pointer. Used in case we started in EL1.
SP.set(STACK_START);
shared_setup_and_enter_pre();
if CORE_0 == MPIDR_EL1.get() & CORE_MASK {
match CurrentEL.get() {
#[cfg(qemu)]
EL3 => setup_and_enter_el1_from_el3(),
EL2 => setup_and_enter_el1_from_el2(),
EL1 => reset(),
_ => endless_sleep(),
}
/// We are guaranteed to be in EL1 non-secure mode here.
#[link_section = ".text.boot"]
unsafe fn reset() -> ! {
extern "Rust" {
// Boundaries of the .bss section, provided by the linker script.
static __BSS_START: UnsafeCell<()>;
static __BSS_SIZE_U64S: UnsafeCell<()>;
}
// if not core0 or not EL3/EL2/EL1, infinitely wait for events
endless_sleep()
// Zeroes the .bss section
// Based on https://gist.github.com/skoe/dbd3add2fc3baa600e9ebc995ddf0302 and discussions
// on pointer provenance in closing r0 issues (https://github.com/rust-embedded/cortex-m-rt/issues/300)
// NB: https://doc.rust-lang.org/nightly/core/ptr/index.html#provenance
// Importing pointers like `__BSS_START` and `__BSS_END` and performing pointer
// arithmetic on them directly may lead to Undefined Behavior, because the
// compiler may assume they come from different allocations and thus performing
// undesirable optimizations on them.
// So we use a pointer-and-a-size as described in provenance section.
let bss = slice::from_raw_parts_mut(
__BSS_START.get() as *mut u64,
__BSS_SIZE_U64S.get() as usize,
);
for i in bss {
*i = 0;
}
// Don't cross this line with loads and stores. The initializations
// done above could be "invisible" to the compiler, because we write to the
// same memory location that is used by statics after this point.
// Additionally, we assume that no statics are accessed before this point.
atomic::compiler_fence(Ordering::SeqCst);
extern "Rust" {
fn main() -> !;
}
main()
}
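The provenance notes above (one imported pointer plus one size, never two unrelated pointers) can be exercised in a hosted sketch, with an ordinary heap buffer standing in for the linker-provided `__BSS_START` / `__BSS_SIZE_U64S` symbols:

```rust
fn main() {
    // Stand-in for the .bss region: 8 words of garbage.
    let mut backing = vec![0xAAu64; 8];
    let (ptr, len) = (backing.as_mut_ptr(), backing.len());
    // One pointer and one length, as in reset() above.
    let bss = unsafe { core::slice::from_raw_parts_mut(ptr, len) };
    for word in bss.iter_mut() {
        *word = 0;
    }
    assert!(backing.iter().all(|&w| w == 0));
    println!("ok");
}
```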


@@ -0,0 +1,15 @@
use aarch64_cpu::asm;
pub mod boot;
pub mod smp;
/// Expose CPU-specific no-op opcode.
pub use asm::nop;
/// Loop forever in sleep mode.
#[inline]
pub fn endless_sleep() -> ! {
loop {
asm::wfe();
}
}


@@ -0,0 +1,7 @@
#[inline(always)]
pub fn core_id() -> u64 {
use aarch64_cpu::registers::{Readable, MPIDR_EL1};
const CORE_MASK: u64 = 0x3;
MPIDR_EL1.get() & CORE_MASK
}
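The masking done by `core_id()` above can be shown on plain values instead of the live MPIDR_EL1 register (the example inputs are made up, keeping only the low affinity bits meaningful):

```rust
// Extract the core number from an MPIDR_EL1-style value: only the low
// two bits (Aff0) matter on these 4-core Raspberry Pi SoCs.
fn core_id(mpidr_el1: u64) -> u64 {
    const CORE_MASK: u64 = 0x3;
    mpidr_el1 & CORE_MASK
}

fn main() {
    assert_eq!(core_id(0x8000_0000), 0); // core 0, higher affinity bits ignored
    assert_eq!(core_id(0x8000_0003), 3); // core 3
    println!("ok");
}
```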


@@ -0,0 +1,136 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2018-2022 Andre Richter <andre.o.richter@gmail.com>
//! Architectural asynchronous exception handling.
use {
aarch64_cpu::registers::*,
core::arch::asm,
tock_registers::interfaces::{Readable, Writeable},
};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
mod daif_bits {
pub const IRQ: u8 = 0b0010;
}
trait DaifField {
fn daif_field() -> tock_registers::fields::Field<u64, DAIF::Register>;
}
struct Debug;
struct SError;
struct IRQ;
struct FIQ;
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
impl DaifField for Debug {
fn daif_field() -> tock_registers::fields::Field<u64, DAIF::Register> {
DAIF::D
}
}
impl DaifField for SError {
fn daif_field() -> tock_registers::fields::Field<u64, DAIF::Register> {
DAIF::A
}
}
impl DaifField for IRQ {
fn daif_field() -> tock_registers::fields::Field<u64, DAIF::Register> {
DAIF::I
}
}
impl DaifField for FIQ {
fn daif_field() -> tock_registers::fields::Field<u64, DAIF::Register> {
DAIF::F
}
}
fn is_masked<T>() -> bool
where
T: DaifField,
{
DAIF.is_set(T::daif_field())
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// Returns whether IRQs are masked on the executing core.
pub fn is_local_irq_masked() -> bool {
!is_masked::<IRQ>()
}
/// Unmask IRQs on the executing core.
///
/// It is not needed to place an explicit instruction synchronization barrier after the `msr`.
/// Quoting the Architecture Reference Manual for ARMv8-A, section C5.1.3:
///
/// "Writes to PSTATE.{PAN, D, A, I, F} occur in program order without the need for additional
/// synchronization."
#[inline(always)]
pub fn local_irq_unmask() {
unsafe {
asm!(
"msr DAIFClr, {arg}",
arg = const daif_bits::IRQ,
options(nomem, nostack, preserves_flags)
);
}
}
/// Mask IRQs on the executing core.
#[inline(always)]
pub fn local_irq_mask() {
unsafe {
asm!(
"msr DAIFSet, {arg}",
arg = const daif_bits::IRQ,
options(nomem, nostack, preserves_flags)
);
}
}
/// Mask IRQs on the executing core and return the previously saved interrupt mask bits (DAIF).
#[inline(always)]
pub fn local_irq_mask_save() -> u64 {
let saved = DAIF.get();
local_irq_mask();
saved
}
/// Restore the interrupt mask bits (DAIF) using the callee's argument.
///
/// # Invariant
///
/// - No sanity checks on the input.
#[inline(always)]
pub fn local_irq_restore(saved: u64) {
DAIF.set(saved);
}
/// Print the AArch64 exceptions status.
#[rustfmt::skip]
pub fn print_state() {
use crate::info;
let to_mask_str = |x| -> _ {
if x { "Masked" } else { "Unmasked" }
};
info!(" Debug: {}", to_mask_str(is_masked::<Debug>()));
info!(" SError: {}", to_mask_str(is_masked::<SError>()));
info!(" IRQ: {}", to_mask_str(is_masked::<IRQ>()));
info!(" FIQ: {}", to_mask_str(is_masked::<FIQ>()));
}
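The bit test that `is_masked::<T>()` performs can be sketched on a plain `u64` copy of the register; the bit positions D=9, A=8, I=7, F=6 follow the AArch64 DAIF layout:

```rust
// IRQ mask bit of a DAIF-style value (bit 7 = I).
const DAIF_I: u64 = 1 << 7;

fn irq_masked(daif: u64) -> bool {
    daif & DAIF_I != 0
}

fn main() {
    assert!(irq_masked(DAIF_I));
    assert!(!irq_masked(0));
    println!("ok");
}
```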


@@ -0,0 +1,394 @@
/*
* SPDX-License-Identifier: BlueOak-1.0.0
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
*/
//! Interrupt handling
//!
//! The base address is given by VBAR_ELn and each entry has a defined offset from this
//! base address. Each table has 16 entries, with each entry being 128 bytes (32 instructions)
//! in size. The table effectively consists of 4 sets of 4 entries.
//!
//! Minimal implementation to help catch MMU traps.
//! Reads ESR_ELx to understand why trap was taken.
//!
//! VBAR_EL1, VBAR_EL2, VBAR_EL3
//!
//! CurrentEL with SP0: +0x0
//!
//! * Synchronous
//! * IRQ/vIRQ
//! * FIQ
//! * SError/vSError
//!
//! CurrentEL with SPx: +0x200
//!
//! * Synchronous
//! * IRQ/vIRQ
//! * FIQ
//! * SError/vSError
//!
//! Lower EL using AArch64: +0x400
//!
//! * Synchronous
//! * IRQ/vIRQ
//! * FIQ
//! * SError/vSError
//!
//! Lower EL using AArch32: +0x600
//!
//! * Synchronous
//! * IRQ/vIRQ
//! * FIQ
//! * SError/vSError
//!
//! When the processor takes an exception to AArch64 execution state,
//! all of the PSTATE interrupt masks are set automatically. This means
//! that further exceptions are disabled. If software is to support
//! nested exceptions, for example, to allow a higher priority interrupt
//! to interrupt the handling of a lower priority source, then software needs
//! to explicitly re-enable interrupts.
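The table layout described above (4 groups of 4 entries, 128 bytes per entry) implies fixed offsets from VBAR_ELn. A small sketch of that arithmetic, not kernel code:

```rust
// Each group spans 0x200 bytes; each entry within a group is 0x80 bytes
// (32 instructions of 4 bytes each).
fn vector_offset(group: u64, entry: u64) -> u64 {
    assert!(group < 4 && entry < 4);
    group * 0x200 + entry * 0x80
}

fn main() {
    assert_eq!(vector_offset(0, 0), 0x000); // CurrentEL with SP0, Synchronous
    assert_eq!(vector_offset(1, 1), 0x280); // CurrentEL with SPx, IRQ
    assert_eq!(vector_offset(2, 0), 0x400); // Lower EL using AArch64, Synchronous
    println!("ok");
}
```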
use {
crate::{
exception::{self, PrivilegeLevel},
info,
},
aarch64_cpu::{asm::barrier, registers::*},
core::{cell::UnsafeCell, fmt},
snafu::Snafu,
tock_registers::{
interfaces::{Readable, Writeable},
registers::InMemoryRegister,
},
};
pub mod asynchronous;
core::arch::global_asm!(include_str!("vectors.S"));
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
/// Wrapper structs for memory copies of registers.
#[repr(transparent)]
struct SpsrEL1(InMemoryRegister<u64, SPSR_EL1::Register>);
struct EsrEL1(InMemoryRegister<u64, ESR_EL1::Register>);
/// The exception context as it is stored on the stack on exception entry.
#[repr(C)]
struct ExceptionContext {
/// General Purpose Registers, x0-x29
gpr: [u64; 30],
/// The link register, aka x30.
lr: u64,
/// Exception link register. The program counter at the time the exception happened.
elr_el1: u64,
/// Saved program status.
spsr_el1: SpsrEL1,
/// Exception syndrome register.
esr_el1: EsrEL1,
}
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
/// The default exception, invoked for every exception type unless the handler
/// is overridden.
/// Prints verbose information about the exception and then panics.
///
/// Default pointer is configured in the linker script.
fn default_exception_handler(exc: &ExceptionContext) {
panic!(
"Unexpected CPU Exception!\n\n\
{}",
exc
);
}
//------------------------------------------------------------------------------
// Current, EL0
//------------------------------------------------------------------------------
#[no_mangle]
extern "C" fn current_el0_synchronous(_e: &mut ExceptionContext) {
panic!("Should not be here. Use of SP_EL0 in EL1 is not supported.")
}
#[no_mangle]
extern "C" fn current_el0_irq(_e: &mut ExceptionContext) {
panic!("Should not be here. Use of SP_EL0 in EL1 is not supported.")
}
#[no_mangle]
extern "C" fn current_el0_serror(_e: &mut ExceptionContext) {
panic!("Should not be here. Use of SP_EL0 in EL1 is not supported.")
}
//------------------------------------------------------------------------------
// Current, ELx
//------------------------------------------------------------------------------
#[no_mangle]
extern "C" fn current_elx_synchronous(e: &mut ExceptionContext) {
#[cfg(feature = "test_build")]
{
const TEST_SVC_ID: u64 = 0x1337;
if let Some(ESR_EL1::EC::Value::SVC64) = e.esr_el1.exception_class() {
if e.esr_el1.iss() == TEST_SVC_ID {
return;
}
}
}
default_exception_handler(e);
}
#[no_mangle]
extern "C" fn current_elx_irq(_e: &mut ExceptionContext) {
let token = unsafe { &exception::asynchronous::IRQContext::new() };
exception::asynchronous::irq_manager().handle_pending_irqs(token);
}
#[no_mangle]
extern "C" fn current_elx_serror(e: &mut ExceptionContext) {
default_exception_handler(e);
}
//------------------------------------------------------------------------------
// Lower, AArch64
//------------------------------------------------------------------------------
#[no_mangle]
extern "C" fn lower_aarch64_synchronous(e: &mut ExceptionContext) {
default_exception_handler(e);
}
#[no_mangle]
extern "C" fn lower_aarch64_irq(e: &mut ExceptionContext) {
default_exception_handler(e);
}
#[no_mangle]
extern "C" fn lower_aarch64_serror(e: &mut ExceptionContext) {
default_exception_handler(e);
}
//------------------------------------------------------------------------------
// Lower, AArch32
//------------------------------------------------------------------------------
#[no_mangle]
extern "C" fn lower_aarch32_synchronous(e: &mut ExceptionContext) {
default_exception_handler(e);
}
#[no_mangle]
extern "C" fn lower_aarch32_irq(e: &mut ExceptionContext) {
default_exception_handler(e);
}
#[no_mangle]
extern "C" fn lower_aarch32_serror(e: &mut ExceptionContext) {
default_exception_handler(e);
}
//------------------------------------------------------------------------------
// Misc
//------------------------------------------------------------------------------
/// Human readable SPSR_EL1.
#[rustfmt::skip]
impl fmt::Display for SpsrEL1 {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
// Raw value.
writeln!(f, "SPSR_EL1: {:#010x}", self.0.get())?;
let to_flag_str = |x| -> _ {
if x { "Set" } else { "Not set" }
};
writeln!(f, " Flags:")?;
writeln!(f, " Negative (N): {}", to_flag_str(self.0.is_set(SPSR_EL1::N)))?;
writeln!(f, " Zero (Z): {}", to_flag_str(self.0.is_set(SPSR_EL1::Z)))?;
writeln!(f, " Carry (C): {}", to_flag_str(self.0.is_set(SPSR_EL1::C)))?;
writeln!(f, " Overflow (V): {}", to_flag_str(self.0.is_set(SPSR_EL1::V)))?;
let to_mask_str = |x| -> _ {
if x { "Masked" } else { "Unmasked" }
};
writeln!(f, " Exception handling state:")?;
writeln!(f, " Debug (D): {}", to_mask_str(self.0.is_set(SPSR_EL1::D)))?;
writeln!(f, " SError (A): {}", to_mask_str(self.0.is_set(SPSR_EL1::A)))?;
writeln!(f, " IRQ (I): {}", to_mask_str(self.0.is_set(SPSR_EL1::I)))?;
writeln!(f, " FIQ (F): {}", to_mask_str(self.0.is_set(SPSR_EL1::F)))?;
write!(f, " Illegal Execution State (IL): {}",
to_flag_str(self.0.is_set(SPSR_EL1::IL))
)
}
}
impl EsrEL1 {
#[inline(always)]
fn exception_class(&self) -> Option<ESR_EL1::EC::Value> {
self.0.read_as_enum(ESR_EL1::EC)
}
#[cfg(feature = "test_build")]
#[inline(always)]
fn iss(&self) -> u64 {
self.0.read(ESR_EL1::ISS)
}
}
/// Human readable ESR_EL1.
#[rustfmt::skip]
impl fmt::Display for EsrEL1 {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
// Raw print of whole register.
writeln!(f, "ESR_EL1: {:#010x}", self.0.get())?;
// Raw print of exception class.
write!(f, " Exception Class (EC) : {:#x}", self.0.read(ESR_EL1::EC))?;
// Exception class.
let ec_translation = match self.exception_class() {
Some(ESR_EL1::EC::Value::DataAbortCurrentEL) => "Data Abort, current EL",
_ => "N/A",
};
writeln!(f, " - {}", ec_translation)?;
// Raw print of instruction specific syndrome.
write!(f, " Instr Specific Syndrome (ISS): {:#x}", self.0.read(ESR_EL1::ISS))
}
}
impl ExceptionContext {
#[inline(always)]
fn exception_class(&self) -> Option<ESR_EL1::EC::Value> {
self.esr_el1.exception_class()
}
#[inline(always)]
fn fault_address_valid(&self) -> bool {
use ESR_EL1::EC::Value::*;
match self.exception_class() {
None => false,
Some(ec) => matches!(
ec,
InstrAbortLowerEL
| InstrAbortCurrentEL
| PCAlignmentFault
| DataAbortLowerEL
| DataAbortCurrentEL
| WatchpointLowerEL
| WatchpointCurrentEL
),
}
}
}
/// Human readable print of the exception context.
impl fmt::Display for ExceptionContext {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
writeln!(f, "{}", self.esr_el1)?;
if self.fault_address_valid() {
writeln!(f, "FAR_EL1: {:#018x}", FAR_EL1.get() as usize)?;
}
writeln!(f, "{}", self.spsr_el1)?;
writeln!(f, "ELR_EL1: {:#018x}", self.elr_el1)?;
writeln!(f)?;
writeln!(f, "General purpose register:")?;
let alternating = |x| -> _ {
if x % 2 == 0 {
" "
} else {
"\n"
}
};
// Print two registers per line.
for (i, reg) in self.gpr.iter().enumerate() {
write!(f, " x{: <2}: {: >#018x}{}", i, reg, alternating(i))?;
}
write!(f, " lr : {:#018x}", self.lr)
}
}
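The `alternating` closure above drives the two-registers-per-line layout: even-indexed registers are followed by spaces, odd-indexed ones by a newline. A self-contained sketch of the same idea (writing into a `String` instead of a `Formatter`):

```rust
use std::fmt::Write;

/// Two registers per line: even indices get a spacer, odd indices a newline.
fn format_gprs(gpr: &[u64]) -> String {
    let alternating = |i: usize| if i % 2 == 0 { "   " } else { "\n" };
    let mut out = String::new();
    for (i, reg) in gpr.iter().enumerate() {
        // {:#018x} pads to 18 chars including the 0x prefix.
        write!(out, "      x{:<2}: {:#018x}{}", i, reg, alternating(i)).unwrap();
    }
    out
}
```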
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// The processor's current privilege level.
pub fn current_privilege_level() -> (PrivilegeLevel, &'static str) {
let el = CurrentEL.read_as_enum(CurrentEL::EL);
match el {
Some(CurrentEL::EL::Value::EL3) => (PrivilegeLevel::Unknown, "EL3"),
Some(CurrentEL::EL::Value::EL2) => (PrivilegeLevel::Hypervisor, "EL2"),
Some(CurrentEL::EL::Value::EL1) => (PrivilegeLevel::Kernel, "EL1"),
Some(CurrentEL::EL::Value::EL0) => (PrivilegeLevel::User, "EL0"),
_ => (PrivilegeLevel::Unknown, "Unknown"),
}
}
/// Init exception handling by setting the exception vector base address register.
///
/// # Safety
///
/// - Changes the HW state of the executing core.
/// - The vector table and the symbol `__EXCEPTION_VECTORS_START` from the linker script must
/// adhere to the alignment and size constraints demanded by the ARMv8-A Architecture Reference
/// Manual.
pub fn handling_init() {
// Provided by vectors.S.
extern "Rust" {
static __EXCEPTION_VECTORS_START: UnsafeCell<()>;
}
unsafe {
set_vbar_el1_checked(__EXCEPTION_VECTORS_START.get() as u64)
.expect("Vector table is not properly aligned!");
}
info!("[!] Exception traps set up");
}
/// Errors possibly returned from the traps module.
/// @todo a bit over-engineered here.
#[derive(Debug, Snafu)]
enum Error {
/// IVT address is unaligned.
#[snafu(display("Unaligned base address for interrupt vector table"))]
Unaligned,
}
/// Configure the base address of the interrupt vector table.
/// Checks that the address is properly 2 KiB aligned.
///
/// # Safety
///
/// Totally unsafe in the land of the hardware.
unsafe fn set_vbar_el1_checked(vec_base_addr: u64) -> Result<(), Error> {
if vec_base_addr.trailing_zeros() < 11 {
return Err(Error::Unaligned);
}
VBAR_EL1.set(vec_base_addr);
// Force VBAR update to complete before next instruction.
barrier::isb(barrier::SY);
Ok(())
}
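The alignment check above can be restated in plain form: VBAR_EL1 requires the vector table base to be 2 KiB (2^11 bytes) aligned, i.e. the low 11 bits must be zero. A minimal sketch:

```rust
/// 2 KiB alignment check, equivalent to `vec_base_addr % 2048 == 0`.
/// Note 0u64.trailing_zeros() == 64, so address 0 counts as aligned,
/// just like the modulo form.
fn is_vbar_aligned(vec_base_addr: u64) -> bool {
    vec_base_addr.trailing_zeros() >= 11
}
```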


@ -0,0 +1,19 @@
PROVIDE(current_el0_synchronous = default_exception_handler);
PROVIDE(current_el0_irq = default_exception_handler);
PROVIDE(current_el0_fiq = default_exception_handler);
PROVIDE(current_el0_serror = default_exception_handler);
PROVIDE(current_elx_synchronous = default_exception_handler);
PROVIDE(current_elx_irq = default_exception_handler);
PROVIDE(current_elx_fiq = default_exception_handler);
PROVIDE(current_elx_serror = default_exception_handler);
PROVIDE(lower_aarch64_synchronous = default_exception_handler);
PROVIDE(lower_aarch64_irq = default_exception_handler);
PROVIDE(lower_aarch64_fiq = default_exception_handler);
PROVIDE(lower_aarch64_serror = default_exception_handler);
PROVIDE(lower_aarch32_synchronous = default_exception_handler);
PROVIDE(lower_aarch32_irq = default_exception_handler);
PROVIDE(lower_aarch32_fiq = default_exception_handler);
PROVIDE(lower_aarch32_serror = default_exception_handler);


@ -4,7 +4,5 @@
*/
mod asid;
mod phys_addr;
mod virt_addr;
pub use {asid::*, phys_addr::*, virt_addr::*};
pub use asid::*;


@ -1,713 +0,0 @@
/*
* SPDX-License-Identifier: MIT OR BlueOak-1.0.0
* Copyright (c) 2018-2019 Andre Richter <andre.o.richter@gmail.com>
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
* Original code distributed under MIT, additional changes are under BlueOak-1.0.0
*/
//! MMU initialisation.
//!
//! Paging is mostly based on [previous version](https://os.phil-opp.com/page-tables/) of
//! Phil Opp's [paging guide](https://os.phil-opp.com/paging-implementation/) and
//! [ARMv8 ARM memory addressing](https://static.docs.arm.com/100940/0100/armv8_a_address%20translation_100940_0100_en.pdf).
use {
crate::{
arch::aarch64::memory::{get_virt_addr_properties, AttributeFields},
println,
},
core::{
marker::PhantomData,
ops::{Index, IndexMut},
},
cortex_a::{
asm::barrier,
registers::{ID_AA64MMFR0_EL1, SCTLR_EL1, TCR_EL1, TTBR0_EL1},
},
tock_registers::{
fields::FieldValue,
interfaces::{ReadWriteable, Readable, Writeable},
register_bitfields,
},
};
mod mair {
use {cortex_a::registers::MAIR_EL1, tock_registers::interfaces::Writeable};
/// Setup function for the MAIR_EL1 register.
pub fn set_up() {
// Define the three memory types that we will map: Normal DRAM, Uncached and Device.
MAIR_EL1.write(
// Attribute 2 -- Device Memory
MAIR_EL1::Attr2_Device::nonGathering_nonReordering_EarlyWriteAck
// Attribute 1 -- Non Cacheable DRAM
+ MAIR_EL1::Attr1_Normal_Outer::NonCacheable
+ MAIR_EL1::Attr1_Normal_Inner::NonCacheable
// Attribute 0 -- Regular Cacheable
+ MAIR_EL1::Attr0_Normal_Outer::WriteBack_NonTransient_ReadWriteAlloc
+ MAIR_EL1::Attr0_Normal_Inner::WriteBack_NonTransient_ReadWriteAlloc,
);
}
// Three descriptive consts for indexing into the correct MAIR_EL1 attributes.
pub mod attr {
pub const NORMAL: u64 = 0;
pub const NORMAL_NON_CACHEABLE: u64 = 1;
pub const DEVICE_NGNRE: u64 = 2;
// DEVICE_GRE
// DEVICE_NGNRNE
}
}
/// Parse the ID_AA64MMFR0_EL1 register for runtime information about supported MMU features.
/// Print the current state of TCR register.
pub fn print_features() {
// use crate::cortex_a::regs::RegisterReadWrite;
let sctlr = SCTLR_EL1.extract();
if let Some(SCTLR_EL1::M::Value::Enable) = sctlr.read_as_enum(SCTLR_EL1::M) {
println!("[i] MMU currently enabled");
}
if let Some(SCTLR_EL1::I::Value::Cacheable) = sctlr.read_as_enum(SCTLR_EL1::I) {
println!("[i] MMU I-cache enabled");
}
if let Some(SCTLR_EL1::C::Value::Cacheable) = sctlr.read_as_enum(SCTLR_EL1::C) {
println!("[i] MMU D-cache enabled");
}
let mmfr = ID_AA64MMFR0_EL1.extract();
if let Some(ID_AA64MMFR0_EL1::TGran4::Value::Supported) =
mmfr.read_as_enum(ID_AA64MMFR0_EL1::TGran4)
{
println!("[i] MMU: 4 KiB granule supported!");
}
if let Some(ID_AA64MMFR0_EL1::TGran16::Value::Supported) =
mmfr.read_as_enum(ID_AA64MMFR0_EL1::TGran16)
{
println!("[i] MMU: 16 KiB granule supported!");
}
if let Some(ID_AA64MMFR0_EL1::TGran64::Value::Supported) =
mmfr.read_as_enum(ID_AA64MMFR0_EL1::TGran64)
{
println!("[i] MMU: 64 KiB granule supported!");
}
match mmfr.read_as_enum(ID_AA64MMFR0_EL1::ASIDBits) {
Some(ID_AA64MMFR0_EL1::ASIDBits::Value::Bits_16) => {
println!("[i] MMU: 16 bit ASIDs supported!")
}
Some(ID_AA64MMFR0_EL1::ASIDBits::Value::Bits_8) => {
println!("[i] MMU: 8 bit ASIDs supported!")
}
_ => println!("[i] MMU: Invalid ASID bits specified!"),
}
match mmfr.read_as_enum(ID_AA64MMFR0_EL1::PARange) {
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_32) => {
println!("[i] MMU: Up to 32 Bit physical address range supported!")
}
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_36) => {
println!("[i] MMU: Up to 36 Bit physical address range supported!")
}
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_40) => {
println!("[i] MMU: Up to 40 Bit physical address range supported!")
}
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_42) => {
println!("[i] MMU: Up to 42 Bit physical address range supported!")
}
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_44) => {
println!("[i] MMU: Up to 44 Bit physical address range supported!")
}
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_48) => {
println!("[i] MMU: Up to 48 Bit physical address range supported!")
}
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_52) => {
println!("[i] MMU: Up to 52 Bit physical address range supported!")
}
_ => println!("[i] MMU: Invalid PARange specified!"),
}
let tcr = TCR_EL1.extract();
match tcr.read_as_enum(TCR_EL1::IPS) {
Some(TCR_EL1::IPS::Value::Bits_32) => {
println!("[i] MMU: 32 Bit intermediate physical address size supported!")
}
Some(TCR_EL1::IPS::Value::Bits_36) => {
println!("[i] MMU: 36 Bit intermediate physical address size supported!")
}
Some(TCR_EL1::IPS::Value::Bits_40) => {
println!("[i] MMU: 40 Bit intermediate physical address size supported!")
}
Some(TCR_EL1::IPS::Value::Bits_42) => {
println!("[i] MMU: 42 Bit intermediate physical address size supported!")
}
Some(TCR_EL1::IPS::Value::Bits_44) => {
println!("[i] MMU: 44 Bit intermediate physical address size supported!")
}
Some(TCR_EL1::IPS::Value::Bits_48) => {
println!("[i] MMU: 48 Bit intermediate physical address size supported!")
}
Some(TCR_EL1::IPS::Value::Bits_52) => {
println!("[i] MMU: 52 Bit intermediate physical address size supported!")
}
_ => println!("[i] MMU: Invalid IPS specified!"),
}
match tcr.read_as_enum(TCR_EL1::TG0) {
Some(TCR_EL1::TG0::Value::KiB_4) => println!("[i] MMU: TTBR0 4 KiB granule active!"),
Some(TCR_EL1::TG0::Value::KiB_16) => println!("[i] MMU: TTBR0 16 KiB granule active!"),
Some(TCR_EL1::TG0::Value::KiB_64) => println!("[i] MMU: TTBR0 64 KiB granule active!"),
_ => println!("[i] MMU: Invalid TTBR0 granule size specified!"),
}
let t0sz = tcr.read(TCR_EL1::T0SZ);
println!("[i] MMU: T0SZ = {}, VA size = 64 - {} = {} bits", t0sz, t0sz, 64 - t0sz);
match tcr.read_as_enum(TCR_EL1::TG1) {
Some(TCR_EL1::TG1::Value::KiB_4) => println!("[i] MMU: TTBR1 4 KiB granule active!"),
Some(TCR_EL1::TG1::Value::KiB_16) => println!("[i] MMU: TTBR1 16 KiB granule active!"),
Some(TCR_EL1::TG1::Value::KiB_64) => println!("[i] MMU: TTBR1 64 KiB granule active!"),
_ => println!("[i] MMU: Invalid TTBR1 granule size specified!"),
}
let t1sz = tcr.read(TCR_EL1::T1SZ);
println!("[i] MMU: T1SZ = {}, VA size = 64 - {} = {} bits", t1sz, t1sz, 64 - t1sz);
}
register_bitfields! {
u64,
// AArch64 Reference Manual page 2150, D5-2445
STAGE1_DESCRIPTOR [
// In table descriptors
NSTable_EL3 OFFSET(63) NUMBITS(1) [],
/// Access Permissions for subsequent tables
APTable OFFSET(61) NUMBITS(2) [
RW_EL1 = 0b00,
RW_EL1_EL0 = 0b01,
RO_EL1 = 0b10,
RO_EL1_EL0 = 0b11
],
// User execute-never for subsequent tables
UXNTable OFFSET(60) NUMBITS(1) [
Execute = 0,
NeverExecute = 1
],
/// Privileged execute-never for subsequent tables
PXNTable OFFSET(59) NUMBITS(1) [
Execute = 0,
NeverExecute = 1
],
// In block descriptors
// OS-specific data
OSData OFFSET(55) NUMBITS(4) [],
// User execute-never
UXN OFFSET(54) NUMBITS(1) [
Execute = 0,
NeverExecute = 1
],
/// Privileged execute-never
PXN OFFSET(53) NUMBITS(1) [
Execute = 0,
NeverExecute = 1
],
/// Contiguous hint bit (see the "Contiguous bit" in the ARMv8-A ARM VMSAv8-64 descriptor fields)
CONTIGUOUS OFFSET(52) NUMBITS(1) [
False = 0,
True = 1
],
/// Dirty Bit Modifier (DBM), part of ARMv8.1 hardware dirty-state tracking
DIRTY OFFSET(51) NUMBITS(1) [
False = 0,
True = 1
],
/// Various address fields, depending on use case
LVL2_OUTPUT_ADDR_4KiB OFFSET(21) NUMBITS(27) [], // [47:21]
NEXT_LVL_TABLE_ADDR_4KiB OFFSET(12) NUMBITS(36) [], // [47:12]
/// Not-global (nG) bit: the translation is tagged with the current ASID
NON_GLOBAL OFFSET(11) NUMBITS(1) [
False = 0,
True = 1
],
/// Access flag
AF OFFSET(10) NUMBITS(1) [
False = 0,
True = 1
],
/// Shareability field
SH OFFSET(8) NUMBITS(2) [
OuterShareable = 0b10,
InnerShareable = 0b11
],
/// Access Permissions
AP OFFSET(6) NUMBITS(2) [
RW_EL1 = 0b00,
RW_EL1_EL0 = 0b01,
RO_EL1 = 0b10,
RO_EL1_EL0 = 0b11
],
NS_EL3 OFFSET(5) NUMBITS(1) [],
/// Memory attributes index into the MAIR_EL1 register
AttrIndx OFFSET(2) NUMBITS(3) [],
TYPE OFFSET(1) NUMBITS(1) [
Block = 0,
Table = 1
],
VALID OFFSET(0) NUMBITS(1) [
False = 0,
True = 1
]
]
}
/// A function that maps the generic memory range attributes to HW-specific
/// attributes of the MMU.
fn into_mmu_attributes(
attribute_fields: AttributeFields,
) -> FieldValue<u64, STAGE1_DESCRIPTOR::Register> {
use super::{AccessPermissions, MemAttributes};
// Memory attributes
let mut desc = match attribute_fields.mem_attributes {
MemAttributes::CacheableDRAM => {
STAGE1_DESCRIPTOR::SH::InnerShareable
+ STAGE1_DESCRIPTOR::AttrIndx.val(mair::attr::NORMAL)
}
MemAttributes::NonCacheableDRAM => {
STAGE1_DESCRIPTOR::SH::InnerShareable
+ STAGE1_DESCRIPTOR::AttrIndx.val(mair::attr::NORMAL_NON_CACHEABLE)
}
MemAttributes::Device => {
STAGE1_DESCRIPTOR::SH::OuterShareable
+ STAGE1_DESCRIPTOR::AttrIndx.val(mair::attr::DEVICE_NGNRE)
}
};
// Access Permissions
desc += match attribute_fields.acc_perms {
AccessPermissions::ReadOnly => STAGE1_DESCRIPTOR::AP::RO_EL1,
AccessPermissions::ReadWrite => STAGE1_DESCRIPTOR::AP::RW_EL1,
};
// Execute Never
desc += if attribute_fields.execute_never {
STAGE1_DESCRIPTOR::PXN::NeverExecute
} else {
STAGE1_DESCRIPTOR::PXN::Execute
};
desc
}
/*
* With 4k page granule, a virtual address is split into 4 lookup parts
* spanning 9 bits each:
*
* _______________________________________________
* | | | | | | |
* | signx | Lv0 | Lv1 | Lv2 | Lv3 | off |
* |_______|_______|_______|_______|_______|_______|
* 63-48 47-39 38-30 29-21 20-12 11-00
*
* mask page size
*
* Lv0: FF8000000000 --
* Lv1: 7FC0000000 1G
* Lv2: 3FE00000 2M
* Lv3: 1FF000 4K
* off: FFF
*
 * RPi3 supports 64 KiB and 4 KiB granules, as well as 40-bit physical addresses.
 * It can only address 1 GiB of physical memory, though, so most of that 40-bit
 * physical range goes unused.
 *
 * 48-bit virtual address space; different mappings via TTBR0_EL1 (lower half,
 * EL0) and TTBR1_EL1 (upper half, EL1+).
*/
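The address split in the comment above can be sketched as a small helper (illustrative only, not the kernel's API): each lookup level takes 9 bits, and the page offset takes the low 12.

```rust
/// Split a 48-bit virtual address (4 KiB granule) into the four 9-bit table
/// indices (Lv0..Lv3) and the 12-bit page offset.
fn split_virt_addr(va: u64) -> (usize, usize, usize, usize, usize) {
    let idx = |shift: u64| ((va >> shift) & 0x1ff) as usize;
    (idx(39), idx(30), idx(21), idx(12), (va & 0xfff) as usize)
}
```

For instance, `0x4020_3000` lands in Lv0 entry 0, Lv1 entry 1, Lv2 entry 1, Lv3 entry 3, offset 0.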
/// Number of entries in a 4KiB mmu table.
pub const NUM_ENTRIES_4KIB: u64 = 512;
/// Trait for abstracting over the possible page sizes, 4KiB, 16KiB, 2MiB, 1GiB.
pub trait PageSize: Copy + Eq + PartialOrd + Ord {
/// The page size in bytes.
const SIZE: u64;
/// A string representation of the page size for debug output.
const SIZE_AS_DEBUG_STR: &'static str;
/// The page shift in bits.
const SHIFT: usize;
/// The page mask in bits.
const MASK: u64;
}
/// This trait is implemented for 4KiB, 16KiB, and 2MiB pages, but not for 1GiB pages.
pub trait NotGiantPageSize: PageSize {} // @todo doesn't have to be pub??
/// A standard 4KiB page.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum Size4KiB {}
impl PageSize for Size4KiB {
const SIZE: u64 = 4096;
const SIZE_AS_DEBUG_STR: &'static str = "4KiB";
const SHIFT: usize = 12;
const MASK: u64 = 0xfff;
}
impl NotGiantPageSize for Size4KiB {}
/// A “huge” 2MiB page.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum Size2MiB {}
impl PageSize for Size2MiB {
const SIZE: u64 = Size4KiB::SIZE * NUM_ENTRIES_4KIB;
const SIZE_AS_DEBUG_STR: &'static str = "2MiB";
const SHIFT: usize = 21;
const MASK: u64 = 0x1fffff;
}
impl NotGiantPageSize for Size2MiB {}
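The three constants per page size are redundant views of one number: `SIZE == 1 << SHIFT` and `MASK == SIZE - 1`. A quick consistency check using the same values as the `Size4KiB`/`Size2MiB` impls above:

```rust
// Same values as the impls above, restated as free consts for checking.
const NUM_ENTRIES_4KIB: u64 = 512;
const SIZE_4KIB: u64 = 4096;
const SHIFT_4KIB: u32 = 12;
const MASK_4KIB: u64 = 0xfff;
// A 2 MiB block covers one full L3 table's worth of 4 KiB pages.
const SIZE_2MIB: u64 = SIZE_4KIB * NUM_ENTRIES_4KIB;
const SHIFT_2MIB: u32 = 21;
const MASK_2MIB: u64 = 0x1fffff;
```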
type EntryFlags = tock_registers::fields::FieldValue<u64, STAGE1_DESCRIPTOR::Register>;
// type EntryRegister = register::LocalRegisterCopy<u64, STAGE1_DESCRIPTOR::Register>;
/// L0 table -- only pointers to L1 tables
pub enum PageGlobalDirectory {}
/// L1 tables -- pointers to L2 tables or giant 1GiB pages
pub enum PageUpperDirectory {}
/// L2 tables -- pointers to L3 tables or huge 2MiB pages
pub enum PageDirectory {}
/// L3 tables -- only pointers to 4/16KiB pages
pub enum PageTable {}
/// Shared trait for specific table levels.
pub trait TableLevel {}
/// Shared trait for hierarchical table levels.
///
/// Specifies what is the next level of page table hierarchy.
pub trait HierarchicalLevel: TableLevel {
/// Level of the next translation table below this one.
type NextLevel: TableLevel;
}
impl TableLevel for PageGlobalDirectory {}
impl TableLevel for PageUpperDirectory {}
impl TableLevel for PageDirectory {}
impl TableLevel for PageTable {}
impl HierarchicalLevel for PageGlobalDirectory {
type NextLevel = PageUpperDirectory;
}
impl HierarchicalLevel for PageUpperDirectory {
type NextLevel = PageDirectory;
}
impl HierarchicalLevel for PageDirectory {
type NextLevel = PageTable;
}
// PageTables do not have next level, therefore they are not HierarchicalLevel
/// MMU address translation table.
/// Contains just u64 internally, provides enum interface on top
#[repr(C)]
#[repr(align(4096))]
pub struct Table<L: TableLevel> {
entries: [u64; NUM_ENTRIES_4KIB as usize],
level: PhantomData<L>,
}
// Implementation code shared for all levels of page tables
impl<L> Table<L>
where
L: TableLevel,
{
/// Zero out entire table.
pub fn zero(&mut self) {
for entry in self.entries.iter_mut() {
*entry = 0;
}
}
}
impl<L> Index<usize> for Table<L>
where
L: TableLevel,
{
type Output = u64;
fn index(&self, index: usize) -> &u64 {
&self.entries[index]
}
}
impl<L> IndexMut<usize> for Table<L>
where
L: TableLevel,
{
fn index_mut(&mut self, index: usize) -> &mut u64 {
&mut self.entries[index]
}
}
/// Type-safe enum wrapper covering Table<L>'s 64-bit entries.
#[derive(Clone)]
// #[repr(transparent)]
enum PageTableEntry {
/// Empty page table entry.
Invalid,
/// Table descriptor is a L0, L1 or L2 table pointing to another table.
/// L0 tables can only point to L1 tables.
/// A descriptor pointing to the next page table.
TableDescriptor(EntryFlags),
/// A Level2 block descriptor with 2 MiB aperture.
///
/// The output points to physical memory.
Lvl2BlockDescriptor(EntryFlags),
/// A page descriptor with 4 KiB aperture.
///
/// The output points to physical memory.
PageDescriptor(EntryFlags),
}
/// A descriptor pointing to the next page table. (within PageTableEntry enum)
// struct TableDescriptor(register::FieldValue<u64, STAGE1_DESCRIPTOR::Register>);
impl PageTableEntry {
fn new_table_descriptor(next_lvl_table_addr: usize) -> Result<PageTableEntry, &'static str> {
if next_lvl_table_addr % Size4KiB::SIZE as usize != 0 {
// @todo SIZE must be usize
return Err("TableDescriptor: Address is not 4 KiB aligned.");
}
let shifted = next_lvl_table_addr >> Size4KiB::SHIFT;
Ok(PageTableEntry::TableDescriptor(
STAGE1_DESCRIPTOR::VALID::True
+ STAGE1_DESCRIPTOR::TYPE::Table
+ STAGE1_DESCRIPTOR::NEXT_LVL_TABLE_ADDR_4KiB.val(shifted as u64),
))
}
}
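Stripped of the tock-registers plumbing, `new_table_descriptor` boils down to a few bit operations: set VALID (bit 0) and TYPE=Table (bit 1), and place the next-level table address in bits [47:12]. Since a 4 KiB-aligned address has zero low 12 bits, it can be OR-ed straight into the descriptor. A hypothetical plain-u64 sketch:

```rust
/// Build a table descriptor from a 4 KiB-aligned next-level table address.
fn table_descriptor(next_lvl_table_addr: u64) -> Result<u64, &'static str> {
    if next_lvl_table_addr % 4096 != 0 {
        return Err("TableDescriptor: Address is not 4 KiB aligned.");
    }
    // Keep bits [47:12] of the address, set VALID and TYPE=Table (low bits 0b11).
    Ok((next_lvl_table_addr & 0x0000_ffff_ffff_f000) | 0b11)
}
```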
/// A Level2 block descriptor with 2 MiB aperture.
///
/// The output points to physical memory.
// struct Lvl2BlockDescriptor(register::FieldValue<u64, STAGE1_DESCRIPTOR::Register>);
impl PageTableEntry {
fn new_lvl2_block_descriptor(
output_addr: usize,
attribute_fields: AttributeFields,
) -> Result<PageTableEntry, &'static str> {
if output_addr % Size2MiB::SIZE as usize != 0 {
return Err("BlockDescriptor: Address is not 2 MiB aligned.");
}
let shifted = output_addr >> Size2MiB::SHIFT;
Ok(PageTableEntry::Lvl2BlockDescriptor(
STAGE1_DESCRIPTOR::VALID::True
+ STAGE1_DESCRIPTOR::AF::True
+ into_mmu_attributes(attribute_fields)
+ STAGE1_DESCRIPTOR::TYPE::Block
+ STAGE1_DESCRIPTOR::LVL2_OUTPUT_ADDR_4KiB.val(shifted as u64),
))
}
}
/// A page descriptor with 4 KiB aperture.
///
/// The output points to physical memory.
impl PageTableEntry {
fn new_page_descriptor(
output_addr: usize,
attribute_fields: AttributeFields,
) -> Result<PageTableEntry, &'static str> {
if output_addr % Size4KiB::SIZE as usize != 0 {
return Err("PageDescriptor: Address is not 4 KiB aligned.");
}
let shifted = output_addr >> Size4KiB::SHIFT;
Ok(PageTableEntry::PageDescriptor(
STAGE1_DESCRIPTOR::VALID::True
+ STAGE1_DESCRIPTOR::AF::True
+ into_mmu_attributes(attribute_fields)
+ STAGE1_DESCRIPTOR::TYPE::Table
+ STAGE1_DESCRIPTOR::NEXT_LVL_TABLE_ADDR_4KiB.val(shifted as u64),
))
}
}
impl From<u64> for PageTableEntry {
fn from(_val: u64) -> PageTableEntry {
// xxx0 -> Invalid
// xx11 -> TableDescriptor on L0, L1 and L2
// xx01 -> Block Entry L1 and L2
// xx11 -> PageDescriptor L3
PageTableEntry::Invalid
}
}
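The `From<u64>` stub above always yields `Invalid`; a fuller decode of the low two bits, following the comment, might look like this sketch (names are illustrative). Note that distinguishing a table descriptor from an L3 page descriptor needs the table level as extra context, since both encode as `0b11`.

```rust
#[derive(Debug, PartialEq)]
enum Kind {
    Invalid,
    Block,       // bits[1:0] == 0b01, valid at levels 1 and 2
    TableOrPage, // bits[1:0] == 0b11: table at L0..L2, page at L3
}

fn decode(raw: u64) -> Kind {
    match raw & 0b11 {
        0b01 => Kind::Block,
        0b11 => Kind::TableOrPage,
        _ => Kind::Invalid, // bit 0 clear -> descriptor not valid
    }
}
```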
impl From<PageTableEntry> for u64 {
fn from(val: PageTableEntry) -> u64 {
match val {
PageTableEntry::Invalid => 0,
PageTableEntry::TableDescriptor(x)
| PageTableEntry::Lvl2BlockDescriptor(x)
| PageTableEntry::PageDescriptor(x) => x.value,
}
}
}
static mut LVL2_TABLE: Table<PageDirectory> = Table::<PageDirectory> {
entries: [0; NUM_ENTRIES_4KIB as usize],
level: PhantomData,
};
static mut LVL3_TABLE: Table<PageTable> = Table::<PageTable> {
entries: [0; NUM_ENTRIES_4KIB as usize],
level: PhantomData,
};
trait BaseAddr {
fn base_addr_u64(&self) -> u64;
fn base_addr_usize(&self) -> usize;
}
impl BaseAddr for [u64; 512] {
fn base_addr_u64(&self) -> u64 {
self as *const u64 as u64
}
fn base_addr_usize(&self) -> usize {
self as *const u64 as usize
}
}
/// Set up identity mapped page tables for the first 1 gigabyte of address space.
/// Default split: 880 MiB ARM RAM, 128 MiB VideoCore.
///
/// # Safety
///
/// Completely unsafe, we're in the hardware land! Incorrectly initialised tables will just
/// restart the CPU.
pub unsafe fn init() -> Result<(), &'static str> {
// Prepare the memory attribute indirection register.
mair::set_up();
// Point the first 2 MiB of virtual addresses to the follow-up LVL3
// page-table.
LVL2_TABLE.entries[0] =
PageTableEntry::new_table_descriptor(LVL3_TABLE.entries.base_addr_usize())?.into();
// Fill the rest of the LVL2 (2 MiB) entries as block descriptors.
//
// Notice the skip(1) which makes the iteration start at the second 2 MiB
// block (0x20_0000).
for (block_descriptor_nr, entry) in LVL2_TABLE.entries.iter_mut().enumerate().skip(1) {
let virt_addr = block_descriptor_nr << Size2MiB::SHIFT;
let (output_addr, attribute_fields) = match get_virt_addr_properties(virt_addr) {
Err(s) => return Err(s),
Ok((a, b)) => (a, b),
};
let block_desc =
match PageTableEntry::new_lvl2_block_descriptor(output_addr, attribute_fields) {
Err(s) => return Err(s),
Ok(desc) => desc,
};
*entry = block_desc.into();
}
// Finally, fill the single LVL3 table (4 KiB granule).
for (page_descriptor_nr, entry) in LVL3_TABLE.entries.iter_mut().enumerate() {
let virt_addr = page_descriptor_nr << Size4KiB::SHIFT;
let (output_addr, attribute_fields) = match get_virt_addr_properties(virt_addr) {
Err(s) => return Err(s),
Ok((a, b)) => (a, b),
};
let page_desc = match PageTableEntry::new_page_descriptor(output_addr, attribute_fields) {
Err(s) => return Err(s),
Ok(desc) => desc,
};
*entry = page_desc.into();
}
// Point to the LVL2 table base address in TTBR0.
TTBR0_EL1.set_baddr(LVL2_TABLE.entries.base_addr_u64()); // User (lo-)space addresses
// TTBR1_EL1.set_baddr(LVL2_TABLE.entries.base_addr_u64()); // Kernel (hi-)space addresses
// Configure various settings of stage 1 of the EL1 translation regime.
let ips = ID_AA64MMFR0_EL1.read(ID_AA64MMFR0_EL1::PARange);
TCR_EL1.write(
TCR_EL1::TBI0::Ignored // @todo TBI1 also set to Ignored??
+ TCR_EL1::IPS.val(ips) // Intermediate Physical Address Size
// ttbr0 user memory addresses
+ TCR_EL1::TG0::KiB_4 // 4 KiB granule
+ TCR_EL1::SH0::Inner
+ TCR_EL1::ORGN0::WriteBack_ReadAlloc_WriteAlloc_Cacheable
+ TCR_EL1::IRGN0::WriteBack_ReadAlloc_WriteAlloc_Cacheable
+ TCR_EL1::EPD0::EnableTTBR0Walks
+ TCR_EL1::T0SZ.val(34) // ARMv8ARM Table D5-11 minimum TxSZ for starting table level 2
// ttbr1 kernel memory addresses
+ TCR_EL1::TG1::KiB_4 // 4 KiB granule
+ TCR_EL1::SH1::Inner
+ TCR_EL1::ORGN1::WriteBack_ReadAlloc_WriteAlloc_Cacheable
+ TCR_EL1::IRGN1::WriteBack_ReadAlloc_WriteAlloc_Cacheable
+ TCR_EL1::EPD1::EnableTTBR1Walks
+ TCR_EL1::T1SZ.val(34), // ARMv8ARM Table D5-11 minimum TxSZ for starting table level 2
);
// Switch the MMU on.
//
// First, force all previous changes to be seen before the MMU is enabled.
barrier::isb(barrier::SY);
// use cortex_a::regs::RegisterReadWrite;
// Enable the MMU and turn on data and instruction caching.
SCTLR_EL1.modify(SCTLR_EL1::M::Enable + SCTLR_EL1::C::Cacheable + SCTLR_EL1::I::Cacheable);
// Force MMU init to complete before next instruction
/*
* Invalidate the local I-cache so that any instructions fetched
* speculatively from the PoC are discarded, since they may have
* been dynamically patched at the PoU.
*/
barrier::isb(barrier::SY);
Ok(())
}


@ -0,0 +1,296 @@
/*
* SPDX-License-Identifier: MIT OR BlueOak-1.0.0
* Copyright (c) 2018-2019 Andre Richter <andre.o.richter@gmail.com>
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
* Original code distributed under MIT, additional changes are under BlueOak-1.0.0
*/
//! MMU initialisation.
//!
//! Paging is mostly based on [previous version](https://os.phil-opp.com/page-tables/) of
//! Phil Opp's [paging guide](https://os.phil-opp.com/paging-implementation/) and
//! [ARMv8 ARM memory addressing](https://static.docs.arm.com/100940/0100/armv8_a_address%20translation_100940_0100_en.pdf).
use {
crate::{
memory::{
mmu::{interface, interface::MMU, AddressSpace, MMUEnableError, TranslationGranule},
Address, Physical,
},
platform, println,
},
aarch64_cpu::{
asm::barrier,
registers::{ID_AA64MMFR0_EL1, SCTLR_EL1, TCR_EL1, TTBR0_EL1},
},
core::intrinsics::unlikely,
tock_registers::interfaces::{ReadWriteable, Readable, Writeable},
};
pub mod translation_table;
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
/// Memory Management Unit type.
struct MemoryManagementUnit;
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
pub type Granule512MiB = TranslationGranule<{ 512 * 1024 * 1024 }>;
pub type Granule64KiB = TranslationGranule<{ 64 * 1024 }>;
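For a power-of-two granule size like the two above, the shift (and hence the mask) follows from the size alone. A minimal analogue of what a `TranslationGranule` const-generic type can derive (illustrative name, not the kernel's API):

```rust
/// Derive the granule shift from a power-of-two granule size.
const fn granule_shift(size: u64) -> u32 {
    assert!(size.is_power_of_two());
    size.trailing_zeros()
}
```

So a 64 KiB granule has shift 16, and a 512 MiB table covers shift 29.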
/// Constants for indexing the MAIR_EL1.
#[allow(dead_code)]
pub mod mair {
// Three descriptive consts for indexing into the correct MAIR_EL1 attributes.
pub mod attr {
pub const NORMAL: u64 = 0;
pub const NORMAL_NON_CACHEABLE: u64 = 1;
pub const DEVICE_NGNRE: u64 = 2;
}
}
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
static MMU: MemoryManagementUnit = MemoryManagementUnit;
//--------------------------------------------------------------------------------------------------
// Private Implementations
//--------------------------------------------------------------------------------------------------
impl<const AS_SIZE: usize> AddressSpace<AS_SIZE> {
/// Checks for architectural restrictions.
pub const fn arch_address_space_size_sanity_checks() {
// Size must be at least one full 512 MiB table.
assert!((AS_SIZE % Granule512MiB::SIZE) == 0); // assert!() is const-friendly
// Check for 48 bit virtual address size as maximum, which is supported by any ARMv8
// version.
assert!(AS_SIZE <= (1 << 48));
}
}
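The const sanity checks above enforce two things: the address space is a whole number of 512 MiB tables, and it fits within the 48-bit VA limit that any ARMv8 version supports. The same predicate as a testable free function (a sketch, not the kernel's API):

```rust
const GRANULE_512MIB: u64 = 512 * 1024 * 1024;

/// Address space must be a multiple of 512 MiB and at most 48 bits.
const fn address_space_size_ok(as_size: u64) -> bool {
    as_size % GRANULE_512MIB == 0 && as_size <= (1 << 48)
}
```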
impl MemoryManagementUnit {
/// Setup function for the MAIR_EL1 register.
fn set_up_mair(&self) {
use aarch64_cpu::registers::MAIR_EL1;
// Define the three memory types that we will map: Normal DRAM, Uncached and device.
MAIR_EL1.write(
// Attribute 2 -- Device Memory
MAIR_EL1::Attr2_Device::nonGathering_nonReordering_EarlyWriteAck
// Attribute 1 -- Non Cacheable DRAM
+ MAIR_EL1::Attr1_Normal_Outer::NonCacheable
+ MAIR_EL1::Attr1_Normal_Inner::NonCacheable
// Attribute 0 -- Regular Cacheable
+ MAIR_EL1::Attr0_Normal_Outer::WriteBack_NonTransient_ReadWriteAlloc
+ MAIR_EL1::Attr0_Normal_Inner::WriteBack_NonTransient_ReadWriteAlloc,
);
}
/// Configure various settings of stage 1 of the EL1 translation regime.
fn configure_translation_control(&self) {
let t0sz = (64 - platform::memory::mmu::KernelVirtAddrSpace::SIZE_SHIFT) as u64;
TCR_EL1.write(
TCR_EL1::TBI0::Used
+ TCR_EL1::IPS::Bits_40
+ TCR_EL1::TG0::KiB_64
+ TCR_EL1::SH0::Inner
+ TCR_EL1::ORGN0::WriteBack_ReadAlloc_WriteAlloc_Cacheable
+ TCR_EL1::IRGN0::WriteBack_ReadAlloc_WriteAlloc_Cacheable
+ TCR_EL1::EPD0::EnableTTBR0Walks
+ TCR_EL1::A1::TTBR0 // TTBR0 defines the ASID
+ TCR_EL1::T0SZ.val(t0sz)
+ TCR_EL1::EPD1::DisableTTBR1Walks,
);
}
}
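T0SZ encodes the number of *unused* high VA bits: 64 minus the address-space size shift, as computed from `KernelVirtAddrSpace::SIZE_SHIFT` above. A tiny sketch of the arithmetic (the shift values below are assumed examples):

```rust
/// T0SZ = 64 - size_shift, where 2^size_shift is the address-space size.
fn t0sz_for(size_shift: u32) -> u64 {
    (64 - size_shift) as u64
}
```

This is also why the older `mmu.rs` (deleted in this diff) hard-coded `T0SZ.val(34)`: it mapped a 2^30-byte (1 GiB) space.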
//--------------------------------------------------------------------------------------------------
// Public Implementations
//--------------------------------------------------------------------------------------------------
/// Return a reference to the MMU instance.
pub fn mmu() -> &'static impl interface::MMU {
&MMU
}
//------------------------------------------------------------------------------
// OS Interface Code
//------------------------------------------------------------------------------
impl interface::MMU for MemoryManagementUnit {
unsafe fn enable_mmu_and_caching(
&self,
phys_tables_base_addr: Address<Physical>,
) -> Result<(), MMUEnableError> {
if unlikely(self.is_enabled()) {
return Err(MMUEnableError::AlreadyEnabled);
}
// Fail early if translation granule is not supported.
if unlikely(!ID_AA64MMFR0_EL1.matches_all(ID_AA64MMFR0_EL1::TGran64::Supported)) {
return Err(MMUEnableError::Other {
err: "Translation granule not supported by hardware",
});
}
// Prepare the memory attribute indirection register.
self.set_up_mair();
// // Populate translation tables.
// KERNEL_TABLES
// .populate_translation_table_entries()
// .map_err(|err| MMUEnableError::Other { err })?;
// Set the "Translation Table Base Register".
TTBR0_EL1.set_baddr(phys_tables_base_addr.as_usize() as u64);
self.configure_translation_control();
// Switch the MMU on.
//
// First, force all previous changes to be seen before the MMU is enabled.
barrier::isb(barrier::SY);
// Enable the MMU and turn on data and instruction caching.
SCTLR_EL1.modify(SCTLR_EL1::M::Enable + SCTLR_EL1::C::Cacheable + SCTLR_EL1::I::Cacheable);
// Force MMU init to complete before next instruction.
barrier::isb(barrier::SY);
Ok(())
}
#[inline(always)]
fn is_enabled(&self) -> bool {
SCTLR_EL1.matches_all(SCTLR_EL1::M::Enable)
}
/// Parse the ID_AA64MMFR0_EL1 register for runtime information about supported MMU features.
/// Print the current state of TCR register.
fn print_features(&self) {
// use crate::cortex_a::regs::RegisterReadWrite;
let sctlr = SCTLR_EL1.extract();
if let Some(SCTLR_EL1::M::Value::Enable) = sctlr.read_as_enum(SCTLR_EL1::M) {
println!("[i] MMU currently enabled");
}
if let Some(SCTLR_EL1::I::Value::Cacheable) = sctlr.read_as_enum(SCTLR_EL1::I) {
println!("[i] MMU I-cache enabled");
}
if let Some(SCTLR_EL1::C::Value::Cacheable) = sctlr.read_as_enum(SCTLR_EL1::C) {
println!("[i] MMU D-cache enabled");
}
let mmfr = ID_AA64MMFR0_EL1.extract();
if let Some(ID_AA64MMFR0_EL1::TGran4::Value::Supported) =
mmfr.read_as_enum(ID_AA64MMFR0_EL1::TGran4)
{
println!("[i] MMU: 4 KiB granule supported!");
}
if let Some(ID_AA64MMFR0_EL1::TGran16::Value::Supported) =
mmfr.read_as_enum(ID_AA64MMFR0_EL1::TGran16)
{
println!("[i] MMU: 16 KiB granule supported!");
}
if let Some(ID_AA64MMFR0_EL1::TGran64::Value::Supported) =
mmfr.read_as_enum(ID_AA64MMFR0_EL1::TGran64)
{
println!("[i] MMU: 64 KiB granule supported!");
}
match mmfr.read_as_enum(ID_AA64MMFR0_EL1::ASIDBits) {
Some(ID_AA64MMFR0_EL1::ASIDBits::Value::Bits_16) => {
println!("[i] MMU: 16 bit ASIDs supported!")
}
Some(ID_AA64MMFR0_EL1::ASIDBits::Value::Bits_8) => {
println!("[i] MMU: 8 bit ASIDs supported!")
}
_ => println!("[i] MMU: Invalid ASID bits specified!"),
}
match mmfr.read_as_enum(ID_AA64MMFR0_EL1::PARange) {
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_32) => {
println!("[i] MMU: Up to 32 Bit physical address range supported!")
}
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_36) => {
println!("[i] MMU: Up to 36 Bit physical address range supported!")
}
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_40) => {
println!("[i] MMU: Up to 40 Bit physical address range supported!")
}
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_42) => {
println!("[i] MMU: Up to 42 Bit physical address range supported!")
}
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_44) => {
println!("[i] MMU: Up to 44 Bit physical address range supported!")
}
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_48) => {
println!("[i] MMU: Up to 48 Bit physical address range supported!")
}
Some(ID_AA64MMFR0_EL1::PARange::Value::Bits_52) => {
println!("[i] MMU: Up to 52 Bit physical address range supported!")
}
_ => println!("[i] MMU: Invalid PARange specified!"),
}
let tcr = TCR_EL1.extract();
match tcr.read_as_enum(TCR_EL1::IPS) {
Some(TCR_EL1::IPS::Value::Bits_32) => {
println!("[i] MMU: 32 Bit intermediate physical address size supported!")
}
Some(TCR_EL1::IPS::Value::Bits_36) => {
println!("[i] MMU: 36 Bit intermediate physical address size supported!")
}
Some(TCR_EL1::IPS::Value::Bits_40) => {
println!("[i] MMU: 40 Bit intermediate physical address size supported!")
}
Some(TCR_EL1::IPS::Value::Bits_42) => {
println!("[i] MMU: 42 Bit intermediate physical address size supported!")
}
Some(TCR_EL1::IPS::Value::Bits_44) => {
println!("[i] MMU: 44 Bit intermediate physical address size supported!")
}
Some(TCR_EL1::IPS::Value::Bits_48) => {
println!("[i] MMU: 48 Bit intermediate physical address size supported!")
}
Some(TCR_EL1::IPS::Value::Bits_52) => {
println!("[i] MMU: 52 Bit intermediate physical address size supported!")
}
_ => println!("[i] MMU: Invalid IPS specified!"),
}
match tcr.read_as_enum(TCR_EL1::TG0) {
Some(TCR_EL1::TG0::Value::KiB_4) => println!("[i] MMU: TTBR0 4 KiB granule active!"),
Some(TCR_EL1::TG0::Value::KiB_16) => println!("[i] MMU: TTBR0 16 KiB granule active!"),
Some(TCR_EL1::TG0::Value::KiB_64) => println!("[i] MMU: TTBR0 64 KiB granule active!"),
_ => println!("[i] MMU: Invalid TTBR0 granule size specified!"),
}
let t0sz = tcr.read(TCR_EL1::T0SZ);
println!("[i] MMU: T0SZ = {}, TTBR0 address size = 64 - {} = {} bits", t0sz, t0sz, 64 - t0sz);
match tcr.read_as_enum(TCR_EL1::TG1) {
Some(TCR_EL1::TG1::Value::KiB_4) => println!("[i] MMU: TTBR1 4 KiB granule active!"),
Some(TCR_EL1::TG1::Value::KiB_16) => println!("[i] MMU: TTBR1 16 KiB granule active!"),
Some(TCR_EL1::TG1::Value::KiB_64) => println!("[i] MMU: TTBR1 64 KiB granule active!"),
_ => println!("[i] MMU: Invalid TTBR1 granule size specified!"),
}
let t1sz = tcr.read(TCR_EL1::T1SZ);
println!("[i] MMU: T1SZ = {}, TTBR1 address size = 64 - {} = {} bits", t1sz, t1sz, 64 - t1sz);
}
}
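The match chains above decode ID register fields via `read_as_enum`. A minimal host-runnable sketch of the same decoding, using the raw 4-bit PARange encodings from the Arm ARM (the helper name is illustrative, not part of this codebase):

```rust
/// Decode the raw 4-bit PARange field of ID_AA64MMFR0_EL1 into the supported
/// physical-address width in bits. Encodings follow the Arm Architecture
/// Reference Manual; a reserved encoding yields None.
fn parange_bits(raw: u64) -> Option<u32> {
    match raw & 0b1111 {
        0b0000 => Some(32),
        0b0001 => Some(36),
        0b0010 => Some(40),
        0b0011 => Some(42),
        0b0100 => Some(44),
        0b0101 => Some(48),
        0b0110 => Some(52),
        _ => None, // reserved encoding
    }
}
```

This mirrors the `_ => println!("... Invalid PARange ...")` fallback above: anything outside the documented encodings is reported rather than trusted.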

View File

@ -0,0 +1,441 @@
use {
super::{mair, Granule512MiB, Granule64KiB},
crate::{
memory::{
self,
mmu::{AccessPermissions, AttributeFields, MemAttributes, MemoryRegion, PageAddress},
Address, Physical, Virtual,
},
platform,
},
core::convert,
tock_registers::{
interfaces::{Readable, Writeable},
register_bitfields,
registers::InMemoryRegister,
},
};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
register_bitfields! {
u64,
/// A table descriptor, as per ARMv8-A Architecture Reference Manual Figure D5-15.
/// AArch64 Reference Manual page 2150, D5-2445
STAGE1_TABLE_DESCRIPTOR [
/// Physical address of the next descriptor.
NEXT_LEVEL_TABLE_ADDR_64KiB OFFSET(16) NUMBITS(32) [], // [47:16]
NEXT_LEVEL_TABLE_ADDR_4KiB OFFSET(12) NUMBITS(36) [], // [47:12]
TYPE OFFSET(1) NUMBITS(1) [
Block = 0,
Table = 1
],
VALID OFFSET(0) NUMBITS(1) [
False = 0,
True = 1
]
]
}
register_bitfields! {
u64,
/// A level 3 page descriptor, as per ARMv8-A Architecture Reference Manual Figure D5-17.
/// AArch64 Reference Manual page 2150, D5-2445
STAGE1_PAGE_DESCRIPTOR [
/// Unprivileged execute-never.
UXN OFFSET(54) NUMBITS(1) [
Execute = 0,
NeverExecute = 1
],
/// Privileged execute-never
PXN OFFSET(53) NUMBITS(1) [
Execute = 0,
NeverExecute = 1
],
/// Physical address of the output page this descriptor maps.
OUTPUT_ADDR_64KiB OFFSET(16) NUMBITS(32) [], // [47:16]
OUTPUT_ADDR_4KiB OFFSET(12) NUMBITS(36) [], // [47:12]
/// Access flag
AF OFFSET(10) NUMBITS(1) [
NotAccessed = 0,
Accessed = 1
],
/// Shareability field
SH OFFSET(8) NUMBITS(2) [
OuterShareable = 0b10,
InnerShareable = 0b11
],
/// Access Permissions
AP OFFSET(6) NUMBITS(2) [
RW_EL1 = 0b00,
RW_EL1_EL0 = 0b01,
RO_EL1 = 0b10,
RO_EL1_EL0 = 0b11
],
/// Memory attributes index into the MAIR_EL1 register
AttrIndx OFFSET(2) NUMBITS(3) [],
TYPE OFFSET(1) NUMBITS(1) [
Reserved_Invalid = 0,
Page = 1
],
VALID OFFSET(0) NUMBITS(1) [
False = 0,
True = 1
]
]
}
/// A table descriptor with 64 KiB aperture.
///
/// The output points to the next table.
#[derive(Copy, Clone)]
#[repr(C)]
struct TableDescriptor {
value: u64,
}
/// A page descriptor with 64 KiB aperture.
///
/// The output points to physical memory.
#[derive(Copy, Clone)]
#[repr(C)]
struct PageDescriptor {
value: u64,
}
trait BaseAddr {
fn phys_start_addr(&self) -> Address<Physical>;
fn base_addr_u64(&self) -> u64;
fn base_addr_usize(&self) -> usize;
}
// const NUM_LVL2_TABLES: usize = platform::memory::mmu::KernelAddrSpace::SIZE >> Granule512MiB::SHIFT;
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Big monolithic struct for storing the translation tables. Individual levels must be 64 KiB
/// aligned, so the lvl3 is put first.
#[repr(C)]
#[repr(align(65536))]
pub struct FixedSizeTranslationTable<const NUM_TABLES: usize> {
/// Page descriptors, covering 64 KiB windows per entry.
lvl3: [[PageDescriptor; 8192]; NUM_TABLES],
/// Table descriptors, covering 512 MiB windows.
lvl2: [TableDescriptor; NUM_TABLES],
/// Have the tables been initialized?
initialized: bool,
}
// /// A translation table type for the kernel space.
// pub type KernelTranslationTable = FixedSizeTranslationTable<NUM_LVL2_TABLES>;
//--------------------------------------------------------------------------------------------------
// Private Implementations
//--------------------------------------------------------------------------------------------------
impl<T, const N: usize> BaseAddr for [T; N] {
// The binary is still identity mapped, so we don't need to convert here.
fn phys_start_addr(&self) -> Address<Physical> {
Address::new(self as *const _ as usize)
}
fn base_addr_u64(&self) -> u64 {
self as *const T as u64
}
fn base_addr_usize(&self) -> usize {
self as *const T as usize
}
}
impl TableDescriptor {
/// Create an instance.
///
/// Descriptor is invalid by default.
pub const fn new_zeroed() -> Self {
Self { value: 0 }
}
/// Create an instance pointing to the supplied address.
pub fn from_next_lvl_table_addr(phys_next_lvl_table_addr: Address<Physical>) -> Self {
let val = InMemoryRegister::<u64, STAGE1_TABLE_DESCRIPTOR::Register>::new(0);
let shifted = phys_next_lvl_table_addr.as_usize() >> Granule64KiB::SHIFT;
val.write(
STAGE1_TABLE_DESCRIPTOR::NEXT_LEVEL_TABLE_ADDR_64KiB.val(shifted as u64)
+ STAGE1_TABLE_DESCRIPTOR::TYPE::Table
+ STAGE1_TABLE_DESCRIPTOR::VALID::True,
);
TableDescriptor { value: val.get() }
}
}
impl PageDescriptor {
/// Create an instance.
///
/// Descriptor is invalid by default.
pub const fn new_zeroed() -> Self {
Self { value: 0 }
}
/// Create an instance.
pub fn from_output_page_addr(
phys_output_page_addr: PageAddress<Physical>,
attribute_fields: &AttributeFields,
) -> Self {
let val = InMemoryRegister::<u64, STAGE1_PAGE_DESCRIPTOR::Register>::new(0);
let shifted = phys_output_page_addr.into_inner().as_usize() >> Granule64KiB::SHIFT;
val.write(
STAGE1_PAGE_DESCRIPTOR::OUTPUT_ADDR_64KiB.val(shifted as u64)
+ STAGE1_PAGE_DESCRIPTOR::AF::Accessed
+ STAGE1_PAGE_DESCRIPTOR::TYPE::Page
+ STAGE1_PAGE_DESCRIPTOR::VALID::True
+ (*attribute_fields).into(),
);
Self { value: val.get() }
}
/// Returns the valid bit.
fn is_valid(&self) -> bool {
InMemoryRegister::<u64, STAGE1_PAGE_DESCRIPTOR::Register>::new(self.value)
.is_set(STAGE1_PAGE_DESCRIPTOR::VALID)
}
}
/// Convert the kernel's generic memory attributes to HW-specific attributes of the MMU.
impl convert::From<AttributeFields>
for tock_registers::fields::FieldValue<u64, STAGE1_PAGE_DESCRIPTOR::Register>
{
fn from(attribute_fields: AttributeFields) -> Self {
// Memory attributes
let mut desc = match attribute_fields.mem_attributes {
MemAttributes::CacheableDRAM => {
STAGE1_PAGE_DESCRIPTOR::SH::InnerShareable
+ STAGE1_PAGE_DESCRIPTOR::AttrIndx.val(mair::attr::NORMAL)
}
MemAttributes::NonCacheableDRAM => {
STAGE1_PAGE_DESCRIPTOR::SH::InnerShareable
+ STAGE1_PAGE_DESCRIPTOR::AttrIndx.val(mair::attr::NORMAL_NON_CACHEABLE)
}
MemAttributes::Device => {
STAGE1_PAGE_DESCRIPTOR::SH::OuterShareable
+ STAGE1_PAGE_DESCRIPTOR::AttrIndx.val(mair::attr::DEVICE_NGNRE)
}
};
// Access Permissions
desc += match attribute_fields.acc_perms {
AccessPermissions::ReadOnly => STAGE1_PAGE_DESCRIPTOR::AP::RO_EL1,
AccessPermissions::ReadWrite => STAGE1_PAGE_DESCRIPTOR::AP::RW_EL1,
};
// The execute-never attribute is mapped to PXN in AArch64.
desc += if attribute_fields.execute_never {
STAGE1_PAGE_DESCRIPTOR::PXN::NeverExecute
} else {
STAGE1_PAGE_DESCRIPTOR::PXN::Execute
};
// Always set unprivileged execute-never as long as userspace is not implemented yet.
desc += STAGE1_PAGE_DESCRIPTOR::UXN::NeverExecute;
desc
}
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
impl<const AS_SIZE: usize> memory::mmu::AssociatedTranslationTable
for memory::mmu::AddressSpace<AS_SIZE>
where
[u8; Self::SIZE >> Granule512MiB::SHIFT]: Sized,
{
type TableStartFromBottom = FixedSizeTranslationTable<{ Self::SIZE >> Granule512MiB::SHIFT }>;
}
impl<const NUM_TABLES: usize> FixedSizeTranslationTable<NUM_TABLES> {
/// Create an instance.
#[allow(clippy::assertions_on_constants)]
pub const fn new() -> Self {
assert!(platform::memory::mmu::KernelGranule::SIZE == Granule64KiB::SIZE); // assert! is const-fn-friendly
// Can't have a zero-sized address space.
assert!(NUM_TABLES > 0);
Self {
lvl3: [[PageDescriptor::new_zeroed(); 8192]; NUM_TABLES],
lvl2: [TableDescriptor::new_zeroed(); NUM_TABLES],
initialized: false,
}
}
/// Helper to calculate the lvl2 and lvl3 indices from an address.
#[inline(always)]
fn lvl2_lvl3_index_from_page_addr(
&self,
virt_page_addr: PageAddress<Virtual>,
) -> Result<(usize, usize), &'static str> {
let addr = virt_page_addr.into_inner().as_usize();
let lvl2_index = addr >> Granule512MiB::SHIFT;
let lvl3_index = (addr & Granule512MiB::MASK) >> Granule64KiB::SHIFT;
if lvl2_index > (NUM_TABLES - 1) {
return Err("Virtual page is out of bounds of translation table");
}
Ok((lvl2_index, lvl3_index))
}
/// Sets the PageDescriptor corresponding to the supplied page address.
///
/// Doesn't allow overwriting an already valid page descriptor.
#[inline(always)]
fn set_page_descriptor_from_page_addr(
&mut self,
virt_page_addr: PageAddress<Virtual>,
new_desc: &PageDescriptor,
) -> Result<(), &'static str> {
let (lvl2_index, lvl3_index) = self.lvl2_lvl3_index_from_page_addr(virt_page_addr)?;
let desc = &mut self.lvl3[lvl2_index][lvl3_index];
if desc.is_valid() {
return Err("Virtual page is already mapped");
}
*desc = *new_desc;
Ok(())
}
}
//------------------------------------------------------------------------------
// OS Interface Code
//------------------------------------------------------------------------------
impl<const NUM_TABLES: usize> memory::mmu::translation_table::interface::TranslationTable
for FixedSizeTranslationTable<NUM_TABLES>
{
/// Populate the lvl2 table descriptors so that each one points at its lvl3 table.
///
/// Subsequent calls are no-ops: the tables are filled exactly once.
// pub unsafe fn populate_translation_table_entries(&mut self) -> Result<(), &'static str> {
// for (l2_nr, l2_entry) in self.lvl2.iter_mut().enumerate() {
// *l2_entry =
// TableDescriptor::from_next_lvl_table_addr(self.lvl3[l2_nr].base_addr_usize());
//
// for (l3_nr, l3_entry) in self.lvl3[l2_nr].iter_mut().enumerate() {
// let virt_addr = (l2_nr << Granule512MiB::SHIFT) + (l3_nr << Granule64KiB::SHIFT);
//
// let (phys_output_addr, attribute_fields) =
// platform::memory::mmu::virt_mem_layout().virt_addr_properties(virt_addr)?;
//
// *l3_entry = PageDescriptor::from_output_addr(phys_output_addr, &attribute_fields);
// }
// }
//
// Ok(())
// }
fn init(&mut self) {
if self.initialized {
return;
}
// Populate the l2 entries.
for (lvl2_nr, lvl2_entry) in self.lvl2.iter_mut().enumerate() {
let phys_table_addr = self.lvl3[lvl2_nr].phys_start_addr();
let new_desc = TableDescriptor::from_next_lvl_table_addr(phys_table_addr);
*lvl2_entry = new_desc;
}
self.initialized = true;
}
fn phys_base_address(&self) -> Address<Physical> {
self.lvl2.phys_start_addr()
}
unsafe fn map_at(
&mut self,
virt_region: &MemoryRegion<Virtual>,
phys_region: &MemoryRegion<Physical>,
attr: &AttributeFields,
) -> Result<(), &'static str> {
assert!(self.initialized, "Translation tables not initialized");
if virt_region.size() != phys_region.size() {
return Err("Tried to map memory regions with different sizes");
}
if phys_region.end_exclusive_page_addr()
> platform::memory::phys_addr_space_end_exclusive_addr()
{
return Err("Tried to map outside of physical address space");
}
#[allow(clippy::useless_conversion)]
let iter = phys_region.into_iter().zip(virt_region.into_iter());
for (phys_page_addr, virt_page_addr) in iter {
let new_desc = PageDescriptor::from_output_page_addr(phys_page_addr, attr);
self.set_page_descriptor_from_page_addr(virt_page_addr, &new_desc)?;
}
}
Ok(())
}
}
//--------------------------------------------------------------------------------------------------
// Testing
//--------------------------------------------------------------------------------------------------
#[cfg(test)]
pub type MinSizeTranslationTable = FixedSizeTranslationTable<1>;
#[cfg(test)]
mod tests {
use super::*;
/// Check if the size of `struct TableDescriptor` is as expected.
#[test_case]
fn size_of_tabledescriptor_equals_64_bit() {
assert_eq!(
core::mem::size_of::<TableDescriptor>(),
core::mem::size_of::<u64>()
);
}
/// Check if the size of `struct PageDescriptor` is as expected.
#[test_case]
fn size_of_pagedescriptor_equals_64_bit() {
assert_eq!(
core::mem::size_of::<PageDescriptor>(),
core::mem::size_of::<u64>()
);
}
}
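The lvl2/lvl3 index split in `lvl2_lvl3_index_from_page_addr` is plain shift-and-mask arithmetic. A host-runnable sketch with bare `usize` addresses instead of `PageAddress` (constants mirror `Granule512MiB`/`Granule64KiB` above; the function name is illustrative):

```rust
// 512 MiB = 2^29 per lvl2 entry; 64 KiB = 2^16 per lvl3 page.
const GRANULE_512MIB_SHIFT: usize = 29;
const GRANULE_512MIB_MASK: usize = (1 << GRANULE_512MIB_SHIFT) - 1;
const GRANULE_64KIB_SHIFT: usize = 16;

/// Split a virtual address into (lvl2 index, lvl3 index), rejecting addresses
/// beyond the table's coverage, as in the kernel's bounds check.
fn lvl2_lvl3_index(virt_addr: usize, num_tables: usize) -> Result<(usize, usize), &'static str> {
    let lvl2 = virt_addr >> GRANULE_512MIB_SHIFT;
    let lvl3 = (virt_addr & GRANULE_512MIB_MASK) >> GRANULE_64KIB_SHIFT;
    if lvl2 >= num_tables {
        return Err("Virtual page is out of bounds of translation table");
    }
    Ok((lvl2, lvl3))
}
```

For example, the fourth 64 KiB page of the second 512 MiB window (`0x2003_0000`) lands at lvl2 index 1, lvl3 index 3.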

View File

@ -5,343 +5,13 @@
//! Memory management functions for aarch64.
use {
crate::println,
core::{fmt, ops::RangeInclusive},
};
mod addr;
pub mod mmu;
pub use addr::{PhysAddr, VirtAddr};
// pub use addr::{PhysAddr, VirtAddr};
// aarch64 granules and page sizes howto:
// https://stackoverflow.com/questions/34269185/simultaneous-existence-of-different-sized-pages-on-aarch64
/// Default page size used by the kernel.
pub const PAGE_SIZE: usize = 4096;
/// System memory map.
/// This is a fixed memory map for RasPi3,
/// @todo we need to infer the memory map from the provided DTB.
#[rustfmt::skip]
pub mod map {
/// Beginning of memory.
pub const START: usize = 0x0000_0000;
/// End of memory.
pub const END: usize = 0x3FFF_FFFF;
/// Physical RAM addresses.
pub mod phys {
/// Base address of video (VC) memory.
pub const VIDEOMEM_BASE: usize = 0x3e00_0000;
/// Base address of MMIO register range.
pub const MMIO_BASE: usize = 0x3F00_0000;
/// Base address of ARM<->VC mailbox area.
pub const VIDEOCORE_MBOX_BASE: usize = MMIO_BASE + 0x0000_B880;
/// Base address of GPIO registers.
pub const GPIO_BASE: usize = MMIO_BASE + 0x0020_0000;
/// Base address of regular UART.
pub const PL011_UART_BASE: usize = MMIO_BASE + 0x0020_1000;
/// Base address of MiniUART.
pub const MINI_UART_BASE: usize = MMIO_BASE + 0x0021_5000;
/// End of MMIO memory.
pub const MMIO_END: usize = super::END;
}
/// Virtual (mapped) addresses.
pub mod virt {
/// Start (top) of kernel stack.
pub const KERN_STACK_START: usize = super::START;
/// End (bottom) of kernel stack. SP starts at KERN_STACK_END + 1.
pub const KERN_STACK_END: usize = 0x0007_FFFF;
/// Location of DMA-able memory region (in the second 2 MiB block).
pub const DMA_HEAP_START: usize = 0x0020_0000;
/// End of DMA-able memory region.
pub const DMA_HEAP_END: usize = 0x005F_FFFF;
}
}
/// Types used for compiling the virtual memory layout of the kernel using address ranges.
pub mod kernel_mem_range {
use core::ops::RangeInclusive;
/// Memory region attributes.
#[derive(Copy, Clone)]
pub enum MemAttributes {
/// Regular memory
CacheableDRAM,
/// Memory without caching
NonCacheableDRAM,
/// Device memory
Device,
}
/// Memory region access permissions.
#[derive(Copy, Clone)]
pub enum AccessPermissions {
/// Read-only access
ReadOnly,
/// Read-write access
ReadWrite,
}
/// Memory region translation.
#[allow(dead_code)]
#[derive(Copy, Clone)]
pub enum Translation {
/// One-to-one address mapping
Identity,
/// Mapping with a specified offset
Offset(usize),
}
/// Summary structure of memory region properties.
#[derive(Copy, Clone)]
pub struct AttributeFields {
/// Attributes
pub mem_attributes: MemAttributes,
/// Permissions
pub acc_perms: AccessPermissions,
/// Disable executable code in this region
pub execute_never: bool,
}
impl Default for AttributeFields {
fn default() -> AttributeFields {
AttributeFields {
mem_attributes: MemAttributes::CacheableDRAM,
acc_perms: AccessPermissions::ReadWrite,
execute_never: true,
}
}
}
/// Memory region descriptor.
///
/// Used to construct iterable kernel memory ranges.
pub struct Descriptor {
/// Name of the region
pub name: &'static str,
/// Virtual memory range
pub virtual_range: fn() -> RangeInclusive<usize>,
/// Mapping translation
pub translation: Translation,
/// Attributes
pub attribute_fields: AttributeFields,
}
}
pub use kernel_mem_range::*;
/// A virtual memory layout that is agnostic of the paging granularity that the
/// hardware MMU will use.
///
/// Contains only special ranges, aka anything that is _not_ normal cacheable
/// DRAM.
static KERNEL_VIRTUAL_LAYOUT: [Descriptor; 6] = [
Descriptor {
name: "Kernel stack",
virtual_range: || {
RangeInclusive::new(map::virt::KERN_STACK_START, map::virt::KERN_STACK_END)
},
translation: Translation::Identity,
attribute_fields: AttributeFields {
mem_attributes: MemAttributes::CacheableDRAM,
acc_perms: AccessPermissions::ReadWrite,
execute_never: true,
},
},
Descriptor {
name: "Boot code and data",
virtual_range: || {
// Using the linker script, we ensure that the boot area is consecutive and 4
// KiB aligned, and we export the boundaries via symbols:
//
// [__BOOT_START, __BOOT_END)
extern "C" {
// The inclusive start of the boot area, aka the address of the
// first byte of the area.
static __BOOT_START: u64;
// The exclusive end of the boot area, aka the address of
// the first byte _after_ the boot area.
static __BOOT_END: u64;
}
unsafe {
// Notice the subtraction to turn the exclusive end into an
// inclusive end
RangeInclusive::new(
&__BOOT_START as *const _ as usize,
&__BOOT_END as *const _ as usize - 1,
)
}
},
translation: Translation::Identity,
attribute_fields: AttributeFields {
mem_attributes: MemAttributes::CacheableDRAM,
acc_perms: AccessPermissions::ReadOnly,
execute_never: false,
},
},
Descriptor {
name: "Kernel code and RO data",
virtual_range: || {
// Using the linker script, we ensure that the RO area is consecutive and 4
// KiB aligned, and we export the boundaries via symbols:
//
// [__RO_START, __RO_END)
extern "C" {
// The inclusive start of the read-only area, aka the address of the
// first byte of the area.
static __RO_START: u64;
// The exclusive end of the read-only area, aka the address of
// the first byte _after_ the RO area.
static __RO_END: u64;
}
unsafe {
// Notice the subtraction to turn the exclusive end into an
// inclusive end
RangeInclusive::new(
&__RO_START as *const _ as usize,
&__RO_END as *const _ as usize - 1,
)
}
},
translation: Translation::Identity,
attribute_fields: AttributeFields {
mem_attributes: MemAttributes::CacheableDRAM,
acc_perms: AccessPermissions::ReadOnly,
execute_never: false,
},
},
Descriptor {
name: "Kernel data and BSS",
virtual_range: || {
extern "C" {
static __DATA_START: u64;
static __BSS_END: u64;
}
unsafe {
RangeInclusive::new(
&__DATA_START as *const _ as usize,
&__BSS_END as *const _ as usize - 1,
)
}
},
translation: Translation::Identity,
attribute_fields: AttributeFields {
mem_attributes: MemAttributes::CacheableDRAM,
acc_perms: AccessPermissions::ReadWrite,
execute_never: true,
},
},
Descriptor {
name: "DMA heap pool",
virtual_range: || RangeInclusive::new(map::virt::DMA_HEAP_START, map::virt::DMA_HEAP_END),
translation: Translation::Identity,
attribute_fields: AttributeFields {
mem_attributes: MemAttributes::NonCacheableDRAM,
acc_perms: AccessPermissions::ReadWrite,
execute_never: true,
},
},
Descriptor {
name: "Device MMIO",
virtual_range: || RangeInclusive::new(map::phys::VIDEOMEM_BASE, map::phys::MMIO_END),
translation: Translation::Identity,
attribute_fields: AttributeFields {
mem_attributes: MemAttributes::Device,
acc_perms: AccessPermissions::ReadWrite,
execute_never: true,
},
},
];
/// For a given virtual address, find and return the output address and
/// according attributes.
///
/// If the address is not covered in VIRTUAL_LAYOUT, return a default for normal
/// cacheable DRAM.
pub fn get_virt_addr_properties(
virt_addr: usize,
) -> Result<(usize, AttributeFields), &'static str> {
if virt_addr > map::END {
return Err("Address out of range.");
}
for i in KERNEL_VIRTUAL_LAYOUT.iter() {
if (i.virtual_range)().contains(&virt_addr) {
let output_addr = match i.translation {
Translation::Identity => virt_addr,
Translation::Offset(a) => a + (virt_addr - (i.virtual_range)().start()),
};
return Ok((output_addr, i.attribute_fields));
}
}
Ok((virt_addr, AttributeFields::default()))
}
/// Human-readable output of a Descriptor.
impl fmt::Display for Descriptor {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
// Call the function to which self.virtual_range points, and dereference
// the result, which causes Rust to copy the value.
let start = *(self.virtual_range)().start();
let end = *(self.virtual_range)().end();
let size = end - start + 1;
// log2(1024)
const KIB_RSHIFT: u32 = 10;
// log2(1024 * 1024)
const MIB_RSHIFT: u32 = 20;
let (size, unit) = if (size >> MIB_RSHIFT) > 0 {
(size >> MIB_RSHIFT, "MiB")
} else if (size >> KIB_RSHIFT) > 0 {
(size >> KIB_RSHIFT, "KiB")
} else {
(size, "Byte")
};
let attr = match self.attribute_fields.mem_attributes {
MemAttributes::CacheableDRAM => "C",
MemAttributes::NonCacheableDRAM => "NC",
MemAttributes::Device => "Dev",
};
let acc_p = match self.attribute_fields.acc_perms {
AccessPermissions::ReadOnly => "RO",
AccessPermissions::ReadWrite => "RW",
};
let xn = if self.attribute_fields.execute_never {
"PXN"
} else {
"PX"
};
write!(
f,
" {:#010X} - {:#010X} | {: >3} {} | {: <3} {} {: <3} | {}",
start, end, size, unit, attr, acc_p, xn, self.name
)
}
}
/// Print the kernel memory layout.
pub fn print_layout() {
println!("[i] Kernel memory layout:");
for i in KERNEL_VIRTUAL_LAYOUT.iter() {
println!("{}", i);
}
}
pub const PAGE_SIZE: usize = 65536;
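The `Display` impl above picks a human-readable unit by right-shifting the region size. Extracted as a standalone sketch (the helper name is hypothetical):

```rust
/// Pick the largest unit that yields a non-zero count, by shifting the byte
/// size right by log2(1024) or log2(1024 * 1024), as in Descriptor's Display.
fn human_size(size: usize) -> (usize, &'static str) {
    const KIB_RSHIFT: u32 = 10; // log2(1024)
    const MIB_RSHIFT: u32 = 20; // log2(1024 * 1024)
    if (size >> MIB_RSHIFT) > 0 {
        (size >> MIB_RSHIFT, "MiB")
    } else if (size >> KIB_RSHIFT) > 0 {
        (size >> KIB_RSHIFT, "KiB")
    } else {
        (size, "Byte")
    }
}
```

Note the shift truncates toward zero, so a 1.9 MiB region prints as "1 MiB"; that is the same (intentional) rounding the kernel's layout printer shows.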

View File

@ -5,48 +5,7 @@
//! Implementation of aarch64 kernel functions.
use cortex_a::asm;
mod boot;
#[cfg(feature = "jtag")]
pub mod jtag;
pub mod cpu;
pub mod exception;
pub mod memory;
pub mod traps;
/// Loop forever in sleep mode.
#[inline]
pub fn endless_sleep() -> ! {
loop {
asm::wfe();
}
}
/// Loop for a given number of `nop` instructions.
#[inline]
pub fn loop_delay(rounds: u32) {
for _ in 0..rounds {
asm::nop();
}
}
/// Loop until a passed function returns `true`.
#[inline]
pub fn loop_until<F: Fn() -> bool>(f: F) {
loop {
if f() {
break;
}
asm::nop();
}
}
/// Loop while a passed function returns `true`.
#[inline]
pub fn loop_while<F: Fn() -> bool>(f: F) {
loop {
if !f() {
break;
}
asm::nop();
}
}
pub mod time;
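`loop_until` and `loop_while` above spin on a condition closure with `nop`s in between. A host-runnable usage sketch, with `core::hint::spin_loop()` standing in for `asm::nop()` so it runs on any target (the countdown demo is illustrative, not kernel code):

```rust
use core::cell::Cell;

/// Spin until the condition closure reports true, as in the arch module.
fn loop_until<F: Fn() -> bool>(f: F) {
    loop {
        if f() {
            break;
        }
        core::hint::spin_loop(); // stand-in for asm::nop()
    }
}

/// Drive loop_until with a Cell-based countdown standing in for a hardware
/// status bit; returns the final counter value (expected to reach zero).
fn countdown_demo(start: u32) -> u32 {
    let remaining = Cell::new(start);
    loop_until(|| {
        if remaining.get() == 0 {
            return true;
        }
        remaining.set(remaining.get() - 1);
        false
    });
    remaining.get()
}
```

In the kernel the closure would typically poll an MMIO status register instead of a `Cell`.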

View File

@ -0,0 +1,162 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2018-2022 Andre Richter <andre.o.richter@gmail.com>
//! Architectural timer primitives.
//!
//! # Orientation
//!
//! Since arch modules are imported into generic modules using the path attribute, the path of this
//! file is:
//!
//! crate::time::arch_time
use {
crate::{synchronization, warn},
aarch64_cpu::{asm::barrier, registers::*},
core::{
num::{NonZeroU128, NonZeroU32, NonZeroU64},
ops::{Add, Div},
time::Duration,
},
once_cell::unsync::Lazy,
tock_registers::interfaces::Readable,
};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
const NANOSEC_PER_SEC: NonZeroU64 = NonZeroU64::new(1_000_000_000).unwrap();
#[derive(Copy, Clone, PartialOrd, PartialEq)]
struct GenericTimerCounterValue(u64);
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
// @todo use InitStateLock here
static ARCH_TIMER_COUNTER_FREQUENCY: synchronization::IRQSafeNullLock<Lazy<NonZeroU32>> =
synchronization::IRQSafeNullLock::new(Lazy::new(|| {
NonZeroU32::try_from(CNTFRQ_EL0.get() as u32).unwrap()
}));
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
fn arch_timer_counter_frequency() -> NonZeroU32 {
use crate::synchronization::interface::Mutex;
ARCH_TIMER_COUNTER_FREQUENCY.lock(|inner| **inner)
}
impl GenericTimerCounterValue {
pub const MAX: Self = GenericTimerCounterValue(u64::MAX);
}
impl Add for GenericTimerCounterValue {
type Output = Self;
fn add(self, other: Self) -> Self {
GenericTimerCounterValue(self.0.wrapping_add(other.0))
}
}
impl From<GenericTimerCounterValue> for Duration {
fn from(counter_value: GenericTimerCounterValue) -> Self {
if counter_value.0 == 0 {
return Duration::ZERO;
}
let frequency: NonZeroU64 = arch_timer_counter_frequency().into();
// Div<NonZeroU64> implementation for u64 cannot panic.
let secs = counter_value.0.div(frequency);
// This is safe, because frequency can never be greater than u32::MAX, which means the
// largest theoretical value for sub_second_counter_value is (u32::MAX - 1). Therefore,
// (sub_second_counter_value * NANOSEC_PER_SEC) cannot overflow an u64.
//
// The subsequent division ensures the result fits into u32, since the max result is smaller
// than NANOSEC_PER_SEC. Therefore, just cast it to u32 using `as`.
let sub_second_counter_value = counter_value.0 % frequency;
let nanos = unsafe { sub_second_counter_value.unchecked_mul(u64::from(NANOSEC_PER_SEC)) }
.div(frequency) as u32;
Duration::new(secs, nanos)
}
}
fn max_duration() -> Duration {
Duration::from(GenericTimerCounterValue::MAX)
}
impl TryFrom<Duration> for GenericTimerCounterValue {
type Error = &'static str;
fn try_from(duration: Duration) -> Result<Self, Self::Error> {
if duration < resolution() {
return Ok(GenericTimerCounterValue(0));
}
if duration > max_duration() {
return Err("Conversion error. Duration too big");
}
let frequency: u128 = u32::from(arch_timer_counter_frequency()) as u128;
let duration: u128 = duration.as_nanos();
// This is safe, because frequency can never be greater than u32::MAX, and
// (Duration::MAX.as_nanos() * u32::MAX) < u128::MAX.
let counter_value =
unsafe { duration.unchecked_mul(frequency) }.div(NonZeroU128::from(NANOSEC_PER_SEC));
// Since we checked above that we are <= max_duration(), just cast to u64.
Ok(GenericTimerCounterValue(counter_value as u64))
}
}
#[inline(always)]
fn read_cntpct() -> GenericTimerCounterValue {
// Prevent the counter from being read ahead of time due to out-of-order execution.
barrier::isb(barrier::SY);
let cnt = CNTPCT_EL0.get();
GenericTimerCounterValue(cnt)
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// The timer's resolution.
pub fn resolution() -> Duration {
Duration::from(GenericTimerCounterValue(1))
}
/// The uptime since power-on of the device.
///
/// This includes time consumed by firmware and bootloaders.
pub fn uptime() -> Duration {
read_cntpct().into()
}
/// Spin for a given duration.
pub fn spin_for(duration: Duration) {
let curr_counter_value = read_cntpct();
let counter_value_delta: GenericTimerCounterValue = match duration.try_into() {
Err(msg) => {
warn!("spin_for: {}. Skipping", msg);
return;
}
Ok(val) => val,
};
let counter_value_target = curr_counter_value + counter_value_delta;
// Busy wait.
//
// Read CNTPCT_EL0 directly to avoid the ISB that is part of [`read_cntpct`].
while GenericTimerCounterValue(CNTPCT_EL0.get()) < counter_value_target {}
}
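The `From<GenericTimerCounterValue> for Duration` impl above splits the counter into whole seconds and a sub-second remainder. A host-runnable sketch of the same arithmetic with plain `u64`s (the frequency is a parameter here rather than being read from CNTFRQ_EL0; assuming, as the kernel does, that it fits in a u32):

```rust
use core::time::Duration;

/// Convert a raw timer tick count at freq_hz into a Duration: whole seconds
/// from an integer divide, nanoseconds from the sub-second remainder.
fn counter_to_duration(counter: u64, freq_hz: u64) -> Duration {
    const NANOSEC_PER_SEC: u64 = 1_000_000_000;
    let secs = counter / freq_hz;
    let sub_second = counter % freq_hz;
    // sub_second < freq_hz <= u32::MAX, so this product fits in a u64, and
    // the quotient is below NANOSEC_PER_SEC, so it fits in a u32.
    let nanos = (sub_second * NANOSEC_PER_SEC / freq_hz) as u32;
    Duration::new(secs, nanos)
}
```

At a 62.5 MHz generic-timer frequency, 1.5 seconds of ticks (93,750,000) yields exactly `Duration::new(1, 500_000_000)`.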

View File

@ -1,159 +1,5 @@
/*
* SPDX-License-Identifier: BlueOak-1.0.0
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
*/
//! Interrupt handling
//!
//! The base address is given by VBAR_ELn and each entry has a defined offset from this
//! base address. Each table has 16 entries, with each entry being 128 bytes (32 instructions)
//! in size. The table effectively consists of 4 sets of 4 entries.
//!
//! Minimal implementation to help catch MMU traps.
//! Reads ESR_ELx to understand why the trap was taken.
//!
//! VBAR_EL1, VBAR_EL2, VBAR_EL3
//!
//! CurrentEL with SP0: +0x0
//!
//! * Synchronous
//! * IRQ/vIRQ
//! * FIQ
//! * SError/vSError
//!
//! CurrentEL with SPx: +0x200
//!
//! * Synchronous
//! * IRQ/vIRQ
//! * FIQ
//! * SError/vSError
//!
//! Lower EL using AArch64: +0x400
//!
//! * Synchronous
//! * IRQ/vIRQ
//! * FIQ
//! * SError/vSError
//!
//! Lower EL using AArch32: +0x600
//!
//! * Synchronous
//! * IRQ/vIRQ
//! * FIQ
//! * SError/vSError
//!
//! When the processor takes an exception to AArch64 execution state,
//! all of the PSTATE interrupt masks are set automatically. This means
//! that further exceptions are disabled. If software is to support
//! nested exceptions, for example, to allow a higher-priority interrupt
//! to interrupt the handling of a lower-priority source, then it needs
//! to explicitly re-enable interrupts.
use {
crate::{arch::endless_sleep, println},
cortex_a::{
asm::barrier,
registers::{ESR_EL1, FAR_EL1, VBAR_EL1},
},
snafu::Snafu,
tock_registers::{
interfaces::{Readable, Writeable},
register_bitfields, LocalRegisterCopy,
},
};
core::arch::global_asm!(include_str!("vectors.S"));
/// Errors possibly returned from the traps module.
#[derive(Debug, Snafu)]
pub enum Error {
/// IVT address is unaligned.
#[snafu(display("Unaligned base address for interrupt vector table"))]
Unaligned,
}
/// Configure the base address of the interrupt vector table.
/// Checks that the address is 2 KiB aligned.
///
/// # Safety
///
/// Totally unsafe in the land of the hardware.
pub unsafe fn set_vbar_el1_checked(vec_base_addr: u64) -> Result<(), Error> {
if vec_base_addr.trailing_zeros() < 11 {
return Err(Error::Unaligned);
}
VBAR_EL1.set(vec_base_addr);
// Force VBAR update to complete before next instruction.
barrier::isb(barrier::SY);
Ok(())
}
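The alignment test in `set_vbar_el1_checked` relies on `trailing_zeros`: an address is 2 KiB (2^11 bytes) aligned exactly when its low 11 bits are zero. A standalone, host-runnable sketch of just that predicate (the function name is illustrative):

```rust
/// True iff addr is 2 KiB aligned, i.e. its 11 low bits are zero, which is
/// the alignment VBAR_EL1 requires for the vector table base.
fn is_vbar_aligned(addr: u64) -> bool {
    addr.trailing_zeros() >= 11
}
```

Note `0u64.trailing_zeros()` is 64, so address zero counts as aligned, matching the register-level check above.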
/// A blob of general-purpose registers.
#[repr(C)]
pub struct GPR {
x: [u64; 31],
}
/// Saved exception context.
#[repr(C)]
pub struct ExceptionContext {
// General Purpose Registers
gpr: GPR,
spsr_el1: u64,
elr_el1: u64,
}
/// The default exception, invoked for every exception type unless the handler
/// is overridden.
/// Default pointer is configured in the linker script.
///
/// # Safety
///
/// Totally unsafe in the land of the hardware.
#[no_mangle]
unsafe extern "C" fn default_exception_handler() -> ! {
println!("Unexpected exception. Halting CPU.");
endless_sleep()
}
// To implement an exception handler, override it by defining the respective
// function below.
// Don't forget the #[no_mangle] attribute.
//
/// # Safety
///
/// Totally unsafe in the land of the hardware.
#[no_mangle]
unsafe extern "C" fn current_el0_synchronous(e: &mut ExceptionContext) {
println!("[!] USER synchronous exception happened.");
synchronous_common(e)
}
// unsafe extern "C" fn current_el0_irq(e: &mut ExceptionContext);
// unsafe extern "C" fn current_el0_serror(e: &mut ExceptionContext);
/// # Safety
///
/// Totally unsafe in the land of the hardware.
#[no_mangle]
unsafe extern "C" fn current_elx_synchronous(e: &mut ExceptionContext) {
println!("[!] KERNEL synchronous exception happened.");
synchronous_common(e);
}
// unsafe extern "C" fn current_elx_irq(e: &mut ExceptionContext);
/// # Safety
///
/// Totally unsafe in the land of the hardware.
#[no_mangle]
unsafe extern "C" fn current_elx_serror(e: &mut ExceptionContext) {
println!("[!] KERNEL serror exception happened.");
synchronous_common(e);
endless_sleep()
}
// @todo this file must be moved to exception/mod.rs
// @todo finish porting the exception printing part...
fn cause_to_string(cause: u64) -> &'static str {
if cause == ESR_EL1::EC::DataAbortCurrentEL.read(ESR_EL1::EC) {
@ -301,13 +147,7 @@ fn iss_dfsc_to_string(iss: IssForDataAbort) -> &'static str {
}
}
// unsafe extern "C" fn lower_aarch64_synchronous(e: &mut ExceptionContext);
// unsafe extern "C" fn lower_aarch64_irq(e: &mut ExceptionContext);
// unsafe extern "C" fn lower_aarch64_serror(e: &mut ExceptionContext);
// unsafe extern "C" fn lower_aarch32_synchronous(e: &mut ExceptionContext);
// unsafe extern "C" fn lower_aarch32_irq(e: &mut ExceptionContext);
// unsafe extern "C" fn lower_aarch32_serror(e: &mut ExceptionContext);
type SpsrCopy = LocalRegisterCopy<u64, SPSR_EL1::Register>;
/// Helper function to 1) display current exception, 2) skip the offending asm instruction.
/// Not for production use!
@ -315,7 +155,7 @@ fn synchronous_common(e: &mut ExceptionContext) {
println!(" ESR_EL1: {:#010x} (syndrome)", ESR_EL1.get());
let cause = ESR_EL1.read(ESR_EL1::EC);
println!(
" EC: {:#06b} (cause) -- {}",
" EC: {:#08b} (cause) -- {}",
cause,
cause_to_string(cause)
);
@ -350,22 +190,27 @@ fn synchronous_common(e: &mut ExceptionContext) {
);
println!(" Specific fault: {}", iss_dfsc_to_string(iss));
} else {
println!(" FAR_EL1: {:#016x} (location)", FAR_EL1.get());
println!(" Stack: {:#016x}", e.spsr_el1);
#[rustfmt::skip]
{
println!(" FAR_EL1: {:#016x} (location)", FAR_EL1.get());
println!(" SPSR_EL1: {:#016x} (state)", e.spsr_el1);
let spsr = SpsrCopy::new(e.spsr_el1);
println!(" N: {} (negative condition)", spsr.read(SPSR_EL1::N));
println!(" Z: {} (zero condition)", spsr.read(SPSR_EL1::Z));
println!(" C: {} (carry condition)", spsr.read(SPSR_EL1::C));
println!(" V: {} (overflow condition)", spsr.read(SPSR_EL1::V));
println!(" SS: {} (software step)", spsr.read(SPSR_EL1::SS));
println!(" IL: {} (illegal execution state)", spsr.read(SPSR_EL1::IL));
println!(" D: {} (debug masked)", spsr.read(SPSR_EL1::D));
println!(" A: {} (serror masked)", spsr.read(SPSR_EL1::A));
println!(" I: {} (irq masked)", spsr.read(SPSR_EL1::I));
println!(" F: {} (fiq masked)", spsr.read(SPSR_EL1::F));
println!(" M: {:#06b} (machine state)", spsr.read(SPSR_EL1::M));
}
}
println!(" ELR_EL1: {:#010x}", e.elr_el1);
println!(" ELR_EL1: {:#010x} (return to)", e.elr_el1);
println!(" x00: 0000000000000000 x01: {:016x}", e.gpr.x[0]);
for index in 0..15 {
println!(
" x{:02}: {:016x} x{:02}: {:016x}",
index * 2 + 2,
e.gpr.x[index * 2 + 1],
index * 2 + 3,
e.gpr.x[index * 2 + 2]
);
}
// GPRs
println!(
" Incrementing ELR_EL1 by 4 to continue with the first \
@ -374,6 +219,6 @@ fn synchronous_common(e: &mut ExceptionContext) {
e.elr_el1 += 4;
println!(" ELR_EL1 modified: {:#010x}", e.elr_el1);
println!(" ELR_EL1 modified: {:#010x} (return to)", e.elr_el1);
println!(" Returning from exception...\n");
}


@ -6,5 +6,3 @@
#[cfg(target_arch = "aarch64")]
#[macro_use]
pub mod aarch64;
#[cfg(target_arch = "aarch64")]
pub use self::aarch64::*;

machine/src/console/mod.rs

@ -0,0 +1,113 @@
/*
* SPDX-License-Identifier: BlueOak-1.0.0
*/
#![allow(dead_code)]
pub mod null_console;
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Console interfaces.
pub mod interface {
use {crate::devices::serial::SerialOps, core::fmt};
/// Console write functions.
pub trait Write {
/// Write a Rust format string.
fn write_fmt(&self, args: fmt::Arguments) -> fmt::Result;
}
/// A trait that must be implemented by devices that are candidates for the
/// global console.
#[allow(unused_variables)]
pub trait ConsoleOps: SerialOps {
/// Send a character
fn write_char(&self, c: char) {
let mut bytes = [0u8; 4];
let _ = c.encode_utf8(&mut bytes);
for &b in bytes.iter().take(c.len_utf8()) {
self.write_byte(b);
}
}
/// Display a string
fn write_string(&self, string: &str) {
for c in string.chars() {
// convert newline to carriage return + newline
if c == '\n' {
self.write_char('\r')
}
self.write_char(c);
}
}
/// Receive a character -- FIXME: needs a state machine to read UTF-8 chars!
fn read_char(&self) -> char {
let mut ret = self.read_byte() as char;
// convert carriage return to newline
if ret == '\r' {
ret = '\n'
}
ret
}
}
/// Trait alias for a full-fledged console.
pub trait All: Write + ConsoleOps {}
}
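The default `write_char` implementation above encodes the `char` into its UTF-8 byte sequence and pushes it out one byte at a time. A hosted, standalone sketch of that encoding step (a `Vec<u8>` stands in for the underlying `write_byte`; the names here are illustrative, not part of the kernel API):

```rust
// Sketch of the byte-wise UTF-8 emission used by the default `write_char`
// above. `sink` stands in for the underlying serial `write_byte`.
fn emit_utf8(c: char, sink: &mut Vec<u8>) {
    let mut bytes = [0u8; 4]; // a char is at most 4 bytes in UTF-8
    let _ = c.encode_utf8(&mut bytes);
    for &b in bytes.iter().take(c.len_utf8()) {
        sink.push(b);
    }
}

fn main() {
    let mut out = Vec::new();
    emit_utf8('A', &mut out); // ASCII: one byte
    emit_utf8('é', &mut out); // two bytes
    emit_utf8('€', &mut out); // three bytes
    assert_eq!(out.len(), 1 + 2 + 3);
    assert_eq!(String::from_utf8(out).unwrap(), "Aé€");
    println!("ok");
}
```

Note that `read_char` above has no matching decoding state machine yet, which is exactly what its FIXME points out.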
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
static CONSOLE: InitStateLock<&'static (dyn interface::All + Sync)> =
InitStateLock::new(&null_console::NULL_CONSOLE);
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
use crate::synchronization::{interface::ReadWriteEx, InitStateLock};
/// Register a new console.
pub fn register_console(new_console: &'static (dyn interface::All + Sync)) {
CONSOLE.write(|con| *con = new_console);
}
/// Return a reference to the currently registered console.
///
/// This is the global console used by all printing macros.
pub fn console() -> &'static dyn interface::All {
CONSOLE.read(|con| *con)
}
/// A command prompt.
pub fn command_prompt(buf: &mut [u8]) -> &[u8] {
use interface::ConsoleOps;
console().write_string("\n$> ");
let mut i = 0;
let mut input;
loop {
input = console().read_char();
if input == '\n' {
console().write_char('\n'); // do \r\n output
return &buf[..i];
} else {
if i < buf.len() {
buf[i] = input as u8;
i += 1;
} else {
return &buf[..i];
}
console().write_char(input);
}
}
}
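`command_prompt` accumulates input until a newline and silently stops once the caller's buffer is full. That buffering logic can be sketched on a hosted target with the console replaced by a plain `char` iterator (all names below are illustrative stand-ins, not the kernel's API):

```rust
// Illustrative stand-in for `command_prompt`'s buffering: read chars from
// `input` until '\n', storing at most `buf.len()` bytes.
fn read_line<'a, I: Iterator<Item = char>>(input: &mut I, buf: &'a mut [u8]) -> &'a [u8] {
    let mut i = 0;
    for c in input {
        if c == '\n' {
            break;
        }
        if i < buf.len() {
            buf[i] = c as u8;
            i += 1;
        } else {
            break; // buffer full: return what we have, as the kernel version does
        }
    }
    &buf[..i]
}

fn main() {
    let mut buf = [0u8; 8];
    let mut chars = "ls -l\nrest".chars();
    assert_eq!(read_line(&mut chars, &mut buf), &b"ls -l"[..]);

    // Overlong input is truncated to the buffer size.
    let mut buf2 = [0u8; 4];
    let mut chars2 = "abcdefgh\n".chars();
    assert_eq!(read_line(&mut chars2, &mut buf2), &b"abcd"[..]);
    println!("ok");
}
```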


@ -0,0 +1,48 @@
use crate::{console::interface, devices::serial::SerialOps};
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// A dummy console that just ignores all I/O.
pub struct NullConsole;
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
pub static NULL_CONSOLE: NullConsole = NullConsole {};
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
impl interface::Write for NullConsole {
fn write_fmt(&self, _args: core::fmt::Arguments) -> core::fmt::Result {
Ok(())
}
}
impl interface::ConsoleOps for NullConsole {
fn write_char(&self, _c: char) {}
fn write_string(&self, _string: &str) {}
fn read_char(&self) -> char {
' '
}
}
impl SerialOps for NullConsole {
fn read_byte(&self) -> u8 {
0
}
fn write_byte(&self, _byte: u8) {}
fn flush(&self) {}
fn clear_rx(&self) {}
}
impl interface::All for NullConsole {}

machine/src/cpu/boot.rs

@ -0,0 +1,10 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2021-2022 Andre Richter <andre.o.richter@gmail.com>
//! Boot code.
// Not used, arch/../cpu/boot.rs is used directly to generate boot code.
// #[cfg(target_arch = "aarch64")]
// #[path = "../arch/aarch64/cpu/boot.rs"]
// mod arch_boot;

machine/src/cpu/mod.rs

@ -0,0 +1,48 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2020-2022 Andre Richter <andre.o.richter@gmail.com>
//! Processor code.
#[cfg(target_arch = "aarch64")]
use crate::arch::aarch64::cpu as arch_cpu;
pub mod smp;
//--------------------------------------------------------------------------------------------------
// Architectural Public Reexports
//--------------------------------------------------------------------------------------------------
pub use arch_cpu::{endless_sleep, nop};
// #[cfg(feature = "test_build")]
// pub use arch_cpu::{qemu_exit_failure, qemu_exit_success};
/// Loop for a given number of `nop` instructions.
#[inline]
pub fn loop_delay(rounds: u32) {
for _ in 0..rounds {
nop();
}
}
/// Loop until a passed function returns `true`.
#[inline]
pub fn loop_until<F: Fn() -> bool>(f: F) {
loop {
if f() {
break;
}
nop();
}
}
/// Loop while a passed function returns `true`.
#[inline]
pub fn loop_while<F: Fn() -> bool>(f: F) {
loop {
if !f() {
break;
}
nop();
}
}
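These helpers are thin busy-wait wrappers around `nop`. A hosted sketch, with `nop()` stubbed to a no-op, showing `loop_until` polling a closure until it reports success:

```rust
// Stub for the architectural `nop`; on the real target this is a single
// `nop` instruction.
#[inline]
fn nop() {}

/// Loop until the passed closure returns `true` (mirrors `cpu::loop_until`).
fn loop_until<F: Fn() -> bool>(f: F) {
    loop {
        if f() {
            break;
        }
        nop();
    }
}

fn main() {
    use std::cell::Cell;
    let polls = Cell::new(0);
    // Condition becomes true on the fifth poll.
    loop_until(|| {
        polls.set(polls.get() + 1);
        polls.get() >= 5
    });
    assert_eq!(polls.get(), 5);
    println!("ok");
}
```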

machine/src/cpu/smp.rs

@ -0,0 +1,13 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2018-2022 Andre Richter <andre.o.richter@gmail.com>
//! Symmetric multiprocessing.
#[cfg(target_arch = "aarch64")]
use crate::arch::aarch64::cpu::smp as arch_smp;
//--------------------------------------------------------------------------------------------------
// Architectural Public Reexports
//--------------------------------------------------------------------------------------------------
pub use arch_smp::core_id;


@ -1,8 +1,8 @@
//! JTAG helper functions.
use {
crate::cpu::nop,
core::ptr::{read_volatile, write_volatile},
cortex_a::asm,
};
#[no_mangle]
@ -13,7 +13,7 @@ static mut WAIT_FLAG: bool = true;
/// from inside this function's frame to continue running.
pub fn wait_debugger() {
while unsafe { read_volatile(&WAIT_FLAG) } {
asm::nop();
nop();
}
// Reset the flag so that next jtag::wait_debugger() would block again.
unsafe { write_volatile(&mut WAIT_FLAG, true) }

machine/src/debug/mod.rs

@ -0,0 +1,2 @@
#[cfg(feature = "jtag")]
pub mod jtag;


@ -1,177 +1,190 @@
/*
* SPDX-License-Identifier: BlueOak-1.0.0
*/
#![allow(dead_code)]
use {
crate::{devices::SerialOps, platform},
core::fmt,
};
/// A trait that must be implemented by devices that are candidates for the
/// global console.
#[allow(unused_variables)]
pub trait ConsoleOps: SerialOps {
/// Send a character
fn write_char(&self, c: char);
/// Display a string
fn write_string(&self, string: &str);
/// Receive a character
fn read_char(&self) -> char;
}
/// A dummy console that just ignores its inputs.
pub struct NullConsole;
impl Drop for NullConsole {
fn drop(&mut self) {}
}
impl ConsoleOps for NullConsole {
fn write_char(&self, _c: char) {}
fn write_string(&self, _string: &str) {}
fn read_char(&self) -> char {
' '
}
}
impl SerialOps for NullConsole {
fn read_byte(&self) -> u8 {
0
}
fn write_byte(&self, _byte: u8) {}
fn flush(&self) {}
fn clear_rx(&self) {}
}
/// Possible outputs which the console can store.
pub enum Output {
None(NullConsole),
MiniUart(platform::rpi3::mini_uart::PreparedMiniUart),
Uart(platform::rpi3::pl011_uart::PreparedPL011Uart),
}
/// Generate boilerplate for converting into one of Output enum values
macro output_from($name:ty, $optname:ident) {
impl From<$name> for Output {
fn from(instance: $name) -> Self {
Output::$optname(instance)
}
}
}
output_from!(NullConsole, None);
output_from!(platform::rpi3::mini_uart::PreparedMiniUart, MiniUart);
output_from!(platform::rpi3::pl011_uart::PreparedPL011Uart, Uart);
pub struct Console {
output: Output,
}
impl Default for Console {
fn default() -> Self {
Console {
output: (NullConsole {}).into(),
}
}
}
impl Console {
pub const fn new() -> Console {
Console {
output: Output::None(NullConsole {}),
}
}
fn current_ptr(&self) -> &dyn ConsoleOps {
match &self.output {
Output::None(i) => i,
Output::MiniUart(i) => i,
Output::Uart(i) => i,
}
}
/// Overwrite the current output. The old output will go out of scope and
/// its Drop implementation will be called.
pub fn replace_with(&mut self, x: Output) {
self.current_ptr().flush();
self.output = x;
}
/// A command prompt.
pub fn command_prompt<'a>(&self, buf: &'a mut [u8]) -> &'a [u8] {
self.write_string("\n$> ");
let mut i = 0;
let mut input;
loop {
input = self.read_char();
if input == '\n' {
self.write_char('\n'); // do \r\n output
return &buf[..i];
} else {
if i < buf.len() {
buf[i] = input as u8;
i += 1;
} else {
return &buf[..i];
}
self.write_char(input);
}
}
}
}
impl Drop for Console {
fn drop(&mut self) {}
}
/// Dispatch the respective function to the currently stored output device.
impl ConsoleOps for Console {
fn write_char(&self, c: char) {
self.current_ptr().write_char(c);
}
fn write_string(&self, string: &str) {
self.current_ptr().write_string(string);
}
fn read_char(&self) -> char {
self.current_ptr().read_char()
}
}
impl SerialOps for Console {
fn read_byte(&self) -> u8 {
self.current_ptr().read_byte()
}
fn write_byte(&self, byte: u8) {
self.current_ptr().write_byte(byte)
}
fn flush(&self) {
self.current_ptr().flush()
}
fn clear_rx(&self) {
self.current_ptr().clear_rx()
}
}
/// Implementing this trait enables usage of the format_args! macros, which in
/// turn are used to implement the kernel's print! and println! macros.
///
/// See src/macros.rs.
impl fmt::Write for Console {
fn write_str(&mut self, s: &str) -> fmt::Result {
self.current_ptr().write_string(s);
Ok(())
}
}
// use {
// crate::{
// console::{interface, null_console::NullConsole},
// devices::serial::SerialOps,
// platform::raspberrypi::device_driver::{mini_uart::MiniUart, pl011_uart::PL011Uart},
// synchronization::IRQSafeNullLock,
// },
// core::fmt,
// };
//
// //--------------------------------------------------------------------------------------------------
// // Private Definitions
// //--------------------------------------------------------------------------------------------------
//
// /// The mutex protected part.
// struct ConsoleInner {
// output: Output,
// }
//
// //--------------------------------------------------------------------------------------------------
// // Public Definitions
// //--------------------------------------------------------------------------------------------------
//
// /// The main struct.
// pub struct Console {
// inner: IRQSafeNullLock<ConsoleInner>,
// }
//
// //--------------------------------------------------------------------------------------------------
// // Global instances
// //--------------------------------------------------------------------------------------------------
//
// static CONSOLE: Console = Console::new();
//
// //--------------------------------------------------------------------------------------------------
// // Private Code
// //--------------------------------------------------------------------------------------------------
//
// impl ConsoleInner {
// pub const fn new() -> Self {
// Self {
// output: Output::None(NullConsole {}),
// }
// }
//
// fn current_ptr(&self) -> &dyn interface::ConsoleOps {
// match &self.output {
// Output::None(inner) => inner,
// Output::MiniUart(inner) => inner,
// Output::Uart(inner) => inner,
// }
// }
//
// /// Overwrite the current output. The old output will go out of scope and
// /// its Drop function will be called.
// pub fn replace_with(&mut self, new_output: Output) {
// self.current_ptr().flush(); // crashed here with Data Abort
// // ...with ESR 0x25/0x96000000
// // ...with FAR 0x984f800000028
// // ...with ELR 0x946a8
//
// self.output = new_output;
// }
// }
//
// /// Implementing `core::fmt::Write` enables usage of the `format_args!` macros, which in turn are
// /// used to implement the `kernel`'s `print!` and `println!` macros. By implementing `write_str()`,
// /// we get `write_fmt()` automatically.
// /// See src/macros.rs.
// ///
// /// The function takes an `&mut self`, so it must be implemented for the inner struct.
// impl fmt::Write for ConsoleInner {
// fn write_str(&mut self, s: &str) -> fmt::Result {
// self.current_ptr().write_string(s);
// // for c in s.chars() {
// // // Convert newline to carriage return + newline.
// // if c == '\n' {
// // self.write_char('\r')
// // }
// //
// // self.write_char(c);
// // }
//
// Ok(())
// }
// }
//
// //--------------------------------------------------------------------------------------------------
// // Public Code
// //--------------------------------------------------------------------------------------------------
//
// impl Console {
// /// Create a new instance.
// pub const fn new() -> Console {
// Console {
// inner: IRQSafeNullLock::new(ConsoleInner::new()),
// }
// }
//
// pub fn replace_with(&mut self, new_output: Output) {
// self.inner.lock(|inner| inner.replace_with(new_output));
// }
// }
//
// /// The global console. Output of the kernel print! and println! macros goes here.
// pub fn console() -> &'static dyn crate::console::interface::All {
// &CONSOLE
// }
//
// //------------------------------------------------------------------------------
// // OS Interface Code
// //------------------------------------------------------------------------------
// use crate::synchronization::interface::Mutex;
//
// /// Passthrough of `args` to the `core::fmt::Write` implementation, but guarded by a Mutex to
// /// serialize access.
// impl interface::Write for Console {
// fn write_fmt(&self, args: core::fmt::Arguments) -> fmt::Result {
// self.inner.lock(|inner| fmt::Write::write_fmt(inner, args))
// }
// }
//
// /// Dispatch the respective function to the currently stored output device.
// impl interface::ConsoleOps for Console {
// // @todo implement utf8 serialization here!
// fn write_char(&self, c: char) {
// self.inner.lock(|con| con.current_ptr().write_char(c));
// }
//
// fn write_string(&self, string: &str) {
// self.inner
// .lock(|con| con.current_ptr().write_string(string));
// }
//
// // @todo implement utf8 deserialization here!
// fn read_char(&self) -> char {
// self.inner.lock(|con| con.current_ptr().read_char())
// }
// }
//
// impl SerialOps for Console {
// fn read_byte(&self) -> u8 {
// self.inner.lock(|con| con.current_ptr().read_byte())
// }
// fn write_byte(&self, byte: u8) {
// self.inner.lock(|con| con.current_ptr().write_byte(byte))
// }
// fn flush(&self) {
// self.inner.lock(|con| con.current_ptr().flush())
// }
// fn clear_rx(&self) {
// self.inner.lock(|con| con.current_ptr().clear_rx())
// }
// }
//
// impl interface::All for Console {}
//
// impl Default for Console {
// fn default() -> Self {
// Self::new()
// }
// }
//
// impl Drop for Console {
// fn drop(&mut self) {}
// }
//
// //------------------------------------------------------------------------------
// // Device Interface Code
// //------------------------------------------------------------------------------
//
// /// Possible outputs which the console can store.
// enum Output {
// None(NullConsole),
// MiniUart(MiniUart),
// Uart(PL011Uart),
// }
//
// /// Generate boilerplate for converting into one of Output enum values
// macro make_from($optname:ident, $name:ty) {
// impl From<$name> for Output {
// fn from(instance: $name) -> Self {
// Output::$optname(instance)
// }
// }
// }
//
// make_from!(None, NullConsole);
// make_from!(MiniUart, PreparedMiniUart);
// make_from!(Uart, PreparedPL011Uart);


@ -1,10 +1,6 @@
/*
* SPDX-License-Identifier: BlueOak-1.0.0
*/
pub mod console;
pub mod serial;
pub use {
console::{Console, ConsoleOps},
serial::SerialOps,
};

machine/src/drivers.rs

@ -0,0 +1,211 @@
use crate::{
exception, println,
synchronization::{interface::ReadWriteEx, IRQSafeNullLock, InitStateLock},
};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
const NUM_DRIVERS: usize = 5;
struct DriverManagerInner<T>
where
T: 'static,
{
next_index: usize,
descriptors: [Option<DeviceDriverDescriptor<T>>; NUM_DRIVERS],
}
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
pub mod interface {
pub trait DeviceDriver {
/// Different interrupt controllers might use different types for IRQ number.
type IRQNumberType: core::fmt::Display;
/// Return a compatibility string for identifying the driver.
fn compatible(&self) -> &'static str;
/// Called by the kernel to bring up the device.
/// The default implementation does nothing.
///
/// # Safety
///
/// - During init, drivers might do things with system-wide impact.
unsafe fn init(&self) -> Result<(), &'static str> {
Ok(())
}
/// Called by the kernel to register and enable the device's IRQ handler.
///
/// Rust's type system will prevent a call to this function unless the calling instance
/// itself has static lifetime.
fn register_and_enable_irq_handler(
&'static self,
irq_number: &Self::IRQNumberType,
) -> Result<(), &'static str> {
panic!(
"Attempt to enable IRQ {} for device {}, but driver does not support this",
irq_number,
self.compatible()
)
}
}
}
/// Type to be used as an optional callback after a driver's init() has run.
pub type DeviceDriverPostInitCallback = unsafe fn() -> Result<(), &'static str>;
/// A descriptor for device drivers.
#[derive(Copy, Clone)]
pub struct DeviceDriverDescriptor<T>
where
T: 'static,
{
device_driver: &'static (dyn interface::DeviceDriver<IRQNumberType = T> + Sync),
post_init_callback: Option<DeviceDriverPostInitCallback>,
irq_number: Option<T>,
}
/// Provides device driver management functions.
pub struct DriverManager<T>
where
T: 'static,
{
inner: InitStateLock<DriverManagerInner<T>>,
}
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
static DRIVER_MANAGER: DriverManager<exception::asynchronous::IRQNumber> = DriverManager::new();
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
impl<T> DriverManagerInner<T>
where
T: 'static + Copy,
{
pub const fn new() -> Self {
Self {
next_index: 0,
descriptors: [None; NUM_DRIVERS],
}
}
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// Return a reference to the global DriverManager.
pub fn driver_manager() -> &'static DriverManager<exception::asynchronous::IRQNumber> {
&DRIVER_MANAGER
}
impl<T> DeviceDriverDescriptor<T> {
pub fn new(
device_driver: &'static (dyn interface::DeviceDriver<IRQNumberType = T> + Sync),
post_init_callback: Option<DeviceDriverPostInitCallback>,
irq_number: Option<T>,
) -> Self {
Self {
device_driver,
post_init_callback,
irq_number,
}
}
}
impl<T> DriverManager<T>
where
T: core::fmt::Display + Copy,
{
pub const fn new() -> Self {
Self {
inner: InitStateLock::new(DriverManagerInner::new()),
}
}
/// Register a device driver with the kernel.
pub fn register_driver(&self, descriptor: DeviceDriverDescriptor<T>) {
self.inner.write(|inner| {
assert!(inner.next_index < NUM_DRIVERS);
inner.descriptors[inner.next_index] = Some(descriptor);
inner.next_index += 1;
})
}
/// Helper for iterating over registered drivers.
fn for_each_descriptor(&self, f: impl FnMut(&DeviceDriverDescriptor<T>)) {
self.inner.read(|inner| {
inner
.descriptors
.iter()
.filter_map(|x| x.as_ref())
.for_each(f)
})
}
/// Fully initialize all drivers.
///
/// # Safety
///
/// - During init, drivers might do things with system-wide impact.
pub unsafe fn init_drivers_and_irqs(&self) {
self.for_each_descriptor(|descriptor| {
// 1. Initialize driver.
if let Err(x) = descriptor.device_driver.init() {
panic!(
"Error initializing driver: {}: {}",
descriptor.device_driver.compatible(),
x
);
}
// 2. Call corresponding post init callback.
if let Some(callback) = &descriptor.post_init_callback {
if let Err(x) = callback() {
panic!(
"Error during driver post-init callback: {}: {}",
descriptor.device_driver.compatible(),
x
);
}
}
});
// 3. After all post-init callbacks were done, the interrupt controller should be
// registered and functional. So let drivers register with it now.
self.for_each_descriptor(|descriptor| {
if let Some(irq_number) = &descriptor.irq_number {
if let Err(x) = descriptor
.device_driver
.register_and_enable_irq_handler(irq_number)
{
panic!(
"Error during driver interrupt handler registration: {}: {}",
descriptor.device_driver.compatible(),
x
);
}
}
});
}
/// Enumerate all registered device drivers.
pub fn enumerate(&self) {
let mut i: usize = 1;
self.for_each_descriptor(|descriptor| {
println!(" {}. {}", i, descriptor.device_driver.compatible());
i += 1;
});
}
}
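The registration and enumeration flow can be exercised in miniature. The sketch below is a cut-down, lock-free stand-in for `DriverManager` (no `InitStateLock`, no IRQ plumbing, no post-init callbacks; `Uart` and `"pl011-uart"` are made-up placeholders, not the kernel's types):

```rust
// Cut-down, single-threaded sketch of the driver registration flow above.
trait DeviceDriver {
    fn compatible(&self) -> &'static str;
    fn init(&self) -> Result<(), &'static str> {
        Ok(())
    }
}

struct Uart;
impl DeviceDriver for Uart {
    fn compatible(&self) -> &'static str {
        "pl011-uart"
    }
}

const NUM_DRIVERS: usize = 5;

struct DriverManager {
    next_index: usize,
    descriptors: [Option<&'static dyn DeviceDriver>; NUM_DRIVERS],
}

impl DriverManager {
    fn new() -> Self {
        Self { next_index: 0, descriptors: [None; NUM_DRIVERS] }
    }

    // Store a descriptor in the next free slot; panics when the fixed-size
    // table is full, matching the kernel's assert.
    fn register_driver(&mut self, driver: &'static dyn DeviceDriver) {
        assert!(self.next_index < NUM_DRIVERS, "descriptor table full");
        self.descriptors[self.next_index] = Some(driver);
        self.next_index += 1;
    }

    fn init_drivers(&self) {
        for driver in self.descriptors.iter().flatten() {
            driver.init().expect("driver init failed");
        }
    }

    fn enumerate(&self) -> Vec<&'static str> {
        self.descriptors.iter().flatten().map(|d| d.compatible()).collect()
    }
}

static UART: Uart = Uart;

fn main() {
    let mut manager = DriverManager::new();
    manager.register_driver(&UART);
    manager.init_drivers();
    assert_eq!(manager.enumerate(), vec!["pl011-uart"]);
    println!("ok");
}
```

The real manager wraps the inner state in an `InitStateLock` so registration can happen from a shared `&self` during early boot.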


@ -0,0 +1,181 @@
#[cfg(target_arch = "aarch64")]
use crate::arch::aarch64::exception::asynchronous as arch_asynchronous;
mod null_irq_manager;
//--------------------------------------------------------------------------------------------------
// Architectural Public Reexports
//--------------------------------------------------------------------------------------------------
pub use arch_asynchronous::{
is_local_irq_masked, local_irq_mask, local_irq_mask_save, local_irq_restore, local_irq_unmask,
print_state,
};
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Interrupt number as defined by the BSP.
pub type IRQNumber = crate::platform::exception::asynchronous::IRQNumber;
/// Interrupt descriptor.
#[derive(Copy, Clone)]
pub struct IRQHandlerDescriptor<T>
where
T: Copy,
{
/// The IRQ number.
number: T,
/// Descriptive name.
name: &'static str,
/// Reference to handler trait object.
handler: &'static (dyn interface::IRQHandler + Sync),
}
/// IRQContext token.
///
/// An instance of this type indicates that the local core is currently executing in IRQ
/// context, aka executing an interrupt vector or subcalls of it.
///
/// Concept and implementation derived from the `CriticalSection` introduced in
/// <https://github.com/rust-embedded/bare-metal>
#[derive(Clone, Copy)]
pub struct IRQContext<'irq_context> {
_0: PhantomData<&'irq_context ()>,
}
/// Asynchronous exception handling interfaces.
pub mod interface {
/// Implemented by types that handle IRQs.
pub trait IRQHandler {
/// Called when the corresponding interrupt is asserted.
fn handle(&self) -> Result<(), &'static str>;
}
/// IRQ management functions.
///
/// The `BSP` is supposed to supply one global instance. Typically implemented by the
/// platform's interrupt controller.
pub trait IRQManager {
/// The IRQ number type depends on the implementation.
type IRQNumberType: Copy;
/// Register a handler.
fn register_handler(
&self,
irq_handler_descriptor: super::IRQHandlerDescriptor<Self::IRQNumberType>,
) -> Result<(), &'static str>;
/// Enable an interrupt in the controller.
fn enable(&self, irq_number: &Self::IRQNumberType);
/// Handle pending interrupts.
///
/// This function is called directly from the CPU's IRQ exception vector. On AArch64,
/// this means that the respective CPU core has disabled exception handling.
/// This function can therefore not be preempted and runs start to finish.
///
/// Takes an IRQContext token to ensure it can only be called from IRQ context.
#[allow(clippy::trivially_copy_pass_by_ref)]
fn handle_pending_irqs<'irq_context>(
&'irq_context self,
ic: &super::IRQContext<'irq_context>,
);
/// Print list of registered handlers.
fn print_handler(&self) {}
}
}
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
static IRQ_MANAGER: InitStateLock<
&'static (dyn interface::IRQManager<IRQNumberType = IRQNumber> + Sync),
> = InitStateLock::new(&null_irq_manager::NULL_IRQ_MANAGER);
use core::marker::PhantomData;
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
use crate::synchronization::{interface::ReadWriteEx, InitStateLock};
impl<T> IRQHandlerDescriptor<T>
where
T: Copy,
{
/// Create an instance.
pub const fn new(
number: T,
name: &'static str,
handler: &'static (dyn interface::IRQHandler + Sync),
) -> Self {
Self {
number,
name,
handler,
}
}
/// Return the number.
pub const fn number(&self) -> T {
self.number
}
/// Return the name.
pub const fn name(&self) -> &'static str {
self.name
}
/// Return the handler.
pub const fn handler(&self) -> &'static (dyn interface::IRQHandler + Sync) {
self.handler
}
}
impl<'irq_context> IRQContext<'irq_context> {
/// Creates an IRQContext token.
///
/// # Safety
///
/// - This must only be called when the current core is in an interrupt context and will not
/// live beyond the end of it. That is, creation is allowed in interrupt vector functions. For
/// example, in the ARMv8-A case, in `extern "C" fn current_elx_irq()`.
/// - Note that the lifetime `'irq_context` of the returned instance is unconstrained. User code
/// must not be able to influence the lifetime picked for this type, since that might cause it
/// to be inferred to `'static`.
#[inline(always)]
pub unsafe fn new() -> Self {
IRQContext { _0: PhantomData }
}
}
/// Executes the provided closure while IRQs are masked on the executing core.
///
/// While the function temporarily changes the HW state of the executing core, it restores it to the
/// previous state before returning, so this is deemed safe.
#[inline(always)]
pub fn exec_with_irq_masked<T>(f: impl FnOnce() -> T) -> T {
let saved = local_irq_mask_save();
let ret = f();
local_irq_restore(saved);
ret
}
/// Register a new IRQ manager.
pub fn register_irq_manager(
new_manager: &'static (dyn interface::IRQManager<IRQNumberType = IRQNumber> + Sync),
) {
IRQ_MANAGER.write(|manager| *manager = new_manager);
}
/// Return a reference to the currently registered IRQ manager.
///
/// This is the IRQ manager used by the architectural interrupt handling code.
pub fn irq_manager() -> &'static dyn interface::IRQManager<IRQNumberType = IRQNumber> {
IRQ_MANAGER.read(|manager| *manager)
}
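`exec_with_irq_masked` is the classic save/mask/restore bracket. A hosted sketch that models the core's IRQ-mask bit with a thread-local flag (purely illustrative; the real implementation manipulates the DAIF bits):

```rust
use std::cell::Cell;

thread_local! {
    // Stand-in for the core's IRQ-mask bit (DAIF.I on AArch64).
    static IRQ_MASKED: Cell<bool> = Cell::new(false);
}

fn local_irq_mask_save() -> bool {
    IRQ_MASKED.with(|m| m.replace(true))
}

fn local_irq_restore(saved: bool) {
    IRQ_MASKED.with(|m| m.set(saved));
}

/// Run `f` with IRQs masked, restoring the previous mask state afterwards
/// (mirrors `exec_with_irq_masked` above).
fn exec_with_irq_masked<T>(f: impl FnOnce() -> T) -> T {
    let saved = local_irq_mask_save();
    let ret = f();
    local_irq_restore(saved);
    ret
}

fn main() {
    assert!(!IRQ_MASKED.with(|m| m.get()));
    let answer = exec_with_irq_masked(|| {
        // Inside the closure the mask is set...
        assert!(IRQ_MASKED.with(|m| m.get()));
        42
    });
    // ...and restored on the way out.
    assert!(!IRQ_MASKED.with(|m| m.get()));
    assert_eq!(answer, 42);
    println!("ok");
}
```

Saving and restoring (rather than unconditionally unmasking) keeps nested brackets correct: an inner call does not re-enable IRQs that an outer caller had masked.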


@ -0,0 +1,42 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2022 Andre Richter <andre.o.richter@gmail.com>
//! Null IRQ Manager.
use super::{interface, IRQContext, IRQHandlerDescriptor};
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
pub struct NullIRQManager;
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
pub static NULL_IRQ_MANAGER: NullIRQManager = NullIRQManager {};
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
impl interface::IRQManager for NullIRQManager {
type IRQNumberType = super::IRQNumber;
fn register_handler(
&self,
_descriptor: IRQHandlerDescriptor<Self::IRQNumberType>,
) -> Result<(), &'static str> {
panic!("No IRQ Manager registered yet");
}
fn enable(&self, _irq_number: &Self::IRQNumberType) {
panic!("No IRQ Manager registered yet");
}
fn handle_pending_irqs<'irq_context>(&'irq_context self, _ic: &IRQContext<'irq_context>) {
panic!("No IRQ Manager registered yet");
}
}


@ -0,0 +1,46 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2020-2022 Andre Richter <andre.o.richter@gmail.com>
//! Synchronous and asynchronous exception handling.
#[cfg(target_arch = "aarch64")]
use crate::arch::aarch64::exception as arch_exception;
pub mod asynchronous;
//--------------------------------------------------------------------------------------------------
// Architectural Public Reexports
//--------------------------------------------------------------------------------------------------
pub use arch_exception::{current_privilege_level, handling_init};
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Kernel privilege levels.
#[allow(missing_docs)]
#[derive(Eq, PartialEq)]
pub enum PrivilegeLevel {
User,
Kernel,
Hypervisor,
Unknown,
}
//--------------------------------------------------------------------------------------------------
// Testing
//--------------------------------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
/// libmachine unit tests must execute in kernel mode.
#[test_case]
fn test_runner_executes_in_kernel_mode() {
let (level, _) = current_privilege_level();
assert!(level == PrivilegeLevel::Kernel)
}
}

View File

@ -1,10 +1,28 @@
#![no_std]
#![no_main]
#![feature(decl_macro)]
#![feature(allocator_api)]
#![allow(stable_features)]
#![allow(incomplete_features)]
#![allow(internal_features)]
#![feature(asm_const)]
#![feature(const_option)]
#![feature(core_intrinsics)]
#![feature(format_args_nl)]
#![feature(const_fn_fn_ptr_basics)]
#![feature(generic_const_exprs)]
#![feature(int_roundings)]
#![feature(is_sorted)]
#![feature(linkage)]
#![feature(nonzero_min_max)]
#![feature(panic_info_message)]
#![feature(step_trait)]
#![feature(trait_alias)]
#![feature(unchecked_math)]
#![feature(ptr_internals)]
#![feature(strict_provenance)]
#![feature(stmt_expr_attributes)]
#![feature(slice_ptr_get)]
#![feature(nonnull_slice_from_raw_parts)] // stabilised in 1.71 nightly
#![feature(custom_test_frameworks)]
#![test_runner(crate::tests::test_runner)]
#![reexport_test_harness_main = "test_main"]
@ -13,6 +31,7 @@
#![allow(clippy::nonstandard_macro_braces)] // https://github.com/shepmaster/snafu/issues/296
#![allow(missing_docs)] // Temp: switch to deny
#![deny(warnings)]
#![allow(unused)]
#[cfg(not(target_arch = "aarch64"))]
use architecture_not_supported_sorry;
@ -20,38 +39,82 @@ use architecture_not_supported_sorry;
/// Architecture-specific code.
#[macro_use]
pub mod arch;
pub use arch::*;
pub mod console;
pub mod cpu;
pub mod debug;
pub mod devices;
pub mod drivers;
pub mod exception;
pub mod macros;
pub mod memory;
mod mm;
pub mod panic;
pub mod platform;
pub mod qemu;
mod sync;
pub mod state;
mod synchronization;
pub mod tests;
pub mod time;
pub mod write_to;
/// The global console. Output of the kernel print! and println! macros goes here.
pub static CONSOLE: sync::NullLock<devices::Console> = sync::NullLock::new(devices::Console::new());
/// Version string.
pub fn version() -> &'static str {
concat!(
env!("CARGO_PKG_NAME"),
" version ",
env!("CARGO_PKG_VERSION")
)
}
/// The global allocator for DMA-able memory. That is, memory which is tagged
/// non-cacheable in the page tables.
#[allow(dead_code)]
static DMA_ALLOCATOR: sync::NullLock<mm::BumpAllocator> =
sync::NullLock::new(mm::BumpAllocator::new(
// @todo Init this after we loaded boot memory map
memory::map::virt::DMA_HEAP_START as usize,
memory::map::virt::DMA_HEAP_END as usize,
"Global DMA Allocator",
// Try the following arguments instead to see all mailbox operations
// fail. It will cause the allocator to use memory that is marked
// cacheable and therefore not DMA-safe. The answer from the VideoCore
// won't be received by the CPU because it reads an old cached value
// that resembles an error case instead.
// 0x00600000 as usize,
// 0x007FFFFF as usize,
// "Global Non-DMA Allocator",
));
// A possible future replacement, based on a buddy allocator:
// #[allow(dead_code)]
// static DMA_ALLOCATOR: sync::NullLock<Lazy<BuddyAlloc>> =
// sync::NullLock::new(Lazy::new(|| unsafe {
// BuddyAlloc::new(BuddyAllocParam::new(
// // @todo Init this after we loaded boot memory map
// DMA_HEAP_START as *const u8,
// DMA_HEAP_END - DMA_HEAP_START,
// 64,
// ))
// }));
#[cfg(test)]
mod lib_tests {
use super::*;
#[panic_handler]
fn panicked(info: &core::panic::PanicInfo) -> ! {
panic::handler_for_tests(info)
}
/// Main for running tests.
#[no_mangle]
pub unsafe fn main() -> ! {
exception::handling_init();
let phys_kernel_tables_base_addr = match memory::mmu::kernel_map_binary() {
Err(string) => panic!("Error mapping kernel binary: {}", string),
Ok(addr) => addr,
};
if let Err(e) = memory::mmu::enable_mmu_and_caching(phys_kernel_tables_base_addr) {
panic!("Enabling MMU failed: {}", e);
}
memory::mmu::post_enable_init();
platform::drivers::qemu_bring_up_console();
test_main();
qemu::semihosting::exit_success()
}
}

View File

@ -23,11 +23,8 @@ macro_rules! println {
#[doc(hidden)]
#[cfg(not(any(test, qemu)))]
pub fn _print(args: core::fmt::Arguments) {
use core::fmt::Write;
crate::CONSOLE.lock(|c| {
c.write_fmt(args).unwrap();
})
use {crate::console::console, core::fmt::Write};
console().write_fmt(args).unwrap();
}
/// qemu-based tests use semihosting write0 syscall.
@ -39,3 +36,54 @@ pub fn _print(args: core::fmt::Arguments) {
let mut buf = [0u8; 2048]; // Increase this buffer size to allow dumping larger panic texts.
qemu::semihosting::sys_write0_call(write_to::c_show(&mut buf, args).unwrap());
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// Prints info text, with a newline.
#[macro_export]
macro_rules! info {
($string:expr) => ({
let timestamp = $crate::time::time_manager().uptime();
$crate::macros::_print(format_args_nl!(
concat!("[ {:>3}.{:06}] ", $string),
timestamp.as_secs(),
timestamp.subsec_micros(),
));
});
($format_string:expr, $($arg:tt)*) => ({
let timestamp = $crate::time::time_manager().uptime();
$crate::macros::_print(format_args_nl!(
concat!("[ {:>3}.{:06}] ", $format_string),
timestamp.as_secs(),
timestamp.subsec_micros(),
$($arg)*
));
})
}
/// Prints warning text, with a newline.
#[macro_export]
macro_rules! warn {
($string:expr) => ({
let timestamp = $crate::time::time_manager().uptime();
$crate::macros::_print(format_args_nl!(
concat!("[W {:>3}.{:06}] ", $string),
timestamp.as_secs(),
timestamp.subsec_micros(),
));
});
($format_string:expr, $($arg:tt)*) => ({
let timestamp = $crate::time::time_manager().uptime();
$crate::macros::_print(format_args_nl!(
concat!("[W {:>3}.{:06}] ", $format_string),
timestamp.as_secs(),
timestamp.subsec_micros(),
$($arg)*
));
})
}

View File

@ -0,0 +1,257 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2020-2022 Andre Richter <andre.o.richter@gmail.com>
//! A record of mapped pages.
use {
super::{
types::{AccessPermissions, AttributeFields, MMIODescriptor, MemAttributes, MemoryRegion},
Address, Physical, Virtual,
},
crate::{
info, mm, platform,
synchronization::{self, InitStateLock},
warn,
},
};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
/// Type describing a virtual memory mapping.
#[allow(missing_docs)]
#[derive(Copy, Clone)]
struct MappingRecordEntry {
pub users: [Option<&'static str>; 5],
pub phys_start_addr: Address<Physical>,
pub virt_start_addr: Address<Virtual>,
pub num_pages: usize,
pub attribute_fields: AttributeFields,
}
struct MappingRecord {
inner: [Option<MappingRecordEntry>; 12],
}
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
static KERNEL_MAPPING_RECORD: InitStateLock<MappingRecord> =
InitStateLock::new(MappingRecord::new());
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
impl MappingRecordEntry {
pub fn new(
name: &'static str,
virt_region: &MemoryRegion<Virtual>,
phys_region: &MemoryRegion<Physical>,
attr: &AttributeFields,
) -> Self {
Self {
users: [Some(name), None, None, None, None],
phys_start_addr: phys_region.start_addr(),
virt_start_addr: virt_region.start_addr(),
num_pages: phys_region.num_pages(),
attribute_fields: *attr,
}
}
fn find_next_free_user(&mut self) -> Result<&mut Option<&'static str>, &'static str> {
if let Some(x) = self.users.iter_mut().find(|x| x.is_none()) {
return Ok(x);
};
Err("Storage for user info exhausted")
}
pub fn add_user(&mut self, user: &'static str) -> Result<(), &'static str> {
let x = self.find_next_free_user()?;
*x = Some(user);
Ok(())
}
}
impl MappingRecord {
pub const fn new() -> Self {
Self { inner: [None; 12] }
}
fn size(&self) -> usize {
self.inner.iter().filter(|x| x.is_some()).count()
}
fn sort(&mut self) {
let upper_bound_exclusive = self.size();
let entries = &mut self.inner[0..upper_bound_exclusive];
if !entries.is_sorted_by_key(|item| item.unwrap().virt_start_addr) {
entries.sort_unstable_by_key(|item| item.unwrap().virt_start_addr)
}
}
fn find_next_free(&mut self) -> Result<&mut Option<MappingRecordEntry>, &'static str> {
if let Some(x) = self.inner.iter_mut().find(|x| x.is_none()) {
return Ok(x);
}
Err("Storage for mapping info exhausted")
}
fn find_duplicate(
&mut self,
phys_region: &MemoryRegion<Physical>,
) -> Option<&mut MappingRecordEntry> {
self.inner
.iter_mut()
.filter_map(|x| x.as_mut())
.filter(|x| x.attribute_fields.mem_attributes == MemAttributes::Device)
.find(|x| {
if x.phys_start_addr != phys_region.start_addr() {
return false;
}
if x.num_pages != phys_region.num_pages() {
return false;
}
true
})
}
/// Adds a new mapping to the mapping record.
///
/// # Arguments
///
/// * `name` - The name of the entity that owns the mapping.
/// * `virt_region` - The virtual memory region being mapped.
/// * `phys_region` - The physical memory region being mapped.
/// * `attr` - The memory attributes of the mapping.
///
/// # Returns
///
/// Returns `Ok(())` on success, or a string error message on failure.
pub fn add(
&mut self,
name: &'static str,
virt_region: &MemoryRegion<Virtual>,
phys_region: &MemoryRegion<Physical>,
attr: &AttributeFields,
) -> Result<(), &'static str> {
let x = self.find_next_free()?;
*x = Some(MappingRecordEntry::new(
name,
virt_region,
phys_region,
attr,
));
self.sort();
Ok(())
}
pub fn print(&self) {
info!(" -------------------------------------------------------------------------------------------------------------------------------------------");
info!(
" {:^44} {:^30} {:^7} {:^9} {:^35}",
"Virtual", "Physical", "Size", "Attr", "Entity"
);
info!(" -------------------------------------------------------------------------------------------------------------------------------------------");
for i in self.inner.iter().flatten() {
let size = i.num_pages * platform::memory::mmu::KernelGranule::SIZE;
let virt_start = i.virt_start_addr;
let virt_end_inclusive = virt_start + (size - 1);
let phys_start = i.phys_start_addr;
let phys_end_inclusive = phys_start + (size - 1);
let (size, unit) = mm::size_human_readable_ceil(size);
let attr = match i.attribute_fields.mem_attributes {
MemAttributes::CacheableDRAM => "C",
MemAttributes::NonCacheableDRAM => "NC",
MemAttributes::Device => "Dev",
};
let acc_p = match i.attribute_fields.acc_perms {
AccessPermissions::ReadOnly => "RO",
AccessPermissions::ReadWrite => "RW",
};
let xn = if i.attribute_fields.execute_never {
"XN"
} else {
"X"
};
info!(
" {}..{} --> {}..{} | {:>3} {} | {:<3} {} {:<2} | {}",
virt_start,
virt_end_inclusive,
phys_start,
phys_end_inclusive,
size,
unit,
attr,
acc_p,
xn,
i.users[0].unwrap()
);
for k in i.users[1..].iter() {
if let Some(additional_user) = *k {
info!(
" | {}",
additional_user
);
}
}
}
info!(" -------------------------------------------------------------------------------------------------------------------------------------------");
}
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
use synchronization::interface::ReadWriteEx;
/// Add an entry to the mapping info record.
pub fn kernel_add(
name: &'static str,
virt_region: &MemoryRegion<Virtual>,
phys_region: &MemoryRegion<Physical>,
attr: &AttributeFields,
) -> Result<(), &'static str> {
KERNEL_MAPPING_RECORD.write(|mr| mr.add(name, virt_region, phys_region, attr))
}
pub fn kernel_find_and_insert_mmio_duplicate(
mmio_descriptor: &MMIODescriptor,
new_user: &'static str,
) -> Option<Address<Virtual>> {
let phys_region: MemoryRegion<Physical> = (*mmio_descriptor).into();
KERNEL_MAPPING_RECORD.write(|mr| {
let dup = mr.find_duplicate(&phys_region)?;
if let Err(x) = dup.add_user(new_user) {
warn!("{}", x);
}
Some(dup.virt_start_addr)
})
}
/// Human-readable print of all recorded kernel mappings.
pub fn kernel_print() {
KERNEL_MAPPING_RECORD.read(|mr| mr.print());
}

View File

@ -0,0 +1,311 @@
use {
crate::{
memory::{Address, Physical, Virtual},
platform, println, synchronization, warn,
},
core::{
fmt::{self, Formatter},
num::NonZeroUsize,
ops::RangeInclusive,
},
snafu::Snafu,
};
#[cfg(target_arch = "aarch64")]
use crate::arch::aarch64::memory::mmu as arch_mmu;
mod mapping_record;
mod page_alloc;
pub(crate) mod translation_table;
mod types;
pub use types::*;
//--------------------------------------------------------------------------------------------------
// Architectural Public Reexports
//--------------------------------------------------------------------------------------------------
// pub use arch_mmu::mmu;
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// MMU enable errors variants.
#[allow(missing_docs)]
#[derive(Debug, Snafu)]
pub enum MMUEnableError {
#[snafu(display("MMU is already enabled"))]
AlreadyEnabled,
#[snafu(display("{}", err))]
Other { err: &'static str },
}
/// Memory Management interfaces.
pub mod interface {
use super::*;
/// MMU functions.
pub trait MMU {
/// Turns on the MMU for the first time and enables data and instruction caching.
///
/// # Safety
///
/// - Changes the hardware's global state.
unsafe fn enable_mmu_and_caching(
&self,
phys_tables_base_addr: Address<Physical>,
) -> Result<(), MMUEnableError>;
/// Returns true if the MMU is enabled, false otherwise.
fn is_enabled(&self) -> bool;
/// Prints MMU feature information (debug helper).
fn print_features(&self);
}
}
/// Describes the characteristics of a translation granule.
pub struct TranslationGranule<const GRANULE_SIZE: usize>;
/// Describes properties of an address space.
pub struct AddressSpace<const AS_SIZE: usize>;
/// Intended to be implemented for [`AddressSpace`].
pub trait AssociatedTranslationTable {
/// A translation table whose address range is:
///
/// [AS_SIZE - 1, 0]
type TableStartFromBottom;
}
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
use {
interface::MMU, synchronization::interface::*, translation_table::interface::TranslationTable,
};
/// Query the platform for the reserved virtual addresses for MMIO remapping
/// and initialize the kernel's MMIO VA allocator with it.
fn kernel_init_mmio_va_allocator() {
let region = platform::memory::mmu::virt_mmio_remap_region();
page_alloc::kernel_mmio_va_allocator().lock(|allocator| allocator.init(region));
}
/// Map a region in the kernel's translation tables.
///
/// No input checks done, input is passed through to the architectural implementation.
///
/// # Safety
///
/// - See `map_at()`.
/// - Does not prevent aliasing.
unsafe fn kernel_map_at_unchecked(
name: &'static str,
virt_region: &MemoryRegion<Virtual>,
phys_region: &MemoryRegion<Physical>,
attr: &AttributeFields,
) -> Result<(), &'static str> {
platform::memory::mmu::kernel_translation_tables()
.write(|tables| tables.map_at(virt_region, phys_region, attr))?;
if let Err(x) = mapping_record::kernel_add(name, virt_region, phys_region, attr) {
warn!("{}", x);
}
Ok(())
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
impl<const GRANULE_SIZE: usize> TranslationGranule<GRANULE_SIZE> {
/// The granule's size.
pub const SIZE: usize = Self::size_checked();
/// The granule's mask.
pub const MASK: usize = Self::SIZE - 1;
/// The granule's shift, aka log2(size).
pub const SHIFT: usize = Self::SIZE.trailing_zeros() as usize;
const fn size_checked() -> usize {
assert!(GRANULE_SIZE.is_power_of_two());
GRANULE_SIZE
}
}
impl<const AS_SIZE: usize> AddressSpace<AS_SIZE> {
/// The address space size.
pub const SIZE: usize = Self::size_checked();
/// The address space shift, aka log2(size).
pub const SIZE_SHIFT: usize = Self::SIZE.trailing_zeros() as usize;
const fn size_checked() -> usize {
assert!(AS_SIZE.is_power_of_two());
// Check for architectural restrictions as well.
Self::arch_address_space_size_sanity_checks();
AS_SIZE
}
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// Raw mapping of a virtual to physical region in the kernel translation tables.
///
/// Prevents mapping into the MMIO range of the tables.
///
/// # Safety
///
/// - See `kernel_map_at_unchecked()`.
/// - Does not prevent aliasing. Currently, the callers must be trusted.
pub unsafe fn kernel_map_at(
name: &'static str,
virt_region: &MemoryRegion<Virtual>,
phys_region: &MemoryRegion<Physical>,
attr: &AttributeFields,
) -> Result<(), &'static str> {
if platform::memory::mmu::virt_mmio_remap_region().overlaps(virt_region) {
return Err("Attempt to manually map into MMIO region");
}
kernel_map_at_unchecked(name, virt_region, phys_region, attr)?;
Ok(())
}
/// MMIO remapping in the kernel translation tables.
///
/// Typically used by device drivers.
///
/// # Safety
///
/// - Same as `kernel_map_at_unchecked()`, minus the aliasing part.
pub unsafe fn kernel_map_mmio(
name: &'static str,
mmio_descriptor: &MMIODescriptor,
) -> Result<Address<Virtual>, &'static str> {
let phys_region = MemoryRegion::from(*mmio_descriptor);
let offset_into_start_page = mmio_descriptor.start_addr().offset_into_page();
// Check if an identical region has been mapped for another driver. If so, reuse it.
let virt_addr = if let Some(addr) =
mapping_record::kernel_find_and_insert_mmio_duplicate(mmio_descriptor, name)
{
addr
// Otherwise, allocate a new region and map it.
} else {
let num_pages = match NonZeroUsize::new(phys_region.num_pages()) {
None => return Err("Requested 0 pages"),
Some(x) => x,
};
let virt_region =
page_alloc::kernel_mmio_va_allocator().lock(|allocator| allocator.alloc(num_pages))?;
kernel_map_at_unchecked(
name,
&virt_region,
&phys_region,
&AttributeFields {
mem_attributes: MemAttributes::Device,
acc_perms: AccessPermissions::ReadWrite,
execute_never: true,
},
)?;
virt_region.start_addr()
};
Ok(virt_addr + offset_into_start_page)
}
/// Map the kernel's binary. Returns the translation table's base address.
///
/// # Safety
///
/// - See [`bsp::memory::mmu::kernel_map_binary()`].
pub unsafe fn kernel_map_binary() -> Result<Address<Physical>, &'static str> {
let phys_kernel_tables_base_addr =
platform::memory::mmu::kernel_translation_tables().write(|tables| {
tables.init();
tables.phys_base_address()
});
platform::memory::mmu::kernel_map_binary()?;
Ok(phys_kernel_tables_base_addr)
}
/// Enable the MMU and data + instruction caching.
///
/// # Safety
///
/// - Crucial function during kernel init. Changes the complete memory view of the processor.
#[inline]
pub unsafe fn enable_mmu_and_caching(
phys_tables_base_addr: Address<Physical>,
) -> Result<(), MMUEnableError> {
arch_mmu::mmu().enable_mmu_and_caching(phys_tables_base_addr)
}
/// Finish initialization of the MMU subsystem.
#[inline]
pub fn post_enable_init() {
kernel_init_mmio_va_allocator();
}
/// Human-readable print of all recorded kernel mappings.
#[inline]
pub fn kernel_print_mappings() {
mapping_record::kernel_print()
}
//--------------------------------------------------------------------------------------------------
// Testing
//--------------------------------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use {
super::*,
crate::memory::mmu::types::{
AccessPermissions, AttributeFields, MemAttributes, MemoryRegion, PageAddress,
},
core::num::NonZeroUsize,
};
/// Check that you cannot map into the MMIO VA range from kernel_map_at().
#[test_case]
fn no_manual_mmio_map() {
let phys_start_page_addr: PageAddress<Physical> = PageAddress::from(0);
let phys_end_exclusive_page_addr: PageAddress<Physical> =
phys_start_page_addr.checked_offset(5).unwrap();
let phys_region = MemoryRegion::new(phys_start_page_addr, phys_end_exclusive_page_addr);
let num_pages = NonZeroUsize::new(phys_region.num_pages()).unwrap();
let virt_region = page_alloc::kernel_mmio_va_allocator()
.lock(|allocator| allocator.alloc(num_pages))
.unwrap();
let attr = AttributeFields {
mem_attributes: MemAttributes::CacheableDRAM,
acc_perms: AccessPermissions::ReadWrite,
execute_never: true,
};
unsafe {
assert_eq!(
kernel_map_at("test", &virt_region, &phys_region, &attr),
Err("Attempt to manually map into MMIO region")
)
};
}
}

View File

@ -0,0 +1,72 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2021-2022 Andre Richter <andre.o.richter@gmail.com>
//! Page allocation.
use {
super::MemoryRegion,
crate::{
memory::{AddressType, Virtual},
synchronization::IRQSafeNullLock,
warn,
},
core::num::NonZeroUsize,
};
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// A page allocator that can be lazily initialized.
pub struct PageAllocator<ATYPE: AddressType> {
pool: Option<MemoryRegion<ATYPE>>,
}
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
static KERNEL_MMIO_VA_ALLOCATOR: IRQSafeNullLock<PageAllocator<Virtual>> =
IRQSafeNullLock::new(PageAllocator::new());
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// Return a reference to the kernel's MMIO virtual address allocator.
pub fn kernel_mmio_va_allocator() -> &'static IRQSafeNullLock<PageAllocator<Virtual>> {
&KERNEL_MMIO_VA_ALLOCATOR
}
impl<ATYPE: AddressType> PageAllocator<ATYPE> {
/// Create an instance.
pub const fn new() -> Self {
Self { pool: None }
}
/// Initialize the allocator.
pub fn init(&mut self, pool: MemoryRegion<ATYPE>) {
if self.pool.is_some() {
warn!("Already initialized");
return;
}
self.pool = Some(pool);
}
/// Allocate a number of pages.
pub fn alloc(
&mut self,
num_requested_pages: NonZeroUsize,
) -> Result<MemoryRegion<ATYPE>, &'static str> {
if self.pool.is_none() {
return Err("Allocator not initialized");
}
self.pool
.as_mut()
.unwrap()
.take_first_n_pages(num_requested_pages)
}
}

View File

@ -0,0 +1,96 @@
//! Translation table.
#[cfg(target_arch = "aarch64")]
use crate::arch::aarch64::memory::mmu::translation_table as arch_translation_table;
use {
super::{AttributeFields, MemoryRegion},
crate::memory::{Address, Physical, Virtual},
};
//--------------------------------------------------------------------------------------------------
// Architectural Public Reexports
//--------------------------------------------------------------------------------------------------
#[cfg(target_arch = "aarch64")]
pub use arch_translation_table::FixedSizeTranslationTable;
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Translation table interfaces.
pub mod interface {
use super::*;
/// Translation table operations.
pub trait TranslationTable {
/// Anything that needs to run before any of the other provided functions can be used.
///
/// # Safety
///
/// - Implementor must ensure that this function can run only once or is harmless if invoked
/// multiple times.
fn init(&mut self);
/// The translation table's base address to be used for programming the MMU.
fn phys_base_address(&self) -> Address<Physical>;
/// Map the given virtual memory region to the given physical memory region.
///
/// # Safety
///
/// - Using wrong attributes can cause multiple issues of different nature in the system.
/// - It is not required that the architectural implementation prevents aliasing. That is,
/// mapping to the same physical memory using multiple virtual addresses, which would
/// break Rust's ownership assumptions. This should be protected against in the kernel's
/// generic MMU code.
unsafe fn map_at(
&mut self,
virt_region: &MemoryRegion<Virtual>,
phys_region: &MemoryRegion<Physical>,
attr: &AttributeFields,
) -> Result<(), &'static str>;
}
}
//--------------------------------------------------------------------------------------------------
// Testing
//--------------------------------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use {
super::*,
crate::memory::mmu::{AccessPermissions, MemAttributes, PageAddress},
arch_translation_table::MinSizeTranslationTable,
interface::TranslationTable,
};
/// Sanity checks for the TranslationTable implementation.
#[test_case]
fn translation_table_implementation_sanity() {
// This will occupy a lot of space on the stack.
let mut tables = MinSizeTranslationTable::new();
tables.init();
let virt_start_page_addr: PageAddress<Virtual> = PageAddress::from(0);
let virt_end_exclusive_page_addr: PageAddress<Virtual> =
virt_start_page_addr.checked_offset(5).unwrap();
let phys_start_page_addr: PageAddress<Physical> = PageAddress::from(0);
let phys_end_exclusive_page_addr: PageAddress<Physical> =
phys_start_page_addr.checked_offset(5).unwrap();
let virt_region = MemoryRegion::new(virt_start_page_addr, virt_end_exclusive_page_addr);
let phys_region = MemoryRegion::new(phys_start_page_addr, phys_end_exclusive_page_addr);
let attr = AttributeFields {
mem_attributes: MemAttributes::CacheableDRAM,
acc_perms: AccessPermissions::ReadWrite,
execute_never: true,
};
unsafe { assert_eq!(tables.map_at(&virt_region, &phys_region, &attr), Ok(())) };
}
}

View File

@ -0,0 +1,402 @@
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
use {
crate::{
memory::{Address, AddressType, Physical},
mm,
platform::{self, memory::mmu::KernelGranule},
},
core::{
fmt::{self, Formatter},
iter::Step,
num::NonZeroUsize,
ops::Range,
},
};
/// A wrapper type around [Address] that ensures page alignment.
#[derive(Copy, Clone, Debug, Eq, PartialOrd, PartialEq)]
pub struct PageAddress<ATYPE: AddressType> {
inner: Address<ATYPE>,
}
/// A type that describes a region of memory in quantities of pages.
#[derive(Copy, Clone, Debug, Eq, PartialOrd, PartialEq)]
pub struct MemoryRegion<ATYPE: AddressType> {
start: PageAddress<ATYPE>,
end_exclusive: PageAddress<ATYPE>,
}
/// Architecture agnostic memory attributes.
#[derive(Copy, Clone, Debug, Eq, PartialOrd, PartialEq)]
pub enum MemAttributes {
/// Regular memory
CacheableDRAM,
/// Memory without caching
NonCacheableDRAM,
/// Device memory
Device,
}
/// Architecture agnostic memory region access permissions.
#[derive(Copy, Clone, Debug, Eq, PartialOrd, PartialEq)]
pub enum AccessPermissions {
/// Read-only access
ReadOnly,
/// Read-write access
ReadWrite,
}
/// Summary structure of memory region properties.
#[derive(Copy, Clone, Debug, Eq, PartialOrd, PartialEq)]
pub struct AttributeFields {
/// Attributes
pub mem_attributes: MemAttributes,
/// Permissions
pub acc_perms: AccessPermissions,
/// Disable executable code in this region
pub execute_never: bool,
}
/// An MMIO descriptor for use in device drivers.
#[derive(Copy, Clone)]
pub struct MMIODescriptor {
start_addr: Address<Physical>,
end_addr_exclusive: Address<Physical>,
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
//------------------------------------------------------------------------------
// PageAddress
//------------------------------------------------------------------------------
impl<ATYPE: AddressType> PageAddress<ATYPE> {
/// Unwraps the value.
pub fn into_inner(self) -> Address<ATYPE> {
self.inner
}
/// Calculates the offset from the page address.
///
/// `count` is in units of [PageAddress]. For example, a count of 2 means `result = self + 2 *
/// page_size`.
pub fn checked_offset(self, count: isize) -> Option<Self> {
if count == 0 {
return Some(self);
}
let delta = count.unsigned_abs().checked_mul(KernelGranule::SIZE)?;
let result = if count.is_positive() {
self.inner.as_usize().checked_add(delta)?
} else {
self.inner.as_usize().checked_sub(delta)?
};
Some(Self {
inner: Address::new(result),
})
}
}
impl<ATYPE: AddressType> From<usize> for PageAddress<ATYPE> {
fn from(addr: usize) -> Self {
assert!(
mm::is_aligned(addr, KernelGranule::SIZE),
"Input usize not page aligned"
);
Self {
inner: Address::new(addr),
}
}
}
impl<ATYPE: AddressType> From<Address<ATYPE>> for PageAddress<ATYPE> {
fn from(addr: Address<ATYPE>) -> Self {
assert!(addr.is_page_aligned(), "Input Address not page aligned");
Self { inner: addr }
}
}
impl<ATYPE: AddressType> Step for PageAddress<ATYPE> {
fn steps_between(start: &Self, end: &Self) -> Option<usize> {
if start > end {
return None;
}
// Since start <= end, do unchecked arithmetic.
Some((end.inner.as_usize() - start.inner.as_usize()) >> KernelGranule::SHIFT)
}
fn forward_checked(start: Self, count: usize) -> Option<Self> {
start.checked_offset(count as isize)
}
fn backward_checked(start: Self, count: usize) -> Option<Self> {
start.checked_offset(-(count as isize))
}
}
//------------------------------------------------------------------------------
// MemoryRegion
//------------------------------------------------------------------------------
impl<ATYPE: AddressType> MemoryRegion<ATYPE> {
/// Create an instance.
pub fn new(start: PageAddress<ATYPE>, end_exclusive: PageAddress<ATYPE>) -> Self {
assert!(start <= end_exclusive);
Self {
start,
end_exclusive,
}
}
fn as_range(&self) -> Range<PageAddress<ATYPE>> {
self.into_iter()
}
/// Returns the start page address.
pub fn start_page_addr(&self) -> PageAddress<ATYPE> {
self.start
}
/// Returns the start address.
pub fn start_addr(&self) -> Address<ATYPE> {
self.start.into_inner()
}
/// Returns the exclusive end page address.
pub fn end_exclusive_page_addr(&self) -> PageAddress<ATYPE> {
self.end_exclusive
}
/// Returns the inclusive end page address.
pub fn end_inclusive_page_addr(&self) -> PageAddress<ATYPE> {
self.end_exclusive.checked_offset(-1).unwrap()
}
/// Checks if self contains an address.
pub fn contains(&self, addr: Address<ATYPE>) -> bool {
let page_addr = PageAddress::from(addr.align_down_page());
self.as_range().contains(&page_addr)
}
/// Checks if there is an overlap with another memory region.
pub fn overlaps(&self, other_region: &Self) -> bool {
let self_range = self.as_range();
self_range.contains(&other_region.start_page_addr())
|| self_range.contains(&other_region.end_inclusive_page_addr())
}
/// Returns the number of pages contained in this region.
pub fn num_pages(&self) -> usize {
PageAddress::steps_between(&self.start, &self.end_exclusive).unwrap()
}
/// Returns the size in bytes of this region.
pub fn size(&self) -> usize {
// Invariant: start <= end_exclusive, so do unchecked arithmetic.
let end_exclusive = self.end_exclusive.into_inner().as_usize();
let start = self.start.into_inner().as_usize();
end_exclusive - start
}
/// Splits the MemoryRegion like:
///
///     --------------------------------------------------------------------------------
///     |          |                                                                    |
///     --------------------------------------------------------------------------------
///     ^          ^                                                                    ^
///     |          |                                                                    |
///     left_start left_end_exclusive                                                   |
///                |                                                                    |
///                right_start                                                          right_end_exclusive
///
/// Left region is returned to the caller. Right region is the new region for this struct.
pub fn take_first_n_pages(&mut self, num_pages: NonZeroUsize) -> Result<Self, &'static str> {
let count: usize = num_pages.into();
let left_end_exclusive = self.start.checked_offset(count as isize);
let left_end_exclusive = match left_end_exclusive {
None => return Err("Overflow while calculating left_end_exclusive"),
Some(x) => x,
};
if left_end_exclusive > self.end_exclusive {
return Err("Not enough free pages");
}
let allocation = Self {
start: self.start,
end_exclusive: left_end_exclusive,
};
self.start = left_end_exclusive;
Ok(allocation)
}
}
impl<ATYPE: AddressType> IntoIterator for MemoryRegion<ATYPE> {
type Item = PageAddress<ATYPE>;
type IntoIter = Range<Self::Item>;
fn into_iter(self) -> Self::IntoIter {
Range {
start: self.start,
end: self.end_exclusive,
}
}
}
impl From<MMIODescriptor> for MemoryRegion<Physical> {
fn from(desc: MMIODescriptor) -> Self {
let start = PageAddress::from(desc.start_addr.align_down_page());
let end_exclusive = PageAddress::from(desc.end_addr_exclusive().align_up_page());
Self {
start,
end_exclusive,
}
}
}
//------------------------------------------------------------------------------
// MMIODescriptor
//------------------------------------------------------------------------------
impl MMIODescriptor {
/// Create an instance.
pub const fn new(start_addr: Address<Physical>, size: usize) -> Self {
assert!(size > 0);
let end_addr_exclusive = Address::new(start_addr.as_usize() + size);
Self {
start_addr,
end_addr_exclusive,
}
}
/// Return the start address.
pub const fn start_addr(&self) -> Address<Physical> {
self.start_addr
}
/// Return the exclusive end address.
pub fn end_addr_exclusive(&self) -> Address<Physical> {
self.end_addr_exclusive
}
}
//------------------------------------------------------------------------------
// AttributeFields
//------------------------------------------------------------------------------
impl Default for AttributeFields {
fn default() -> AttributeFields {
AttributeFields {
mem_attributes: MemAttributes::CacheableDRAM,
acc_perms: AccessPermissions::ReadWrite,
execute_never: true,
}
}
}
/// Human-readable output of AttributeFields
impl fmt::Display for AttributeFields {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
let attr = match self.mem_attributes {
MemAttributes::CacheableDRAM => "C",
MemAttributes::NonCacheableDRAM => "NC",
MemAttributes::Device => "Dev",
};
let acc_p = match self.acc_perms {
AccessPermissions::ReadOnly => "RO",
AccessPermissions::ReadWrite => "RW",
};
let xn = if self.execute_never { "PXN" } else { "PX" };
write!(f, "{: <3} {} {: <3}", attr, acc_p, xn)
}
}
//--------------------------------------------------------------------------------------------------
// Testing
//--------------------------------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use {super::*, crate::memory::Virtual};
/// Sanity of [PageAddress] methods.
#[test_case]
fn pageaddress_type_method_sanity() {
let page_addr: PageAddress<Virtual> = PageAddress::from(KernelGranule::SIZE * 2);
assert_eq!(
page_addr.checked_offset(-2),
Some(PageAddress::<Virtual>::from(0))
);
assert_eq!(
page_addr.checked_offset(2),
Some(PageAddress::<Virtual>::from(KernelGranule::SIZE * 4))
);
assert_eq!(
PageAddress::<Virtual>::from(0).checked_offset(0),
Some(PageAddress::<Virtual>::from(0))
);
assert_eq!(PageAddress::<Virtual>::from(0).checked_offset(-1), None);
let max_page_addr = Address::<Virtual>::new(usize::MAX).align_down_page();
assert_eq!(
PageAddress::<Virtual>::from(max_page_addr).checked_offset(1),
None
);
let zero = PageAddress::<Virtual>::from(0);
let three = PageAddress::<Virtual>::from(KernelGranule::SIZE * 3);
assert_eq!(PageAddress::steps_between(&zero, &three), Some(3));
}
/// Sanity of [MemoryRegion] methods.
#[test_case]
fn memoryregion_type_method_sanity() {
let zero = PageAddress::<Virtual>::from(0);
let zero_region = MemoryRegion::new(zero, zero);
assert_eq!(zero_region.num_pages(), 0);
assert_eq!(zero_region.size(), 0);
let one = PageAddress::<Virtual>::from(KernelGranule::SIZE);
let one_region = MemoryRegion::new(zero, one);
assert_eq!(one_region.num_pages(), 1);
assert_eq!(one_region.size(), KernelGranule::SIZE);
let three = PageAddress::<Virtual>::from(KernelGranule::SIZE * 3);
let mut three_region = MemoryRegion::new(zero, three);
assert!(three_region.contains(zero.into_inner()));
assert!(!three_region.contains(three.into_inner()));
assert!(three_region.overlaps(&one_region));
let allocation = three_region
.take_first_n_pages(NonZeroUsize::new(2).unwrap())
.unwrap();
assert_eq!(allocation.num_pages(), 2);
assert_eq!(three_region.num_pages(), 1);
for (i, alloc) in allocation.into_iter().enumerate() {
assert_eq!(alloc.into_inner().as_usize(), i * KernelGranule::SIZE);
}
}
}
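The region split performed by `take_first_n_pages` can be sketched standalone with plain page indices (the `Region` type below is hypothetical, not the crate's `MemoryRegion`):

```rust
// Standalone miniature of MemoryRegion::take_first_n_pages using plain page
// indices instead of typed addresses. `Region` is a hypothetical stand-in.
#[derive(Debug, PartialEq)]
struct Region {
    start: usize,         // first page index
    end_exclusive: usize, // one past the last page index
}

impl Region {
    fn num_pages(&self) -> usize {
        self.end_exclusive - self.start
    }

    // Split off the first `n` pages; the remainder stays in `self`.
    fn take_first_n_pages(&mut self, n: usize) -> Result<Region, &'static str> {
        let left_end = self.start.checked_add(n).ok_or("overflow")?;
        if left_end > self.end_exclusive {
            return Err("not enough free pages");
        }
        let allocation = Region { start: self.start, end_exclusive: left_end };
        self.start = left_end; // right region begins where the allocation ends
        Ok(allocation)
    }
}

fn main() {
    let mut region = Region { start: 0, end_exclusive: 3 };
    let alloc = region.take_first_n_pages(2).unwrap();
    assert_eq!(alloc.num_pages(), 2);
    assert_eq!(region.num_pages(), 1);
    assert!(region.take_first_n_pages(2).is_err()); // only one page left
    println!("ok");
}
```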


@ -0,0 +1,124 @@
//--------------------------------------------------------------------------------------------------
// Laterrrr
//--------------------------------------------------------------------------------------------------
/// Architecture agnostic memory region translation types.
#[allow(dead_code)]
#[derive(Copy, Clone)]
pub enum Translation {
/// One-to-one address mapping
Identity,
/// Mapping with a specified offset
Offset(usize),
}
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Types used for compiling the virtual memory layout of the kernel using address ranges.
///
/// Memory region descriptor.
///
/// Used to construct iterable kernel memory ranges.
pub struct TranslationDescriptor {
/// Name of the region
pub name: &'static str,
/// Virtual memory range
pub virtual_range: fn() -> RangeInclusive<usize>,
/// Mapping translation
pub physical_range_translation: Translation,
/// Attributes
pub attribute_fields: AttributeFields,
}
/// Type for expressing the kernel's virtual memory layout.
pub struct KernelVirtualLayout<const NUM_SPECIAL_RANGES: usize> {
/// The last (inclusive) address of the address space.
max_virt_addr_inclusive: usize,
/// Array of descriptors for non-standard (normal cacheable DRAM) memory regions.
inner: [TranslationDescriptor; NUM_SPECIAL_RANGES],
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// Human-readable output of a Descriptor.
impl fmt::Display for TranslationDescriptor {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
// Call the function to which self.range points, and dereference the
// result, which causes Rust to copy the value.
let start = *(self.virtual_range)().start();
let end = *(self.virtual_range)().end();
let size = end - start + 1;
// log2(1024)
const KIB_SHIFT: u32 = 10;
// log2(1024 * 1024)
const MIB_SHIFT: u32 = 20;
let (size, unit) = if (size >> MIB_SHIFT) > 0 {
(size >> MIB_SHIFT, "MiB")
} else if (size >> KIB_SHIFT) > 0 {
(size >> KIB_SHIFT, "KiB")
} else {
(size, "Byte")
};
write!(
f,
" {:#010x} - {:#010x} | {: >3} {} | {} | {}",
start, end, size, unit, self.attribute_fields, self.name
)
}
}
impl<const NUM_SPECIAL_RANGES: usize> KernelVirtualLayout<{ NUM_SPECIAL_RANGES }> {
/// Create a new instance.
pub const fn new(max: usize, layout: [TranslationDescriptor; NUM_SPECIAL_RANGES]) -> Self {
Self {
max_virt_addr_inclusive: max,
inner: layout,
}
}
/// For a given virtual address, find and return the output address and
/// corresponding attributes.
///
/// If the address is not found in `inner`, return an identity mapped default for normal
/// cacheable DRAM.
pub fn virt_addr_properties(
&self,
virt_addr: usize,
) -> Result<(usize, AttributeFields), &'static str> {
if virt_addr > self.max_virt_addr_inclusive {
return Err("Address out of range");
}
for i in self.inner.iter() {
if (i.virtual_range)().contains(&virt_addr) {
let output_addr = match i.physical_range_translation {
Translation::Identity => virt_addr,
Translation::Offset(a) => a + (virt_addr - (i.virtual_range)().start()),
};
return Ok((output_addr, i.attribute_fields));
}
}
Ok((virt_addr, AttributeFields::default()))
}
/// Print the kernel memory layout.
pub fn print_layout(&self) {
println!("[i] Kernel memory layout:"); //info!
for i in self.inner.iter() {
// for i in KERNEL_VIRTUAL_LAYOUT.iter() {
println!("{}", i); //info!
}
}
}
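The lookup in `virt_addr_properties` can be sketched standalone with ranges held as plain values rather than `fn() -> RangeInclusive` pointers (the `translate` helper below is hypothetical):

```rust
// Standalone sketch of the Identity/Offset translation performed by
// virt_addr_properties. Region bounds here are invented example values.
use std::ops::RangeInclusive;

#[derive(Copy, Clone)]
enum Translation {
    Identity,
    Offset(usize),
}

fn translate(layout: &[(RangeInclusive<usize>, Translation)], virt: usize) -> usize {
    for (range, translation) in layout {
        if range.contains(&virt) {
            return match *translation {
                Translation::Identity => virt,
                // physical = base + offset of `virt` into the region
                Translation::Offset(base) => base + (virt - *range.start()),
            };
        }
    }
    virt // not in any special region: identity-mapped default
}

fn main() {
    let layout = [
        (0x0000..=0x0FFF, Translation::Identity),
        (0x1000..=0x1FFF, Translation::Offset(0x8000_0000)),
    ];
    assert_eq!(translate(&layout, 0x0042), 0x0042);
    assert_eq!(translate(&layout, 0x1010), 0x8000_0010);
    assert_eq!(translate(&layout, 0x9999), 0x9999); // falls through to default
    println!("ok");
}
```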

machine/src/memory/mod.rs

@ -0,0 +1,168 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2018-2022 Andre Richter <andre.o.richter@gmail.com>
//! Memory Management.
use {
crate::{mm, platform},
core::{
fmt,
marker::PhantomData,
ops::{Add, Sub},
},
};
pub mod mmu;
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Metadata trait for marking the type of an address.
pub trait AddressType: Copy + Clone + PartialOrd + PartialEq + Ord + Eq {}
/// Zero-sized type to mark a physical address.
#[derive(Copy, Clone, Debug, PartialOrd, PartialEq, Ord, Eq)]
pub enum Physical {}
/// Zero-sized type to mark a virtual address.
#[derive(Copy, Clone, Debug, PartialOrd, PartialEq, Ord, Eq)]
pub enum Virtual {}
/// Generic address type.
#[derive(Copy, Clone, Debug, PartialOrd, PartialEq, Ord, Eq)]
pub struct Address<ATYPE: AddressType> {
value: usize,
_address_type: PhantomData<fn() -> ATYPE>,
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
impl AddressType for Physical {}
impl AddressType for Virtual {}
impl<ATYPE: AddressType> Address<ATYPE> {
/// Create an instance.
pub const fn new(value: usize) -> Self {
Self {
value,
_address_type: PhantomData,
}
}
/// Convert to usize.
pub const fn as_usize(self) -> usize {
self.value
}
/// Align down to page size.
#[must_use]
pub const fn align_down_page(self) -> Self {
let aligned = mm::align_down(self.value, platform::memory::mmu::KernelGranule::SIZE);
Self::new(aligned)
}
/// Align up to page size.
#[must_use]
pub const fn align_up_page(self) -> Self {
let aligned = mm::align_up(self.value, platform::memory::mmu::KernelGranule::SIZE);
Self::new(aligned)
}
/// Checks if the address is page aligned.
pub const fn is_page_aligned(&self) -> bool {
mm::is_aligned(self.value, platform::memory::mmu::KernelGranule::SIZE)
}
/// Return the address' offset into the corresponding page.
pub const fn offset_into_page(&self) -> usize {
self.value & platform::memory::mmu::KernelGranule::MASK
}
}
impl<ATYPE: AddressType> Add<usize> for Address<ATYPE> {
type Output = Self;
#[inline(always)]
fn add(self, rhs: usize) -> Self::Output {
match self.value.checked_add(rhs) {
None => panic!("Overflow on Address::add"),
Some(x) => Self::new(x),
}
}
}
impl<ATYPE: AddressType> Sub<Address<ATYPE>> for Address<ATYPE> {
type Output = Self;
#[inline(always)]
fn sub(self, rhs: Address<ATYPE>) -> Self::Output {
match self.value.checked_sub(rhs.value) {
None => panic!("Overflow on Address::sub"),
Some(x) => Self::new(x),
}
}
}
impl fmt::Display for Address<Physical> {
// Don't expect to see physical addresses greater than 40 bits.
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let q3: u8 = ((self.value >> 32) & 0xff) as u8;
let q2: u16 = ((self.value >> 16) & 0xffff) as u16;
let q1: u16 = (self.value & 0xffff) as u16;
write!(f, "0x")?;
write!(f, "{:02x}_", q3)?;
write!(f, "{:04x}_", q2)?;
write!(f, "{:04x}", q1)
}
}
impl fmt::Display for Address<Virtual> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
let q4: u16 = ((self.value >> 48) & 0xffff) as u16;
let q3: u16 = ((self.value >> 32) & 0xffff) as u16;
let q2: u16 = ((self.value >> 16) & 0xffff) as u16;
let q1: u16 = (self.value & 0xffff) as u16;
write!(f, "0x")?;
write!(f, "{:04x}_", q4)?;
write!(f, "{:04x}_", q3)?;
write!(f, "{:04x}_", q2)?;
write!(f, "{:04x}", q1)
}
}
//--------------------------------------------------------------------------------------------------
// Testing
//--------------------------------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
/// Sanity of [Address] methods.
#[test_case]
fn address_type_method_sanity() {
let addr = Address::<Virtual>::new(platform::memory::mmu::KernelGranule::SIZE + 100);
assert_eq!(
addr.align_down_page().as_usize(),
platform::memory::mmu::KernelGranule::SIZE
);
assert_eq!(
addr.align_up_page().as_usize(),
platform::memory::mmu::KernelGranule::SIZE * 2
);
assert!(!addr.is_page_aligned());
assert_eq!(addr.offset_into_page(), 100);
}
}
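The underscore-grouped hex output used by `Display for Address<Virtual>` can be checked with a standalone helper (the `format_virt` name is hypothetical):

```rust
// Standalone sketch of the grouped-hex formatting used by
// Display for Address<Virtual>: print a usize as 0xQQQQ_QQQQ_QQQQ_QQQQ,
// one 16-bit quartet per group.
fn format_virt(value: usize) -> String {
    let q4 = (value >> 48) & 0xffff;
    let q3 = (value >> 32) & 0xffff;
    let q2 = (value >> 16) & 0xffff;
    let q1 = value & 0xffff;
    format!("0x{:04x}_{:04x}_{:04x}_{:04x}", q4, q3, q2, q1)
}

fn main() {
    assert_eq!(format_virt(0xFFFF_0000_DEAD_BEEF), "0xffff_0000_dead_beef");
    assert_eq!(format_virt(0x1000), "0x0000_0000_0000_1000");
    println!("ok");
}
```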


@ -3,29 +3,67 @@
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
*/
pub mod bump_allocator;
mod bump_allocator;
pub use bump_allocator::BumpAllocator;
/// Align address downwards.
///
/// Returns the greatest x with alignment `align` so that x <= addr.
/// The alignment must be a power of 2.
pub fn align_down(addr: u64, align: u64) -> u64 {
assert!(align.is_power_of_two(), "`align` must be a power of two");
addr & !(align - 1)
#[inline(always)]
pub const fn align_down(addr: usize, alignment: usize) -> usize {
assert!(
alignment.is_power_of_two(),
"`alignment` must be a power of two"
);
addr & !(alignment - 1)
}
/// Align address upwards.
///
/// Returns the smallest x with alignment `align` so that x >= addr.
/// The alignment must be a power of 2.
pub fn align_up(addr: u64, align: u64) -> u64 {
assert!(align.is_power_of_two(), "`align` must be a power of two");
let align_mask = align - 1;
if addr & align_mask == 0 {
addr // already aligned
#[inline(always)]
pub const fn align_up(value: usize, alignment: usize) -> usize {
assert!(
alignment.is_power_of_two(),
"`alignment` must be a power of two"
);
let align_mask = alignment - 1;
if value & align_mask == 0 {
value // already aligned
} else {
(addr | align_mask) + 1
(value | align_mask) + 1
}
}
/// Check if a value is aligned to a given alignment.
/// The alignment must be a power of 2.
#[inline(always)]
pub const fn is_aligned(value: usize, alignment: usize) -> bool {
assert!(
alignment.is_power_of_two(),
"`alignment` must be a power of two"
);
(value & (alignment - 1)) == 0
}
/// Convert a size into human readable format.
pub const fn size_human_readable_ceil(size: usize) -> (usize, &'static str) {
const KIB: usize = 1024;
const MIB: usize = 1024 * 1024;
const GIB: usize = 1024 * 1024 * 1024;
if (size / GIB) > 0 {
(size.div_ceil(GIB), "GiB")
} else if (size / MIB) > 0 {
(size.div_ceil(MIB), "MiB")
} else if (size / KIB) > 0 {
(size.div_ceil(KIB), "KiB")
} else {
(size, "Byte")
}
}
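The bit tricks behind the alignment helpers above can be exercised in a small standalone program (the functions are copied verbatim; the `div_ceil` check mirrors the rounding used by `size_human_readable_ceil`):

```rust
// Standalone copies of the power-of-two alignment helpers, to make the
// bit manipulation visible: alignment - 1 is a mask of the low bits.
const fn align_down(value: usize, alignment: usize) -> usize {
    assert!(alignment.is_power_of_two());
    value & !(alignment - 1) // clear the low bits
}

const fn align_up(value: usize, alignment: usize) -> usize {
    assert!(alignment.is_power_of_two());
    let align_mask = alignment - 1;
    if value & align_mask == 0 {
        value // already aligned
    } else {
        (value | align_mask) + 1 // fill the low bits, then carry into the next multiple
    }
}

fn main() {
    assert_eq!(align_down(0x1234, 0x1000), 0x1000);
    assert_eq!(align_up(0x1234, 0x1000), 0x2000);
    assert_eq!(align_up(0x1000, 0x1000), 0x1000); // already aligned: unchanged
    // Ceiling division as used by size_human_readable_ceil:
    assert_eq!(1536usize.div_ceil(1024), 2); // 1.5 KiB reports as 2 KiB
    println!("ok");
}
```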


@ -1,12 +1,74 @@
pub fn handler(info: &core::panic::PanicInfo) -> ! {
//! A panic handler for hardware and for QEMU.
use core::panic::PanicInfo;
fn print_panic_info(info: &PanicInfo) {
let (location, line, column) = match info.location() {
Some(loc) => (loc.file(), loc.line(), loc.column()),
_ => ("???", 0, 0),
};
// @todo This may fail to print if the panic message is too long for local print buffer.
crate::println!("{}", info);
crate::endless_sleep()
crate::info!(
"Kernel panic!\n\n\
Panic location:\n File '{}', line {}, column {}\n\n\
{}",
location,
line,
column,
info.message().unwrap_or(&format_args!("")),
);
}
pub fn handler_for_tests(info: &core::panic::PanicInfo) -> ! {
pub fn handler(info: &PanicInfo) -> ! {
crate::exception::asynchronous::local_irq_mask();
// Protect against panic infinite loops if any of the following code panics itself.
panic_prevent_reenter();
print_panic_info(info);
crate::cpu::endless_sleep()
}
/// We have two separate handlers because other crates may use the machine crate as a dependency
/// for running their tests, which means machine could be compiled with different features.
pub fn handler_for_tests(info: &PanicInfo) -> ! {
crate::println!("\n[failed]\n");
// @todo This may fail to print if the panic message is too long for local print buffer.
crate::println!("\nError: {}\n", info);
// Protect against panic infinite loops if any of the following code panics itself.
panic_prevent_reenter();
print_panic_info(info);
crate::qemu::semihosting::exit_failure()
}
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
/// Stop immediately if called a second time.
///
/// # Note
///
/// Using atomics here relieves us from needing to use `unsafe` for the static variable.
///
/// On `AArch64`, which is the only implemented architecture at the time of writing this,
/// [`AtomicBool::load`] and [`AtomicBool::store`] are lowered to ordinary load and store
/// instructions. They are therefore safe to use even with MMU + caching deactivated.
///
/// [`AtomicBool::load`]: core::sync::atomic::AtomicBool::load
/// [`AtomicBool::store`]: core::sync::atomic::AtomicBool::store
fn panic_prevent_reenter() {
use core::sync::atomic::{AtomicBool, Ordering};
#[cfg(not(target_arch = "aarch64"))]
compile_error!("Add the target_arch to above check if the following code is safe to use");
static PANIC_IN_PROGRESS: AtomicBool = AtomicBool::new(false);
if !PANIC_IN_PROGRESS.load(Ordering::Relaxed) {
PANIC_IN_PROGRESS.store(true, Ordering::Relaxed);
return;
}
#[cfg(qemu)]
crate::qemu::semihosting::exit_failure();
#[cfg(not(qemu))]
crate::cpu::endless_sleep()
}
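The re-entry guard above uses a separate load and store, which is fine for a single core during early boot. A sketch of the same idea with `swap`, which makes the check-and-set one atomic step (names below are hypothetical, not the crate's API):

```rust
// Standalone sketch of a panic re-entry guard. swap(true) returns the
// previous value, so the test-and-set cannot be interleaved by another
// core the way a separate load + store could.
use std::sync::atomic::{AtomicBool, Ordering};

static PANIC_IN_PROGRESS: AtomicBool = AtomicBool::new(false);

/// Returns true on the first call, false on every later call.
fn try_enter_panic() -> bool {
    !PANIC_IN_PROGRESS.swap(true, Ordering::Relaxed)
}

fn main() {
    assert!(try_enter_panic());  // first panic proceeds
    assert!(!try_enter_panic()); // re-entry is detected
    assert!(!try_enter_panic()); // and stays detected
    println!("ok");
}
```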


@ -2,51 +2,9 @@
* SPDX-License-Identifier: BlueOak-1.0.0
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
*/
use core::{marker::PhantomData, ops};
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
#[cfg(any(feature = "rpi3", feature = "rpi4"))]
pub mod raspberrypi;
pub mod rpi3;
pub struct MMIODerefWrapper<T> {
base_addr: usize,
phantom: PhantomData<fn() -> T>,
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
impl<T> MMIODerefWrapper<T> {
/// Create an instance.
///
/// # Safety
///
/// Unsafe, duh!
pub const unsafe fn new(start_addr: usize) -> Self {
Self {
base_addr: start_addr,
phantom: PhantomData,
}
}
}
/// Deref to RegisterBlock
///
/// Allows writing
/// ```
/// self.GPPUD.read()
/// ```
/// instead of something along the lines of
/// ```
/// unsafe { (*GPIO::ptr()).GPPUD.read() }
/// ```
impl<T> ops::Deref for MMIODerefWrapper<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
unsafe { &*(self.base_addr as *const _) }
}
}
#[cfg(any(feature = "rpi3", feature = "rpi4"))]
pub use raspberrypi::*;


@ -0,0 +1 @@
pub const BOOT_CORE_ID: u64 = 0;


@ -0,0 +1,147 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2020-2022 Andre Richter <andre.o.richter@gmail.com>
//! GICC Driver - GIC CPU interface.
use {
crate::{
exception,
memory::{Address, Virtual},
platform::device_driver::common::MMIODerefWrapper,
},
tock_registers::{
interfaces::{Readable, Writeable},
register_bitfields, register_structs,
registers::ReadWrite,
},
};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
register_bitfields! {
u32,
/// CPU Interface Control Register
CTLR [
Enable OFFSET(0) NUMBITS(1) []
],
/// Interrupt Priority Mask Register
PMR [
Priority OFFSET(0) NUMBITS(8) []
],
/// Interrupt Acknowledge Register
IAR [
InterruptID OFFSET(0) NUMBITS(10) []
],
/// End of Interrupt Register
EOIR [
EOIINTID OFFSET(0) NUMBITS(10) []
]
}
register_structs! {
#[allow(non_snake_case)]
pub RegisterBlock {
(0x000 => CTLR: ReadWrite<u32, CTLR::Register>),
(0x004 => PMR: ReadWrite<u32, PMR::Register>),
(0x008 => _reserved1),
(0x00C => IAR: ReadWrite<u32, IAR::Register>),
(0x010 => EOIR: ReadWrite<u32, EOIR::Register>),
(0x014 => @END),
}
}
/// Abstraction for the associated MMIO registers.
type Registers = MMIODerefWrapper<RegisterBlock>;
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Representation of the GIC CPU interface.
pub struct GICC {
registers: Registers,
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
impl GICC {
/// Create an instance.
///
/// # Safety
///
/// - The user must ensure to provide a correct MMIO start address.
pub const unsafe fn new(mmio_start_addr: Address<Virtual>) -> Self {
Self {
registers: Registers::new(mmio_start_addr),
}
}
/// Accept interrupts of any priority.
///
/// Quoting the GICv2 Architecture Specification:
///
/// "Writing 255 to the GICC_PMR always sets it to the largest supported priority field
/// value."
///
/// # Safety
///
/// - GICC MMIO registers are banked per CPU core. It is therefore safe to have `&self` instead
/// of `&mut self`.
pub fn priority_accept_all(&self) {
self.registers.PMR.write(PMR::Priority.val(255)); // Comment in arch spec.
}
/// Enable the interface - start accepting IRQs.
///
/// # Safety
///
/// - GICC MMIO registers are banked per CPU core. It is therefore safe to have `&self` instead
/// of `&mut self`.
pub fn enable(&self) {
self.registers.CTLR.write(CTLR::Enable::SET);
}
/// Extract the number of the highest-priority pending IRQ.
///
/// Can only be called from IRQ context, which is ensured by taking an `IRQContext` token.
///
/// # Safety
///
/// - GICC MMIO registers are banked per CPU core. It is therefore safe to have `&self` instead
/// of `&mut self`.
#[allow(clippy::trivially_copy_pass_by_ref)]
pub fn pending_irq_number<'irq_context>(
&self,
_ic: &exception::asynchronous::IRQContext<'irq_context>,
) -> usize {
self.registers.IAR.read(IAR::InterruptID) as usize
}
/// Complete handling of the currently active IRQ.
///
/// Can only be called from IRQ context, which is ensured by taking an `IRQContext` token.
///
/// To be called after `pending_irq_number()`.
///
/// # Safety
///
/// - GICC MMIO registers are banked per CPU core. It is therefore safe to have `&self` instead
/// of `&mut self`.
#[allow(clippy::trivially_copy_pass_by_ref)]
    pub fn mark_completed<'irq_context>(
&self,
irq_number: u32,
_ic: &exception::asynchronous::IRQContext<'irq_context>,
) {
self.registers.EOIR.write(EOIR::EOIINTID.val(irq_number));
}
}


@ -0,0 +1,203 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2020-2022 Andre Richter <andre.o.richter@gmail.com>
//! GICD Driver - GIC Distributor.
//!
//! # Glossary
//! - SPI - Shared Peripheral Interrupt.
use {
crate::{
memory::{Address, Virtual},
platform::device_driver::common::MMIODerefWrapper,
state,
synchronization::{self, IRQSafeNullLock},
},
tock_registers::{
interfaces::{Readable, Writeable},
register_bitfields, register_structs,
registers::{ReadOnly, ReadWrite},
},
};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
register_bitfields! {
u32,
/// Distributor Control Register
CTLR [
Enable OFFSET(0) NUMBITS(1) []
],
/// Interrupt Controller Type Register
TYPER [
ITLinesNumber OFFSET(0) NUMBITS(5) []
],
/// Interrupt Processor Targets Registers
ITARGETSR [
Offset3 OFFSET(24) NUMBITS(8) [],
Offset2 OFFSET(16) NUMBITS(8) [],
Offset1 OFFSET(8) NUMBITS(8) [],
Offset0 OFFSET(0) NUMBITS(8) []
]
}
register_structs! {
#[allow(non_snake_case)]
SharedRegisterBlock {
(0x000 => CTLR: ReadWrite<u32, CTLR::Register>),
(0x004 => TYPER: ReadOnly<u32, TYPER::Register>),
(0x008 => _reserved1),
(0x104 => ISENABLER: [ReadWrite<u32>; 31]),
(0x180 => _reserved2),
(0x820 => ITARGETSR: [ReadWrite<u32, ITARGETSR::Register>; 248]),
(0xC00 => @END),
}
}
register_structs! {
#[allow(non_snake_case)]
BankedRegisterBlock {
(0x000 => _reserved1),
(0x100 => ISENABLER: ReadWrite<u32>),
(0x104 => _reserved2),
(0x800 => ITARGETSR: [ReadOnly<u32, ITARGETSR::Register>; 8]),
(0x820 => @END),
}
}
/// Abstraction for the non-banked parts of the associated MMIO registers.
type SharedRegisters = MMIODerefWrapper<SharedRegisterBlock>;
/// Abstraction for the banked parts of the associated MMIO registers.
type BankedRegisters = MMIODerefWrapper<BankedRegisterBlock>;
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Representation of the GIC Distributor.
pub struct GICD {
/// Access to shared registers is guarded with a lock.
shared_registers: IRQSafeNullLock<SharedRegisters>,
/// Access to banked registers is unguarded.
banked_registers: BankedRegisters,
}
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
impl SharedRegisters {
/// Return the number of IRQs that this HW implements.
#[inline(always)]
fn num_irqs(&mut self) -> usize {
// Query number of implemented IRQs.
//
// Refer to GICv2 Architecture Specification, Section 4.3.2.
((self.TYPER.read(TYPER::ITLinesNumber) as usize) + 1) * 32
}
/// Return a slice of the implemented ITARGETSR.
#[inline(always)]
fn implemented_itargets_slice(&mut self) -> &[ReadWrite<u32, ITARGETSR::Register>] {
assert!(self.num_irqs() >= 36);
// Calculate the max index of the shared ITARGETSR array.
//
// The first 32 IRQs are private, so not included in `shared_registers`. Each ITARGETS
// register has four entries, so shift right by two. Subtract one because we start
// counting at zero.
let spi_itargetsr_max_index = ((self.num_irqs() - 32) >> 2) - 1;
// Rust automatically inserts slice range sanity check, i.e. max >= min.
&self.ITARGETSR[0..spi_itargetsr_max_index]
}
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
use synchronization::interface::Mutex;
impl GICD {
/// Create an instance.
///
/// # Safety
///
/// - The user must ensure to provide a correct MMIO start address.
pub const unsafe fn new(mmio_start_addr: Address<Virtual>) -> Self {
Self {
shared_registers: IRQSafeNullLock::new(SharedRegisters::new(mmio_start_addr)),
banked_registers: BankedRegisters::new(mmio_start_addr),
}
}
/// Use a banked ITARGETSR to retrieve the executing core's GIC target mask.
///
/// Quoting the GICv2 Architecture Specification:
///
/// "GICD_ITARGETSR0 to GICD_ITARGETSR7 are read-only, and each field returns a value that
/// corresponds only to the processor reading the register."
fn local_gic_target_mask(&self) -> u32 {
self.banked_registers.ITARGETSR[0].read(ITARGETSR::Offset0)
}
/// Route all SPIs to the boot core and enable the distributor.
pub fn boot_core_init(&self) {
assert!(
state::state_manager().is_init(),
"Only allowed during kernel init phase"
);
// Target all SPIs to the boot core only.
let mask = self.local_gic_target_mask();
self.shared_registers.lock(|regs| {
for i in regs.implemented_itargets_slice().iter() {
i.write(
ITARGETSR::Offset3.val(mask)
+ ITARGETSR::Offset2.val(mask)
+ ITARGETSR::Offset1.val(mask)
+ ITARGETSR::Offset0.val(mask),
);
}
regs.CTLR.write(CTLR::Enable::SET);
});
}
/// Enable an interrupt.
pub fn enable(&self, irq_num: &super::IRQNumber) {
let irq_num = irq_num.get();
// Each bit in the u32 enable register corresponds to one IRQ number. Shift right by 5
// (division by 32) and arrive at the index for the respective ISENABLER[i].
let enable_reg_index = irq_num >> 5;
let enable_bit: u32 = 1u32 << (irq_num % 32);
// Check if we are handling a private or shared IRQ.
match irq_num {
// Private.
0..=31 => {
let enable_reg = &self.banked_registers.ISENABLER;
enable_reg.set(enable_reg.get() | enable_bit);
}
// Shared.
_ => {
let enable_reg_index_shared = enable_reg_index - 1;
self.shared_registers.lock(|regs| {
let enable_reg = &regs.ISENABLER[enable_reg_index_shared];
enable_reg.set(enable_reg.get() | enable_bit);
});
}
}
}
}
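The register-index/bit math in `enable` can be checked standalone (the `isenabler_slot` helper is hypothetical, but the arithmetic matches the driver above):

```rust
// Standalone sketch of the GICD ISENABLER addressing used by enable():
// each 32-bit ISENABLER register covers 32 consecutive IRQ numbers.
fn isenabler_slot(irq_num: usize) -> (usize, u32) {
    let reg_index = irq_num >> 5;     // irq_num / 32 selects the register
    let bit = 1u32 << (irq_num % 32); // bit position within that register
    (reg_index, bit)
}

fn main() {
    assert_eq!(isenabler_slot(0), (0, 1));            // first private IRQ
    assert_eq!(isenabler_slot(31), (0, 0x8000_0000)); // last private IRQ
    assert_eq!(isenabler_slot(32), (1, 1));           // first shared IRQ (SPI)
    assert_eq!(isenabler_slot(33), (1, 2));
    println!("ok");
}
```

Index 0 is the banked private register; the driver subtracts one from the index for shared IRQs because the first 32 IRQs live in the banked block, not in `SharedRegisterBlock::ISENABLER`.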


@ -0,0 +1,230 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2020-2022 Andre Richter <andre.o.richter@gmail.com>
//! GICv2 Driver - ARM Generic Interrupt Controller v2.
//!
//! The following is a collection of excerpts with useful information from
//! - `Programmer's Guide for ARMv8-A`
//! - `ARM Generic Interrupt Controller Architecture Specification`
//!
//! # Programmer's Guide - 10.6.1 Configuration
//!
//! The GIC is accessed as a memory-mapped peripheral.
//!
//! All cores can access the common Distributor, but the CPU interface is banked, that is, each core
//! uses the same address to access its own private CPU interface.
//!
//! It is not possible for a core to access the CPU interface of another core.
//!
//! # Programmer's Guide - 10.6.2 Initialization
//!
//! Both the Distributor and the CPU interfaces are disabled at reset. The GIC must be initialized
//! after reset before it can deliver interrupts to the core.
//!
//! In the Distributor, software must configure the priority, target, security and enable individual
//! interrupts. The Distributor must subsequently be enabled through its control register
//! (GICD_CTLR). For each CPU interface, software must program the priority mask and preemption
//! settings.
//!
//! Each CPU interface block itself must be enabled through its control register (GICC_CTLR). This
//! prepares the GIC to deliver interrupts to the core.
//!
//! Before interrupts are expected in the core, software prepares the core to take interrupts by
//! setting a valid interrupt vector in the vector table, clearing interrupt mask bits in PSTATE,
//! and setting the routing controls.
//!
//! The entire interrupt mechanism in the system can be disabled by disabling the Distributor.
//! Interrupt delivery to an individual core can be disabled by disabling its CPU interface.
//! Individual interrupts can also be disabled (or enabled) in the distributor.
//!
//! For an interrupt to reach the core, the individual interrupt, Distributor and CPU interface must
//! all be enabled. The interrupt also needs to be of sufficient priority, that is, higher than the
//! core's priority mask.
//!
//! # Architecture Specification - 1.4.2 Interrupt types
//!
//! - Peripheral interrupt
//! - Private Peripheral Interrupt (PPI)
//! - This is a peripheral interrupt that is specific to a single processor.
//! - Shared Peripheral Interrupt (SPI)
//! - This is a peripheral interrupt that the Distributor can route to any of a specified
//! combination of processors.
//!
//! - Software-generated interrupt (SGI)
//! - This is an interrupt generated by software writing to a GICD_SGIR register in the GIC. The
//! system uses SGIs for interprocessor communication.
//! - An SGI has edge-triggered properties. The software triggering of the interrupt is
//! equivalent to the edge transition of the interrupt request signal.
//! - When an SGI occurs in a multiprocessor implementation, the CPUID field in the Interrupt
//! Acknowledge Register, GICC_IAR, or the Aliased Interrupt Acknowledge Register, GICC_AIAR,
//! identifies the processor that requested the interrupt.
//!
//! # Architecture Specification - 2.2.1 Interrupt IDs
//!
//! Interrupts from sources are identified using ID numbers. Each CPU interface can see up to 1020
//! interrupts. The banking of SPIs and PPIs increases the total number of interrupts supported by
//! the Distributor.
//!
//! The GIC assigns interrupt ID numbers ID0-ID1019 as follows:
//! - Interrupt numbers 32..1019 are used for SPIs.
//! - Interrupt numbers 0..31 are used for interrupts that are private to a CPU interface. These
//! interrupts are banked in the Distributor.
//! - A banked interrupt is one where the Distributor can have multiple interrupts with the
//! same ID. A banked interrupt is identified uniquely by its ID number and its associated
//! CPU interface number. Of the banked interrupt IDs:
//! - 00..15 SGIs
//! - 16..31 PPIs
mod gicc;
mod gicd;
use crate::{
cpu, drivers, exception,
memory::{Address, Virtual},
platform::{self, cpu::BOOT_CORE_ID, device_driver::common::BoundedUsize},
synchronization::{self, InitStateLock},
};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
type HandlerTable = [Option<exception::asynchronous::IRQHandlerDescriptor<IRQNumber>>;
IRQNumber::MAX_INCLUSIVE + 1];
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Used for the associated type of trait [`exception::asynchronous::interface::IRQManager`].
pub type IRQNumber = BoundedUsize<{ GICv2::MAX_IRQ_NUMBER }>;
/// Representation of the GIC.
pub struct GICv2 {
/// The Distributor.
gicd: gicd::GICD,
/// The CPU Interface.
gicc: gicc::GICC,
/// Stores registered IRQ handlers. Writable only during kernel init. RO afterwards.
handler_table: InitStateLock<HandlerTable>,
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
impl GICv2 {
const MAX_IRQ_NUMBER: usize = 300; // Normally 1019, but keep it lower to save some space.
pub const COMPATIBLE: &'static str = "GICv2 (ARM Generic Interrupt Controller v2)";
/// Create an instance.
///
/// # Safety
///
/// - The user must ensure to provide a correct MMIO start address.
pub const unsafe fn new(
gicd_mmio_start_addr: Address<Virtual>,
gicc_mmio_start_addr: Address<Virtual>,
) -> Self {
Self {
gicd: gicd::GICD::new(gicd_mmio_start_addr),
gicc: gicc::GICC::new(gicc_mmio_start_addr),
handler_table: InitStateLock::new([None; IRQNumber::MAX_INCLUSIVE + 1]),
}
}
}
//------------------------------------------------------------------------------
// OS Interface Code
//------------------------------------------------------------------------------
use synchronization::interface::ReadWriteEx;
impl drivers::interface::DeviceDriver for GICv2 {
type IRQNumberType = IRQNumber;
fn compatible(&self) -> &'static str {
Self::COMPATIBLE
}
unsafe fn init(&self) -> Result<(), &'static str> {
if BOOT_CORE_ID == cpu::smp::core_id() {
self.gicd.boot_core_init();
}
self.gicc.priority_accept_all();
self.gicc.enable();
Ok(())
}
}
impl exception::asynchronous::interface::IRQManager for GICv2 {
type IRQNumberType = IRQNumber;
fn register_handler(
&self,
irq_handler_descriptor: exception::asynchronous::IRQHandlerDescriptor<Self::IRQNumberType>,
) -> Result<(), &'static str> {
self.handler_table.write(|table| {
let irq_number = irq_handler_descriptor.number().get();
if table[irq_number].is_some() {
return Err("IRQ handler already registered");
}
table[irq_number] = Some(irq_handler_descriptor);
Ok(())
})
}
fn enable(&self, irq_number: &Self::IRQNumberType) {
self.gicd.enable(irq_number);
}
fn handle_pending_irqs<'irq_context>(
&'irq_context self,
ic: &exception::asynchronous::IRQContext<'irq_context>,
) {
// Extract the highest priority pending IRQ number from the Interrupt Acknowledge Register
// (IAR).
let irq_number = self.gicc.pending_irq_number(ic);
// Guard against spurious interrupts.
if irq_number > GICv2::MAX_IRQ_NUMBER {
return;
}
// Call the IRQ handler. Panic if there is none.
self.handler_table.read(|table| {
match table[irq_number] {
None => panic!("No handler registered for IRQ {}", irq_number),
Some(descriptor) => {
// Call the IRQ handler. Panics on failure.
descriptor.handler().handle().expect("Error handling IRQ");
}
}
});
// Signal completion of handling.
self.gicc.mark_completed(irq_number as u32, ic);
}
fn print_handler(&self) {
use crate::info;
info!(" Peripheral handler:");
self.handler_table.read(|table| {
for (i, opt) in table.iter().skip(32).enumerate() {
if let Some(handler) = opt {
info!(" {: >3}. {}", i + 32, handler.name());
}
}
});
}
}
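The register-once semantics of `register_handler` above can be shown in isolation. A minimal sketch with plain std types standing in for the kernel's `InitStateLock` and `IRQHandlerDescriptor` (all names here are illustrative):

```rust
// Sketch of a register-once IRQ handler table: a slot can be claimed exactly
// once; a second registration for the same IRQ number is rejected.
struct HandlerTable {
    table: Vec<Option<&'static str>>, // handler names stand in for descriptors
}

impl HandlerTable {
    fn register(&mut self, irq: usize, name: &'static str) -> Result<(), &'static str> {
        if self.table[irq].is_some() {
            return Err("IRQ handler already registered");
        }
        self.table[irq] = Some(name);
        Ok(())
    }
}

fn main() {
    let mut t = HandlerTable { table: vec![None; 4] };
    assert!(t.register(1, "timer").is_ok());
    assert_eq!(t.register(1, "uart"), Err("IRQ handler already registered"));
}
```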


@ -0,0 +1,9 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2020-2022 Andre Richter <andre.o.richter@gmail.com>
//! ARM driver top level.
pub mod gicv2;
pub use gicv2::*;


@ -0,0 +1,440 @@
/*
* SPDX-License-Identifier: MIT OR BlueOak-1.0.0
* Copyright (c) 2018-2019 Andre Richter <andre.o.richter@gmail.com>
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
* Original code distributed under MIT, additional changes are under BlueOak-1.0.0
*/
use {
crate::{
memory::{Address, Virtual},
platform::{
device_driver::{common::MMIODerefWrapper, IRQNumber},
BcmHost,
},
synchronization::{interface::Mutex, IRQSafeNullLock},
time,
},
core::{marker::PhantomData, time::Duration},
tock_registers::{
fields::FieldValue,
interfaces::{ReadWriteable, Readable, Writeable},
register_structs,
registers::{ReadOnly, ReadWrite, WriteOnly},
},
};
// Descriptions taken from
// https://github.com/raspberrypi/documentation/files/1888662/BCM2837-ARM-Peripherals.-.Revised.-.V2-1.pdf
/// Generates `pub enums` with no variants for each `ident` passed in.
macro states($($name:ident),*) {
$(pub enum $name { })*
}
// Possible states for a GPIO pin.
states! {
Uninitialized, Input, Output, Alt
}
#[cfg(feature = "rpi3")]
register_structs! {
/// The offsets for each register.
/// From <https://wiki.osdev.org/Raspberry_Pi_Bare_Bones> and
/// <https://github.com/raspberrypi/documentation/files/1888662/BCM2837-ARM-Peripherals.-.Revised.-.V2-1.pdf>
#[allow(non_snake_case)]
RegisterBlock {
(0x00 => pub FunctionSelect: [ReadWrite<u32>; 6]), // function select
(0x18 => __reserved_1),
(0x1c => pub SetPin: [WriteOnly<u32>; 2]), // set output pin
(0x24 => __reserved_2),
(0x28 => pub ClearPin: [WriteOnly<u32>; 2]), // clear output pin
(0x30 => __reserved_3),
(0x34 => pub PinLevel: [ReadOnly<u32>; 2]), // get input pin level
(0x3c => __reserved_4),
// Everything below is unused atm!
// (0x40 => pub EDS: [ReadWrite<u32>; 2]),
// (0x48 => __reserved_5),
// (0x4c => pub REN: [ReadWrite<u32>; 2]),
// (0x54 => __reserved_6),
// (0x58 => pub FEN: [ReadWrite<u32>; 2]),
// (0x60 => __reserved_7),
// (0x64 => pub HEN: [ReadWrite<u32>; 2]),
// (0x6c => __reserved_8),
// (0x70 => pub LEN: [ReadWrite<u32>; 2]),
// (0x78 => __reserved_9),
// (0x7c => pub AREN: [ReadWrite<u32>; 2]),
// (0x84 => __reserved_10),
// (0x88 => pub AFEN: [ReadWrite<u32>; 2]),
// (0x90 => __reserved_11),
(0x94 => pub PullUpDown: ReadWrite<u32>),
(0x98 => pub PullUpDownEnableClock: [ReadWrite<u32>; 2]),
(0xa0 => @END),
}
}
#[cfg(feature = "rpi4")]
register_structs! {
/// The offsets for each register.
/// From <https://wiki.osdev.org/Raspberry_Pi_Bare_Bones> and
/// <https://github.com/raspberrypi/documentation/files/1888662/BCM2837-ARM-Peripherals.-.Revised.-.V2-1.pdf>
#[allow(non_snake_case)]
RegisterBlock {
(0x00 => pub FunctionSelect: [ReadWrite<u32>; 6]), // function select
(0x18 => __reserved_1),
(0x1c => pub SetPin: [WriteOnly<u32>; 2]), // set output pin
(0x24 => __reserved_2),
(0x28 => pub ClearPin: [WriteOnly<u32>; 2]), // clear output pin
(0x30 => __reserved_3),
(0x34 => pub PinLevel: [ReadOnly<u32>; 2]), // get input pin level
(0x3c => __reserved_4),
(0xe4 => PullUpDownControl: [ReadWrite<u32>; 4]),
(0xf4 => @END),
}
}
// Hide RegisterBlock from public api.
type Registers = MMIODerefWrapper<RegisterBlock>;
struct GPIOInner {
registers: Registers,
}
/// Public interface to the GPIO MMIO area
pub struct GPIO {
inner: IRQSafeNullLock<GPIOInner>,
}
impl GPIOInner {
pub const unsafe fn new(mmio_base_addr: Address<Virtual>) -> Self {
Self {
registers: Registers::new(mmio_base_addr),
}
}
#[cfg(feature = "rpi3")]
pub fn power_off(&self) {
// power off gpio pins (but not VCC pins)
for bank in 0..5 {
self.registers.FunctionSelect[bank].set(0);
}
self.registers.PullUpDown.set(0);
// The Linux 2837 GPIO driver waits 1 µs between the steps.
const DELAY: Duration = Duration::from_micros(1);
time::time_manager().spin_for(DELAY);
self.registers.PullUpDownEnableClock[0].set(0xffff_ffff);
self.registers.PullUpDownEnableClock[1].set(0xffff_ffff);
time::time_manager().spin_for(DELAY);
// flush GPIO setup
self.registers.PullUpDownEnableClock[0].set(0);
self.registers.PullUpDownEnableClock[1].set(0);
}
#[cfg(feature = "rpi4")]
pub fn power_off(&self) {
todo!()
}
#[cfg(feature = "rpi3")]
pub fn set_pull_up_down(&self, pin: usize, pull: PullUpDown) {
let bank = pin / 32;
let off = pin % 32;
// The Linux 2837 GPIO driver waits 1 µs between the steps.
const DELAY: Duration = Duration::from_micros(1);
// 1. Write the desired control signal to the PullUpDown (GPPUD) register.
self.registers.PullUpDown.set(pull.into());
time::time_manager().spin_for(DELAY);
// 2. Clock the control signal into the selected pin.
self.registers.PullUpDownEnableClock[bank].modify(FieldValue::<u32, ()>::new(0b1, off, 1));
time::time_manager().spin_for(DELAY);
// 3. Remove the control signal and the clock.
self.registers.PullUpDown.set(0);
self.registers.PullUpDownEnableClock[bank].set(0);
}
#[cfg(feature = "rpi4")]
pub fn set_pull_up_down(&self, pin: usize, pull: PullUpDown) {
let bank = pin / 16;
let off = pin % 16;
self.registers.PullUpDownControl[bank].modify(FieldValue::<u32, ()>::new(
0b11,
off * 2,
pull.into(),
));
}
pub fn to_alt(&self, pin: usize, function: Function) {
let bank = pin / 10;
let off = pin % 10;
self.registers.FunctionSelect[bank].modify(FieldValue::<u32, ()>::new(
0b111,
off * 3,
function.into(),
));
}
pub fn set_pin(&mut self, pin: usize) {
// Guarantees: pin number is between [0; 53] by construction.
let bank = pin / 32;
let shift = pin % 32;
self.registers.SetPin[bank].set(1 << shift);
}
pub fn clear_pin(&mut self, pin: usize) {
// Guarantees: pin number is between [0; 53] by construction.
let bank = pin / 32;
let shift = pin % 32;
self.registers.ClearPin[bank].set(1 << shift);
}
pub fn get_level(&self, pin: usize) -> Level {
// Guarantees: pin number is between [0; 53] by construction.
let bank = pin / 32;
let off = pin % 32;
self.registers.PinLevel[bank].matches_all(FieldValue::<u32, ()>::new(1, off, 1))
}
}
impl GPIO {
pub const COMPATIBLE: &'static str = "BCM GPIO";
/// # Safety
///
/// - The user must ensure to provide a correct MMIO start address.
pub const unsafe fn new(mmio_base_addr: Address<Virtual>) -> Self {
Self {
inner: IRQSafeNullLock::new(GPIOInner::new(mmio_base_addr)),
}
}
pub fn get_pin(&self, pin: usize) -> Pin<Uninitialized> {
unsafe { Pin::new(pin, &self.inner) } // todo: expose only inner to avoid unlocked access
}
pub fn power_off(&self) {
self.inner.lock(|inner| inner.power_off());
}
}
/// An alternative GPIO function.
#[repr(u8)]
pub enum Function {
Input = 0b000,
Output = 0b001,
Alt0 = 0b100,
Alt1 = 0b101,
Alt2 = 0b110,
Alt3 = 0b111,
Alt4 = 0b011,
Alt5 = 0b010,
}
impl ::core::convert::From<Function> for u32 {
fn from(f: Function) -> Self {
f as u32
}
}
/// Pull up/down resistor setup.
#[repr(u8)]
#[derive(PartialEq, Eq)]
pub enum PullUpDown {
None = 0b00,
Up = 0b01,
Down = 0b10,
}
impl ::core::convert::From<PullUpDown> for u32 {
fn from(p: PullUpDown) -> Self {
p as u32
}
}
/// A GPIO pin in state `State`.
///
/// The `State` generic always corresponds to an un-instantiable type that is
/// used solely to mark and track the state of a given GPIO pin. A `Pin`
/// structure starts in the `Uninitialized` state and must be transitioned into
/// one of `Input`, `Output`, or `Alt` via the `into_input`, `into_output`, and
/// `into_alt` methods before it can be used.
pub struct Pin<'outer, State> {
pin: usize,
inner: &'outer IRQSafeNullLock<GPIOInner>,
_state: PhantomData<State>,
}
impl<'outer, State> Pin<'outer, State> {
/// Transitions `self` to state `NewState`, consuming `self` and returning a new
/// `Pin` instance in state `NewState`. This method should _never_ be exposed to
/// the public!
#[inline(always)]
fn transition<NewState>(self) -> Pin<'outer, NewState> {
Pin {
pin: self.pin,
inner: self.inner,
_state: PhantomData,
}
}
pub fn set_pull_up_down(&self, pull: PullUpDown) {
self.inner
.lock(|inner| inner.set_pull_up_down(self.pin, pull))
}
}
impl<'outer> Pin<'outer, Uninitialized> {
/// Returns a new GPIO `Pin` structure for pin number `pin`.
///
/// # Panics
///
/// Panics if `pin` > `53`.
unsafe fn new(
pin: usize,
inner: &'outer IRQSafeNullLock<GPIOInner>,
) -> Pin<'outer, Uninitialized> {
if pin > 53 {
panic!("gpio::Pin::new(): pin {pin} exceeds maximum of 53");
}
Pin {
inner,
pin,
_state: PhantomData,
}
}
/// Enables the alternative function `function` for `self`. Consumes self
/// and returns a `Pin` structure in the `Alt` state.
pub fn into_alt(self, function: Function) -> Pin<'outer, Alt> {
self.inner.lock(|inner| inner.to_alt(self.pin, function));
self.transition()
}
/// Sets this pin to be an _output_ pin. Consumes self and returns a `Pin`
/// structure in the `Output` state.
pub fn into_output(self) -> Pin<'outer, Output> {
self.into_alt(Function::Output).transition()
}
/// Sets this pin to be an _input_ pin. Consumes self and returns a `Pin`
/// structure in the `Input` state.
pub fn into_input(self) -> Pin<'outer, Input> {
self.into_alt(Function::Input).transition()
}
}
impl<'outer> Pin<'outer, Output> {
/// Sets (turns on) this pin.
pub fn set(&mut self) {
self.inner.lock(|inner| inner.set_pin(self.pin));
}
/// Clears (turns off) this pin.
pub fn clear(&mut self) {
self.inner.lock(|inner| inner.clear_pin(self.pin));
}
}
pub type Level = bool;
impl<'outer> Pin<'outer, Input> {
/// Reads the pin's value. Returns `true` if the level is high and `false`
/// if the level is low.
pub fn level(&self) -> Level {
self.inner.lock(|inner| inner.get_level(self.pin))
}
}
//--------------------------------------------------------------------------------------------------
// OS Interface Code
//--------------------------------------------------------------------------------------------------
impl crate::drivers::interface::DeviceDriver for GPIO {
type IRQNumberType = IRQNumber;
fn compatible(&self) -> &'static str {
Self::COMPATIBLE
}
}
//--------------------------------------------------------------------------------------------------
// Testing
//--------------------------------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
#[test_case]
fn test_pin_transitions() {
let mut reg = [0u32; 40];
let mmio_base_addr = Address::<Virtual>::new(&mut reg as *mut _ as usize);
let gpio = unsafe { GPIO::new(mmio_base_addr) };
let _out = gpio.get_pin(1).into_output();
assert_eq!(reg[0], 0b001_000);
let _inp = gpio.get_pin(12).into_input();
assert_eq!(reg[1], 0b000_000_000);
let _alt = gpio.get_pin(35).into_alt(Function::Alt1);
assert_eq!(reg[3], 0b101_000_000_000_000_000);
}
#[test_case]
fn test_pin_outputs() {
let mut reg = [0u32; 40];
let mmio_base_addr = Address::<Virtual>::new(&mut reg as *mut _ as usize);
let gpio = unsafe { GPIO::new(mmio_base_addr) };
let pin = gpio.get_pin(1);
let mut out = pin.into_output();
out.set();
assert_eq!(reg[7], 0b10); // SET pin 1 = 1 << 1
out.clear();
assert_eq!(reg[10], 0b10); // CLR pin 1 = 1 << 1
let pin = gpio.get_pin(35);
let mut out = pin.into_output();
out.set();
assert_eq!(reg[8], 0b1000); // SET pin 35 = 1 << (35 - 32)
out.clear();
assert_eq!(reg[11], 0b1000); // CLR pin 35 = 1 << (35 - 32)
}
#[test_case]
fn test_pin_inputs() {
let mut reg = [0u32; 40];
let mmio_base_addr = Address::<Virtual>::new(&mut reg as *mut _ as usize);
let gpio = unsafe { GPIO::new(mmio_base_addr) };
let pin = gpio.get_pin(1);
let inp = pin.into_input();
assert_eq!(inp.level(), false);
reg[13] = 0b10;
assert_eq!(inp.level(), true);
let pin = gpio.get_pin(35);
let inp = pin.into_input();
assert_eq!(inp.level(), false);
reg[14] = 0b1000;
assert_eq!(inp.level(), true);
}
}
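The `Pin<State>` typestate pattern above is worth isolating: uninhabited marker enums plus `PhantomData` make invalid transitions a compile-time error at zero runtime cost. A minimal sketch with the MMIO side stubbed out (names mirror the driver, but this is not its actual API):

```rust
// Sketch of the typestate pattern: `set` is only callable once the pin has been
// transitioned into the Output state.
use std::marker::PhantomData;

pub enum Uninitialized {}
pub enum Output {}

pub struct Pin<State> {
    pin: usize,
    _state: PhantomData<State>,
}

impl Pin<Uninitialized> {
    pub fn new(pin: usize) -> Self {
        Pin { pin, _state: PhantomData }
    }
    pub fn into_output(self) -> Pin<Output> {
        Pin { pin: self.pin, _state: PhantomData }
    }
}

impl Pin<Output> {
    /// Stand-in for the SetPin MMIO write; returns the pin number.
    pub fn set(&mut self) -> usize {
        self.pin
    }
}

fn main() {
    let mut out = Pin::new(1).into_output();
    assert_eq!(out.set(), 1);
    // Pin::new(2).set(); // would not compile: `set` exists only on Pin<Output>
}
```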


@ -0,0 +1,155 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2020-2022 Andre Richter <andre.o.richter@gmail.com>
//! Interrupt Controller Driver.
mod peripheral_ic;
use {
crate::{
drivers,
exception::{self, asynchronous::IRQHandlerDescriptor},
memory::{Address, Virtual},
platform::device_driver::common::BoundedUsize,
},
core::fmt,
};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
/// Wrapper struct for a bitmask indicating pending IRQ numbers.
struct PendingIRQs {
bitmask: u64,
}
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
pub type LocalIRQ = BoundedUsize<{ InterruptController::MAX_LOCAL_IRQ_NUMBER }>;
pub type PeripheralIRQ = BoundedUsize<{ InterruptController::MAX_PERIPHERAL_IRQ_NUMBER }>;
/// Used for the associated type of trait [`exception::asynchronous::interface::IRQManager`].
#[derive(Copy, Clone)]
#[allow(missing_docs)]
pub enum IRQNumber {
Local(LocalIRQ),
Peripheral(PeripheralIRQ),
}
/// Representation of the Interrupt Controller.
pub struct InterruptController {
periph: peripheral_ic::PeripheralIC,
}
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
impl PendingIRQs {
pub fn new(bitmask: u64) -> Self {
Self { bitmask }
}
}
impl Iterator for PendingIRQs {
type Item = usize;
fn next(&mut self) -> Option<Self::Item> {
if self.bitmask == 0 {
return None;
}
let next = self.bitmask.trailing_zeros() as usize;
self.bitmask &= self.bitmask.wrapping_sub(1);
Some(next)
}
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
impl fmt::Display for IRQNumber {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
Self::Local(number) => write!(f, "Local({})", number),
Self::Peripheral(number) => write!(f, "Peripheral({})", number),
}
}
}
impl InterruptController {
// Restrict to 3 for now. This makes future code for local_ic.rs more straightforward.
const MAX_LOCAL_IRQ_NUMBER: usize = 3;
const MAX_PERIPHERAL_IRQ_NUMBER: usize = 63;
pub const COMPATIBLE: &'static str = "BCM Interrupt Controller";
/// Create an instance.
///
/// # Safety
///
/// - The user must ensure to provide a correct MMIO start address.
pub const unsafe fn new(periph_mmio_start_addr: Address<Virtual>) -> Self {
Self {
periph: peripheral_ic::PeripheralIC::new(periph_mmio_start_addr),
}
}
}
//------------------------------------------------------------------------------
// OS Interface Code
//------------------------------------------------------------------------------
impl drivers::interface::DeviceDriver for InterruptController {
type IRQNumberType = IRQNumber;
fn compatible(&self) -> &'static str {
Self::COMPATIBLE
}
}
impl exception::asynchronous::interface::IRQManager for InterruptController {
type IRQNumberType = IRQNumber;
fn register_handler(
&self,
irq_handler_descriptor: exception::asynchronous::IRQHandlerDescriptor<Self::IRQNumberType>,
) -> Result<(), &'static str> {
match irq_handler_descriptor.number() {
IRQNumber::Local(_) => unimplemented!("Local IRQ controller not implemented."),
IRQNumber::Peripheral(pirq) => {
let periph_descriptor = IRQHandlerDescriptor::new(
pirq,
irq_handler_descriptor.name(),
irq_handler_descriptor.handler(),
);
self.periph.register_handler(periph_descriptor)
}
}
}
fn enable(&self, irq: &Self::IRQNumberType) {
match irq {
IRQNumber::Local(_) => unimplemented!("Local IRQ controller not implemented."),
IRQNumber::Peripheral(pirq) => self.periph.enable(pirq),
}
}
fn handle_pending_irqs<'irq_context>(
&'irq_context self,
ic: &exception::asynchronous::IRQContext<'irq_context>,
) {
// It can only be a peripheral IRQ pending because enable() does not support local IRQs yet.
self.periph.handle_pending_irqs(ic)
}
fn print_handler(&self) {
self.periph.print_handler();
}
}
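The `PendingIRQs` iterator defined above walks the set bits of a `u64` from least to most significant: `trailing_zeros` finds the next pending IRQ, and `mask &= mask - 1` clears the lowest set bit. A self-contained sketch of the same technique:

```rust
// Sketch of iterating a pending-IRQ bitmask, bit by bit, lowest IRQ first.
struct PendingIRQs {
    bitmask: u64,
}

impl Iterator for PendingIRQs {
    type Item = usize;
    fn next(&mut self) -> Option<usize> {
        if self.bitmask == 0 {
            return None;
        }
        let next = self.bitmask.trailing_zeros() as usize;
        // Clear the lowest set bit so the next call finds the following IRQ.
        self.bitmask &= self.bitmask.wrapping_sub(1);
        Some(next)
    }
}

fn main() {
    let pending: Vec<usize> = PendingIRQs { bitmask: 0b1010_0001 }.collect();
    assert_eq!(pending, vec![0, 5, 7]);
}
```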


@ -0,0 +1,175 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2020-2022 Andre Richter <andre.o.richter@gmail.com>
//! Peripheral Interrupt Controller Driver.
//!
//! # Resources
//!
//! - <https://github.com/raspberrypi/documentation/files/1888662/BCM2837-ARM-Peripherals.-.Revised.-.V2-1.pdf>
use {
super::{PendingIRQs, PeripheralIRQ},
crate::{
exception,
platform::device_driver::common::MMIODerefWrapper,
synchronization::{self, IRQSafeNullLock, InitStateLock},
},
tock_registers::{
interfaces::{Readable, Writeable},
register_structs,
registers::{ReadOnly, WriteOnly},
},
};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
register_structs! {
#[allow(non_snake_case)]
WORegisterBlock {
(0x00 => _reserved1),
(0x10 => ENABLE_1: WriteOnly<u32>),
(0x14 => ENABLE_2: WriteOnly<u32>),
(0x18 => @END),
}
}
register_structs! {
#[allow(non_snake_case)]
RORegisterBlock {
(0x00 => _reserved1),
(0x04 => PENDING_1: ReadOnly<u32>),
(0x08 => PENDING_2: ReadOnly<u32>),
(0x0c => @END),
}
}
/// Abstraction for the WriteOnly parts of the associated MMIO registers.
type WriteOnlyRegisters = MMIODerefWrapper<WORegisterBlock>;
/// Abstraction for the ReadOnly parts of the associated MMIO registers.
type ReadOnlyRegisters = MMIODerefWrapper<RORegisterBlock>;
type HandlerTable = [Option<exception::asynchronous::IRQHandlerDescriptor<PeripheralIRQ>>;
PeripheralIRQ::MAX_INCLUSIVE + 1];
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Representation of the peripheral interrupt controller.
pub struct PeripheralIC {
/// Access to write registers is guarded with a lock.
wo_registers: IRQSafeNullLock<WriteOnlyRegisters>,
/// Register read access is unguarded.
ro_registers: ReadOnlyRegisters,
/// Stores registered IRQ handlers. Writable only during kernel init. RO afterwards.
handler_table: InitStateLock<HandlerTable>,
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
impl PeripheralIC {
/// Create an instance.
///
/// # Safety
///
/// - The user must ensure to provide a correct MMIO start address.
pub const unsafe fn new(mmio_start_addr: Address<Virtual>) -> Self {
Self {
wo_registers: IRQSafeNullLock::new(WriteOnlyRegisters::new(mmio_start_addr)),
ro_registers: ReadOnlyRegisters::new(mmio_start_addr),
handler_table: InitStateLock::new([None; PeripheralIRQ::MAX_INCLUSIVE + 1]),
}
}
/// Query the list of pending IRQs.
fn pending_irqs(&self) -> PendingIRQs {
let pending_mask: u64 = (u64::from(self.ro_registers.PENDING_2.get()) << 32)
| u64::from(self.ro_registers.PENDING_1.get());
PendingIRQs::new(pending_mask)
}
}
//------------------------------------------------------------------------------
// OS Interface Code
//------------------------------------------------------------------------------
use {
crate::memory::{Address, Virtual},
synchronization::interface::{Mutex, ReadWriteEx},
};
impl exception::asynchronous::interface::IRQManager for PeripheralIC {
type IRQNumberType = PeripheralIRQ;
fn register_handler(
&self,
irq_handler_descriptor: exception::asynchronous::IRQHandlerDescriptor<Self::IRQNumberType>,
) -> Result<(), &'static str> {
self.handler_table.write(|table| {
let irq_number = irq_handler_descriptor.number().get();
if table[irq_number].is_some() {
return Err("IRQ handler already registered");
}
table[irq_number] = Some(irq_handler_descriptor);
Ok(())
})
}
fn enable(&self, irq: &Self::IRQNumberType) {
self.wo_registers.lock(|regs| {
let enable_reg = if irq.get() <= 31 {
&regs.ENABLE_1
} else {
&regs.ENABLE_2
};
let enable_bit: u32 = 1 << (irq.get() % 32);
// Writing a 1 to a bit sets the corresponding IRQ enable bit; all other bits
// are unaffected, so no read-modify-write is needed here.
enable_reg.set(enable_bit);
});
}
fn handle_pending_irqs<'irq_context>(
&'irq_context self,
_ic: &exception::asynchronous::IRQContext<'irq_context>,
) {
self.handler_table.read(|table| {
for irq_number in self.pending_irqs() {
match table[irq_number] {
None => panic!("No handler registered for IRQ {}", irq_number),
Some(descriptor) => {
// Call the IRQ handler. Panics on failure.
descriptor.handler().handle().expect("Error handling IRQ");
}
}
}
})
}
fn print_handler(&self) {
use crate::info;
info!(" Peripheral handler:");
self.handler_table.read(|table| {
for (i, opt) in table.iter().enumerate() {
if let Some(handler) = opt {
info!(" {: >3}. {}", i, handler.name());
}
}
});
}
}
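The register math in `enable()` and `pending_irqs()` above follows one pattern: 64 peripheral IRQs split across two 32-bit registers, with the write-1-to-set property making read-modify-write unnecessary. A sketch with illustrative names:

```rust
// Sketch: PENDING_2 holds IRQs 32..=63 in its low bits, PENDING_1 holds 0..=31;
// enable_slot picks ENABLE_1 or ENABLE_2 and the single bit to write.
fn combined_pending(pending_2: u32, pending_1: u32) -> u64 {
    (u64::from(pending_2) << 32) | u64::from(pending_1)
}

fn enable_slot(irq: usize) -> (u8, u32) {
    // Returns (register number 1 or 2, bit to write).
    (if irq <= 31 { 1 } else { 2 }, 1u32 << (irq % 32))
}

fn main() {
    assert_eq!(combined_pending(0x1, 0x2), 0x0000_0001_0000_0002);
    assert_eq!(enable_slot(35), (2, 0b1000));
}
```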


@ -7,18 +7,27 @@
*/
//! Broadcom mailbox interface between the VideoCore and the ARM Core.
//!
//! Mailbox is controlled by two parts: a MAILBOX driver that drives the MMIO registers and
//! a MailboxCommand, that incorporates a command buffer and concurrency controls.
#![allow(dead_code)]
use crate::synchronization::IRQSafeNullLock;
use {
super::BcmHost,
crate::{platform::MMIODerefWrapper, println},
crate::{
memory::{Address, Virtual},
platform::device_driver::common::MMIODerefWrapper,
println,
}, //DMA_ALLOCATOR
aarch64_cpu::asm::barrier,
core::{
alloc::{AllocError, Allocator, Layout},
mem,
ptr::NonNull,
result::Result as CoreResult,
sync::atomic::{compiler_fence, Ordering},
},
cortex_a::asm::barrier,
snafu::Snafu,
tock_registers::{
interfaces::{Readable, Writeable},
@ -27,26 +36,33 @@ use {
},
};
/// Mailbox MMIO registers access.
struct MailboxInner {
registers: Registers,
}
/// Mailbox driver
pub struct Mailbox {
inner: IRQSafeNullLock<MailboxInner>,
}
/// Public interface to the mailbox.
/// The address for the buffer needs to be 16-byte aligned
/// so that the VideoCore can handle it properly.
/// The reason is that lowest 4 bits of the address will contain the channel number.
pub struct Mailbox<const N_SLOTS: usize, Storage = LocalMailboxStorage<N_SLOTS>> {
registers: Registers,
pub struct MailboxCommand<const N_SLOTS: usize, Storage = DmaBackedMailboxStorage<N_SLOTS>> {
pub buffer: Storage,
}
/// Mailbox that is ready to be called.
/// This prevents invalid use of the mailbox until it is fully prepared.
pub struct PreparedMailbox<const N_SLOTS: usize, Storage = LocalMailboxStorage<N_SLOTS>>(
Mailbox<N_SLOTS, Storage>,
/// Mailbox command that is ready to be called.
/// This prevents invalid use of the mailbox command until it is fully prepared.
pub struct PreparedMailboxCommand<const N_SLOTS: usize, Storage = DmaBackedMailboxStorage<N_SLOTS>>(
MailboxCommand<N_SLOTS, Storage>,
);
const MAILBOX_ALIGNMENT: usize = 16;
const MAILBOX_ITEMS_COUNT: usize = 36;
/// We've identity mapped the MMIO register region on kernel start.
const MAILBOX_BASE: usize = BcmHost::get_peripheral_address() + 0xb880;
/// Lowest 4-bits are channel ID.
const CHANNEL_MASK: u32 = 0xf;
@ -96,6 +112,8 @@ pub enum MailboxError {
Unknown,
#[snafu(display("Timeout"))]
Timeout,
#[snafu(display("AllocError"))]
Alloc,
}
pub type Result<T> = CoreResult<T, MailboxError>;
@ -104,14 +122,16 @@ pub type Result<T> = CoreResult<T, MailboxError>;
pub trait MailboxOps {
fn write(&self, channel: u32) -> Result<()>;
fn read(&self, channel: u32) -> Result<()>;
fn call(&self, channel: u32) -> Result<()> {
self.write(channel)?;
self.read(channel)
}
fn call(&self, channel: u32) -> Result<()>; //{
// self.write(channel)?;
// self.read(channel)
// }
}
pub trait MailboxStorage {
fn new() -> Self;
fn new() -> Result<Self>
where
Self: Sized;
}
pub trait MailboxStorageRef {
@ -127,11 +147,50 @@ pub struct LocalMailboxStorage<const N_SLOTS: usize> {
pub storage: [u32; N_SLOTS],
}
pub struct DmaBackedMailboxStorage<const N_SLOTS: usize> {
pub storage: *mut u32,
}
impl<const N_SLOTS: usize> MailboxStorage for LocalMailboxStorage<N_SLOTS> {
fn new() -> Self {
Self {
fn new() -> Result<Self> {
Ok(Self {
storage: [0u32; N_SLOTS],
}
})
}
}
impl<const N_SLOTS: usize> MailboxStorage for DmaBackedMailboxStorage<N_SLOTS> {
fn new() -> Result<Self> {
use crate::platform::memory::map::virt::DMA_HEAP_START;
Ok(Self {
storage: DMA_HEAP_START
// storage: DMA_ALLOCATOR
// .lock(|a| {
// a.allocate(
// Layout::from_size_align(N_SLOTS * mem::size_of::<u32>(), 16)
// .map_err(|_| AllocError)?,
// )
// })
// .map_err(|_| MailboxError::Alloc)?
// .as_mut_ptr()
as *mut u32,
})
}
}
impl<const N_SLOTS: usize> Drop for DmaBackedMailboxStorage<N_SLOTS> {
fn drop(&mut self) {
// DMA_ALLOCATOR
// .lock::<_, Result<()>>(|a| unsafe {
// #[allow(clippy::unit_arg)]
// Ok(a.deallocate(
// NonNull::new_unchecked(self.storage as *mut u8),
// Layout::from_size_align(N_SLOTS * mem::size_of::<u32>(), 16)
// .map_err(|_| MailboxError::Alloc)?,
// ))
// })
// .unwrap_or(())
}
}
@ -154,6 +213,25 @@ impl<const N_SLOTS: usize> MailboxStorageRef for LocalMailboxStorage<N_SLOTS> {
}
}
impl<const N_SLOTS: usize> MailboxStorageRef for DmaBackedMailboxStorage<N_SLOTS> {
fn as_ref(&self) -> &[u32] {
unsafe { core::slice::from_raw_parts(self.storage.cast(), N_SLOTS) }
}
fn as_mut(&mut self) -> &mut [u32] {
unsafe { core::slice::from_raw_parts_mut(self.storage.cast(), N_SLOTS) }
}
fn as_ptr(&self) -> *const u32 {
self.storage.cast()
}
// @todo Probably need a ResultMailbox for accessing data after call()?
fn value_at(&self, index: usize) -> u32 {
self.as_ref()[index]
}
}
/*
* Source https://elinux.org/RPi_Framebuffer
* Source for channels 8 and 9: https://github.com/raspberrypi/firmware/wiki/Mailboxes
@ -302,20 +380,14 @@ impl<const N_SLOTS: usize> core::fmt::Debug for PreparedMailbox<N_SLOTS> {
}
}
impl<const N_SLOTS: usize> Default for Mailbox<N_SLOTS> {
fn default() -> Self {
unsafe { Self::new(MAILBOX_BASE) }.expect("Couldn't allocate a default mailbox")
}
}
impl<const N_SLOTS: usize, Storage: MailboxStorage + MailboxStorageRef> Mailbox<N_SLOTS, Storage> {
/// Create a new mailbox locally in an aligned stack space.
/// Create a new mailbox locally in an aligned storage space.
/// # Safety
/// Caller is responsible for picking the correct MMIO register base address.
pub unsafe fn new(base_addr: usize) -> Result<Mailbox<N_SLOTS, Storage>> {
pub unsafe fn new(mmio_base_addr: Address<Virtual>) -> Result<Mailbox<N_SLOTS, Storage>> {
Ok(Mailbox {
registers: Registers::new(base_addr),
buffer: Storage::new(),
registers: Registers::new(mmio_base_addr),
buffer: Storage::new()?,
})
}
@ -400,7 +472,7 @@ impl<const N_SLOTS: usize, Storage: MailboxStorage + MailboxStorageRef> Mailbox<
buf[index + 1] = 8; // Buffer size // val buf size
buf[index + 2] = 0; // Response size // val size
buf[index + 3] = 130; // Pin Number
buf[index + 4] = if enable { 1 } else { 0 };
buf[index + 4] = enable.into();
index + 5
}
@ -482,11 +554,11 @@ impl<const N_SLOTS: usize, Storage: MailboxStorage + MailboxStorageRef> Mailbox<
/// when passing memory addresses as the data part of a mailbox message,
/// the addresses should be **bus addresses as seen from the VC.**
pub fn do_write(&self, channel: u32) -> Result<()> {
let buf_ptr = self.buffer.as_ptr() as *const u32 as u32;
let buf_ptr = self.buffer.as_ptr();
let buf_ptr = if channel != channel::PropertyTagsArmToVc {
BcmHost::phys2bus(buf_ptr as usize) as u32
} else {
buf_ptr
buf_ptr as u32
};
let mut count: u32 = 0;
@ -504,9 +576,7 @@ impl<const N_SLOTS: usize, Storage: MailboxStorage + MailboxStorageRef> Mailbox<
return Err(MailboxError::Timeout);
}
}
unsafe {
barrier::dmb(barrier::SY);
}
barrier::dmb(barrier::SY);
self.registers
.WRITE
.set((buf_ptr & !CHANNEL_MASK) | (channel & CHANNEL_MASK));
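The WRITE-word layout used above can be illustrated in isolation: the low 4 bits select the mailbox channel, the upper 28 bits carry the 16-byte-aligned buffer bus address. This is a hedged sketch; the `0xC000_0000` VideoCore bus alias is the usual BCM283x convention assumed here, and channel 1 is used purely for illustration.

```rust
// Sketch of the mailbox WRITE word: (bus_addr & !CHANNEL_MASK) | channel.
const CHANNEL_MASK: u32 = 0b1111;

// Assumed ARM-physical to VideoCore-bus translation (0xC000_0000 alias).
fn phys2bus(phys: u32) -> u32 {
    phys | 0xC000_0000
}

fn write_word(buf_bus_addr: u32, channel: u32) -> u32 {
    (buf_bus_addr & !CHANNEL_MASK) | (channel & CHANNEL_MASK)
}

fn main() {
    // A hypothetical 16-byte-aligned buffer address, sent on channel 1.
    let word = write_word(phys2bus(0x0010_0040), 1);
    assert_eq!(word, 0xC010_0041);
    println!("{:#010x}", word);
}
```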
@ -610,7 +680,7 @@ mod tests {
// by the end() fn.
#[test_case]
fn test_prepare_mailbox() {
let mut mailbox = Mailbox::default();
let mut mailbox = Mailbox::<8>::default();
let index = mailbox.request();
let index = mailbox.set_led_on(index, true);
let mailbox = mailbox.end(index);


@ -0,0 +1,376 @@
/*
* SPDX-License-Identifier: MIT OR BlueOak-1.0.0
* Copyright (c) 2018-2019 Andre Richter <andre.o.richter@gmail.com>
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
* Original code distributed under MIT, additional changes are under BlueOak-1.0.0
*/
#[cfg(not(feature = "noserial"))]
use tock_registers::interfaces::{Readable, Writeable};
use {
crate::{
console::interface,
devices::serial::SerialOps,
exception::asynchronous::IRQNumber,
memory::{Address, Virtual},
platform::{
device_driver::{common::MMIODerefWrapper, gpio},
BcmHost,
},
synchronization::{interface::Mutex, IRQSafeNullLock},
},
cfg_if::cfg_if,
core::{
convert::From,
fmt::{self, Arguments},
},
tock_registers::{
interfaces::ReadWriteable,
register_bitfields, register_structs,
registers::{ReadOnly, ReadWrite, WriteOnly},
},
};
// Auxiliary mini UART registers
//
// Descriptions taken from
// https://github.com/raspberrypi/documentation/files/1888662/BCM2837-ARM-Peripherals.-.Revised.-.V2-1.pdf
register_bitfields! {
u32,
/// Auxiliary enables
AUX_ENABLES [
/// If set the mini UART is enabled. The UART will immediately
/// start receiving data, especially if the UART1_RX line is
/// low.
/// If clear the mini UART is disabled. That also disables any
/// mini UART register access
MINI_UART_ENABLE OFFSET(0) NUMBITS(1) []
],
/// Mini Uart Interrupt Identify
AUX_MU_IIR [
/// Writing with bit 1 set will clear the receive FIFO
/// Writing with bit 2 set will clear the transmit FIFO
FIFO_CLEAR OFFSET(1) NUMBITS(2) [
Rx = 0b01,
Tx = 0b10,
All = 0b11
]
],
/// Mini Uart Line Control
AUX_MU_LCR [
/// Mode the UART works in
DATA_SIZE OFFSET(0) NUMBITS(2) [
SevenBit = 0b00,
EightBit = 0b11
]
],
/// Mini Uart Line Status
AUX_MU_LSR [
/// This bit is set if the transmit FIFO is empty and the transmitter is
/// idle. (Finished shifting out the last bit).
TX_IDLE OFFSET(6) NUMBITS(1) [],
/// This bit is set if the transmit FIFO can accept at least
/// one byte.
TX_EMPTY OFFSET(5) NUMBITS(1) [],
/// This bit is set if the receive FIFO holds at least 1
/// symbol.
DATA_READY OFFSET(0) NUMBITS(1) []
],
/// Mini Uart Extra Control
AUX_MU_CNTL [
/// If this bit is set the mini UART transmitter is enabled.
/// If this bit is clear the mini UART transmitter is disabled.
TX_EN OFFSET(1) NUMBITS(1) [
Disabled = 0,
Enabled = 1
],
/// If this bit is set the mini UART receiver is enabled.
/// If this bit is clear the mini UART receiver is disabled.
RX_EN OFFSET(0) NUMBITS(1) [
Disabled = 0,
Enabled = 1
]
],
/// Mini Uart Status
AUX_MU_STAT [
TX_DONE OFFSET(9) NUMBITS(1) [
No = 0,
Yes = 1
],
/// This bit is set if the transmit FIFO can accept at least
/// one byte.
SPACE_AVAILABLE OFFSET(1) NUMBITS(1) [
No = 0,
Yes = 1
],
/// This bit is set if the receive FIFO holds at least 1
/// symbol.
SYMBOL_AVAILABLE OFFSET(0) NUMBITS(1) [
No = 0,
Yes = 1
]
],
/// Mini Uart Baud rate
AUX_MU_BAUD [
/// Mini UART baud rate counter
RATE OFFSET(0) NUMBITS(16) []
]
}
register_structs! {
#[allow(non_snake_case)]
RegisterBlock {
// 0x00 - AUX_IRQ?
(0x00 => __reserved_1),
(0x04 => AUX_ENABLES: ReadWrite<u32, AUX_ENABLES::Register>),
(0x08 => __reserved_2),
(0x40 => AUX_MU_IO: ReadWrite<u32>),//Mini Uart I/O Data
(0x44 => AUX_MU_IER: WriteOnly<u32>),//Mini Uart Interrupt Enable
(0x48 => AUX_MU_IIR: WriteOnly<u32, AUX_MU_IIR::Register>),
(0x4c => AUX_MU_LCR: WriteOnly<u32, AUX_MU_LCR::Register>),
(0x50 => AUX_MU_MCR: WriteOnly<u32>),
(0x54 => AUX_MU_LSR: ReadOnly<u32, AUX_MU_LSR::Register>),
// 0x58 - AUX_MU_MSR
// 0x5c - AUX_MU_SCRATCH
(0x58 => __reserved_3),
(0x60 => AUX_MU_CNTL: WriteOnly<u32, AUX_MU_CNTL::Register>),
(0x64 => AUX_MU_STAT: ReadOnly<u32, AUX_MU_STAT::Register>),
(0x68 => AUX_MU_BAUD: WriteOnly<u32, AUX_MU_BAUD::Register>),
(0x6c => @END),
}
}
type Registers = MMIODerefWrapper<RegisterBlock>;
struct MiniUartInner {
registers: Registers,
}
pub struct MiniUart {
inner: IRQSafeNullLock<MiniUartInner>,
}
/// Divisor values for common baud rates
pub enum Rate {
Baud115200 = 270,
}
impl From<Rate> for u32 {
fn from(r: Rate) -> Self {
r as u32
}
}
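Where the `270` divisor for `Baud115200` comes from can be checked independently. The BCM2835/2837 datasheet gives `baudrate = core_clock / (8 * (reg + 1))` for the mini UART; a 250 MHz core clock is the assumption here.

```rust
// Sketch of the mini UART baud divisor derivation, assuming a 250 MHz core clock.
fn mini_uart_divisor(core_clock: u32, baud: u32) -> u32 {
    // Invert baudrate = core_clock / (8 * (reg + 1)) for reg.
    core_clock / (8 * baud) - 1
}

fn main() {
    let d = mini_uart_divisor(250_000_000, 115_200);
    assert_eq!(d, 270); // matches Rate::Baud115200 above
    println!("AUX_MU_BAUD divisor: {d}");
}
```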
// [temporary] Used in mmu.rs to set up local paging
pub const UART1_BASE: usize = BcmHost::get_peripheral_address() + 0x21_5000;
impl crate::drivers::interface::DeviceDriver for MiniUart {
type IRQNumberType = IRQNumber;
fn compatible(&self) -> &'static str {
Self::COMPATIBLE
}
unsafe fn init(&self) -> Result<(), &'static str> {
self.inner.lock(|inner| inner.prepare())
}
}
impl MiniUart {
pub const COMPATIBLE: &'static str = "BCM MINI UART";
/// Create an instance.
///
/// # Safety
///
/// - The user must ensure to provide a correct MMIO start address.
pub const unsafe fn new(mmio_base_addr: Address<Virtual>) -> Self {
Self {
inner: IRQSafeNullLock::new(MiniUartInner::new(mmio_base_addr)),
}
}
/// GPIO pins should be set up first before enabling the UART
pub fn prepare_gpio(gpio: &gpio::GPIO) {
// Pin 14
const MINI_UART_TXD: gpio::Function = gpio::Function::Alt5;
// Pin 15
const MINI_UART_RXD: gpio::Function = gpio::Function::Alt5;
// map UART1 to GPIO pins
gpio.get_pin(14)
.into_alt(MINI_UART_TXD)
.set_pull_up_down(gpio::PullUpDown::Up);
gpio.get_pin(15)
.into_alt(MINI_UART_RXD)
.set_pull_up_down(gpio::PullUpDown::Up);
}
}
impl MiniUartInner {
/// Create an instance.
///
/// # Safety
///
/// - The user must ensure to provide a correct MMIO start address.
pub const unsafe fn new(mmio_base_addr: Address<Virtual>) -> Self {
Self {
registers: Registers::new(mmio_base_addr),
}
}
/// Set baud rate and characteristics (115200 8N1) and map to GPIO
pub fn prepare(&self) -> Result<(), &'static str> {
use tock_registers::interfaces::Writeable;
// initialize UART
self.registers
.AUX_ENABLES
.modify(AUX_ENABLES::MINI_UART_ENABLE::SET);
self.registers.AUX_MU_IER.set(0);
self.registers.AUX_MU_CNTL.set(0);
self.registers
.AUX_MU_LCR
.write(AUX_MU_LCR::DATA_SIZE::EightBit);
self.registers.AUX_MU_MCR.set(0);
self.registers.AUX_MU_IER.set(0);
self.registers
.AUX_MU_BAUD
.write(AUX_MU_BAUD::RATE.val(Rate::Baud115200.into()));
// Clear FIFOs before using the device
self.registers.AUX_MU_IIR.write(AUX_MU_IIR::FIFO_CLEAR::All);
self.registers
.AUX_MU_CNTL
.write(AUX_MU_CNTL::RX_EN::Enabled + AUX_MU_CNTL::TX_EN::Enabled);
Ok(())
}
fn flush_internal(&self) {
use tock_registers::interfaces::Readable;
crate::cpu::loop_until(|| self.registers.AUX_MU_STAT.is_set(AUX_MU_STAT::TX_DONE));
}
}
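The `cpu::loop_until`/`cpu::loop_while` helpers used throughout these drivers are assumed to be plain busy-wait combinators, roughly along these lines (sketch, not the kernel's actual definitions):

```rust
// Assumed shape of the busy-wait helpers: spin until/while a condition holds.
fn loop_until<F: FnMut() -> bool>(mut cond: F) {
    while !cond() {}
}

fn loop_while<F: FnMut() -> bool>(mut cond: F) {
    while cond() {}
}

fn main() {
    // Simulate polling a status bit that becomes set on the third read.
    let mut polls = 0;
    loop_until(|| {
        polls += 1;
        polls >= 3
    });
    assert_eq!(polls, 3);

    // Simulate draining a FIFO of five pending symbols.
    let mut pending = 5;
    loop_while(|| {
        pending -= 1;
        pending > 0
    });
    assert_eq!(pending, 0);
}
```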
impl Drop for MiniUartInner {
fn drop(&mut self) {
self.registers
.AUX_ENABLES
.modify(AUX_ENABLES::MINI_UART_ENABLE::CLEAR);
// @todo disable gpio.PUD ?
}
}
impl SerialOps for MiniUartInner {
/// Receive a byte without console translation
fn read_byte(&self) -> u8 {
use tock_registers::interfaces::Readable;
// wait until something is in the buffer
crate::cpu::loop_until(|| {
self.registers
.AUX_MU_STAT
.is_set(AUX_MU_STAT::SYMBOL_AVAILABLE)
});
// read it and return
self.registers.AUX_MU_IO.get() as u8
}
fn write_byte(&self, b: u8) {
use tock_registers::interfaces::{Readable, Writeable};
// wait until we can send
crate::cpu::loop_until(|| {
self.registers
.AUX_MU_STAT
.is_set(AUX_MU_STAT::SPACE_AVAILABLE)
});
// write the character to the buffer
self.registers.AUX_MU_IO.set(b as u32);
}
/// Wait until the TX FIFO is empty, aka all characters have been put on the
/// line.
fn flush(&self) {
self.flush_internal();
}
/// Consume input until RX FIFO is empty, aka all pending characters have been
/// consumed.
fn clear_rx(&self) {
use tock_registers::interfaces::Readable;
crate::cpu::loop_while(|| {
let pending = self
.registers
.AUX_MU_STAT
.is_set(AUX_MU_STAT::SYMBOL_AVAILABLE);
if pending {
self.read_byte();
}
pending
});
}
}
impl interface::ConsoleOps for MiniUartInner {}
impl fmt::Write for MiniUartInner {
fn write_str(&mut self, s: &str) -> fmt::Result {
use interface::ConsoleOps;
self.write_string(s);
Ok(())
}
}
impl interface::Write for MiniUart {
fn write_fmt(&self, args: Arguments) -> fmt::Result {
self.inner.lock(|inner| fmt::Write::write_fmt(inner, args))
}
}
impl SerialOps for MiniUart {
fn read_byte(&self) -> u8 {
self.inner.lock(|inner| inner.read_byte())
}
fn write_byte(&self, byte: u8) {
self.inner.lock(|inner| inner.write_byte(byte))
}
fn flush(&self) {
self.inner.lock(|inner| inner.flush())
}
fn clear_rx(&self) {
self.inner.lock(|inner| inner.clear_rx())
}
}
impl interface::ConsoleOps for MiniUart {
fn write_char(&self, c: char) {
self.inner.lock(|inner| inner.write_char(c))
}
fn write_string(&self, string: &str) {
self.inner.lock(|inner| inner.write_string(string))
}
fn read_char(&self) -> char {
self.inner.lock(|inner| inner.read_char())
}
}
impl interface::All for MiniUart {}


@ -0,0 +1,17 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2018-2022 Andre Richter <andre.o.richter@gmail.com>
//! BCM driver top level.
pub mod gpio;
#[cfg(feature = "rpi3")]
pub mod interrupt_controller;
// pub mod mailbox;
pub mod mini_uart;
pub mod pl011_uart;
// pub mod power;
#[cfg(feature = "rpi3")]
pub use interrupt_controller::*;
pub use {gpio::*, mini_uart::*, pl011_uart::*};


@ -9,16 +9,16 @@
*/
use {
super::{
gpio,
mailbox::{self, Mailbox, MailboxOps},
BcmHost,
},
crate::{
arch::loop_while,
devices::{ConsoleOps, SerialOps},
platform::MMIODerefWrapper,
console::interface,
cpu::loop_while,
devices::serial::SerialOps,
exception,
memory::{Address, Virtual},
platform::device_driver::{common::MMIODerefWrapper, gpio, IRQNumber},
synchronization::{interface::Mutex, IRQSafeNullLock},
},
core::fmt::{self, Arguments},
snafu::Snafu,
tock_registers::{
interfaces::{ReadWriteable, Readable, Writeable},
@ -27,6 +27,10 @@ use {
},
};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
// PL011 UART registers.
//
// Descriptions taken from
@ -76,18 +80,27 @@ register_bitfields! {
/// Integer Baud rate divisor
IBRD [
/// Integer Baud rate divisor
IBRD OFFSET(0) NUMBITS(16) []
BAUD_DIVINT OFFSET(0) NUMBITS(16) []
],
/// Fractional Baud rate divisor
FBRD [
/// Fractional Baud rate divisor
FBRD OFFSET(0) NUMBITS(6) []
BAUD_DIVFRAC OFFSET(0) NUMBITS(6) []
],
/// Line Control register
LCRH [
Parity OFFSET(1) NUMBITS(1) [
LCR_H [
/// Word length. These bits indicate the number of data bits
/// transmitted or received in a frame.
WordLength OFFSET(5) NUMBITS(2) [
FiveBit = 0b00,
SixBit = 0b01,
SevenBit = 0b10,
EightBit = 0b11
],
Fifos OFFSET(4) NUMBITS(1) [
Disabled = 0,
Enabled = 1
],
@ -98,19 +111,10 @@ register_bitfields! {
Enabled = 1
],
Fifo OFFSET(4) NUMBITS(1) [
Parity OFFSET(1) NUMBITS(1) [
Disabled = 0,
Enabled = 1
],
/// Word length. These bits indicate the number of data bits
/// transmitted or received in a frame.
WordLength OFFSET(5) NUMBITS(2) [
FiveBit = 0b00,
SixBit = 0b01,
SevenBit = 0b10,
EightBit = 0b11
]
],
/// Control Register
@ -145,15 +149,56 @@ register_bitfields! {
]
],
/// Interupt Clear Register
ICR [
/// Meta field for all pending interrupts
ALL OFFSET(0) NUMBITS(11) []
/// Interrupt FIFO Level Select Register.
IFLS [
/// Receive interrupt FIFO level select.
/// The trigger points for the receive interrupt are as follows.
RXIFLSEL OFFSET(3) NUMBITS(5) [
OneEigth = 0b000,
OneQuarter = 0b001,
OneHalf = 0b010,
ThreeQuarters = 0b011,
SevenEights = 0b100
]
],
/// Interupt Mask Set/Clear Register
/// Interrupt Mask Set/Clear Register.
IMSC [
/// Meta field for all interrupts
/// Receive timeout interrupt mask. A read returns the current mask for the UARTRTINTR
/// interrupt.
///
/// - On a write of 1, the mask of the UARTRTINTR interrupt is set.
/// - A write of 0 clears the mask.
RTIM OFFSET(6) NUMBITS(1) [
Disabled = 0,
Enabled = 1
],
/// Receive interrupt mask. A read returns the current mask for the UARTRXINTR interrupt.
///
/// - On a write of 1, the mask of the UARTRXINTR interrupt is set.
/// - A write of 0 clears the mask.
RXIM OFFSET(4) NUMBITS(1) [
Disabled = 0,
Enabled = 1
]
],
/// Masked Interrupt Status Register.
MIS [
/// Receive timeout masked interrupt status. Returns the masked interrupt state of the
/// UARTRTINTR interrupt.
RTMIS OFFSET(6) NUMBITS(1) [],
/// Receive masked interrupt status. Returns the masked interrupt state of the UARTRXINTR
/// interrupt.
RXMIS OFFSET(4) NUMBITS(1) []
],
/// Interrupt Clear Register
ICR [
/// Meta field for all pending interrupts
/// On a write of 1, the corresponding interrupt is cleared. A write of 0 has no effect.
ALL OFFSET(0) NUMBITS(11) []
],
@ -182,52 +227,62 @@ register_structs! {
(0x08 => __reserved_1),
(0x18 => Flag: ReadOnly<u32, FR::Register>),
(0x1c => __reserved_2),
(0x24 => IntegerBaudRate: ReadWrite<u32, IBRD::Register>),
(0x28 => FractionalBaudRate: ReadWrite<u32, FBRD::Register>),
(0x2c => LineControl: ReadWrite<u32, LCRH::Register>),
(0x30 => Control: ReadWrite<u32, CR::Register>),
(0x34 => InterruptFifoLevelSelect: ReadWrite<u32>),
(0x24 => IntegerBaudRate: WriteOnly<u32, IBRD::Register>),
(0x28 => FractionalBaudRate: WriteOnly<u32, FBRD::Register>),
(0x2c => LineControl: ReadWrite<u32, LCR_H::Register>), // @todo write-only?
(0x30 => Control: WriteOnly<u32, CR::Register>),
(0x34 => InterruptFifoLevelSelect: ReadWrite<u32, IFLS::Register>),
(0x38 => InterruptMaskSetClear: ReadWrite<u32, IMSC::Register>),
(0x3c => RawInterruptStatus: ReadOnly<u32>),
(0x40 => MaskedInterruptStatus: ReadOnly<u32>),
(0x40 => MaskedInterruptStatus: ReadOnly<u32, MIS::Register>),
(0x44 => InterruptClear: WriteOnly<u32, ICR::Register>),
(0x48 => DmaControl: ReadWrite<u32, DMACR::Register>),
(0x48 => DmaControl: WriteOnly<u32, DMACR::Register>),
(0x4c => __reserved_3),
(0x1000 => @END),
}
}
#[derive(Debug, Snafu)]
pub enum PL011UartError {
#[snafu(display("PL011 UART setup failed in mailbox operation"))]
MailboxError,
#[snafu(display(
"PL011 UART setup failed due to integer baud rate divisor out of range ({})",
ibrd
))]
InvalidIntegerDivisor { ibrd: u32 },
#[snafu(display(
"PL011 UART setup failed due to fractional baud rate divisor out of range ({})",
fbrd
))]
InvalidFractionalDivisor { fbrd: u32 },
}
pub type Result<T> = ::core::result::Result<T, PL011UartError>;
// #[derive(Debug, Snafu)]
// pub enum PL011UartError {
// #[snafu(display("PL011 UART setup failed in mailbox operation"))]
// MailboxError,
// #[snafu(display(
// "PL011 UART setup failed due to integer baud rate divisor out of range ({})",
// ibrd
// ))]
// InvalidIntegerDivisor { ibrd: u32 },
// #[snafu(display(
// "PL011 UART setup failed due to fractional baud rate divisor out of range ({})",
// fbrd
// ))]
// InvalidFractionalDivisor { fbrd: u32 },
// }
//
// pub type Result<T> = ::core::result::Result<T, PL011UartError>;
type Registers = MMIODerefWrapper<RegisterBlock>;
pub struct PL011Uart {
struct PL011UartInner {
registers: Registers,
}
pub struct PreparedPL011Uart(PL011Uart);
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
pub struct PL011Uart {
inner: IRQSafeNullLock<PL011UartInner>,
}
pub struct RateDivisors {
integer_baud_rate_divisor: u32,
fractional_baud_rate_divisor: u32,
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
impl RateDivisors {
// Set integer & fractional part of baud rate.
// Integer = clock/(16 * Baud)
@ -238,18 +293,20 @@ impl RateDivisors {
// Use integer-only calculation based on [this page](https://krinkinmu.github.io/2020/11/29/PL011.html)
// Calculate 64 * clock / (16 * rate) = 4 * clock / rate, then extract 6 lowest bits for fractional part
// and the next 16 bits for integer part.
pub fn from_clock_and_rate(clock: u64, baud_rate: u32) -> Result<RateDivisors> {
pub fn from_clock_and_rate(clock: u64, baud_rate: u32) -> Result<RateDivisors, &'static str> {
let value = 4 * clock / baud_rate as u64;
let i = ((value >> 6) & 0xffff) as u32;
let f = (value & 0x3f) as u32;
// TODO: check for integer overflow, i.e. any bits set above the 0x3fffff mask.
// FIXME: can't happen due to calculation above
if i > 65535 {
return Err(PL011UartError::InvalidIntegerDivisor { ibrd: i });
return Err("PL011 UART setup failed due to integer baud rate divisor out of range");
// return Err(PL011UartError::InvalidIntegerDivisor { ibrd: i });
}
// FIXME: can't happen due to calculation above
if f > 63 {
return Err(PL011UartError::InvalidFractionalDivisor { fbrd: f });
return Err("PL011 UART setup failed due to fractional baud rate divisor out of range");
// return Err(PL011UartError::InvalidFractionalDivisor { fbrd: f });
}
Ok(RateDivisors {
integer_baud_rate_divisor: i,
@ -258,49 +315,22 @@ impl RateDivisors {
}
}
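The integer-only divisor derivation described in the comments above (compute `4 * clock / rate`, take the low 6 bits as the fractional part and the next 16 as the integer part) can be exercised standalone. The 3 MHz and 4 MHz clocks below are illustrative inputs only.

```rust
// Standalone sketch of the PL011 baud divisor math: 64*clock/(16*rate) == 4*clock/rate.
fn divisors(clock: u64, baud_rate: u32) -> (u32, u32) {
    let value = 4 * clock / baud_rate as u64;
    let integer = ((value >> 6) & 0xffff) as u32; // next 16 bits -> IBRD
    let fractional = (value & 0x3f) as u32; // lowest 6 bits -> FBRD
    (integer, fractional)
}

fn main() {
    // 3 MHz UART clock at 115200 baud: 4*3_000_000/115_200 = 104 = (1 << 6) + 40.
    assert_eq!(divisors(3_000_000, 115_200), (1, 40));
    // 4 MHz UART clock at 115200 baud: 4*4_000_000/115_200 = 138 = (2 << 6) + 10.
    assert_eq!(divisors(4_000_000, 115_200), (2, 10));
    println!("divisors check out");
}
```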
pub const UART0_START: usize = 0x20_1000;
impl Default for PL011Uart {
fn default() -> Self {
const UART0_BASE: usize = BcmHost::get_peripheral_address() + UART0_START;
unsafe { PL011Uart::new(UART0_BASE) }
}
}
impl PL011Uart {
pub const COMPATIBLE: &'static str = "BCM PL011 UART";
/// Create an instance.
///
/// # Safety
///
/// Unsafe, duh!
pub const unsafe fn new(base_addr: usize) -> PL011Uart {
PL011Uart {
registers: Registers::new(base_addr),
/// - The user must ensure to provide a correct MMIO start address.
pub const unsafe fn new(mmio_base_addr: Address<Virtual>) -> Self {
Self {
inner: IRQSafeNullLock::new(PL011UartInner::new(mmio_base_addr)),
}
}
/// Set baud rate and characteristics (115200 8N1) and map to GPIO
pub fn prepare(self, gpio: &gpio::GPIO) -> Result<PreparedPL011Uart> {
// Turn off UART
self.registers.Control.set(0);
// Wait for any ongoing transmissions to complete
self.flush_internal();
// Flush TX FIFO
self.registers.LineControl.modify(LCRH::Fifo::Disabled);
// set up clock for consistent divisor values
const CLOCK: u32 = 4_000_000; // 4Mhz
const BAUD_RATE: u32 = 115_200;
let mut mailbox = Mailbox::<9>::default();
let index = mailbox.request();
let index = mailbox.set_clock_rate(index, mailbox::clock::UART, CLOCK);
let mailbox = mailbox.end(index);
if mailbox.call(mailbox::channel::PropertyTagsArmToVc).is_err() {
return Err(PL011UartError::MailboxError); // Abort if UART clocks couldn't be set
};
/// GPIO pins should be set up first before enabling the UART
pub fn prepare_gpio(gpio: &gpio::GPIO) {
// Pin 14
const UART_TXD: gpio::Function = gpio::Function::Alt0;
// Pin 15
@ -313,10 +343,57 @@ impl PL011Uart {
gpio.get_pin(15)
.into_alt(UART_RXD)
.set_pull_up_down(gpio::PullUpDown::Up);
}
}
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
impl PL011UartInner {
/// Create an instance.
///
/// # Safety
///
/// - The user must ensure to provide a correct MMIO start address.
pub const unsafe fn new(mmio_base_addr: Address<Virtual>) -> Self {
Self {
registers: Registers::new(mmio_base_addr),
}
}
/// Set baud rate and characteristics (115200 8N1) and map to GPIO
pub fn prepare(&self) -> core::result::Result<(), &'static str> {
use tock_registers::interfaces::Writeable;
// Turn off UART
self.registers.Control.set(0);
// Wait for any ongoing transmissions to complete
self.flush_internal();
// Flush TX FIFO
self.registers.LineControl.modify(LCR_H::Fifos::Disabled);
// Clear pending interrupts
self.registers.InterruptClear.write(ICR::ALL::SET);
// set up clock for consistent divisor values
const CLOCK: u32 = 4_000_000; // 4 MHz
const BAUD_RATE: u32 = 115_200;
// // Should have a MailboxCommand with ref to a command buffer, and access to global MAILBOX
// // driver to run those commands atomically..
// let mut mailbox = Mailbox::<9>::default();
// let index = mailbox.request();
// let index = mailbox.set_clock_rate(index, mailbox::clock::UART, CLOCK);
// let mailbox = mailbox.end(index);
//
// if mailbox.call(mailbox::channel::PropertyTagsArmToVc).is_err() {
// return Err("PL011 UART setup failed in mailbox operation");
// // return Err(PL011UartError::MailboxError); // Abort if UART clocks couldn't be set
// };
// From the PL011 Technical Reference Manual:
//
// The LCR_H, IBRD, and FBRD registers form the single 30-bit wide LCR Register that is
@ -327,19 +404,26 @@ impl PL011Uart {
let divisors = RateDivisors::from_clock_and_rate(CLOCK.into(), BAUD_RATE)?;
self.registers
.IntegerBaudRate
.write(IBRD::IBRD.val(divisors.integer_baud_rate_divisor & 0xffff));
.write(IBRD::BAUD_DIVINT.val(divisors.integer_baud_rate_divisor & 0xffff));
self.registers
.FractionalBaudRate
.write(FBRD::FBRD.val(divisors.fractional_baud_rate_divisor & 0b11_1111));
.write(FBRD::BAUD_DIVFRAC.val(divisors.fractional_baud_rate_divisor & 0b11_1111));
self.registers.LineControl.write(
LCRH::WordLength::EightBit
+ LCRH::Fifo::Enabled
+ LCRH::Parity::Disabled
+ LCRH::Stop2::Disabled,
LCR_H::WordLength::EightBit
+ LCR_H::Fifos::Enabled
+ LCR_H::Parity::Disabled
+ LCR_H::Stop2::Disabled,
);
// Mask all interrupts by setting corresponding bits to 1
self.registers.InterruptMaskSetClear.write(IMSC::ALL::SET);
// Set RX FIFO fill level at 1/8.
self.registers
.InterruptFifoLevelSelect
.write(IFLS::RXIFLSEL::OneEigth);
// Enable RX IRQ + RX timeout IRQ.
self.registers
.InterruptMaskSetClear
.write(IMSC::RXIM::Enabled + IMSC::RTIM::Enabled);
// Disable DMA
self.registers
@ -351,7 +435,7 @@ impl PL011Uart {
.Control
.write(CR::UARTEN::Enabled + CR::TXE::Enabled + CR::RXE::Enabled);
Ok(PreparedPL011Uart(self))
Ok(())
}
fn flush_internal(&self) {
@ -359,40 +443,40 @@ impl PL011Uart {
}
}
impl Drop for PreparedPL011Uart {
impl Drop for PL011UartInner {
fn drop(&mut self) {
self.0.registers.Control.set(0);
self.registers.Control.set(0);
}
}
impl SerialOps for PreparedPL011Uart {
impl SerialOps for PL011UartInner {
fn read_byte(&self) -> u8 {
// wait until something is in the buffer
loop_while(|| self.0.registers.Flag.is_set(FR::RXFE));
loop_while(|| self.registers.Flag.is_set(FR::RXFE));
// read it and return
self.0.registers.Data.get() as u8
self.registers.Data.get() as u8
}
fn write_byte(&self, b: u8) {
// wait until we can send
loop_while(|| self.0.registers.Flag.is_set(FR::TXFF));
loop_while(|| self.registers.Flag.is_set(FR::TXFF));
// write the character to the buffer
self.0.registers.Data.set(b as u32);
self.registers.Data.set(b as u32);
}
/// Wait until the TX FIFO is empty, aka all characters have been put on the
/// line.
fn flush(&self) {
self.0.flush_internal();
self.flush_internal();
}
/// Consume input until RX FIFO is empty, aka all pending characters have been
/// consumed.
fn clear_rx(&self) {
loop_while(|| {
let pending = !self.0.registers.Flag.is_set(FR::RXFE);
let pending = !self.registers.Flag.is_set(FR::RXFE);
if pending {
self.read_byte();
}
@ -401,37 +485,113 @@ impl SerialOps for PreparedPL011Uart {
}
}
impl ConsoleOps for PreparedPL011Uart {
/// Send a character
fn write_char(&self, c: char) {
self.write_byte(c as u8)
}
impl interface::ConsoleOps for PL011UartInner {}
/// Display a string
fn write_string(&self, string: &str) {
for c in string.chars() {
// convert newline to carriage return + newline
if c == '\n' {
self.write_char('\r')
}
self.write_char(c);
}
}
/// Receive a character
fn read_char(&self) -> char {
let mut ret = self.read_byte() as char;
// convert carriage return to newline
if ret == '\r' {
ret = '\n'
}
ret
impl fmt::Write for PL011UartInner {
fn write_str(&mut self, s: &str) -> fmt::Result {
use interface::ConsoleOps;
self.write_string(s);
Ok(())
}
}
impl interface::Write for PL011Uart {
fn write_fmt(&self, args: Arguments) -> fmt::Result {
self.inner.lock(|inner| fmt::Write::write_fmt(inner, args))
}
}
//--------------------------------------------------------------------------------------------------
// OS Interface Code
//--------------------------------------------------------------------------------------------------
impl crate::drivers::interface::DeviceDriver for PL011Uart {
type IRQNumberType = IRQNumber;
fn compatible(&self) -> &'static str {
Self::COMPATIBLE
}
unsafe fn init(&self) -> core::result::Result<(), &'static str> {
self.inner.lock(|inner| inner.prepare())
}
fn register_and_enable_irq_handler(
&'static self,
irq_number: &Self::IRQNumberType,
) -> Result<(), &'static str> {
use exception::asynchronous::{irq_manager, IRQHandlerDescriptor};
let descriptor = IRQHandlerDescriptor::new(*irq_number, Self::COMPATIBLE, self);
irq_manager().register_handler(descriptor)?;
irq_manager().enable(irq_number);
Ok(())
}
}
impl SerialOps for PL011Uart {
fn read_byte(&self) -> u8 {
self.inner.lock(|inner| inner.read_byte())
}
fn write_byte(&self, byte: u8) {
self.inner.lock(|inner| inner.write_byte(byte))
}
fn flush(&self) {
self.inner.lock(|inner| inner.flush())
}
fn clear_rx(&self) {
self.inner.lock(|inner| inner.clear_rx())
}
}
impl interface::ConsoleOps for PL011Uart {
fn write_char(&self, c: char) {
self.inner.lock(|inner| inner.write_char(c))
}
fn write_string(&self, string: &str) {
self.inner.lock(|inner| inner.write_string(string))
}
fn read_char(&self) -> char {
self.inner.lock(|inner| inner.read_char())
}
}
impl interface::All for PL011Uart {}
impl exception::asynchronous::interface::IRQHandler for PL011Uart {
fn handle(&self) -> Result<(), &'static str> {
use interface::ConsoleOps;
self.inner.lock(|inner| {
let pending = inner.registers.MaskedInterruptStatus.extract();
// Clear all pending IRQs.
inner.registers.InterruptClear.write(ICR::ALL::SET);
// Check for any kind of RX interrupt.
if pending.matches_any(MIS::RXMIS::SET + MIS::RTMIS::SET) {
// Echo any received characters.
// while let Some(c) = inner.read_char() {
// inner.write_char(c)
// }
}
});
Ok(())
}
}
//--------------------------------------------------------------------------------------------------
// Testing
//--------------------------------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
@ -442,6 +602,8 @@ mod tests {
const BAUD_RATE: u32 = 115_200;
let divisors = RateDivisors::from_clock_and_rate(CLOCK, BAUD_RATE);
assert!(divisors.is_ok());
let divisors = divisors.unwrap();
assert_eq!(divisors.integer_baud_rate_divisor, 1);
assert_eq!(divisors.fractional_baud_rate_divisor, 40);
}


@ -7,11 +7,14 @@
use {
super::{
gpio,
device_driver::gpio,
mailbox::{channel, Mailbox, MailboxOps},
BcmHost,
},
crate::platform::MMIODerefWrapper,
crate::{
memory::{Address, Virtual},
platform::device_driver::common::MMIODerefWrapper,
},
snafu::Snafu,
tock_registers::{
interfaces::{Readable, Writeable},
@ -56,27 +59,18 @@ pub type Result<T> = ::core::result::Result<T, PowerError>;
type Registers = MMIODerefWrapper<RegisterBlock>;
const POWER_START: usize = 0x0010_0000;
/// Public interface to the Power subsystem
pub struct Power {
registers: Registers,
}
impl Default for Power {
fn default() -> Power {
const POWER_BASE: usize = BcmHost::get_peripheral_address() + POWER_START;
unsafe { Power::new(POWER_BASE) }
}
}
impl Power {
/// # Safety
///
/// Unsafe, duh!
pub const unsafe fn new(base_addr: usize) -> Power {
pub const unsafe fn new(mmio_base_addr: Address<Virtual>) -> Power {
Power {
registers: Registers::new(base_addr),
registers: Registers::new(mmio_base_addr),
}
}
@ -116,6 +110,6 @@ impl Power {
val |= PM_PASSWORD | PM_RSTC_WRCFG_FULL_RESET;
self.registers.PM_RSTC.set(val);
crate::endless_sleep()
crate::cpu::endless_sleep()
}
}


@ -0,0 +1,77 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2020-2022 Andre Richter <andre.o.richter@gmail.com>
//! Common device driver code.
use {
crate::memory::{Address, Virtual},
core::{fmt, marker::PhantomData, ops},
};
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
pub struct MMIODerefWrapper<T> {
pub base_addr: Address<Virtual>, // @todo unmake public, GPIO::Pin uses it
phantom: PhantomData<fn() -> T>,
}
/// A wrapper type for usize with integrated range bound check.
#[derive(Copy, Clone)]
pub struct BoundedUsize<const MAX_INCLUSIVE: usize>(usize);
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
impl<T> MMIODerefWrapper<T> {
/// Create an instance.
pub const fn new(base_addr: Address<Virtual>) -> Self {
Self {
base_addr,
phantom: PhantomData,
}
}
}
/// Deref to RegisterBlock
///
/// Allows writing
/// ```
/// self.GPPUD.read()
/// ```
/// instead of something along the lines of
/// ```
/// unsafe { (*GPIO::ptr()).GPPUD.read() }
/// ```
impl<T> ops::Deref for MMIODerefWrapper<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
unsafe { &*(self.base_addr.as_usize() as *const _) }
}
}
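The `Deref` pattern above can be demonstrated on the host by backing the "MMIO" region with ordinary memory. `DemoBlock` is a hypothetical stand-in; real register blocks are generated by tock-registers' `register_structs!` macro.

```rust
use std::{marker::PhantomData, ops};

// Hypothetical two-register block standing in for a real MMIO register map.
#[repr(C)]
struct DemoBlock {
    data: u32,
    status: u32,
}

struct MMIODerefWrapper<T> {
    base_addr: usize,
    phantom: PhantomData<fn() -> T>,
}

impl<T> MMIODerefWrapper<T> {
    const fn new(base_addr: usize) -> Self {
        Self { base_addr, phantom: PhantomData }
    }
}

impl<T> ops::Deref for MMIODerefWrapper<T> {
    type Target = T;
    fn deref(&self) -> &Self::Target {
        // Safety: the caller guarantees base_addr points at a live T.
        unsafe { &*(self.base_addr as *const T) }
    }
}

fn main() {
    // Back the "registers" with ordinary memory so the sketch is runnable.
    let backing = DemoBlock { data: 42, status: 1 };
    let regs: MMIODerefWrapper<DemoBlock> =
        MMIODerefWrapper::new(&backing as *const _ as usize);
    // Field access goes through Deref, just like `self.GPPUD.read()` above.
    assert_eq!(regs.data, 42);
    println!("data={} status={}", regs.data, regs.status);
}
```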
impl<const MAX_INCLUSIVE: usize> BoundedUsize<{ MAX_INCLUSIVE }> {
pub const MAX_INCLUSIVE: usize = MAX_INCLUSIVE;
/// Creates a new instance if number <= MAX_INCLUSIVE.
pub const fn new(number: usize) -> Self {
assert!(number <= MAX_INCLUSIVE);
Self(number)
}
/// Return the wrapped number.
pub const fn get(self) -> usize {
self.0
}
}
impl<const MAX_INCLUSIVE: usize> fmt::Display for BoundedUsize<{ MAX_INCLUSIVE }> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", self.0)
}
}
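Usage of `BoundedUsize` can be sketched as follows; the definition is repeated so the sketch runs standalone, and the bound of 53 is a hypothetical example matching BCM283x GPIO pin numbering.

```rust
// Self-contained usage sketch of the BoundedUsize wrapper defined above.
#[derive(Copy, Clone)]
struct BoundedUsize<const MAX_INCLUSIVE: usize>(usize);

impl<const MAX_INCLUSIVE: usize> BoundedUsize<MAX_INCLUSIVE> {
    /// Creates a new instance if number <= MAX_INCLUSIVE, else panics.
    const fn new(number: usize) -> Self {
        assert!(number <= MAX_INCLUSIVE);
        Self(number)
    }
    const fn get(self) -> usize {
        self.0
    }
}

// Hypothetical alias: BCM283x GPIO pins are numbered 0..=53.
type PinNumber = BoundedUsize<53>;

fn main() {
    // In-range values pass the construction-time assert; in const context
    // an out-of-range value such as PinNumber::new(54) fails to compile.
    const LED_PIN: PinNumber = PinNumber::new(29);
    assert_eq!(LED_PIN.get(), 29);
    println!("pin {}", LED_PIN.get());
}
```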


@ -0,0 +1,17 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2018-2022 Andre Richter <andre.o.richter@gmail.com>
//! Device driver.
#[cfg(feature = "rpi4")]
mod arm;
#[cfg(any(feature = "rpi3", feature = "rpi4"))]
mod bcm;
pub mod common;
#[cfg(feature = "rpi4")]
pub use arm::*;
#[cfg(any(feature = "rpi3", feature = "rpi4"))]
pub use bcm::*;

View File

@ -43,7 +43,7 @@ impl Color {
}
}
-#[derive(PartialEq)]
+#[derive(PartialEq, Eq)]
pub enum PixelOrder {
BGR,
RGB,

View File

@ -0,0 +1,186 @@
use {
super::exception,
crate::{
console, drivers,
exception::{self as generic_exception},
memory::{self, mmu::MMIODescriptor},
platform::{device_driver, memory::map::mmio},
},
core::{
mem::MaybeUninit,
sync::atomic::{AtomicBool, Ordering},
},
};
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// Initialize the driver subsystem.
///
/// # Safety
///
/// See child function calls.
///
/// # Note
///
/// Using atomics here relieves us from needing to use `unsafe` for the static variable.
///
/// On `AArch64`, which is the only implemented architecture at the time of writing this,
/// [`AtomicBool::load`] and [`AtomicBool::store`] are lowered to ordinary load and store
/// instructions. They are therefore safe to use even with MMU + caching deactivated.
///
/// [`AtomicBool::load`]: core::sync::atomic::AtomicBool::load
/// [`AtomicBool::store`]: core::sync::atomic::AtomicBool::store
pub unsafe fn init() -> Result<(), &'static str> {
static INIT_DONE: AtomicBool = AtomicBool::new(false);
if INIT_DONE.load(Ordering::Relaxed) {
return Err("Init already done");
}
#[cfg(not(feature = "noserial"))]
driver_uart()?;
driver_gpio()?;
driver_interrupt_controller()?;
INIT_DONE.store(true, Ordering::Relaxed);
Ok(())
}
/// Minimal code needed to bring up the console in QEMU (for testing only). This often takes fewer
/// steps than on real hardware due to QEMU's abstractions.
#[cfg(test)]
pub fn qemu_bring_up_console() {
unsafe {
instantiate_uart().unwrap_or_else(|_| crate::qemu::semihosting::exit_failure());
console::register_console(PL011_UART.assume_init_ref());
};
}
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
static mut PL011_UART: MaybeUninit<device_driver::PL011Uart> = MaybeUninit::uninit();
static mut GPIO: MaybeUninit<device_driver::GPIO> = MaybeUninit::uninit();
#[cfg(feature = "rpi3")]
static mut INTERRUPT_CONTROLLER: MaybeUninit<device_driver::InterruptController> =
MaybeUninit::uninit();
#[cfg(feature = "rpi4")]
static mut INTERRUPT_CONTROLLER: MaybeUninit<device_driver::GICv2> = MaybeUninit::uninit();
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
/// This must be called only after successful init of the memory subsystem.
unsafe fn instantiate_uart() -> Result<(), &'static str> {
let mmio_descriptor = MMIODescriptor::new(mmio::PL011_UART_BASE, mmio::PL011_UART_SIZE);
let virt_addr =
memory::mmu::kernel_map_mmio(device_driver::PL011Uart::COMPATIBLE, &mmio_descriptor)?;
PL011_UART.write(device_driver::PL011Uart::new(virt_addr));
Ok(())
}
/// This must be called only after successful init of the PL011 UART driver.
unsafe fn post_init_pl011_uart() -> Result<(), &'static str> {
console::register_console(PL011_UART.assume_init_ref());
crate::info!("UART0 is live!");
Ok(())
}
/// This must be called only after successful init of the memory subsystem.
unsafe fn instantiate_gpio() -> Result<(), &'static str> {
let mmio_descriptor = MMIODescriptor::new(mmio::GPIO_BASE, mmio::GPIO_SIZE);
let virt_addr =
memory::mmu::kernel_map_mmio(device_driver::GPIO::COMPATIBLE, &mmio_descriptor)?;
GPIO.write(device_driver::GPIO::new(virt_addr));
Ok(())
}
/// This must be called only after successful init of the GPIO driver.
unsafe fn post_init_gpio() -> Result<(), &'static str> {
device_driver::PL011Uart::prepare_gpio(GPIO.assume_init_ref());
Ok(())
}
/// This must be called only after successful init of the memory subsystem.
#[cfg(feature = "rpi3")]
unsafe fn instantiate_interrupt_controller() -> Result<(), &'static str> {
let periph_mmio_descriptor =
MMIODescriptor::new(mmio::PERIPHERAL_IC_BASE, mmio::PERIPHERAL_IC_SIZE);
let periph_virt_addr = memory::mmu::kernel_map_mmio(
device_driver::InterruptController::COMPATIBLE,
&periph_mmio_descriptor,
)?;
INTERRUPT_CONTROLLER.write(device_driver::InterruptController::new(periph_virt_addr));
Ok(())
}
/// This must be called only after successful init of the memory subsystem.
#[cfg(feature = "rpi4")]
unsafe fn instantiate_interrupt_controller() -> Result<(), &'static str> {
let gicd_mmio_descriptor = MMIODescriptor::new(mmio::GICD_BASE, mmio::GICD_SIZE);
let gicd_virt_addr = memory::mmu::kernel_map_mmio("GICv2 GICD", &gicd_mmio_descriptor)?;
let gicc_mmio_descriptor = MMIODescriptor::new(mmio::GICC_BASE, mmio::GICC_SIZE);
let gicc_virt_addr = memory::mmu::kernel_map_mmio("GICv2 GICC", &gicc_mmio_descriptor)?;
INTERRUPT_CONTROLLER.write(device_driver::GICv2::new(gicd_virt_addr, gicc_virt_addr));
Ok(())
}
/// This must be called only after successful init of the interrupt controller driver.
unsafe fn post_init_interrupt_controller() -> Result<(), &'static str> {
generic_exception::asynchronous::register_irq_manager(INTERRUPT_CONTROLLER.assume_init_ref());
Ok(())
}
/// Function needs to ensure that driver registration happens only after correct instantiation.
unsafe fn driver_uart() -> Result<(), &'static str> {
instantiate_uart()?;
let uart_descriptor = drivers::DeviceDriverDescriptor::new(
PL011_UART.assume_init_ref(),
Some(post_init_pl011_uart),
Some(exception::asynchronous::irq_map::PL011_UART),
);
drivers::driver_manager().register_driver(uart_descriptor);
Ok(())
}
/// Function needs to ensure that driver registration happens only after correct instantiation.
unsafe fn driver_gpio() -> Result<(), &'static str> {
instantiate_gpio()?;
let gpio_descriptor =
drivers::DeviceDriverDescriptor::new(GPIO.assume_init_ref(), Some(post_init_gpio), None);
drivers::driver_manager().register_driver(gpio_descriptor);
Ok(())
}
/// Function needs to ensure that driver registration happens only after correct instantiation.
unsafe fn driver_interrupt_controller() -> Result<(), &'static str> {
instantiate_interrupt_controller()?;
let interrupt_controller_descriptor = drivers::DeviceDriverDescriptor::new(
INTERRUPT_CONTROLLER.assume_init_ref(),
Some(post_init_interrupt_controller),
None,
);
drivers::driver_manager().register_driver(interrupt_controller_descriptor);
Ok(())
}
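The instantiate / post-init / register flow above, together with the `AtomicBool` once-guard in `init()`, can be sketched with simplified host-side types (the kernel's real `DeviceDriverDescriptor` and driver manager are more involved; this only mirrors the shape):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

type PostInit = fn() -> Result<(), &'static str>;

struct DeviceDriverDescriptor {
    name: &'static str,
    post_init: Option<PostInit>,
}

fn post_init_uart() -> Result<(), &'static str> {
    println!("UART0 is live!");
    Ok(())
}

fn init(registry: &mut Vec<DeviceDriverDescriptor>) -> Result<(), &'static str> {
    // Same once-guard shape as the kernel's init(): plain load/store suffices
    // on a single boot core.
    static INIT_DONE: AtomicBool = AtomicBool::new(false);
    if INIT_DONE.load(Ordering::Relaxed) {
        return Err("Init already done");
    }
    registry.push(DeviceDriverDescriptor {
        name: "PL011 UART",
        post_init: Some(post_init_uart),
    });
    registry.push(DeviceDriverDescriptor {
        name: "GPIO",
        post_init: None,
    });
    INIT_DONE.store(true, Ordering::Relaxed);
    Ok(())
}

fn main() {
    let mut registry = Vec::new();
    init(&mut registry).unwrap();
    assert!(init(&mut registry).is_err()); // second init is rejected
    for driver in &registry {
        if let Some(hook) = driver.post_init {
            hook().unwrap();
        }
        println!("registered: {}", driver.name);
    }
}
```

The optional `post_init` hook is what lets the UART be instantiated first but only announce itself once the console is registered.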

View File

@ -0,0 +1,26 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2020-2022 Andre Richter <andre.o.richter@gmail.com>
//! Platform asynchronous exception handling.
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Export for reuse in generic asynchronous.rs.
pub use crate::platform::device_driver::IRQNumber;
#[cfg(feature = "rpi3")]
pub(in crate::platform) mod irq_map {
use crate::platform::device_driver::{IRQNumber, PeripheralIRQ};
pub const PL011_UART: IRQNumber = IRQNumber::Peripheral(PeripheralIRQ::new(57));
}
#[cfg(feature = "rpi4")]
pub(in crate::platform) mod irq_map {
use crate::platform::device_driver::IRQNumber;
pub const PL011_UART: IRQNumber = IRQNumber::new(153);
}

View File

@ -0,0 +1 @@
pub mod asynchronous;

View File

@ -1,4 +1,7 @@
-use super::mailbox::{self, LocalMailboxStorage, Mailbox, MailboxError, MailboxOps};
+use {
+    super::mailbox::{self, LocalMailboxStorage, Mailbox, MailboxError, MailboxOps},
+    crate::memory::{Address, Virtual},
+};
/// FrameBuffer channel support structure - use with mailbox::channel::FrameBuffer.
/// Must have the same alignment as the mailbox buffers.
@ -13,10 +16,11 @@ mod index {
pub const DEPTH: usize = 5;
pub const X_OFFSET: usize = 6;
pub const Y_OFFSET: usize = 7;
-pub const POINTER: usize = 8; // FIXME: could be 4096 for the alignment restriction.
+pub const POINTER: usize = 8; // FIXME: Value could be 4096 for the alignment restriction.
pub const SIZE: usize = 9;
}
// control: MailboxCommand<10, FrameBufferData>
pub struct FrameBuffer {
mailbox: Mailbox<10, FrameBufferData>,
}
@ -42,13 +46,13 @@ impl core::fmt::Debug for FrameBufferData {
impl FrameBuffer {
pub fn new(
-base_addr: usize,
+mmio_base_addr: Address<Virtual>, // skip this, use MAILBOX driver
width: u32,
height: u32,
depth: u32,
) -> Result<FrameBuffer, MailboxError> {
let mut fb = FrameBuffer {
-mailbox: unsafe { Mailbox::<10, FrameBufferData>::new(base_addr)? },
+mailbox: unsafe { Mailbox::<10, FrameBufferData>::new(mmio_base_addr)? },
};
fb.mailbox.buffer.storage[index::WIDTH] = width;
fb.mailbox.buffer.storage[index::VIRTUAL_WIDTH] = width;

View File

@ -0,0 +1,138 @@
/*
* SPDX-License-Identifier: MIT OR BlueOak-1.0.0
* Copyright (c) 2018 Andre Richter <andre.o.richter@gmail.com>
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
* Original code distributed under MIT, additional changes are under BlueOak-1.0.0
*/
PAGE_SIZE = 64K;
PAGE_MASK = PAGE_SIZE - 1;
__phys_mem_start = 0x0;
__phys_load_addr = 0x80000;
ENTRY(__phys_load_addr);
/* Flags:
* 4 == R
* 5 == RX
* 6 == RW
*
* Segments are marked PT_LOAD below so that the ELF file provides virtual and physical addresses.
 * It doesn't mean all of them actually need to be loaded.
*/
PHDRS
{
segment_boot_core_stack PT_LOAD FLAGS(6);
segment_code PT_LOAD FLAGS(5);
segment_data PT_LOAD FLAGS(6);
}
/* Symbols between __BOOT_START and __BOOT_END should be dropped after init is complete.
Symbols between __CODE_START and __CODE_END are the kernel code.
Symbols between __BSS_START and __BSS_END must be initialized to zero by startup code in the kernel.
*/
SECTIONS
{
. = __phys_mem_start;
/***********************************************************************************************
* Boot Core Stack
***********************************************************************************************/
.boot_core_stack (NOLOAD) :
{
__STACK_BOTTOM = .; /* ^ */
/* | stack */
. = __phys_load_addr; /* | growth AArch64 boot address is 0x80000, 4K-aligned */
/* | direction */
__STACK_TOP = .; /* | Stack grows from here towards 0x0. */
} :segment_boot_core_stack
ASSERT((. & PAGE_MASK) == 0, "End of boot core stack is not page aligned")
/***********************************************************************************************
* Code + RO Data
***********************************************************************************************/
.text :
{
/*******************************************************************************************
* Boot Code + Boot Data
*******************************************************************************************/
__BOOT_START = .;
KEEP(*(.text.main.entry))
*(.text.boot)
*(.data.boot)
. = ALIGN(PAGE_SIZE);
__BOOT_END = .; /* Here the boot code ends */
ASSERT((__BOOT_END & PAGE_MASK) == 0, "End of boot code is not page aligned")
/*******************************************************************************************
* Regular Kernel Code
*******************************************************************************************/
__CODE_START = .;
*(.text*)
} :segment_code
.vectors :
{
. = ALIGN(2048);
__EXCEPTION_VECTORS_START = .;
KEEP(*(.vectors))
} :segment_code
.rodata :
{
. = ALIGN(4);
*(.rodata*)
FILL(0x00)
. = ALIGN(PAGE_SIZE); /* Fill up to page size */
__CODE_END = .;
ASSERT((__CODE_END & PAGE_MASK) == 0, "End of kernel code is not page aligned")
} :segment_code
/***********************************************************************************************
* Data + BSS
***********************************************************************************************/
.data :
{
__DATA_START = .;
ASSERT((__DATA_START & PAGE_MASK) == 0, "Start of kernel data is not page aligned")
*(.data*)
FILL(0x00)
} :segment_data
.bss (NOLOAD):
{
. = ALIGN(PAGE_SIZE);
__BSS_START = .;
*(.bss*)
. = ALIGN(PAGE_SIZE); /* Align up to page size */
__BSS_END = .;
__BSS_SIZE_U64S = (__BSS_END - __BSS_START) / 8;
} :segment_data
__DATA_END = .;
/***********************************************************************************************
* MMIO Remap Reserved
***********************************************************************************************/
__MMIO_REMAP_START = .;
. += 8 * 1024 * 1024;
__MMIO_REMAP_END = .;
ASSERT((. & PAGE_MASK) == 0, "MMIO remap reservation is not page aligned")
/***********************************************************************************************
* Misc
***********************************************************************************************/
.got : { *(.got*) }
ASSERT(SIZEOF(.got) == 0, "Relocation support not expected")
/DISCARD/ : { *(.comment*) *(.gnu*) *(.note*) *(.eh_frame*) *(.text.chainboot*) }
}
INCLUDE machine/src/arch/aarch64/linker/aarch64-exceptions.ld
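The `__BSS_START` / `__BSS_END` / `__BSS_SIZE_U64S` symbols exported by this script are typically consumed by startup code that zeroes BSS before Rust code runs. A host-runnable sketch of that step, with a local buffer standing in for the linker-provided region:

```rust
// Zero `num_u64s` 64-bit words starting at `start`.
fn zero_bss(start: *mut u64, num_u64s: usize) {
    for i in 0..num_u64s {
        // Volatile so the compiler cannot elide the stores.
        unsafe { core::ptr::write_volatile(start.add(i), 0) };
    }
}

fn main() {
    // Simulated "BSS": in the kernel, start and word count come from the
    // linker script symbols instead of a local array.
    let mut fake_bss: [u64; 8] = [0xdead_beef_dead_beef; 8];
    let num_u64s = fake_bss.len(); // plays the role of __BSS_SIZE_U64S
    zero_bss(fake_bss.as_mut_ptr(), num_u64s);
    assert!(fake_bss.iter().all(|&word| word == 0));
    println!("zeroed {} u64 words", num_u64s);
}
```

Precomputing the size in u64 words at link time (`__BSS_SIZE_U64S`) lets the early assembly loop over whole words without doing the division at runtime.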

View File

@ -0,0 +1,336 @@
//! Platform memory management unit.
use crate::{
memory::{
mmu::{
self as generic_mmu, AccessPermissions, AddressSpace, AssociatedTranslationTable,
AttributeFields, MemAttributes, MemoryRegion, PageAddress, TranslationGranule,
},
Physical, Virtual,
},
synchronization::InitStateLock,
};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
type KernelTranslationTable =
<KernelVirtAddrSpace as AssociatedTranslationTable>::TableStartFromBottom;
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// The translation granule chosen by this platform. This will be used everywhere else
/// in the kernel to derive respective data structures and their sizes.
/// For example, the `crate::memory::mmu::Page`.
pub type KernelGranule = TranslationGranule<{ 64 * 1024 }>;
/// The kernel's virtual address space defined by this platform.
pub type KernelVirtAddrSpace = AddressSpace<{ 1024 * 1024 * 1024 }>;
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
/// The kernel translation tables.
///
/// It is mandatory that InitStateLock is transparent.
/// That is, `size_of(InitStateLock<KernelTranslationTable>) == size_of(KernelTranslationTable)`.
/// There is a unit test that checks this property.
static KERNEL_TABLES: InitStateLock<KernelTranslationTable> =
InitStateLock::new(KernelTranslationTable::new());
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
/// Helper function for calculating the number of pages the given parameter spans.
const fn size_to_num_pages(size: usize) -> usize {
assert!(size > 0);
assert!(size % KernelGranule::SIZE == 0); // assert! is const-fn-friendly
size >> KernelGranule::SHIFT
}
/// The code pages of the kernel binary.
fn virt_code_region() -> MemoryRegion<Virtual> {
let num_pages = size_to_num_pages(super::code_size());
let start_page_addr = super::virt_code_start();
let end_exclusive_page_addr = start_page_addr.checked_offset(num_pages as isize).unwrap();
MemoryRegion::new(start_page_addr, end_exclusive_page_addr)
}
/// The data pages of the kernel binary.
fn virt_data_region() -> MemoryRegion<Virtual> {
let num_pages = size_to_num_pages(super::data_size());
let start_page_addr = super::virt_data_start();
let end_exclusive_page_addr = start_page_addr.checked_offset(num_pages as isize).unwrap();
MemoryRegion::new(start_page_addr, end_exclusive_page_addr)
}
/// The boot core stack pages.
fn virt_boot_core_stack_region() -> MemoryRegion<Virtual> {
let num_pages = size_to_num_pages(super::boot_core_stack_size());
let start_page_addr = super::virt_boot_core_stack_start();
let end_exclusive_page_addr = start_page_addr.checked_offset(num_pages as isize).unwrap();
MemoryRegion::new(start_page_addr, end_exclusive_page_addr)
}
// The binary is still identity mapped, so use this trivial conversion function for mapping below.
fn kernel_virt_to_phys_region(virt_region: MemoryRegion<Virtual>) -> MemoryRegion<Physical> {
MemoryRegion::new(
PageAddress::from(virt_region.start_page_addr().into_inner().as_usize()),
PageAddress::from(
virt_region
.end_exclusive_page_addr()
.into_inner()
.as_usize(),
),
)
}
//--------------------------------------------------------------------------------------------------
// Subsumed by the kernel_map_binary() function
//--------------------------------------------------------------------------------------------------
// pub static LAYOUT: KernelVirtualLayout<NUM_MEM_RANGES> = KernelVirtualLayout::new(
// memory_map::END_INCLUSIVE,
// [
// TranslationDescriptor {
// name: "Remapped Device MMIO",
// virtual_range: remapped_mmio_range_inclusive,
// physical_range_translation: Translation::Offset(
// memory_map::mmio::MMIO_BASE + 0x20_0000,
// ),
// attribute_fields: AttributeFields {
// mem_attributes: MemAttributes::Device,
// acc_perms: AccessPermissions::ReadWrite,
// execute_never: true,
// },
// },
// TranslationDescriptor {
// name: "Device MMIO",
// virtual_range: mmio_range_inclusive,
// physical_range_translation: Translation::Identity,
// attribute_fields: AttributeFields {
// mem_attributes: MemAttributes::Device,
// acc_perms: AccessPermissions::ReadWrite,
// execute_never: true,
// },
// },
// TranslationDescriptor {
// name: "DMA heap pool",
// virtual_range: dma_range_inclusive,
// physical_range_translation: Translation::Identity,
// attribute_fields: AttributeFields {
// mem_attributes: MemAttributes::NonCacheableDRAM,
// acc_perms: AccessPermissions::ReadWrite,
// execute_never: true,
// },
// },
// TranslationDescriptor {
// name: "Framebuffer area (static for now)",
// virtual_range: || {
// RangeInclusive::new(
// memory_map::phys::VIDEOMEM_BASE,
// memory_map::mmio::MMIO_BASE - 1,
// )
// },
// physical_range_translation: Translation::Identity,
// attribute_fields: AttributeFields {
// mem_attributes: MemAttributes::Device,
// acc_perms: AccessPermissions::ReadWrite,
// execute_never: true,
// },
// },
// ],
// );
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// Return a reference to the kernel's translation tables.
pub fn kernel_translation_tables() -> &'static InitStateLock<KernelTranslationTable> {
&KERNEL_TABLES
}
/// The MMIO remap pages.
pub fn virt_mmio_remap_region() -> MemoryRegion<Virtual> {
let num_pages = size_to_num_pages(super::mmio_remap_size());
let start_page_addr = super::virt_mmio_remap_start();
let end_exclusive_page_addr = start_page_addr.checked_offset(num_pages as isize).unwrap();
MemoryRegion::new(start_page_addr, end_exclusive_page_addr)
}
/// Map the kernel binary.
///
/// # Safety
///
/// - Any miscalculation or attribute error will likely be fatal. Needs careful manual checking.
pub unsafe fn kernel_map_binary() -> Result<(), &'static str> {
generic_mmu::kernel_map_at(
"Kernel boot-core stack",
&virt_boot_core_stack_region(),
&kernel_virt_to_phys_region(virt_boot_core_stack_region()),
&AttributeFields {
mem_attributes: MemAttributes::CacheableDRAM,
acc_perms: AccessPermissions::ReadWrite,
execute_never: true,
},
)?;
// TranslationDescriptor {
// name: "Boot code and data",
// virtual_range: boot_range_inclusive,
// physical_range_translation: Translation::Identity,
// attribute_fields: AttributeFields {
// mem_attributes: MemAttributes::CacheableDRAM,
// acc_perms: AccessPermissions::ReadOnly,
// execute_never: false,
// },
// },
// TranslationDescriptor {
// name: "Kernel code and RO data",
// virtual_range: code_range_inclusive,
// physical_range_translation: Translation::Identity,
// attribute_fields: AttributeFields {
// mem_attributes: MemAttributes::CacheableDRAM,
// acc_perms: AccessPermissions::ReadOnly,
// execute_never: false,
// },
// },
generic_mmu::kernel_map_at(
"Kernel code and RO data",
&virt_code_region(),
&kernel_virt_to_phys_region(virt_code_region()),
&AttributeFields {
mem_attributes: MemAttributes::CacheableDRAM,
acc_perms: AccessPermissions::ReadOnly,
execute_never: false,
},
)?;
generic_mmu::kernel_map_at(
"Kernel data and bss",
&virt_data_region(),
&kernel_virt_to_phys_region(virt_data_region()),
&AttributeFields {
mem_attributes: MemAttributes::CacheableDRAM,
acc_perms: AccessPermissions::ReadWrite,
execute_never: true,
},
)?;
Ok(())
}
//--------------------------------------------------------------------------------------------------
// Testing
//--------------------------------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use {
super::*,
core::{cell::UnsafeCell, ops::Range},
};
/// Check alignment of the kernel's virtual memory layout sections.
#[test_case]
fn virt_mem_layout_sections_are_64KiB_aligned() {
for i in [
virt_boot_core_stack_region,
virt_code_region,
virt_data_region,
]
.iter()
{
let start = i().start_page_addr().into_inner();
let end_exclusive = i().end_exclusive_page_addr().into_inner();
assert!(start.is_page_aligned());
assert!(end_exclusive.is_page_aligned());
assert!(end_exclusive >= start);
}
}
/// Ensure the kernel's virtual memory layout is free of overlaps.
#[test_case]
fn virt_mem_layout_has_no_overlaps() {
let layout = [
virt_boot_core_stack_region(),
virt_code_region(),
virt_data_region(),
];
for (i, first_range) in layout.iter().enumerate() {
for second_range in layout.iter().skip(i + 1) {
assert!(!first_range.overlaps(second_range))
}
}
}
/// Check if KERNEL_TABLES is in .bss.
#[test_case]
fn kernel_tables_in_bss() {
extern "Rust" {
static __BSS_START: UnsafeCell<u64>;
static __BSS_END: UnsafeCell<u64>;
}
let bss_range = unsafe {
Range {
start: __BSS_START.get(),
end: __BSS_END.get(),
}
};
let kernel_tables_addr = &KERNEL_TABLES as *const _ as usize as *mut u64;
assert!(bss_range.contains(&kernel_tables_addr));
}
}
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
// fn boot_range_inclusive() -> RangeInclusive<usize> {
// RangeInclusive::new(super::boot_start(), super::boot_end_exclusive() - 1)
// }
//
// fn code_range_inclusive() -> RangeInclusive<usize> {
// // Notice the subtraction to turn the exclusive end into an inclusive end.
// #[allow(clippy::range_minus_one)]
// RangeInclusive::new(super::code_start(), super::code_end_exclusive() - 1)
// }
//
// fn remapped_mmio_range_inclusive() -> RangeInclusive<usize> {
// // The last 64 KiB slot in the first 512 MiB
// RangeInclusive::new(0x1FFF_0000, 0x1FFF_FFFF)
// }
//
// fn mmio_range_inclusive() -> RangeInclusive<usize> {
// RangeInclusive::new(memory_map::mmio::MMIO_BASE, memory_map::mmio::MMIO_END)
// // RangeInclusive::new(map::phys::VIDEOMEM_BASE, map::mmio::MMIO_END),
// }
//
// fn dma_range_inclusive() -> RangeInclusive<usize> {
// RangeInclusive::new(
// memory_map::virt::DMA_HEAP_START,
// memory_map::virt::DMA_HEAP_END,
// )
// }
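The page math used by `size_to_num_pages` and the region helpers above can be restated with plain constants (64 KiB granule, as declared by `KernelGranule`):

```rust
const GRANULE_SIZE: usize = 64 * 1024;
const GRANULE_SHIFT: usize = 16; // log2(64 KiB)

// Same shape as the kernel's helper: size must be a positive multiple of
// the granule, and the page count is then a simple shift.
const fn size_to_num_pages(size: usize) -> usize {
    assert!(size > 0);
    assert!(size % GRANULE_SIZE == 0);
    size >> GRANULE_SHIFT
}

fn main() {
    // A 512 KiB segment spans 8 pages of 64 KiB.
    assert_eq!(size_to_num_pages(512 * 1024), 8);
    // The 8 MiB MMIO remap reservation from the linker script spans 128 pages.
    assert_eq!(size_to_num_pages(8 * 1024 * 1024), 128);
    println!("ok");
}
```

The assert on granule alignment is what the linker script's `ALIGN(PAGE_SIZE)` directives guarantee, which is why the helpers can `unwrap()` the offset math.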

View File

@ -0,0 +1,355 @@
//! Platform memory management.
//!
//! The physical memory layout.
//!
//! The Raspberry Pi firmware copies the kernel binary to 0x8_0000. The preceding region will be used
//! as the boot core's stack.
//!
//! +---------------------------------------+
//! | | boot_core_stack_start @ 0x0
//! | | ^
//! | Boot-core Stack | | stack
//! | | | growth
//! | | | direction
//! +---------------------------------------+
//! | | code_start @ 0x8_0000 == boot_core_stack_end_exclusive
//! | .text |
//! | .rodata |
//! | .got |
//! | |
//! +---------------------------------------+
//! | | data_start == code_end_exclusive
//! | .data |
//! | .bss |
//! | |
//! +---------------------------------------+
//! | | data_end_exclusive
//! | |
//!
//! The virtual memory layout is as follows:
//!
//! +---------------------------------------+
//! | | boot_core_stack_start @ 0x0
//! | | ^
//! | Boot-core Stack | | stack
//! | | | growth
//! | | | direction
//! +---------------------------------------+
//! | | code_start @ 0x8_0000 == boot_core_stack_end_exclusive
//! | .text |
//! | .rodata |
//! | .got |
//! | |
//! +---------------------------------------+
//! | | data_start == code_end_exclusive
//! | .data |
//! | .bss |
//! | |
//! +---------------------------------------+
//! | | mmio_remap_start == data_end_exclusive
//! | VA region for MMIO remapping |
//! | |
//! +---------------------------------------+
//! | | mmio_remap_end_exclusive
//! | |
pub mod mmu;
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
use {
crate::memory::{mmu::PageAddress, Address, Physical, Virtual},
core::cell::UnsafeCell,
};
// Symbols from the linker script.
extern "Rust" {
// Boot code.
//
// Using the linker script, we ensure that the boot area is consecutive and 64
// KiB aligned, and we export the boundaries via symbols:
//
// [__BOOT_START, __BOOT_END)
//
// The inclusive start of the boot area, aka the address of the
// first byte of the area.
static __BOOT_START: UnsafeCell<()>;
// The exclusive end of the boot area, aka the address of
// the first byte _after_ the RO area.
static __BOOT_END: UnsafeCell<()>;
// Kernel code and RO data.
//
// Using the linker script, we ensure that the RO area is consecutive and 64
// KiB aligned, and we export the boundaries via symbols:
//
// [__CODE_START, __CODE_END)
//
// The inclusive start of the read-only area, aka the address of the
// first byte of the area.
static __CODE_START: UnsafeCell<()>;
// The exclusive end of the read-only area, aka the address of
// the first byte _after_ the RO area.
static __CODE_END: UnsafeCell<()>;
// The inclusive start of the kernel data/BSS area, aka the address of the
// first byte of the area.
static __DATA_START: UnsafeCell<()>;
// The exclusive end of the kernel data/BSS area, aka the address of
// the first byte _after_ the data/BSS area.
static __DATA_END: UnsafeCell<()>;
// The bottom of the boot core stack, aka the lowest address of the stack area.
static __STACK_BOTTOM: UnsafeCell<()>;
// The top of the boot core stack, aka the address the stack grows down from.
static __STACK_TOP: UnsafeCell<()>;
// The inclusive start of the kernel MMIO remap area, aka the address of the
// first byte of the area.
static __MMIO_REMAP_START: UnsafeCell<()>;
// The exclusive end of the kernel MMIO remap area, aka the address of
// the first byte _after_ the MMIO remap area.
static __MMIO_REMAP_END: UnsafeCell<()>;
}
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// The board's physical memory map.
/// This is a fixed memory map for Raspberry Pi,
/// @todo we need to infer the memory map from the provided DTB instead.
#[rustfmt::skip]
pub(super) mod map {
use super::*;
/// Beginning of memory.
pub const START: usize = 0x0000_0000;
/// End of memory: 8 GiB (RPi4).
pub const END_INCLUSIVE: usize = 0x1_FFFF_FFFF;
/// Physical RAM addresses.
pub mod phys {
/// Base address of video (VC) memory.
pub const VIDEOMEM_BASE: usize = 0x3e00_0000;
}
pub const VIDEOCORE_MBOX_OFFSET: usize = 0x0000_B880;
pub const POWER_OFFSET: usize = 0x0010_0000;
pub const GPIO_OFFSET: usize = 0x0020_0000;
pub const UART_OFFSET: usize = 0x0020_1000;
pub const MINIUART_OFFSET: usize = 0x0021_5000;
/// Physical devices.
#[cfg(feature = "rpi3")]
pub mod mmio {
use super::*;
/// Base address of MMIO register range.
pub const MMIO_BASE: usize = 0x3F00_0000;
/// Interrupt controller
pub const PERIPHERAL_IC_BASE: Address<Physical> = Address::new(MMIO_BASE + 0x0000_B200);
pub const PERIPHERAL_IC_SIZE: usize = 0x24;
/// Base address of ARM<->VC mailbox area.
pub const VIDEOCORE_MBOX_BASE: Address<Physical> = Address::new(MMIO_BASE + VIDEOCORE_MBOX_OFFSET);
/// Board power control.
pub const POWER_BASE: Address<Physical> = Address::new(MMIO_BASE + POWER_OFFSET);
/// Base address of GPIO registers.
pub const GPIO_BASE: Address<Physical> = Address::new(MMIO_BASE + GPIO_OFFSET);
pub const GPIO_SIZE: usize = 0xA0;
pub const PL011_UART_BASE: Address<Physical> = Address::new(MMIO_BASE + UART_OFFSET);
pub const PL011_UART_SIZE: usize = 0x48;
/// Base address of MiniUART.
pub const MINI_UART_BASE: Address<Physical> = Address::new(MMIO_BASE + MINIUART_OFFSET);
/// End of MMIO memory region.
pub const END: Address<Physical> = Address::new(0x4001_0000);
}
/// Physical devices.
#[cfg(feature = "rpi4")]
pub mod mmio {
use super::*;
/// Base address of MMIO register range.
pub const MMIO_BASE: usize = 0xFE00_0000;
/// Base address of GPIO registers.
pub const GPIO_BASE: Address<Physical> = Address::new(MMIO_BASE + GPIO_OFFSET);
pub const GPIO_SIZE: usize = 0xA0;
/// Base address of regular UART.
pub const PL011_UART_BASE: Address<Physical> = Address::new(MMIO_BASE + UART_OFFSET);
pub const PL011_UART_SIZE: usize = 0x48;
/// Base address of MiniUART.
pub const MINI_UART_BASE: Address<Physical> = Address::new(MMIO_BASE + MINIUART_OFFSET);
/// Interrupt controller
pub const GICD_BASE: Address<Physical> = Address::new(0xFF84_1000);
pub const GICD_SIZE: usize = 0x824;
pub const GICC_BASE: Address<Physical> = Address::new(0xFF84_2000);
pub const GICC_SIZE: usize = 0x14;
/// Base address of ARM<->VC mailbox area.
pub const VIDEOCORE_MBOX_BASE: usize = MMIO_BASE + VIDEOCORE_MBOX_OFFSET;
/// End of MMIO memory region.
pub const END: Address<Physical> = Address::new(0xFF85_0000);
}
/// End address of mapped memory.
pub const END: Address<Physical> = mmio::END;
//----
// Unused?
//----
/// Virtual (mapped) addresses.
pub mod virt {
/// Start (top) of kernel stack.
pub const KERN_STACK_START: usize = super::START;
/// End (bottom) of kernel stack. SP starts at KERN_STACK_END + 1.
pub const KERN_STACK_END: usize = 0x0007_FFFF;
/// Location of DMA-able memory region (in the second 2 MiB block).
pub const DMA_HEAP_START: usize = 0x0020_0000;
/// End of DMA-able memory region.
pub const DMA_HEAP_END: usize = 0x005F_FFFF;
}
}
//--------------------------------------------------------------------------------------------------
// Private Code
//--------------------------------------------------------------------------------------------------
/// Start page address of the boot segment.
///
/// # Safety
///
/// - Value is provided by the linker script and must be trusted as-is.
#[inline(always)]
fn boot_start() -> usize {
unsafe { __BOOT_START.get() as usize }
}
/// Exclusive end page address of the boot segment.
/// # Safety
///
/// - Value is provided by the linker script and must be trusted as-is.
#[inline(always)]
fn boot_end_exclusive() -> usize {
unsafe { __BOOT_END.get() as usize }
}
/// Start page address of the code segment.
///
/// # Safety
///
/// - Value is provided by the linker script and must be trusted as-is.
#[inline(always)]
fn code_start() -> usize {
unsafe { __CODE_START.get() as usize }
}
/// Virtual start page address of the code segment.
///
/// # Safety
///
/// - Value is provided by the linker script and must be trusted as-is.
#[inline(always)]
fn virt_code_start() -> PageAddress<Virtual> {
PageAddress::from(unsafe { __CODE_START.get() as usize })
}
/// Size of the code segment.
///
/// # Safety
///
/// - Value is provided by the linker script and must be trusted as-is.
#[inline(always)]
fn code_size() -> usize {
unsafe { (__CODE_END.get() as usize) - (__CODE_START.get() as usize) }
}
// /// Exclusive end page address of the code segment.
// ///
// /// # Safety
// ///
// /// - Value is provided by the linker script and must be trusted as-is.
// #[inline(always)]
// fn code_end_exclusive() -> usize {
// unsafe { __RO_END.get() as usize }
// }
/// Start page address of the data segment.
#[inline(always)]
fn virt_data_start() -> PageAddress<Virtual> {
PageAddress::from(unsafe { __DATA_START.get() as usize })
}
/// Size of the data segment.
///
/// # Safety
///
/// - Value is provided by the linker script and must be trusted as-is.
#[inline(always)]
fn data_size() -> usize {
unsafe { (__DATA_END.get() as usize) - (__DATA_START.get() as usize) }
}
/// Start page address of the MMIO remap reservation.
///
/// # Safety
///
/// - Value is provided by the linker script and must be trusted as-is.
#[inline(always)]
fn virt_mmio_remap_start() -> PageAddress<Virtual> {
PageAddress::from(unsafe { __MMIO_REMAP_START.get() as usize })
}
/// Size of the MMIO remap reservation.
///
/// # Safety
///
/// - Value is provided by the linker script and must be trusted as-is.
#[inline(always)]
fn mmio_remap_size() -> usize {
unsafe { (__MMIO_REMAP_END.get() as usize) - (__MMIO_REMAP_START.get() as usize) }
}
/// Start page address of the boot core's stack.
#[inline(always)]
fn virt_boot_core_stack_start() -> PageAddress<Virtual> {
PageAddress::from(unsafe { __STACK_BOTTOM.get() as usize })
}
/// Size of the boot core's stack.
#[inline(always)]
fn boot_core_stack_size() -> usize {
unsafe { (__STACK_TOP.get() as usize) - (__STACK_BOTTOM.get() as usize) }
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// Exclusive end address of the physical address space.
#[inline(always)]
pub fn phys_addr_space_end_exclusive_addr() -> PageAddress<Physical> {
PageAddress::from(map::END)
}


@ -5,14 +5,14 @@
#![allow(dead_code)]
pub mod cpu;
pub mod device_driver;
pub mod display;
pub mod fb;
pub mod gpio;
pub mod mailbox;
pub mod mini_uart;
pub mod pl011_uart;
pub mod power;
pub mod vc;
pub mod drivers;
pub mod exception;
// pub mod fb;
pub mod memory;
// pub mod vc;
/// See BCM2835-ARM-Peripherals.pdf
/// See <https://www.raspberrypi.org/forums/viewtopic.php?t=186090> for more details.


@ -8,7 +8,7 @@ use {
mailbox::{self, channel, response::VAL_LEN_FLAG, Mailbox, MailboxOps},
BcmHost,
},
crate::{platform::rpi3::mailbox::MailboxStorageRef, println},
crate::{platform::raspberrypi::mailbox::MailboxStorageRef, println},
core::convert::TryInto,
snafu::Snafu,
};
@ -42,6 +42,7 @@ impl VC {
* (if the base or size has changed) is implicitly freed.
*/
// control: MailboxCommand<10, FrameBufferData>
let mut mbox = Mailbox::<36>::default();
let index = mbox.request();
let index = mbox.set_physical_wh(index, w, h);


@ -1,362 +0,0 @@
/*
* SPDX-License-Identifier: MIT OR BlueOak-1.0.0
* Copyright (c) 2018-2019 Andre Richter <andre.o.richter@gmail.com>
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
* Original code distributed under MIT, additional changes are under BlueOak-1.0.0
*/
use {
super::BcmHost,
crate::platform::MMIODerefWrapper,
core::marker::PhantomData,
tock_registers::{
fields::FieldValue,
interfaces::{ReadWriteable, Readable, Writeable},
register_structs,
registers::{ReadOnly, ReadWrite, WriteOnly},
},
};
// Descriptions taken from
// https://github.com/raspberrypi/documentation/files/1888662/BCM2837-ARM-Peripherals.-.Revised.-.V2-1.pdf
/// Generates `pub enums` with no variants for each `ident` passed in.
macro states($($name:ident),*) {
$(pub enum $name { })*
}
// Possible states for a GPIO pin.
states! {
Uninitialized, Input, Output, Alt
}
register_structs! {
/// The offsets for each register.
/// From <https://wiki.osdev.org/Raspberry_Pi_Bare_Bones> and
/// <https://github.com/raspberrypi/documentation/files/1888662/BCM2837-ARM-Peripherals.-.Revised.-.V2-1.pdf>
#[allow(non_snake_case)]
RegisterBlock {
(0x00 => pub FSEL: [ReadWrite<u32>; 6]), // function select
(0x18 => __reserved_1),
(0x1c => pub SET: [WriteOnly<u32>; 2]), // set output pin
(0x24 => __reserved_2),
(0x28 => pub CLR: [WriteOnly<u32>; 2]), // clear output pin
(0x30 => __reserved_3),
(0x34 => pub LEV: [ReadOnly<u32>; 2]), // get input pin level
(0x3c => __reserved_4),
(0x40 => pub EDS: [ReadWrite<u32>; 2]),
(0x48 => __reserved_5),
(0x4c => pub REN: [ReadWrite<u32>; 2]),
(0x54 => __reserved_6),
(0x58 => pub FEN: [ReadWrite<u32>; 2]),
(0x60 => __reserved_7),
(0x64 => pub HEN: [ReadWrite<u32>; 2]),
(0x6c => __reserved_8),
(0x70 => pub LEN: [ReadWrite<u32>; 2]),
(0x78 => __reserved_9),
(0x7c => pub AREN: [ReadWrite<u32>; 2]),
(0x84 => __reserved_10),
(0x88 => pub AFEN: [ReadWrite<u32>; 2]),
(0x90 => __reserved_11),
#[cfg(feature = "rpi3")]
(0x94 => pub PUD: ReadWrite<u32>), // pull up down
#[cfg(feature = "rpi3")]
(0x98 => pub PUDCLK: [ReadWrite<u32>; 2]),
#[cfg(feature = "rpi3")]
(0xa0 => __reserved_12),
#[cfg(feature = "rpi4")]
(0xe4 => PullUpDownControl: [ReadWrite<u32>; 4]),
(0xf4 => @END),
}
}
// Hide RegisterBlock from public api.
type Registers = MMIODerefWrapper<RegisterBlock>;
/// Public interface to the GPIO MMIO area
pub struct GPIO {
registers: Registers,
}
pub const GPIO_START: usize = 0x20_0000;
impl Default for GPIO {
fn default() -> GPIO {
// Default RPi3 GPIO base address
const GPIO_BASE: usize = BcmHost::get_peripheral_address() + GPIO_START;
unsafe { GPIO::new(GPIO_BASE) }
}
}
impl GPIO {
/// # Safety
///
/// - The caller must ensure `base_addr` points to the GPIO MMIO register block.
pub const unsafe fn new(base_addr: usize) -> GPIO {
GPIO {
registers: Registers::new(base_addr),
}
}
pub fn get_pin(&self, pin: usize) -> Pin<Uninitialized> {
unsafe { Pin::new(pin, self.registers.base_addr) }
}
#[cfg(feature = "rpi3")]
pub fn power_off(&self) {
use crate::arch::loop_delay;
// power off gpio pins (but not VCC pins)
for bank in 0..5 {
self.registers.FSEL[bank].set(0);
}
self.registers.PUD.set(0);
loop_delay(2000);
self.registers.PUDCLK[0].set(0xffff_ffff);
self.registers.PUDCLK[1].set(0xffff_ffff);
loop_delay(2000);
// flush GPIO setup
self.registers.PUDCLK[0].set(0);
self.registers.PUDCLK[1].set(0);
}
#[cfg(feature = "rpi4")]
pub fn power_off(&self) {
todo!()
}
}
/// An alternative GPIO function.
#[repr(u8)]
pub enum Function {
Input = 0b000,
Output = 0b001,
Alt0 = 0b100,
Alt1 = 0b101,
Alt2 = 0b110,
Alt3 = 0b111,
Alt4 = 0b011,
Alt5 = 0b010,
}
impl ::core::convert::From<Function> for u32 {
fn from(f: Function) -> Self {
f as u32
}
}
/// Pull up/down resistor setup.
#[repr(u8)]
#[derive(PartialEq)]
pub enum PullUpDown {
None = 0b00,
Up = 0b01,
Down = 0b10,
}
impl ::core::convert::From<PullUpDown> for u32 {
fn from(p: PullUpDown) -> Self {
p as u32
}
}
/// A GPIO pin in state `State`.
///
/// The `State` generic always corresponds to an un-instantiable type that is
/// used solely to mark and track the state of a given GPIO pin. A `Pin`
/// structure starts in the `Uninitialized` state and must be transitioned into
/// one of `Input`, `Output`, or `Alt` via the `into_input`, `into_output`, and
/// `into_alt` methods before it can be used.
pub struct Pin<State> {
pin: usize,
registers: Registers,
_state: PhantomData<State>,
}
impl<State> Pin<State> {
/// Transitions `self` to state `NewState`, consuming `self` and returning a new
/// `Pin` instance in state `NewState`. This method should _never_ be exposed to
/// the public!
#[inline(always)]
fn transition<NewState>(self) -> Pin<NewState> {
Pin {
pin: self.pin,
registers: self.registers,
_state: PhantomData,
}
}
#[cfg(feature = "rpi3")]
pub fn set_pull_up_down(&self, pull: PullUpDown) {
use crate::arch::loop_delay;
let bank = self.pin / 32;
let off = self.pin % 32;
self.registers.PUD.set(0);
loop_delay(2000);
self.registers.PUDCLK[bank].modify(FieldValue::<u32, ()>::new(
0b1,
off,
if pull == PullUpDown::Up { 1 } else { 0 },
));
loop_delay(2000);
self.registers.PUD.set(0);
self.registers.PUDCLK[bank].set(0);
}
#[cfg(feature = "rpi4")]
pub fn set_pull_up_down(&self, pull: PullUpDown) {
let bank = self.pin / 16;
let off = self.pin % 16;
self.registers.PullUpDownControl[bank].modify(FieldValue::<u32, ()>::new(
0b11,
off * 2,
pull.into(),
));
}
}
impl Pin<Uninitialized> {
/// Returns a new GPIO `Pin` structure for pin number `pin`.
///
/// # Panics
///
/// Panics if `pin` > `53`.
unsafe fn new(pin: usize, base_addr: usize) -> Pin<Uninitialized> {
if pin > 53 {
panic!("gpio::Pin::new(): pin {} exceeds maximum of 53", pin);
}
Pin {
registers: Registers::new(base_addr),
pin,
_state: PhantomData,
}
}
/// Enables the alternative function `function` for `self`. Consumes self
/// and returns a `Pin` structure in the `Alt` state.
pub fn into_alt(self, function: Function) -> Pin<Alt> {
let bank = self.pin / 10;
let off = self.pin % 10;
self.registers.FSEL[bank].modify(FieldValue::<u32, ()>::new(
0b111,
off * 3,
function.into(),
));
self.transition()
}
/// Sets this pin to be an _output_ pin. Consumes self and returns a `Pin`
/// structure in the `Output` state.
pub fn into_output(self) -> Pin<Output> {
self.into_alt(Function::Output).transition()
}
/// Sets this pin to be an _input_ pin. Consumes self and returns a `Pin`
/// structure in the `Input` state.
pub fn into_input(self) -> Pin<Input> {
self.into_alt(Function::Input).transition()
}
}
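The `into_alt` arithmetic above packs one 3-bit function field per pin, ten pins per 32-bit FSEL register. A standalone sketch of that indexing (hypothetical helper name, no MMIO involved):

```rust
/// Returns (FSEL register index, bit shift) of a pin's 3-bit function field.
/// Ten pins fit in each 32-bit function-select register, 3 bits per pin.
fn fsel_location(pin: usize) -> (usize, u32) {
    (pin / 10, ((pin % 10) * 3) as u32)
}

fn main() {
    // Pin 35 lives in FSEL[3] at bit 15 -- matching the unit test below,
    // which expects Alt1 (0b101) shifted to bits 15..18 of reg[3].
    assert_eq!(fsel_location(35), (3, 15));
    // Pin 14 (mini UART TXD) lives in FSEL[1] at bit 12.
    assert_eq!(fsel_location(14), (1, 12));
}
```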
impl Pin<Output> {
/// Sets (turns on) this pin.
pub fn set(&mut self) {
// Guarantees: pin number is between [0; 53] by construction.
let bank = self.pin / 32;
let shift = self.pin % 32;
self.registers.SET[bank].set(1 << shift);
}
/// Clears (turns off) this pin.
pub fn clear(&mut self) {
// Guarantees: pin number is between [0; 53] by construction.
let bank = self.pin / 32;
let shift = self.pin % 32;
self.registers.CLR[bank].set(1 << shift);
}
}
pub type Level = bool;
impl Pin<Input> {
/// Reads the pin's value. Returns `true` if the level is high and `false`
/// if the level is low.
pub fn level(&self) -> Level {
// Guarantees: pin number is between [0; 53] by construction.
let bank = self.pin / 32;
let off = self.pin % 32;
self.registers.LEV[bank].matches_all(FieldValue::<u32, ()>::new(1, off, 1))
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test_case]
fn test_pin_transitions() {
let mut reg = [0u32; 40];
let gpio = unsafe { GPIO::new(&mut reg as *mut _ as usize) };
let _out = gpio.get_pin(1).into_output();
assert_eq!(reg[0], 0b001_000);
let _inp = gpio.get_pin(12).into_input();
assert_eq!(reg[1], 0b000_000_000);
let _alt = gpio.get_pin(35).into_alt(Function::Alt1);
assert_eq!(reg[3], 0b101_000_000_000_000_000);
}
#[test_case]
fn test_pin_outputs() {
let mut reg = [0u32; 40];
let gpio = unsafe { GPIO::new(&mut reg as *mut _ as usize) };
let pin = gpio.get_pin(1);
let mut out = pin.into_output();
out.set();
assert_eq!(reg[7], 0b10); // SET pin 1 = 1 << 1
out.clear();
assert_eq!(reg[10], 0b10); // CLR pin 1 = 1 << 1
let pin = gpio.get_pin(35);
let mut out = pin.into_output();
out.set();
assert_eq!(reg[8], 0b1000); // SET pin 35 = 1 << (35 - 32)
out.clear();
assert_eq!(reg[11], 0b1000); // CLR pin 35 = 1 << (35 - 32)
}
#[test_case]
fn test_pin_inputs() {
let mut reg = [0u32; 40];
let gpio = unsafe { GPIO::new(&mut reg as *mut _ as usize) };
let pin = gpio.get_pin(1);
let inp = pin.into_input();
assert_eq!(inp.level(), false);
reg[13] = 0b10;
assert_eq!(inp.level(), true);
let pin = gpio.get_pin(35);
let inp = pin.into_input();
assert_eq!(inp.level(), false);
reg[14] = 0b1000;
assert_eq!(inp.level(), true);
}
}
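The typestate pattern that `Pin` uses can be shown in isolation. This is a minimal sketch (hypothetical names, no register access) of how a zero-sized `PhantomData` marker makes invalid transitions a compile error rather than a runtime check:

```rust
use std::marker::PhantomData;

// Zero-variant state markers, as generated by the `states!` macro above.
pub enum Uninitialized {}
pub enum Output {}

pub struct Pin<State> {
    pin: usize,
    _state: PhantomData<State>,
}

impl Pin<Uninitialized> {
    pub fn new(pin: usize) -> Self {
        Pin { pin, _state: PhantomData }
    }
    /// Consumes the uninitialized pin, returning it in the `Output` state.
    pub fn into_output(self) -> Pin<Output> {
        Pin { pin: self.pin, _state: PhantomData }
    }
}

impl Pin<Output> {
    /// Only callable once the pin has been transitioned to `Output`.
    pub fn set(&mut self) { /* a real driver would write the SET register here */ }
}

fn main() {
    let mut out = Pin::<Uninitialized>::new(1).into_output();
    out.set();
    // Pin::<Uninitialized>::new(2).set(); // does not compile: `set` not in this state
    assert_eq!(out.pin, 1);
}
```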


@ -1,328 +0,0 @@
/*
* SPDX-License-Identifier: MIT OR BlueOak-1.0.0
* Copyright (c) 2018-2019 Andre Richter <andre.o.richter@gmail.com>
* Copyright (c) Berkus Decker <berkus+vesper@metta.systems>
* Original code distributed under MIT, additional changes are under BlueOak-1.0.0
*/
#[cfg(not(feature = "noserial"))]
use tock_registers::interfaces::{Readable, Writeable};
use {
super::{gpio, BcmHost},
crate::{
devices::{ConsoleOps, SerialOps},
platform::MMIODerefWrapper,
},
cfg_if::cfg_if,
core::{convert::From, fmt},
tock_registers::{
interfaces::ReadWriteable,
register_bitfields, register_structs,
registers::{ReadOnly, ReadWrite, WriteOnly},
},
};
// Auxiliary mini UART registers
//
// Descriptions taken from
// https://github.com/raspberrypi/documentation/files/1888662/BCM2837-ARM-Peripherals.-.Revised.-.V2-1.pdf
register_bitfields! {
u32,
/// Auxiliary enables
AUX_ENABLES [
/// If set the mini UART is enabled. The UART will immediately
/// start receiving data, especially if the UART1_RX line is
/// low.
/// If clear the mini UART is disabled. That also disables any
/// mini UART register access
MINI_UART_ENABLE OFFSET(0) NUMBITS(1) []
],
/// Mini Uart Interrupt Identify
AUX_MU_IIR [
/// Writing with bit 1 set will clear the receive FIFO
/// Writing with bit 2 set will clear the transmit FIFO
FIFO_CLEAR OFFSET(1) NUMBITS(2) [
Rx = 0b01,
Tx = 0b10,
All = 0b11
]
],
/// Mini Uart Line Control
AUX_MU_LCR [
/// Mode the UART works in
DATA_SIZE OFFSET(0) NUMBITS(2) [
SevenBit = 0b00,
EightBit = 0b11
]
],
/// Mini Uart Line Status
AUX_MU_LSR [
/// This bit is set if the transmit FIFO is empty and the transmitter is
/// idle. (Finished shifting out the last bit).
TX_IDLE OFFSET(6) NUMBITS(1) [],
/// This bit is set if the transmit FIFO can accept at least
/// one byte.
TX_EMPTY OFFSET(5) NUMBITS(1) [],
/// This bit is set if the receive FIFO holds at least 1
/// symbol.
DATA_READY OFFSET(0) NUMBITS(1) []
],
/// Mini Uart Extra Control
AUX_MU_CNTL [
/// If this bit is set the mini UART transmitter is enabled.
/// If this bit is clear the mini UART transmitter is disabled.
TX_EN OFFSET(1) NUMBITS(1) [
Disabled = 0,
Enabled = 1
],
/// If this bit is set the mini UART receiver is enabled.
/// If this bit is clear the mini UART receiver is disabled.
RX_EN OFFSET(0) NUMBITS(1) [
Disabled = 0,
Enabled = 1
]
],
/// Mini Uart Status
AUX_MU_STAT [
TX_DONE OFFSET(9) NUMBITS(1) [
No = 0,
Yes = 1
],
/// This bit is set if the transmit FIFO can accept at least
/// one byte.
SPACE_AVAILABLE OFFSET(1) NUMBITS(1) [
No = 0,
Yes = 1
],
/// This bit is set if the receive FIFO holds at least 1
/// symbol.
SYMBOL_AVAILABLE OFFSET(0) NUMBITS(1) [
No = 0,
Yes = 1
]
],
/// Mini Uart Baud rate
AUX_MU_BAUD [
/// Mini UART baud rate counter
RATE OFFSET(0) NUMBITS(16) []
]
}
register_structs! {
#[allow(non_snake_case)]
RegisterBlock {
// 0x00 - AUX_IRQ?
(0x00 => __reserved_1),
(0x04 => AUX_ENABLES: ReadWrite<u32, AUX_ENABLES::Register>),
(0x08 => __reserved_2),
(0x40 => AUX_MU_IO: ReadWrite<u32>),//Mini Uart I/O Data
(0x44 => AUX_MU_IER: WriteOnly<u32>),//Mini Uart Interrupt Enable
(0x48 => AUX_MU_IIR: WriteOnly<u32, AUX_MU_IIR::Register>),
(0x4c => AUX_MU_LCR: WriteOnly<u32, AUX_MU_LCR::Register>),
(0x50 => AUX_MU_MCR: WriteOnly<u32>),
(0x54 => AUX_MU_LSR: ReadOnly<u32, AUX_MU_LSR::Register>),
// 0x58 - AUX_MU_MSR
// 0x5c - AUX_MU_SCRATCH
(0x58 => __reserved_3),
(0x60 => AUX_MU_CNTL: WriteOnly<u32, AUX_MU_CNTL::Register>),
(0x64 => AUX_MU_STAT: ReadOnly<u32, AUX_MU_STAT::Register>),
(0x68 => AUX_MU_BAUD: WriteOnly<u32, AUX_MU_BAUD::Register>),
(0x6c => @END),
}
}
type Registers = MMIODerefWrapper<RegisterBlock>;
pub struct MiniUart {
registers: Registers,
}
pub struct PreparedMiniUart(MiniUart);
/// Divisor values for common baud rates
pub enum Rate {
Baud115200 = 270,
}
impl From<Rate> for u32 {
fn from(r: Rate) -> Self {
r as u32
}
}
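The divisor 270 above follows from the mini UART baud formula in the BCM2835 peripherals manual, `baud = core_clock / (8 * (divisor + 1))`, assuming the stock 250 MHz VideoCore core clock:

```rust
/// Mini UART baud-rate divisor for a target baud rate.
/// BCM2835 formula: baud = core_clock / (8 * (divisor + 1)).
fn baud_divisor(core_clock_hz: u32, baud: u32) -> u32 {
    core_clock_hz / (8 * baud) - 1
}

fn main() {
    // 250_000_000 / (8 * 115_200) - 1 = 270 (integer division)
    assert_eq!(baud_divisor(250_000_000, 115_200), 270);
}
```

Note that if `core_freq` is changed in `config.txt`, this divisor no longer yields 115200 baud; that is one reason the PL011 UART is usually preferred for a stable console.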
// [temporary] Used in mmu.rs to set up local paging
pub const UART1_START: usize = 0x21_5000;
impl Default for MiniUart {
fn default() -> Self {
const UART1_BASE: usize = BcmHost::get_peripheral_address() + UART1_START;
unsafe { MiniUart::new(UART1_BASE) }
}
}
impl MiniUart {
/// # Safety
///
/// - The caller must ensure `base_addr` points to the mini UART MMIO register block.
pub const unsafe fn new(base_addr: usize) -> MiniUart {
MiniUart {
registers: Registers::new(base_addr),
}
}
}
impl MiniUart {
cfg_if! {
if #[cfg(not(feature = "noserial"))] {
/// Set baud rate and characteristics (115200 8N1) and map to GPIO
pub fn prepare(self, gpio: &gpio::GPIO) -> PreparedMiniUart {
// GPIO pins should be set up first before enabling the UART
// Pin 14
const MINI_UART_TXD: gpio::Function = gpio::Function::Alt5;
// Pin 15
const MINI_UART_RXD: gpio::Function = gpio::Function::Alt5;
// map UART1 to GPIO pins
gpio.get_pin(14).into_alt(MINI_UART_TXD).set_pull_up_down(gpio::PullUpDown::Up);
gpio.get_pin(15).into_alt(MINI_UART_RXD).set_pull_up_down(gpio::PullUpDown::Up);
// initialize UART
self.registers.AUX_ENABLES.modify(AUX_ENABLES::MINI_UART_ENABLE::SET);
self.registers.AUX_MU_IER.set(0);
self.registers.AUX_MU_CNTL.set(0);
self.registers.AUX_MU_LCR.write(AUX_MU_LCR::DATA_SIZE::EightBit);
self.registers.AUX_MU_MCR.set(0);
self.registers.AUX_MU_IER.set(0);
self.registers.AUX_MU_BAUD
.write(AUX_MU_BAUD::RATE.val(Rate::Baud115200.into()));
// Clear FIFOs before using the device
self.registers.AUX_MU_IIR.write(AUX_MU_IIR::FIFO_CLEAR::All);
self.registers.AUX_MU_CNTL
.write(AUX_MU_CNTL::RX_EN::Enabled + AUX_MU_CNTL::TX_EN::Enabled);
PreparedMiniUart(self)
}
} else {
pub fn prepare(self, _gpio: &gpio::GPIO) -> PreparedMiniUart {
PreparedMiniUart(self)
}
}
}
}
impl Drop for PreparedMiniUart {
fn drop(&mut self) {
self.0
.registers
.AUX_ENABLES
.modify(AUX_ENABLES::MINI_UART_ENABLE::CLEAR);
// @todo disable gpio.PUD ?
}
}
impl SerialOps for PreparedMiniUart {
cfg_if! {
if #[cfg(not(feature = "noserial"))] {
/// Receive a byte without console translation
fn read_byte(&self) -> u8 {
// wait until something is in the buffer
crate::arch::loop_until(|| self.0.registers.AUX_MU_STAT.is_set(AUX_MU_STAT::SYMBOL_AVAILABLE));
// read it and return
self.0.registers.AUX_MU_IO.get() as u8
}
fn write_byte(&self, b: u8) {
// wait until we can send
crate::arch::loop_until(|| self.0.registers.AUX_MU_STAT.is_set(AUX_MU_STAT::SPACE_AVAILABLE));
// write the character to the buffer
self.0.registers.AUX_MU_IO.set(b as u32);
}
/// Wait until the TX FIFO is empty, aka all characters have been put on the
/// line.
fn flush(&self) {
crate::arch::loop_until(|| self.0.registers.AUX_MU_STAT.is_set(AUX_MU_STAT::TX_DONE));
}
/// Consume input until RX FIFO is empty, aka all pending characters have been
/// consumed.
fn clear_rx(&self) {
crate::arch::loop_while(|| {
let pending = self.0.registers.AUX_MU_STAT.is_set(AUX_MU_STAT::SYMBOL_AVAILABLE);
if pending { self.read_byte(); }
pending
});
}
} else {
fn read_byte(&self) -> u8 { 0 }
fn write_byte(&self, _byte: u8) {}
fn flush(&self) {}
fn clear_rx(&self) {}
}
}
}
impl ConsoleOps for PreparedMiniUart {
cfg_if! {
if #[cfg(not(feature = "noserial"))] {
/// Send a character
fn write_char(&self, c: char) {
self.write_byte(c as u8);
}
/// Display a string
fn write_string(&self, string: &str) {
for c in string.chars() {
// convert newline to carriage return + newline
if c == '\n' {
self.write_char('\r')
}
self.write_char(c);
}
}
/// Receive a character
fn read_char(&self) -> char {
let mut ret = self.read_byte() as char;
// convert carriage return to newline -- this doesn't work well for reading binaries...
if ret == '\r' {
ret = '\n'
}
ret
}
} else {
fn write_char(&self, _c: char) {}
fn write_string(&self, _string: &str) {}
fn read_char(&self) -> char {
'\n'
}
}
}
}
impl fmt::Write for PreparedMiniUart {
fn write_str(&mut self, s: &str) -> fmt::Result {
self.write_string(s);
Ok(())
}
}

machine/src/state.rs Normal file

@ -0,0 +1,92 @@
// SPDX-License-Identifier: MIT OR Apache-2.0
//
// Copyright (c) 2020-2022 Andre Richter <andre.o.richter@gmail.com>
//! State information about the kernel itself.
use core::sync::atomic::{AtomicU8, Ordering};
//--------------------------------------------------------------------------------------------------
// Private Definitions
//--------------------------------------------------------------------------------------------------
/// Different stages in the kernel execution.
#[derive(Copy, Clone, Eq, PartialEq)]
enum State {
/// The kernel starts booting in this state.
Init,
/// The kernel transitions to this state when jumping to `kernel_main()` (at the end of
/// `kernel_init()`, after all init calls are done).
SingleCoreMain,
/// The kernel transitions to this state when it boots the secondary cores, aka switches
/// execution mode to symmetric multiprocessing (SMP).
MultiCoreMain,
}
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Maintains the kernel state and state transitions.
pub struct StateManager(AtomicU8);
//--------------------------------------------------------------------------------------------------
// Global instances
//--------------------------------------------------------------------------------------------------
static STATE_MANAGER: StateManager = StateManager::new();
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// Return a reference to the global StateManager.
pub fn state_manager() -> &'static StateManager {
&STATE_MANAGER
}
impl StateManager {
const INIT: u8 = 0;
const SINGLE_CORE_MAIN: u8 = 1;
const MULTI_CORE_MAIN: u8 = 2;
/// Create a new instance.
pub const fn new() -> Self {
Self(AtomicU8::new(Self::INIT))
}
/// Return the current state.
fn state(&self) -> State {
let state = self.0.load(Ordering::Acquire);
match state {
Self::INIT => State::Init,
Self::SINGLE_CORE_MAIN => State::SingleCoreMain,
Self::MULTI_CORE_MAIN => State::MultiCoreMain,
_ => panic!("Invalid KERNEL_STATE"),
}
}
/// Return whether the kernel is in the init state.
pub fn is_init(&self) -> bool {
self.state() == State::Init
}
/// Transition from Init to SingleCoreMain.
pub fn transition_to_single_core_main(&self) {
if self
.0
.compare_exchange(
Self::INIT,
Self::SINGLE_CORE_MAIN,
Ordering::Acquire,
Ordering::Relaxed,
)
.is_err()
{
panic!("transition_to_single_core_main() called while state != Init");
}
}
}
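The `compare_exchange` guard above generalizes to any one-way state machine: the transition succeeds at most once, and any concurrent or repeated attempt is detected atomically. A standalone sketch (which returns `false` instead of panicking, as the kernel does):

```rust
use std::sync::atomic::{AtomicU8, Ordering};

const INIT: u8 = 0;
const SINGLE_CORE_MAIN: u8 = 1;

/// Atomically advance INIT -> SINGLE_CORE_MAIN.
/// Returns false if the state had already moved past INIT.
fn try_transition(state: &AtomicU8) -> bool {
    state
        .compare_exchange(INIT, SINGLE_CORE_MAIN, Ordering::Acquire, Ordering::Relaxed)
        .is_ok()
}

fn main() {
    let state = AtomicU8::new(INIT);
    assert!(try_transition(&state));  // first transition succeeds
    assert!(!try_transition(&state)); // a repeat attempt is rejected
}
```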


@ -1,44 +0,0 @@
/*
* SPDX-License-Identifier: MIT OR BlueOak-1.0.0
* Copyright (c) 2019 Andre Richter <andre.o.richter@gmail.com>
* Original code distributed under MIT, additional changes are under BlueOak-1.0.0
*/
use core::cell::UnsafeCell;
pub struct NullLock<T> {
data: UnsafeCell<T>,
}
/// Since we are instantiating this struct as a static variable, which could
/// potentially be shared between different threads, we need to tell the compiler
/// that sharing of this struct is safe by marking it with the Sync trait.
///
/// At this point in time, we can do so without worrying, because the kernel
/// anyway runs on a single core and interrupts are disabled. In short, multiple
/// threads don't exist yet in our code.
///
/// Literature:
/// * <https://doc.rust-lang.org/beta/nomicon/send-and-sync.html>
/// * <https://doc.rust-lang.org/book/ch16-04-extensible-concurrency-sync-and-send.html>
unsafe impl<T> Sync for NullLock<T> {}
impl<T> NullLock<T> {
pub const fn new(data: T) -> NullLock<T> {
NullLock {
data: UnsafeCell::new(data),
}
}
}
impl<T> NullLock<T> {
pub fn lock<F, R>(&self, f: F) -> R
where
F: FnOnce(&mut T) -> R,
{
// In a real lock, there would be code around this line that ensures
// that this mutable reference will only ever be given out to one thread
// at a time.
f(unsafe { &mut *self.data.get() })
}
}


@ -0,0 +1,165 @@
/*
* SPDX-License-Identifier: MIT OR BlueOak-1.0.0
* Copyright (c) 2019 Andre Richter <andre.o.richter@gmail.com>
* Original code distributed under MIT, additional changes are under BlueOak-1.0.0
*/
use core::cell::UnsafeCell;
//--------------------------------------------------------------------------------------------------
// Public Definitions
//--------------------------------------------------------------------------------------------------
/// Synchronization interfaces.
pub mod interface {
/// Any object implementing this trait guarantees exclusive access to the data wrapped within
/// the Mutex for the duration of the provided closure.
pub trait Mutex {
/// The type of the data that is wrapped by this mutex.
type Data;
/// Locks the mutex and grants the closure temporary mutable access to the wrapped data.
fn lock<R>(&self, f: impl FnOnce(&mut Self::Data) -> R) -> R;
}
/// A reader-writer exclusion type.
///
/// The implementing object allows either a number of readers or at most one writer at any point
/// in time.
pub trait ReadWriteEx {
/// The type of encapsulated data.
type Data;
/// Grants temporary mutable access to the encapsulated data.
fn write<R>(&self, f: impl FnOnce(&mut Self::Data) -> R) -> R;
/// Grants temporary immutable access to the encapsulated data.
fn read<R>(&self, f: impl FnOnce(&Self::Data) -> R) -> R;
}
}
/// A pseudo-lock for teaching purposes.
///
/// In contrast to a real Mutex implementation, it does not protect against concurrent access from
/// other cores to the contained data. This part is preserved for later lessons.
///
/// The lock will only be used as long as it is safe to do so, i.e. as long as the kernel is
/// executing on a single core.
pub struct IRQSafeNullLock<T>
where
T: ?Sized,
{
data: UnsafeCell<T>,
}
/// A pseudo-lock that is RW during the single-core kernel init phase and RO afterwards.
///
/// Intended to encapsulate data that is populated during kernel init when no concurrency exists.
pub struct InitStateLock<T>
where
T: ?Sized,
{
data: UnsafeCell<T>,
}
//--------------------------------------------------------------------------------------------------
// Public Code
//--------------------------------------------------------------------------------------------------
/// Since we are instantiating this struct as a static variable, which could
/// potentially be shared between different threads, we need to tell the compiler
/// that sharing of this struct is safe by marking it with the Sync trait.
///
/// At this point in time, we can do so without worrying, because the kernel
/// anyway runs on a single core and interrupts are disabled. In short, multiple
/// threads don't exist yet in our code.
///
/// Literature:
/// * <https://doc.rust-lang.org/beta/nomicon/send-and-sync.html>
/// * <https://doc.rust-lang.org/book/ch16-04-extensible-concurrency-sync-and-send.html>
unsafe impl<T> Send for IRQSafeNullLock<T> where T: ?Sized + Send {}
unsafe impl<T> Sync for IRQSafeNullLock<T> where T: ?Sized + Send {}
impl<T> IRQSafeNullLock<T> {
/// Create an instance.
pub const fn new(data: T) -> Self {
Self {
data: UnsafeCell::new(data),
}
}
}
unsafe impl<T> Send for InitStateLock<T> where T: ?Sized + Send {}
unsafe impl<T> Sync for InitStateLock<T> where T: ?Sized + Send {}
impl<T> InitStateLock<T> {
/// Create an instance.
pub const fn new(data: T) -> Self {
Self {
data: UnsafeCell::new(data),
}
}
}
//------------------------------------------------------------------------------
// OS Interface Code
//------------------------------------------------------------------------------
use crate::{exception, state};
impl<T> interface::Mutex for IRQSafeNullLock<T> {
type Data = T;
fn lock<R>(&self, f: impl FnOnce(&mut Self::Data) -> R) -> R {
// In a real lock, there would be code encapsulating this line that ensures that this
// mutable reference is only ever given out to one context at a time.
let data = unsafe { &mut *self.data.get() };
// Execute the closure while IRQs are masked.
exception::asynchronous::exec_with_irq_masked(|| f(data))
}
}
impl<T> interface::ReadWriteEx for InitStateLock<T> {
type Data = T;
fn write<R>(&self, f: impl FnOnce(&mut Self::Data) -> R) -> R {
assert!(
state::state_manager().is_init(),
"InitStateLock::write called after kernel init phase"
);
assert!(
!exception::asynchronous::is_local_irq_masked(),
"InitStateLock::write called with IRQs unmasked"
);
let data = unsafe { &mut *self.data.get() };
f(data)
}
fn read<R>(&self, f: impl FnOnce(&Self::Data) -> R) -> R {
let data = unsafe { &*self.data.get() };
f(data)
}
}
//--------------------------------------------------------------------------------------------------
// Testing
//--------------------------------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*; //, test_macros::kernel_test};
/// InitStateLock must be transparent.
#[test_case]
fn init_state_lock_is_transparent() {
use core::mem::size_of;
assert_eq!(size_of::<InitStateLock<u64>>(), size_of::<u64>());
}
}
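The closure-based `lock` interface used throughout this file is what keeps the API safe despite the `UnsafeCell`: the `&mut T` can never escape the closure's scope. A minimal single-threaded sketch of the same shape (no IRQ masking here):

```rust
use std::cell::UnsafeCell;

/// Minimal closure-based lock in the style of `IRQSafeNullLock`.
/// Single-threaded sketch only; provides no real mutual exclusion.
struct NullLock<T> {
    data: UnsafeCell<T>,
}

impl<T> NullLock<T> {
    const fn new(data: T) -> Self {
        Self { data: UnsafeCell::new(data) }
    }

    /// The closure receives `&mut T`; because that reference cannot outlive
    /// the call, callers can never hold the data outside the critical section.
    fn lock<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        f(unsafe { &mut *self.data.get() })
    }
}

fn main() {
    let counter = NullLock::new(0u32);
    counter.lock(|c| *c += 1);
    let value = counter.lock(|c| *c);
    assert_eq!(value, 1);
}
```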

Some files were not shown because too many files have changed in this diff.