There are a variety of well-known bit-code formats, each suited to its specific task:

- LLVM bitcode: based on an XML-like binary bitstream model; usable as a common compiler target for a variety of architectures and languages.
- JVM bytecode: mainly a Java compilation target, but increasingly reused as a broader target by other languages in that ecosystem.
- SPIR-V: a shader target language and an abstraction over graphics hardware.
- WebAssembly: developed with the goal of better integrating existing ecosystems and code into the browser context.
- Specialized bytecodes (e.g. Lua and Python): interpreters generally execute some form of bytecode, whether they expose it explicitly or not (see the short Python sketch below).
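To make that last point concrete, the following sketch (assuming nothing beyond a standard CPython 3 interpreter; the small function is a made-up example, not something from this post) uses the standard-library `dis` module to show the bytecode CPython compiles a function into before executing it:

```python
# Sketch: inspecting the bytecode CPython compiles a function into.
# Assumes a standard CPython 3 interpreter; add_one is a made-up example.
import dis


def add_one(x):
    """Trivial function, defined only so there is something to disassemble."""
    return x + 1


# CPython already compiled add_one to bytecode when the def statement ran;
# the compiled form is stored on the function's code object.
print(add_one.__code__.co_code)  # the raw bytecode, as a bytes object

# dis renders the same bytecode as a human-readable, one-opcode-per-line listing.
dis.dis(add_one)
```

The disassembly listing is the form the interpreter loop actually executes; the source text is only ever run indirectly, through this bytecode.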
The hardware / ISA ecosystem is similarly fragmented, most notably across ARM, i386 / x86, x86_64, Microchip / Atmel, and now RISC-V.
There is also fragmentation in the computer graphics industry, whose performance growth has outpaced that of CPUs in many respects in recent years.
Hardware-acceleration API design is split primarily between Khronos and Microsoft, with Apple and Google now entering the field as well. Even for the two leading vendors, backward compatibility has proven to be a difficult problem, resulting in further fragmentation and overhead.
As far as I can see, these problems will only worsen in the future, given the current state of traditional Moore's Law, the increasing use of accelerators and FPGAs, the proliferation of the Internet of Things, the growing variety of storage and caching technologies, hyperscale deployments, and so forth.
In the past, this kind of fragmentation was foreseen and avoided in the case of character-set localization. For that reason, the Unicode Consortium was founded, independent of any single company or organization. It has managed to standardize well over 100,000 characters, and its adoption is now virtually universal.
In theory, I see no reason why the same should not be possible in the area of bitcodes / Turing-complete instruction sets.
https://www.quora.com/unanswered/Why-is-the-extensive-Standards-Organisation-for-Instruction-Set-Architecture-ISA-and-Bitcodes-as-the-is-the-Case-by-Unicode
In response to the community:
- @Gilles Unicode acts as a superset of its predecessor, ASCII, and is realized through the UTF-8, UTF-16, and UTF-32 encoding standards. Modern microprocessors are no strangers to extending instruction encodings in their decode pipelines. Any computer can emulate wider types than its native baseline, and support for this at the architectural level is not uncommon.
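To illustrate the superset relationship, here is a small sketch (again plain Python 3; the characters 'A' and U+2603 are arbitrary examples chosen here) showing that an ASCII character keeps its single-byte encoding under UTF-8, while UTF-16 and UTF-32 spend wider code units on the same code points:

```python
# Sketch: the same code points under UTF-8, UTF-16 and UTF-32.
# Plain Python 3 (3.8+ for bytes.hex with a separator);
# 'A' (ASCII) and U+2603 (snowman) are arbitrary examples.
for ch in ("A", "\u2603"):
    print(f"U+{ord(ch):04X} ({ch!r})")
    print("  utf-8 :", ch.encode("utf-8").hex(" "))      # 1-4 bytes; ASCII stays 1 byte
    print("  utf-16:", ch.encode("utf-16-be").hex(" "))   # 2 or 4 bytes per code point
    print("  utf-32:", ch.encode("utf-32-be").hex(" "))   # always 4 bytes per code point
```

For 'A' the UTF-8 output is the single byte 41, identical to its ASCII encoding, which is the backward-compatibility property being compared to extensible instruction encodings above.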