#mlir #llvm #compiler
- An MLIR-inspired extensible compiler framework written in pure Rust.
- IR Format: Pliron’s generic IR (Intermediate Representation) format is based on SSA (Static Single Assignment) form and is conceptually similar to MLIR. Like MLIR, it is a nested IR, meaning it supports hierarchical structures such as operations containing regions, which in turn contain blocks and other operations. However, there may be differences in specific implementation details.
- Arenas: It employs generational arenas to store operations, regions, and blocks. The `slotmap` crate is used for efficient generational arena storage and access.
- Macros: Heavily relies on macros (proc-macros + `macro_rules!`), which may impact code readability and maintainability.
- Extensibility: Pliron’s design aligns well with Rust’s strengths, leveraging Rust’s trait system for extensibility and type safety.
- Dialects: Includes an implementation of an LLVM dialect (Pliron’s version of LLVM IR).
Let's start with LLVM
- Number of OpCodes: 70 approx
- types: 20 or so
- Number of OpCodes: 30 approx (with many sub ops)
- types: 20 or so
- Number of Dialects: 50 or probably more
- Number of Ops: 500+
- You can think of MLIR as a collection of Domain-Specific Languages (DSLs), each designed to model a specific domain and equipped with its own set of domain-specific operations.
- A key feature of MLIR is its ability to provide a lowering mechanism, enabling transformations from one DSL (typically higher-level and more abstract) to another (relatively lower-level and closer to hardware or execution).
For example, the following Triton kernel adds a scalar y to each element of a vector loaded from x_ptr:
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y, output_ptr, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    block_start = pid * BLOCK_SIZE
    offsets = block_start + tl.arange(0, BLOCK_SIZE)
    x = tl.load(x_ptr + offsets)
    output = x + y
    tl.store(output_ptr + offsets, output)
It can be translated to TTIR, one of the many MLIR DSLs (dialects):
tt.func public @add_kernel_01234(%arg0: !tt.ptr<f32>, %arg1: f32, %arg2: !tt.ptr<f32>) {
%c1024_i32 = arith.constant 1024 : i32
%0 = tt.get_program_id x : i32
%1 = arith.muli %0, %c1024_i32 : i32
%2 = tt.make_range {end = 1024 : i32, start = 0 : i32} : tensor<1024xi32>
%3 = tt.splat %1 : i32 -> tensor<1024xi32>
%4 = arith.addi %3, %2 : tensor<1024xi32>
%5 = tt.splat %arg0 : !tt.ptr<f32> -> tensor<1024x!tt.ptr<f32>>
%6 = tt.addptr %5, %4 : tensor<1024x!tt.ptr<f32>>, tensor<1024xi32>
%7 = tt.load %6 : tensor<1024x!tt.ptr<f32>>
%8 = tt.splat %arg1 : f32 -> tensor<1024xf32>
%9 = arith.addf %7, %8 : tensor<1024xf32>
%10 = tt.splat %arg2 : !tt.ptr<f32> -> tensor<1024x!tt.ptr<f32>>
%11 = tt.addptr %10, %4 : tensor<1024x!tt.ptr<f32>>, tensor<1024xi32>
tt.store %11, %9 : tensor<1024x!tt.ptr<f32>>
tt.return
}
- Use your domain-specific knowledge to optimize code
- In the above case, you could do something like
- Replace tensor-tensor operation with tensor-scalar operation when possible
%8 = tt.splat %arg1 : f32 -> tensor<1024xf32>
%9 = arith.addf %7, %8 : tensor<1024xf32>
The above can be replaced with the following:
%9 = myarith.add_ts %7, %arg1 : tensor<1024xf32>, f32 -> tensor<1024xf32>
Many hardware architectures natively support vector-scalar operations, but the `arith` dialect in MLIR does not provide native support for them. Adding such support could offer significant benefits:
- Fewer instructions to execute: Vector-scalar operations reduce the number of instructions needed, improving efficiency.
- Avoid materializing splatted scalars as tensors: This eliminates the overhead of creating and managing tensors for repeated scalar values.
Define a Custom `myarith` Dialect and a Custom `add_ts` Operation
- Define the `myarith` Dialect: Create a custom MLIR dialect named myarith. This dialect will serve as a container for custom arithmetic operations, including the add_ts operation.
- Define the `add_ts` Operation: Within the myarith dialect, define a custom operation called add_ts. This operation will perform a specialized addition tailored to your specific requirements.
- Implement a Simple MLIR Pass: Develop an MLIR pass that identifies the splat-add pattern and rewrites it into the new add_ts operation, consolidating the sequence into a single, more efficient operation.
- Context:
- The Context is the central data structure that holds all IR-related data, such as operations, types, and attributes.
- It acts as a container for the IR and provides methods for creating and manipulating IR elements.
- Operation:
- Operations represent individual instructions or nodes in the IR.
- Each operation has a set of operands (inputs), results (outputs), and attributes (metadata).
- Type:
- Types represent the data types used in the IR, such as integers, floats, or custom types.
- Users can define their own types by implementing the Type trait.
- Attribute:
- Attributes are used to attach additional information to operations or types.
- Examples include constant values, debug information, or optimization hints.
- Pass:
- A Pass is a transformation or analysis that operates on the IR.
- Users can define custom passes to implement optimizations, analyses, or lowering transformations.
| Pliron Index type | Storage Vec type | Slot 0 | Slot 1 | Slot 2 | Slot 3 |
|---|---|---|---|---|---|
| 🟡 Ptr { idx: u32, version: NonZeroU32 } | 🟡 Operation | Slot { data: T, occupied: u32 } | Slot { 🟣 next_free: slot3, 🟢 vacant: u32 } | Slot { data: T, occupied: u32 } | Slot { 🟣 next_free: ext, vacant: u32 } |
| 🔵 Ptr { idx: u32, version: NonZeroU32 } | 🔵 Region | Slot { data: T, occupied: u32 } | Slot { next_free: ext, vacant: u32 } | Slot { data: T, occupied: u32 } | Slot { data: T, occupied: u32 } |
| 🔴 Ptr { idx: u32, version: NonZeroU32 } | 🔴 BasicBlock | Slot { data: T, occupied: u32 } | Slot { data: T, occupied: u32 } | Slot { data: T, occupied: u32 } | Slot { data: T, occupied: u32 } |
- Pliron features a type called `Context`, which serves as the central storage for all IR (Intermediate Representation) data during a compilation session. This includes operations, regions, basic blocks, and additional metadata. Below is an overview of its structure:
#[derive(Default)]
pub struct Context {
/// Allocation pool for [Operation]s.
pub operations: ArenaCell<Operation>,
/// Allocation pool for [BasicBlock]s.
pub basic_blocks: ArenaCell<BasicBlock>,
/// Allocation pool for [Region]s.
pub regions: ArenaCell<Region>,
/// Registered [Dialect]s.
pub dialects: FxHashMap<DialectName, Dialect>,
/// Registered [Op](crate::op::Op)s.
pub ops: FxHashMap<OpId, OpCreator>,
/// Storage for uniqued [TypeObj]s.
pub(crate) type_store: UniqueStore<TypeObj>,
/// Storage for other uniqued objects.
pub(crate) uniqued_any_store: UniqueStore<UniquedAny>,
#[cfg(test)]
pub(crate) linked_list_store: crate::linked_list::tests::LinkedListTestArena,
}
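Since `Context` derives `Default`, a fresh, empty context can be created directly. A minimal sketch (assuming `pliron` is added as a dependency; it only touches the public fields shown above):

use pliron::context::Context;

fn main() {
    // A new compilation session starts with empty arenas and registries.
    let ctx = Context::default();
    assert!(ctx.dialects.is_empty());
    assert!(ctx.ops.is_empty());
}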
`ArenaCell<T>` is just a type alias:
// Note the SlotMap holds a RefCell<T> not a <T>
pub type ArenaCell<T> = SlotMap<ArenaIndex, RefCell<T>>;
/// Slot map, storage with stable unique keys.
///
/// See [crate documentation](crate) for more details.
#[derive(Debug)]
pub struct SlotMap<K: Key, V> {
slots: Vec<Slot<V>>,
free_head: u32,
num_elems: u32,
_k: PhantomData<fn(K) -> K>,
}
// A slot, which represents storage for a value and a current version.
// Can be occupied or vacant.
struct Slot<T> {
u: SlotUnion<T>, // union whose variants are either IR data or idx to the next free slot.
version: u32, // Even = vacant, odd = occupied.
}
// Storage inside a slot or metadata for the freelist when vacant.
union SlotUnion<T> {
value: ManuallyDrop<T>,
next_free: u32,
}
The `slotmap` crate provides a map type called `SlotMap`, which serves as the backing store for all IR (Intermediate Representation) data.
- Note: A SlotMap is essentially a wrapper around a `Vec` of slots. Each slot can either store IR data (such as an operation, region, or block) or act as metadata for the freelist when it is vacant.
- A key (pun intended) point to note is that the keys themselves are not stored in the SlotMap.
- The keys are of type `ArenaIndex`, which implements the `Key` trait. Keys are generated when new data is inserted into a SlotMap. These keys can be stored and later used to index into the SlotMap.
- In simple terms, a key is a combination of an index and a generational version.
/// Key(s) have to implement this trait to access stored values in a slot map.
pub unsafe trait Key: From<KeyData> + few more traits {...}
/// The actual data stored in a [`Key`].
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct KeyData {
idx: u32,
version: NonZeroU32, // generational version.
}
`SlotMap` also includes a helper macro to create new key types: `new_key_type!`. In Pliron, each type of slot map (such as those for operations, blocks, regions, etc.) wraps its key type in a `Ptr`.
- This design ensures type safety by entirely preventing the use of a wrong key with the wrong slot map.
new_key_type! {
pub struct ArenaIndex;
}
/// The above macro expands to this
#[repr(transparent)]
pub struct ArenaIndex(::slotmap::KeyData); // and also implements the usual traits - Copy + Clone etc.
/// Pointer to an IR Object owned by Context.
#[derive(Debug)]
pub struct Ptr<T: ArenaObj> {
pub(crate) idx: ArenaIndex,
pub(crate) _dummy: PhantomData<T>,
}
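To see how generational keys behave in practice, here is a small standalone sketch that uses the `slotmap` crate directly (outside of Pliron); the key and value types are made up for illustration:

use slotmap::{new_key_type, SlotMap};

new_key_type! { pub struct NodeKey; }

fn main() {
    let mut arena: SlotMap<NodeKey, String> = SlotMap::with_key();

    // Inserting returns a key (index + version); the key itself is not stored in the map.
    let k1 = arena.insert("op1".to_string());
    assert_eq!(arena.get(k1).map(String::as_str), Some("op1"));

    // Removing vacates the slot and bumps its version.
    arena.remove(k1);

    // A later insert may reuse the same slot index, but with a newer version...
    let k2 = arena.insert("op2".to_string());

    // ...so the stale key is safely rejected instead of aliasing the new data.
    assert!(arena.get(k1).is_none());
    assert_eq!(arena.get(k2).map(String::as_str), Some("op2"));
}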
Pliron stores function pointers inside a `Box`; each such function pointer performs a dynamic trait conversion (from a value stored as `dyn Any` to a specific trait object).
#[distributed_slice]
pub static TRAIT_CASTERS: [LazyLock<((TypeId, TypeId), Box<dyn ClonableAny + Sync + Send>)>];
static TRAIT_CASTERS_MAP: LazyLock<
FxHashMap<(TypeId, TypeId), Box<dyn ClonableAny + Sync + Send>>,
> = LazyLock::new(|| {
TRAIT_CASTERS
.iter()
.map(|lazy_tuple| (**lazy_tuple).clone())
.collect()
});
- A LazyLock in Rust is a mechanism that allows you to defer the initialization of a value until it is first accessed at runtime. This is often referred to as lazy initialization.
Why Use `LazyLock`?
- Performance Optimization: If initializing a value is expensive (e.g., it involves heavy computation or allocating a large data structure), you might want to delay that work until it's absolutely necessary.
- Avoid Unnecessary Work: If the value is never used, you avoid the cost of initializing it altogether.
- Global State: LazyLock is often used for global variables or static data that need to be initialized lazily and safely in a multi-threaded context.
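A minimal, self-contained example of `std::sync::LazyLock` (stable since Rust 1.80), independent of Pliron; the map contents are invented for illustration:

use std::collections::HashMap;
use std::sync::LazyLock;

// The HashMap is only built the first time CONFIG is accessed.
static CONFIG: LazyLock<HashMap<&'static str, u32>> = LazyLock::new(|| {
    let mut m = HashMap::new();
    m.insert("max_inline_depth", 4);
    m.insert("opt_level", 2);
    m
});

fn main() {
    // First access triggers initialization; later accesses reuse the same value.
    assert_eq!(CONFIG.get("opt_level"), Some(&2));
}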
Pliron provides a way to cast a type that is stored as a `dyn Any` (a dynamically typed object) into a `dyn Trait` (a dynamically typed trait object). This is useful when you have a type that implements a trait, and you want to cast it to that trait dynamically at runtime.
- `dyn Any`: This is a type that can hold any type of data, but you don't know what type it is at compile time. You can think of it as a "box" that can hold any kind of value.
- `dyn Trait`: This is a trait object, which means it can hold any type that implements a specific trait. For example, if you have a trait `Trait`, then `dyn Trait` can hold any type that implements `Trait`.
- Casting: The process of converting one type to another. In this case, we want to convert a `dyn Any` to a `dyn Trait`.
- `type_to_trait!` Macro: This macro is used to specify that a certain type can be cast to a certain trait. For example, if you have a type `S1` that implements a trait `Trait`, you can use this macro to tell the system that `S1` can be cast to `dyn Trait`.
- `any_to_trait` Function: This function takes a `dyn Any` and tries to cast it to a `dyn Trait`. It does this by looking up a global map (`TRAIT_CASTERS_MAP`) to see if there's a registered caster for the given type and trait. If it finds one, it uses that caster to perform the cast.
- Global Map (`TRAIT_CASTERS_MAP`): This map stores all the registered casters. Each caster is associated with a pair of `TypeId`s: one for the type and one for the trait. When you call `any_to_trait`, it looks up the caster in this map using the `TypeId` of the type and the `TypeId` of the trait.
- `ClonableAny` Trait: This is a helper trait that combines `Any`, `DynClone`, and `Downcast`. It allows the casters to be stored in the global map and cloned if needed.
  - `Any`: All concrete types in Rust implement `Any` by default, including function pointers.
    - This is because `Any` is implemented for all `'static` types: `impl<T: 'static> Any for T { }`
  - `DynClone`: Normally, trait objects in Rust do not support cloning because `Clone` requires `Self: Sized`, which trait objects aren't. `DynClone` works around this by defining a custom `clone_box` method and implementing `Clone` for `Box<dyn Trait>`. In other words, `DynClone` enables cloning `Box<dyn Trait>`, `Arc<dyn Trait>`, etc. (A short standalone sketch follows this list.)
    - `DynClone` is implemented for all `Clone` types, meaning function pointers can be treated as `DynClone` when boxed.
  - `Downcast`: This trait adds downcasting support to trait objects using only safe Rust. The Downcast trait is just a helper to make it easier to convert a trait object back into its original type. It does this by exposing an `as_any` method, which is used to downcast the caster (i.e., the stored fn pointer) to a concrete function pointer type:
pub trait Downcast: Any {
    fn as_any(&self) -> &dyn Any;
}

pub fn any_to_trait<T: ?Sized + 'static>(r: &dyn Any) -> Option<&T> {
    TRAIT_CASTERS_MAP
        .get(&(r.type_id(), TypeId::of::<T>()))
        .and_then(|caster| {
            // While Any allows runtime type checking, it doesn't provide a direct way to cast
            // from dyn Trait back to Any in a generic way.
            // By requiring Downcast, Pliron ensures that every type explicitly provides as_any(),
            // which is needed in its trait casting system.
            if let Some(caster) = (**caster)
                .as_any()
                .downcast_ref::<for<'a> fn(&'a (dyn Any + 'static)) -> Option<&'a T>>()
            {
                return caster(r);
            }
            None
        })
}
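As a standalone illustration of the `DynClone` piece (the trait and type names below are invented for the example), the `dyn-clone` crate lets a boxed trait object be cloned:

use dyn_clone::DynClone;

trait Caster: DynClone {
    fn describe(&self) -> String;
}

// Generates `impl Clone for Box<dyn Caster>`.
dyn_clone::clone_trait_object!(Caster);

#[derive(Clone)]
struct AddCaster;

impl Caster for AddCaster {
    fn describe(&self) -> String {
        "casts to dyn Add".to_string()
    }
}

fn main() {
    let original: Box<dyn Caster> = Box::new(AddCaster);
    let copy = original.clone(); // possible only because Caster: DynClone
    assert_eq!(copy.describe(), original.describe());
}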
- Let's say you have a type `S1` that implements a trait `Trait`. You can use the `type_to_trait!` macro to register that `S1` can be cast to `dyn Trait`:
type_to_trait!(S1, Trait);
Now, if you have a `dyn Any` that contains an `S1`, you can cast it to `dyn Trait` like this:
let s1: &dyn Any = &S1;
let trait_obj = any_to_trait::<dyn Trait>(s1).expect("Expected S1 to implement Trait");
- `type_to_trait!`: Registers that a type can be cast to a trait.
- `any_to_trait`: Performs the cast from `dyn Any` to `dyn Trait`.
- Global Map: Stores all the registered casters.
- `ClonableAny`: A helper trait to make the casters storable and clonable.
In short: This code is essentially a way to dynamically cast types to traits at runtime, which can be very useful in situations where you need to work with types that are not known until runtime.
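The mechanism can be illustrated with a stripped-down, standalone sketch (no macros, no distributed slices; all names below are invented): a registry maps a concrete `TypeId` to a plain function pointer that attempts the `&dyn Any` to `&dyn Trait` conversion.

use std::any::{Any, TypeId};
use std::collections::HashMap;

trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

// A caster is a plain fn pointer that tries to go from &dyn Any to &dyn Shape.
type Caster = fn(&dyn Any) -> Option<&dyn Shape>;

fn circle_caster(a: &dyn Any) -> Option<&dyn Shape> {
    a.downcast_ref::<Circle>().map(|c| c as &dyn Shape)
}

fn main() {
    // Registry keyed by the concrete TypeId; Pliron keys on (type TypeId, trait TypeId) instead.
    let mut casters: HashMap<TypeId, Caster> = HashMap::new();
    casters.insert(TypeId::of::<Circle>(), circle_caster);

    let value: &dyn Any = &Circle { r: 1.0 };
    let shape = casters
        .get(&value.type_id())
        .and_then(|cast| cast(value))
        .expect("Circle should cast to dyn Shape");
    assert!((shape.area() - std::f64::consts::PI).abs() < 1e-9);
}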
Pliron builds its use-def and def-use chains on Rust's type system, i.e., the implementation is memory-safe.
Pliron's use-def and def-use chains are built around two key concepts:
- Definitions (Def): These are values or blocks that produce something (e.g., the result of an operation or a block argument).
- Uses (Use): These are places where definitions are referenced (e.g., operands in an operation or successors of a block).
The chains link definitions to their uses and vice versa, enabling efficient traversal and analysis of the IR.
- `Value`:
  - Represents a definition in the IR.
  - Can be either:
    - An operation result (e.g., `%0 = arith.addi %1, %2` defines `%0`).
    - A block argument (e.g., a function parameter or a phi node input).
/// Describes a value definition.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub enum Value {
OpResult {
op: Ptr<Operation>,
res_idx: usize,
},
BlockArgument {
block: Ptr<BasicBlock>,
arg_idx: usize,
},
}
- `Ptr<BasicBlock>`:
  - Represents a block definition.
  - Its uses are in the predecessor branch operations (e.g., a branch that jumps to this block).
- `Use`:
  - Represents a use of a definition.
  - Can be:
    - A value use (e.g., `%1` in `%0 = arith.addi %1, %2`).
    - A block use (e.g., a successor block in a branch operation).
- `DefNode`:
  - Tracks all the uses of a definition.
  - Contains a `FxHashSet<Use<T>>` to store the uses efficiently. `FxHashSet` is a fast, deterministic hash set used to store `Use` objects in `DefNode` for efficient lookups and updates.
- `UseNode`:
  - Tracks the definition being used.
  - Contains a pointer to the definition (`def: T`).
/// def-use chains are implemented for [Value]s and `Ptr<BasicBlock>`.
pub trait DefUseParticipant: Copy + Hash + Eq {}
impl DefUseParticipant for Value {}
impl DefUseParticipant for Ptr<BasicBlock> {}
/// A def node contains a list of its uses.
pub(crate) struct DefNode<T: DefUseParticipant> {
/// The list of uses of this Def.
uses: FxHashSet<Use<T>>,
}
/// A use node contains a pointer to its definition.
#[derive(Clone, Copy, Debug)]
pub(crate) struct UseNode<T: DefUseParticipant> {
/// The definition that this is a use of.
def: T,
}
/// Describes a [Value] or [BasicBlock] use.
#[derive(Clone, Copy, Eq, PartialEq, Hash)]
pub struct Use<T: DefUseParticipant> {
/// Uses of a def can only be in an operation.
pub op: Ptr<Operation>,
/// Used as the i'th operand or successor of [op](Self::op).
pub opd_idx: usize,
pub(crate) _dummy: PhantomData<T>,
}
- Def-Use Chain:
  - Each definition (`Value` or `Ptr<BasicBlock>`) has a `DefNode`.
  - The `DefNode` stores a set of `Use` objects, representing all the places where the definition is used.
  - For example:
    - If `%0` is defined by an operation, its `DefNode` will store all the `Use` objects where `%0` is referenced as an operand.
- Use-Def Chain:
  - Each `Use` object points back to its definition.
  - For example:
    - If `%0` is used in an operation, the `Use` object for that operand will point to the `Value` representing `%0`.
- Updating Chains:
  - When a new use is added, it is registered in the `DefNode` of the definition.
  - When a use is removed, it is deleted from the `DefNode`.
  - When a use is replaced (e.g., during optimization), the chains are updated to reflect the new definition.
Operations in a compilation `Context`:
- Can retrieve their `OpResult`(s). An `OpResult` has a `def` field that contains the list of the result's uses, i.e., a `DefNode`.
- Can also retrieve their `Operand`(s). An `Operand` has a `use` field that serves as the container for a `Use` in an operation, i.e., a `UseNode`.
Pliron provides two traits to enable `Value`(s) or `Ptr<BasicBlock>`(s) to perform the actual retrieval:
/// Interface for [UseNode] wrappers.
pub(crate) trait UseTrait: DefUseParticipant {
/// Get a mutable reference to the [UseNode] described by this use.
fn get_usenode_mut<'a>(r#use: &Use<Self>, ctx: &'a Context) -> RefMut<'a, UseNode<Self>>;
}
/// Interface for [DefNode] wrappers.
pub(crate) trait DefTrait: DefUseParticipant {
/// Get a reference to the underlying [DefNode].
fn get_defnode_ref<'a>(&self, ctx: &'a Context) -> Ref<'a, DefNode<Self>>;
/// Get a mutable reference to the underlying [DefNode].
fn get_defnode_mut<'a>(&self, ctx: &'a Context) -> RefMut<'a, DefNode<Self>>;
}
Let’s say we have the following IR snippet:
%0 = arith.constant 42 : i32 // Defines %0
%1 = arith.addi %0, %0 : i32 // Uses %0 twice
- Def-Use Chain for `%0`:
  - The `DefNode` for `%0` will store two `Use` objects:
    - One for the first operand of `arith.addi`.
    - One for the second operand of `arith.addi`.
- Use-Def Chain for `arith.addi`:
  - Each operand in `arith.addi` has a `Use` object pointing back to `%0`.
- `DefNode::add_use`:
  - Adds a new `Use` to the definition's `DefNode`.
  - Ensures the use is tracked in the def-use chain.
- `DefNode::remove_use`:
  - Removes a `Use` from the definition's `DefNode`.
  - Ensures the use is no longer tracked.
- `DefNode::replace_use_with`:
  - Replaces a use of one definition with another.
  - Updates both the def-use and use-def chains.
- `Value::replace_some_uses_with`:
  - Replaces specific uses of a value with another value.
  - Useful for optimizations like constant propagation or dead code elimination.
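To make the bookkeeping concrete, here is a tiny standalone sketch (toy types, not Pliron's actual API) of a DefNode-style use set being updated, mirroring the `%0` example above:

use std::collections::HashSet;

// Toy stand-ins for Pliron's Ptr<Operation> and operand indices; for illustration only.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Use {
    op: u32,        // which operation contains the use
    opd_idx: usize, // which operand slot within that operation
}

#[derive(Default)]
struct DefNode {
    uses: HashSet<Use>,
}

impl DefNode {
    fn add_use(&mut self, u: Use) { self.uses.insert(u); }
    fn remove_use(&mut self, u: &Use) { self.uses.remove(u); }
}

fn main() {
    // Mirrors the IR above: %0 = arith.constant 42 ; %1 = arith.addi %0, %0
    let mut def_of_v0 = DefNode::default();
    def_of_v0.add_use(Use { op: 1, opd_idx: 0 }); // first operand of arith.addi
    def_of_v0.add_use(Use { op: 1, opd_idx: 1 }); // second operand of arith.addi
    assert_eq!(def_of_v0.uses.len(), 2);

    // Removing a use (e.g., after rewriting the second operand) updates the chain.
    def_of_v0.remove_use(&Use { op: 1, opd_idx: 1 });
    assert_eq!(def_of_v0.uses.len(), 1);
}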
Pliron's implementation of use-def and def-use chains is a neat application of Rust's strengths. By using `DefNode` and `UseNode` to track the relationships between definitions and uses, Pliron enables optimizations and analyses while maintaining memory safety and performance.
- What's a Pliron `Operation`?
pub struct Operation {
/// OpId of self. Composed of `dialect` name and `Op` name.
pub(crate) opid: OpId,
/// A [Ptr] to self.
pub(crate) self_ptr: Ptr<Operation>,
/// [Results](OpResult) defined by self.
pub(crate) results: Vec<OpResult>,
/// [Operand]s used by self.
pub(crate) operands: Vec<Operand<Value>>,
/// A list of basic blocks that this operation may transfer control to.
/// Typically relevant for control flow operations like branches and jumps.
pub(crate) successors: Vec<Operand<Ptr<BasicBlock>>>,
/// Links to the parent [BasicBlock] and
/// previous and next [Operation]s in the block.
pub(crate) block_links: BlockLinks,
/// A dictionary of attributes.
pub attributes: AttributeDict,
/// Regions (nested IR) contained inside this operation.
/// Used for operations that contain sub-graphs,
/// like functions or control flow constructs (e.g., if or while).
pub(crate) regions: Vec<Ptr<Region>>,
/// Source location of this operation.
loc: Location,
}
- Annotate a unit struct with the `#[def_op(..)]` macro to turn it into a Pliron-specific operation.
  - Note: the prerequisite traits for an `Op` (Printable, Parsable, and Verify) must be implemented.
- Annotate a trait with the `#[op_interface]` macro to turn it into a Pliron interface, allowing any Pliron operation to implement it.
- Additionally, we can either:
  - annotate a Pliron `Op` with the derive macro `#[derive_op_interface_impl(..list_of_interfaces)]` to implement a Pliron interface, (OR)
  - annotate a trait implementation with the `#[op_interface_impl]` macro.
/// A Pliron interface.
#[op_interface]
pub trait SomePlironInterface {
fn verify(_op: &dyn Op, _ctx: &Context) -> Result<()>
where
Self: Sized,
{
Ok(())
}
}
/// A Pliron Op.
///
/// Equivalent to CLIF's return opcode.
///
/// Operands:
///
/// | Operand | Description |
/// |---------|-------------|
/// | `arg` | any type |
#[def_op("clif.return")]
#[derive_op_interface_impl(SomePlironInterface)] // either annotate Op with a derive macro
pub struct ReturnOp;
// OR
#[op_interface_impl]
impl SomePlironInterface for ReturnOp { .. }
- When an `Op` is verified, its interfaces are also automatically verified, with the guarantee that a super-interface is verified before the interface itself is.
  - Verification involves verifying the `Op`, the `Interface`s it implements, any required `Attribute`s, and the `Operand`s it takes. See `impl Verify for Operation` below.
impl ::pliron::op::Op for ReturnOp {
fn get_operation(&self) -> ::pliron::context::Ptr<::pliron::operation::Operation> {
self.op
}
fn wrap_operation(
op: ::pliron::context::Ptr<::pliron::operation::Operation>,
) -> ::pliron::op::OpObj {
Box::new(ReturnOp { op })
}
fn get_opid(&self) -> ::pliron::op::OpId {
Self::get_opid_static()
}
fn get_opid_static() -> ::pliron::op::OpId {
::pliron::op::OpId {
name: ::pliron::op::OpName::new("return"),
dialect: ::pliron::dialect::DialectName::new("clif"),
}
}
fn verify_interfaces(
&self,
ctx: &::pliron::context::Context,
) -> ::pliron::result::Result<()> {
if let Some(interface_verifiers) =
::pliron::op::OP_INTERFACE_VERIFIERS_MAP.get(&Self::get_opid_static())
{
for (_, verifier) in interface_verifiers {
verifier(self, ctx)?;
}
}
Ok(())
}
}
// verifying an Op and its interfaces
impl Verify for Operation {
fn verify(&self, ctx: &Context) -> Result<()> {
for attr in self.attributes.0.values() {
attr.verify(ctx)?;
attr.verify_interfaces(ctx)?;
}
for opd in &self.operands {
opd.verify(ctx)?;
}
for opd in &self.successors {
opd.verify(ctx)?;
}
for region in &self.regions {
region.verify(ctx)?;
}
Self::get_op(self.self_ptr, ctx).verify_interfaces(ctx)?;
Self::get_op(self.self_ptr, ctx).verify(ctx)
}
}
- Parsable, Printable, and Verify are implemented via helper macros for an `Op`:
impl_canonical_syntax!(ReturnOp);
impl_verify_succ!(ReturnOp);
// expands to the following
impl ::pliron::printable::Printable for ReturnOp {
fn fmt(
&self,
ctx: &::pliron::context::Context,
state: &::pliron::printable::State,
f: &mut std::fmt::Formatter<'_>,
) -> std::fmt::Result {
::pliron::op::canonical_syntax_print(Box::new(*self), ctx, state, f)
}
}
impl ::pliron::parsable::Parsable for ReturnOp {
type Arg = Vec<(
::pliron::identifier::Identifier,
::pliron::location::Location,
)>;
type Parsed = ::pliron::op::OpObj;
fn parse<'a>(
state_stream: &mut ::pliron::parsable::StateStream<'a>,
results: Self::Arg,
) -> ::pliron::parsable::ParseResult<'a, Self::Parsed> {
::pliron::op::canonical_syntax_parser(
<Self as ::pliron::op::Op>::get_opid_static(),
results,
)
.parse_stream(state_stream)
.into()
}
}
impl ::pliron::common_traits::Verify for ReturnOp {
fn verify(&self, _ctx: &::pliron::context::Context) -> ::pliron::result::Result<()> {
Ok(())
}
}
- All Pliron interfaces (i.e., those that can be implemented by a Pliron Op) provide a verify method.
- Its signature is of type `OpInterfaceVerifier`.
- `OP_INTERFACE_VERIFIERS`: A slice containing a collection of `OpId`s along with tuples of interface `TypeId`s and `OpInterfaceVerifier`s.
- `OP_INTERFACE_DEPS`: Represents interfaces that may require the implementation of a list of super traits.
- `OP_INTERFACE_VERIFIERS_MAP`: Maps `OpId`s to the list of verifiers for the interfaces the corresponding `Op` implements.
  - Simply put, for each operation, retrieve the interface verifiers for the interfaces it implements.
/// Every op interface must have a function named `verify` with this type.
pub type OpInterfaceVerifier = fn(&dyn Op, &Context) -> Result<()>;
/// [Op]s paired with every interface it implements (and the verifier for that interface).
#[distributed_slice]
pub static OP_INTERFACE_VERIFIERS: [LazyLock<(OpId, (std::any::TypeId, OpInterfaceVerifier))>];
/// All interfaces mapped to their super-interfaces
#[distributed_slice]
pub static OP_INTERFACE_DEPS: [LazyLock<(std::any::TypeId, Vec<std::any::TypeId>)>];
/// A map from every [Op] to its ordered (as per interface deps) list of interface verifiers.
/// An interface's super-interfaces are to be verified before it itself is.
pub static OP_INTERFACE_VERIFIERS_MAP:
LazyLock<FxHashMap<OpId, Vec<(std::any::TypeId, OpInterfaceVerifier)>>>
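The registration mechanism behind these statics is the `linkme` crate's `#[distributed_slice]`: each Op/interface pair contributes an entry from its own module, and all entries are gathered into a single slice at link time. A standalone sketch (names invented for illustration):

use linkme::distributed_slice;

// Declares the slice; entries can be contributed from anywhere in the crate graph.
#[distributed_slice]
pub static VERIFIERS: [fn() -> &'static str];

fn verify_return_op() -> &'static str {
    "clif.return: ok"
}

// Registers one entry; linkme collects it into VERIFIERS at link time.
#[distributed_slice(VERIFIERS)]
static RETURN_OP_VERIFIER: fn() -> &'static str = verify_return_op;

fn main() {
    for verify in VERIFIERS {
        println!("{}", verify());
    }
}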