Lab 06: Code introspection and metaprogramming
In this lab we are first going to look at some of the tooling that helps you understand what Julia does under the hood, such as:
- looking at the code at different levels
- understanding what method is being called
- showing different levels of code optimization
Secondly we will start playing with the metaprogramming side of Julia, mainly covering:
- how to view abstract syntax tree (AST) of Julia code
- how to manipulate AST
These topics will be extended in the next lecture/lab, where we are going to use metaprogramming to manipulate code with macros.
We will again be getting a little ahead of ourselves, as we are going to use quite a few macros that will only be properly explained in the next lecture. For now, the important thing to know is that a macro is just a special kind of function that accepts Julia code as an argument and can modify it.
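As a tiny preview (a sketch for illustration only; the macro name @show_expr is made up for this example), a macro receives the code written at its call site as an Expr and returns the (possibly modified) code to be run:

macro show_expr(ex)
    println("received: ", ex, " of type ", typeof(ex)) # runs when the macro is expanded
    return ex                                          # splice the (unmodified) code back in
end

@show_expr 1 + 2   # prints `received: 1 + 2 of type Expr` and then evaluates to 3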
Quick reminder of introspection tooling
Let's start with the topic of code inspection, e.g. we may ask the following: what happens when Julia evaluates [i for i in 1:10]?
parsing
julia> :([i for i in 1:10]) |> dump
Expr
  head: Symbol comprehension
  args: Array{Any}((1,))
    1: Expr
      head: Symbol generator
      args: Array{Any}((2,))
        1: Symbol i
        2: Expr
          head: Symbol =
          args: Array{Any}((2,))
            1: Symbol i
            2: Expr
              head: Symbol call
              args: Array{Any}((3,))
                1: Symbol :
                2: Int64 1
                3: Int64 10
lowering
julia> Meta.@lower [i for i in 1:10]
:($(Expr(:thunk, CodeInfo(
    @ none within `top-level scope`
1 ─ %1 = 1:10
│   %2 = Base.Generator(Base.identity, %1)
│   %3 = Base.collect(%2)
└──      return %3
))))
typing
julia> f() = [i for i in 1:10]
f (generic function with 1 method)
julia> @code_typed debuginfo=:none f()
CodeInfo( 1 ── %1 = $(Expr(:foreigncall, :(:jl_alloc_genericmemory), Ref{Memory{Int64}}, svec(Any, Int64), 0, :(:ccall), Memory{Int64}, 10, 10))::Memory{Int64} │ %2 = Core.memoryrefnew(%1)::MemoryRef{Int64} │ %3 = %new(Vector{Int64}, %2, (10,))::Vector{Int64} │ %4 = $(Expr(:boundscheck, true))::Bool └─── goto #5 if not %4 2 ── %6 = Base.sub_int(1, 1)::Int64 │ %7 = Base.bitcast(UInt64, %6)::UInt64 │ %8 = Base.getfield(%3, :size)::Tuple{Int64} │ %9 = $(Expr(:boundscheck, true))::Bool │ %10 = Base.getfield(%8, 1, %9)::Int64 │ %11 = Base.bitcast(UInt64, %10)::UInt64 │ %12 = Base.ult_int(%7, %11)::Bool └─── goto #4 if not %12 3 ── goto #5 4 ── %15 = Core.tuple(1)::Tuple{Int64} │ invoke Base.throw_boundserror(%3::Vector{Int64}, %15::Tuple{Int64})::Union{} └─── unreachable 5 ┄─ %18 = Base.getfield(%3, :ref)::MemoryRef{Int64} │ %19 = Base.memoryrefnew(%18, 1, false)::MemoryRef{Int64} │ Base.memoryrefset!(%19, 1, :not_atomic, false)::Int64 └─── goto #6 6 ── nothing::Nothing 7 ┄─ %23 = φ (#6 => 2, #20 => %57)::Int64 │ %24 = φ (#6 => 1, #20 => %32)::Int64 │ %25 = (%24 === 10)::Bool └─── goto #9 if not %25 8 ── goto #10 9 ── %28 = Base.add_int(%24, 1)::Int64 └─── goto #10 10 ┄ %30 = φ (#8 => true, #9 => false)::Bool │ %31 = φ (#9 => %28)::Int64 │ %32 = φ (#9 => %28)::Int64 └─── goto #12 if not %30 11 ─ goto #13 12 ─ goto #13 13 ┄ %36 = φ (#11 => true, #12 => false)::Bool └─── goto #15 if not %36 14 ─ goto #21 15 ─ %39 = $(Expr(:boundscheck, false))::Bool └─── goto #19 if not %39 16 ─ %41 = Base.sub_int(%23, 1)::Int64 │ %42 = Base.bitcast(UInt64, %41)::UInt64 │ %43 = Base.getfield(%3, :size)::Tuple{Int64} │ %44 = $(Expr(:boundscheck, true))::Bool │ %45 = Base.getfield(%43, 1, %44)::Int64 │ %46 = Base.bitcast(UInt64, %45)::UInt64 │ %47 = Base.ult_int(%42, %46)::Bool └─── goto #18 if not %47 17 ─ goto #19 18 ─ %50 = Core.tuple(%23)::Tuple{Int64} │ invoke Base.throw_boundserror(%3::Vector{Int64}, %50::Tuple{Int64})::Union{} └─── unreachable 19 ┄ %53 = Base.getfield(%3, :ref)::MemoryRef{Int64} │ %54 = Base.memoryrefnew(%53, %23, false)::MemoryRef{Int64} │ Base.memoryrefset!(%54, %31, :not_atomic, false)::Int64 └─── goto #20 20 ─ %57 = Base.add_int(%23, 1)::Int64 └─── goto #7 21 ─ goto #22 22 ─ goto #23 23 ─ goto #24 24 ─ return %3 ) => Vector{Int64}
LLVM code generation
julia> @code_llvm debuginfo=:none f()
; Function Signature: f() define nonnull ptr @julia_f_36867() #0 { L18: %gcframe1 = alloca [3 x ptr], align 16 call void @llvm.memset.p0.i64(ptr align 16 %gcframe1, i8 0, i64 24, i1 true) %thread_ptr = call ptr asm "movq %fs:0, $0", "=r"() #10 %tls_ppgcstack = getelementptr i8, ptr %thread_ptr, i64 -8 %tls_pgcstack = load ptr, ptr %tls_ppgcstack, align 8 store i64 4, ptr %gcframe1, align 16 %frame.prev = getelementptr inbounds ptr, ptr %gcframe1, i64 1 %task.gcstack = load ptr, ptr %tls_pgcstack, align 8 store ptr %task.gcstack, ptr %frame.prev, align 8 store ptr %gcframe1, ptr %tls_pgcstack, align 8 %"Memory{Int64}[]" = call ptr @jl_alloc_genericmemory(ptr nonnull @"+Core.GenericMemory#36869.jit", i64 10) %.data_ptr = getelementptr inbounds { i64, ptr }, ptr %"Memory{Int64}[]", i64 0, i32 1 %0 = load ptr, ptr %.data_ptr, align 8 %gc_slot_addr_0 = getelementptr inbounds ptr, ptr %gcframe1, i64 2 store ptr %"Memory{Int64}[]", ptr %gc_slot_addr_0, align 16 %ptls_field = getelementptr inbounds ptr, ptr %tls_pgcstack, i64 2 %ptls_load = load ptr, ptr %ptls_field, align 8 %"new::Array" = call noalias nonnull align 8 dereferenceable(32) ptr @ijl_gc_pool_alloc_instrumented(ptr %ptls_load, i32 800, i32 32, i64 139720003577504) #8 %"new::Array.tag_addr" = getelementptr inbounds i64, ptr %"new::Array", i64 -1 store atomic i64 139720003577504, ptr %"new::Array.tag_addr" unordered, align 8 %1 = getelementptr inbounds ptr, ptr %"new::Array", i64 1 store ptr %0, ptr %"new::Array", align 8 store ptr %"Memory{Int64}[]", ptr %1, align 8 %"new::Array.size_ptr" = getelementptr inbounds i8, ptr %"new::Array", i64 16 store i64 10, ptr %"new::Array.size_ptr", align 8 store <4 x i64> <i64 1, i64 2, i64 3, i64 4>, ptr %0, align 8 %2 = getelementptr inbounds i64, ptr %0, i64 4 store <4 x i64> <i64 5, i64 6, i64 7, i64 8>, ptr %2, align 8 %3 = getelementptr inbounds i64, ptr %0, i64 8 store i64 9, ptr %3, align 8 %4 = getelementptr inbounds i64, ptr %0, i64 9 store i64 10, ptr %4, align 8 %frame.prev37 = load ptr, ptr %frame.prev, align 8 store ptr %frame.prev37, ptr %tls_pgcstack, align 8 ret ptr %"new::Array" }
native code generation
julia> @code_native debuginfo=:none f()
.text .file "f" .section .rodata.cst32,"aM",@progbits,32 .p2align 5, 0x0 # -- Begin function julia_f_37050 .LCPI0_0: .quad 1 # 0x1 .quad 2 # 0x2 .quad 3 # 0x3 .quad 4 # 0x4 .LCPI0_1: .quad 5 # 0x5 .quad 6 # 0x6 .quad 7 # 0x7 .quad 8 # 0x8 .text .globl julia_f_37050 .p2align 4, 0x90 .type julia_f_37050,@function julia_f_37050: # @julia_f_37050 ; Function Signature: f() # %bb.0: # %L18 push rbp mov rbp, rsp push r15 push r14 push r12 push rbx sub rsp, 32 vxorps xmm0, xmm0, xmm0 vmovaps xmmword ptr [rbp - 64], xmm0 mov qword ptr [rbp - 48], 0 #APP mov rax, qword ptr fs:[0] #NO_APP lea rcx, [rbp - 64] movabs rdi, offset ".L+Core.GenericMemory#37052.jit" mov esi, 10 mov r15, qword ptr [rax - 8] mov qword ptr [rbp - 64], 4 mov rax, qword ptr [r15] mov qword ptr [rbp - 56], rax movabs rax, offset jl_alloc_genericmemory mov qword ptr [r15], rcx call rax mov r12, qword ptr [rax + 8] mov qword ptr [rbp - 48], rax mov rbx, rax movabs r14, 139720003577504 movabs rax, offset ijl_gc_pool_alloc_instrumented mov esi, 800 mov edx, 32 mov rdi, qword ptr [r15 + 16] mov rcx, r14 call rax movabs rcx, offset .LCPI0_0 mov qword ptr [rax - 8], r14 mov qword ptr [rax], r12 mov qword ptr [rax + 8], rbx mov qword ptr [rax + 16], 10 vmovaps ymm0, ymmword ptr [rcx] movabs rcx, offset .LCPI0_1 vmovaps ymm1, ymmword ptr [rcx] vmovups ymmword ptr [r12], ymm0 vmovups ymmword ptr [r12 + 32], ymm1 mov qword ptr [r12 + 64], 9 mov qword ptr [r12 + 72], 10 mov rcx, qword ptr [rbp - 56] mov qword ptr [r15], rcx add rsp, 32 pop rbx pop r12 pop r14 pop r15 pop rbp vzeroupper ret .Lfunc_end0: .size julia_f_37050, .Lfunc_end0-julia_f_37050 # -- End function .type ".L_j_const#2",@object # @"_j_const#2" .section .rodata.cst8,"aM",@progbits,8 .p2align 3, 0x0 ".L_j_const#2": .quad 1 # 0x1 .size ".L_j_const#2", 8 .set ".L+Core.Array#37054.jit", 139720003577504 .size ".L+Core.Array#37054.jit", 8 .set ".L+Core.GenericMemory#37052.jit", 139720003577696 .size ".L+Core.GenericMemory#37052.jit", 8 .section ".note.GNU-stack","",@progbits
Let's see how these tools can help us understand some of Julia's internals on examples from previous labs and lectures.
Understanding runtime dispatch and type instabilities
We will start with a question: can we spot some internal difference between type stable and type unstable code?
Inspect the following two functions using @code_lowered, @code_typed, @code_llvm and @code_native.
x = rand(10^5)
function explicit_len(x)
length(x)
end
function implicit_len()
length(x)
end
For now do not try to understand the details, but focus on the overall differences, such as the length of the generated code.
If the output of the introspection tools is too long, you can use a general way of redirecting the standard output stdout to a file:
open("./llvm_fun.ll", "w") do file
original_stdout = stdout
redirect_stdout(file)
@code_llvm debuginfo=:none fun()
redirect_stdout(original_stdout)
end
In the case of @code_llvm and @code_native there are special options that allow this out of the box; see the help (?) of the underlying functions code_llvm and code_native. If you don't mind adding a dependency, there is also the @capture_out macro from Suppressor.jl, as sketched below.
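For example (a sketch; the file name is arbitrary and explicit_len is the function defined above), the function form code_llvm(io, f, types) accepts an IO object as its first argument, and @capture_out returns the printed output as a string:

using InteractiveUtils   # exports code_llvm; loaded automatically in the REPL

open("./llvm_len.ll", "w") do file
    code_llvm(file, explicit_len, (Vector{Float64},); debuginfo=:none)
end

using Suppressor
llvm_str = @capture_out @code_llvm debuginfo=:none explicit_len(x)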
Details
@code_warntype explicit_len(x)
@code_warntype implicit_len()
@code_typed debuginfo=:none explicit_len(x)
@code_typed debuginfo=:none implicit_len()
@code_llvm debuginfo=:none explicit_len(x)
@code_llvm debuginfo=:none implicit_len()
@code_native debuginfo=:none explicit_len(x)
@code_native debuginfo=:none implicit_len()
In this case we see that the generated code for such a simple operation is much longer in the type unstable case, resulting in longer run times. However, in the next example we will see that longer code is not always a bad thing.
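To also see the difference in run time, we can benchmark the two functions (a rough sketch; the exact numbers depend on your machine):

using BenchmarkTools
@btime explicit_len($x)  # type stable: a direct, cheap call
@btime implicit_len()    # type unstable: goes through dynamic dispatch and is therefore slower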
Loop unrolling
In some cases the compiler uses the loop unrolling [1] optimization to speed up loops at the expense of binary size. This optimization removes the loop control instructions and rewrites the loop into a repeated sequence of independent statements, as illustrated below.
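As a small hand-written illustration (not compiler output), unrolling a loop over a container of known, fixed length amounts to the following rewrite:

# a loop of fixed length 4 ...
function sum4_loop(a)
    s = zero(eltype(a))
    for i in 1:4
        s += a[i]
    end
    s
end

# ... can be unrolled into straight-line code with no loop counter or branching
sum4_unrolled(a) = a[1] + a[2] + a[3] + a[4]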
Inspect under what conditions the compiler unrolls the for loop in the polynomial function from the last lab.
function polynomial(a, x)
accumulator = a[end] * one(x)
for i in length(a)-1:-1:1
accumulator = accumulator * x + a[i]
end
accumulator
end
Compare the speed of execution with and without loop unrolling.
HINTS:
- these kinds of optimizations happen at a lower level than the intermediate language
- loop unrolling is possible when the compiler knows the length of the input
Details
using BenchmarkTools
a = Tuple(ones(20)) # tuple has known size
ac = collect(a)
x = 2.0
@code_lowered polynomial(a,x) # cannot be seen here as optimizations are not applied
@code_typed debuginfo=:none polynomial(a,x) # loop unrolling is not part of type inference optimization
julia> @code_llvm debuginfo=:none polynomial(a,x)
; Function Signature: polynomial(NTuple{20, Float64}, Float64) define double @julia_polynomial_37430(ptr nocapture noundef nonnull readonly align 8 dereferenceable(160) %"a::Tuple", double %"x::Float64") #0 { pass.18: %"a::Tuple[20]_ptr" = getelementptr inbounds [20 x double], ptr %"a::Tuple", i64 0, i64 19 %"a::Tuple[20]_ptr.unbox" = load double, ptr %"a::Tuple[20]_ptr", align 8 %0 = fmul double %"a::Tuple[20]_ptr.unbox", %"x::Float64" %1 = getelementptr inbounds double, ptr %"a::Tuple", i64 18 %.unbox = load double, ptr %1, align 8 %2 = fadd double %0, %.unbox %3 = fmul double %2, %"x::Float64" %4 = getelementptr inbounds double, ptr %"a::Tuple", i64 17 %.unbox.1 = load double, ptr %4, align 8 %5 = fadd double %3, %.unbox.1 %6 = fmul double %5, %"x::Float64" %7 = getelementptr inbounds double, ptr %"a::Tuple", i64 16 %.unbox.2 = load double, ptr %7, align 8 %8 = fadd double %6, %.unbox.2 %9 = fmul double %8, %"x::Float64" %10 = getelementptr inbounds double, ptr %"a::Tuple", i64 15 %.unbox.3 = load double, ptr %10, align 8 %11 = fadd double %9, %.unbox.3 %12 = fmul double %11, %"x::Float64" %13 = getelementptr inbounds double, ptr %"a::Tuple", i64 14 %.unbox.4 = load double, ptr %13, align 8 %14 = fadd double %12, %.unbox.4 %15 = fmul double %14, %"x::Float64" %16 = getelementptr inbounds double, ptr %"a::Tuple", i64 13 %.unbox.5 = load double, ptr %16, align 8 %17 = fadd double %15, %.unbox.5 %18 = fmul double %17, %"x::Float64" %19 = getelementptr inbounds double, ptr %"a::Tuple", i64 12 %.unbox.6 = load double, ptr %19, align 8 %20 = fadd double %18, %.unbox.6 %21 = fmul double %20, %"x::Float64" %22 = getelementptr inbounds double, ptr %"a::Tuple", i64 11 %.unbox.7 = load double, ptr %22, align 8 %23 = fadd double %21, %.unbox.7 %24 = fmul double %23, %"x::Float64" %25 = getelementptr inbounds double, ptr %"a::Tuple", i64 10 %.unbox.8 = load double, ptr %25, align 8 %26 = fadd double %24, %.unbox.8 %27 = fmul double %26, %"x::Float64" %28 = getelementptr inbounds double, ptr %"a::Tuple", i64 9 %.unbox.9 = load double, ptr %28, align 8 %29 = fadd double %27, %.unbox.9 %30 = fmul double %29, %"x::Float64" %31 = getelementptr inbounds double, ptr %"a::Tuple", i64 8 %.unbox.10 = load double, ptr %31, align 8 %32 = fadd double %30, %.unbox.10 %33 = fmul double %32, %"x::Float64" %34 = getelementptr inbounds double, ptr %"a::Tuple", i64 7 %.unbox.11 = load double, ptr %34, align 8 %35 = fadd double %33, %.unbox.11 %36 = fmul double %35, %"x::Float64" %37 = getelementptr inbounds double, ptr %"a::Tuple", i64 6 %.unbox.12 = load double, ptr %37, align 8 %38 = fadd double %36, %.unbox.12 %39 = fmul double %38, %"x::Float64" %40 = getelementptr inbounds double, ptr %"a::Tuple", i64 5 %.unbox.13 = load double, ptr %40, align 8 %41 = fadd double %39, %.unbox.13 %42 = fmul double %41, %"x::Float64" %43 = getelementptr inbounds double, ptr %"a::Tuple", i64 4 %.unbox.14 = load double, ptr %43, align 8 %44 = fadd double %42, %.unbox.14 %45 = fmul double %44, %"x::Float64" %46 = getelementptr inbounds double, ptr %"a::Tuple", i64 3 %.unbox.15 = load double, ptr %46, align 8 %47 = fadd double %45, %.unbox.15 %48 = fmul double %47, %"x::Float64" %49 = getelementptr inbounds double, ptr %"a::Tuple", i64 2 %.unbox.16 = load double, ptr %49, align 8 %50 = fadd double %48, %.unbox.16 %51 = fmul double %50, %"x::Float64" %52 = getelementptr inbounds double, ptr %"a::Tuple", i64 1 %.unbox.17 = load double, ptr %52, align 8 %53 = fadd double %51, %.unbox.17 %54 = fmul double %53, %"x::Float64" %.unbox.18 = 
load double, ptr %"a::Tuple", align 8 %55 = fadd double %54, %.unbox.18 ret double %55 }
julia> @code_llvm debuginfo=:none polynomial(ac,x)
; Function Signature: polynomial(Array{Float64, 1}, Float64) define double @julia_polynomial_37440(ptr noundef nonnull align 8 dereferenceable(24) %"a::Array", double %"x::Float64") #0 { top: %"new::Tuple" = alloca [1 x i64], align 8 %"new::Tuple60" = alloca [1 x i64], align 8 %0 = getelementptr inbounds i8, ptr %"a::Array", i64 16 %.size.sroa.0.0.copyload = load i64, ptr %0, align 8 %1 = add i64 %.size.sroa.0.0.copyload, -1 %.not.not = icmp eq i64 %.size.sroa.0.0.copyload, 0 br i1 %.not.not, label %L15, label %L18 L15: ; preds = %top store i64 0, ptr %"new::Tuple60", align 8 call void @j_throw_boundserror_37459(ptr nonnull %"a::Array", ptr nocapture nonnull readonly %"new::Tuple60") #11 unreachable L18: ; preds = %top %2 = load ptr, ptr %"a::Array", align 8 %3 = getelementptr inbounds double, ptr %2, i64 %1 %4 = load double, ptr %3, align 8 %5 = icmp sgt i64 %1, 0 br i1 %5, label %L78.preheader, label %L64 L64: ; preds = %L18 %.not74.not.not.not = icmp eq i64 %.size.sroa.0.0.copyload, -9223372036854775808 br i1 %.not74.not.not.not, label %L78.preheader, label %L112 L78.preheader: ; preds = %L64, %L18 %value_phi81 = phi i64 [ -9223372036854775808, %L64 ], [ 1, %L18 ] br label %L78 L78: ; preds = %L96, %L78.preheader %value_phi20 = phi i64 [ %6, %L96 ], [ %1, %L78.preheader ] %value_phi22 = phi double [ %10, %L96 ], [ %4, %L78.preheader ] %6 = add i64 %value_phi20, -1 %.not75 = icmp ult i64 %6, %.size.sroa.0.0.copyload br i1 %.not75, label %L96, label %L93 L93: ; preds = %L78 store i64 %value_phi20, ptr %"new::Tuple", align 8 call void @j_throw_boundserror_37459(ptr nonnull %"a::Array", ptr nocapture nonnull readonly %"new::Tuple") #11 unreachable L96: ; preds = %L78 %7 = fmul double %value_phi22, %"x::Float64" %8 = getelementptr inbounds double, ptr %2, i64 %6 %9 = load double, ptr %8, align 8 %10 = fadd double %7, %9 %.not76.not = icmp eq i64 %value_phi20, %value_phi81 br i1 %.not76.not, label %L112, label %L78 L112: ; preds = %L96, %L64 %value_phi43 = phi double [ %4, %L64 ], [ %10, %L96 ] ret double %value_phi43 }
More than 2x speedup
julia> @btime polynomial($a,$x)
8.975 ns (0 allocations: 0 bytes) 1.048575e6
julia> @btime polynomial($ac,$x)
19.806 ns (0 allocations: 0 bytes) 1.048575e6
Recursion inlining depth
Inlining [2] is another compiler optimization that speeds up code by avoiding function calls. Where applicable, the compiler can replace a call f(args) directly with the body of f, removing the need to manipulate the stack in order to transfer control flow to a different place. This is yet another optimization that may improve speed at the expense of binary size; a minimal illustration follows.
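A minimal sketch (the function names square and double_square are made up for this example): given

square(x) = x * x
double_square(x) = 2 * square(x)

the compiler is free to replace the call to square inside double_square with its body, effectively compiling 2 * (x * x); you can convince yourself with @code_typed debuginfo=:none double_square(1.0) that no call to square remains.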
Rewrite the polynomial function from the last lab using recursion and find the length of the coefficients at which inlining of the recursive calls stops occurring.
function polynomial(a, x)
accumulator = a[end] * one(x)
for i in length(a)-1:-1:1
accumulator = accumulator * x + a[i]
end
accumulator
end
The operator ... serves two purposes inside function calls [3][4]:
- combines multiple arguments into one
julia> function printargs(args...)
           println(typeof(args))
           for (i, arg) in enumerate(args)
               println("Arg #$i = $arg")
           end
       end
printargs (generic function with 1 method)
julia> printargs(1, 2, 3)
Tuple{Int64, Int64, Int64}
Arg #1 = 1
Arg #2 = 2
Arg #3 = 3
- splits one argument into many different arguments
julia> function threeargs(a, b, c)
           println("a = $a::$(typeof(a))")
           println("b = $b::$(typeof(b))")
           println("c = $c::$(typeof(c))")
       end
threeargs (generic function with 1 method)
julia> threeargs([1,2,3]...) # or with a variable threeargs(x...)
a = 1::Int64
b = 2::Int64
c = 3::Int64
HINTS:
- define two methods _polynomial!(ac, x, a...) and _polynomial!(ac, x, a) for the case of ≥ 2 coefficients and the last coefficient, respectively
- use splatting together with range indexing a[1:end-1]...
- the correctness can be checked using the built-in evalpoly
- recall that these kinds of optimizations are possible just around the type inference stage
- use a container of known length to store the coefficients
Details
_polynomial!(ac, x, a...) = _polynomial!(x * ac + a[end], x, a[1:end-1]...)
_polynomial!(ac, x, a) = x * ac + a
polynomial(a, x) = _polynomial!(a[end] * one(x), x, a[1:end-1]...)
# the coefficients have to be a tuple
a = Tuple(ones(Int, 21)) # everything less than 22 gets inlined
x = 2
polynomial(a,x) == evalpoly(x,a) # compare with built-in function
# @code_llvm debuginfo=:none polynomial(a,x) # seen here too, but code_typed is a better option
@code_lowered polynomial(a,x) # cannot be seen here as optimizations are not applied
julia> @code_typed debuginfo=:none polynomial(a,x)
CodeInfo( 1 ─ %1 = $(Expr(:boundscheck, true))::Bool │ %2 = Base.getfield(a, 21, %1)::Int64 │ %3 = Base.mul_int(%2, 1)::Int64 │ %4 = Core.getfield(a, 1)::Int64 │ %5 = Core.getfield(a, 2)::Int64 │ %6 = Core.getfield(a, 3)::Int64 │ %7 = Core.getfield(a, 4)::Int64 │ %8 = Core.getfield(a, 5)::Int64 │ %9 = Core.getfield(a, 6)::Int64 │ %10 = Core.getfield(a, 7)::Int64 │ %11 = Core.getfield(a, 8)::Int64 │ %12 = Core.getfield(a, 9)::Int64 │ %13 = Core.getfield(a, 10)::Int64 │ %14 = Core.getfield(a, 11)::Int64 │ %15 = Core.getfield(a, 12)::Int64 │ %16 = Core.getfield(a, 13)::Int64 │ %17 = Core.getfield(a, 14)::Int64 │ %18 = Core.getfield(a, 15)::Int64 │ %19 = Core.getfield(a, 16)::Int64 │ %20 = Core.getfield(a, 17)::Int64 │ %21 = Core.getfield(a, 18)::Int64 │ %22 = Core.getfield(a, 19)::Int64 │ %23 = Core.getfield(a, 20)::Int64 │ %24 = Base.mul_int(x, %3)::Int64 │ %25 = Base.add_int(%24, %23)::Int64 │ %26 = Base.mul_int(x, %25)::Int64 │ %27 = Base.add_int(%26, %22)::Int64 │ %28 = Base.mul_int(x, %27)::Int64 │ %29 = Base.add_int(%28, %21)::Int64 │ %30 = Base.mul_int(x, %29)::Int64 │ %31 = Base.add_int(%30, %20)::Int64 │ %32 = Base.mul_int(x, %31)::Int64 │ %33 = Base.add_int(%32, %19)::Int64 │ %34 = Base.mul_int(x, %33)::Int64 │ %35 = Base.add_int(%34, %18)::Int64 │ %36 = Base.mul_int(x, %35)::Int64 │ %37 = Base.add_int(%36, %17)::Int64 │ %38 = Base.mul_int(x, %37)::Int64 │ %39 = Base.add_int(%38, %16)::Int64 │ %40 = Base.mul_int(x, %39)::Int64 │ %41 = Base.add_int(%40, %15)::Int64 │ %42 = Base.mul_int(x, %41)::Int64 │ %43 = Base.add_int(%42, %14)::Int64 │ %44 = Base.mul_int(x, %43)::Int64 │ %45 = Base.add_int(%44, %13)::Int64 │ %46 = Base.mul_int(x, %45)::Int64 │ %47 = Base.add_int(%46, %12)::Int64 │ %48 = Base.mul_int(x, %47)::Int64 │ %49 = Base.add_int(%48, %11)::Int64 │ %50 = Base.mul_int(x, %49)::Int64 │ %51 = Base.add_int(%50, %10)::Int64 │ %52 = Base.mul_int(x, %51)::Int64 │ %53 = Base.add_int(%52, %9)::Int64 │ %54 = Base.mul_int(x, %53)::Int64 │ %55 = Base.add_int(%54, %8)::Int64 │ %56 = Base.mul_int(x, %55)::Int64 │ %57 = Base.add_int(%56, %7)::Int64 │ %58 = Base.mul_int(x, %57)::Int64 │ %59 = Base.add_int(%58, %6)::Int64 │ %60 = Base.mul_int(x, %59)::Int64 │ %61 = Base.add_int(%60, %5)::Int64 │ %62 = Base.mul_int(x, %61)::Int64 │ %63 = Base.add_int(%62, %4)::Int64 └── return %63 ) => Int64
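To locate the threshold empirically (a sketch; the exact cutoff may differ between Julia versions), make the tuple longer and look for a remaining call to _polynomial! in the typed code:

a_long = Tuple(ones(Int, 30))
@code_typed debuginfo=:none polynomial(a_long, x)  # for a long enough tuple a call to
                                                   # _polynomial! remains in the output instead
                                                   # of fully unrolled arithmetic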
AST manipulation: The first steps to metaprogramming
Julia is a so-called homoiconic language: it allows the language to reason about its own code. This capability is inspired by years of development in other languages such as Lisp, Clojure or Prolog.
There are two easy ways to extract/construct the code structure [5]
- parsing code stored in a string with the built-in Meta.parse
julia> code_parse = Meta.parse("x = 2") # for single line expressions (additional spaces are ignored)
:(x = 2)
julia> code_parse_block = Meta.parse("""
       begin
           x = 2
           y = 3
           x + y
       end
       """) # for multiline expressions
quote
    #= none:2 =#
    x = 2
    #= none:3 =#
    y = 3
    #= none:4 =#
    x + y
end
- constructing an expression using quote ... end or the simpler :() syntax
julia> code_expr = :(x = 2) # for single line expressions (additional spaces are ignored)
:(x = 2)
julia> code_expr_block = quote
           x = 2
           y = 3
           x + y
       end # for multiline expressions
quote
    #= REPL[2]:2 =#
    x = 2
    #= REPL[2]:3 =#
    y = 3
    #= REPL[2]:4 =#
    x + y
end
Results can be stored into some variables, which we can inspect further.
julia> typeof(code_parse)
Expr
julia> dump(code_parse)
Expr
  head: Symbol =
  args: Array{Any}((2,))
    1: Symbol x
    2: Int64 2
julia> typeof(code_parse_block)
Expr
julia> dump(code_parse_block)
Expr
  head: Symbol block
  args: Array{Any}((6,))
    1: LineNumberNode
      line: Int64 2
      file: Symbol none
    2: Expr
      head: Symbol =
      args: Array{Any}((2,))
        1: Symbol x
        2: Int64 2
    3: LineNumberNode
      line: Int64 3
      file: Symbol none
    4: Expr
      head: Symbol =
      args: Array{Any}((2,))
        1: Symbol y
        2: Int64 3
    5: LineNumberNode
      line: Int64 4
      file: Symbol none
    6: Expr
      head: Symbol call
      args: Array{Any}((3,))
        1: Symbol +
        2: Symbol x
        3: Symbol y
The type of both the multiline and the single line expression is Expr, with fields head and args. Notice that the Expr type is recursive in args, which can store other expressions, resulting in a tree structure - the abstract syntax tree (AST) - that can be visualized for example with the combination of the GraphRecipes and Plots packages.
using GraphRecipes, Plots
plot(code_expr_block, fontsize=12, shorten=0.01, axis_buffer=0.15, nodeshape=:rect)
This recursive structure has some major performance drawbacks, because the args field is of type Vector{Any} and therefore modifications of this expression-level AST won't be type stable. The building blocks of expressions are Symbols and literal values (numbers).
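For example (a small sketch), an expression can be built by hand directly from these building blocks, which is what note [5] refers to:

ex_manual = Expr(:call, :+, :x, 1)  # head = :call, args = [:+, :x, 1]
ex_manual == :(x + 1)               # true - the same AST as the quoted version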
A possible nuisance when working with multiline expressions is the presence of LineNumberNodes, which can be removed with the Base.remove_linenums! function.
julia> Base.remove_linenums!(code_parse_block)
quote
    x = 2
    y = 3
    x + y
end
Parsed expressions can be evaluated using the eval function.
julia> eval(code_parse) # evaluation of :(x = 2)
2
julia> x # should be defined
2
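The same works for the multiline block parsed earlier; evaluating it runs each statement in the global scope and returns the value of the last expression:

eval(code_parse_block)  # returns 5 (= x + y) and defines the globals x and y
y                       # 3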
Before doing anything more fancy let's start with some simple manipulation of ASTs.
- Define a variable code to be the result of parsing the string "j = i^2".
- Copy code into a variable code2. Modify code2 to replace the power 2 with a power 3. Make sure that the original code variable is not modified as well.
- Copy code2 to a variable code3. Replace i with i + 1 in code3.
- Define a variable i with the value 4. Evaluate the different code expressions using the eval function and check the value of the variable j.
Details
julia> code = Meta.parse("j = i^2")
:(j = i ^ 2)
julia> code2 = copy(code)
:(j = i ^ 2)
julia> code2.args[2].args[3] = 3
3
julia> code3 = copy(code2)
:(j = i ^ 3)
julia> code3.args[2].args[2] = :(i + 1)
:(i + 1)
julia> i = 4
4
julia> eval(code), eval(code2), eval(code3)
(16, 64, 125)
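To see where the indices used above come from, it helps to dump the parsed expression and follow the tree, which should look roughly like this:

dump(code)
# Expr
#   head: Symbol =
#   args: Array{Any}((2,))
#     1: Symbol j
#     2: Expr
#       head: Symbol call
#       args: Array{Any}((3,))
#         1: Symbol ^
#         2: Symbol i
#         3: Int64 2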
Following up on the more general substitution of variables in an expression from the lecture, let's see how the situation becomes more complicated when we are dealing with strings instead of a parsed AST.
replace_i(s::Symbol) = s == :i ? :k : s
replace_i(e::Expr) = Expr(e.head, map(replace_i, e.args)...)
replace_i(u) = u
Given the function replace_i, which replaces the variable i with k in an expression such as the following
julia> ex = :(i + i*i + y*i - sin(z))
:((i + i * i + y * i) - sin(z))
julia> @test replace_i(ex) == :(k + k*k + y*k - sin(z))
Test Passed
write a different function sreplace_i(s), which does the same thing but, instead of a parsed expression (AST), manipulates a string such as
julia> s = string(ex)
"(i + i * i + y * i) - sin(z)"
HINTS:
- Use Meta.parse in combination with replace_i ONLY for checking correctness.
- You can use the replace function in combination with regular expressions.
- Think of some corner cases that the method may not handle properly.
Details
The naive solution
julia> sreplace_i(s) = replace(s, 'i' => 'k')
sreplace_i (generic function with 1 method)
julia> @test Meta.parse(sreplace_i(s)) == replace_i(Meta.parse(s))
Test Failed at REPL[2]:1
  Expression: Meta.parse(sreplace_i(s)) == replace_i(Meta.parse(s))
   Evaluated: (k + k * k + y * k) - skn(z) == (k + k * k + y * k) - sin(z)
ERROR: There was an error during testing
does not work even in this simple case, because it also replaces the "i" inside the sin(z) call. We can play with regular expressions to obtain something that is more robust
julia> sreplace_i(s) = replace(s, r"([^\w]|\b)i(?=[^\w]|\z)" => s"\1k")
sreplace_i (generic function with 1 method)
julia> @test Meta.parse(sreplace_i(s)) == replace_i(Meta.parse(s))
Test Passed
however the code may now be harder to read. Thus it is preferable to use the parsed AST when manipulating Julia's code.
If the exercises so far have not felt very useful, let's focus on one that is similar to a part of the IntervalArithmetic.jl package.
Write a function wrap!(ex::Expr) which wraps literal values (numbers) in a call to f(). You can test it on the following example
f = x -> convert(Float64, x)
ex = :(x*x + 2*y*x + y*y) # original expression
rex = :(x*x + f(2)*y*x + y*y) # result expression
HINTS:
- use recursion and multiple dispatch
- dispatch on ::Number to detect numbers in an expression
- for testing purposes, create a copy of ex before mutating it
Details
julia> function wrap!(ex::Expr)
           args = ex.args
           for i in 1:length(args)
               args[i] = wrap!(args[i])
           end
           return ex
       end
wrap! (generic function with 1 method)
julia> wrap!(ex::Number) = Expr(:call, :f, ex)
wrap! (generic function with 2 methods)
julia> wrap!(ex) = ex
wrap! (generic function with 3 methods)
julia> ext, x, y = copy(ex), 2, 3
(:(x * x + 2 * y * x + y * y), 2, 3)
julia> @test wrap!(ex) == :(x*x + f(2)*y*x + y*y)
Test Passed
julia> eval(ext)
25
julia> eval(ex)
25.0
This kind of manipulation is at the core of some packages, such as the aforementioned IntervalArithmetic.jl, where every number is replaced with a narrow interval in order to find bounds on the result of a computation.
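To get a feeling for the idea (a toy sketch only, not how IntervalArithmetic.jl is actually implemented; the Interval type below is defined just for this example), we can reuse the wrap! methods defined above:

struct Interval
    lo::Float64
    hi::Float64
end
Base.:+(a::Interval, b::Interval) = Interval(a.lo + b.lo, a.hi + b.hi)
Base.:*(a::Interval, b::Interval) = Interval(
    min(a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi),
    max(a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi))

f = c -> Interval(c - 0.001, c + 0.001)    # wrap every literal into a small interval
x, y = Interval(2.0, 2.0), Interval(3.0, 3.0)
eval(wrap!(:(x * x + 2 * y * x + y * y)))  # an Interval containing the exact value 25.0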
Resources
- Julia's manual on metaprogramming
- David P. Sanders' workshop @ JuliaCon 2021
- Steven Johnson's keynote talk @ JuliaCon 2019
- Andy Ferris's workshop @ JuliaCon 2018
- From Macros to DSL by John Myles White
- Notes on JuliaCompilerPlugin
- [1] https://en.wikipedia.org/wiki/Loop_unrolling
- [2] https://en.wikipedia.org/wiki/Inline_expansion
- [3] https://docs.julialang.org/en/v1/manual/faq/#What-does-the-...-operator-do?
- [4] https://docs.julialang.org/en/v1/manual/functions/#Varargs-Functions
- [5] Once you understand the recursive structure of expressions, the AST can be constructed manually like any other type.