```
julia> using StaticArrays, TransformVariables

julia> t = as((a = asℝ₊, b = as(Array, asℝ₋, 3, 3),
               c = corr_cholesky_factor(13),
               d = as((asℝ, corr_cholesky_factor(SMatrix{3,3}),
                       UnitSimplex(3), UnitVector(4)))))
TransformVariables.TransformTuple{NamedTuple{(:a, :b, :c, :d), Tuple{TransformVariables.ShiftedExp{true, Int64}, TransformVariables.ArrayTransformation{TransformVariables.ShiftedExp{false, Int64}, 2}, CorrCholeskyFactor, TransformVariables.TransformTuple{Tuple{TransformVariables.Identity, TransformVariables.StaticCorrCholeskyFactor{3, 3}, UnitSimplex, UnitVector}}}}}((a = asℝ₊, b = TransformVariables.ArrayTransformation{TransformVariables.ShiftedExp{false, Int64}, 2}(asℝ₋, (3, 3)), c = CorrCholeskyFactor(13), d = TransformVariables.TransformTuple{Tuple{TransformVariables.Identity, TransformVariables.StaticCorrCholeskyFactor{3, 3}, UnitSimplex, UnitVector}}((asℝ, TransformVariables.StaticCorrCholeskyFactor{3, 3}(), UnitSimplex(3), UnitVector(4)), 9)), 97)
```

which is a single line of 700+ characters (mercifully hidden behind a scrollbar by your browser), we now have

```
julia> using StaticArrays, TransformVariables
julia> using StaticArrays, TransformVariables

julia> t = as((a = asℝ₊, b = as(Array, asℝ₋, 3, 3),
               c = corr_cholesky_factor(13),
               d = as((asℝ, corr_cholesky_factor(SMatrix{3,3}),
                       UnitSimplex(3), UnitVector(4)))))
[1:97] NamedTuple of transformations
  [1:1] :a → asℝ₊
  [2:10] :b → 3×3×asℝ₋ (dimension 1)
  [11:88] :c → 13×13 correlation cholesky factor
  [89:97] :d → Tuple of transformations
    [89:89] 1 → asℝ
    [90:92] 2 → SMatrix{3,3} correlation cholesky factor
    [93:94] 3 → 3 element unit simplex transformation
    [95:97] 4 → 4 element unit vector transformation
```

which tells you which indices map to which part of the result.
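
These index ranges refer to positions in the flat real vector that the transformation consumes, so you can see at a glance which coordinates feed which component. A minimal sketch of how this is used, with a smaller transformation of my own choosing so the output stays short:

```julia
using TransformVariables

# a small transformation, for illustration only
t = as((a = asℝ₊, b = as(Array, asℝ₋, 2, 2)))
dimension(t)            # 5 = 1 for :a plus 4 for :b
x = zeros(dimension(t))
y = transform(t, x)     # NamedTuple with the constrained values
y.a                     # 1.0, since asℝ₊ maps 0 to exp(0)
inverse(t, y) ≈ x       # round trip back to the flat vector
```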

After profiling, this turned out to be the most costly part, so I had to approximate it. Since I needed derivatives \(f'(x)\), I was wondering whether making the approximation match them (known as Hermite interpolation) would increase accuracy.

The (pedagogical, unoptimized) code below sums up the gist of my numerical experiments, with `f` below standing in for my implicitly solved function. It also demonstrates the new features of SpectralKit.jl `v0.10`.

First, we set up the problem:

```
using SpectralKit, PGFPlotsX, DisplayAs

f(x) = (exp(x) - 1) / (exp(1) - 1)
f′(x) = exp(x) / (exp(1) - 1)
const I = BoundedLinear(0, 1) # interval we map from
```
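
A quick sanity check on these definitions: `f` maps [0, 1] onto [0, 1], and `f′` agrees with a central finite difference at an interior point.

```julia
f(x) = (exp(x) - 1) / (exp(1) - 1)
f′(x) = exp(x) / (exp(1) - 1)

# endpoints: f(0) = 0 and f(1) = 1 exactly
@assert f(0) == 0 && f(1) == 1

# f′ matches a central difference to high accuracy
h = 1e-6
@assert abs((f(0.5 + h) - f(0.5 - h)) / 2h - f′(0.5)) < 1e-8
```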

Then define an interpolation using `N` Chebyshev nodes, matching the values.

```
function interpolation0(f, N)
    basis = Chebyshev(EndpointGrid(), N)
    ϕ = collocation_matrix(basis) \ map(f ∘ from_pm1(I), grid(basis))
    linear_combination(basis, ϕ) ∘ to_pm1(I)
end;
```

Same exercise, but with the derivatives too, so we need two bases: one with double the number of functions (so we need to make sure `N` is even), while we just use `N ÷ 2` for the nodes.

```
function interpolation01(f, f′, N)
    @assert iseven(N)
    basis1 = Chebyshev(EndpointGrid(), N ÷ 2)  # nodes from this one
    basis2 = Chebyshev(EndpointGrid(), N)      # evaluate on this basis
    x = from_pm1.(I, grid(basis1))             # map nodes from [-1,1]
    M = collocation_matrix(basis2, to_pm1.(I, derivatives.(x)))
    ϕ = vcat(map(y -> y[0], M), map(y -> y[1], M)) \ vcat(f.(x), f′.(x))
    linear_combination(basis2, ϕ) ∘ to_pm1(I)
end;
```

Importantly, note that mapping to [-1,1] for the collocation matrix has to be preceded by lifting to derivatives.
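
For example, using the definitions above, an interpolant evaluated at a point lifted with `derivatives` returns both the value (index `[0]`) and the first derivative (index `[1]`), with the chain rule through the interval map applied automatically:

```julia
f̂ = interpolation01(f, f′, 10)
v = f̂(derivatives(0.5))
abs(v[0] - f(0.5)) < 1e-6    # value at 0.5
abs(v[1] - f′(0.5)) < 1e-6   # first derivative at 0.5
```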

Then calculate the max abs difference, in digits (`log10`).

```
function log10_max_abs_diff(f, f̂; M = 1000)
    x = range(0, 1; length = M)
    log10(maximum(@. abs(f(x) - f̂(x))))
end;
```
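
As a quick illustration, comparing `f` against the identity (which shares its endpoints but differs in between) gives roughly one digit of agreement:

```julia
# max |f(x) - x| on [0,1] is about 0.12, so about one digit
log10_max_abs_diff(f, identity)   # ≈ -0.9
```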

Then let's explore the errors in values ...

```
Ns = 4:2:20
errors = [(log10_max_abs_diff(f, interpolation0(f, N)),
           log10_max_abs_diff(f, interpolation01(f, f′, N)))
          for N in Ns]
```

```
9-element Vector{Tuple{Float64, Float64}}:
(-3.1996028783051695, -2.594513489315976)
(-5.882145733021446, -5.488666848393999)
(-8.835160643191552, -8.232779398084544)
(-11.994023867372805, -11.44897859894945)
(-15.176438519807359, -14.664555158828485)
(-15.35252977886304, -15.35252977886304)
(-15.255619765854984, -15.35252977886304)
(-15.35252977886304, -15.35252977886304)
(-15.35252977886304, -15.35252977886304)
```

... and derivatives.

```
d_errors = [(log10_max_abs_diff(f′, (x -> x[1]) ∘ interpolation0(f, N) ∘ derivatives),
             log10_max_abs_diff(f′, (x -> x[1]) ∘ interpolation01(f, f′, N) ∘ derivatives))
            for N in Ns]
```

```
9-element Vector{Tuple{Float64, Float64}}:
(-2.0758500387125216, -2.093336352131656)
(-4.549339116162139, -4.611253272379436)
(-7.363367596306161, -7.429305371299876)
(-10.417554370684012, -10.485171320264207)
(-13.381718167990524, -13.689771947181466)
(-13.834015838985154, -14.374806173574193)
(-14.03551167781493, -14.539616422220185)
(-13.724140848812729, -14.750469787535078)
(-13.714040521908403, -14.724140848812729)
```

Finally the plots:

```
@pgf Axis({ xlabel = "number of basis functions",
            ylabel = "log10 abs error in values",
            legend_cell_align = "left" },
          PlotInc(Table(Ns, first.(errors))),
          LegendEntry("fitting values"),
          PlotInc(Table(Ns, last.(errors))),
          LegendEntry("fitting values and derivatives")) |> DisplayAs.SVG
```

```
@pgf Axis({ xlabel = "number of basis functions",
            ylabel = "log10 abs error in derivatives",
            legend_cell_align = "left" },
          PlotInc(Table(Ns, first.(d_errors))),
          LegendEntry("fitting values"),
          PlotInc(Table(Ns, last.(d_errors))),
          LegendEntry("fitting values and derivatives")) |> DisplayAs.SVG
```

The conclusion is that derivatives are well-approximated even without matching them explicitly. Above 12–14 nodes, matching derivatives buys an extra digit of accuracy in the derivatives, but with a low number of nodes it costs about a digit of accuracy in the values. 14 nodes seems to be the break-even point here, but by then we are at machine precision anyway.

As usual, simply approximating with Chebyshev polynomials is extremely accurate in itself for practical purposes, even when derivatives are needed. Of course, this depends on the function being “nice”.


```
using MarkdownTables
my_table = [(animal = "cat", legs = 4),
            (animal = "catfish", legs = 0),
            (animal = "canary", legs = 2)]
my_table |> markdown_table()
```

| animal  | legs |
|---------|------|
| cat     | 4    |
| catfish | 0    |
| canary  | 2    |

Under the hood, it just wraps some basic Markdown with DisplayAs.jl:

`my_table |> markdown_table(String) |> print`

```
| animal  | legs |
|---------|------|
| cat     | 4    |
| catfish | 0    |
| canary  | 2    |
```

The default output is pretty basic; while the function has some options for formatting, it is recommended that you use CSS instead.

I expect that this pretty much rounds out the tooling I need for blogging with Franklin.jl. Feedback and PRs are of course welcome, but I intend to keep this package very basic.


This page was processed using that package. Here is how it works:

1. take a Julia code file marked up with Literate.jl,
2. add a `Project.toml` and a `Manifest.toml` (eg activate the directory as a project and add packages),
3. produce a markdown file using `ReproducibleLiteratePage.compile_directory()`.
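
Assuming the post lives in a directory `post/` (a hypothetical layout; adjust the name to your setup), the workflow sketched above is roughly:

```julia
using Pkg
Pkg.activate("post")   # the directory is a Julia project (step 2)
using ReproducibleLiteratePage
ReproducibleLiteratePage.compile_directory()  # emit the markdown (step 3)
```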

Here is some code:

```
using UnPack # the lightest package I could think of
struct Foo
    a
    b
end
@unpack a, b = Foo(1, 2)
a, b
```

`(1, 2)`

The Julia source (again, marked up with Literate.jl), `Project.toml`, and `Manifest.toml` should be available as a `tar` archive at the bottom of the page.


I do not have the time to convert all my earlier posts; they remain available in this directory. I hope that this will keep old links working; if there is a problem, please let me know and I will try to fix it.
