This guide is for a Linux user who wants the best headphone spatial audio they can get, but has never configured PipeWire for this kind of DSP work.
The goal is to explain the moving parts first, then show which example config does what, and finally give you concrete download sources for the data files those configs expect.
This guide is not a Dolby Atmos implementation.
It does not provide:
- Dolby decoding
- Dolby object metadata support
- Dolby certification
- a licensed Dolby renderer
What it does try to do is reproduce the kind of headphone listening result people often want from Atmos:
- more convincing front vs rear cues
- a wider and more externalized soundstage
- better surround virtualization than plain stereo downmixing
- sometimes a limited sense of height, depending on the HRTF and the content
The practical target is:
multichannel or spatial audio -> binaural stereo -> headphones
In other words, we want PipeWire to turn a speaker-like scene into the specific left-ear and right-ear signals that make headphones sound more three-dimensional.
If you are new to Linux audio, these are the important pieces:
- Your app or game: outputs stereo, 5.1, or 7.1 audio.
- PipeWire: the Linux audio server and processing graph that receives that audio.
- WirePlumber or another session manager: notices the new virtual sink and makes it show up in your normal audio-device selection tools.
- A PipeWire filter-chain config: a `.conf` file that tells PipeWire which DSP nodes to create and how to wire them together.
- A virtual sink: the new playback device created by that config. Your app sends audio to this sink instead of directly to your headphones.
- A spatial data file: a `.sofa`, `.wav`, or `.irs` file that contains the measurements or filter data used by the DSP.
- Your headphones: the final stereo output device.
The overall signal path looks like this:
app/game/player -> PipeWire virtual sink -> spatial processing -> stereo headphones
That is the central idea of the whole document.
Traditional surround formats such as stereo, 5.1, and 7.1 are channel-based.
That means sounds are mixed into fixed speaker channels like:
- front left
- front right
- center
- surrounds
- subwoofer
Dolby Atmos adds object-based audio on top of that.
With object-based audio, a sound can be treated as "a thing in 3D space" plus metadata describing where it is and how it moves. A renderer then decides how to turn that scene into speaker or headphone output.
This guide is only trying to recreate the headphone rendering result, not the proprietary Atmos pipeline itself.
Binaural means a two-channel headphone signal where the left and right channels are intentionally different in a way that imitates how sound reaches real ears in space.
Normal stereo is just two channels.
Binaural stereo is still only two channels, but those two channels are shaped so your brain can infer position.
HRTF stands for Head-Related Transfer Function.
It describes how a sound changes before it reaches your ears because of:
- your head
- your outer ears
- your torso
- the direction the sound comes from
Your brain uses those direction-dependent differences to judge location.
The main cues are:
- ITD: interaural time difference
- ILD: interaural level difference
- frequency shaping from your ears and head
Plain version:
same sound + different direction = different left/right ear filtering
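To get a rough sense of scale for the time cue, the interaural time difference can be approximated with the classic Woodworth formula. The head radius and speed of sound below are typical textbook values, not measurements from any particular HRTF set:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth approximation of the interaural time difference.
    azimuth_deg: 0 = straight ahead, 90 = directly to one side."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

itd_seconds(0)   # straight ahead: no time difference between the ears
itd_seconds(90)  # directly to the side: roughly 0.65 ms
```

Sub-millisecond differences like these, together with level and spectral differences, are what an HRTF-based renderer has to reproduce.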
An HRIR is the impulse-response form of that directional filter.
More generally, an impulse response is just a compact recording of how a system transforms sound. Once you have it, you can apply it to other audio later with convolution.
In this guide, an impulse response might represent:
- one speaker position
- one direction in space
- a whole virtual surround profile
- the captured output of another effect chain
SOFA stands for Spatially Oriented Format for Acoustics.
A SOFA file is a standard container for HRTF data plus metadata such as source positions and measurement details. A spatializer can look up different directions from the SOFA file at runtime.
That makes SOFA a good fit when you want to say:
put this source at 30 degrees
put that source behind me
move this source upward
Convolution is the process of applying an impulse response to audio.
In practice, that means:
- load an HRIR or IRS dataset
- run audio through it
- get the filtered left and right headphone signals out
If SOFA is the "look up the right direction at runtime" approach, convolution is the "apply a precomputed filter" approach.
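A minimal numpy sketch of that "apply a precomputed filter" idea, using a made-up two-tap "HRIR" rather than real measured data:

```python
import numpy as np

# Toy directional filter: the left ear hears the sound immediately,
# the right ear hears it two samples later and quieter. Real HRIRs
# are hundreds of samples long and measured, not hand-written.
hrir_left = np.array([1.0, 0.0, 0.0])
hrir_right = np.array([0.0, 0.0, 0.6])

mono = np.array([1.0, 0.5, 0.25])  # a short mono sound

# Convolution applies the impulse response to the audio.
ear_left = np.convolve(mono, hrir_left)
ear_right = np.convolve(mono, hrir_right)
# ear_right is a delayed, attenuated copy of ear_left: a crude
# "sound from the left" cue, the kind of relationship measured
# HRIRs encode much more accurately.
```

PipeWire's builtin convolver node does the same operation, just with long measured filters and an efficient FFT implementation.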
You do not need Dolby's implementation to build a convincing 3D headphone presentation.
What you actually need is a reliable answer to this question:
If a sound should appear from this direction, what should the left and right ears hear?
Everything else in this guide is just a way to:
- store that answer
- measure that answer
- capture that answer
- apply that answer inside PipeWire
This approach treats each input channel as a source placed somewhere in 3D space.
Conceptually:
channel input -> choose direction -> spatializer reads SOFA data -> stereo ears
Why use it:
- you want direct control over azimuth, elevation, and radius
- you want to experiment with positions
- you want to test SOFA datasets directly
- you want the graph to reflect a spatial model explicitly
Tradeoff:
- more moving parts
- more tuning
- you are building the scene at runtime
This approach assumes the speaker layout is already known, so each surround channel gets its own fixed left/right HRIR filter and all results are mixed together.
Conceptually:
5.1 or 7.1 channels -> fixed HRIR filters -> stereo ears
Why use it:
- simpler mental model
- easy to use with existing HRIR WAV files
- often the fastest path to a strong virtual surround effect
- useful when adapting HeSuVi-style data or captured IRS data
Tradeoff:
- less flexible than a runtime spatializer
- speaker positions are effectively baked into the filter set
The Stack Overflow post is another version of Approach B.
The idea is:
- Play an impulse through a Windows system that already has Dolby processing enabled.
- Record the processed output.
- Trim the recording around the impulse response.
- Save it as WAV.
- Optionally package or rename it as an IRS file.
- Load it into a convolver.
That is not implementing Dolby's renderer.
It is capturing the output shape of an existing processing chain and reusing it as a static convolution profile:
proprietary effect chain -> captured impulse response -> reusable convolution profile
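The trimming step can be sketched like this; the `trim_impulse` helper and its window sizes are illustrative choices for this guide, not part of any standard tool:

```python
import numpy as np

def trim_impulse(recording, pre=32, length=4096):
    """Cut a window around the loudest sample of a captured response
    and normalize it. `pre` and `length` are arbitrary example values;
    pick them by inspecting your actual recording."""
    peak = int(np.argmax(np.abs(recording)))
    start = max(peak - pre, 0)
    window = recording[start:start + length]
    return window / np.max(np.abs(window))

# Toy "capture": silence, then a short decaying click at sample 1000.
capture = np.zeros(48000)
capture[1000:1005] = [0.8, -0.4, 0.2, -0.1, 0.05]
ir = trim_impulse(capture)  # ready to save as WAV and load into a convolver
```

The point is only to isolate the system's response from the surrounding silence; any audio editor can do the same job by hand.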
That is relevant here because it is yet another way to obtain a spatial profile for PipeWire to apply.
This guide covers several related configs, but they do different jobs.
This creates a one-channel test sink and places that single source at one point in space using a SOFA spatializer.
Use it to:
- confirm the spatializer works at all
- test whether a SOFA file produces believable direction cues
- move one source around before you build a full surround graph
This treats each 7.1 channel as its own source in space, runs each one through a SOFA spatializer node, and mixes all left-ear outputs together and all right-ear outputs together.
Use it to:
- build a dynamic 7.1-to-binaural graph
- experiment with where each surround channel should sit
- understand the spatial model directly
This converts 5.1 input to stereo using a fixed HRIR WAV dataset based on KEMAR-style measurements.
Use it to:
- get a simple 5.1 headphone surround sink
- work with a static measured profile
- start with the simpler convolution-based path
This converts 7.1 input to stereo using a 14-channel HRIR WAV layout that matches common HeSuVi-style virtual surround datasets.
Use it to:
- build a static 7.1 headphone surround sink
- plug in compatible HRIR WAV data
- try community-shared virtual surround profiles without building a runtime spatial scene
This is not a binaural virtualizer.
It matrix-encodes a 5.1-style input into Lt/Rt stereo using a phase-shifted surround path. That is useful historical context, but it is not the main "Atmos-like on headphones" path in this guide.
Use it to:
- understand Dolby Surround style matrix encoding
- compare matrixed stereo with actual binaural virtualization
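The encoding that config performs can be written out as plain arithmetic. The gains below are the same ones set on the `mixer_lt` and `mixer_rt` nodes in the full config; the real chain also runs the surround feed through a Hilbert (90-degree phase shift) convolver first, which this sketch omits for clarity:

```python
G = 0.7071067811865475  # -3 dB, i.e. 1/sqrt(2)

def encode_lt_rt(fl, fr, fc, s):
    """Matrix-encode front L/R, center, and a mono surround feed into
    Lt/Rt. Surround lands out of phase between the two channels, which
    is what a downstream Dolby Surround decoder keys on."""
    lt = fl + G * fc - G * s
    rt = fr + G * fc + G * s
    return lt, rt

encode_lt_rt(1.0, 0.0, 0.0, 0.0)  # front-left only: all of it in Lt
encode_lt_rt(0.0, 0.0, 0.0, 1.0)  # surround only: -G in Lt, +G in Rt
```

Note there is no ear modeling anywhere in this math, which is exactly why matrixed Lt/Rt is not a substitute for binaural virtualization.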
The config files are only half of the setup. They reference external data files, and those files are what actually define the spatial effect.
The files spatializer-single.conf and spatializer-7.1.conf need a .sofa HRTF dataset.
Good public starting points:
These are the kinds of files the config is pointing at when you see something like:
`filename = "~/.config/hrtf-sofa/hrtf b_nh724.sofa"`

That filename is just an example. Replace it with the actual path to the `.sofa` file you downloaded.
The file sink-virtual-surround-5.1-kemar.conf expects a KEMAR-style HRIR WAV file.
Background and measurement source:
This config path:
`filename = "hrir_kemar/hrir-kemar.wav"`

This is a placeholder for a compatible HRIR WAV file on your system. The MIT page is the underlying measurement reference, but you may still need a WAV export or conversion that matches the channel layout used by this config. The important point for a beginner is that this is data, not another config file.
The file sink-virtual-surround-7.1-hesuvi.conf expects a compatible 14-channel hrir.wav layout.
Useful catalog/reference:
This config path:
`filename = "hrir_hesuvi/hrir.wav"`

This again points to a data file you need to obtain and place somewhere predictable.
The file Dolby ATMOS ((128K MP3)) 1.Default.irs is a binary impulse-response preset file, not a text config.
It is meant to be loaded by a convolver-style effect. If you want to build or understand that kind of file, the Stack Overflow method above is the relevant reference.
These are not required downloads, but they are useful context if you want to compare notes with other Linux users:
The config files themselves should go in a PipeWire conf.d directory such as:
~/.config/pipewire/filter-chain.conf.d/
The HRTF or HRIR data files can live wherever you want, but for sanity you should keep them in a dedicated directory and then point the config to the real path.
A simple layout would be:
~/.config/pipewire/filter-chain.conf.d/
~/.config/pipewire/hrtf-sofa/
~/.config/pipewire/hrir_kemar/
~/.config/pipewire/hrir_hesuvi/
For a first-time setup, using real absolute paths in every filename field is the least confusing option.
The example configs below use example paths. Do not assume those paths already exist on your machine.
After you add or change files in filter-chain.conf.d, you will usually need to restart the PipeWire user services (for example with `systemctl --user restart pipewire pipewire-pulse wireplumber`) or log out and back in before the new sink appears.
If you are starting from zero, use this order:
1. spatializer-single.conf
2. spatializer-7.1.conf
3. sink-virtual-surround-5.1-kemar.conf
4. sink-virtual-surround-7.1-hesuvi.conf
That order makes debugging easier because it starts with the smallest graph and the clearest listening test.
The main question at each stage is simple:
does this config create a believable left/right directional cue?
If the answer is no at the single-source stage, moving to a larger graph will only make troubleshooting harder.
mono input
-> SOFA spatializer (azimuth/elevation/radius)
-> Out L
-> Out R
-> stereo headphones
FL -> SOFA node -> Out L/Out R --+
FR -> SOFA node -> Out L/Out R --+
FC -> SOFA node -> Out L/Out R --+
RL -> SOFA node -> Out L/Out R --+--> mixL/mixR -> stereo headphones
RR -> SOFA node -> Out L/Out R --+
SL -> SOFA node -> Out L/Out R --+
SR -> SOFA node -> Out L/Out R --+
LFE -> SOFA node -> Out L/Out R -+
FL -> copy -> convFL_L -> mixL --+
-> convFL_R -> mixR --+
FR -> copy -> convFR_L -> mixL --+
-> convFR_R -> mixR --+
FC -> copy -> convFC_L -> mixL --+
-> convFC_R -> mixR --+
RL -> copy -> convRL_L -> mixL --+
-> convRL_R -> mixR --+
RR -> copy -> convRR_L -> mixL --+
-> convRR_R -> mixR --+
SL -> copy -> convSL_L -> mixL --+
-> convSL_R -> mixR --+
SR -> copy -> convSR_L -> mixL --+
-> convSR_R -> mixR --+
LFE -> copy -> convLFE_L -> mixL -+
-> convLFE_R -> mixR -+
mixL + mixR -> stereo headphones
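One practical reason the mixer nodes in the configs expose per-input `Gain N` controls: summing eight filtered channels at unit gain can exceed full scale when the channels carry correlated material. A small sketch of the worst case (the specific numbers are just an illustration):

```python
import numpy as np

n = 8
# Worst case: eight identical (fully correlated) signals at half scale.
channels = [np.full(4, 0.5) for _ in range(n)]

unity_mix = np.sum(channels, axis=0)      # peaks at 4.0 -> would clip
safe_mix = np.sum(channels, axis=0) / n   # peaks at 0.5 -> headroom kept
```

Real surround channels are only partly correlated, so gains somewhere between 1/n and 1 are usually enough; tune by ear and back off if you hear clipping.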
Windows output with Dolby enabled
-> play impulse
-> record processed result
-> trim around response
-> save as WAV or IRS
-> load into convolver
-> stereo headphones
A good result can give you:
- stronger front vs rear cues
- more space outside the head
- better virtual surround imaging than a plain stereo downmix
- some height-like perception, depending on the HRTF and the content
What it will not give you:
- true Dolby Atmos decoding
- Dolby metadata support
- identical results for every listener
- a guarantee that the "best" HRTF for someone else will also be the best one for you
That last point matters. HRTFs are personal enough that some trial and error is normal.
The full configs are below in a collapsed section so the gist reads like a README by default and keeps the long examples out of the way until you actually need them.
Show full example configs
The following configs are included exactly because they are the concrete examples being referenced in the text above.
# A virtual sound source sink
# Useful for testing spatial effects by moving it around with controls
#
# Copy this file into a conf.d/ directory such as
# ~/.config/pipewire/filter-chain.conf.d/
#
# Adjust the paths to the sofa files to match your system
#
context.modules = [
{ name = libpipewire-module-filter-chain
flags = [ nofail ]
args = {
node.description = "3D Sink"
media.name = "3D Sink"
filter.graph = {
nodes = [
{
type = sofa
label = spatializer
name = sp
config = {
filename = "~/.config/hrtf-sofa/hrtf b_nh724.sofa"
}
control = {
"Azimuth" = 220.0
"Elevation" = 0.0
"Radius" = 3.0
}
}
]
inputs = [ "sp:In" ]
outputs = [ "sp:Out L" "sp:Out R" ]
}
capture.props = {
node.name = "effect_input.3d"
media.class = Audio/Sink
audio.channels = 1
audio.position = [ FC ]
}
playback.props = {
node.name = "effect_output.3d"
node.passive = true
audio.channels = 2
audio.position = [ FL FR ]
}
}
}
]

# Headphone surround sink
#
# Copy this file into a conf.d/ directory such as
# ~/.config/pipewire/filter-chain.conf.d/
#
# Adjust the paths to the sofa file to match your system.
#
context.modules = [
{ name = libpipewire-module-filter-chain
flags = [ nofail ]
args = {
node.description = "Spatial Sink"
media.name = "Spatial Sink"
filter.graph = {
nodes = [
{
type = sofa
label = spatializer
name = spFL
config = {
filename = "~/.config/hrtf-sofa/hrtf b_nh724.sofa"
# The gain depends on the .sofa file in use
gain = 0.5
}
control = {
"Azimuth" = 30.0
"Elevation" = 0.0
"Radius" = 3.0
}
}
{
type = sofa
label = spatializer
name = spFR
config = {
filename = "~/.config/hrtf-sofa/hrtf b_nh724.sofa"
gain = 0.5
}
control = {
"Azimuth" = 330.0
"Elevation" = 0.0
"Radius" = 3.0
}
}
{
type = sofa
label = spatializer
name = spFC
config = {
filename = "~/.config/hrtf-sofa/hrtf b_nh724.sofa"
gain = 0.5
}
control = {
"Azimuth" = 0.0
"Elevation" = 0.0
"Radius" = 3.0
}
}
{
type = sofa
label = spatializer
name = spRL
config = {
filename = "~/.config/hrtf-sofa/hrtf b_nh724.sofa"
gain = 0.5
}
control = {
"Azimuth" = 150.0
"Elevation" = 0.0
"Radius" = 3.0
}
}
{
type = sofa
label = spatializer
name = spRR
config = {
filename = "~/.config/hrtf-sofa/hrtf b_nh724.sofa"
gain = 0.5
}
control = {
"Azimuth" = 210.0
"Elevation" = 0.0
"Radius" = 3.0
}
}
{
type = sofa
label = spatializer
name = spSL
config = {
filename = "~/.config/hrtf-sofa/hrtf b_nh724.sofa"
gain = 0.5
}
control = {
"Azimuth" = 90.0
"Elevation" = 0.0
"Radius" = 3.0
}
}
{
type = sofa
label = spatializer
name = spSR
config = {
filename = "~/.config/hrtf-sofa/hrtf b_nh724.sofa"
gain = 0.5
}
control = {
"Azimuth" = 270.0
"Elevation" = 0.0
"Radius" = 3.0
}
}
{
type = sofa
label = spatializer
name = spLFE
config = {
filename = "~/.config/hrtf-sofa/hrtf b_nh724.sofa"
gain = 0.5
}
control = {
"Azimuth" = 0.0
"Elevation" = -60.0
"Radius" = 3.0
}
}
{ type = builtin label = mixer name = mixL
control = {
# Set individual left mixer gain if needed
#"Gain 1" = 1.0
#"Gain 2" = 1.0
#"Gain 3" = 1.0
#"Gain 4" = 1.0
#"Gain 5" = 1.0
#"Gain 6" = 1.0
#"Gain 7" = 1.0
#"Gain 8" = 1.0
}
}
{ type = builtin label = mixer name = mixR
control = {
# Set individual right mixer gain if needed
#"Gain 1" = 1.0
#"Gain 2" = 1.0
#"Gain 3" = 1.0
#"Gain 4" = 1.0
#"Gain 5" = 1.0
#"Gain 6" = 1.0
#"Gain 7" = 1.0
#"Gain 8" = 1.0
}
}
]
links = [
# output
{ output = "spFL:Out L" input="mixL:In 1" }
{ output = "spFL:Out R" input="mixR:In 1" }
{ output = "spFR:Out L" input="mixL:In 2" }
{ output = "spFR:Out R" input="mixR:In 2" }
{ output = "spFC:Out L" input="mixL:In 3" }
{ output = "spFC:Out R" input="mixR:In 3" }
{ output = "spRL:Out L" input="mixL:In 4" }
{ output = "spRL:Out R" input="mixR:In 4" }
{ output = "spRR:Out L" input="mixL:In 5" }
{ output = "spRR:Out R" input="mixR:In 5" }
{ output = "spSL:Out L" input="mixL:In 6" }
{ output = "spSL:Out R" input="mixR:In 6" }
{ output = "spSR:Out L" input="mixL:In 7" }
{ output = "spSR:Out R" input="mixR:In 7" }
{ output = "spLFE:Out L" input="mixL:In 8" }
{ output = "spLFE:Out R" input="mixR:In 8" }
]
inputs = [ "spFL:In" "spFR:In" "spFC:In" "spLFE:In" "spRL:In" "spRR:In" "spSL:In" "spSR:In" ]
outputs = [ "mixL:Out" "mixR:Out" ]
}
capture.props = {
node.name = "effect_input.spatializer"
media.class = Audio/Sink
audio.channels = 8
audio.position = [ FL FR FC LFE RL RR SL SR ]
}
playback.props = {
node.name = "effect_output.spatializer"
node.passive = true
audio.channels = 2
audio.position = [ FL FR ]
}
}
}
]

# Convolver sink
#
# Copy this file into a conf.d/ directory such as
# ~/.config/pipewire/filter-chain.conf.d/
#
# Adjust the paths to the convolver files to match your system
#
context.modules = [
{ name = libpipewire-module-filter-chain
flags = [ nofail ]
args = {
node.description = "Virtual Surround Sink"
media.name = "Virtual Surround Sink"
filter.graph = {
nodes = [
{
type = builtin
label = convolver
name = convFL_L
config = {
filename = "hrir_kemar/hrir-kemar.wav"
channel = 0
}
}
{
type = builtin
label = convolver
name = convFL_R
config = {
filename = "hrir_kemar/hrir-kemar.wav"
channel = 1
}
}
{
type = builtin
label = convolver
name = convFR_L
config = {
filename = "hrir_kemar/hrir-kemar.wav"
channel = 1
}
}
{
type = builtin
label = convolver
name = convFR_R
config = {
filename = "hrir_kemar/hrir-kemar.wav"
channel = 0
}
}
{
type = builtin
label = convolver
name = convFC
config = {
filename = "hrir_kemar/hrir-kemar.wav"
channel = 2
}
}
{
type = builtin
label = convolver
name = convLFE
config = {
filename = "hrir_kemar/hrir-kemar.wav"
channel = 3
}
}
{
type = builtin
label = convolver
name = convSL_L
config = {
filename = "hrir_kemar/hrir-kemar.wav"
channel = 4
}
}
{
type = builtin
label = convolver
name = convSL_R
config = {
filename = "hrir_kemar/hrir-kemar.wav"
channel = 5
}
}
{
type = builtin
label = convolver
name = convSR_L
config = {
filename = "hrir_kemar/hrir-kemar.wav"
channel = 5
}
}
{
type = builtin
label = convolver
name = convSR_R
config = {
filename = "hrir_kemar/hrir-kemar.wav"
channel = 4
}
}
{
type = builtin
label = mixer
name = mixL
}
{
type = builtin
label = mixer
name = mixR
}
{
type = builtin
label = copy
name = copyFL
}
{
type = builtin
label = copy
name = copyFR
}
{
type = builtin
label = copy
name = copySL
}
{
type = builtin
label = copy
name = copySR
}
]
links = [
{ output = "copyFL:Out" input = "convFL_L:In" }
{ output = "copyFL:Out" input = "convFL_R:In" }
{ output = "copyFR:Out" input = "convFR_R:In" }
{ output = "copyFR:Out" input = "convFR_L:In" }
{ output = "copySL:Out" input = "convSL_L:In" }
{ output = "copySL:Out" input = "convSL_R:In" }
{ output = "copySR:Out" input = "convSR_R:In" }
{ output = "copySR:Out" input = "convSR_L:In" }
{ output = "convFL_L:Out" input = "mixL:In 1" }
{ output = "convFR_L:Out" input = "mixL:In 2" }
{ output = "convFC:Out" input = "mixL:In 3" }
{ output = "convLFE:Out" input = "mixL:In 4" }
{ output = "convSL_L:Out" input = "mixL:In 5" }
{ output = "convSR_L:Out" input = "mixL:In 6" }
{ output = "convFL_R:Out" input = "mixR:In 1" }
{ output = "convFR_R:Out" input = "mixR:In 2" }
{ output = "convFC:Out" input = "mixR:In 3" }
{ output = "convLFE:Out" input = "mixR:In 4" }
{ output = "convSL_R:Out" input = "mixR:In 5" }
{ output = "convSR_R:Out" input = "mixR:In 6" }
]
inputs = [ "copyFL:In" "copyFR:In" "convFC:In" "convLFE:In" "copySL:In" "copySR:In" ]
outputs = [ "mixL:Out" "mixR:Out" ]
}
capture.props = {
node.name = "effect_input.virtual-surround-5.1-kemar"
media.class = Audio/Sink
audio.channels = 6
audio.position = [ FL FR FC LFE SL SR ]
}
playback.props = {
node.name = "effect_output.virtual-surround-5.1-kemar"
node.passive = true
audio.channels = 2
audio.position = [ FL FR ]
}
}
}
]

# Convolver sink
#
# Copy this file into a conf.d/ directory such as
# ~/.config/pipewire/filter-chain.conf.d/
#
# Adjust the paths to the convolver files to match your system
#
context.modules = [
{ name = libpipewire-module-filter-chain
flags = [ nofail ]
args = {
node.description = "Virtual Surround Sink"
media.name = "Virtual Surround Sink"
filter.graph = {
nodes = [
# duplicate inputs
{ type = builtin label = copy name = copyFL }
{ type = builtin label = copy name = copyFR }
{ type = builtin label = copy name = copyFC }
{ type = builtin label = copy name = copyRL }
{ type = builtin label = copy name = copyRR }
{ type = builtin label = copy name = copySL }
{ type = builtin label = copy name = copySR }
{ type = builtin label = copy name = copyLFE }
# apply hrir - HeSuVi 14-channel WAV (not the *-.wav variants) (note: */44/* in HeSuVi are the same, but resampled to 44100)
{ type = builtin label = convolver name = convFL_L config = { filename = "hrir_hesuvi/hrir.wav" channel = 0 } }
{ type = builtin label = convolver name = convFL_R config = { filename = "hrir_hesuvi/hrir.wav" channel = 1 } }
{ type = builtin label = convolver name = convSL_L config = { filename = "hrir_hesuvi/hrir.wav" channel = 2 } }
{ type = builtin label = convolver name = convSL_R config = { filename = "hrir_hesuvi/hrir.wav" channel = 3 } }
{ type = builtin label = convolver name = convRL_L config = { filename = "hrir_hesuvi/hrir.wav" channel = 4 } }
{ type = builtin label = convolver name = convRL_R config = { filename = "hrir_hesuvi/hrir.wav" channel = 5 } }
{ type = builtin label = convolver name = convFC_L config = { filename = "hrir_hesuvi/hrir.wav" channel = 6 } }
{ type = builtin label = convolver name = convFR_R config = { filename = "hrir_hesuvi/hrir.wav" channel = 7 } }
{ type = builtin label = convolver name = convFR_L config = { filename = "hrir_hesuvi/hrir.wav" channel = 8 } }
{ type = builtin label = convolver name = convSR_R config = { filename = "hrir_hesuvi/hrir.wav" channel = 9 } }
{ type = builtin label = convolver name = convSR_L config = { filename = "hrir_hesuvi/hrir.wav" channel = 10 } }
{ type = builtin label = convolver name = convRR_R config = { filename = "hrir_hesuvi/hrir.wav" channel = 11 } }
{ type = builtin label = convolver name = convRR_L config = { filename = "hrir_hesuvi/hrir.wav" channel = 12 } }
{ type = builtin label = convolver name = convFC_R config = { filename = "hrir_hesuvi/hrir.wav" channel = 13 } }
# treat LFE as FC
{ type = builtin label = convolver name = convLFE_L config = { filename = "hrir_hesuvi/hrir.wav" channel = 6 } }
{ type = builtin label = convolver name = convLFE_R config = { filename = "hrir_hesuvi/hrir.wav" channel = 13 } }
# stereo output
{ type = builtin label = mixer name = mixL }
{ type = builtin label = mixer name = mixR }
]
links = [
# input
{ output = "copyFL:Out" input="convFL_L:In" }
{ output = "copyFL:Out" input="convFL_R:In" }
{ output = "copySL:Out" input="convSL_L:In" }
{ output = "copySL:Out" input="convSL_R:In" }
{ output = "copyRL:Out" input="convRL_L:In" }
{ output = "copyRL:Out" input="convRL_R:In" }
{ output = "copyFC:Out" input="convFC_L:In" }
{ output = "copyFR:Out" input="convFR_R:In" }
{ output = "copyFR:Out" input="convFR_L:In" }
{ output = "copySR:Out" input="convSR_R:In" }
{ output = "copySR:Out" input="convSR_L:In" }
{ output = "copyRR:Out" input="convRR_R:In" }
{ output = "copyRR:Out" input="convRR_L:In" }
{ output = "copyFC:Out" input="convFC_R:In" }
{ output = "copyLFE:Out" input="convLFE_L:In" }
{ output = "copyLFE:Out" input="convLFE_R:In" }
# output
{ output = "convFL_L:Out" input="mixL:In 1" }
{ output = "convFL_R:Out" input="mixR:In 1" }
{ output = "convSL_L:Out" input="mixL:In 2" }
{ output = "convSL_R:Out" input="mixR:In 2" }
{ output = "convRL_L:Out" input="mixL:In 3" }
{ output = "convRL_R:Out" input="mixR:In 3" }
{ output = "convFC_L:Out" input="mixL:In 4" }
{ output = "convFC_R:Out" input="mixR:In 4" }
{ output = "convFR_R:Out" input="mixR:In 5" }
{ output = "convFR_L:Out" input="mixL:In 5" }
{ output = "convSR_R:Out" input="mixR:In 6" }
{ output = "convSR_L:Out" input="mixL:In 6" }
{ output = "convRR_R:Out" input="mixR:In 7" }
{ output = "convRR_L:Out" input="mixL:In 7" }
{ output = "convLFE_R:Out" input="mixR:In 8" }
{ output = "convLFE_L:Out" input="mixL:In 8" }
]
inputs = [ "copyFL:In" "copyFR:In" "copyFC:In" "copyLFE:In" "copyRL:In" "copyRR:In" "copySL:In" "copySR:In" ]
outputs = [ "mixL:Out" "mixR:Out" ]
}
capture.props = {
node.name = "effect_input.virtual-surround-7.1-hesuvi"
media.class = Audio/Sink
audio.channels = 8
audio.position = [ FL FR FC LFE RL RR SL SR ]
}
playback.props = {
node.name = "effect_output.virtual-surround-7.1-hesuvi"
node.passive = true
audio.channels = 2
audio.position = [ FL FR ]
}
}
}
]

# Dolby Surround encoder sink
#
# Copy this file into a conf.d/ directory such as
# ~/.config/pipewire/filter-chain.conf.d/
#
{
"context.modules": [
{
"name": "libpipewire-module-filter-chain",
"flags": [
"nofail"
],
"args": {
"node.description": "Dolby Surround Sink",
"media.name": "Dolby Surround Sink",
"filter.graph": {
"nodes": [
{
"type": "builtin",
"name": "mixer_fc",
"label": "mixer"
},
{
"type": "builtin",
"name": "mixer_s",
"label": "mixer"
},
{
"type": "builtin",
"name": "s_phased",
"label": "convolver",
"config": {
"filename": "/hilbert",
"length": 90
}
},
{
"type": "builtin",
"name": "mixer_lt",
"label": "mixer",
"control": {
"Gain 1": 1,
"Gain 2": 0,
"Gain 3": 0.7071067811865475,
"Gain 4": -0.7071067811865475
}
},
{
"type": "builtin",
"name": "mixer_rt",
"label": "mixer",
"control": {
"Gain 1": 0,
"Gain 2": 1,
"Gain 3": 0.7071067811865475,
"Gain 4": 0.7071067811865475
}
}
],
"links": [
{
"output": "mixer_fc:Out",
"input": "mixer_lt:In 3"
},
{
"output": "mixer_fc:Out",
"input": "mixer_rt:In 3"
},
{
"output": "mixer_s:Out",
"input": "s_phased:In"
},
{
"output": "s_phased:Out",
"input": "mixer_lt:In 4"
},
{
"output": "s_phased:Out",
"input": "mixer_rt:In 4"
}
],
"inputs": [
"mixer_lt:In 1",
"mixer_rt:In 2",
"mixer_fc:In 1",
"mixer_fc:In 2",
"mixer_s:In 1",
"mixer_s:In 2"
],
"outputs": [
"mixer_lt:Out",
"mixer_rt:Out"
]
},
"capture.props": {
"node.name": "effect_input.dolby_surround",
"media.class": "Audio/Sink",
"audio.channels": 6,
"audio.position": [
"FL",
"FR",
"FC",
"LFE",
"SL",
"SR"
]
},
"playback.props": {
"node.name": "effect_output.dolby_surround",
"node.passive": true,
"audio.channels": 2,
"audio.position": [
"FL",
"FR"
]
}
}
}
]
}

This is not Dolby Atmos, and it is not trying to be a licensed Dolby implementation. It is a practical PipeWire-based approach to building a spatial audio profile that can convert surround-like input into binaural stereo using SOFA spatialization, measured HRIR data, or captured convolution profiles such as WAV/IRS presets.