
DSP to DLNA



I've been playing with a SOTM SMS-100. I tried unsuccessfully to use DIRAC on my server and output via DLNA to the SOTM unit.

 

For those who aren't familiar with DIRAC: DIRAC applies its DSP filter at the output stage. A typical setup is Jriver -> DIRAC (WASAPI) -> ASIO or kernel-streaming DAC device.

 

With the SMS-100's DLNA as the endpoint, I select Jriver ASIO for the DIRAC output, since DIRAC lacks DLNA support. When the SMS-100 is configured, it automatically creates a DLNA zone in Jriver, which becomes the default zone assigned to the DLNA device. This cannot be changed.

 

I can set up two Jriver instances and loop back through one of them using the Jriver ASIO driver. I can easily select Jriver ASIO as the output for DIRAC, and the signal loops back into the first instance of Jriver. However, for some reason, the Jriver ASIO driver will not work when looped back into the DLNA-zone Jriver instance. I also tried linking another zone to the DLNA zone, and that doesn't work either.

 

Unless DIRAC updates their software to enable DLNA or Jriver updates their software to allow Jriver ASIO loopback through a DLNA zone, the SMS-100 isn't going to work with DIRAC.

 

I think this is a limitation within Jriver. Jriver won't allow ASIO loopback specifically into a DLNA zone. I also noticed that convolution and all other DSP are disabled inside the DLNA zone.

 

It would be great if Jriver (or DIRAC) could address these issues, as I believe there are many folks that want to stream to a DLNA device and use DSP in the process. DLNA seems like a very promising advance in CA. I just hope that it brings some of the more advanced features of CA along with it.

 

Michael.

THINK OUTSIDE THE BOX

  • 2 months later...

Did you have any luck with this? I was wanting to do the same with my CP-800. I currently use USB but I would like to try it via DLNA. I came up with the same method as you did but like you I failed to make it work!

 

Tom


Edit: I see you did post this at the JRiver Interact forum a while back. I'd repost. Maybe the developers will read it there and respond.

Main listening (small home office):

Main setup: Surge protector +>Isol-8 Mini sub Axis Power Strip/Isolation>QuietPC Low Noise Server>Roon (Audiolense DRC)>Stack Audio Link II>Kii Control>Kii Three (on their own electric circuit) >GIK Room Treatments.

Secondary Path: Server with Audiolense RC>RPi4 or analog>Cayin iDAC6 MKII (tube mode) (XLR)>Kii Three .

Bedroom: SBTouch to Cambridge Soundworks Desktop Setup.
Living Room/Kitchen: Ropieee (RPi3b+ with touchscreen) + Schiit Modi3E to a pair of Morel Hogtalare. 

All absolute statements about audio are false :)


This is what I've learned about DSP, multichannel, and DLNA:

1. The SMS-100 is a great device and I'd buy it in a heartbeat if it worked in an MCH system.

2. I don't know of any DLNA device that can do more than 2CH.

3. DSP cannot be applied to a DLNA stream.

4. I've since moved on from DIRAC Live to Acourate. Acourate NAS is a great solution if one is only using 2CH and needs to stream via DLNA AND apply DSP. Acourate NAS basically pre-convolves the files before streaming. This is the only way I know of to apply DSP to DLNA.
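For anyone wondering what "pre-convolving" actually amounts to, here's a rough sketch in Python of the general idea. This is just an illustration of offline FIR convolution, not Acourate's actual implementation; the filter and audio below are made up:

```python
import numpy as np
from scipy.signal import fftconvolve

def preconvolve(audio, fir_filter):
    """Apply a room-correction FIR filter to each channel offline,
    so the file already contains the DSP before it is streamed."""
    out = np.stack([fftconvolve(ch, fir_filter, mode="full") for ch in audio])
    peak = np.max(np.abs(out))
    return out / peak if peak > 1.0 else out  # normalise to avoid clipping

# toy example: 1 second of 2-channel noise through a 3-tap smoothing filter
audio = np.random.randn(2, 48000)
fir = np.array([0.25, 0.5, 0.25])
corrected = preconvolve(audio, fir)
```

The streamed file then already carries the correction, so the DLNA renderer on the other end doesn't need to do any DSP at all.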

 

Michael.

2. I don't know of any DLNA device that can do more than 2CH.

There are some multi-channel DLNA devices, but they tend to be video players as well, such as the Oppo.

Eloise

---

...in my opinion / experience...

While I agree "Everything may matter" working out what actually affects the sound is a trickier thing.

And I agree "Trust your ears" but equally don't allow them to fool you - trust them with a bit of skepticism.

keep your mind open... But mind your brain doesn't fall out.

2. I don't know of any DLNA device that can do more than 2CH.

 

HQPlayer Embedded can work as a UPnP Renderer and supports all the same stuff as the Desktop version.

 

3. DSP cannot be applied to a DLNA stream.

 

You just need a renderer that can do the DSP and you are all fine... ;)

 

4. I've since moved on from DIRAC Live to Acourate. Acourate NAS is a great solution if one is only using 2CH and needs to DLNA AND apply DSP. ACOURATE NAS basically pre-convolves the files before streaming. This is the only way I know of the apply DSP to DLNA.

 

You can also load Acourate correction filters to HQPlayer, including the Embedded version.

 

P.S. You can also configure the SMS-100 as an NAA for HQPlayer and let HQPlayer do all the convolution before the output goes to the SMS-100. It also supports multichannel output. (This option has nothing to do with UPnP AV / DLNA, though.)

Signalyst - Developer of HQPlayer

Pulse & Fidelity - Software Defined Amplifiers


Miska

 

What are your thoughts on the DSP being applied by the UPnP media server, as opposed to the UPnP renderer, at a transcoding stage? Wouldn't it be more beneficial to relieve the renderer of that sort of processing and just have it play back the 'doctored' music file as supplied by the transcoding media server?

 

Must be something in the air as there has been a similar sort of discussion going on over at the Aries thread in the streaming section:

http://www.computeraudiophile.com/f22-networking-networked-audio-and-streaming/auralic-aries-hardware-impressions-and-information-21261/index31.html#post358677

 

John

We are far more united and have far more in common with each other than things that divide us.

-- Jo Cox

What are your thoughts on the DSP being applied by the UPnP media server, as opposed to the UPnP renderer, at a transcoding stage? Wouldn't it be more beneficial to relieve the renderer of that sort of processing and just have it play back the 'doctored' music file as supplied by the transcoding media server?

 

I think the media server is not the right place for it. One reason is that you can have multiple renderers on multiple playback systems, but only one media server. For example, I have one media server providing content for four different playback systems. Which correction would it apply?

 

In the UPnP model, decoding is done by the Renderer, and the Server transcodes only in cases where the Renderer doesn't support the requested content type.

 

I still hold my overall position that UPnP/DLNA is a sub-optimal solution in general. :)

 

It is also possible to configure HQPlayer Embedded to output to an NAA, so in this case one would have [Media Server] -> [Renderer] -> [NAA], with the Renderer and the DAC having an Ethernet connection between them.

 

(HQPlayer Embedded is just HQPlayer with the GUI stripped off and replaced by a control API, into which you can plug, for example, a UPnP Renderer interface or another custom GUI.)


Sure Miska, I understand that the UPnP model intended transcoding to be used when the file types aren't supported by the renderer, but that doesn't stop it being used for 'correction' purposes too. We could always petition for an extension to the UPnP spec, if that proves to be a bother!

Also, surely the question of whether to transcode or not with a particular renderer, and what to do during transcoding, is already handled by the renderer-profiling setup in the UPnP/DLNA server anyway?

 

However, my question wasn't really whether 'pure' or otherwise UPnP/DLNA can provide a good mechanism for handling DSP. It was more to know whether you agreed that distributing the mechanism (in other words, getting another device to apply DSP to the music file) would be preferable, as it would free the player device to just, well, play.

Also surely the question of whether to transcode or not with a particular renderer and what to do during transcoding is handled by renderer profiling setup in the UPnP/DLNA server already anyway.

 

I'm not aware of such a possibility. The server would need to know which room the renderer is located in to apply the correction for it, and also whether the user is listening through speakers or headphones.

 

Transcoding is based on the set of media capabilities provided by the renderer. When the renderer connects to the media server, it sends the list of MIME types it can support, and the server picks one of those and begins to send that format. Because it is not obvious that there would be any common type between the two, DLNA was formed to specify a restricted set of media types that are either MUST or OPTIONAL. That includes MIME type, sampling rate, number of bits, and number of channels, plus a bunch of additional parameters for video, such as bitrate and codec profile.

 

DLNA says things like "the renderer MUST support 576i AVC base profile at 1 Mbps", or "the renderer MUST support stereo 44100 Hz 16-bit MPEG-1 Layer III audio at 128 kbps". So when the server picks the MP3 format, both sides agree on what the encoding parameters are.
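The negotiation described above boils down to picking the first common entry between two format lists. A toy sketch in Python (the format lists and preference order are hypothetical, not taken from any real DLNA stack):

```python
# The server prefers its native formats; if the renderer supports none of
# them, it falls back to a mandatory format and transcodes.
SERVER_FORMATS = ["audio/x-flac", "audio/L16", "audio/mpeg"]

def pick_format(renderer_mime_types):
    for fmt in SERVER_FORMATS:          # server preference order
        if fmt in renderer_mime_types:
            return fmt                  # common type found: send natively
    return "audio/mpeg"                 # no match: transcode to the mandatory type

# A renderer advertising FLAC and MP3 gets FLAC; a WAV-only one gets MP3.
```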

 

It was more to know if you agreed that distributing the mechanism, in other words, getting another device to apply DSP to the music file would be preferable as it would free the player device to just, well, play.

 

Yes, that's how I've been doing it with the NAA, although what goes out is not a file but a stream. There the "player" == server, and the DAC end of the network is just an "adapter". But of course that's not how UPnP works, and that model doesn't fit UPnP well in general.

I'm not aware of such possibility. Server would need to know in which room the renderer is located to apply correction for that, and also whether user is listening through speakers or headphones.
There are certainly UPnP/DLNA media servers that allow the setup of client profiles for different renderers, e.g. Plex, Serviio, PS3 Media Server, etc. The idea is that each renderer connects with its own unique identifier, which is matched by a client profile contained in the media server. Of course, this requires the renderer to be flexible enough to allow the identifier to be modified, should more than one similar renderer be used on the network with the same (default) 'name'. If not, a proxy-type server could be employed on the network, whose job would be to provide unique-identifier mapping and redistribution. The BubbleUPnP Server does something very similar: one of its functions is to map standard UPnP/DLNA renderers to pseudo OpenHome Media ones, which can be configured with their own unique name and room location, thus allowing a standard UPnP/DLNA renderer to be recognised and used by an ohMedia control point.
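In code terms, the client-profile matching described above is little more than a lookup keyed on the renderer's unique identifier. A toy sketch (the identifiers and profile fields are invented for illustration, not taken from Plex or Serviio):

```python
# Hypothetical per-renderer profiles keyed by the device's unique ID.
PROFILES = {
    "uuid:sms-100-livingroom": {"transcode": "audio/L16", "correction": "livingroom.cfg"},
    "uuid:oppo-cinema":        {"transcode": None,        "correction": "cinema.cfg"},
}

def profile_for(renderer_id):
    # Unknown renderers fall back to a generic profile with no correction.
    return PROFILES.get(renderer_id, {"transcode": "audio/mpeg", "correction": None})
```

This is also why a proxy that rewrites identifiers helps: two identical renderers would otherwise hit the same profile entry.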

 

 

 

Transcoding is based on set of media capabilities provided by the renderer. So when renderer connects to the media server, it tells list of MIME-types it can support and then server picks up one of those and begins to send that format. Because it is not obvious that there would be any common type between the two, DLNA was formed to specify restricted set of media types that are either MUST or OPTIONAL. That includes mime-type, sampling rate, number of bits and number of channels. Plus bunch of more parameters for video, such as bitrate and codec profile.

 

DLNA says things like "renderer MUST support 576i AVC base-profile at 1 Mbps". Or "renderer MUST support stereo 44100 Hz 16-bit MPEG-1 Layer III audio at 128 kbps". So when server picks up MP3 format, both agree on what are the encoding parameters.

Oh, I entirely agree with you: that is the mechanism DLNA-supporting devices are supposed to implement for transcoding. However, there's nothing to stop you employing another mechanism. For example, the MinimServer UPnP media server does not implement DLNA transcoding, but does have an optional transcoding module, MinimStreamer. MinimStreamer has configuration settings that allow the user to implement manual music-file conversions, even rudimentary upsampling. It is the same mechanism that MinimServer employs to provide DoP from stored DSD files. I wouldn't have thought it would take that much effort to add correction-type profiles to the list of things MinimStreamer can do!

 

 

 

Yes, that's how I've been doing it with NAA, although what goes out is not a file, but a stream. And there the "player" == server and the DAC end at the network is just "adapter". But of course that's not how UPnP works, and that model doesn't fit the UPnP model well in general.
Excellent, so distributed processing is a good thing & I was thinking reasonably straight (for once)!

 

NAA sounds interesting and I'm intrigued by its 'stream'. I'll certainly look into it.


Well, it's the 'classic' reason for requiring transcoding. It's for people with renderers that don't support DSD files natively but can of course play DoP (in MinimServer's case it looks to the renderer like a normal WAV file), with an externally attached DSD DAC that has a DoP input.


It certainly should, so long as you've enabled DoP in MinimServer, though there's not much point if you've not connected the Mind to a DSD DAC! The question's been asked before for other UPnP renderers, e.g.:

http://www.computeraudiophile.com/f22-networking-networked-audio-and-streaming/no-idea-could-olive-one-play-direct-stream-digital-dsd-over-pcm-19965/

I wouldn't have thought it would take that much effort to add correction type profiles to the list of things that MinimStreamer can do!
Speaking of which, have a look at a recent conversation over at the MinimServer forums:

DSP Plugin / Parametric EQ plugin for Minimstreamer

 

Looks like MinimServer's developer, Simon Nash, is keen to talk to anyone interested in supplying correction plug-ins for the MinimStreamer module. A complete coincidence, honest!

 

John

FWIW, the latest version of JRiver (20.0.27) has added this feature:

 

11. NEW: Added dsp studio to DLNA server audio advanced options. REQUIRES the output format to be set to "Specified Output Format".

 

 

 

Tried out JRiver 20.0.27 with the new DSP DLNA feature and it works well with my Sonore Rendu. A very nice addition.

PI Audio ÜberBUSS > Synology DS1813+ > TP-Link MC220L > Sonore opticalRendu > Holo Audio Spring DAC L3 > Goldpoint SA1X > Hegel H20 > Salk SoundScape 10's > GIK / PI Audio Group Room Treatments > :)

  • 2 months later...

I haven't given up on MCH DSP over Ethernet. I think the best solution will require a new DAC/ADC. I believe the Merging Technologies gear will work best; I may get a Hapi. I'll wait until after CES to decide, since I know there will be some new Ethernet gear there. I think the Ravenna standard has legs; it offers a very robust solution. One of the coolest things about the Ravenna protocol is PTPv2, a method to very tightly sync multiple devices on the same network. The benefits of this are huge. I think there will be more and more devices using Ravenna in the future. Here's more info:

http://www.merging.com/uploads/assets/Installers//RAVENNA_ASIO_Core%20Audio/Ravenna_ASIO_and_CoreAudio_Guide.pdf
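For the curious, the clock-sync trick at the heart of PTPv2 (IEEE 1588) fits in a few lines: from four timestamps exchanged between master and slave, the slave estimates both its clock offset and the network path delay. This is the textbook PTP arithmetic, not Ravenna-specific code:

```python
# t1: master sends Sync (master clock), t2: slave receives it (slave clock),
# t3: slave sends Delay_Req (slave clock), t4: master receives it (master clock).
def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay  = ((t2 - t1) + (t4 - t3)) / 2.0  # one-way path delay (assumed symmetric)
    return offset, delay

# Example: slave running 5 us ahead of the master over a 10 us link.
off, dly = ptp_offset_and_delay(0.0, 15e-6, 40e-6, 45e-6)
# off ~ 5e-6 s, dly ~ 10e-6 s
```

Once each device knows its offset, it can discipline its local clock, which is how multiple Ravenna boxes on one network stay sample-locked.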

2. I don't know of any DLNA device that can do more than 2CH.

 

What about XBMC (or Kodi nowadays)?

 

3. DSP cannot be applied to a DLNA stream.

 

If you use LMS with the UPnP/DLNA plugin and Inguz DRC, you can.

It requires a bit more than just changing a few parameters in some GUI, though.


This isn't the issue any longer; I believe Jriver can now apply DSP to DLNA. For me, it's MCH hardware. There's nothing that does MCH over Ethernet. Okay, maybe Oppo or Marantz, but that's HT gear. I also want the Ethernet solution to be future-proof. I think DLNA is already in the past.

 


This isn't the issue any longer; I believe Jriver can now apply DSP to DLNA. For me, it's MCH hardware. There's nothing that does MCH over Ethernet. Okay, maybe Oppo or Marantz, but that's HT gear. I also want the Ethernet solution to be future-proof. I think DLNA is already in the past.

 

Unless you mean pure Ethernet as in layer 2, without the higher-level layers, Kodi can act as a renderer for MCH. It can also act as its own media player without any UPnP or DLNA; that is how most people use it.

Unless you mean pure Ethernet as in layer 2, without the higher-level layers, Kodi can act as a renderer for MCH. It can also act as its own media player without any UPnP or DLNA; that is how most people use it.

The whole point of Ethernet is to use something better than USB. U.S.B. is an acronym for Use Something Better. :-)

 

If I could find a Linux renderer with MCH USB output to my DAC (there isn't one), I'd still be using USB.

