
Massive Music Libraries



Does anyone have experience with very large music libraries (i.e. >200,000 songs and/or >12 TB)? We have been bumping up against the limits of the iTunes database structure. As one approaches 30,000 songs, performance is noticeably slower, even on very fast machines (e.g. a Mac Pro with 6 × 3.3 GHz cores / 24 GB RAM / direct-attached SAN storage with 2.4 GB/s throughput). The problem appears to be related to the underlying database files.

 

As I mentioned in a post a while back, one can lock the iTunes Music Library.xml file to prevent constant writes with every change. (This file appears to be used only by third-party software that links up to the iTunes library, so it is not critical to keep it updated.) This helps, but as the library continues to grow, performance continues to drop.
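The locking can be done from the Finder's Get Info panel, or by clearing the file's write bits. A minimal sketch (the library path is the usual default and may differ on your system; the function names are invented for the example):

```python
import os
import stat

# Hypothetical default location of the library XML; adjust for your setup.
LIBRARY_XML = os.path.expanduser("~/Music/iTunes/iTunes Music Library.xml")

def lock_file(path: str) -> None:
    """Remove all write-permission bits so iTunes cannot rewrite the file."""
    mode = os.stat(path).st_mode
    os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

def unlock_file(path: str) -> None:
    """Restore the owner's write permission."""
    os.chmod(path, os.stat(path).st_mode | stat.S_IWUSR)
```

Note this only guards the XML sidecar; iTunes keeps its primary data in the binary .itl file, which is still rewritten.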

 

By the time one reaches 200,000 songs, browsing, playing from, and maintaining the library become rather painful. And as one moves towards 500,000 songs, iTunes becomes fairly useless.

 

From what we can tell, this has to do with the way iTunes stores information in a single database file. Doesn't this remind you of the dreaded PST file in MS Outlook? We are looking into other database architectures for the OS X platform, but wondered whether others have had similar experiences and, if so, how the challenges were addressed.

 

Many thanks

 

 

 

 

Sanjay Patel | Ciamara Corporation | New York, NY | www.ciamara.com


The little freebie version of SQL has definite limits on size and capability. I'm rather amazed you got 500,000 entries into it at all!

 

I guess we are going to have to write a better database manager than iTunes, but that is a non-trivial exercise, especially adding the interfaces needed for all the high-end programs such as Amarra, Pure Music, Decibel, etc.

 

-Paul

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein


I doubt iTunes, or any of the desktop media managers out there, was designed for anything close to 500,000 entries. You need a relational database for that amount of data. Hire a real programmer who can put that data into a database like MySQL or Oracle. A quality programmer can set up queries and indexes that will cut times from minutes or hours to seconds. You will then need a front end created to access the database. There might be software out there that does this already, but it won't be cheap.
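As a tiny illustration of the indexing point (schema and names are invented for the example), even the free SQLite that ships with most systems handles tens of thousands of rows comfortably once an index exists; without the index, every artist lookup would scan the whole table:

```python
import sqlite3

# In-memory demo database; a real library would live in a file on disk.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tracks (
    id INTEGER PRIMARY KEY,
    artist TEXT NOT NULL,
    album  TEXT NOT NULL,
    title  TEXT NOT NULL)""")

# The index turns artist lookups from full-table scans into B-tree probes.
con.execute("CREATE INDEX idx_tracks_artist ON tracks(artist)")

con.executemany(
    "INSERT INTO tracks (artist, album, title) VALUES (?, ?, ?)",
    [(f"Artist {i % 1000}", f"Album {i % 5000}", f"Track {i}")
     for i in range(50_000)],
)

rows = con.execute(
    "SELECT album, title FROM tracks WHERE artist = ? ORDER BY album",
    ("Artist 42",),
).fetchall()
print(len(rows))  # 50
```

The same schema and queries would carry over almost unchanged to MySQL or Oracle; only the connection setup differs.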

 


(1) A Spanish company called Digibit has developed a music database, but their website is short on details. You'll have to phone or email them for more info:

http://english.digibit.es/

 

(2) If you cannot afford to hire a SQL database programmer and you want to make a hobby out of learning database design, you can create your own music database in FileMaker. (Since I already use FileMaker for other purposes, this is on my to-do list at some point, but don't hold your breath.)

 

HQPlayer (on 3.8 GHz 8-core i7 iMac 2020) > NAA (on 2012 Mac Mini i7) > RME ADI-2 v2 > Benchmark AHB-2 > Thiel 3.7


but since I use multiple computers connected to a DAC, my solution would be to dedicate each computer to one or more music genres. Thus, using either my LIO-8 or my Benchmark USB DAC1 with four computers, I can dedicate one computer to classical, one to jazz, one to R&B/rock/pop/hip-hop, and one to reggae/calypso/world. So a massive 12 TB library becomes four manageable 3 TB libraries.

 

Since all my computers play music 24/7, often all I do is select a different computer to hear what's playing (kind of like flipping through TV channels or radio stations with a remote). And of course, since the computers provide quick on-demand selection, the sky is the limit, as they say.

 


I saw one fairly good remark: hire a good programmer. Only fairly good, because even that on its own will get you nowhere.

 

Database design takes some real higher-level skills, and a "programmer" might be up to it after 10 or so years of general programming experience, while staying focused on databases throughout those 10 years.

These days, not many of those exist, because when the SQL RDBMSes were invented, courses would explicitly teach that no real design would be needed from then on, because the DBMS would take care of it all.

What BS.

 

I'm ranting a bit (a bit?), but let's be fair. We most probably all use computers for our work. Shall we run a poll about who is satisfied with the response times and who is not?

 

OK, let's say I am one of the old-school insiders, and that I know. Just for fun, let's assume so. Now, again for fun, here is some data to compare with, drawn from customers of my company's very own ERP software:

 

The number of transactions can be 200,000 per second for some customers. Compare a "transaction" with an "album entry" (dragging its tracks along, e.g. when creating it).

 

The number of records in one file (or table, in SQL terms) can be hundreds of millions.

 

The number of files (tables) comprising the database (OK, in my case): 2,400 or so.

 

The number of programs (think screens) in use throughout the year by all users: something like 14,000 (but I haven't counted lately). Yes, ERP systems are HUGE.

 

Guaranteed response time some 22 years ago: under 2 seconds for 100 concurrent users.

Today we dare state: under one second for an unlimited number of users (thanks to the scalability of the hardware).

 

Again for fun, but to tell you how really BAD things are today:

The abovementioned 22 years back was the world's first ERP implementation on PCs (on a Novell network). Yea yea, I did that. Blah blah. But you know what? PCs were 12 MHz. Those 100 users (130, actually) were served by two file servers running at 33 MHz. They didn't come faster back then.

 

Do we know the difference? Something like 4 GHz / 12 MHz = 333 times?

I know this is not about CPU speed only, but you can bet it matters when Oracle databases need 4 GB or whatever of memory per user in order to speed things up a bit. That means pure CPU usage.

So, 333 times faster, and still all (did I say all?) systems manage to be way slow, or at least slower than that 12 MHz machine from back then. Isn't it amazing? Or total rubbish, actually.

 

All right, you didn't think I was funny right from the start, so I will add a last example to make you cry:

 

Something named "XXHighEnd" was set up without a database. Nothing, zilch. It was my challenge, say, being a little fed up with database design, but also because, at least at the time, all this media stuff seemed tough to normalize in advance. Also take into account the real unknown of how you organize your music: no constraints to be put on you, just let it go, your logic assumed, whatever that would be.

I won't say it was the easiest thing to do, and I will say that a day may come when I pop some tiny little DBMS in after all. But anyway, at this moment I control some 24,000 albums with it, with sheer unlimited cover art (more cover-art files than tracks), and still my total collection is on screen within 0.1 seconds, ready to browse through or search in (hot-typing narrows the result in real time, like Google does it).
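The "hot-typing" idea, re-filtering the visible list on every keystroke, needs no database at all for an in-memory collection. A rough sketch (function and data names invented for the example):

```python
def live_filter(albums, typed):
    """Return the albums whose name contains the typed fragment,
    case-insensitively -- recomputed on every keystroke."""
    needle = typed.casefold()
    return [a for a in albums if needle in a.casefold()]

albums = ["Kind of Blue", "Blue Train", "A Love Supreme", "Giant Steps"]
for fragment in ("b", "bl", "blu"):
    print(fragment, "->", live_filter(albums, fragment))
```

For tens of thousands of entries a linear scan like this is still well under a tenth of a second; the point is that the whole collection is already in memory, so each keystroke only narrows it.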

 

So, PeterSt, how great you are.

Ah, oh, come on now. It is the other way around: what's produced these days is generally worth nothing much, and I just behave as one should. That's about all.

 

To give a better example, look at a forum like this one, or the so many Simple Machines forums around. No matter how huge they are in number of posts and users, no performance problems there. And you know what? They most often run on this "baby" (free) MySQL. So it really can be done.

 

Don't take iTunes as the example. It's really nothing. Did you notice the startup time? Come on now.

Install it, drag QuickTime along with it, and you can be sure of total system degradation just because of the latter.

 

Phew, I'll stop. Crazy post. But I guess I'm so bothered by all these beautiful IT systems around me that I had to vent.

 

Peter

 

PS: To be ahead of you: general user friendliness or intuitiveness is quite another subject. I am not good at that at all (haha, you knew that).

 

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


Many thanks for your input, and Peter, yes, I feel your pain! Here are a few additional suggestions I've heard so far from various sources:

 

1. Look into the software platforms used by radio stations. I'd love to; still looking currently.

 

2. Try J River Media Center on Windows 7. I'll get around to it for sure, but I'm looking for an OS X solution.

 

3. Abandon the library concept and use the Finder with Spotlight (throwing in a neat plugin called HoudahSpot, which is like Spotlight on steroids -- http://www.houdah.com/houdahSpot/). Then, once music is found, play it through whatever audiophile player you want, using its independent mode. I guess this works, but it isn't all that user-friendly.

 

4. Miro, the open-source project. I have started to toy around with it, but I'm not sure it can handle large amounts of media. I'll test it out and see what happens.

 

5. While hunting online, I also came across Final Cut Server. This looks to be part of the pro audio/video suite including Logic, Final Cut Studio, etc. It looks like a pretty impressive database tool. The question becomes: can one manage to use it in a computer-audiophile application?

 

 

 

 

 

Sanjay Patel | Ciamara Corporation | New York, NY | www.ciamara.com


but since I use multiple computers connected to a DAC, my solution would be to dedicate each computer to one or more music genres.

 

Now *that* is a fresh solution!

Also the strangest ever, but alas.

 

:-)

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


Have a look at Sonata; a thread was started on CA just this week. It's a classical-oriented music database for Windows. It looks like the engine is J River, but the front end is suited to deeper classification for that genre. I spent a bit of time reading the manual, and curiously downloaded the app.

Sadly, the installer works only on 32-bit Windows, with x64 to come later. There's no mention of this in the manual, nor of how to optimise the sound settings.

 

A pity, because x64 allows >4 GB of RAM, which makes searches a bit easier and using a large database somewhat more pleasant. It means I can't install it on any of my Sony Vaios :(

 

More : http://www.sonataserver.com/

 

AS Profile Equipment List        Say NO to MQA


Let me put it this way: if your amplifier could not drive your speakers to acceptable levels, what would you do? Change the firmware in it? Use a different input?

Live with clipped, overstressed, piss-poor sound?

 

Obviously not; you would simply replace it with an amplifier that would do the job.

 

Take that analogy one little step further: suppose no amplifier existed that could handle the specific load you put on it. What would you do then?

 

Most likely, you would call up one of the manufacturers and see what it would take to get a special-purpose amp modified to do what you want it to do. It could be expensive, especially as a one-off, but then, it could well be the only way you will get a clean, working, satisfactory solution.

 

Software works the same way. If you were to hire my company to design and implement an application that essentially functions like iTunes but has greater capacity, guaranteed response times, and multi-platform Mac/Windows/Linux support, while leaving OUT things like streaming to remote devices and ripping, and most especially without a player engine, it would run you somewhere north of $28K, delivered with source and full distribution rights. That's a rough estimate, of course, and depending upon the feature set desired, would be slightly higher or lower.

 

It isn't that complex a project, but it isn't a dead-simple thing you are going to do well by fooling around with a database program either. In point of fact, indexed files would be a better implementation choice.
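A sketch of the indexed-file idea (all names invented for the example): keep the records in a flat file, with a small index of byte offsets on the side, so a lookup seeks straight to the matching records instead of scanning the file:

```python
import os
import tempfile

# A flat "library file" of one record per line, plus an offset index.
records = [
    ("Miles Davis", "Kind of Blue"),
    ("John Coltrane", "Blue Train"),
    ("Miles Davis", "Sketches of Spain"),
]

path = os.path.join(tempfile.mkdtemp(), "library.dat")
index = {}  # artist -> list of byte offsets of that artist's records

with open(path, "wb") as f:
    for artist, album in records:
        index.setdefault(artist, []).append(f.tell())
        f.write(f"{artist}\t{album}\n".encode("utf-8"))

def lookup(artist):
    """Seek directly to each record for the artist; nothing else is read."""
    albums = []
    with open(path, "rb") as f:
        for offset in index.get(artist, []):
            f.seek(offset)
            line = f.readline().decode("utf-8").rstrip("\n")
            albums.append(line.split("\t")[1])
    return albums

print(lookup("Miles Davis"))  # ['Kind of Blue', 'Sketches of Spain']
```

A real implementation would persist the index alongside the data file and keep several indexes (artist, album, title), but the principle is the same: reads are proportional to the result set, not to the library size.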

 

I do not agree with Peter about how database systems were taught over the past few decades, but then, I was in lectures given by C. J. Date when DB2 was created and published, and I still have a copy of Oracle that runs on Mac OS 6.5. I taught some of those classes, in fact.

 

I'm not trying to throw cold water on your thinking, but there just isn't anything out there, commercially affordable to individuals, that is well designed for large private music collections. J River Media Center handles large collections much less gracefully than iTunes.

 

-Paul

 

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein


Paul,

 

Thanks for your insights. I had a deeper look at Final Cut Server, and while it could "work" in the hack sense, it is not really designed for a music-only library. It has media-management features that video post-production houses use, and a lot of that just makes it too complicated for a private collector to use effectively.

 

It appears that for large collections, a proprietary database structure or custom solution is the only way. I am looking into Sooloos, but I'd much prefer a computer-based solution (for OS X).

 

Sanjay Patel | Ciamara Corporation | New York, NY | www.ciamara.com


On Mac OS, there are two fundamental limitations that are difficult to get past: disk storage systems are (by server standards) very slow, and a vast amount of CPU time is spent making the user interface responsive and accommodating to the user.

 

You could look into something like a Linux server running the database software and publishing the disk space over the network, with just a neat user interface running on the Mac.

 

It's more complex, and inherently multi-user (which is kind of bad; it adds more complexity), but possible to do "on the cheap", as it were. You could probably handle a million or two tracks in a typical DB2 or Oracle database on a dedicated Linux server, and Oracle/DB2 is cheap on those platforms.

 

I can see where it would be possible to cut the cost of a player (software-wise) down a whole bunch, but at the expense of adding a second networked computer to do the heavy lifting.

 

-Paul

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein


Hi Paul,

 

Looking at the suggestions from your last post, or maybe even at the general flow of ideas in the thread ...

 

How do you actually see this happening?

 

You (and not only you) seem to suggest that adding a backend database server somewhere will solve whatever response problems general applications have? How?

 

If you're talking about a dedicated (new) application to deal with music data, one that allows addressing arbitrary existing players, then it is another matter. But then we would be talking about solving problems which you and I solved a long time ago. Just do it right, period.

 

and a vast amount of the CPU time is spent making the user interface responsive and accommodating to the user.

 

Which isn't going to be solved by us tweaking something (which I wouldn't understand anyway). It's just a matter of the poorest "programming" (design as a whole). As I said, look at the startup time of iTunes, for example.

 

As you said or implied earlier, it's all a matter of amateurs (like Apple???) working on these things.

 

Oh, but we disagreed a little anyway, right?

Haha.

 

I do not agree with Peter about how database systems were taught over the past few decades, but then, I was in lectures given by C. J. Date when DB2 was created and published, and I still have a copy of Oracle that runs on Mac OS 6.5. I taught some of those classes, in fact.

 

The first part tells me that you should know a few things. About the last part: three years back I hired an SQL teacher. You know, someone who has been doing this day in, day out for 20 years or so. He knew the "language" all right, but didn't understand a thing about what would actually happen when he applied his indexes (or not) in random places. How many potentially good students will he have destroyed? Countless, I'm afraid.

 

About Date ... Way back, I extended Codd's normal forms with another five. Real fun. Real good, too. But nothing any SQL implementation would like to deal with, unless you are blind as the designer/programmer and again won't know what damage it does (once the number of records approaches a billion).

 

When I started my company back in 1987, I put forward one little rule for the developers, which stands to this day:

 

If you ever read one record more from disk than will appear on the user's screen, you're fired.

 

(Don't nag about the disk subsystem reading at the block level.)

 

Think about this, and how you would manage it (pick your DBMS).
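The rule can be expressed in a single query shape: fetch exactly the rows one screen will show and nothing more. A sketch (invented schema) of keyset pagination with SQLite, where the index walk touches only the rows that will actually be displayed:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE albums (id INTEGER PRIMARY KEY, title TEXT)")
con.executemany("INSERT INTO albums (title) VALUES (?)",
                [(f"Album {i:05d}",) for i in range(100_000)])
con.execute("CREATE INDEX idx_albums_title ON albums(title)")

SCREEN_ROWS = 25  # rows visible on one screen

def next_page(after_title=""):
    """Fetch exactly one screenful, resuming after the last title shown."""
    return con.execute(
        "SELECT title FROM albums WHERE title > ? ORDER BY title LIMIT ?",
        (after_title, SCREEN_ROWS),
    ).fetchall()

page1 = next_page()
page2 = next_page(after_title=page1[-1][0])
print(len(page1), page2[0][0])  # 25 Album 00025
```

Note the `WHERE title > ?` keyset instead of `OFFSET`: an offset still forces the engine to walk past all the skipped rows, which is exactly what the rule forbids.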

 

Peter

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


I think you should really give Music Player Daemon & Co a go. It's been reported that it handles libraries with 500,000 tracks with ease.

 

Furthermore, one may split the load between a DAC-connected music-playing computer (the actual daemon) and client(s). By moving the track files to yet another (arbitrarily fast) file server, you have a nice load-balanced setup.

 

It runs on OS X, Windows and Linux and can be configured to be a bit-perfect player (at least on Linux ;).
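A minimal mpd.conf sketch of such a setup (paths and device names are placeholders; consult the MPD documentation for the full option list):

```
# Minimal mpd.conf sketch -- paths are placeholders, adjust to your setup.
music_directory     "/mnt/music"          # can be an NFS/SMB mount on the file server
db_file             "/var/lib/mpd/database"
bind_to_address     "0.0.0.0"             # accept clients from the network

audio_output {
    type        "alsa"                    # Linux; use "osx" on a Mac
    name        "DAC"
    mixer_type  "none"                    # no software volume, bit-perfect path
}
```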

 

If this works for you, you are of course welcome to send me a fraction of the $28K mentioned earlier. Or better yet, donate it to the MPD project so they can continue their great work.

 

Cheers ,

Ronald

 

 


Getting subsecond response over a network, for a relatively small database system like this would be, is a pretty well understood task these days. Not all that much challenge to it.

 

Remembering, of course, that this is the data, not the display of the data. That can take a second or so longer, depending upon how complex the display actually is.

 

If you're talking about a dedicated (new) application to deal with music data, one that allows addressing arbitrary existing players, then it is another matter. But then we would be talking about solving problems which you and I solved a long time ago. Just do it right, period.

 

I do not see another answer to the problem posed by the OP. You either write a very optimized application using basically what amounts to indexed files, or you architect a multi-tier solution with a database engine capable of reliably delivering the required data very quickly.

 

 

As you said or implied earlier, it's all a matter of amateurs (like Apple???) working on these things.

 

Not exactly; Apple clearly has the skill set to do large databases, even with a web front end, very well indeed. Look at the iTunes Store.

 

The iTunes application is, within its limits, a very well written and even better designed application. It does, of course, spend more time making itself look pretty than it does actually doing anything.

 

The size, complexity, completeness, and scope of a project will reveal the capabilities of a software developer.

 

-Paul

 

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein


I do not see another answer for the problem posed by the OP

 

Ah, then that is cleared up. We actually agree.

 

Still, I think it looks like overdoing things (multi-tier etc. solutions), because, let's say, I've proven that it can all be done without any DBMS mechanism. This is not only about the less telling example of 25,000 albums being there in a blink (the 250,000 tracks coming from that, btw), but the maybe more telling example of taking those 25,000 albums and comparing them with another 2,000 albums, to find which are different (could be new to your collection). This takes something like 5 minutes, and includes normalizing everything to artist names and album names, which obviously you would structure differently from how I do. Think about downloads and such.

I'm not trying to lay out all that's in there, but merely to indicate that it's "quite some": it covers for typos (a typo shouldn't mark an album as "I don't have that yet"), for people putting production years in the names, for other stuff like sample rate, labels and what not; it works with phonetics, and all together that consumes quite some CPU cycles.

 

Each of the 2,000 has to be laid against each of the 25,000, which makes it 50 million comparisons of this kind, and the moral is and remains: no DBMS is called for here. Just two folder structures of unknown layout (I can't anticipate your structure).
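A toy version of that comparison (the normalization rules here are invented for the example; a real matcher like the one described would add phonetic and fuzzy matching on top):

```python
import re

def normalize(name):
    """Crude canonical form: lowercase, strip bracketed years,
    drop punctuation, collapse whitespace."""
    name = name.casefold()
    name = re.sub(r"[\[(]\s*(19|20)\d\d\s*[\])]", " ", name)  # "(1959)" etc.
    name = re.sub(r"[^\w\s]", " ", name)
    return " ".join(name.split())

have = ["Kind of Blue (1959)", "A Love Supreme", "Blue Train"]
incoming = ["kind of blue", "Giant Steps", "a love supreme!"]

known = {normalize(a) for a in have}
new_albums = [a for a in incoming if normalize(a) not in known]
print(new_albums)  # ['Giant Steps']
```

With exact normalized forms a set lookup avoids the pairwise scan entirely; the full 2,000 × 25,000 comparison only becomes necessary once similarity scoring (phonetics, edit distance) enters the picture.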

 

So yes, the only solution is to redo what's wrong/too slow, but to put it into an Oracle etc. database ... no.

Oh wait, I'm fairly sure that the result would be within a second (maybe two) when set up well, but it's really not worth the effort (and cost to the end user), nor does it justify the general overhead it implies (hey, stuff in lots of GB of memory just to let it operate in the first place).

 

The presentation on the screen works similarly. Yes, it takes CPU cycles again to present a picture nicely, but so many things can be done to avoid cycles. You can let it go, use some inefficient stuff and do nothing further, or you create your own rendering (more work) and create caches (more work, more complexity). Next, you can render pictures according to their presented size (more work, more complexity) and/or render them by the very best means you can think of (more cycles, so do more about the caching etc.).

 

All the crazies tend to create "the best cover art" by means of 8000x8000-pixel covers, consuming 40 MB or more per picture. There is no way around reading that 40 MB in the first place, BUT we can provide means to downscale those pictures to 500x500 or whatever we think is acceptable, without visible loss, because the rendering applied is again the best.

That this takes routines which don't exist in even the best Adobe et al. products is something else, and it takes time, time, time to think about the how, design it properly and build it.
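The downscaling itself is ordinary resampling. A pure-Python toy that averages 2x2 blocks of a grayscale grid (real code would use an image library and a proper filter such as Lanczos, and would cache the results as described above):

```python
def downscale_2x(pixels):
    """Halve a grayscale image (list of rows) by averaging each 2x2 block."""
    out = []
    for y in range(0, len(pixels) - 1, 2):
        row = []
        for x in range(0, len(pixels[y]) - 1, 2):
            total = (pixels[y][x] + pixels[y][x + 1] +
                     pixels[y + 1][x] + pixels[y + 1][x + 1])
            row.append(total // 4)
        out.append(row)
    return out

image = [[0, 0, 100, 100],
         [0, 0, 100, 100],
         [50, 50, 200, 200],
         [50, 50, 200, 200]]
print(downscale_2x(image))  # [[0, 100], [50, 200]]
```

Applied repeatedly (or generalized to arbitrary block sizes), this is how an 8000x8000 cover shrinks to a 500x500 thumbnail that is then cached instead of re-read.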

 

The difference is close to 8,000 hours for the few things I mentioned, against a week for something with no different result at first glance.

But wait until your library grows.

 

I guess this is what happens when the developer is used to working without upper boundaries (quite necessary in ERP), instead of assuming "it obviously won't be more than 10,000", or not thinking about it at all.

 

Add to the latter that I always think about the future, which for example leads me to thinking that soon everybody will be able to copy an album from the internet within a few seconds. That I can't do this yet tells me nothing; it will happen, and it *will* mean that my current 25,000 will grow to 250,000. That is ridiculous in the first place, but the software should be able to manage it. And so it does, right from the start. Well, for linear response, that is, which possibly doesn't make it "workable" in that future. By that time I will hop over to that DBMS and the "one or two seconds", just because I know how to, which for now is what matters.

 

That the 250,000 is ridiculous, thus leading to other setups, is another matter; and my thinking is that a 5-second download should lead to direct streaming. And thus, as ready from the start as the other stuff, this is in there and ready to use.

That this leads to other approaches than the iCloud stupidity is something else, but you can bet that something like the XXHighEnd community alone is more than sufficient to set up a common database with shared music for far beyond the remainder of anyone's life, and it really will take nothing else (like Amazon etc.). I think I talked about that in the other thread.

 

That this hasn't happened yet is only because upload speeds are not sufficient, but that is a matter of time.

That, once they are sufficient, it still won't allow 100 people to stream "my music" will be solved by load balancing, because obviously those 100 people will easily find their albums to play in 100 places. Is that load balancing? Or is THAT cloud computing? Think about it.

 

I'm drifting off. But maybe not, because organizing huge music libraries WILL become the issue, and it is the very first problem people WILL run into as soon as things get large. Not so much the response of it all, but merely at the logical level. Maybe that is why I mentioned the "compare albums" stuff, which is the only way to manage your library when something new comes in as part of some larger pile. We may not be used to all this yet, but I have been for a longer time, and there is no way this can be dealt with manually. No way.

But it also will turn out to be the wrong way to collect and collect, which only takes effort, while it can be done at some higher level, again by automation. That cloud, you know, but then executed well.

 

Sorry for the long story, but I hope to have envisioned how "bad" applications in this regard should be worked out in the future. It is only my vision, of course, and it most probably can be bettered. But at least it is nothing like how things currently go.

 

Lastly, I just thought of a good example of the wrong and the good:

Over 20 years ago I wrote (no, had written) an application that allowed entering albums and track names into a database. It contained everything and all, just as we need it today, with the difference that *that* one was fully normalized (up to labels and origins, like "it came from Paul"). The application was used by a group of audiophools copying CDs to DAT tape. You could say it was a larger activity than you'd ever see today, because there was no internet and hence no downloads. We had to rip ourselves, and a row of DAT machines was running throughout (at work, haha). We made analogue photos of the CD covers, and were the best customer of the local photo-developing shop. It was all set up so that we each had our own share of entering track names and running times, which was a hell of a job. The database was one and the same, and later we only needed to copy DAT tapes and indicate in the database that we "owned" the album concerned. The typing had been done by the others ...

 

This was the predecessor of CDDB, where some clever guy did the same, but at the global level. It needed the internet to do it, just as the speed of the internet is needed for all that I talked about (and some legal form!!!).

 

Oh well, sorry again for the long post. I get enthusiastic sometimes.

Best regards,

Peter

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


It may look very strange to you (after having managed to finish my previous post), but wherever I am, I try to buy some CDs. The toughest part (and actually a real problem) is buying something which is not already in my collection. In the end I don't care much at all, apart from physically present CDs ending up twice in a rack somewhere.

 

The very first thing I'd need is access to my library in that store. Having it on some handheld is a possibility, but the 40 GB or so its metadata already consumes makes it a tedious operation to maintain (slow copying to the device, and it's redundant, which is not good).

 

See? When things grow larger, we already need access to our "database" just for that.

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


Maybe it needs a dedicated thread, but I don't start many, so I won't do it now either. Besides, I think it is quite all right to have it in here, especially looking at the text of my far too large previous post (if anyone managed to finish that in the first place).

 

It is quite clear that I can set up accessible music (of any available high(est) quality) completely on my own. Today it only needs a server with some bandwidth somewhere else, or possibly I could get another (faster-upload) subscription. I'd invite 10 others throughout the world, and really, all would be set and flowing. We'd be playing each other's music ...

 

BUT

 

Coincidentally, I am not among those who like to steal money from hardworking people, like the musicians, producers and all who actually deserve the money. This is also related to my "I don't care", from my previous post, about buying a CD which I already have in "the" collection. On the contrary: arriving back home, that only feels good. I hope you know what I mean, speaking in this context.

 

What I wish to say out loud is that this will never work without a means to pay the ones concerned. And NOW I have a problem, because I may think that I can do everything and all by myself, but this one is one step too far;

 

I am 100% sure that the XXHighEnd users joining this think exactly the same as I do, which already starts with inviting only those of whom I KNOW they think the same. So that is set and settled.

 

But how in the world to set up a means, a structure, a model to pay the rightful owners? You know, all those actually (tens of) thousands of people (or hundreds of labels, to make it easier)?

 

I have no clue ...

 

Anyone?

 

Lush^3-e      Lush^2      Blaxius^2.5      Ethernet^3     HDMI^2     XLR^2

XXHighEnd (developer)

Phasure NOS1 24/768 Async USB DAC (manufacturer)

Phasure Mach III Audio PC with Linear PSU (manufacturer)

Orelino & Orelo MKII Speakers (designer/supplier)


Sorry, one more about this:

 

What I have always been thinking is that, say, my downloads, for which I always paid, although much of it would be more or less illegal anyway, to say the least, are finally paid for. Or paid for again, if you want.

 

So, no matter where we got our music from and no matter how illegal (or not) that has been, by letting others stream it, that music is now suddenly paid for to the rightful owners.

 

By the way, not a single cent would flow to the one who shares his music. Not to me, not to anyone who joins. So the only benefit a participant has is access to all the music at a rate which can't be painful at all. It could be as low as 10 cents per track, which I certainly wouldn't even think about when playing a couple of hours (ending up spending a few $ only). If a few $ means $2 per day per participant, it obviously takes quite a number of participants to make it beneficial to those rightful owners. Assuming 1,000 join in, that's $2,000 per day or $730,000 per year. Still nothing much, I'm afraid, but at least it goes back to people who otherwise would have received around the same when those same 1,000 bought CDs. What would that be? One per week? That's 1,000 per week or 52,000 per year. Times $20, that gives $1,040,000. The normal model gives 52 new CDs per year, while my setup gives access to an amount which can't ever be played through.
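The back-of-the-envelope numbers above (assuming $2 per participant per day in the streaming model, versus one $20 CD per participant per week) do check out:

```python
participants = 1_000

# Streaming model: roughly $2 per participant per day to rights holders.
stream_per_year = participants * 2 * 365
print(stream_per_year)  # 730000

# CD model: one $20 CD per participant per week.
cd_per_year = participants * 52 * 20
print(cd_per_year)      # 1040000
```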

 

You could even say the model is based on illegal downloads at first, but it does little harm.

 

Look at it from the other perspective;

Chesky brings out a great new recording. I most probably wouldn't buy it because I'm not even finished with the ones I have. But somebody will. Of the 1,000, maybe a few would, but initially it's just one, because everyone else expects it to be uploaded somewhere anyway, and watches the Library ...

There it is, but when the potential second buyer wants to play it, he finds himself in a rather endless queue, because 50 of the others want to play it too, and the bandwidth of the one "owner" is not sufficient to let it flow faster. So he buys it too. If we stick to these two, that's $40 of income for Chesky (et al.), eventually followed by 50 listeners who each pay $1 for the 10 tracks the album contains (they listen to all the tracks).

While this is the short term outcome (and income), in the long term other things happen which are hard to judge. But the short term gives the equivalent of 4.5 CD sales per 1,000 people (two actual sales plus $50 in listening fees, at $20 per CD), while most probably it would otherwise have stuck at 2, with a fair chance that one of these two buyers, or even both, would upload it, and you and me and the 48 others would be downloading it anyway; if all is right, all 50 of us feeling bad about it, "but this is the modern world".

 

Of course things work better when the 1,000 participants grow to 10,000 and more (and more), but the result should scale fairly linearly from my scenario above, everything being governed by bandwidth limits and people continuing to buy the CDs.

It is obvious to me that in the end the ratio between the amount of music in the database and the number of participants will change for the better, because eventually all of it will be in the database while the number of participants keeps growing, as will the source (cloud), as will the usage. So more people will pay against a shrinking originating cost (the CDs).

 

Yes, sometimes I have strange dreams.

 


That's quite a post Peter, took me a while to work through it. Correct me if I am wrong (and I may be, I think only in English... :) but is this essentially what you are saying? I think we are pretty much in agreement all told.

 

(1) You don't have to have a big multi-tier application. A well designed monolithic application can handle datasets of many millions of records.

 

[Warning, relatively dense technobabble to follow in the ordered list section below! ]

If so, then we do agree there. The point of looking at a multi-tier solution is thus:

    It separates the data "engine" from the display "engine", allowing each "engine" to be optimized for its particular task. Advantage.
    Because of that separation, each piece, data and display, may be implemented with different tool sets, and that saves money. It adds complexity, but it saves money. COTS software may be configured and used for the data engine, and easy-to-use stuff like RealBasic can be used for the "display" engine.
    It is inherently multi-user, and facilities for multi-user record locking and management are already in place.
    The data engine part may be scaled to meet the customer's need - the customer with 50 million tracks may use a more robust engine (and consequently, a more expensive one) than the customer with 5,000 tracks.
    The biggest downside is that it is far more complex to set up, populate, and manage.
    The user sees a simple-to-use and fast interface, which hides all the complexity beneath the surface.
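To make the separation concrete, here is a minimal sketch of the idea in Python, with SQLite standing in for whatever COTS data engine would actually be chosen; every name here is invented for illustration, not taken from any real product.

```python
import sqlite3

# Minimal sketch of the tier separation described above: a "data engine"
# that owns storage, indexing and querying, and a thin "display engine"
# that only asks questions and formats answers. SQLite stands in for a
# real COTS database; all names are invented for illustration.

def make_data_engine():
    """Data tier: owns the schema and the indexes."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE tracks (title TEXT, artist TEXT)")
    db.execute("CREATE INDEX idx_artist ON tracks (artist)")
    return db

def add_track(db, title, artist):
    db.execute("INSERT INTO tracks VALUES (?, ?)", (title, artist))

def display_by_artist(db, artist):
    """Display tier: queries the data tier, then renders the result."""
    rows = db.execute(
        "SELECT title FROM tracks WHERE artist = ? ORDER BY title",
        (artist,)).fetchall()
    return [f"{artist} - {title}" for (title,) in rows]
```

Because the display tier talks to the data tier only through queries, either side can be swapped: the in-memory SQLite could become a server-grade engine for the 50-million-track customer without the display code changing.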

 

A monolithic application will be faster, perhaps much faster, though the difference between a 3 millisecond response and a 20 millisecond response will most likely go unnoticed by the user.

 

An application that is about 6x faster is probably wasted if the user doesn't notice it. :)

 

(2) XXHighEnd and the Phasure DAC are built to be very fast with a single user model, and don't provide much in the way of music library management or user interface at this point.

 

Yes, and to add music library management onto the XXHighEnd line would mean a Windows product running on the same machine, which defeats some of the purpose of the XXHighEnd product.

 

Realize that XXHighEnd is meant to appeal to only a tiny fraction of even the audiophile market, as is true of any tightly closed application. One has to buy into not only the Phasure, but a particular kind of PC, limits on the type of music files, and other similar constraints.

 

This is not bad of course, it is a choice.

 

For mass market appeal, even to a market as limited as the audiophile market, cross-platform compatibility can be important, user interface is VERY important, and simplicity is also up there in the highly important range. :)

 

Sound quality is the top consideration, but often, we will compromise sound quality for other factors. Compromise, not sacrifice.

 

 

Yours,

-Paul

 

 

 

Anyone who considers protocol unimportant has never dealt with a cat DAC.

Robert A. Heinlein

Link to comment

Hi Paul,

 

I don't disagree with all you said. But in some places I may miss the point of why you said it. This most probably is because I myself derailed somewhat from a sub-subject in this thread ? I'm not sure. So :

 

If I had to set up something like a globally accessible "music library" - hence a database with all the music in there, and say just for finding the music (so not "playable" yet) - I would go the way you described it. A database engine somewhere, and application developers could use it to make anything like what exists today, but the music data would be stored elsewhere. Could be your place, could be a datastore somewhere, could be spread over several places (cloud-like).

 

But we would have a database means, maybe the music itself, and no application. Thus the objective of this would be a common shared database, ready for use by anyone.

Let's say I could put this up in a week or two (just because of my experience with it; the data model for it is not all that hard to make).

 

But we'd have nothing. The database would be empty, and right there the "nothing" starts.

I don't think I need to explain that these lousy two weeks would be far less than 1% of the work, or what the remaining 99% of the work to come would comprise.

We could think that the Foobar guys would start to make relatively small routines so we could upload CD data there. Maybe, in broad agreement, every playback application would start to do so, but then what ? Then, for example, I would be rewriting my application so the data from now on comes from there ? No, thank you. I have had my share, and I'm glad it works today.

You could do it (right from the start), but what would you have ? PPlay. Maybe it would sound the best, but probably it wouldn't.

 

Of course, once this was all up and running (say, in 20 years' time), any new development would start using the data there. But I don't even see it getting started (chicken-and-egg).

 

(2) XXHighEnd and the Phasure DAC are built to be very fast with a single user model, and don't provide much in the way of music library management or user interface at this point.

 

I think it is this that makes me suspect you interpreted my post within the realm of your own ideas in the first place (which would be what I was just writing about). Also, your ideas seem to come more from the technical point of view than the functional one. So, regarding this quote (which, btw, is your text, not mine - it looks the other way around) :

 

To hopefully make clearer what I am talking about : XXHighEnd *today* is ready for all this. No adjustments needed.

 

Does this change anything ?

If not, then *I* don't understand what this is all about, and I guess that is okay (I won't be *that* damaging to the world, will I ? haha).

But if so, maybe a hopefully small explanation of how I see this working :

 

First off, XXHighEnd is "Music Library" all over. This, contrary to what you seem to think (??), see quote. It provides ALL of it, instead of "not much". For example, if I'd send you my meta data (named Galleries, and which is nothing else than a copied folder structure), you could browse my music data including coverart and everything, could search it like it was your own data (via XXHighEnd, via Explorer, or via any means you could make on top of it with the folders as a base), and you could even load the tracks into the Playlist Area. But you can't press Play, because the real music is missing.

 

Think of what comes next as virtual, but if we'd really go ahead, it would really work. Remember, *today*.

 

If you hold that Gallery, received from me, against your own Gallery (and this could go per music type, on my side as well as yours), you could create an excluding common denominator of our two Galleries (this goes by automation), and what comes out of it is a Gallery again. You send that to me, and at loading your sent Gallery (which is the 0.1 sec. thing) I'd press ONE button, and what comes out of it is real music data, onto a denoted location (can be a network share). And next you'd have my music, or better said : the music you selected yourself from the list of music I have and you had not.
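Since a Gallery is just a folder structure, the "excluding common denominator" step amounts to a set difference over relative paths. A minimal sketch, with helper names invented here (these are not XXHighEnd's actual routines):

```python
# A Gallery is modelled here as a collection of relative folder paths
# (the copied folder structure that carries the meta data). The helpers
# below are invented for illustration, not XXHighEnd's real routines.

def gallery_difference(theirs, mine):
    """The excluding common denominator: entries in their Gallery
    that are absent from mine, i.e. the music I could request."""
    return sorted(set(theirs) - set(mine))

def gallery_union(*galleries):
    """What a shared top-level folder presents: everyone's Galleries
    merged into one browsable view."""
    merged = set()
    for gallery in galleries:
        merged |= set(gallery)
    return sorted(merged)
```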

 

This is the "offline" situation, because we sent (emailed) each other this Gallery data.

The online situation - the one I just haven't put up, but which also works *today* because it is exactly 100% the same as what I (and all XX users) use internally - is this :

 

We all have our Galleries, and put them up (on-line, on an accessible web server) under a main folder structure. Btw, this "putting up" is just XX functionality, and it's just denoting your Gallery to some (network) location. Remember, it's all normal copyable folders/files. The hierarchy does the work : the higher you ask, the more you get (with at the lowest level your Jazz somewhere and my Jazz somewhere else in the structure), so asking for the highest level would present all from everyone. Still it can be filtered to Jazz, and mine as well as yours as everyone's Jazz would pop up.

 

When I use XXHighEnd I denote my Gallery data (which is meta data) from a share (on that "web server"), and when you use it (or your re-created derivative of it, named PPlay), you would do the same.

 

The meta data contains the physical locations of the actual music data. It works like that here in my local music room, and it would work like that at your side (trying is all it takes :-). There is no difference whether the meta data file sits locally and refers to my music data locally, or whether the meta data file sits on a shared location (we called that "web server"). Ehh, obviously. So, whether I play the meta data file (which is what we'd "play") from here locally or from the web server, both start pulling the physical tracks. If you coincidentally play a track which refers to my location, well, it's pulled from there and will play at your Proton or whatever you hooked up.
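The point that playback only follows whatever physical location the meta data names, local disk or network share alike, can be sketched like this; the record layout and function names are my invention, not XXHighEnd's actual format:

```python
# Sketch of meta-data-to-track resolution: a meta data record names the
# physical location of the real music file, and playback simply opens
# that path, whether it is local or on a network share. The record
# layout and function names are invented for illustration.

def resolve_track(meta):
    """Return the physical path a meta data record points at."""
    return meta["location"]

def is_remote(path):
    # UNC paths (starting with two backslashes) mean the track is
    # pulled from another machine rather than the local disk.
    return path.startswith("\\\\")
```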

 

Is this multi-tier ? I don't think so. It's the plain stupid use of something which is inherent to file systems, and it doesn't even matter which file system that is. That's how PCs or Macs are made. Or the larger systems, which really can do it too (because the PCs desired it). It's all transparent.

 

No database.

 

Now more towards your layout ...

 

What *is* multi-tier is the separation of the "database means" (which is XXHighEnd) and the audio engine (which would be XXEngine3 for the insiders). So, in the XXHighEnd setup this is 100% separated. One application sets up the tracks to play, and the other plays them. No in-memory communication; everything goes over files (again).

That this - here too - is not physically implemented yet is mainly because everything is still in beta, and actually it's quite a large project.
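That file-based handoff between the two halves can be sketched as below; the playlist file name and JSON layout are invented purely for illustration, since the real exchange format isn't described here.

```python
import json

# Sketch of a file-based handoff between a playlist-building front end
# and a separate audio engine: no shared memory, only a file on disk.
# The file name and JSON layout are invented for illustration.

def write_playlist(path, tracks):
    """Front end: queue the chosen tracks for the engine via a file."""
    with open(path, "w") as f:
        json.dump({"tracks": tracks}, f)

def read_playlist(path):
    """Audio engine: pick up whatever the front end queued."""
    with open(path) as f:
        return json.load(f)["tracks"]
```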

 

What it takes, towards your layout, is making the Gallery routines available, so everybody (audio app developers) could work in their own way, present the data in their own way, etc. etc., and still be able to play each other's music. This, obviously, again requires separation on the other side, of the user interface and the audio engine, each of one's own choice.

 

Where those Gallery routines currently make use of a stupid folder structure, that obviously could be a database (with a DBMS) on that web server. This is needed when more speed is required, or when things grow really large. It is relatively simple to change it into that, BUT all the transparency (for me, the developer) will be gone. Things will not be the same anymore on my local side vs. the web server side. Not something I like.

The applications will all suddenly become more difficult, because all of them need to use the database, no matter that it is presented as a separate tier. It needs an interface, and interfaces are tough. Of course things could be done to present it all like a folder structure again (so the clients look at it that way), but the technical complexity adds up and up.

 

I hope that now you also see the actual reason why I didn't want to approach this with any DBMS means. It was not only a challenge; there's also some strategy behind it. The strategy being : what we use locally can work globally with the very same means.

 

The physical music tracks are not uploaded to a central place. They stay where they are (with me, with you). To play (stream) them is no big deal. To "connect" is something else, and to play with the highest quality (as if the track were locally stored) requires all the proxies and caching and buffering to sustain glitchless (and gapless) playback.

 

Peter

 

PS: I don't see how this is related to the Phasure NOS1, but I already thought about you playing music over my DAC. Hahaha

 

