
Sound Engineering Repository – Research and Guides



Examples for a few scenarios covered by the guidelines.

 

An example for point 2: The generalized term “damping factor” actually comes from relative gain equations. The generalized prediction from the equation works only for a particular topology within certain constraints. Whether a transducer remains stable at the impedance spikes decides whether the whole system is stable at a given damping factor. Aside from this, other topologies may have other relative gain equations. This leads us to a multitude of variables that decide the reliability of the overall prediction from “damping factor”. Instead, you are welcome to share articles just on gain equations in different topologies in one set of posts, and on transducer stability of different structures in another set of posts.

 

An example for point 4: The diffuse field target is a purely mathematical derivation from diffusion coefficients and other parameters (accounting for human ear variations to some extent), while the Harman target has preference elements mixed into it. Links to posts on how the former was arrived at are welcome; the latter is not. You have multiple other places to post and explore the latter, just not here.

 

Further Reading:

 

I understand that the structure of this thread can be daunting, and especially time-consuming for beginners. But trust me, it’ll be worth it, and over time you’ll get the hang of it. I have gone through the same stages of confusion and inability to correlate, but once I got enough experience, it all made sense and gave me a much cleaner view of the individual components.

 

Great things take great effort. The point of the thread is to aid your searches and instill curiosity, not to make you a lazy reader. Always try to venture a little outside of ideal properties and check how things are characterized in real-world scenarios. You’ll also get to understand why great things generally get expensive.

 

Another suggestion I’d give is not to form sweeping generalizations just because a particular name or word is being used. As you start exploring, you’ll come to know that everything has its own perks and pitfalls. “Implementation >> Buzzword”

 

In the case of YouTube videos, I highly recommend going through the comments sections. I have personally found them to act much like a classroom. A fellow viewer will have asked a pretty daft question that received pretty swank answers from either the channel or from fellow viewers. Of course, it depends on the type of videos you watch, and it can get as 4chanesque as it gets on some of them.

 

Some of the channels are quite interactive, and will probably find time to reply to your questions if you leave a comment. I would also recommend leaving them a thank-you note in case they helped you. Everyone feels happy when they are appreciated.

 

Awaiting approval and feedback from Chris before moving forward with the posts and links.

 

Kindly read and abide by the rules described on the first page of the thread. Violations will be deleted.


Considering that we are looking at the links in the digital audio chain, I feel it would be worthwhile to describe the chain before we explore each link in detail.

 

In general, the following components form the links of the chain, in the respective hierarchy.

 

Storage Drive

RAM

CPU + Cache

Interface from CPU to I/O

I/O controller

Transmission Media

Slave/DAC I/O controller

Digital to Analog Converter

Amplifier

Transducer

 

In addition, depending on your setup you might have extra components to "fix" certain issues, which we shall talk about later.

 

Depending upon your individual setup, not all of the above are necessarily separate blocks in your chain, but it helps to look at them as individual blocks in order to analyse them more deeply. Some of them can also be skipped; for example, you can have an I/O controller capable of Direct Memory Access on your RAM, eliminating the need for your CPU to be involved in that part of the chain.

 

It is important to know that each of these is connected to the next by a physical (or wireless) link. The reality of these systems is that each of them operates at a different speed. Hence we have well-defined protocols to ensure harmonious communication across these links. Typically we will also have buffers (FIFOs) inside each component to facilitate better synchronization.

 

Apart from these, there is software and pre-defined logic executed by microprocessors and microcontrollers across the chain.

 

Now let us see how a general flow happens. This is an initial post and hence I am only trying to give a general view. In-depth analysis of each of the links and protocols will follow soon.

 

Assume you have music stored as FLAC, played from your computer through USB to your DAC. The DAC interface (its I/O controller) expects data in the form of PCM (or DSD), which is essentially equivalent to your uncompressed WAV file. So the first stage of the process is to decode the FLAC file to PCM.
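
As a rough software-level illustration (not literally what your playback chain does internally), here is a minimal sketch of that decode step in Python, assuming the third-party soundfile library (a libsndfile wrapper) is installed and a hypothetical file named music.flac exists:

```python
# Minimal sketch: decode a FLAC file into raw PCM samples in memory.
# Assumes the third-party "soundfile" library (libsndfile wrapper) is installed
# and that a file named "music.flac" exists; both are illustrative assumptions.
import soundfile as sf

# sf.read returns the decoded PCM samples as an array plus the sample rate.
pcm_samples, sample_rate = sf.read("music.flac", dtype="int16")

print(f"Decoded {len(pcm_samples)} frames at {sample_rate} Hz")
# From here, a player hands these PCM frames to the OS audio stack,
# which eventually streams them over USB to the DAC.
```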

 

In an ideal system, the CPU would fetch the data directly from the storage drive. However, the speeds of the storage drive and the CPU are wildly different, and to keep the CPU from waiting too long, we buffer a big chunk of this decoded music into RAM. Depending upon your configuration, you can buffer the entire track to RAM, or buffer small chunks of data as you go. From the RAM, the CPU fetches the data and sends it to the USB root hub, which is connected through the PCH interface. What I mean here is that there are multiple interfaces present between the CPU and the USB port to facilitate efficient operation of both.
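
To make the buffering idea concrete in software terms, here is a purely illustrative Python sketch that reads a file from the storage drive into RAM in fixed-size chunks rather than all at once; the file name and chunk size are arbitrary assumptions, not anything a real player is required to use:

```python
# Illustrative sketch: buffer a file from disk into RAM in chunks,
# rather than loading it all at once. File name and chunk size are arbitrary.
CHUNK_SIZE = 64 * 1024  # 64 KiB per read; real players tune this to their needs

buffered_chunks = []
with open("music.flac", "rb") as f:
    while True:
        chunk = f.read(CHUNK_SIZE)     # each read pulls the next slice off the drive
        if not chunk:
            break
        buffered_chunks.append(chunk)  # held in RAM until the consumer needs it

total = sum(len(c) for c in buffered_chunks)
print(f"Buffered {total} bytes in {len(buffered_chunks)} chunks")
```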

 

If you're confused, I'd recommend opening Device Manager if you're on Windows (or the equivalent on Linux/Mac). You will be able to see the names I'm mentioning. I would also recommend going through the concept of Direct Memory Access if you can: https://en.wikipedia.org/wiki/Direct_memory_access . If not, don't worry, I'll explain it in a future post.

 

From the USB root hub it goes to the USB controller, which has its own tiny buffer memory. Once this buffer fills, the data is sent through your USB cable to the DAC, where it is received and interpreted. This data is sent with a particular structure, which I will explain in detail when we discuss protocols. When a signal goes out of a system, it moves beyond the internal structure and typically falls under the domain of computer networks.

https://en.wikipedia.org/wiki/Computer_network
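
As a toy model of that "fill a small buffer, then send it all at once" behaviour, here is a Python sketch where a plain list stands in for the controller's FIFO; the buffer size and the send() stand-in are invented purely for illustration:

```python
# Toy model of "fill a small buffer, then flush the whole thing downstream".
# The buffer size and the send() stand-in are invented for illustration.
BUFFER_SIZE = 8

fifo = []

def send(packet):
    # Stand-in for "push these bytes onto the USB cable".
    print("sending", packet)

def controller_receive(byte):
    fifo.append(byte)
    if len(fifo) >= BUFFER_SIZE:   # buffer is full: flush it downstream
        send(bytes(fifo))
        fifo.clear()

for b in range(20):                # feed 20 dummy bytes into the controller
    controller_receive(b)
# The last few bytes remain buffered until the FIFO fills again.
```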

 

This is because the operating conditions, the current and voltage required in simple terms, start to vary. This is primarily driven by the fact that we are now dealing with significant external influences and a significant possibility of transmission loss. To do this translation and modification, there is an entire stack of different blocks that help convert the data fed to the controller into a form that is feasible for transport. Interested readers can look at the popular generalized model, the OSI model: https://en.wikipedia.org/wiki/OSI_model .
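
To hint at what such a stack does in software terms, here is a toy Python sketch that wraps a payload with a length header and a CRC32 checksum before "transmission" and verifies it on the other side. The frame format is completely made up for illustration; it is not the actual USB protocol or any specific OSI layer:

```python
# Toy framing sketch: wrap a payload with a length header and a checksum
# before sending, and verify it on the receiving side. The frame format here
# is completely made up; it is NOT the real USB (or any other) protocol.
import struct
import zlib

def frame(payload: bytes) -> bytes:
    crc = zlib.crc32(payload)
    # 4-byte length + payload + 4-byte CRC, all big-endian.
    return struct.pack(">I", len(payload)) + payload + struct.pack(">I", crc)

def unframe(data: bytes) -> bytes:
    length = struct.unpack(">I", data[:4])[0]
    payload = data[4:4 + length]
    crc = struct.unpack(">I", data[4 + length:8 + length])[0]
    if zlib.crc32(payload) != crc:
        raise ValueError("corrupted frame")  # transmission loss detected
    return payload

wire = frame(b"PCM samples go here")
print(unframe(wire))
```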

 

One important thing to note here is that most of the data going out of a PC (including but not limited to USB signals) is sent as high-frequency pulses (on the order of MHz or more), typically over a serial transmission line. In this regime, a conducting wire starts to exhibit properties that were negligible at low-frequency operation, and these factors determine how the cable is supposed to be designed. One of them is characteristic impedance. I recommend you watch this video to understand what it means and why it is important:

 

Now, assuming all goes well, the data reaches the slave, which also has the full network stack to decode the received data. This data is fed to your DAC, which reconstructs the analog waveform and feeds it to your amplifier, which then drives your headphones.

 

That's it for this post. The next post is coming soon and will be about DMA.

 

Thanks and Regards,

Manuel Jenkin.

 

Since this is my first post, feedback via Personal Message is appreciated.

 

Kindly read and abide by the rules described on the first page of the thread. Violations will be deleted.


My apologies for deviating from the earlier proposed order. I feel that an introduction to processors is necessary before diving into the rest of the material. DMA will follow shortly after the introduction to processors.

 

In this post and the next few that follow, I will only describe the general design of the building blocks of a processor, with mentions of some non-ideal behavior of transistors, to minimize abstraction. Transistors are the building blocks of modern digital computers. An in-depth look into the anatomy of different types of devices will come later when we proceed to amplifiers, power supplies and transistor properties in general: Bipolar Junction Transistors, Field Effect Transistors, vacuum tubes (valves), thyristors and more, for those curious.

 

We will stick to the application of the transistor as a switch for the most part here (other applications include, but are not limited to, the transistor as an amplifier, rectifier, etc.). For now, know that it is a physical device with a finite response speed/time and non-linear behaviors, both electrical and thermal, and we have to make the necessary compensations to be able to build usable machines out of them. A single transistor can be as small as only a few atoms thick. At this scale, most of what we know from classical mechanics fails, and their behavior is better described by quantum mechanics and a deeper understanding of their electrical/chemical interactions. This is the reason why they have such properties.

 

I will link a few web pages; read them at your own pace if you’re curious enough. I would also recommend starting from diodes before moving to transistors.

 

https://en.wikipedia.org/wiki/Semiconductor

https://en.wikipedia.org/wiki/Diode

https://en.wikipedia.org/wiki/Transistor

https://en.wikipedia.org/wiki/Bipolar_junction_transistor

https://www.electronics-tutorials.ws/transistor/tran_1.html

https://en.wikipedia.org/wiki/Field-effect_transistor

https://en.wikipedia.org/wiki/Vacuum_tube

 

The following post shows the levels of abstraction I am describing.

http://vlsi-design-engineers.blogspot.com/2015/07/levels-of-abstraction.html

 

So, let’s begin. With regard to computing, a processor, put simply, is a machine that processes something. You typically give it input and commands, and it is designed to give you a usable output. Seems simple, doesn’t it? Well, the important thing here is: how do you realize this physically?

 

There are many ways to go about this, but we will restrict our talk to digital processors (if you are curious you can search topics on analog computers, quantum computers, etc.). A digital processor, put simply, has its inputs, commands and outputs represented as digits, the most common representation being binary. So how can we go about realizing this physically?

 

There are many ways. One is to have different electrical voltage levels describing the two binary states. Another is to have optical signals describing them. There are plenty of other possibilities. In any case, we need hardware to realize this physically, and the most common way is using transistors. A modern processor is made up of billions of transistors forming individual blocks and chains that help process the information.

 

Now, the functionality of the transistor that is most exploited inside a processor is its ability to act as a switch. As you know, a switch has two states, ON and OFF. These can be used to represent the two basic units of our digital world, 0 and 1. It is important to note that we are using electrical signals (or their equivalent) of particular strengths to represent these 0s and 1s, in physical circuits that have their own non-ideal properties, which means there are certain conditions that need to be fulfilled before a transistor can be used reliably as a switch. This imposes bounds on a lot of things, like the maximum clock speed that can be attained, reliable operating temperatures, power supply requirements, etc. If these are not met, we will encounter issues (curious readers can start by searching for metastability, glitches, hazards, etc.).
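
To make "particular strengths" and "certain conditions" a little more concrete, here is a tiny Python sketch that maps a measured voltage to a logic level using input threshold bands; the specific numbers are illustrative, loosely in the spirit of 5 V TTL-style levels, and are not taken from any datasheet:

```python
# Illustrative sketch: map a measured voltage to a logic level using threshold
# bands. The numbers below are loosely TTL-flavoured examples, not datasheet values.
V_IL_MAX = 0.8   # anything at or below this is read as logic 0
V_IH_MIN = 2.0   # anything at or above this is read as logic 1

def read_logic_level(voltage: float):
    if voltage <= V_IL_MAX:
        return 0
    if voltage >= V_IH_MIN:
        return 1
    return None  # undefined region: noise, a glitch, or a signal mid-transition

for v in (0.2, 1.4, 3.3, 4.9):
    print(v, "->", read_logic_level(v))
```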

 

Let us begin by assuming an ideal transistor without any of these issues and see how we can realize the device as a switch and a NOT gate. A gate is perhaps the basic building block of a computer, just as the cell is the building block of the human body. A gate may be made of one or more transistors and passive components like resistors, capacitors and inductors. A NOT gate means the output is the complement of the input.

 

Let us assume we have two levels: 5 V for 1 and 0 V for 0.

 

Look at the figure in the attachment. I have described an ideal switch with zero resistance when it is ON. The input is the action on the switch, whether it is turned ON or OFF, which we denote as 1 and 0 here. The output is tapped across the switch. When the switch is in the ON (1) position, the supply voltage drops fully across the resistor and the voltage at our tapped point is 0 V (0). When it is in the OFF position, the output voltage across the switch is 5 V since it is open.
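
Here is a minimal behavioural sketch of that switch-plus-resistor arrangement, using the same 5 V / 0 V convention; it only encodes the ideal behaviour described above:

```python
# Idealized model of the switch-plus-resistor inverter described above.
# Input: switch state (1 = ON/closed, 0 = OFF/open).
# Output: the voltage tapped across the switch.
V_SUPPLY = 5.0  # volts, matching the 5 V = 1 / 0 V = 0 convention above

def ideal_switch_inverter(switch_on: int) -> float:
    if switch_on:            # switch closed: node pulled down to 0 V (logic 0)
        return 0.0
    return V_SUPPLY          # switch open: node sits at the supply rail (logic 1)

for state in (0, 1):
    print(f"input={state} -> output={ideal_switch_inverter(state)} V")
```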

 

Now, this is an ideal switch. If you look carefully, we have the input as a mechanical action and the output as an electrical one. What if we want to chain multiple logic stages one after another? We would need a single, common type of control signal. This is where transistors come in: we can replace this ideal switch with a transistor operating within certain bounds.

https://learn.sparkfun.com/tutorials/transistors/all

 

Let me explain this in the context of an NPN Bipolar Junction Transistor (for PNP, it is mostly just the complement/flip). I recommend you read about the non-linear properties and the principles of BJT operation at your own pace. Cutting to the chase, a fabricated BJT that you can purchase comes with three terminals: Collector, Emitter and Base. In an ideal BJT, when the base-to-emitter voltage is above a specific threshold, typically around 0.7 V (it varies with the type of semiconductor), the path from collector to emitter behaves as a conductor. If not, it behaves as an open circuit. Hence, by varying the input voltage at the base we can make it act as a switch.

 

As said before, things are never ideal, and in this case there will be a finite voltage drop across collector to emitter, typically about 0.05-0.2 V (again, it varies with the semiconductor used). We compensate for this further through design improvements to the circuit. The above is with reference to Resistor-Transistor Logic (RTL). And if you have paid attention, you will have realized that the voltage required to turn the transistor on is only about 0.7 V, but the output voltage from one stage comes from the power supply (which is typically 3.3 or 5 V). We have to make further changes in the circuit to eliminate these issues as well. I’ll leave it up to you to explore such designs through your own effort.
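
As a rough numerical companion to the paragraph above, here is a behavioural sketch of an RTL-style inverter that includes the two non-idealities just mentioned: the roughly 0.7 V base-emitter turn-on threshold and a collector-emitter saturation voltage of about 0.1 V. These are typical textbook numbers, not values from any specific device:

```python
# Rough behavioural model of an RTL-style inverter, including the two
# non-idealities mentioned above: the ~0.7 V base-emitter turn-on threshold
# and a collector-emitter saturation voltage of roughly 0.1 V.
# These are typical textbook numbers, not values from any specific datasheet.
V_SUPPLY = 5.0
V_BE_ON = 0.7     # transistor turns on when the base voltage exceeds this
V_CE_SAT = 0.1    # output does not quite reach 0 V when the transistor is on

def rtl_inverter(v_in: float) -> float:
    if v_in > V_BE_ON:
        return V_CE_SAT   # transistor conducts: output pulled near (not to) 0 V
    return V_SUPPLY       # transistor off: output pulled up to the supply rail

for v in (0.0, 0.5, 0.9, 5.0):
    print(f"v_in={v} V -> v_out={rtl_inverter(v)} V")
```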

 

Links for reference. Do check what their perks and pitfalls are and where each is best suited.

https://en.wikipedia.org/wiki/Resistor%E2%80%93transistor_logic

https://en.wikipedia.org/wiki/Diode%E2%80%93transistor_logic

https://en.wikipedia.org/wiki/Transistor%E2%80%93transistor_logic

https://www.allaboutcircuits.com/textbook/digital/chpt-3/cmos-gate-circuitry/

http://vlsi-design-engineers.blogspot.com/2015/07/cmos-logic-families.html

http://vlsi-design-engineers.blogspot.com/2015/07/diode-logic.html

 

A “NOT” gate is one of the types of gates used, and it has a single input. We also have other gates, like AND, OR, NAND and NOR, which have two inputs (or more). These help us build blocks that are used for addition and subtraction and, in well-designed cascades, blocks that can do multiplication and other complex operations. Their input-output relations are described in a table called a truth table. The table can be made accounting for only ideal behavior and the states 0 and 1, or it can account for their aberrations with additional states like Z (high impedance).
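
Here is a small Python sketch that prints the truth tables of the basic gates named above, treating each gate purely as its ideal Boolean function (two states only, no Z state or timing):

```python
# Ideal (zero-delay, two-state) truth tables for the basic gates named above.
from itertools import product

GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
    "XOR":  lambda a, b: a ^ b,
}

def not_gate(a):           # the single-input gate discussed earlier
    return 1 - a

for name, fn in GATES.items():
    print(name)
    for a, b in product((0, 1), repeat=2):
        print(f"  {a} {b} -> {fn(a, b)}")
```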

 

Kindly read about each of these gates, their designs and their behavior in the links below.

http://www.ee.surrey.ac.uk/Projects/Labview/gatesfunc/

https://www.electronics-tutorials.ws/logic/logic_9.html

 

I will proceed to show you how these gates can be used to do mathematical operations. But before we get there, let me show you how a binary computer counts (and how it is different from how we count).

 

We count in what we call the decimal system. We have 10 digits per place, 0-9, and hence each index holds a value of 10^x. So the value of 543 is 5x10^2 + 4x10^1 + 3x10^0 (it starts from 10^0, not 10^1). A binary computer, since it has only two valid states, uses the binary number system. We have a choice of 2 digits per place, and hence each index holds a value of 2^x. So 1011 in the binary system holds a value of 1x2^0 + 1x2^1 + 0x2^2 + 1x2^3 = 11. 543 is an invalid entry in the binary system since each place can only be 0 or 1; the binary representation of 543 is 1000011111.
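
Here is a quick sketch of that place-value idea, reproducing the 1011 = 11 and 543 = 1000011111 examples above:

```python
# Place-value arithmetic: evaluate a binary string, and convert a decimal
# number back to binary, reproducing the examples above.
def binary_to_decimal(bits: str) -> int:
    value = 0
    for i, bit in enumerate(reversed(bits)):   # rightmost digit is 2^0
        value += int(bit) * (2 ** i)
    return value

def decimal_to_binary(n: int) -> str:
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits               # remainder gives the next bit
        n //= 2
    return bits or "0"

print(binary_to_decimal("1011"))    # 11
print(decimal_to_binary(543))       # 1000011111
```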

 

When we add numbers in the binary system, we carry over to the next place once the sum exceeds 1, as opposed to 9 in the decimal system. 01 + 01 = 10 in binary. The same principle holds for subtraction as well (with borrows).

 

This link describes how an adder (summer) circuit is built.

https://en.wikipedia.org/wiki/Adder_(electronics)

 

When we want to do this for more than one place, we need logic that calculates the carry and feeds it into the next stage. So we have three inputs at this stage: two operand bits plus the carry-in. The logic is described in the blog below, and a small sketch follows the link.

https://www.elprocus.com/half-adder-and-full-adder/
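
Expressed in gate terms (XOR for the sum bit, AND/OR for the carry), a half adder, a full adder and a simple ripple-carry chain look like the sketch below; this mirrors the general logic described in the linked pages rather than any particular circuit:

```python
# Half adder and full adder expressed with the gate logic described above:
# XOR produces the sum bit, AND/OR produce the carry bit.
def half_adder(a, b):
    return a ^ b, a & b                    # (sum, carry)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2                     # (sum, carry-out)

# Ripple-carry addition: each stage feeds its carry into the next,
# so wide numbers take longer to settle.
def ripple_add(a_bits, b_bits):
    carry, result = 0, []
    for a, b in zip(reversed(a_bits), reversed(b_bits)):
        s, carry = full_adder(a, b, carry)
        result.insert(0, s)
    return [carry] + result

print(ripple_add([0, 1], [0, 1]))          # 01 + 01 -> [0, 1, 0], i.e. 10
```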

 

All of the above calculate the digits one after another. As stated, transistors take a finite time to process and output information. When the number of digits is very high, such a circuit may take a long time to produce results. Hence, we try to parallelize the operation. One example is the carry-lookahead adder; the link is below.

https://en.wikipedia.org/wiki/Carry-lookahead_adder
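
The lookahead trick can be sketched as follows: compute per-bit "generate" (a AND b) and "propagate" (a XOR b) signals first, then derive every carry from those instead of waiting for the ripple. The loop below computes the carries sequentially for readability; in hardware they are produced by parallel logic:

```python
# Carry-lookahead sketch: per-bit generate/propagate signals let every carry
# be computed from the inputs directly, instead of rippling stage by stage.
# (The loop below is for readability; in hardware these are parallel gates.)
def carry_lookahead_add(a_bits, b_bits, carry_in=0):
    a = list(reversed(a_bits))             # index 0 = least significant bit
    b = list(reversed(b_bits))
    g = [x & y for x, y in zip(a, b)]      # generate: this bit creates a carry
    p = [x ^ y for x, y in zip(a, b)]      # propagate: this bit passes a carry along

    carries = [carry_in]
    for i in range(len(a)):
        carries.append(g[i] | (p[i] & carries[i]))

    sums = [p[i] ^ carries[i] for i in range(len(a))]
    return [carries[-1]] + list(reversed(sums))   # carry-out followed by sum bits

print(carry_lookahead_add([1, 0, 1, 1], [0, 1, 1, 0]))   # 1011 + 0110 = 10001
```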

 

When it comes to modern computers, we have more complex things, like Speculative execution which shall be covered in the next few posts. Interested people can start reading right away in the below link.

https://en.wikipedia.org/wiki/Speculative_execution

 

Now, there is no free lunch. A carry-lookahead adder uses more transistors and hence consumes more power for an equivalent type of transistor, and while a wide operation completes faster, a simple operation might take more time. The complexity changes. This is pretty much a universal rule wherever we go, from hardware to software, and efficiency/speed are characterized by curves and orders.

https://en.wikipedia.org/wiki/Time_complexity

 

The above circuits show a valid output only while we keep feeding the input. What if we can’t always have an input present? This is essential to the concept of storage, and it is achieved through another arrangement of gates known as flip-flops. There are different types of flip-flops, each of which has its own characteristics, but one of the defining properties that differentiates flip-flops from ordinary gates is their ability to store/retain state. Flip-flops are used in solid-state storage devices and other digital storage elements. (Of course, we also have optical storage media, which we will talk about later.)

 

This also introduces the concept of clocks. In the earlier cases, we had circuits where the input was constantly present. Here, however, we have to distinguish between a valid and an invalid signal, since we may not be constantly feeding an input. Since things have finite propagation time, it is also necessary to keep a reference time. Clocks and supporting logic are used to synchronize the different stages and differentiate between valid and invalid inputs, aside from other functionalities.

 

Kindly go through flip-flops in the link below. Also try to read about latches before moving on to flip-flops. A small behavioural sketch follows the link.

https://en.wikipedia.org/wiki/Flip-flop_(electronics)
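
As a behavioural toy model (not a gate-level one), here is a positive-edge-triggered D flip-flop in Python: the stored value only updates when the clock goes from 0 to 1, which is the "retain state until told otherwise" property that separates flip-flops from plain gates:

```python
# Behavioural toy model of a positive-edge-triggered D flip-flop:
# the stored bit only changes when the clock goes from 0 to 1.
class DFlipFlop:
    def __init__(self):
        self.q = 0              # stored state (the "memory")
        self._prev_clk = 0

    def tick(self, clk, d):
        if self._prev_clk == 0 and clk == 1:   # rising clock edge
            self.q = d                          # capture the input
        self._prev_clk = clk
        return self.q

ff = DFlipFlop()
#             clk, d
for clk, d in [(0, 1), (1, 1), (0, 0), (1, 0), (0, 1)]:
    print(f"clk={clk} d={d} -> q={ff.tick(clk, d)}")
```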

 

Further Reading: R-S flip-flops (S-R latches) can be used as debouncing circuits for keyboard switches.

http://jayanth911.blogspot.com/2013/11/switch-debouncer-using-sr-latch.html

 

On the topic of timing and errors, I’d like to introduce one concept known as a glitch. In the following case we will talk about a glitch happening due to propagation delay. These kinds of examples reinforce the need for a well-structured synchronization mechanism (the need for clocks).

 

http://www.designcabana.com/knowledge/electrical/electronics/digital/propag/

 

Once you go through the above, kindly read about sequential circuits (synchronous and asynchronous): counters, finite state machines, Mealy and Moore machines, etc., which are all typically built using these flip-flops and latches. A small Moore-machine sketch follows the links below.

 

https://www.sciencedirect.com/topics/computer-science/synchronous-sequential-circuit

http://www.ee.surrey.ac.uk/Projects/CAL/seq-switching/synchronous_and_asynchronous_cir.htm

https://www.geeksforgeeks.org/difference-between-synchronous-and-asynchronous-sequential-circuits/

https://en.wikipedia.org/wiki/Counter_(digital)

https://www.geeksforgeeks.org/counters-in-digital-logic/

https://en.wikipedia.org/wiki/Moore_machine

https://en.wikipedia.org/wiki/Mealy_machine
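
As promised above, here is a small Moore-machine-flavoured sketch: a 2-bit counter whose output depends only on its current state and which advances once per clock tick. The width (2 bits) and the wrap-around behaviour are illustrative choices:

```python
# Moore-machine-flavoured sketch: a 2-bit counter. The output depends only on
# the current state (the Moore property), and the state advances once per clock
# tick. The width (2 bits) and wrap-around behaviour are illustrative choices.
class TwoBitCounter:
    def __init__(self):
        self.state = 0          # held by flip-flops in a real circuit

    def output(self):
        return format(self.state, "02b")   # output is a function of state only

    def clock_tick(self):
        self.state = (self.state + 1) % 4  # next-state logic: increment, wrap at 4

counter = TwoBitCounter()
for _ in range(6):
    print(counter.output())
    counter.clock_tick()
# Prints 00, 01, 10, 11, 00, 01 (one count per clock edge).
```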

 

I believe I have given a decent enough introduction to the building blocks of computers, at least in their working principles, if not in their anatomy. To be honest, these are concepts taught over many months in EE courses, so please take it slow and don’t get frustrated if something becomes incomprehensible. Everyone goes through that stage, but over time things do click. Also, it is not necessary to understand these completely to comprehend the next posts; they only become easier if you understand the basics first. I am trying to do my part to give interested people a lead on how to go about it.

 

I will follow this up with an overview of what a basic microprocessor/microcontroller looks like and how it operates, and then in the following post I shall describe the functioning of a modern processor (or rather, how it deviates from the idea we have of a basic microprocessor).

 

Thanks, and Regards,

Manuel Jenkin Jerome.

IMG_20200227_162449__01.jpg

