H: Does this AC to DC + breadboard setup have the ability to kill/severely hurt me?
I'm currently building a PC peripheral that uses an AC to DC converter to power a set of DC fans. I'm not a trained EE or electrician, so I want to make sure I'm not putting myself at risk with this setup.
Essentially, I have the AC to DC converter pictured below with a screw terminal block on the DC end plugged into my breadboard power rails:
Assuming I'm using a standard 120V US outlet, am I putting myself at serious risk? I know it doesn't take much current to kill someone, so I get really concerned with this exposed wiring when I see this can output 3A at 12V.
AI: These can be good. They're usually good.
You don't need to guess about quality.
You only need to consider the source, and look at the markings to see if it has the mark of an NRTL (Nationally Recognized Testing Lab, a list curated by USA OSHA which many countries rely on). The most common NRTL marks you encounter in North America are
UL
CSA
ETL
There's really no problem finding listed wall-wart power supplies, as these things are absolute commodities made by the billions, and the vast majority of units are sold to consumer electronics manufacturers like Philips or Linksys who need listed supplies. Listed ones are so common I'm surprised to see one that is not listed. Maybe it's the multi-voltage switch that is the problem. Whatever, get a single-voltage unit.
There are many, many ways to shortcut the design so one of the DC output pins is energized at AC line voltage or AC line neutral (which is line voltage if certain malfunctions occur). However, UL won't allow that on a listed unit.
And since the vast majority of wall warts are approved, it's understandable to assume they all are.
Not this one, though.
All NRTL marks are conspicuous in their absence from this one. What you see is the marks that are universally faked in the North American market, because there are no consequences for doing so when your boots are on a faraway territory of a nation that does not cooperate with mark enforcement.
FCC. That is a self-certification that it complies with radio emissions rules, but self-certification is meaningless from overseas junk sellers, because the US agency FCC does not have the public funds to go on overseas adventures into uncooperative nations to defend their mark. *
CE. This is a European self-certification of compliance with EU safety rules. But the EU also won't spend public funds chasing miscreants outside the EU, so the mark has no force outside the EU proper.
RoHS. Ditto for EU electronic waste rules, e.g. use of lead-free solder.
CCC. China's competitor to CE, that China doesn't enforce on goods destined for export.
Reputable suppliers help.
It is rare to see falsification of NRTL marks in the consumer space, because the Federal law that enables NRTLs requires them to defend their mark aggressively through legal action. However, it is not impossible for someone to sneak something out before UL notices, and that is where it helps to use reputable suppliers who have a chain of custody, i.e. the item was shipped from Philips' warehouse to Home Depot's or Mouser's warehouse. Reputable bricks-and-mortar stores almost always carry reliable product, as they have a good chain of custody and are heavily scrutinized by consumer protection agencies.
Direct-mail sites such as eBay or Amazon generally involve 3rd party sellers, and there is no chain of custody whatsoever. I don't see much risk of forged NRTL marks in the consumer space, but if it happens anywhere, it'll happen there.
* If I were the FCC, I would go after wholesalers and drop-shippers of this garbage, but I am not the FCC.
|
H: Will reversing the polarity of a K thermocouple damage it?
I have a K thermocouple connected to a MAX31855KASA+T IC supplied by 3.3V. I mistakenly reversed the polarity and was taking readings from it for about 1 hour. I am noticing the instrument is now somewhat inaccurate: freezing water was registering 10 °F and boiling water about 180 °F. Did the polarity reversal damage the instrument? I have replaced the IC but am still noticing the inaccuracy. Isn't this just measuring the voltage across two dissimilar metals? I imagine this would be pretty robust.
AI: No, the thermocouple cannot be damaged by reversing it.
More likely you have an error in your cold junction compensation or extension lead wire polarity.
All thermocouple wire in the sensor circuit must have the correct polarity- if there is a piece that is reversed you’ll get an error approximately double the temperature difference between the ends of the incorrect piece. Similarly, any temperature difference between the IC chip and the junctions between K wire and copper will show up as approximately 1:1 error.
|
H: Which matrix index should be used when Y (admittance) is shown to compose a circuit segment mapped to an ABCD matrix? Y21? Other?
I would like to cascade a serial capacitor with a shunt inductor to form an L-match network by multiplying two ABCD matrices and converting the result to S-parameters that represent the composite network. As a starting point, the capacitor's S-parameters were converted to an ABCD matrix using transforms from here. Now I need to create an ABCD matrix for the shunt inductor.
To avoid the X/Y problem, ultimately this is my question, but please also address the questions at the bottom to help my understanding of the situation: How do you transform an ABCD matrix to form a shunt so I can multiply it (chain) with another ABCD matrix? (related questions below)
Here is what I've discovered so far:
This video shows how to create an ABCD matrix for various circuit segments. For my purpose, this frame shows the ABCD matrix for the shunt component (the inductor, in my case):
The Y parameters for the 120 nH inductor were converted to Y from series-modeled S-parameters at 100MHz as provided by Coilcraft.
3x Serial 120nH Coilcraft Inductors
Update: In the original post I had posted s-parameters for a composite of 3 serial inductors. Using these numbers, @ThePhoton expertly composed his answer but noted that the numbers in that original post did not match the .s2p files from the Vendor's website.
These are the original numbers @ThePhoton used for his calculations:
These are the inductor's Y parameters at 100MHz in mag-angle format:
Y11: [0.00440205824472129, -86.149908981413]
Y21: [0.00440205824186984, 93.8500909570987]
Y12: [0.00440205824186984, 93.8500909570987]
Y22: [0.00440205824472129, -86.149908981413]
These are the inductor's original S parameters at 100MHz in mag-angle format:
S11: [0.893392359379373, 23.1032289305104]
S21: [0.393276519863971, -63.0466801187767]
S12: [0.393276519863971, -63.0466801187767]
S22: [0.893392359379373, 23.1032289305104]
1x 0402DC-121 coilcraft 120nH Inductor
These are the inductor's Y parameters at 100MHz in mag-angle format for a single component. (Note that the values above in the 3x-serial version were used by @ThePhoton to calculate his numbers):
Y11: [0.013083525177408, -86.1734582308486]
Y21: [0.0130835251738178, 93.8265417327069]
Y12: [0.0130835251738178, 93.8265417327069]
Y22: [0.013083525177408, -86.1734582308486]
These are the 1x inductor's original S parameters at 100MHz in mag-angle format:
S11: [0.588600478, 50.2086462]
S21: [0.770096917, -35.9648121]
S12: [0.770096917, -35.9648121]
S22: [0.588600478, 50.2086462]
Questions:
Given a two-port Y-parameter matrix representing the inductor, which Y matrix index would I use for the "C" value of the ABCD matrix pictured above?
Intuition says to use \$C=Y_{21}\$, but I think that means the remaining indexes (11, 12, 22) are lost information, so, for example, the \$Y_{11}\$ input admittance behavior would not be present in the composite circuit after the two ABCD matrices that represent the serial capacitor and shunt inductor are multiplied. Is this a correct interpretation?
If so, is there a more accurate way to transform the 2-port 2x2 Y matrix into a shunt ABCD matrix where no information is lost?
AI: The shortest route to a solution is probably to convert the S-parameters you have for the series-connected inductor to ABCD parameters. Then, since the ABCD parameters have a very simple structure for a simple series-connected or shunt-connected device, you can convert to the ABCD-parameters for the shunt-connected device.
To convert from S-parameters to ABCD parameters you can use the formula I found here. For the case where the reference impedance is 50 ohms on both sides of the 2-port,
$$A = \frac{(1+S_{11})(1-S_{22})+S_{21}S_{12}}{2S_{21}}$$
$$B = 50\frac{(1+S_{11})(1+S_{22})-S_{21}S_{12}}{2S_{21}}$$
$$C = \frac{1}{50}\frac{(1-S_{11})(1-S_{22})-S_{21}S_{12}}{2S_{21}}$$
$$D = \frac{(1-S_{11})(1+S_{22})+S_{21}S_{12}}{2S_{21}}$$
Using these formulas, for your device I calculated (after rounding appropriately)
$$\begin{bmatrix} A & B \\
C & D \end{bmatrix} =
\begin{bmatrix} 1 & 15.253 + j226.65 \\ 0 & 1 \end{bmatrix}.$$
This is consistent with the expectation that for a single series element we should have
$$\begin{bmatrix} A & B \\
C & D \end{bmatrix} =
\begin{bmatrix} 1 & Z \\ 0 & 1 \end{bmatrix}.$$
(The reason I did all this math myself rather than just explain it in rough terms was to make sure that the numbers you had would actually give an ABCD matrix in this form—it wouldn't have surprised me if, for example, the vendor measurements captured some parasitic shunt elements in their test fixture and produced an ABCD matrix with a non-zero C value.)
Now you can convert this to the ABCD matrix for the same device in shunt configuration
$$\begin{bmatrix} A' & B' \\
C' & D' \end{bmatrix} =
\begin{bmatrix} 1 & 0 \\ Y & 1 \end{bmatrix}.$$
Using \$Y=\frac{1}{Z}\$, this gives:
$$\begin{bmatrix} A' & B' \\
C' & D' \end{bmatrix} =
\begin{bmatrix} 1 & 0 \\ 0.0002956-j0.0004392 & 1 \end{bmatrix}.$$
Note: This result (\$Y=0.0002956-j0.0004392\$) is pretty far from the value for an ideal 120 nH inductor, which would have \$Y=-0.0133j\$. I'm not sure exactly what causes the discrepancy, given the charts in the component datasheet indicate this inductor should be well below resonance at 100 MHz.
Solution: You don't seem to be using the right data for the 120 nH part. If I look at the vendor provided S2P file for the 120 nH part, for 100.93 MHz it gives
$$S = \begin{bmatrix} 0.3767+j0.4523 & 0.6233-j0.4523 \\
0.6233-j0.4523 & 0.3767+j0.4523 \end{bmatrix}$$
and if I calculate from these values I get \$Z\approx 76.3\ \Omega\$, which is pretty close to the \$75.4\ \Omega\$ we'd expect for an ideal 120 nH inductor at 100 MHz.
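As a numerical cross-check, here is a minimal sketch (my own code, not from the original answer) of the same S-to-ABCD conversion, assuming a 50 ohm reference impedance on both ports and using the .s2p values quoted above:

```python
def s_to_abcd(S11, S12, S21, S22, Z0=50.0):
    """2-port S-parameters to ABCD, equal real reference impedance Z0 on both ports."""
    A = ((1 + S11) * (1 - S22) + S12 * S21) / (2 * S21)
    B = Z0 * ((1 + S11) * (1 + S22) - S12 * S21) / (2 * S21)
    C = (1 / Z0) * ((1 - S11) * (1 - S22) - S12 * S21) / (2 * S21)
    D = ((1 - S11) * (1 + S22) + S12 * S21) / (2 * S21)
    return A, B, C, D

# Vendor .s2p values near 100 MHz quoted in the answer above
S11 = S22 = 0.3767 + 0.4523j
S21 = S12 = 0.6233 - 0.4523j

A, B, C, D = s_to_abcd(S11, S12, S21, S22)
print(abs(B))   # ~76 ohms: B is the series impedance Z of the series-connected inductor
print(1 / B)    # Y = 1/Z, which goes in the C position of the shunt ABCD matrix [[1, 0], [Y, 1]]
```

Running this gives a magnitude of about 76 ohms for B, matching the value quoted above.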
|
H: Mixing light from multiple leds
I'm trying to design a device that produces visible light with a somewhat controllable spectrum. My goal is to measure the human eye cone cell responses by switching between two different spectra and finding spectra that are hard to distinguish by eye, so an RGB led won't be sufficient for this purpose.
My first draft PCB has 13 LEDs in different wavelengths set in a circle with a diameter of 20 mm, but I can change this arrangement in whatever way is desirable. The LEDs I plan to use come in 2 mm x 2 mm packages, so they could also be fit more tightly if that's useful for the mixing (see image below). Their "typical viewing angle" is between 145° and 170°. (LED datasheet)
What kind of optics could I use to produce a well mixed light out of these? I would not need more than a (say) 2 cm x 2 cm diffuse surface with well mixed light. The output power need not be large—ideally it would be comfortable to look at from a distance of tens of centimeters—and it is not a huge problem if a fair amount of the light is absorbed.
I have looked into light pipes in the Digikey catalog, and while I have a feeling they might be part of the solution, they generally seem to be designed for leading the light from a single led to a panel. I think a light pipe should mix light shot into it from a sufficient angle. I also have explored all kinds of weird ideas like installing LED reflectors upside down and having diffuse lenses in the other end. Another idea is to put the LEDs in a separate, smaller board that fits inside a reflector. In the end, it's clear to me that I don't know enough about optics to understand how to best approach this problem.
Here's a very early draft design of a Raspberry Pi hat (65 mm x 56 mm) with the 13 LEDs in a d=20 mm circle.
AI: In photometry, the standard instrument for doing this is the Integrating Sphere.
The concept is to bounce light around inside a matt white enclosure before allowing it to escape, and arranging the light sources and exit hole so that you can't see any light source directly from the output.
How good or big does your enclosure need to be? Probably not as good as you would need in a photometer, just something better than you are getting by just putting the LEDs close together.
As a first attempt, try a thin translucent sheet of white paper, or a thick sheet of tracing paper, spaced off from the LEDs, to diffuse the light; it may be sufficient.
If you are fabricating the board the LEDs are on, then use white silk screen around the LEDs so that the board surface is white rather than dark green. Light scattered back from the paper will illuminate the white areas, which mixed light will then bounce forward again. Maybe make matt white walls around the LEDs to complete the enclosure. If you want to try with existing boards, use Tippex or white paint to paint the board around the LEDs.
A better approximation to the integrating sphere will be to have the LEDs facing away from the viewer, and bouncing off the back wall of the box.
|
H: Is it safe to keep PS_ON grounded?
I want to disable the 'soft power off' feature of an ATX power supply, so I'm thinking of connecting PS_ON to GND with tweezers and keeping it that way. But when the motherboard wants to shut down and sends a high-level signal to PS_ON, will the current be too large and damage something?
AI: The ATX specification says that the #PS_ON signal should be pulled up inside the power supply to the 5V standby power rail. So it's only necessary for the motherboard to have an open drain or open collector output driving the signal, and that's what most motherboards tend to use.
However, without being certain that your particular motherboard does that, you can't be sure that it won't be damaged if you short the line to GND and it tries to force it high instead of passively allowing it to be pulled high. If you get hold of the schematic, or you can trace the circuit driving that pin, you may be able to determine the type of output used, but if not, disconnect the signal by cutting the wire or removing the pin from the connector with a removal tool and tape it up.
|
H: What will the output voltage look like if we have non-sinusoidal AC paired with a capacitor?
I am currently studying for an exam and a lot of questions are popping up in my head.
What will Uout of a full wave bridge rectifier with a capacitor look like?
I am talking about square, trapezoidal, and triangular types of AC waves. I do not need any calculations, only how it will look, so I can picture it.
AI: The capacitor will charge up to the peak voltage of the input waveform so, if the peak voltages of each waveform are the same then, the DC output will be the same. This is an unloaded capacitor scenario. With a resistive load across the capacitor, the output voltage will discharge a little between input voltage peaks but recharge to the original peak value when the input voltage returns to maximum voltage.
With a load on the capacitor there will also be a slight peak voltage reduction due to the diodes in the bridge rectifier having a small forward voltage drop when conducting.
Example of a trapezoidal input waveform at 50 Hz: -
With an open circuit load resistor, the peak voltage on the capacitor would be closer to about 9.8 volts rather than the 8.5 volts under load.
Triangle input waveform: -
Again, peak capacitor voltage will be about 8.5 volts with the load resistor connected.
Sinewave input: -
|
H: Why does this motor show delta 220 / star 380, which is usually the other way round from what I know?
A brand new induction motor arrived in the factory. It comes with a gearbox on top, controlled by a VFD, but I could not understand its nameplate; it should have shown just delta / star without voltage numbers, so it's confusing.
AI: You're mistaken.
The motor phase voltage is 220 V.
When connected in star, the line voltage is 220 * √3 = 380V.
When connected in delta, the line voltage is the same as the phase voltage i.e. 220 V.
|
H: What does the 'C' marking on this battery indicate?
As the title says, what does the 'C' marking mean? I first figured it may be the terminal that outputs temperature data but I'm not sure. I wanted to build a custom solar charger for this battery and I was hoping to know if this terminal is important in any way.
AI: NP-FZ100 is the battery supplied with Sony α9, α7R III, α7 III, a9 II, a7RIV, a7S III, a7C cameras.
The terminal marked 'C' is for serial communication to display the remaining charge of the battery on the camera's LCD screen.
|
H: No output signal with ULN2803 on Proteus
I'm using an ULN2803 to amplify a voltage signal from an Arduino, but the output is 0 V even though the input is 5 V. I've tried multiple simple circuits to test it, but it just doesn't seem to work. I don't know if it's a Proteus problem or something else.
PS: I need to get the 5 V to 10 V, controlling it from an MCP4728, maybe you can suggest other amplifiers.
AI: The ULN2803 is an open-collector output like this: -
In other words it doesn't magically generate a voltage from nothing; you have to add a power supply of the correct voltage in the output circuit in series with your load such as like this: -
Notice the 12 volt power source feeding all the LED circuits. Circuit image from here.
|
H: Audio: Can I use a stereo-amplifier with a mono DAC?
Here a schematics of my project:
There is only one speaker (the sound is mono)
The DAC I will use is a mono DAC, its reference is ES8311 and its datasheet is here: URL. It has two outputs: OUTP and OUTN.
Because of component shortage, I have to use the following audio amplifier: PAM8019 (datasheet : URL). It is a stereo amplifier, it has two inputs (L_IN and R_IN) and four outputs (L_OUT_P + L_OUT_N and R_OUT_P + R_OUT_N).
I have two questions:
Is it possible to use this stereo amplifier with the audio DAC?
If so, how should I connect OUTP and OUTN to the inputs of the audio amplifier? Mono amplifiers generally have two inputs (IN+ and IN-), so the connections are clear, but in this case it is a bit unclear.
Thanks.
AI: The problem is not on the input, but on the output side!
The block diagram of the PAM8019 suggests completely independently running PWM modulators for the left and right channels. (They aren't necessarily actually free-running independently; it's just that the datasheet doesn't make any statement that they'd be linked, or linkable, at all.)
That means you cannot in general directly connect the output bridges, because that might lead to shorts. (Look at the diagram on page 2 of the datasheet. You'll notice the push-pull stages. It's perfectly possible that if you connect two outputs together, one push-pull stage will connect the output to VDD, and the other to ground. Now you have a large current flowing from VDD into the first output pin, into the other output pin, into ground. That will make the chip very sad.)
So, if anything, you'd need to have an output filtering stage for both channels, and then add them up. But that works directly against the filterless design. So,
Because of component shortage, I have to use the following audio amplifier : PAM8019
You will have to use two speakers, or a different amplifier IC. I'd tend towards the latter.
Alternatively, you can just ignore all outputs for the right channel, and just connect your DAC to the left channel.
But: the DAC you chose already contains, just like your amplifier, a differential output stage. That's annoying, because your amplifier has a non-differential input. The datasheet says the output has "differential output option", but doesn't say how to disable it, so it seems it's always differential.
So, honestly, wrong DAC for this amplifier, wrong amplifier for this kind of job.
|
H: Battery high voltage protection
I'm making a coilgun with four 11.1V 1300mah 95C lipo batteries in series, providing 44.4V for the coils. Current will peak at 200A for short durations of time. Is there some common way to protect the battery, say, with a fuse, in case a coil turns on for too long? I'm a beginner at electronics, so it will help to be more detailed.
AI: Your options are:
A slow blow fuse
A micro controlled switch
A slow blow fuse sounds like a cheap and easy solution: put a slow blow fuse in line, and if the current is too high for too long, the fuse goes and everyone is happy. The problem is the variation in fuses. A 200A rated fuse will conduct 200A at 25°C for a long time. Increase the current to 500A and it'll blow quickly. The problem is that if you pick up 10 of these fuses, you'll get 10 different performances. Some will blow quickly, some will take a bit longer; some will blow at 300A, others at 500A. Standards for fuses are very broad, because they have to be for fuses to work at all. Fuses are a great, simple safety device when you normally draw 5A and a fault draws 100A; then the fuse will blow and keep things safe. If you normally draw 5A and a fault draws 7A, you have a problem. Fuses are not that accurate. This is true regardless of the fuse technology.
Your alternative, the only one which makes sense to me, is to spend more and do some proper design. Using a simple current sense resistor and some basic circuitry you can measure the current accurately. You can feed this into a microcontroller or possibly a basic timer circuit, and use that to turn off a FET which will turn off the current. This is a lot more complicated, but a lot more accurate and reliable. Put a fuse in as well, just in case you get a failure and your FET fails short (as is their wont).
Side note: you are doing something very dangerous. I advise that you do not do this at all. 200A from batteries is scary, LiPos explode if handled wrong. You are making a coil gun. And you are asking basic questions about it online. All of these things make me want to say “put it down and step away”.
|
H: How can the minimal number of bits representing modulus of counter be smaller than the number of bits representing our output?
Suppose we want to have this sequence: 30, 50, 1003, 30, etc. It's not hard to see that we need a MOD-3 counter because there are only 3 states. But this also means we need 2 flip-flops. But how do we represent 30 or 50 in the output? What is the design of such a counter? Do we have to use a decoder to map 0 -> 30; 1 -> 50 and 2 -> 1003?
I'm new to electronics so my question might be quite silly, but this seems so confusing to me. If I have to design a counter that produces another sequence like 000, 001, 101, 000, ..., then it's three flip-flops matching the three bits of our output (Q2, Q1, Q0, and D2, D1, D0 would be functions of Q2, Q1, Q0 if we're using D flip-flops and designing a synchronous counter). Again, this is a MOD-3 counter which only requires 2 flip-flops!?
How can such circuit be designed? Thank you in advance! :D
AI: If you have a binary flip flop, its stable output is high voltage, or low voltage, so it can be in one of two states.
It is up to you whether you call those two states (0,1), (True, False), (30, 50). Depending on what you are trying to do, some codings may be more useful than others.
With two flip flops, you can now uniquely encode four states. Again it's your choice to call them what you want, and to not use all of them.
If you have a black box which can output a two bit binary signal to encode its four states, but you have a large number of lines you want driven, then the simplest way to design something that can do that is to use a decoder-encoder model. The decoder drives one line for each state that can be output (using perhaps half an HC139). The encoder takes each output and uses it to drive a number of lines to represent 30, 50 or 1003 in the encoding of your choice, whether it's binary, BCD, one-hot, 7-segment or anything else.
There may be more efficient ways to do this encoding, especially if you have a large number of states, or there is some regularity in the choice of final output representations that can be exploited, usually discovered by constructing a Karnaugh map from input to output.
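As a software analogy of the decoder-encoder idea described above (my own sketch, not part of the original answer), the two flip-flops only need to hold one of three state codes, and a separate lookup maps each code onto the wide output word:

```python
# Hypothetical state-to-output mapping: 2 bits of state, 10-bit output words.
STATE_TO_OUTPUT = {0b00: 30, 0b01: 50, 0b10: 1003}

def next_state(state):
    # MOD-3 counter on two flip-flops: 00 -> 01 -> 10 -> 00 -> ...
    return (state + 1) % 3

state = 0b00
for _ in range(6):
    print(format(STATE_TO_OUTPUT[state], "010b"))  # the decoded/encoded output lines
    state = next_state(state)
```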
|
H: How to practically measure an input impedance?
I would like to know how I could practically measure the input impedance of the following circuit:
I update my post by adding the rest of the circuit.
This is what my circuit looks like. The aim of my test is to measure a differential input impedance 10 kohms ±10% in the bandwidth (300-2000 Hz).
I hope that is clear enough for you so that you could help me.
AI: The usual way to measure an input impedance is to drive the input with a signal, in series with two different known impedances, and measure the change in output level when changing drive impedance.
The most convenient impedances to use are zero, and one in the ballpark of your expected input impedance. If the output halves when going to the second impedance, then your external impedance is equal to your internal one. Otherwise you just have to write and solve the equation for voltage division into the input with arbitrary impedances.
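Written out as a worked step (my notation, not from the original answer): if the output reads \$V_0\$ when driven through zero source impedance and \$V_1\$ when driven through a known series impedance \$Z_s\$, then
$$\frac{V_1}{V_0}=\frac{Z_{in}}{Z_{in}+Z_s}\quad\Rightarrow\quad Z_{in}=Z_s\cdot\frac{V_1}{V_0-V_1}$$
so when the output halves (\$V_1=V_0/2\$) this reduces to \$Z_{in}=Z_s\$, as stated above.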
|
H: Is there a power amplifier class called "bridged"?
I wanted to know what class of amplifier the TDA7266 from STMicroelectronics is, but I couldn't work it out. Its datasheet says:
The TDA7266 is a dual bridge amplifier specially designed for TV and portable radio applications.
What does "bridge" mean in this phrase?
AI: From TDA7266 datasheet:
"Bridged" means each loudspeaker is driven by two amplifiers instead of one. One amp outputs a positive polarity signal (labeled "OUT+") and the other amp inverts the signal (labeled "OUT-").
This has some advantages:
It can output an AC signal of positive and negative polarity using only one positive supply. This is quite useful for car audio, or other applications where you don't have a negative voltage available, for example if it is powered by the ubiquitous 19VDC laptop brick. In the case of your TV, a bridged amp saves money on the power supply by only requiring a positive voltage.
It doubles the output voltage swing. If each amp is capable of 0V...12V, then the voltage across the loudspeaker will be -12V...+12V, or a 24V swing. Thus, you get double the voltage swing over a more "oldskool" arrangement with a single amp and a huge DC blocking cap in the output. If the loudspeaker impedance remains the same, then since P=RI² (and doubling the voltage doubles the current through the same impedance), double the voltage means four times the power.
It also has some drawbacks:
First, obviously, you need two amps per speaker instead of one.
And, of course, doubling the voltage swing means it doubles the maximum current. So you don't get that extra power for free, the amps must be able to provide that current.
Note that "bridged" is not a class. The bridged amplifiers themselves can be class A,B,D, whatever. These days they tend to be class D for efficiency and cost.
|
H: Why was full page bursting removed when we moved to DDR
I'm interfacing with SDRAM on an FPGA and full page bursts are a godsend for streaming data. It seems to be much, much more handy than a fixed burst size. I know it was removed when we moved to DDR. Does anyone know why the most useful burst mode was removed?
AI: There is something better now: independent banks.
You can activate a row in bank 1 while accessing bank 0, which allows you to issue a write command to bank 1 exactly 8 cycles after sending a write command to bank 0, and the DQS toggling from the first command serves as the preamble for DQS on the second. The same applies to reads.
If you issue commands at the exact right times (which can be easily hardcoded with a counter), you can set up a continuous stream for the entire chip.
|
H: Point-to-point building of medium- frequency circuits, ground plane implementation
I build audio projects and small logic control circuits using point-to-point on perfboard (see pic of a recent build).
As you can see, this allows me to select wire gauge in accordance with the currents at various parts. When the board is finished, tested and washed I spray a clear enamel on the underside. I have built circuits running up to 450V like this without problems, but these are all low frequency projects.
Now I want to build a switcher-style power supply running at 50kHz. For the moment I am only in the exploratory stages as this is terra incognita for me. Is this building method any good for medium frequency work? I seem to recall having seen HF projects built in dead-bug fashion without issues, that can't be any worse no?
The only potential issue I see could be parasitic capacitance between the copper pads. If it is a problem, I could dissolve the unneeded pads using etchant - I have done this for boards that are connected to mains in order to reduce the potential for arcing. This is done before soldering parts - I design my boards with autoCAD so I know in advance which pads need to be kept.
The second point is about the use of a ground plane; this seems mandatory for switching circuits. I can implement it with a piece of plain PCB stock under the perfboard with drilled pass-through holes for the few pins that need to connect to it but there would be a space of almost 1/8" between the two boards, can this be a problem? If so I can place the plain PCB stock on a spot that isn't under parts so it is in direct contact but it will be of smaller area of course.
Alternatively, could a large-gauge piece of wire be used instead of a ground plane? If a low-impedance path is all that is needed that could fit the bill. A 18AWG wire has as much copper as a sizeable PCB plane no? There might be problems due to skin effect in the wire though...
I could etch a bona fide PCB but I'd rather not - I hate drilling dozens of holes using those crazy-thin drill bits, and I'm not at all convinced about the consistency of home-etched PCB tracks' physical/electrical properties. This is the reason I build point-to-point on perfboard.
Sorry for the long post and thanks in advance for any help!
-Joe
AI: For switcher frequencies, you can forget about problems caused by excess pad capacitance, it's just not the right order of magnitude to cause trouble.
However, grounding is a more serious issue. It's not actually necessary to use a ground plane. It is necessary to have a ground return running close to all signal traces, to minimise the area of the current-switched loops that you will be creating in a switcher design, and to ensure that ground currents in one part of the design don't create stray voltages in other parts.
A gridded ground can work quite well. Something like a 10mm or 1/2" spaced grid of wires on both x and y axes, soldered together at every intersection, would be more than enough like a ground plane for switcher frequencies.
If you do use an additional board for a plane, the 1/8" gap should be fairly negligible.
What most RF engineers would do is start from a ground plane, and then mount components above it. You can get sets of pads, IC footprints, and transmission lines, on adhesive-backed copper clad. Another way to do this for cheap is to cut up some copper clad into tiny squares and strips, and epoxy or super-glue these down to the ground plane. Hint, 15 minute epoxy goes off in a few seconds if you hold a soldering iron to the pad.
|
H: Negative gain value in stability of a system
Let D(z) = k, where k is a constant gain. When I find the range of k such that the system is stable, the range of k includes negative values (e.g. -5 < k < 5). My question is: can the gain constant k be a negative value? If it can, what will happen to the system?
AI: If k is negative, then the system may have positive feedback (in some frequency ranges). This is not necessarily unstable (as long as the feedback gain doesn't reach the value of -1), but it would be unusual and not very robust.
|
H: Big capacitor on amplifier output
I am currently making an audio amplifier and I was wondering what the use of the capacitor C5 is. I think it's there to avoid having DC voltage in the speakers when switching on the amplifier, but I'm not sure. If anyone could enlighten me on this, that would be great.
AI: Yes, the capacitor C5 is required to block DC from the amplifier.
There is DC because it is a single-supply amplifier, so the amplifier input and output are biased to half-supply voltage. If the supply is 20V, the amplifier input and output idle at 10V.
The value needs to be large to allow low enough frequencies to pass; for example, 2200 uF passes down to about 18 Hz into a 4 ohm speaker.
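As a worked check of that figure (the standard high-pass corner formula; my arithmetic, not text from the original answer):
$$f_c=\frac{1}{2\pi R C}=\frac{1}{2\pi\cdot 4\,\Omega\cdot 2200\,\mu F}\approx 18\ Hz$$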
|
H: Designing an audio amplifier with push-pull output stage
I'm working on a push-pull stage audio amplifier. I'd like your opinion on whether I'm on the right track with this, both theoretically and practically.
In this circuit I'll be using the following components:
Q1&Q2 TIP31C & TIP32C with LM358N op-amp (this was the only available at my university.)
My circuit:
AI: This modified circuit will have "better" performance due to negative feedback via R2 connected directly to the output and, dc coupling the op-amp to the power-transistor stage: -
The above modifications are pretty much what most class AB audio amplifiers use.
I'd like your opinion on whether I'm on the right track with this
In my opinion, you are not really going places with your design. I mean it will work but, it will have higher distortion than mine (due to the way I've implemented negative feedback) and, if you improved the op-amp type (higher speed and slew rate) mine would be even better whereas your design wouldn't improve with a better op-amp.
|
H: Lower voltage limit of constant current driver
I am looking for a constant current driver with rather low current (100-200mA) to drive COB LEDs (approx 34V). I found several which give an output voltage range, e.g.
12 W
input: 220 Vac 50 Hz
output: 90-102 Vdc 115 mA
Will that one really not work if the load has a voltage drop of only 34 V at 115 mA, or will the efficiency be low but it will work anyway?
AI: A switchmode constant current regulator doesn't waste energy on the same scale as a series-transistor constant current source would. Think of the switchmode current regulator as a buck type voltage regulator modified to keep the coil (and load) current within predetermined limits. When the switch is ON the current gradually increases, and the current decreases when the switch is OFF. The coil current continues through the diode when the switch is OFF.
I do not believe a commercial switchmode current regulator works if it's used with a load which has a different voltage drop than what's specified. Nobody guarantees anything if a device is used outside of its specs. If it's designed as a current-controlling buck regulator, the current in the coil grows fast at low output voltage - remember the induction law di/dt = U/L. The regulation may be poor and the switch transistor or MOSFET may get too hot if it's not designed to change state fast enough.
BTW, I guess it's in practice impossible to make the above circuit so that the input voltage is rectified 230VAC, the output is about 34V at the wanted current, and the current swing really happens so slowly that the device is not a remarkable radio noise transmitter. The inductor for that purpose would be far too big to be practical. If one allows the current to be a triangle with Ipeak = 200 mA and a rise time from 0 to 200 mA of say 20 us, the inductor could be 29 millihenries, which is still quite a boulder. 10 times higher operating frequency needs only 2.9 mH, but people who try to use AM radios in the neighbourhood surely would have something to say.
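As a rough worked version of that estimate (my arithmetic, assuming roughly 325 V of rectified 230 VAC minus the 34 V output across the inductor while the switch is on):
$$L\approx\frac{U\,\Delta t}{\Delta I}=\frac{(325-34)\ V\times 20\ \mu s}{0.2\ A}\approx 29\ mH$$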
|
H: What are these white/grey stains on the back of a motherboard?
I've recently bought a used motherboard and while cleaning it, I noticed these white stains on the back side of the PCB:
I've managed to perfectly clean the front side with pure isopropyl alcohol and a soft brush, which was mostly just dusty, but I can't easily remove the stains on the back. When soaking in alcohol, they can be smeared around somewhat with a cotton pad, but they never seem to clean up fully.
I've worked with used hardware before and remember having seen similar stains on other electronics. They also seem to have a certain formation to them, but I can't make out a logical pattern.
What are they and is this a sign of.. anything particular?
AI: The lines seem to be surrounding all areas with exposed PTH leads sticking out, to the exclusion of areas with only SMD parts.
I think they mark the openings of something called a "selective wave soldering pallet" that is used to protect previously assembled SMD parts during wave soldering of the PTH parts. I'm not sure how the stains are formed. I can see some references to flux residue being trapped under the side walls of the pallet mask.
|
H: Correct way to label +5V and GND in KiCad
For KiCad to detect the +5V and GND parts of a schematic, I assume the power component symbols have to be attached.
How does this work when the power comes in via a terminal?
In the image below, I have had to attach additional GND and +5V symbols to the net, behind the terminal, so tools like the copper fill tool can recognise the different parts of the net.
This (probably naively) feels a little like adding extra connectors that aren't really there. Is there a better way of indicating power is supplied through the terminal or is this the correct approach?
AI: For KiCad to detect the +5V and GND parts of a schematic, I assume the
power component symbols have to be attached.
KiCAD recognizes the "+5V" and "GND" labels but, if you don't specify that those nets are driven from somewhere (you don't need to tell KiCAD where), KiCAD will produce a warning when performing an electrical rules check.
So, adding the PWR_FLAGs is a means of telling KiCAD that it doesn't need to make a warning message when checking those nets/lines.
How does this work when the power comes in via a terminal?
KiCAD doesn't recognize a terminal as a source for the power on a net so, you still need to add the PWR_FLAGs.
This (probably naively) feels a little like adding extra connectors
that aren't really there. Is there a better way of indicating power is
supplied through the terminal or is this the correct approach?
When I first used KiCAD I hated PWR_FLAGs thinking them unnecessary but, they are what they are; just a method of preventing warnings when performing electrical rule checks.
|
H: UART oversampling
I've read that for UART, oversampling is used to recover some kind of "clock signal". How exactly is that done and why is it needed?
I already searched on the internet, but I didn't find any helpful information.
AI: Two devices that communicate over UART are not synchronous, so device A uses its own clock to transmit, and device B uses its own clock to receive data.
The receiving device must check the incoming data pin at a higher rate than the bit rate, to determine when the transmitting device has begun to send the start bit, so the receiver can sample each data bit right in the middle of the bit, with as much margin as possible from the transition regions.
Typically the oversampling rate is 16x which generally allows the bit rate clocks to have 1% to 2% tolerance.
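As a rough sketch of that sampling logic (my own illustration, not a specific UART implementation; `sample()` is a hypothetical function returning the RX line level once per oversampling tick, and the stop bit check is omitted):

```python
def receive_byte(sample, oversample=16, data_bits=8):
    # Wait for the falling edge that marks the start bit; the first 0 seen is
    # (to within one oversampling tick) the beginning of the start bit.
    while sample() == 1:
        pass
    # Advance to roughly the middle of the start bit and confirm it is still low,
    # which rejects short glitches on the line.
    for _ in range(oversample // 2 - 1):
        sample()
    if sample() != 0:
        return None
    # From the centre of the start bit, every further full bit period
    # (one 'oversample' worth of ticks) lands near the centre of the next data bit.
    value = 0
    for bit in range(data_bits):
        for _ in range(oversample - 1):
            sample()
        value |= (sample() & 1) << bit  # UART sends the LSB first
    return value
```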
|
H: DC analysis of a BJT circuit with base feedback
I'm trying to analyze a bigger circuit and trying to figure out the DC voltage at the collector. I narrowed it down to this much simpler circuit, which outputs 1.379 V. I cannot seem to get the same result, so I must be doing something wrong.
I'm not sure if calling this a "base feedback" is correct, what I meant is that the base and collector share a resistor instead of the usual base divider.
What would be the steps for analyzing this kind of circuit?
AI: Rough/approximate answer/method
Call the voltage at the collector \$V_C\$ and call collector current \$I_C\$.
We can then say that \$V_C = 5 - I_C\cdot R_7\$ (assumes base current is small)
Then using the current gain of the BC547C, \$h_{FE}\$ we can say: -
$$V_C = 5 - h_{FE}\cdot I_B\cdot R_7$$
And \$I_B\$ is simply \$\dfrac{V_C - 0.7}{R_4}\$ where 0.7 is the voltage needed to roughly start to forward bias the transistor's base-emitter region.
So, putting things together we get: -
$$V_C = 5 - h_{FE}\cdot \dfrac{V_C - 0.7}{R_4}\cdot R_7$$
For a BC547C transistor, \$h_{FE}\$ is about 600: -
And given the values for the two resistors we get: -
$$V_C = 5 - 600\cdot \dfrac{V_C - 0.7}{1000000}\cdot 10000$$
$$V_C = 5 - 6\cdot (V_C - 0.7)$$
$$7\cdot V_C = 5 + 4.2$$
Hence, \$V_C\$ is about 9.2/7 = 1.314 volts.
What would be the steps for analysing this kind of circuit?
Well you could do what I did above or, you could use a simulator: -
The simulator estimates the collector voltage to be 1.502 volts and therein lies the difficulty in analysing single BJT amplifiers because there are assumptions made about current gain and base voltages that are probably more accurately made using a sim.
|
H: How can I go about understanding an electronic circuit?
I have this circuit:
My goal is to gain a qualitative understanding of what VO1 and VO2 are going to look like for different signals V3 and V4 applied.
Since I can't simply calculate everything with BJTs, or electronic components in general, I would like to ask how I could start going about learning how the circuit works. Do you maybe know any resources like books or web pages about this?
I only found advice along the lines of knowing what the symbols mean.
AI: The voltages applied to a differential pair are classified into two classes:
Common-mode voltage (CM), as V5.
Differential voltage (Dif), as (V3-V4).
The outputs are "differential", which means they vary in opposite directions (+,-).
Analysis is done by reporting the outputs while varying one kind of "input" (CM or Dif, or both).
Thus you report (VO1-VO2) vs CM, or (VO1-VO2) vs Dif.
Or something like this, with the parameter -5V < Vcm < +5V, for a first try.
|
H: If an IGBT's collector-emitter max voltage is 600 V, Do I need 1 kV rated capacitors on the output side?
I'm fixing this solar PV diverter (pictures attached at the bottom of this post), other than the traces it has a blown IGBT and 2 capacitors.
The datasheet for the IGBT (model FGH40T65SPD) is here:
https://www.onsemi.com/pdf/datasheet/fgh40t65spd-d.pdf
I've tested the other non-blown capacitors and found the missing ones are 12 uF and 56 nF. The IGBT collector-emitter voltage can reach 600 V, though, so do I need to use caps rated at 1 kV? On the datasheet it looks like normal operation is around 20 V but I'm not 100% sure. What are your thoughts?
Thanks
Some pictures here: https://i.stack.imgur.com/uNbzE.jpg
AI: The picture shows the driving circuit. The voltage there must be limited to Vge, which is +-20V for your IGBT.
Standard 50V X7R is probably sufficient.
There also isn't enough clearance under C5 to allow such voltages, and the parts don't exist.
Note: the picture also shows delamination of the pcb and possible internal layer damage. See the stain going up to C47 and R51.
|
H: Capacitor that connects the base and the collector of BJT makes it hard to study multistage amplifier
I'm studying ECE and at this point in microelectronics they've taught us that when performing DC analysis on low frequencies we must short circuit the capacitors of the circuit.
In the circuit below, I've been asked to find the value of \$R_B\$ so that the dc voltage at the collector of \$T_1\$ is equal to \$5V\$ (\$V_{C1}=5V\$) and then calculate the voltage gain \$\left(\frac{V_{out}}{V_{in}}\right)\$. The capacitor \$C_{BC}\$ makes everything difficult! I've short-circuited the capacitors and have calculated the following.
Assuming that for both \$T_1\$ and \$T_2\$ it is \$\beta=246\$, \$V_{BE}=0.78V\$ and they operate in the active region:
\$V_{B1}=V_{C1}=V_{B2}\$
To calculate \$I_{B1}\$:
\$V_{B1}=V_{BE}+I_{E1}\cdot 0.1k\Omega\Rightarrow\$
\$5V=0.78V+(\beta+1)\cdot I_{B1}\cdot 100\Omega\Rightarrow\$
\$I_{B1}=170.85\mu A\$
Thus, \$I_{C1}=\beta\cdot I_{B1}=42.0291mA\$
To calculate \$I_{B2}\$:
\$V_{B2}=V_{BE}+I_{E2}\cdot (R_{E21}+R_{E22})\Rightarrow\$
\$5V=0.78V+(\beta+1)\cdot I_{B2}\cdot (0.33+1.8)k\Omega\Rightarrow\$
\$I_{B2}=8.02136\mu A\$
Now, to calculate \$R_B\$:
\$V_{C1}=15V-(I_{B1}+I_{C1}+I_{B2})\cdot\left(220k\Omega+R_B+1.8k\Omega\right)\Rightarrow\$
\$5V=15V-(0.17085mA+42.0291mA+0.00802136mA)\cdot\left(220k\Omega+R_B+1.8k\Omega\right)\$
and I get \$R_B=-219.72716...k\Omega\$.
Note that \$R_B\$ is an ohmic resistor. I've found many similar examples/problems in textbooks but none of them had a capacitor like \$C_{BC}\$. Using the low-frequency hybrid-\$\pi\$ model of the BJT proves to be a bit difficult, once again because of \$C_{BC}\$.
I'm not quite sure what it is that I'm not doing right. Any sort of hint, help or even a reference to a similar example in some textbook is much appreciated!
AI: they've taught us that when performing \$\color{red}{\text{dc analysis}}\$ on low frequencies
we must short circuit the capacitors of the circuit.
That is incorrect.
When performing AC analysis it's usual to short capacitors. When performing DC analysis it's usual to open-circuit capacitors.
I've short-circuited the capacitors
That's incorrect for DC analysis; you need to open-circuit the capacitors. So, when you initially start your solution and say this: -
$$V_{B1}=V_{C1}$$
That is incorrect for DC analysis.
|
H: How to measure a steady state load and surge load of relay?
I am asked to test a relay and show that it is capable of handling steady state load of 0.2 A at 27.5 Vdc and 2.0 A surge load with a maximum of 10 ms at 27.5 Vdc.
How can I do this test and measurements?
The relay that used in the circuit as follow:
AI: I am asked to test a relay \$\color{red}{\text{and show}}\$ that it is capable of handling
steady state load of 0.2 A at 27.5 Vdc and 2.0 A surge load with a
maximum of 10 ms at 27.5 Vdc.
My answer demonstrates that you don't need to test anything to show that it is capable of meeting the stated requirement. Here's an image of relay K2 captured from question: -
Fujitsu FTR 12 volt coil relay data sheet link.
Firstly, there is nothing I can think of that might be gained in testing a relay beyond the limits stated in its data sheet. The data sheet limits will be supported by Fujitsu's quality department and, it will have undergone extensive testing until the device reaches the end of its life. In short, it will likely be tested to a higher standard than that which you or I could test to: -
Secondly, if the testing you wish to do is within the limits set out in the data sheet, is there any justifiable reason to do any testing yourself?
|
H: Photodiode amplifier circuit for VBP104FAS sensor
I am trying to create a photodiode circuit for this sensor so that I can connect its output to a microcontroller. I am following the document: https://www.ti.com/lit/an/sboa220a/sboa220a.pdf for my design. However, I am unable to get similar plots on the AC analysis after running a simulation of the design on Tina-TI.
For my situation, the following parameters are my design goals:
Input current: 0A to 100uA (based on reverse light current from Fig. 3 of https://www.vishay.com/docs/81169/vbp104fa.pdf)
Output voltage: 100mV to 3.3V (as I intend to connect this to a microcontroller ADC pin)
BW: kept at 20kHz (not sure if this is relevant to my scenario)
Supply: Vcc is 3.3V as it will run on the same power rail as the microcontroller, Vee=0V and Vref = 0.1V.
Using the parameters above, I have worked out the following values for the different components:
R1 = 32kOhm
R2 = 240kOhm, R3 = 7.5kOhm (based on a resistor ratio formula)
C1 <= 4.97uF (based on R1=32kOhm and bandwidth of 20kHz)
GBW - I was unable to work this out. For this situation I had been looking at using a LMP7221 opamp, which has an input capacitance of 11pF. I assume this is equivalent to Cd + Cm (differential and common mode capacitance). Then I used 17pF for junction capacitance of diode, I assume this is equivalent to diode capacitance at 3V of the photodiode I am using.
I kept C2 = 1uF
Using the following parameters above, I have generated the following schematic and plots.
I think the AC sweep does not look correct, so I am not sure which parameter I might have entered incorrectly.
AI: R1·C1 = T is about 150 ms, which makes the bandwidth far too low for 20 kHz; that needs T ≈ 50 µs/2π (about 8 µs).
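As a worked version of that point (my arithmetic, using the standard single-pole corner formula):
$$C_1=\frac{1}{2\pi R_1 f_{-3dB}}=\frac{1}{2\pi\cdot 32\ k\Omega\cdot 20\ kHz}\approx 250\ pF$$
which is orders of magnitude below the 4.97 µF used in the simulation.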
There is no good reason for Vref to be more than needed if you have a 1M load and 50 Ohm source. The output swing is rated for 2k and 10k loads. If you insist, at least use proper values of 8.45k instead of 7.5k to target 100 mV for Vref.
Then change R1 from 32k to 30.1 k for 0.5% gain error with an output swing from 0.1 to 3.1V and a more accurate log range near 0.1V for dark current and dim light.
The Bode plot is correct for the C1 chosen in error, but 20 kHz is for what light source? The 100 MHz sweep is based on what assumptions?
|
H: Non linear inductor in LTspice
I am trying to construct a non linear inductor in LTspice and have a question regarding the two plots provided below. In the first image current is swept from -30A to 30A in steps. The result yields the required behavior where the initial inductance is 500uH and with 27A peak current the left over inductance is around 40 uH.
However, when using the same flux function and measuring inductance across L1, the obtained result shows that with zero current the initial inductance is around 15 mH, which is not correct.
I wonder if there is something I may do not understand here.
Spice netlist is provided below:
L1 ind1 0 Flux=0.007*tanh(0.072*x)
I1 0 ind1 PWL(0 0 1 30)
I2 0 ind2 {I_DC} AC {1/(2*pi)}
L2 ind2 0 Flux=0.007*tanh(0.072*x)
.tran 1
.step param I_DC -30 30 0.1
;.ac list 1
.backanno
.end
AI: Your second analysis is in the frequency domain, so the results that you get are a function of frequency, first and foremost. You are using the card .ac list 1, which means you are evaluating the frequency response at 1 Hz, for a variation in the source's DC value.
If you'll change the analysis to be .tran 10 and set the current source to be sin 0 1 1 (unity amplitude, 1 Hz), you'll see that the voltage across the inductor will be ~3.16 mV peak. If you now divide this by 2*pi it will show ~504 uV peak, which is the same as what you got there.
Don't forget that, if you want a .DC analysis it will fail since that only evaluates the operating point, without any dynamics involved, and the inductor is considered a short. If you want a .TRAN analysis then you will get the dynamic variation, provided that it varies much slower than the variation, itself. And an .AC analysis will give you a response as a function of frequency, and if you perform one for 0 Hz (only works with .ac list 0) you will get exactly what a .DC analysis will get you: an operating point with shorted inductors and opened capacitors; otherwise you'll need to take frequency into account.
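As a worked check of those numbers (my differentiation of the flux expression in the netlist, not part of the original answer):
$$L(i)=\frac{d\Phi}{di}=0.007\cdot 0.072\cdot\operatorname{sech}^2(0.072\,i)\quad\Rightarrow\quad L(0)=504\ \mu H$$
so at 1 Hz with a 1 A amplitude the peak inductor voltage is \$2\pi f\cdot L(0)\cdot 1\,A\approx 3.17\ mV\$, and scaling by the \$1/2\pi\$ A amplitude actually used in the AC source gives the ~504 µV mentioned above.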
|
H: Any issues wiring high power switching power supplies to 3 phase power this way?
I have to wire several high power loads to a 415 V 3 phase feed. In case it matters, this is in Singapore.
The loads have a switching power supply front end. They are rated for 277 to 480 VAC and several thousand Watts. I have live, neutral and protective earth wires for the input.
This is what the proposed installation would look like (minus circuit breakers to keep it simple).
Any issues with this at all? I don't think so, but I thought I'd check.
Neutral is not used. Only L1, L2, L3 and PE, with the 3 switching power supplies presenting a balanced load.
AI: No problems with what is shown.
Add >400 V MOVs L-N, or 640 V L-L, if not already included; 10 kA minimum, 40 kA suggested.
Delay Power Sequencing is suggested on power failure to minimize start surge unless designed with a soft-start.
Arc flash represents a potentially lethal hazard for those inexperienced in working with high voltages with high follow-on currents.
Some companies may wish to have electricians perform installations of high-voltage loads rather than IT personnel, until such time as IT personnel can be trained on the proper safety precautions!
|
H: Lattice MachXO3 - what's "HW Default Mode"?
What's the meaning of the "HW Default Mode" for a Lattice MachXO3 device? I've seen this term come up a few times in the configuration guide but there is no clear definition of it. Does this refer to erased/blank devices?
AI: Note:
To 'Check Device ID' over the I2C configuration port, the
MachXO3 must be in Feature Row HW Default Mode state
(that is, blank/erased)
FPGA-TN-02055-2.7 page 49. So, you guessed right. It applies to unprogrammed or erased devices.
|
H: Is the reference voltage for voltage control the RMS between the three phase voltages?
Suppose we provide voltage control with an AVR connected to a synchronous generator. A voltage reference is compared with the output terminal voltage of the generator, and the error between them is sent to a controller.
My doubt is: since we are in a three-phase system, which terminal voltage do we take from the generator terminals? Is it the average of the RMS values of the 3 phases? We cannot just measure one phase and use that, I suppose.
AI: It seems they only use 1 phase for feedback to control the AVR field winding current.
|
H: A capacitor charge time - two methods two different answers
I am a bit puzzled and ask for your help about the following:
A theoretical capacitor of 100 F is being charged with a constant power source of values V = 50V, I = 50A, ESR of capacitor = 5 mohm
To calculate the time to charge the cap:
Approach 1: [Calculate time using energy flow rate]
Capacitor capacity = 0.5xCxV^2 = 0.5x100x50^2 = 125 kJ
Charging power = VxI = 50x50 = 2500 W= J/s
Time to charge = Capacitor capacity / charging power = 125 kJ/2500 J/s = 50 s
Approach 2: [using standard capacitor charging formula]
V of capacitor = V(1-e^(-t/RC)) = 50(1-e^(-2.5/(0.005x100))) = 49.66 V
As one can see that after 5 time constants (2.5 s), the capacitor's voltage is 99% using approach 2.
Obviously, this is the correct approach using the established formula.
Why is approach 1 off this much? What am I missing in either case?
AI: Your time constant of 0.5 seconds clearly is derived from your capacitor ESR of 5 mohm. So, what you are effectively proposing as a charge circuit looks like
simulate this circuit – Schematic created using CircuitLab
So, start with the switch open and the capacitor discharged. Now close the switch. What is the charge current?
That is 50/.005, or 10,000 amps.
Compare this to the 50 amp limit of the constant power charger. You think that might have something to do with it?
|
H: Output level of LM311 comparator
I am simulating the circuit above in LTspice, where an LM311 (an LT1011 is simulated, but it should be largely equivalent) is used to transform a (noisy) sine wave signal into a square wave. The value of R5 controls hysteresis, as seen in the red waveform.
I understand that the comparator has an open-collector output, pull-up resistor R1 is used to obtain 5V on high output.
My question is about the low output level. Even though V- is grounded, the voltage at the output doesn't quite reach 0V, rather it stays at 200.3mV; changing the value of R1 from 1kOhm to 10kOhm decreases this level to 155mV. How does this voltage behave and how can it be selected by the circuit designer? Is it any of the specs in the datasheet?
Looking at datasheets for LM311/LT1011 it seems pin 1/EMIT OUT is connected to the rest of the circuit through a 4Ohm resistor. If this was the only factor, I'd expect a 1kOhm by 4Ohm +5V to 0V voltage divider to provide a much lower 19.9mV. Instead, it seems to behave as a 42Ohm resistor with a 1kOhm R1, and it even changes to a 320Ohm resistor with a 10kOhm R1.
AI: The LM311 and LT1011 are bipolar chips- the LT1011 output is an NPN transistor with what looks like some current-limiting circuitry (including the 4 ohm resistor in the emitter). So the output will not behave in a “resistive” manner.
The typical and worst-case voltage drop is specified in the datasheet. For the LT1011, if you call on it to sink less than 8mA you can count on the low output voltage being less than 400mV, given some overdrive and Tj < 100 degrees C. For 8mA sink and 3mV of overdrive it will typically be around 260mV at 25 degrees C.
If you are using the LM311 refer to the relevant datasheet, of course, and preferably simulate with the correct model.
|
H: Mosfet Relay Gate Drive
Recently I ran into a part called a MOSFET relay while trying to address a design problem.
On paper, this part seems OK to use for my design, but looking at the internal circuit I don't see any charge pump that drives the FET's gate. The gate looks to be driven by a photodiode dome array (PDA). Can this part be used to switch loads as it is? My design requires an isolated load switch driven by an I2C expander; can this part be used?
Datasheet
AI: This kind of MOSFET SSR uses a series photovoltaic array with a circuit to speed (and ensure) turn off. The array supplies a number of volts but the current is weak so it takes some time to charge the gate capacitances (milliseconds, not microseconds or nanoseconds). On the plus side, you don’t need an external isolated supply to drive the gates.
If your circuit does not exceed the max voltage/current/isolation specs and can tolerate the leakage and on-resistance and the slow switching speed (3ms on/ 1ms off) then it should work okay.
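As a rough feel for why the switching is so slow, the turn-on time is approximately the total gate charge divided by the photovoltaic array's output current. The numbers below are assumptions of the right order of magnitude for this class of part (check the actual datasheet), not values taken from it:

# Rough turn-on time estimate for a photovoltaic-driven MOSFET SSR.
Q_gate = 30e-9      # assumed total gate charge of the output MOSFETs, ~tens of nC
I_pda  = 10e-6      # assumed photovoltaic array current, ~tens of uA at best

t_on = Q_gate / I_pda
print(f"estimated turn-on time ~ {t_on*1e3:.1f} ms")

With those assumed numbers you land in the millisecond range, consistent with the 3 ms on / 1 ms off figures above.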
|
H: Bias voltage higher than input voltage
Can someone explain why the voltage on the Bias pin should be higher than the input voltage in this LDO?
What is the reason behind it?
Why is the bias voltage on different pin? For what purpose? How can it have overlapping voltage ranges with the input voltage?
AI: The MIC47050 uses the bias voltage to power the amplifiers and control circuitry in the regulator.
The MIC47050 is intended for low voltage regulation. It uses a slightly higher voltage for the control circuits to regulate the lower voltages.
"Bias" is the power supply for the regulator.
"In" is the source that provides the power to the regulated output.
I expect the reason it is done this way is to get better performance.
Circuit design gets more difficult the lower the source voltage. It gets harder to get good performance.
With the split design of the MIC47050, the designer can use higher voltage for the critical parts of the regulator. The low voltage is only handled by the pass transistor.
This drawing from the datasheet shows how it works:
The red line is where current flows from the "IN" power source to the regulated "OUT." Pretty much everything else in the IC is powered from the "Bias" pin.
|
H: How many optocoupler outputs can I stack in series?
I have 44 devices - each which provides a 5V output when active and operating correctly and 0V when not.
On my microprocessor, I need to check if all 44 are active or not (don't need to know if individual devices are active or not, but just to know that all are active or not.) I have used optocouplers to try and achieve this for a small number of devices (4) and it seems to be working. My circuit is as below.
My questions are:
How many optocouplers can I put in series here? Based on my limited knowledge of circuits, I believe the LED on the PC817 input side drops 1.1V and each transistor on the output side of the PC817 drops 0.7V. Does this mean I can theoretically put 12 optocouplers here?
How do I go about calculating the value of R2?
Based on the maximum number of device inputs I can "stack" for each microprocessor input, I will know the total pins I need on my microprocessor for the complete 44 device inputs.
EDIT: Many responses are asking if isolation is required and the answer is yes. That is a must as the inputs are all on different grounds.
AI: Vf = 1.2V out of 5V input gives 3.8V drop across each input resistor, or 8mA. Fig. 12 here gives the saturation characteristic:
https://www.farnell.com/datasheets/73758.pdf
about 0.1-0.5V depending on Ic. If we say 2mA is required, then it's in the 0.1-0.2V range, and out of 12V, less the 1.2V needed for the output opto, less a couple volts for a current limiting resistor*, probably 20 in series will do fine.
*We might go further and assert that the Vce(sat)s have resistance too. Evidently on the order of (0.5V)/(5mA) = 100 ohms each. Which means 20 in series will be a pretty reasonable 2kohms, without any extra resistance needed. I wouldn't feel bad "abusing" this property: even at maximum CTR, current flow won't be dangerous, and you can always add in a little resistance to keep it safe (say to limit any one transistor's worst case Pd to ratings). Probably this allows shrinking the resistor such that 30 in series will behave.
Mind, this is a typical curve, and the worst case / minimum condition will be much worse than this. Grades are available with different CTR ranges, which can make this a bit more consistent.
Note that optos are dreadfully slow, expect several microseconds turn-on and a hundred or so turn-off. No problem for a basic indicator or whatever, but also practically an eternity to most CPUs for example.
As for need: are these channels all on separate grounds? None can be joined? Not that a common-ground logic circuit is all that much simpler (an array of 74HC30s, say?), or greatly cheaper as discrete implementation (44 diodes, lower parts cost perhaps, but a ton more parts!), so the use of optos is not particularly objectionable here, but they're usually a poor choice when a simpler solution is available.
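A rough headroom budget for the series string, as a sketch using the supply, LED drop and Vce(sat) figures discussed above (the ~20-unit figure sits between the typical and worst-case bounds):

# Rough headroom budget for the series string of opto output transistors
# (supply, LED Vf, and Vce(sat) range taken from the answer above).
V_supply = 12.0     # volts available for the output-side string
V_led    = 1.2      # forward drop of the output opto's LED
V_res    = 2.0      # a couple of volts reserved for a current-limiting resistor
headroom = V_supply - V_led - V_res

for vce_sat in (0.1, 0.2, 0.5):          # typical ... worst-case saturation voltage
    print(f"Vce(sat)={vce_sat:.1f} V -> up to {int(headroom // vce_sat)} optos in series")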
|
H: Pi-matching impedance calculation for LoRa 868Mhz RFM95
I'm not an expert in RF, but I'm interested in understanding how to calculate an RF matching circuit for my RFM95W 868M LoRa node, whose ANT pin should be terminated in a matched 50 ohm impedance.
At the output of the matching circuit, I want to connect a wire antenna or an RP-SMA connector like this.
I've read around and I've found this tool that can be used to calculate the RF matching circuit.
This is how I've placed my frequency (F=868) reference, my source resistance (RS=50) and my load resistance (RL=50.) The output is the following:
I have two main questions:
Let's suppose that the calculations are okay; which commercial parts/values can I use?
The Q factor can be 3? Must it be higher? Lower? Is there a "rule of thumb?"
AI: This is how i've placed my Frequency (F=868) reference, my Source
resistance \$\color{red}{\text{(RS=50)}}\$ and my Load Resistance \$\color{red}{\text{(RL=50)}}\$
Your source and load impedances are identical so, you have no need to match them. The pi network will give you some filtering benefits but that's a different question.
The Q factor can be 3? Must it be higher? Lower? Is there a "rule of
thumb?"
Here's where there's a bit of controversy as to what Q factor actually means. This document (entitled Quality Factor, Bandwidth, and Harmonic Attenuation of Pi Networks) discusses what the so-called Q factor is in pi filters and concludes that it can mean different things to different people.
Significantly, the on-line calculators that invoke Q factor as a parameter don't appear to justify what it means or how to use it. Think about a pi filter of equal input and output impedance; the circuit gain has to be unity hence, Q factor should be unity basically because: -
$$Q = A_V = \sqrt{\dfrac{R_L}{R_{IN}}}$$
On my basic website I don't use Q factor because of the ambiguity stated above. I derive the pi network as two back-to-back L-pads like this: -
\$R_X\$ would be the output impedance of the left hand L-pad or the input impedance of the right-hand L-pad. Of course, when placed back-to-back they fully interlock as matching impedances AND, \$R_X\$ is unambiguous in that respect. For instance, if I use my calculator at 868 MHz I get the same results as the calculator you used when \$R_X\$ is 5 Ω: -
And, if I vary \$R_X\$ you can see it is peakier in the response as \$R_X\$ gets lower: -
Incidentally, the graph above is when \$R_{IN}\$ = 50 Ω and \$R_L\$ = 300 Ω. So, decide yourself whether you want to use Q or want to use \$R_X\$. As a last resort, you can always simulate the circuit to look at the frequency response.
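For completeness, here is a small sketch of the back-to-back L-pad arithmetic using the standard L-match equations, with \$R_X\$ = 5 Ω as in the example above (treat the results as a starting point and verify by simulation):

import math

# Back-to-back L-pad pi-network values (method described above).
# Rx = 5 ohm is the intermediate "virtual" resistance used in the example.
f   = 868e6
Rin = 50.0
RL  = 50.0
Rx  = 5.0      # must be lower than both Rin and RL

w = 2 * math.pi * f
Q1 = math.sqrt(Rin / Rx - 1)       # input-side L-pad
Q2 = math.sqrt(RL / Rx - 1)        # output-side L-pad

C1 = Q1 / (w * Rin)                # shunt cap at the input  (Xc = Rin/Q1)
C2 = Q2 / (w * RL)                 # shunt cap at the output (Xc = RL/Q2)
L  = (Q1 * Rx + Q2 * Rx) / w       # the two series reactances merge into one inductor

print(f"C1 = C2 = {C1*1e12:.1f} pF, L = {L*1e9:.1f} nH")

With these values you get roughly 11 pF shunt capacitors and a 5.5 nH series inductor; pick the nearest E12/E24 values and re-check the response.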
|
H: Why don't most electrons hit the anode in a cathode ray tube or electron gun?
In a cathode ray tube or electron gun, electrons liberated from the cathode by thermal emission accelerate towards a ring-shaped anode, from the potential difference between cathode and anode.
If an electron is slightly off-centre, so nearer to one edge of the anode ring than the other, I'd expect the near side of the anode ring to attract it more than the far side, pulling it further off centre (because electrostatic force follows the inverse square law)
So I'd then expect most electrons to end up hitting the anode ring, rather than going through.
Why do electrons go through the hole instead of hitting the ring? Is it z-pinch keeping the beam together, or does a complicated arrangement of anodes at different voltages lead to an inward radial force, and if so how? I'd expect any anode to attract electrons towards itself, moving the beam further from the centre for any anode that's on the outside of the beam.
AI: Electrons have mass, and therefore momentum. As long as they are fast enough, they are not likely to change their direction much.
Also, when the electrons leave the cathode, the anode is far enough away that they are pulled towards the center of the ring anode (because the anode is symmetrical) and are barely accelerated towards the ring itself.
Most important: the electrical field in the plane of the anode ring is almost zero, so there is barely any force on the electrons towards the ring. That's basically the same principle as inside a Faraday cage.
Furthermore, the inverse-square law applies to point sources of an electrical field, the field in a different geometry can differ significantly - like between charged plates, around a charged rod, between two point sources or inside of a charged ring.
After all, nobody says that none of the electrons hit the anode. Some certainly will and if you measure the current required to sustain the high voltage, you will see some leakage current, which comes from those electrons hitting the anode.
|
H: Can I short VBUS and GND in a 6-circuit USB-C receptacle?
In search for a USB-C receptacle just to source power from my MacBook Pro's USB-C port for my PCB with no need for data transfer, I found this Molex 6-circuit USB-C receptacle which seems to offer just what I need and is also easier to solder by hand.
I understand the two VBUSs and GNDs are designed to accommodate the two USB orientations, but can I simply short them together (VBUS-VBUS; GND-GND) before powering my PCB? Or do I really need another IC to detect my orientation and manage which VBUS pin I should source current from?
AI: You can short those pins together; they are not required to detect orientation.
See this answer on a similar question.
|
H: TLC5946 no output when BLANK is LOW
I'm testing the TLC5946 LED driver and I'm trying to bitbang into it's registers with this bit of code:
void bitbang(bool high, uint8_t bits) {
digitalWrite(LED_DAT, high ? HIGH : LOW);
delayMicroseconds(5);
for(uint16_t i = 0; i < bits; i++) {
digitalWrite(LED_SCK, HIGH);
delayMicroseconds(5);
digitalWrite(LED_SCK, LOW);
delayMicroseconds(5);
}
}
void prep_DC() {
digitalWrite(LED_MODE, LOW);
delayMicroseconds(5);
bitbang(HIGH, 96);
digitalWrite(LED_LATCH, HIGH);
delayMicroseconds(5);
digitalWrite(LED_LATCH, LOW);
delayMicroseconds(5);
digitalWrite(LED_MODE, HIGH);
delayMicroseconds(5);
}
void prep_GS() {
digitalWrite(LED_MODE, HIGH);
bitbang(HIGH, 192);
digitalWrite(LED_LATCH, HIGH);
delayMicroseconds(5);
digitalWrite(LED_LATCH, LOW);
delayMicroseconds(5);
}
void setup() {
pinMode(LED_SCK, OUTPUT);
pinMode(LED_DAT, OUTPUT);
pinMode(LED_LATCH, OUTPUT);
pinMode(LED_BLANK, OUTPUT);
pinMode(LED_MODE, OUTPUT);
digitalWrite(LED_BLANK, HIGH);
digitalWrite(LED_LATCH, LOW);
digitalWrite(LED_SCK, LOW);
prep_DC();
prep_GS();
digitalWrite(LED_BLANK, LOW);
}
void loop() {
}
That seems to work fine; however, the TLC5946 doesn't light the LEDs up - they only very briefly blink. With some experimentation, I found out that cycling the LED_BLANK pin is what makes the LEDs blink.
So when I add
void loop() {
digitalWrite(LED_BLANK, HIGH);
delayMicroseconds(5);
digitalWrite(LED_BLANK, LOW);
delayMicroseconds(5);
}
the LEDs light up, with the correct GS PWM, it seems. I'm pretty sure the GSCLK is not the problem, since when I disconnect it, the LEDs don't even blink. Similarly, the actual data latched into the IC seems right: the correct LEDs turned on/off when I tried bitbanging something other than all 1s.
Is clocking the BLANK pin correct way to use this IC? If so, when should it be clocked, and where is it described in the datasheet? (I couldn't find that information) If not, what could I be doing wrong?
AI: The datasheet says:
The counter is reset to zero when the BLANK signal is set high.
[…]
When the counter becomes FFFh, the counter stops and output does not turn on until the next grayscale cycle.
So BLANK must be pulsed high after every 4096 GSCLK cycles (or fewer than that if you do not need 12-bit PWM resolution).
If you do not need PWM at all, then you can pulse BLANK high, send a rising edge on GSCLK, and then do not send any more GSCLK cycles.
|
H: LTspice: conditions values measurement in error log automatically
I am trying to measure automatically values according to another value on the schematic (.meas values in the log error).
Here you can see what I would like :
In this case I want in the spice error log:
When V1(Vs_aop)>15V,
read I1(Rsh)=5.927A,
save value in lor error
When V2(Vs_aop)>15V,
read I2(Rsh)=5.936A,
save value in lor error
... ect
I try to use .meas with WHEN condition but it doesn't work :
.meas TRAN Ich_min WHEN I(Rsh)=V(vs_aop)>15
AI: If only when is used then the implied form is find time when, or find freq when, which means the result is time or freq. You need to write the full syntax and explicitly choose the desired quantity to measure, here I(Rsh):
.meas Ich_min find I(Rsh) when V(Vs_aop)=15
It looks like you're using a .step card, so there's no need to use Ichmin1, Ich_min2, Ich_min3, .... Here's a quick test:
|
H: Generate 10MHz clock in Artix-7 FPGA series
This is a question from a FPGA newbie. I have a simple Verilog for a counter and I would like to generate a clock for it.
Can I generate a 10MHz clock in FPGA without an external clock source?
How can I generate it?
PLL/MMCM seem to work only with external clock sources.
Thanks in advance
Jorge Johanny
AI: It's all a matter of clock precision. If you can afford a +/- 50% (500000 ppm) frequency tolerance on your 10 MHz clock, then yes, this is doable without relying on any external clock.
There is a primitive in 7-series, called STARTUPE2, that outputs the internal oscillator (the internal clock used by internal configuration logic in master modes). For Artix-7, this clock is specified in datasheet (DS181, table 66) as being 65 MHz Typical, with +/- 50% tolerance (see FCFGMCLK, FCFGMCLKTOL).
You can pipe it to an MMCM or PLL and get an output of 10 MHz typical, but the ±50% tolerance won't change.
|
H: Small signal model of current mirror
I'm trying to analyze the small-signal model of the following snippet of a circuit (all the transistors are the same, with beta = 100 and VA = 100):
What I've come up with is:
simulate this circuit – Schematic created using CircuitLab
when the rest of the circuit is located 'above' the left part. I want to analyze the net resistance that the rest of the circuit 'sees', but I got stuck.
I can't figure out how to account for the current sources.
The rightmost current source, is just a resistor of 1/gm (I think), but what about the left one?
AI: The small-signal model will look like this:
simulate this circuit – Schematic created using CircuitLab
Notice that:
\$R_{OUT} = \frac{V_X}{I_X}\$
Also, notice that \$V_{BE1} = V_{BE2} = 0V\$ thus, we can see that \$g_{m1}\times V_{BE1} = 0A\$.
So we are only left with \$r_{o1}\$.
Therefore
\$R_{OUT} = \frac{V_X}{I_X} = r_{o1}\$
Do you see it?
|
H: Voltage divider, linear regulator, other options to power proportional HV DC-DC converter?
I have a EMCO AG15P-5 DC-DC converter (datasheet) that takes 0-5V in and outputs up to 1.5kV (proportional to input voltage). I need to operate the converter at 500V output. I've done testing with a precision voltage source and determined that an input voltage of 1.14V gives 500V output. Under full load, the converter uses ~20mA, and this value would be constant in my application. Given the converter's proportional output, the input voltage needs to remain as stable as possible. Moreover, the precision of the voltage source would ideally keep it steady in the 10s of millivolts. So now I need to figure out how to provide stable 1.14V @ 20mA to power this sucker.
My first thought was to use an LM317 with a divider network. I think this won't work considering the 1.25V Vref of the LM317. I've searched Digi-Key for regulators and found this regulator, but I am not sure if buck regulators are the best for this, considering stability.
AI: You should consider using this converter inside of a closed-loop controller. The output voltage could be precision divided down to produce (say) 1 volt out per kV in and you use an op-amp controller to compare that potted down voltage with a precision 1.5 volt reference. This will ensure that if anything drifts inside the XP device, you have a closed-loop mechanism that keeps the output voltage stable (even under varying load conditions).
From the data sheet: -
A separate high impedance control pin is standard and is designed for
external error amplifier and/or DAC control in closed or open loop
systems
Control Voltage Input Analog Control Voltage adjusts output from 0 to
100%
Thanks to @soup for finding a decent article on controlling the device: -
Make sure you choose an output resistor divider that doesn't get too warm. Maybe 10 MΩ for the upper resistor (rated at the voltage) and the appropriate low kohm resistor for the lower part of the divider. This network will also discharge the output to safe levels when turned off rather than leaving a high voltage present that could be a nasty surprise for someone.
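As a quick sizing sketch for that divider, assuming the 1 V-per-kV scaling and the 10 MΩ upper resistor suggested above:

# Divider sizing for the feedback network suggested above:
# scale the HV output so 1 kV maps to 1 V, using a 10 Mohm upper resistor.
V_out = 500.0       # operating point from the question
R_top = 10e6        # suggested upper resistor (must be rated for the voltage)
scale = 1000.0      # 1 V of feedback per 1 kV of output

R_bot = R_top / (scale - 1)          # ~10.01 kohm
I_div = V_out / (R_top + R_bot)      # bleed current through the divider
P_top = I_div**2 * R_top             # dissipation in the upper resistor

print(f"R_bot = {R_bot/1e3:.2f} kohm, divider current = {I_div*1e6:.1f} uA, P_top = {P_top*1e3:.1f} mW")

At 500 V the upper resistor only dissipates about 25 mW, so heating is not an issue; the voltage rating is the constraint.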
Be aware that the device in your question can only produce 100 volts on the HV output: -
Maybe you meant the AG10P-5?
|
H: Can I multiplex only VTREF for multiple SWD programmations?
We would like to program several (6) PCB successively, each one working with a Nordic nRF52 µC. The idea is to have only one J-Link with the SWD lines (SWDIO, SWCLK and VTREF) common to my six boards.
I have seen some people multiplexing SWDIO, SWCLK and VTREF (for example here: https://devzone.nordicsemi.com/f/nordic-q-a/35094/what-is-the-swd-driver-impedance-of-swdio-line -- you need to scroll a bit) but I was wondering if only multiplexing VTREF could be enough. I mean: if the VTREF pin is used as a supply pin (by providing 5V from the J-Link and converting it into 3.0V through an LDO), can the SWD work correctly if only one board is supplied while the others are not?
I do not see any problem to that solution but maybe I have missed something (pullup/down resistors causing problems, etc.).
Best regards,
Nil
AI: You probably know a MCU digital IO pin features ESD protection diodes that typically look like this:
(from IC protection diodes confusion)
If VDD is actually floating and there is 3.3 V on an IO, VDD will be supplied through D1 and you'll get 3.3 V minus the diode's forward voltage on VDD (i.e. ~2.8 V). The decoupling capacitors of the target chip will get charged through the IC pins. If the chip's consumption is low enough and VDD is in range, it will run properly.
In your situation, all MCUs will be connected on the SWD lines. All of them will get powered through protection diodes, they will most probably wake up, and all of them will conflict on the SWD lines. Expect bad results.
|
H: OpAmp filter drive capabilities
My application requires a high input impedance. Therefore I am using 2 x TLV313 op-amps as unity-gain buffers for my ADS1296. They also provide an adequate common-mode output voltage (2.5 V) for my ADC. My signal of interest is 0 - 10 kHz. The ADC's modulator frequency (as I understand it, the actual sampling frequency) is, depending on the mode, 512 kHz or 256 kHz, which puts its Nyquist frequency at 256 kHz or 128 kHz. At and above the Nyquist frequency I would want significant attenuation of my input signal, so that components above that frequency do not fold back into my spectrum of interest.
However, I cannot build a reasonable (RC) low-pass at the input of the TLV313, because the capacitor to ground would decrease my input impedance, which I need. For example, an RC low-pass with a cutoff frequency of 25 kHz, in combination with the 10 kohm series input resistance recommended for the TLV313 (to decrease input current at a reasonable noise level; the TLV313 itself only has an input capacitance of 1 pF), would already require a capacitance to GND of about 630 pF, strongly decreasing my input impedance. I am willing to build a strong filter (e.g., third-order Chebyshev) at my TLV313 output stage. However, I only find that the TLV313 can drive up to 1 nF of a purely capacitive load (TLV313 OpAmps, page 17). How do I know if this works with a more complicated filter?
Am I overcomplicating things? I thought this is not something special you would want to do.
What is the usual/good practice to employ a strong filter after unity-gain buffers?
How do I know if my TLV313 will function with a complicated mix of R-L-C in case I use a higher order (Chebyshev) filter?
How far can I trust the LTSpice simulations? If it shows well behavior, can I take it for granted to some degree?
In the TLV313 datasheet the test conditions always relate to "R_L connected to Vs/2". What is meant by that? How can I imagine this connection?
What are the resistance limits for my TLV output/my ADC input? I read with another ADC from Analog that the series resistance towards the ADC cannot be larger than 2 kOhms, for the ADC to be able to buffer / get enough current. For this ADC I don't find this information.
Any input is highly appreciated!
AI: I am willing to build a strong filter (e.g., third order Chebyshev) at
my TLV313 output stage. However, I only find that the TLV313 can drive
up to 1 nF of a purely capacitive load (TLV313 OpAmps, page 17). How
do I know if this works with a more complicated filter?
Yes, I believe you are over-complicating things.
The anti-alias filter capacitor would be preceded by a resistor of several kΩ as you analysed. You can usually get a good feel for how low this resistance can be here in the data sheet: -
So, if you ensure you have at least 2 kΩ in series with your reactive components then it should not be a problem. However, the data sheet does suggest you can go much lower than this: -
In the TLV313 datasheet the test conditions always relate to "R_L
connected to Vs/2". What is meant by that? How can I imagine this
connection?
Imagine an op-amp powered by split + and - rails. RL would connect to the midpoint i.e. to 0 volts or GND. On a single rail application "10k to Vcc/2" implies 20k to Vcc and 20 k to GND.
What is the usual/good practice to employ a strong filter after unity-gain buffers?
That entirely depends on the filter. Answering this generally would require dozens of pages of work so, to avoid that, indicate what filter you are considering.
How far can I trust the LTSpice simulations? If it shows well
behaviour, can I take it for granted to some degree?
If a sim tells you something won't work then, it probably won't. If it tells you something does work then it might work providing you have done your homework, read all the applicable data sheets and not gone crazy in circuit ideas unless you are a proficient designer.
|
H: How do you bandlimit a continuous-time signal?
I wish to avoid aliasing as a result of taking the FFT of a signal. The signal isn't band limited. My understanding is that I should band limit the continuous-time signal first. How do I do this?
EDIT:
I am simulating a continuous time signal of the form \$s(t) = \sum_{p} t/\tau_{p}\$ from knowledge of the constants \$\tau_{p}\$. I then wish to see what that looks like in the frequency domain, but when I take the FFT it does not appear band limited. (Happy to add whatever additional info might be relevant.)
AI: Typically this is done with an antialiasing filter on the analog input.
This will typically be a low pass filter, although some may be integrated with a high pass at a much lower frequency for other purposes. Alternatively, it can be a bandpass filter if you're performing demodulation or intentional aliasing as a step in the sampling process.
Depending on how much aliasing you want to avoid, you can put the corner(s) of the filter at or short of the edge of the desired band--it depends on the balance of your concern for the integrity of the information at the band's edge and your aversion to aliasing. Standard practice is to set your Nyquist rate somewhat higher (10-20% typically) than your band of interest, place the corner of your antialiasing filter at the edge of the band of interest, and then digitally filter out the unnecessary content near the band edge.
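Since the signal in the question is simulated rather than sampled by real hardware, the same idea can be applied digitally: low-pass filter at the (much higher) simulation rate before any decimation or FFT. A minimal sketch follows; the signal, rates and filter order are arbitrary placeholders, not values from the question:

import numpy as np
from scipy.signal import butter, filtfilt

# Band-limit a densely simulated "continuous" signal digitally before
# decimating and taking the FFT, mimicking an analog antialiasing filter.
fs_sim = 1e6                                 # simulation rate, well above the band of interest
t = np.arange(0, 0.1, 1 / fs_sim)
x = np.sign(np.sin(2 * np.pi * 1e3 * t))     # example wideband signal (square wave)

fs_out = 20e3                                # target sample rate after decimation
b, a = butter(4, 0.4 * fs_out / (fs_sim / 2))   # corner safely below the new Nyquist
x_bl = filtfilt(b, a, x)                     # zero-phase low-pass = band-limiting step

dec = int(fs_sim / fs_out)
spectrum = np.fft.rfft(x_bl[::dec])          # FFT of the band-limited, decimated signal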
|
H: Is this a correct FSM graph?
This question is sourced from H. Roth's book 'Fundamentals of Logic Design.' It pertains to a sequential circuit featuring a single input (X) and one output (Z). The circuit's function entails examining sets of four consecutive inputs, with the output Z being equal to 1 when the input sequence 0101 or 1001 is detected. Notably, the circuit undergoes a reset operation after processing every four inputs. The task at hand involves the determination of the Mealy state graph for this specific circuit.
Is this a correct path (highlighted in pink color)?
AI: Yes, the pink highlighted path is valid.
Since the FSM resets every 4 inputs, an input sequence which begins with 1-1 does not match your required input sequences and will keep the output as 0. Once you enter state \$S_5\$, you guarantee the output remains 0.
|
H: How to choose a phototransistor or photodiode?
I've found a photoresistor (yes, resistor, probably 30 years
old) in my scrap box, attached it to an ATtiny as ambient light
sensor, with a 100kΩ resistor as voltage divider towards +5V.
On power on, the ATtiny measures the ambient light using its ADC
capability, and uses that value as switching threshold later on.
That works extremely well, slight differences in dark ambient
conditions can be resolved, and give consistently reproducible
results as twilight switch. Looks like my choice of the 100kΩ
resistor was a lucky find, my goal was to use as much as possible of
the 0–5V voltage range for the ATtiny's 10-bit ADC.
I'm so happy with the result, I wanted to get a couple more of these
photoresistors. Turns out: I cannot buy them anymore, at least
not easily:
The use of CdS and CdSe photoresistors is severely restricted
in Europe due to the RoHS ban on cadmium.
They seem to have fallen from favour, and been replaced
by phototransistors and photodiodes.
What should I use to replace the photoresistor? A photodiode, or a
phototransistor? What details are important to make a good choice
for the described use case?
The device is not required to act particularly fast. It is more
important to me
to be able to discern slight variations in dark conditions,
have a wiring of similar simplicity (I think I do understand
voltage dividers, and I'd rather not have to use more components),
work in a 3.3—5.5V setting, which would allow me to power from
USB, 3×1.5V AA or 4×1.2V AAA batteries,
and the resulting circuit should draw little current, it's
intended to last for days, up to weeks.
I lack the knowledge to draw a conclusion from looking at data sheets,
(e.g., this one, found at Reichelt). I assume the sensor
should be sensitive to wavelengths in the range of 450–650nm, which
rules out all devices labelled “IR”. But I don't know what to look for
exactly.
I have no idea how to interpret the irradiance values. I see the
number and unit, but I cannot estimate the ballpark of my use case. I
don't own equipment to measure the lighting conditions at the site of
intended use.
Also for the electrical characteristics: Assuming the 100kΩ resistor
I'm now using for the voltage divider, I'd assume 50μA current
maximum. Is that the I_c to look for? What role do the other values
play?
AI: Never use a phototransistor for accurate light readings as the hFE range spans many octaves. (unless you calibrate hFE and compensate for temp.)
The keyword you need is:
Ambient Light Sensor (ALS)
IR photodiodes usually cover the visible range too, unless they are filtered for IR remote-control use; their response extends from IR through visible light, falling off towards UV.
Photodiodes are practical, but consider the ones designed for camera photometers. Panasonic made the best one, the AM302, but it is now obsolete due to low camera demand. It spanned over 3 decades of light level with a log output, and many more decades with different fixed resistor loads.
These ALS parts are a good selection in stock at the time of this writing and are low cost. Some are eye-corrected for wavelengths.
https://www.mouser.ca/c/optoelectronics/optical-detectors-and-sensors/ambient-light-sensors/?product=Ambient%20Light%20Sensors&instock=y&normallystocked=y&sort=pricing
TEPT5700 ambient light sensor is a silicon NPN epitaxial
planar phototransistor in a T-1¾ package. It is sensitive to
visible light much like the human eye and has peak
sensitivity at 570 nm.
These are much more consistent than CdS photoresistors and will do what you expect, at about 10 µA in dim (10 lux) light.
Use 250 kΩ from 5 V to get 2.5 V at 10 µA.
https://www.mouser.ca/datasheet/2/427/tept5700-1766842.pdf
Check the current at your expected light level (around 10 lux of green light, ~550 nm) to calibrate the series resistor for a drop of roughly 50% of the supply. If you edit my first link and change "ca" to "de" (or your country code) it will show local stock sorted by lowest cost. You can also add a capacitor to filter noise and brief sensor blocking (time constant T = RC).
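A quick sizing sketch based on the figures above (the 10 µA at 10 lux figure is approximate; check the datasheet curve for your exact part and binning):

# Sizing the load resistor for a TEPT5700-style sensor (figures from the answer above).
V_supply   = 5.0
I_at_10lux = 10e-6     # collector current around 10 lux (see datasheet curves)
V_target   = 2.5       # aim for mid-scale at that light level

R_load = V_target / I_at_10lux
adc_code = int(V_target / V_supply * 1023)   # 10-bit ATtiny ADC reading at that point

print(f"R_load ~ {R_load/1e3:.0f} kohm, ADC reads ~ {adc_code} of 1023")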
|
H: Designing a second order HPF with a high gain common-source
I have a High gain (A = 22k) Common-Source circuit, and I was wondering If I can make a second order HPF like this circuit:
My circuit is just an NMOS with an input at the gate, output at the Drain, and I'm assuming it has 22k gain, (So approximately infinity)
is a second-order-HPF possible in some way with a regular High gain Common-source? I can't figure out the connections
My theoretical circuit (5V and not 1V VDD, sorry):
thank you!
AI: There is no reason why you can't design a reasonable 2nd order HPF with 1 Nch FET.
Unreasonable might be:
gain > 10k
resistors < 10·Rd (as the driving source impedance must be low for multiple-feedback filters) or > 33 MΩ
caps too big, or too small (≤ Cin)
frequency too high for the amplifier
not properly DC biased just above Vt.
Here's a quick-and-dirty ~750 kHz HPF with a gain of ~280 at 1 MHz,
shown with a unidirectional log sweep from 10 kHz to 5 MHz in 3 ms. gm = 43, Zin = 200 ohms.
Adjust the Rg ratio so that Vd sits at roughly 50% of Vdd, and use a low-impedance or buffered source (50 ohms or less).
|
H: How can I find what the sample rate is if I measure it continuously for 100 times?
I am using Adafruit ItsyBitsy, and an external ADS1115 ADC module.
I am continuously measuring the voltage using the ADS1115 in a loop, 100 times.
for i in range(0, 100):
    analog_in = chan.voltage
    analogfinal += analog_in
How can I know what the sampling rate of the loop is?
The ADS1115 can measure up to 860 samples per second and I2C @ 100kHz.
Why is it taking about a second to output the result even though ADS1115 can sample up to 860 per second?
AI: You don't mention any other settings you've made using the Adafruit ADS1x15 CircuitPython library, so I expect that means you are using the standard values.
The library uses a 128 Hz sampling rate and single mode in the default settings.
"Single mode" means that it requests one sample from the ADC each time you call "chan.voltage." Your code has to sit and wait for one sample to be made.
128 Hz means you get 128 samples per second. You are reading 100 samples in a loop, so it will take "about one second" to read all the values.
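If you want to confirm this and speed things up, you can time the loop and raise the data rate or switch to continuous mode. The sketch below assumes the standard Adafruit ADS1x15 CircuitPython API (ads1115, AnalogIn, data_rate, mode attributes) as I recall it; check the library documentation for your version:

import time
import board, busio
import adafruit_ads1x15.ads1115 as ADS
from adafruit_ads1x15.analog_in import AnalogIn
from adafruit_ads1x15.ads1x15 import Mode

i2c = busio.I2C(board.SCL, board.SDA)
ads = ADS.ADS1115(i2c)
chan = AnalogIn(ads, ADS.P0)

ads.data_rate = 860          # fastest ADS1115 rate instead of the 128 SPS default
ads.mode = Mode.CONTINUOUS   # free-running conversions: reads no longer block per sample

start = time.monotonic()
total = 0.0
N = 100
for _ in range(N):
    total += chan.voltage
elapsed = time.monotonic() - start
print("effective rate:", N / elapsed, "samples/s")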
|
H: Negative resistance - crystal
Why do we have the negative resistance concept in a crystal?
How does the crystal exhibit negative resistance?
I've read this Link and it seems to say that the amplifier is the one that causes the negative resistance due to positive feedback? Am I understanding it correctly?
AI: Why do we have the negative resistance concept in a crystal?
We don't as far as I know.
How does the crystal exhibit negative resistance?
It doesn't as far as I know.
Talking about oscillators (an amplifier with appropriate phase-filter components) having a negative resistance is just an alternative (and somewhat overly-trendy) way of describing how an oscillator works. I don't particularly like this way of describing how an oscillator works so I would advise you to look at the crystal oscillator as per your recent questions on the subject that I have answered. Negative resistance doesn't bring anything new to the table and it somewhat dilutes what the main point about an oscillator is (IMHO).
Am I understanding it correctly?
Impossible to say.
|
H: Closed Loop Transfer Function For Negative Feedback Circuit
I have the following circuit diagram
I understand the open loop circuit (H1H2H3H4) and I also understand the negative feedbacks H1H2H6 , H3H4H5 and H2H3H7, but I can't figure it out why the positive feedback H1H2H3H4H5H6 takes part in the closed loop response.
Can someone explain me this? Also, are there any rules or algorithms that would help me simplify this diagram?
AI: Hint
Do you see what I've done here: -
I've moved H7's input to H4's output and, in doing so, I have to reduce the input to H7 by H4 to make things equal. This then allows the function inside the red box to be simplified: -
Step and repeat in several other areas and it should all easily resolve into a final formula.
Can someone explain me this? Also, are there any rules or algorithms
that would help me simplify this diagram?
Redraw, redraw and redraw until you get one block.
Moving the output of H7(s)/H4(s) to the left-most input: -
If you followed how I got to here, the rest is trivial.
|
H: Can I power my microcontroller system directly from a 3V lithium coincell (transient consideration)?
My device has to be powered by a user-replaceable 2032 coin cell.
I have a ~3.3V microcontroller like nrf52832. Its datasheet says:
So lithium coincell voltage range (3.3V - 0V) is in operating conditions (microcontroller has brown out detector). I would like to power the microcontroller directly, straight from the battery.
But is there a possibility that ESD or hotplugging transients could occur during battery replacement and harm the microcontroller?
Even if I add an ESD suppression diode across the battery contacts, isn't there too little reserve in the microcontroller's VDD range?
There's a lithium ion battery in 2032 size on the market: https://www.amazon.com/LIR2032-Rechargeable-Li-ion-Button-Battery/dp/B074CV44LC
Should I consider that someone might try to power our circuits with this?
AI: The nominal voltage is perfectly in spec, as pointed out by Autistic.
On the other hand, ESD and hotplugging events are two different things.
An ESD event can expose the circuit to thousands of volts, with relatively little pulse energy.
But in a hotplugging event, you deal with well-known voltage, that can be out of range and has to be considered as a steady overvoltage (the duration of someone plugging something in can vary quite much and is much longer than a spark caused by ESD). One of the first things that will happen when someone hotswaps a device that is not designed for hotswapping is that the ESD protection diodes get fried ;)
But in my opinion, it is highly unlikely that you get problems with either of those effects:
Hotplugging in general is only a problem when there are more than two contacts involved, because the order in which they make contact is what matters. So there is no "hotplugging" when talking about inserting a coin cell - just normal "plugging" of plus and minus.
ESD should not be that much of an issue for supply terminals, which typically have quite a large capacitance that can absorb charge pulses to some extent. Most microcontrollers have rudimentary ESD protection included. It's up to you to add this extra safety, but if it's some kind of low-cost and not safety-critical device, I personally wouldn't bother.
|
H: Transistor C114 markings
What are the letters below the marking C114 on the package of the transistors?
For example one can find
\$E.SB, \quad E.S\bar C, \quad T.S\bar B\$
Are those transistors all equivalent?
Is this type of marking standard among other components? Any advice on where to gather information on this topic?
AI: Grab a data sheet and look up the markings: -
I'm sure if you need to understand the markings after the letter "E" (the code) either you'll find it in the DS or you can contact the supplier.
|
H: How to print uvm_tlm_analysis_fifo properties with `uvm_info() in UVM?
I'm stuck on printing the properties of a uvm_tlm_analysis_fifo handle with `uvm_info().
I made a simple sequence item as below.
class simple_sequence_item extends uvm_sequence_item;
rand bit[9:0] address;
rand bit[31:0] data;
rand bit wr_en;
bit acc;
function new(string name="simple_sequence_item");
super.new(name);
endfunction
`uvm_object_utils_begin(simple_sequence_item)
`uvm_field_int(address,UVM_ALL_ON)
`uvm_field_int(data,UVM_ALL_ON)
`uvm_field_int(wr_en,UVM_ALL_ON)
`uvm_field_int(acc,UVM_ALL_ON)
`uvm_object_utils_end
constraint c_sequence_item { data<'d40;
data>'d1;
}
constraint c_address{ address<'d500;
address>'d0;
}
constraint c_wr_en{
wr_en=='d1;
}
endclass
I have implemented the scoreboard with a declaration of simple_sequence_item as below:
class apb_scoreboard extends uvm_scoreboard;
simple_sequence_item seq_item;
`uvm_component_utils(simple_scoreboard)
uvm_tlm_analysis_fifo#(simple_sequence_item) fifo_exp;
...
virtual task run_phase( uvm_phase phase);
fifo_exp.get(seq_item);
`uvm_info(get_type_name(), $sformat("fifo_get seq_item from mon in Scoreboard : \n Address=%02h\n data=%02h\n wr_en=%02h\n acc=%02h\n", seq_item.address, seq_item.data, seq_item.wr_en, seq_item.acc, ), UVM_LOW)
endtask
But the problem is that I got the "System task was invoked like a function (it has no return value)" error message during compilation.
xmelab: *W,DSEMEL: This SystemVerilog design will be simulated as per IEEE 1800-2009 SystemVerilog simulation semantics. Use -disable_sem2009 option for turnin
seq_item.address, seq_item.data, seq_item.wr_en, seq_item.acc ), UVM_LOW)
|
*E,NOTSYF (./apb_scoreboard.sv,46|105): System task was invoked like a function (it has no return value) [2.7.4(IEEE Std 1364-2001)].
How to print properties of uvm_tlm_analysis_fifo handle with uvm_info()?
AI: Change:
$sformat
to:
$sformatf
Note the f at the end. Your code uses the system task $sformat, which does not return a string. You need the system function $sformatf, which does return a string. This is what the error message is referring to.
Refer to IEEE Std 1800-2017, section 21.3.3 Formatting data to a string for complete details about the syntax.
|
H: Unexpected impedance spike when paralleling capacitors
I was watching a video from EEVBLOG about bypass capacitors, and he presented a theory that randomly connecting different values of capacitors in parallel can create unexpected impedance spikes:
To inspect the picture, right click and open in new tab, the scales are then visible. Regardless: The frequency scale is logarithmic, impedance scale is linear. Both graphs have 100kHz-40MHz frequency range, and the spike on the left side is located at 8Mhz point, reaching ~800mOhm impedance.
He did not explain why the spike is there. Thinking about it some more, I could not come up with an explanation other than bad test setup. Resistance and inductance go down as more values are added in parallel. Although total capacitance goes up, which would lower the total resonance frequency, 110nF in relation to 10uF cannot cause such a drastic shift.
This was his test setup:
I assume that the solder blobs between capacitors introduced a series inductance that in turn caused the spike in impedance at 8MHz.
Could that be the case, or is there something else that could cause the spike?
AI: This is called "antiresonance" and it is an important part of designing in the decoupling caps for your chips.
The caps themselves have inductance and resistance. In addition, the PCB, adds inductance depending on how they are mounted, if they are connected with traces, vias, planes, etc. I eyeballed the mounting inductance and used some typical ESR/ESL values, and I get an impedance peak around the same frequency. The plot on the left shows the impedance of both caps in parallel.
To calculate this...
Notice the two caps and their L and R form a series RLC network. I simplified it in the above schematic. First, all the inductances are in series, so I added them into one single inductor (left). Then, all the resistances are in series, so I added them into a single resistor (middle). And the two caps appear in series, so I calculate the resulting capacitance (right). So we get a series RLC network with 99nF, 55mOhm, and 6 nH. The resonance frequency is \$ \frac{1}{2 \pi \sqrt{ LC }} \$ which is about 6.5MHz. That corresponds to the peak in the impedance plot.
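A quick check of that arithmetic (values taken from the paragraph above):

import math

# Recompute the antiresonance peak from the values quoted above.
C1, C2 = 100e-9, 10e-6
L_total = 6e-9                      # total loop inductance (ESL + mounting)
C_series = C1 * C2 / (C1 + C2)      # the two caps appear in series for the loop current

f_peak = 1 / (2 * math.pi * math.sqrt(L_total * C_series))
print(f"series C = {C_series*1e9:.0f} nF, antiresonance at {f_peak/1e6:.1f} MHz")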
If the two caps have the same value, then the impedance peak disappears (right schematic, red trace)... but only if one forgets to model the inductance of the trace connecting them!
So, to summarize:
If you use close coupled power and ground planes, you can parallel capacitors with very low inductance between them. Then, you can use lots of capacitors (and lots of work) to design a flat impedance power rail. A smaller version of this is a power island under a chip, coupled to the ground plane, with a bunch of caps on it. The key feature is low inductance between the caps, which allows wiring them in parallel without too much trouble. But close coupled planes are expensive, because you need 6 layers. On a 4 layer board, you can get good coupling between the outer layer pairs because they're only 0.2mm from each other, so it works very well to make small power islands under the chips.
When using planes, since the inductance between caps is very low, you can use larger caps, with larger package, and therefore larger inductance for low frequency decoupling, and smaller caps with smaller packages and therefore lower inductance for high frequency decoupling. Basically, the low inductance of the plane doesn't add too much to the low inductance of a smaller cap.
However, if you use a 2 layer board, or a 4 layer but without close coupled planes, then paralleling capacitors can become a bit hazardous because the traces connecting them together will add enough inductance to create antiresonance peaks. Sometimes they are surprisingly high and introduce a lot of noise into your power rail.
In this case, lower value caps like 100nF work pretty well because their built-in ESR is a bit high, which dampens the resonance. If you make a board with a microcontroller and a bunch of logic chips, put in a good ground plane, route power with traces, and place 100nF on every power pin... it's quite bulletproof.
However you might find that adding one 10µF ceramic cap actually increases the noise, because these have very low ESR, so they make higher Q peaks. Sometimes, an electrolytic with a bit of ESR results in more damping, therefore less noise.
In addition, if both caps are the same size, like 0805, they will have the same inductance. As far as MLCCs are concerned, inductance depends pretty much only on size. So if you see a 100nF and a 10µF next to each other, both in the same 0805 package... then the 100nF isn't actually doing anything useful.
|
H: Why does this JFET buffer perform so poorly at 82 MHz?
I made a Colpitts oscillator that worked well first time. Frequency about 82 MHz.
(NB: corrected diagram, adding a ground between the series 68 pF caps and correcting Vout)
I wanted to add a buffer and chose to use a JFET 2N3819 from the circuit described in the excellent book by Martin Hartley Jones A Practical Introduction to Electronic Circuits (page 22).
The buffer worked fine on a 1 MHz signal with high input impedance, low output impedance and a gain of about 7, which suited me fine.
However, at the frequency of my Colpitts oscillator it performed poorly (see scope which has the same scale for the input in blue as the output in yellow) The gain is much less than unity.
The datasheet for the 2N3819 says "This device is designed for RF amplifier and mixer applications operating up to 450MHz" so I'm not sure what I am doing wrong.
Edited: further information added by OP:
After reviewing the helpful comments (and correcting the Colpitts circuit diagram), I positioned a TinySA spectrum analyser and qualitatively measured how loading and not loading the output with the scope probe affected the frequency and amplitude, with the following results:
As expected, there was a noticeable small increase in frequency and amplitude when the scope probe was removed, but not by as much as I feared.
Then I looked at the harmonics up to 350 MHz. The photo speaks for itself
AI: Expanding on Andy's answer: note that analysis of this circuit, in general is kind of dubious. Let me explain in what sense I mean that:
Consider the inductance between most any two points in the circuit. The veroboard layout and THT components give paths of several cm in length. The inductance of free space is \$\mu_0 \approx 1.257 \mu \textrm{H/m}\$. So these lengths correspond to some 10s of nH, give or take a geometric factor due to their cross section and relative placement (i.e.: wide loops of thin conductors, have higher inductance than thin loops of wide conductors). Corresponding to some 5's of ohms at 82MHz. Maybe not a lot versus the ~kohm circuit impedances, but keep in mind these inductances are coupled as well, so where some 10s of mV drop across one, a few mV also appear along neighboring traces. There is no concept of ground in such a circuit, not as we would like to consider it -- everything is floating somewhat relative to everything else, a little bit squishy rather than the hard and solid connections we would like to think we have.
Likewise, your scope probes have considerable effect on the circuit. A typical 10x probe is on the order of (10pF + 500R) || 10M. That is, at DC it looks like 10M, but at very high frequency it's closer to 500 ohms (or maybe low kohms, depends), and inbetween, the equivalent capacitance and resistance is also somewhere inbetween.
So it all looks rather messy. We could run a simulation of the schematic as given, but we're almost certain to get significant differences in waveforms (amplitude, frequency) -- even if accounting for probe impedances. It's not exactly that such a circuit is hopeless -- but it's a lot harder to work with, than it might look at first.
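To put rough numbers on those strays at the frequency in question (a sketch, not a measurement):

import math

# Orders of magnitude at 82 MHz for the strays discussed above.
f = 82e6
w = 2 * math.pi * f

L_stray = 20e-9                      # a few cm of wiring, rough figure
C_probe = 10e-12                     # typical 10x probe tip capacitance

print(f"X_L of 20 nH  : {w * L_stray:.1f} ohm")
print(f"|X_C| of 10 pF: {1 / (w * C_probe):.0f} ohm")

Roughly 10 ohms of series reactance in the "wires" and only ~200 ohms of probe loading on nodes whose impedances are in the kohm range: neither is negligible.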
So the first challenge is just seeing how the system (when built this way) is more complicated than the schematic suggests. The second is finding ways around that.
What should we do to solve this?
Use RF design techniques.
First, build everything on ground plane.
You can get veroboard or pad-per-hole or whatever perf you like, with plane (mesh really, since it's perforated with holes, but it's still better than nothing), and that helps a lot. The wide area conductor forces currents to flow on one side, near the conductors routed over it -- reducing stray inductance, and greatly reducing coupling between traces. This is the first critical step to being able to easily analyze circuits: we can again mostly ignore the coupling between traces.
With a ground plane, we can measure a voltage difference at a point (with respect to a point on that ground plane -- notice we need to maintain good shielding when we make such a measurement -- say by soldering a coax shield directly to the plane), and be reasonably confident that what we measure is what's really going on in circuit, and not an artifact of strays or probing technique.
Second, use lower impedances.
More generally: appreciate that impedances are lower by necessity, and work with that.
In fact, impedances ultimately are proportional to the impedance of free space, Zo ~ 377 ohms. Transmission lines are typically lower than this (again, due to geometry factors -- hence coax being 50/75 ohms, twisted pair 50-150, twin lead 200-600, etc.) There is no such thing as 10kohm transmission line, so we can leave such impedances aside, at least for signal purposes! Now, we can have circuit impedances significantly different from these, either when the transmission line is very short (~cm is still quite short relative to the ~3m wavelength of this frequency, or even the wavelength of most harmonics thereof), or when it's resonating (basically, reflections pile up N times, giving something on the order of Zo*N or Zo/N).
Mind, we don't need to consider transmission lines in a compact circuit like this, we can use their equivalent inductance and capacitance (low frequency / fractional wave approximation) -- but we still must use them if we connect to anything at some distance, i.e. by cable, probe, antenna, etc. (Indeed, the probe loss can be seen as the fact that power flow is required to sense the signal; it can be made arbitrarily small with clever probe design (especially active types), but cannot be made zero.) And since reflections would be a problem, we prefer to use terminated transmission lines, even if we don't need the available signal power, or the low impedance, at the far end (say, where it's going into an oscilloscope's input buffer circuit).
So, for the oscillator itself, I mean, it's running, clearly it's not having too bad of a time. The signal at its collector is probably much less than it could be, with the 1.8k supplying it, but that's not even a big deal, since your JFET will have voltage gain, and a lowish input voltage is probably a good thing overall. (The other way to get a low voltage, of course, would be making the resistance very low, maybe 100 ohms or less. Which might be an excellent idea for coupling directly to a coax cable, for example -- say if the buffer had to be at some distance, for some reason.)
As for the buffer, we have several things to consider:
Input capacitance: what effect does this have on the oscillator (loading)?
Output power (or voltage, or gain): is biasing adequate (smaller drain resistor, or change to an inductor)? Is it well matched to the load? What is the load? (Again, a scope probe is quite a strong load compared to the schematic as shown!)
Feedback: a common-source amplifier exhibits Miller effect, i.e. Cdg is multiplied by voltage gain, which means effectively the input capacitance depends on voltage gain. And output voltage depends on load, so by chain, there will be some sensitivity of the oscillator on the buffer load. Which means it's not really a great "buffer", is it? Now, 2N3819 is a pretty small JFET (low capacitances, modest Rds(on)), so this might not be much effect, but then again if you're looking for a quite stable frequency, maybe this bears more consideration.
The usual way to avoid the last one, is a common-gate or cascode circuit (the latter of course incorporates the former). The source input side has a fairly low impedance (~1/gm), which is a good match to transmission lines (hence a common input circuit for radios) but might not be so desirable here; the drain output is well isolated (Cds is small) so it serves well as a buffer. Noninverting diff pair is also a good candidate: the input side looks like a source follower, thus having mostly Cgd (with no Miller effect), causing little load on the signal source; the output has current and voltage gain.
To say nothing of matching, of course; it could be that you can increase your output power by using a capacitor divider from the oscillator to the JFET, for example, or some manner of LC network; or a matching network or transformer at the output (so the impedance seen by the drain can be fairly high, getting a reasonable voltage swing thus using most of the power available from the transistor), to match to a low impedance load (preferably using coax straight into the scope at this point -- probes aren't great at this frequency), but that's a much more complex (ha) topic to cover in another answer...
Finally, the scope -- the DS1054Z is only 100MHz or so; you're lucky to see any bounce/flattening of that waveform at all! Likely, the real waveform is crunchier than you can see here, but you can only guess at what that might look like. (On the upside: 2N3904 doesn't really go fast enough to do anything crazy, and probably it's some flattening due to saturation, and a bit of ringing due to slamming into saturation every cycle. 2N3819 can oscillate at a bit higher frequencies (>400MHz), but you'd have to be pretty unlucky with layout I think to excite that.)
If you can get a one of those cheap handheld spectrum analyzers, you can get a better idea what's going on with the waveform, by looking at the higher harmonics.
|
H: Reading OpAmp Open-Loop Gain vs Temperature Graph
From the OPA140 datasheet here: https://www.ti.com/lit/gpn/opa2140
How do you read this graph? Because on the Y-axis the units are all weird and the numbers are strangely negative.
The open loop gain is listed as:
AI: Looks like they're stating it as the reciprocal, so 0 would be infinite gain. So for a 10k load and 25 degrees C, the 1/gain is typically about 0.15 µV/V, i.e. a gain of 6.7x10^6.
That’s 136dB which is quite a bit higher than the typical figure of 126dB over a wide common-mode voltage range.
Not sure why they’re stating it as negative, maybe aesthetics so up=positive, either polarity is okay- it depends on which input they are assuming to be connected to the signal input.
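A one-line sanity check of that conversion:

import math

# Converting the graph reading discussed above into a gain figure.
recip = 0.15e-6            # |1/Aol| read off the curve, in V/V (i.e. 0.15 uV/V)
Aol = 1 / recip
print(f"Aol = {Aol:.2e} V/V = {20 * math.log10(Aol):.0f} dB")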
|
H: Trying to ID these FinFETs
I'm trying to ID this FET, but can't seem to find anything.
The logo looks like "FCE"
It says B6066 M300AM
How can I ID these guys in the future?
AI: They are Huashuo Semiconductor HSBB6066 n-channel MOSFETs.
|
H: 4 input multiplexer in VHDL
I am trying to model a 4-input multiplexer in VHDL (using edaplayground.com), building it from 2-input multiplexers. The code for the 2-input multiplexer alone works, but when I add the 4-input multiplexer I get a lot of errors. This example is almost exactly what is in my textbook. Do you see what is wrong here?
library IEEE;
use IEEE.std_logic_1164.all;
entity mux2 is
port(d0, d1: in STD_LOGIC;
s: in STD_LOGIC;
y: out STD_LOGIC);
end;
architecture synth of mux2 is
begin
y<= d1 when s='1' else d0;
end;
entity mux4 is
port(d0,d1,d2,d3: in STD_LOGIC;
s: in STD_LOGIC_VECTOR(1 downto 0);
y: out STD_LOGIC);
end;
architecture struct of mux4 is
component mux2
port(d0, d1: in STD_LOGIC;
s: in STD_LOGIC;
y: out STD_LOGIC);
end component;
signal low, high: STD_LOGIC;
begin
lowmux: mux2 port map(d0, d1, s(0), low);
highmux: mux2 port map(d2, d3, s(0), high);
finalmux: mux2 port map(low, high, s(1), y);
end;
The error messages I get are:
design.vhd:16:26: no declaration for "std_logic"
design.vhd:17:16: no declaration for "std_logic_vector"
design.vhd:18:17: no declaration for "std_logic"
design.vhd:23:25: no declaration for "std_logic"
design.vhd:24:19: no declaration for "std_logic"
design.vhd:25:20: no declaration for "std_logic"
design.vhd:27:23: no declaration for "std_logic"
design.vhd:29:9: for default port binding of component instance "lowmux":
design.vhd:29:9: type of signal interface "d0" declared at line 23:14
design.vhd:29:9: not compatible with type of port "d0" declared at line 5:10
design.vhd:29:9: type of signal interface "d1" declared at line 23:18
design.vhd:29:9: not compatible with type of port "d1" declared at line 5:14
design.vhd:29:9: type of signal interface "s" declared at line 24:13
design.vhd:29:9: not compatible with type of port "s" declared at line 6:10
design.vhd:29:9: type of signal interface "y" declared at line 25:13
design.vhd:29:9: not compatible with type of port "y" declared at line 7:10
design.vhd:30:9: for default port binding of component instance "highmux":
design.vhd:30:9: type of signal interface "d0" declared at line 23:14
design.vhd:30:9: not compatible with type of port "d0" declared at line 5:10
design.vhd:30:9: type of signal interface "d1" declared at line 23:18
design.vhd:30:9: not compatible with type of port "d1" declared at line 5:14
design.vhd:30:9: type of signal interface "s" declared at line 24:13
design.vhd:30:9: not compatible with type of port "s" declared at line 6:10
design.vhd:30:9: type of signal interface "y" declared at line 25:13
design.vhd:30:9: not compatible with type of port "y" declared at line 7:10
design.vhd:31:9: for default port binding of component instance "finalmux":
design.vhd:31:9: type of signal interface "d0" declared at line 23:14
design.vhd:31:9: not compatible with type of port "d0" declared at line 5:10
design.vhd:31:9: type of signal interface "d1" declared at line 23:18
design.vhd:31:9: not compatible with type of port "d1" declared at line 5:14
design.vhd:31:9: type of signal interface "s" declared at line 24:13
design.vhd:31:9: not compatible with type of port "s" declared at line 6:10
design.vhd:31:9: type of signal interface "y" declared at line 25:13
design.vhd:31:9: not compatible with type of port "y" declared at line 7:10
The link to edaplayground is:
https://edaplayground.com/x/w3By
AI: Repeat the lines:
library IEEE;
use IEEE.std_logic_1164.all;
immediately before the second entity declaration (entity mux4 is ...). In VHDL, a library/use clause only applies to the design unit that directly follows it, so the std_logic_1164 context you declared for mux2 is not visible to mux4 - which is exactly what the "no declaration for std_logic" errors are telling you. With the two lines repeated before mux4, the design elaborates 'mux4' without errors.
|
H: Can anybody explain these examples?
I am just getting started with electronics coming from a web dev background so this is a beginner question.
I am following this tutorial and I'm confused about this part! The paragraph under step 4 has me very confused so I have a few questions:
Sentence that confuses me: 10 times as much current flows through the I2 branch as the I1 branch
My question: According to the image above the paragraph, 90mA is running through I1, and only 9mA is running through I2. Shouldn't 10 times as much current be flowing through I1 than on I2?
Sentence that confuses me: the ratio of the two resistors R1 to R2—R2 is 10 times larger than R1
My question: What exactly does it mean by the ratio of R1 to R2—R2, would that mean that R1:0 is 10 times greater than R1? — doesn't seem to be a minus sign, if it's not then what is it?
AI: Question: "According to the image above the paragraph, 90mA is running through I1, and only 9mA is running through I2. Shouldn't 10 times as much current be flowing through I1 than on I2?" Yes.
I think it's pretty clear there are typos or editing mistakes here. Not only is \$I_{1}\$ (\$90mA\$) 10 times greater than \$I_{2}\$ (\$9mA\$), but just looking at the equations to the right, the two branches cannot both use \$R_{1}\$ in the denominator and still give different results!
Question: "What exactly does it mean by the ratio of R1 to R2—R2, would that mean that R1:0 is 10 times greater than R1? — doesn't seem to be a minus sign, if it's not then what is it?"
\$R_{2}\$ is 10 times greater than \$R_{1}\$, so the ratio \$ \frac{R_{2}}{R_{1}}\$ is 10. A resistance 10 times higher means we expect 10 times less current in the \$R_{2}\$ branch than in the \$R_{1}\$ branch; or, put the other way around, 10 times more current flows in the \$R_{1}\$ branch than in the \$R_{2}\$ branch. That matches what we would expect from a proper use of Ohm's law.
Alternatively, \$I_{1} = \frac{V_{A}}{R_{1}}\$ and
\$I_{2} = \frac{V_{A}}{R_{2}}\$. Setting both sides equal to \$V_{A}\$ leads to \$ \frac{R_1}{R_2} = \frac{I_2}{I_1}\$ so \$\frac{1}{10} = \frac{9mA}{90mA}\$.
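A quick numeric restatement of that in Python; the node voltage and resistor values are assumed only to reproduce the tutorial's 90 mA and 9 mA figures (they are not given in the text, and only the 10:1 ratio matters for the argument):
V_A = 9.0                 # assumed node voltage [V]
R1, R2 = 100.0, 1000.0    # assumed values with R2 = 10 * R1
I1 = V_A / R1             # 0.090 A = 90 mA
I2 = V_A / R2             # 0.009 A = 9 mA
print(I1 / I2)            # 10.0 -> ten times more current in the R1 branch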
|
H: How to analyze this RC circuit without finding Thévenin's equivalent circuit?
simulate this circuit – Schematic created using CircuitLab
I want to find the capacitor's voltage equation in this circuit (\$v_{C}\$) knowing that the capacitor is initially uncharged. I know that analyzing the Thévenin equivalent for this circuit yields the equation:
$$v_{C}=6(1-e^{-\cfrac{t}{3\times 10^{-3}}})$$
where \$\small\tau=3\times 10^{-3}~\mathrm{s}\$ and \$\small V_{Th}=6~\mathrm{V}\$.
But analyzing this circuit in its current form:
Applying KCL at node \$a\$ gives:
$$i_{R1} = i_{R2} + i_{C}$$
$$\frac{v_{R1}}{2000} = \frac{v_{R2}}{2000} + 10^{\displaystyle -6}\frac{dv}{dt}$$
Using KVL: $$v_{R2}=v_{R3}+v_{C}$$ and $$v_{R1}=E-v_{R2}$$
Substituting: $$\cfrac{E-2(v_{R3}+v_{C})}{2\times 10^{-3}}=\cfrac{dv}{dt}$$
$$\frac{6-v_{R3}-v_{C}}{10^{-3}} = \frac{dv}{dt}$$
$$\frac{dt}{10^{-3}}=\frac{dv}{6-v_{R3}-v_{C}}$$
Integrating: $$\int_{0}^{t}\frac{dt}{10^{-3}}=\int_{V_{0}}^{v(t)}\frac{dv}{6-v_{R3}-v_{C}}$$
$$\frac{t}{10^{-3}} = -\ln{(\frac{v(t)+v_{R3}-6}{V_{0}+v_{R3}-6})}$$
But \$V_{0}=0\$ and after isolating v(t):
$$v(t) = (v_{R3}-6)[\exp({\displaystyle \frac{t}{10^{-3}}})-1]$$
I am stuck now and I can't find a way to get rid of \$v_{R3}\$, perhaps my approach is wrong from the beginning, but I hope someone can help me identify what's wrong about my analysis.
AI: KCL
Here's your re-drawn schematic (I'm in the practice of re-drawing schematics as a rule):
simulate this circuit – Schematic created using CircuitLab
My KCL sets up like this (treating your bottom node as ground):
$$\begin{align*}
\begin{array}{rccc}
{\text{KCL for node }V_a:}\vphantom{\frac{E}{R_1}+\frac{ v_c}{R_3}}\\\\
{\text{KCL for node }V_c:}\vphantom{\frac{v_c}{R_3}+C\,\frac{\text{d}}{\text{d}t}v_c}
\end{array}
&&
\overbrace{
\begin{array}{r}
\frac{v_a}{R_1}+\frac{v_a}{R_2}+\frac{v_a}{R_3}\\\\
\frac{v_c}{R_3}+C\,\frac{\text{d}}{\text{d}t}v_c
\end{array}
}^{\text{outflowing currents}}
&
\begin{array}{c}
&\quad{=}\vphantom{\frac{E}{R_1}+\frac{ v_c}{R_3}}\\\\
&\quad{=}\vphantom{\frac{v_c}{R_3}+C\,\frac{\text{d}}{\text{d}t}v_c}
\end{array}
&
\overbrace{
\begin{array}{l}
\frac{E}{R_1}+\frac{ v_c}{R_3}\\\\
\frac{v_a}{R_3}\vphantom{\frac{v_c}{R_3}+C\,\frac{\text{d}}{\text{d}t}v_c}
\end{array}
}^{\text{inflowing currents}}
\end{align*}$$
Above, I place out-flowing currents on the left and in-flowing currents on the right. That helps me keep things straight. As it turns out, a rising voltage at \$v_c\$ means an outflowing current (out from the node towards the capacitor) so that is placed on the left side. There is no inflowing current through the capacitor since ground can't generate any.
Solve the top equation for \$v_a\$ and substitute into the bottom equation (on the right side, of course.)
$$\begin{align*}
\frac{v_c}{R_3}+C\,\frac{\text{d}}{\text{d}t}v_c&=\frac{1}{R_3}\cdot\left[\frac{R_2\left(E \,R_3 + v_c\, R_1\right)}{R_1 R_2+R_1 R_3+R_2 R_3}\right]
\\\\
\frac{v_c}{R_3\, C}+\frac{\text{d}}{\text{d}t}v_c&=\frac{R_2}{R_3\,C}\cdot\left[\frac{E\, R_3}{R_1 R_2+R_1 R_3+R_2 R_3}+\frac{v_c\, R_1}{R_1 R_2+R_1 R_3+R_2 R_3}\right]
\end{align*}$$
The above, placed in standard form, results in:
$$\begin{align*}
\frac{\text{d}}{\text{d}t}v_c+\left[\frac{1}{R_3\, C}\left(1-\frac{R_1\,R_2}{R_1 R_2+R_1 R_3+R_2 R_3}\right)\right]v_c&=\frac{E\,R_2}{C\left(R_1 R_2+R_1 R_3+R_2 R_3\right)}
\\\\\text{applying values,}\\\\
\frac{\text{d}}{\text{d}t}v_c+\left[\frac{1000}{3}\right]v_c&=2000
\end{align*}$$
Solution using integrating factor
That's a 1st order non-homogeneous linear DE whose standard form looks like: \$y^{'}+a_t\,y=f_t\$. Of course, you just have simple constants there, so \$a_t=\frac{1000}{3}\$ and \$f_t=2000\$. The integrating factor is \$\mu=e^{^{\int a_t\:\text{d}t}}\$, which is just \$\mu=e^{^{\frac{1000}{3} t}}\$. So the solution is:
$$\begin{align*}
y_t&=\frac{\int \mu \,f_t\:\text{d}t+C}{\mu}
\\\\
&=e^{^{-\frac{1000}{3} t}}\cdot\left(\int \left[e^{^{\frac{1000}{3} t}}\cdot 2000\right]\:\text{d}t+C\right)
\\\\
&= e^{^{-\frac{1000}{3} t}}\cdot\left(2000\cdot \frac{3}{1000}\cdot e^{^{\frac{1000}{3} t}}+C\right)
\\\\
&=6+C\cdot e^{^{-\frac{1000}{3} t}}
\\\\&\text{as }y_0=0, \text{it follows that } C=-6, \text{so:}
\\\\&=6\cdot\left(1-e^{^{-\frac{1000}{3} t}}\right)
\end{align*}$$
(I used \$y_t\$ above as a substitute for the capacitor's voltage, \$v_c\$, over time.)
Solution using separation of parameters
$$\begin{align*}
\frac{\text{d}}{\text{d}t}v_c+\left[\frac{1000}{3}\right]v_c&=2000
\\\\
\frac{\text{d}}{\text{d}t}v_c&=2000-\left[\frac{1000}{3}\right]v_c
\\\\
\text{d}\,v_c&=\left[2000-\left[\frac{1000}{3}\right]v_c\right]\text{d}\,t
\\\\
\text{d}\,t &=\frac{\text{d}\,v_c}{2000-\left[\frac{1000}{3}\right]v_c}
\\\\\text{set }u=2000-\left[\frac{1000}{3}\right]v_c &\therefore \text{d}\,u=-\left[\frac{1000}{3}\right]\text{d}\,v_c
\\\\
t=\int \text{d}\,t&=-\frac3{1000} \int \frac{\text{d}\,u}{u}
\\\\
&=-\frac3{1000}\cdot\ln\left(u\right)+C
\\\\\therefore\\\\
-\frac{1000}{3}\,t+C&=\ln\left(u\right)
\\\\
Ae^{^{-\frac{1000}{3}\,t}} &= u
\\\\
Ae^{^{-\frac{1000}{3}\,t}} &= 2000-\left[\frac{1000}{3}\right]v_c
\\\\
-\left[\frac{3}{1000}\right]\left[Ae^{^{-\frac{1000}{3}\,t}} -2000\right] &= v_c
\\\\
v_c &=Ae^{^{-\frac{1000}{3}\,t}} +6
\\\\\text{find }A=-6\text{ at }t=0,\text{ so:}
\\\\
v_c &=6\cdot\left(1-e^{^{-\frac{1000}{3}\,t}}\right)
\end{align*}$$
Same answer.
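As an extra cross-check (a small Python sketch, not part of the original working), numerically integrating the standard-form equation above gives the same curve as the closed-form result:
import numpy as np
from scipy.integrate import solve_ivp
f = lambda t, v: 2000.0 - (1000.0 / 3.0) * v             # dv/dt from the standard form
t_eval = np.array([0.001, 0.003, 0.010])                 # 1 ms, 3 ms (= tau), 10 ms
sol = solve_ivp(f, (0, 0.010), [0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
analytic = 6.0 * (1.0 - np.exp(-1000.0 * t_eval / 3.0))  # closed-form solution
print(sol.y[0])    # ~[1.70, 3.79, 5.79]
print(analytic)    # same values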
KCL Addendum
The KCL equations show outflowing currents on the left and inflowing currents on the right. This approach is used by some Spice programs (those where I've directly looked over the code used to generate these) to develop their KCL.
Perhaps the easiest way to imagine is that a voltage at a node spills away from that node through the available paths. But also that voltages spill back into that same node from surrounding nodes through those same paths. The result is the application of a simple superposition concept that results in, effectively, the potential differences controlling the result.
You can test this, easily, by rearranging the resulting equation(s), moving the right side over to the left side and then combining terms. You'll then see the usual potential differences that you expect. So it really is the same result.
The reason I very much prefer this method is that it is simple to visualize and very difficult to make mistakes. You can easily orient yourself to a node and then work out the terms for out-flowing currents for the left side of the equation. Then all you have to do is position yourself at each surrounding node and work out the terms for in-flowing currents for the right side. It's almost impossible to screw that up.
Conversely, when you are instead struggling to work out the potential differences in your mind (using the more traditionally taught method) and just write those terms, you often find yourself not entirely sure whether you have the sign right as you try to add them up. I find, time and time again, that not only do others wind up messing up somewhere and making an uncaught mistake, but that I make those mistakes as well. Even with lots of experience, you just aren't 100% sure, and you often find yourself double- and triple-checking your work, just in case.
This approach also just works, and works right, without the continual question about the sign of each expression. With this method I still make typos. But I don't make sign errors. It's too easy to use.
So voltage spills away from a node via available paths and voltage spills into a node from nearby nodes via the same available paths. The only caveat is that a current source or sink can only flow in, or flow out, but not both directions. It's one way. So it will either appear on the out-flowing side or on the in-flowing side -- but not both sides.
This also works perfectly well with capacitors and inductors. It does turn the equation into a differential/integral equation. But that's just a technicality. It's still correct.
|
H: Could a CPU execute the fetch-execute cycle without a microsequencer/microcode?
VERY SPECIFIC QUESTION, PLEASE READ THROUGH ALL OF IT
From wikipedia:
"The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen."
My question is: Before the microsequencer and the microcode, how did they make CPUs physically step from one point of the sequence to another? What was going on in those CPU designs BEFORE the "more complex" ones we have today? If that "sequence of operations the control unit goes through to process an instruction is in itself a short computer program", like Wikipedia claims, then who's executing it? Answering "The CPU itself" won't help because the CPU executes user-defined programs exploiting the fetch-execute cycle, so it can't execute the fetch-execute cycle itself too, right? I mean, this piece of hardware responsible for creating the fetch-execute cycle should still be located somewhere inside the CPU, but it's not what people refer to when they say "the CPU". It's at a lower level of abstraction, it's what's taken for granted in EVERY, EVERY, EVERY resource about a CPU's internal functioning (at least the non-academic ones) that I've found so far. I can't find this information anywhere.
Please don't answer "The program counter points to the memory address to fetch the next instruction from". I know that, I just don't know how this sequentiality is actually/electrically/physically implemented. I would need a more straight up electrical answer. I know what a finite state machine is and I know that at its core the CPU is one. But that still doesn't answer "how is a FSM realized in a CPU without a microsequencer?". The microsequencer should still rely on this "sequential automation device" (the thing I'm asking about) to work.
My only mental approximation of how such automated sequentiality could work is mechanical, not electrical (the player piano), and I know there are no moving parts in a computer. The Jacquard loom isn't a good example here either because it was not automated: the weaver had to manually pull the wire that made the metal drum rotate so that the machine could read the set of punched holes in the next card. I'm also not asking anything about the clock pulse or synchronisation.
Notice that this isn't even a very technical question, and as such it shouldn't require too technical an answer, like those I was given earlier. Literally, in the most intuitive sense, when you read "this machine is automatic" you wonder by which means it's automated. I'm asking the same with regard to computers. If you say "the CPU automatically steps through a sequence of always-the-same processes, a loop", I'll be tempted to ask you "How is this loop realised?". This should be a technology invented somewhere in the 50s, because that's about the time people stopped manually plugging/unplugging wires into those room-sized mainframes just to tell the machine to add stuff. Being such an old technology suggests to me it shouldn't be that complicated to understand, even for me as a chemistry student. But it also should have been a very revolutionary discovery, so it baffles me that I wasn't able to find it anywhere. Is it really so complicated that it's only talked about between engineers?
Thank you very much for reading!
AI: Several have commented on the use of finite state machines. At the heart of a finite state machine is a register. Its contents—a number—identify the current state of the machine. The processor designer adds logic around this register. This logic considers the current state (and possibly inputs) and then creates outputs based on all the data it is considering. Among the outputs is the next state to be loaded into the state register. Other outputs are used to control such devices as an arithmetic logic unit, a memory device, and so on. The whole process is synchronized by the leading edge of the clock. When the clock strikes, the desired next state is loaded into the state register, and the cycle repeats, generally with different results since both computer instruction and data change over time.
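Purely as an illustration of that loop, here is a toy sketch in Python; the state names and control outputs are invented for the example and do not correspond to any particular processor:
FETCH, DECODE, EXECUTE = 0, 1, 2               # possible contents of the state register
def next_state_logic(state):
    # combinational logic: current state -> (control outputs, next state)
    if state == FETCH:
        return {"mem_read": 1, "load_ir": 1}, DECODE
    if state == DECODE:
        return {"decode_ir": 1}, EXECUTE
    return {"alu_enable": 1, "inc_pc": 1}, FETCH
state = FETCH                                  # power-on state
for tick in range(6):                          # each iteration = one clock edge
    outputs, nxt = next_state_logic(state)     # logic settles between edges
    print(f"tick {tick}: state={state} outputs={outputs}")
    state = nxt                                # the state register loads on the edge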
It might take several clock cycles before a single computer instruction has been fully processed. In one architecture, the computer instruction stays in place while over multiple steps the logic examines its contents—addresses, data, etc.—and executes it. In another, the computer instruction can be passed down through a series of registers in a pipeline, like a factory assembly line. At each stage in its progression through the pipeline, additional work is done to complete the execution of the computer instruction. In either case, the finite state machine keeps on chugging along like a worker bee tending the queen bee.
The pipeline arrangement can give the impression that one instruction takes one clock cycle. This is a bit of an illusion, since multiple instructions are resident simultaneously in the several stages of the pipeline, yet one result is completed every cycle. This is analogous to Henry Ford's assembly line: there might be 100 or 200 cars in work at any moment, but one pops out every hour, giving the superficial impression that it only takes an hour to build a car.
Microcode is a variation on all this. Instead of calculating the outputs in each cycle, the logic simply looks up what those outputs should be, drawing them from a special-purpose memory that holds them. Want action A? Take the memory word that matches it and use it to supply the desired outputs. To create the memory code, jam the correct bits in the correct locations. Today the main reason microcode is out of favor is because memory is slower than logic. If that should change when some new technology emerges, you might see microcode again, since it then would be faster.
Superscalar processors use multiple pipelines in parallel, with logic used to dispatch a computer instruction to the appropriate pipeline. One—or multiple— pipelines might handle multiplication, for example, while others handle memory access. As instructions exit the pipeline, the results need to be put back together again in the appropriate sequence so that you get the right answer whether there are parallel pipelines or not.
But as one commenter said, the study of processor architecture spans many books and is deep. I support the recommendation that you get a good book. I would recommend Heuring, Vincent P., Harry Frederick Jordan, and Miles Murdocca. Computer systems design and architecture. Addison-Wesley, 1997. This is a clearly written introduction to computer architecture and gives detailed explanations of everything you're asking about.
After that try Shen, John Paul, and Mikko H. Lipasti. Modern processor design: fundamentals of superscalar processors. Waveland Press, 2013. It's not as well written, but it covers a wide range of techniques used in modern processors.
|
H: Unconventional use of a JFET: Potential pitfalls?
The idea is to (almost) replicate the soft power button on a modern PC, using an ATX power supply and some GPIO's from a microcontroller that's going to be there anyway:
simulate this circuit – Schematic created using CircuitLab
With power off, the gates of both transistors are pulled to 0V, which has Q1 off and Q2 on.
When the power button is pressed:
It grounds the ATX enable pin via Q2, which turns the supply on.
Early in the init code, the MCU pulls GPIO_0 high, which turns on Q1. This allows the button to be released without dropping the supply.
Later in the init code, the MCU pulls GPIO_2 high, which turns off Q2. This converts the power button into an ordinary user button that is attached to GPIO_1 (internal pull-up), with its ground path via Q1.
To power off, either:
De-power the MCU, which makes all GPIO's Hi-Z. The circuit then returns to the normal powered-off state.
Hold the power button long enough to trigger a software timeout. The MCU then pulls GPIO_2 low, followed by GPIO_0 low, and waits to lose power. Actual power loss happens when the button is released.
Is there anything I should be aware of, that would make it not behave like that, or damage something?
AI: When the thing's powered up and the PWR_BTN is being used as a user button, you take GPIO_2 high which you say "turns off Q2". I would expect, when you take GPIO_2 high, Q2 will not turn off until it has pulled GPIO_1 way down towards 0V. I would expect GPIO_1 to sit close to ground when GPIO_2 is taken high even without the PWR_BTN being pressed.
EDIT
For a p channel jfet the source is always the terminal which is at the more positive voltage of the two non-gate terminals.
The jfet is a depletion device which means that it is passing maximum current when the gate voltage is equal to the source voltage and, for a p-fet, the current conduction reduces more the further the gate is taken positive relative to the source. If the gate is taken far enough positive relative to the source then pinch-off is achieved where source to drain conduction ceases.
Usually care must be taken with jfets not to forward bias the gate diode. For a p-jfet care must be taken not to take the gate more than about 0.7 V negative relative to the source or the gate diode will be over driven and the jfet may be damaged.
The above describes the operation of a p-jfet. For a n channel jfet just reverse everything.
For an n-fet, the source is always the terminal which is at the more negative voltage of the two non-gate terminals. Again, being a depletion device the jfet achieves maximum drain to source conduction when the gate voltage is equal to the source terminal voltage. Conduction reduces more as the gate is taken negative relative to the source until the gate is taken sufficiently negative to achieve pinch-off at which point drain to source conduction ceases.
As with the p-fet, care must be taken to avoid damaging the fet which, for an n-fet, is likely to occur if the gate diode is forward biased which will happen if the gate is taken more than about 0.7 V positive relative to the source.
My interpretation of your circuit is that when you take GPIO_2 low you are forward biasing the gate diode, but this shouldn't be a problem because the source terminal can follow the gate terminal, sitting a small voltage above it, with the rest of the voltage dropped across the microcontroller's internal pull-up resistor. If instead the p-fet's source had been hard wired to +3.3 V, then I would have thought that taking the gate to 0 V would have been a problem, as too large a voltage would have been applied across the forward biased gate diode. With a typical microcontroller pull-up resistor value of, say, 20 kΩ I would expect the current to be so low as to cause quite a low voltage drop across the forward biased gate. If a much lower value of pull-up resistor were used then I would expect a larger voltage drop of, say, 0.7 V across the forward biased gate.
The problem I think I can see is that when GPIO_2 is taken high, if the source were pulled up high by the internal pull-up resistor then the gate would be at the same voltage as the source, the p-fet would be fully turned on, and current would flow through the internal pull-up resistor, which forces the source voltage to drop relative to the gate. The source will drop low enough to reach a balance: any less drop and the jfet is turned on hard enough to pull the source down further, but the source can't drop too far either, or the jfet turns off too much to carry the current needed for that voltage drop across the internal pull-up resistor.
So, when PWR_BTN is being used as an ordinary user button I think that GPIO_1 will sit at some voltage significantly less than +3.3 V when PWR_BTN is not pressed.
I hope that gives you a better understanding and some ideas about possible potential problems.
|
H: Feedback loop gain not as expected in a differential amplifier
I am trying to build an approximate opamp with a differential amplifier and add a feedback loop to get the approximate wanted gain.
I used a differential amplifier with a current source and a current mirror, and got open loop gain of 3k.
When I add the feedback loop resistors I expect to get:
$$ A_{cl} = \frac{A_{OL}}{1 + A_{OL}\, \beta}\cdot K =\frac{3k}{1 + 3k\cdot\frac{1}{1+20} }\cdot\frac{20}{1+20} \approx 19.8$$
Instead I get $$ A_{CL} \approx 70$$
I don't understand what I'm doing wrong, and if what I want to do is even possible.
EDIT: In the schematic below I placed by mistake a feedback resistor of 50k instead of 20k like my original circuit.
I also forgot to mention, for the NPNs and PNPs: $$\beta = 100, V_A = 100[V]$$
simulate this circuit – Schematic created using CircuitLab
AI: Your feedback is positive. As the voltage at the base of Q7 goes more positive Q7 turns on more, and its collector voltage goes down. The voltage at the collector of Q6 will do the opposite, going more positive. This voltage is what you are feeding back, so as the input goes more positive the feedback goes more positive as well, which should either drive it into saturation or oscillation.
By taking the output from Q6 you are making the base of Q7 the non-inverting input and the base of Q6 the inverting input. You can see this if you remember that a common emitter amplifier inverts the signal, and a differential amplifier is just two common emitter amplifiers tied together.
In an op-amp there are usually 3 stages, the differential input, the voltage amplifier, and the output stage. The feedback is taken from the output stage which will be 180 degrees out of phase with the inverting input.
|
H: IC current sensor functionality
I'm trying to understand the VOC and FAULTB pins of this IC sensor(MCR1101-20-5).
As far as I understand, this sensor can measure -20A to +20A DC current and for the model I have the transfer function is given as:
Vout = VCC/2 + Iin x 100mV/A x VCC/5V
In my case Vcc = 5V so
Vout = 2.5V + Iin x 100mV/A
And in my application I'm going to measure a current between 0 and 5 A, and I want to activate an LED when the current exceeds 600 mA. This could be done by a microcontroller with an ADC, but I wonder whether the IC already has this functionality built in.
It seems the VOC and FAULTB pins might be used for such a purpose. Can a reference voltage be set so that the IC outputs ON or OFF when a certain current passes through?
The overcurrent fault threshold (I_OC) is user-configurable via an external resistor divider and supports a range of 120% to 200% of the full-scale primary input (IP).
AI: The datasheet says the VOC range is 0 to Vcc/2.
You also need to read the fault threshold range, which depends on the model.
So there are effectively only two levels of "over range" available, around +/- 24 A and +/- 30 A.
It is therefore not adjustable down to 600 mA; that threshold must be implemented externally (for example with a comparator on VOUT, or with your microcontroller's ADC as you mention).
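For the external route, here is a quick Python check of the comparator/ADC threshold implied by the transfer function quoted in the question (Vcc = 5 V, as stated there):
VCC = 5.0                          # supply voltage [V]
SENS = 0.100                       # sensitivity [V/A] at Vcc = 5 V
I_TRIP = 0.6                       # desired LED trip current [A]
v_zero = VCC / 2                   # output at 0 A
v_trip = v_zero + I_TRIP * SENS * (VCC / 5.0)
print(f"compare VOUT against {v_trip:.2f} V")    # -> 2.56 V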
|
H: Can I double the circuit on the same PCBDoc using Altium Designer?
Can I double the number of components for a PCB while having a single schematic in Altium Designer? I want to make a two same circuit on the same board.
Or should I recreate the same schematic to double the number of component for PCB?
AI: You can use the "embedded board array" feature to put several copies of a design on a pcb.
This is useful for panelization. The copies will be exact copies, and if the original PCB is modified, the copies will inherit the modifications.
|
H: Soldering iron stands
A lot of soldering iron stands have the metal shaft of the soldering iron in contact with a metal part of the stand. Is this desirable, or should it be avoided?
I've had a couple of those stands with the chrome plated springy spiral that surrounds the shaft. Some designs are made of plate metal and the shaft rests on the metal. Either way, heat is drawn away from the iron and the stand itself heats up.
Shouldn't a stand ideally not come in contact with the shaft/tip of the iron at all? Why are these designs so common?
AI: Decent soldering irons cycle down the temperature when you place them in the stand. The cartridge shouldn't sit there unused at some 350°C or the tip life will be reduced, so you ideally want it to cool down. Though as mentioned in comments, the very tip itself should not be resting against anything.
Great solder irons (read: Metcal) don't just lower the temperature when not used, but also heat up quickly as you remove the tip from the stand. Poor ones (like my Hakko...) lower the temperature but heat up so slowly that the feature becomes a useless burden during bigger solder jobs. As always, pick one corresponding to your needs.
|
H: How to choose the RC values between a buffer and an ADC input?
In the OPA365 specification sheet there is this example circuit where they add a RC filter between a unity buffer and input of an ADC. My understanding of the purpose of this \$ 100 \,\Omega - 1\operatorname{nF} \$ RC filter is to anti-alias the signal coming out of the buffer - by filtering out any components higher than 250kHz before it enters a 250kSPS ADC. But when I calculate the cutoff frequency with the chosen RC values, I get:
$$
f_c = \frac{1}{2\pi RC} = \frac{1}{2\pi *100*(1*10^{-9})} \approx 1.6 \,\operatorname{MHz}
$$
1.6 MHz is more than six times larger than 250kHz! This RC filter is NOT anti-aliasing the signal into the ADC. Even if I take into account the open loop output impedance (\$ 30\,\Omega\$), I still end up having a cutoff frequency of \$ 1.2\operatorname{MHz} \$. Also, according to the Nyquist theorem, I should set my cutoff frequency to no more than half of the ADC sampling rate, i.e. 125kHz, by setting my RC values to be 12-13 times larger than what TI chooses for this example circuit.
So why and how is this \$ 100 \,\Omega - 1\operatorname{nF} \$ pair chosen?
AI: It is clear in the question that it is already known that the RC has nothing to do with anti-aliasing for the sampling frequency mentioned. From the ADC datasheet we can get an equivalent input circuit as follows:
Note that the switch also models the "on resistance". You can understand that the first capacitor, which is much larger than the second one, works like a large bucket quickly filling a small cup through the switch resistance:
So, it is not related to "frequencies that cause instability to the charging capacitor", but to quickly charging the sample-and-hold capacitor.
So why and how is this \$ 100 \,\Omega - 1\operatorname{nF} \$ pair chosen?
Here is a nice reference from TI about "Charge Bucket Filter Design". But in the ADC datasheet, this information is to be found:
Ideally the resistor would be zero (but that may be a problem for the op amp), and the capacitor should be large enough that the amount of charge it loses when charging the ADC capacitance is small enough to avoid further conversion errors within the sampling period (and parasitic inductance and resistance can become a problem for very large capacitors).
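To put rough numbers on that charge-sharing argument: the 1 nF and 100 Ω values come from the question, while the sample-and-hold capacitance below is an assumed, illustrative value rather than a datasheet figure:
C_EXT = 1e-9        # external "bucket" capacitor [F]
C_SH = 25e-12       # assumed ADC sample-and-hold capacitor [F]
R_EXT = 100.0       # series resistor from the example circuit [ohm]
droop = C_SH / (C_SH + C_EXT)     # worst-case step when an empty C_SH is connected
tau = R_EXT * C_EXT               # time constant for the op amp to refill the bucket
print(f"initial droop ~ {droop * 100:.1f} % of the input")   # ~2.4 %
print(f"refill time constant ~ {tau * 1e9:.0f} ns")          # ~100 ns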
|
H: ETSI standard for 433 MHz
I am really confused about ETSI standards for SRD. I need some ETSI standard which regulates frequency band 433 MHz (433,050 MHz to 434,790 MHz).
I found ETSI EN 300 220-1 V3.1.1 (2017-02) - it seems like the latest, but there I can not find any information about maximal TX power and bandwidth on specific frequencies.
I found ETSI EN 300 220-1 V2.4.1 (2012-05) where is chapter 7.2.3 for limits. There is clear table with frequency - power limit and bandwidth.
Which ETSI is valid? The newer without limits, or the older one?
AI: Output power and bandwidth aren't regulated through a standard but through EU and national directives.
Different nations have different requirements of this licence-free band. EU has attempted to standardize it somewhat with 2006/771/EC, with amendment 2013/752/EU. In the latter you can see various available alternatives 44a, 44b, 45a, 45b, 45c.
Simplified, it means:
You can either stay within 433,05-434,04 MHz with 10mW ERP, 10% duty cycle and no bandwidth requirement.
Or you can stay in 434,04-434,79 MHz with 10mW ERP continuous transmission but 25kHz channel spacing (narrowband transmitter).
Or you can go with the 1mW ERP, optionally with broadband modulation > 250kHz and then send across both 433,05-434,04 MHz and 434,04-434,79 MHz.
Also check out the 868 MHz bands. There's no coincidence that these are license-free too, because that's where your 1st harmonic is at.
Many EU and European nations follow the above to the letter. A few nations like Germany and Sweden allow more liberal use of the band still. In case national directives are different than EU directives, the national requirements take precedence. To know for sure, check with the national radio authorities.
This is also assuming that your device is compliant with EN/ETSI 300 220-2. The v3.1.1 that you found is the correct one used for CE marking; it is harmonized under the Radio Equipment Directive ("RED") 2014/53/EU. From the EU RED directive web page you can find a list of all harmonized standards here. You'll find EN/ETSI 300 220-2 v3.1.1 in that list, which is how you know which version to use.
EN/ETSI 300 220-1 describes measurement and testing, EN/ETSI 300 220-2 contains the actual requirements.
So if you for example go for the 25kHz spacing, then 25kHz becomes your occupied bandwidth (OBW) as mentioned in 300 220-2. If you go with the 10% duty cycle version then you can state the bandwidth yourself, assuming it is less than 250kHz and that you stay within the band.
|
H: Concatenating first-order high-pass filters
I have the following circuit, which is an active high-pass filter with the knee frequency \$ f_{-3 dB} = \frac{1}{2\pi RC} = \text{1 kHz}\$
I wanted to concatenate two of those systems in order to make a second-order HPF:
simulate this circuit – Schematic created using CircuitLab
When I did, I noticed that the knee frequency shifted to \$ f_{-3 dB} = \text {~1.4 kHz}\$
I have tried to look up the mathematics behind why that is, but I couldn't find anything on the subject.
I'd like to know the relationship behind the shift on the knee frequency to concatenation of systems like that.
Note: I don't want to build a second order HPF with one op-amp because in my circuit one of the legs of the op-amp is connected to ground, and the input voltage is as well and I can't change that.
My reasoning was:
$$ HPF(s) = \frac{sRC}{sRC + 1} $$
therefore
$$ HPF(s)\cdot HPF(s) =\frac{s^2R^2C^2}{(sRC + 1)^2} = \frac{s^2}{s^2 + \frac{2s}{RC} + \frac{1}{R^2C^2}} $$
So as I understood (which is wrong):
$$ \omega_0 = \frac{1}{RC} $$
Edit:
I ended up solving the equation $$ |HPF(s) \cdot HPF(s) | = \frac{1}{\sqrt{2}} $$ with $$ \omega = 2\pi \cdot 1000 $$ and I found the C values that get a -3 dB frequency at 1000 Hz.
AI: Cascaded 1st order filters, when buffered and with the same elements, converge towards a Gaussian bell. It only happens after many such stages, but that's the point of convergence.
For your case, as you have correctly shown, the transfer function is:
$$H(s)=\dfrac{s}{s+\dfrac{1}{RC}} \tag{1}$$
Cascading \$N\$ such stages means the overall transfer function will be of the form:
$$G(s)=H(s)^N \tag{2}$$
Since the denominator has the same form for whatever power of \$N\$, the \$s^0\$ term will be of the form \$1/(RC)^N\$. For a 1st order, the attenuation at \$\omega=1\$ (taking \$RC=1\$) is always -3 dB (\$1/\sqrt2=1/2^{1/2}\$). For two cascaded sections, the transfer function becomes a 2nd order, and solving for the frequency at a specific attenuation is better suited to squared terms (considering \$RC=1\$):
$$\begin{align}
G(j\omega)^2&=H(j\omega)^4\quad\Rightarrow \\
\dfrac{\omega^4}{\omega^4+2\omega^2+1}&=\dfrac12 \tag{3}
\end{align}$$
Solving the above will yield 4 roots since it's a 4th order, but two of them will be imaginary and one negative, which leaves the positive one as the real result:
$$\begin{cases}
\omega^{2\text{nd}}_{1,2}&=\pm\sqrt{1+\sqrt2}=\pm\sqrt{2^\frac02+2^\frac12} \\
\omega^{2\text{nd}}_{3,4}&=\pm\sqrt{1-\sqrt2} \tag{4} \\
\end{cases}$$
For a 3rd order the results are more complicated, as you would expect, but a pattern starts forming:
$$\omega^{3rd}_1=\sqrt{2^\frac03+2^\frac13+2^\frac23} \tag{5}$$
By now you can readily form a general formula that gives you the precise value for the frequency when the attenuation is -3 dB:
$$A_{-3\;\text{dB}}^{HP}=\sqrt{\sum_{k=0}^{N-1}{2^\frac{k}{N}}} \tag{6}\label{6}$$
A simple numerical check with wxMaxima confirms it:
H(s) := s/(s + 1)$
n:7$
find_root( cabs( H(%i*w)^n )=1/2^0.5, w, 1, 100 ); /* numerical */
float( sqrt( sum ( 2^(k/n), k, 0, n-1 ) ) ); /* analytical */
The results come up as:
3.099534753828498
3.099534753828497
the difference being in the last decimal, due to the numerical nature of find_root() (IIRC it uses Brent's method). And for n=13:
4.273111111613913
4.273111111613912
For the sake of completeness, cascaded 1st order lowpass will have this formula:
$$A_{-3\;\text{dB}}^{LP}=\sqrt{2^\frac1N-1} \tag{7}\label{7}$$
I'll leave it to you to prove it.
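As a numeric plausibility check (not a proof), here is the same kind of spot-check done above with wxMaxima, this time in Python and applied to the low-pass case; scipy's brentq plays the role of find_root:
import numpy as np
from scipy.optimize import brentq
N = 7
mag_lp = lambda w: (1.0 / np.sqrt(1.0 + w**2))**N            # |1/(jw+1)|^N with RC = 1
w_numeric = brentq(lambda w: mag_lp(w) - 1/np.sqrt(2), 1e-3, 10.0)
w_formula = np.sqrt(2**(1/N) - 1)
print(w_numeric, w_formula)                                  # both ~0.3226 for N = 7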
One thing to note is that everything above treats the ideal case, when buffers have infinite input impedance and zero output impedance, thus achieving perfect isolation. In practice this will not happen, so minor deviations will occur.
(edit)
Regarding \$\eqref{6}\$, it can be written in a different format, considering one thing: the corner frequency for the lowpass is gradually shifting downwards in frequency, and it does so relative to unity (or \$\omega_p\$, here 1). This means that the highpass will follow in the exact same manner, mirrored against \$\omega_p\$ in a geometrical sense: \$\omega_p^2=\omega_{_\text{LO}}\omega_{_\text{HI}}\$. This further means that the simpler, more digestible \$\eqref{7}\$ -- which can be derived a bit more easily (you did try it, didn't you?) -- can be used to obtain the same flavour formula for the highpass:
$$\begin{align}
{}&\begin{cases}
\omega_p^2&=\omega_{_\text{LO}}\omega_{_\text{HI}} \\
\omega_p&=1 \\
\omega_{_\text{LO}}&=\sqrt{2^\frac1N-1}
\end{cases} \\
\Rightarrow\quad 1^2&=\omega_{_\text{HI}}\sqrt{2^\frac1N-1} \\
\Rightarrow\quad \omega_{_\text{HI}}&=\dfrac{1}{\sqrt{2^\frac1N-1}} \tag{8}\label{8}
\end{align}$$
Wolfram Alpha confirms it.
|
H: Can't find the Pspice model for a diode
Last time I had an issue I posted a question and the answer was pretty useful, so it's the time to try to find the answer to a new one.
I don't seem to be able to find the Pspice model for the following diode: DSA120C150QB. I have searched in the two manufacturers page and also tried to make total shots in the dark trying to find what I need, but I haven't found anything useful, just the datasheet.
I was wondering if someone with more experience could tell me how to find the model for the diode or if some of them just can't be found, or maybe drop the url x) if it's not much work.
AI: I was wondering if someone with more experience could tell me how to
find the model for the diode or if some of them just can't be found
Spice models for dual diodes are much, much rarer than for single diodes, so speak to the supplier (IXYS) and ask them what the equivalent device is for each diode inside the dual version.
Then you might have a chance.
|
H: Transistor Not Entering in Cutoff region
I have constructed these circuits, utilizing either DTL or TTL logic, and I'm aiming to place transistors T1, T2, and T3 into the cutoff region. However, I'm currently unable to achieve this. Could you please advise me on which parameter I should modify to ensure these transistors enter the cutoff region?
Circuit simulation
AI: With the steering diodes there will be 1 diode drop (~0.6V) so the transistor base is never going to be below that and it will not fully cut off.
You could add a resistor from the base to ground to pull it a bit lower, but that brings additional current draw.
Back when they used this sort of logic one thing they would do is use Germanium diodes which have a voltage drop of around 0.3V, so the base would be pulled closer to ground. Another way of doing it was to use a negative voltage, which is why you see older logic with both positive and negative supplies. Here is a circuit using a resistor from the base to a negative voltage:
|
H: How to implement the design of a bidirectional buck-boost converter?
I need your help in designing a bidirectional buck boost converter with the following requirements:
Input voltage: 100V-400V
Voltage range: 250V-1.2kV
Output current: 40-80A
Power dissipation of 24 kW.
Switching frequency: 20kHz
I did some designing on LTspice. I would like to know if this looks right.
That is not the MOSFET I am using for my design. I would be using either GaN or SIC MOSFET. I chose to use an inductor rather than a transformer. Please help me with the right direction to accomplish this design.
AI: You need to choose a switching device that looks like it has a prayer of working
From comments: -
G2R120MT33J That the Mosfet. I am using. – victor_uk
Here's the safe operating area from the data sheet: -
I've targeted two points on the curves (orange and red). Both points suggest a peak power of 5 kW i.e. the curve I've chosen is the maximum instantaneous power that this GeneSIC device is rated at and, in my experience of GeneSIC devices, they will die if you much exceed it.
So, the scenario where you get maximum instantaneous peak power dissipated in the MOSFET is when your MOSFET-bridge changes state and reverses the inductor's voltage polarity. You will find that for a short length of time the MOSFETs will have about half the line voltage across them at about half the current in the inductor (just a simple and approximate rule of thumb that is always best checked for in simulation).
So, do simulations for each scenario and check peak power dissipated, average power dissipated, peak voltage seen from drain to source and peak drain current. You can't simulate enough when it comes to converters of this sort of power. Leave no stone unturned.
As an example, a recent design I did was a 10 kW converter and it worked first time with no value changes. It's this sort of target you need to aim for because if it all goes up in smoke you have no chance of diagnosing what went wrong. Sure, I eventually had the odd MOSFET failure in later testing but, unless you get that confidence early on you'll start to feel the threat of that high-voltage and that'll eat you. So, "do sims" until you are sick of it. Then spend another 5 days doing more sims.
Going back to my approximate rule of thumb...
If the peak voltage is 1200 volts (as per your spec) and the peak inductor current is 80 amps (as per your spec), a reasonable estimate of peak power dissipated in the MOSFET is 40 amps × 600 volts = 24 kW and miles over what this device can achieve.
The problem with GeneSIC devices is that they just won't give you a peak dissipation figure for (say) 10 μs (unlike other suppliers). If you found a different supplier that gives a 10 μs figure, you'd find that the peak power dissipated might be 20 kW to 30 kW and then, you'd be in with a chance.
The package is a tad weedy too. I've used similar dies from ON Semi packaged in TO-247 and in SMD (like yours), and the TO-247 is always going to muscle through without failure compared to the SMD package. Apart from anything else, you are going to need a substantial heatsink for this level of power throughput, and the SMD parts just won't compare with the heatsinks available for the TO-247.
Your MOSFET gate drivers will not work
They need to be isolated drivers that can push maybe an amp or two into the MOSFET gates. There are devices available of course.
Static MOSFET power dissipation
If the average current in the MOSFET is (say) 40 amps, then there is a power dissipated in the MOSFET of \$40^2 \times R_{DS(ON)}\$. \$R_{DS(ON)}\$ is quoted as being 0.12 Ω, hence there might be a static power of 192 watts. This may be halved to 96 watts for a 50% duty cycle, but it's still a lot of power to dump from a TO-263 package.
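Putting the two loss estimates above into one place (same rule-of-thumb figures and the quoted \$R_{DS(ON)}\$; the average current and duty cycle are assumptions, as in the text):
V_BUS = 1200.0    # worst-case bus voltage [V]
I_PEAK = 80.0     # peak inductor current [A]
I_AVG = 40.0      # assumed average MOSFET current [A]
RDS_ON = 0.12     # quoted on-resistance [ohm]
DUTY = 0.5        # assumed duty cycle
p_commutation = (V_BUS / 2) * (I_PEAK / 2)    # ~24 kW instantaneous during switching
p_conduction = I_AVG**2 * RDS_ON * DUTY       # ~96 W static dissipation
print(p_commutation, p_conduction)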
|
H: Connecting Arduino to Keithley external trigger
I need to connect digital output pin of Arduino Uno to Keithley DAQ6510 digital multimeter external trigger input. Both are utilizing 5V TTL logic.
Now when I measure the voltage on the external trigger input of the DMM, I see 5 V. So I assume there is some pull-up resistor involved, just like they have for the digital input pins. My question is whether I can connect this input to an Arduino Uno digital output pin, or whether this input assumes dry-contact control using a relay (connect the input pin to ground to initiate a trigger). Connecting 5 V to 5 V seems a little unsafe to me, so I would like to confirm.
UPDATE:
AI: It's true that if you connect 5V to a microcontroller GPIO pin, without any limitations to current, and then configure that pin as an output pin and pull it to ground, a lot of current will flow and exceed the MCU specs (and the magic smoke will be released).
But the multimeter trigger input is not the same as a power supply, and its 5V is a high-impedance TTL implementation:
(Ref. Manual, pg 64)
In other words, it's not providing 5V with an amount of current that could damage the microcontroller. The Arduino Uno working voltage is also 5V, but be aware that other microcontrollers and even some Arduinos use 3.3V in which case you would use a buffer or level shifter.
|
H: Boost converter inductor calculation
I am designing a boost converter with the following parameters:
Vin: 3.0 - 3.7 V
Vout: 30 V (nominal)
Iout(max): 10 mA
Fsw: 600 kHz
TI's application note has the following equations for boost converter inductor selection:
When I plug my numbers into the equation, I get a very large value of ~2250 uH.
This does not make sense to me. Why do I need such a large inductor to power such a small load?
Here are my numbers:
Vin: 3.0 V (worst case scenario)
Vout: 30 V
delta Il 2 mA (20% of 10 mA)
Fs: 600 kHz
AI: Your delta IL is missing a factor of Vout/Vin, which is ten here, so you end up with an inductance ten times larger than it should be.
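Re-running the numbers with the corrected ripple current (a Python sketch using the inductor-selection formulas as commonly given in TI's boost app notes; treat the exact coefficients as assumptions and check them against the note itself):
V_IN, V_OUT = 3.0, 30.0     # worst-case input and nominal output [V]
I_OUT = 0.010               # maximum load current [A]
F_SW = 600e3                # switching frequency [Hz]
K_RIPPLE = 0.2              # 20 % ripple factor
delta_il = K_RIPPLE * I_OUT * (V_OUT / V_IN)            # 20 mA, not 2 mA
L = V_IN * (V_OUT - V_IN) / (delta_il * F_SW * V_OUT)   # ~225 uH
print(f"delta_IL = {delta_il * 1e3:.0f} mA, L = {L * 1e6:.0f} uH")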
|
H: Can I make a homemade Peltier module with constantan wire?
I'm trying to make my own Peltier module.
I ordered constantan wire, which I used as N-type semiconductor and iron wire as P-type.
I made this and powered it with 3 V, but got no response.
What am I doing wrong?
AI: Iron and constantan make a good measurement thermocouple, where thermoelectric efficiency doesn't matter. The metal properties of ability to be drawn into wires, flexibility, and be used over a wide temperature range make them good, practical sensors.
While they can be run backwards as a heat pump, the efficiency is so bad that not only are they completely impractical as a device, but you would have difficulty even measuring any heat-pumping effect.
Peltiers didn't take off as practical cooling devices until we started making the thermocouple junctions from semiconductors, materials whose figure of merit (FOM) is several orders of magnitude better than that of typical metals.
The FOM \$zT\$ depends on the material's electrical conductivity (\$\sigma\$) (good), thermal conductivity (\$\kappa\$) (bad), and Seebeck coefficient (S) (good).
$$ zT = \frac{\sigma S^2 T}{\kappa}$$
A high thermal conductivity conducts heat across the device without being useful. A low electrical conductivity generates a lot of heat in the device due to current flow. Unfortunately, in metals, the thermal conductivity and electrical conductivity tend to both vary together, as they are both mediated by the 'sea' of conduction electrons. They also have a very low S.
When you go to non-metals, the two conductivities use different mechanisms, so there's the opportunity to make materials with better ratios, and better S.
Even today, the materials used for Peltiers are only just good enough to make devices for niche applications where nothing else will do. Compressor systems are an order of magnitude better.
The above equation is from the Thermoelectric Materials article on wikipedia. It also gives a comprehensive list of the materials presently being used, and those being researched in the pursuit of better \$zT\$.
|
H: Is it possible to build radar system for drones detection at home?
Can I build a DIY radar system, using ESP32 or Raspberry Pi modules plus a bit of soldering, that will be able to detect drones within a radius of 1-2 km? Is it possible? What do I need for this and how much will it cost?
I need a system which will be able to detect a position of drone if it flies inside of radar radius (like a radar in airports).
AI: It will be expensive, since you'll need to hire an experienced RF engineer to design it, make the prototype, then pass the prototype through certification needed to legally operate it. And you'll likely need a license for a fixed radar installation.
Very simple things like those microwave radar speed indicating road signs require operating licenses in most places, never mind the burden of compliance certification.
What you're looking for is an actual radar that has sensitivity similar to terminal defense radars that protect tanks or small ships against incoming missiles/artillery shells/grenades. It'd deal with slower targets and slower scan speeds, so the signal processing will be much easier. But the radar cross-sections of consumer drones at a distance of 1-2km are similar to the cross-sections of small artillery shells at similar distances.
Just the RF hardware needed to pull this off in any reasonable amount of time will cost a pretty penny. Given the size of the drones, you'd be looking at an S, C or X band radar (15cm to 2.5cm wavelength range). For loitering or slowly maneuvering drone detection, you could use old-school approach: azimuth scan and elevation scan, with a single RF channel. You'll need relatively large antennas, mechanical scanning, etc. A phased array antenna would be way outside of any reasonable budget.
You'd end up with something that would rather resemble a scaled-down military air defense or flight approach guidance radar from decades ago - those that had a vertically- and a horizontally-oriented antenna, oval- or rectangular-shaped. Interestingly enough, that stuff doesn't get any cheaper with time. And even if you could make this, it'd be likely export-limited and not a project you could openly document without facing rather stiff penalties. You're basically talking of low-end military capability here.
So, is it possible? Sure: the "at home" part is relative. Wealthy people live in homes too, you know. So, if you got the money and the know how, or can afford someone with the know-how, and are ready to navigate the governmental bureaucracy and fees needed to get this thing legally operating - go for it. But no, it won't be anything like soldering some bits and pieces you can buy at the Arduino or RPi Foundation store together.
|
H: How is it possible for GreatScott to use this MOSFET driver if it can only give 1.5 amps max?
I am trying to follow this tutorial by Greatscott.
I don't understand how the TC4427 MOSFET driver he is using in this build works. I searched up the datasheet which can be found here. It says "High Peak Output Current: 1.5A". If this MOSFET Driver can only deliver 1.5 Amps how is it possible to use with a BLDC motor? While I don't know the exact current this BLDC motor uses I'm sure it has to be much higher than 1.5 Amps right?
What am I missing here? How is it possible to use this TC4427 MOSFET driver?
AI: The TC4427 is a MOSFET driver, intended to drive the (capacitive) gate of a MOSFET, which in turn supplies current to the motor. Note the usage here in one leg of the design:
The driver turns the FETs on and off, and motor current flows through the body of the MOSFET. The driver's relatively high current rating is so that it can charge the FET gate capacitance (and Miller capacitance) quickly to avoid excessive switching loss and potential shoot-through.
The driver does NOT supply current to the motor.
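To see why a ~1.5 A peak rating is plenty for this job, consider the charge the driver has to move during each switching edge. The gate charge and edge time below are assumed, typical values for a mid-size power MOSFET, not figures from the video or a specific datasheet:
Q_G = 50e-9      # assumed total gate charge [C]
T_EDGE = 50e-9   # assumed target switching edge time [s]
i_gate_peak = Q_G / T_EDGE
print(f"peak gate current needed ~ {i_gate_peak:.1f} A")   # ~1 A, within the 1.5 A rating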
|
H: Why is the voltage drop being used to calculate the current?
I need some more help with explaining an example. I'm having trouble understanding this part of a tutorial, and my question is fairly simple. Why use the voltage drop rather than the battery voltage to calculate the current? Or anything else for that matter such as the 9V - 8.3V, which is 0.7V.
AI: The current in this simple loop is the same in all components. Unless we split the loop and insert a current meter, we must look for a component where we know both its resistance and the voltage across it; then we can calculate I = U/R. We don't know the resistance of the diode, and it is a dynamic resistance anyway. We also don't know the battery's internal resistance or the voltage dropped across it. The resistor is the only component we can use for the calculation.
|
H: How do you choose the correct MOSFET for your project?
I am building an electric longboard. I am replicating this exact ESC circuit by GreatScott. The maximum voltage and current that will be going through my MOSFETs are 36 V and 80 A.
I need 3 N-channel MOSFETS and 3 P-channel MOSFETS. The N channel MOSFET GreatScott uses is rated for 55V and 49A, and the P-channel MOSFET is rated for 55V and 31A. Since I need a max amp rating of 80 amps I decided I should get some higher rated MOSFETS instead of using the same ones he uses.
I am thinking for the N-channel MOSFET of using an IRF3205 which is rated for 55V and 110A. I'm having a hard time finding a P-channel MOSFET that is able to handle these voltage and current ratings.
Question: Is the N-channel MOSFET I picked a good option? Do you have any suggestions on P-channel MOSFETs that would work? Any advice on picking the correct MOSFETs would be appreciated.
AI: Rather than using a matched pair of P-channel and N-channel FETs, you should choose all-N-channel half bridges rated for much more than your stall current (set by the winding resistance), in order to ease the temperature rise from the inevitable power loss. That will reduce the size of your heat spreader, which must have a very low thermal resistance, something like a dozen massive CPU coolers with fans. You will need to apply thermal-resistance calculations to estimate the temperature rise when accelerating to maximum speed with the maximum mass load.
You can also compare performance with low-capacitance but higher-saturation-voltage IGBTs driven by FET gate drivers. Due to its higher usable current density, an IGBT can usually handle around 300% more current than the typical MOSFET it replaces. This means that a single IGBT device can replace multiple MOSFETs in parallel operation, or any of the super-large single power MOSFETs that are available today.
https://www.mouser.ca/datasheet/2/196/Infineon_IR2x33_IR2x35_DataSheet_v01_01_EN-1731357.pdf
http://www.irf.com/technical-info/designtp/dtwarp.pdf
|
H: What does this symbol from a voltage regulator schematic, looking like a capacitor connected with with diagonal lines, mean?
What does this schematic symbol mean?
It is taken from the LM78XX datasheet (figure 17, page 22.)
AI: According to Wikipedia, it's an obsolete symbol for a capacitor with polarity indicator.
|
H: Transient Simulation - ADS
I'm facing a doubt in transient simulation in ADS.
In many setups I've seen a resistor placed in series with the transient voltage step generator. After placing the series resistor and simulating the circuit below, I obtain the plots which follow.
My goal is to simulate just the reflection between a step generator and a matched transmission line. I don't understand if the step generator already includes a series resistor in the ADS component (I guess not, and this is the reason why we place the 50 ohm resistor).
However, what I don't understand is that at the node 'bbb' the voltage is 0.5 V (as can be seen from the plot). Since no reflection should occur, shouldn't the node 'bbb' be at 1 V? I'm thinking about a practical setup: if we set the signal generator to an impedance of 50 ohm and in series we place a matched load (50 ohm), shouldn't node 'bbb' be at the same voltage as the signal generator (no reflections, i.e. no variations)?
Thanks in advance!
AI: However, what I don't understand is that at the node 'bbb' the voltage is 0.5 V (as it can be seen from the plot).
NB: the step generator in the simulator never includes an internal resistance.
There is no reflection in your system because everything is matched.
And you have 0.5 V (with Vg = 1 V) because the emission coefficient at the input of the first line is \$Ke = Zo/(Zo+50) = 0.5\$, so the launched voltage is \$Vg \cdot Ke = 0.5\ \text{V}\$.
NB: Ke (the emission coefficient), as its name says, is the factor that determines what fraction of the generator voltage is "injected" into the transmission line that follows.
So, one must consider that the first line is "matched" at its output ... and it follows that its input impedance is seen as Zo = 50 Ohm. You replace the line and everything to the right of it with 50 Ohm. Finally you have only the generator, the internal impedance of the generator and, at last, a 50 Ohm load.
The voltage (which will travel into the line ...) across this load is \$ Vg \cdot 50/(50+50)=0.5\ \text{V} \$, i.e. the fraction Ke of the generator voltage.
Some other coefficients can also be defined:
Ke (emission coefficient at the generator)
Kr,g (reflection coefficient at the generator)
Kr,l (reflection coefficient at the load)
Kt 1,2 (transmission coefficient from line 1 to line 2, different impedances)
Kt 2,1 (transmission coefficient from line 2 to line 1, different impedances)
Here is an example of the point (check) in the middle of the two lines.
Check the voltage of 1.6 V (NB: Vg = 2 V).
A real generator always includes an internal resistance, typically 50 Ohm (sometimes 75 ... 600 Ohm), and it must be taken into account when calculating Ke, even if you don't see it drawn explicitly.
Here are also 2 examples of waves when the impedances are not matched.
Note that the voltage can be as high as 2 x Vg (with Vg = 2 V), i.e. ~4 V.
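A small numeric sketch (Python) of those coefficients for the matched case in the question (Vg = 1 V step, Rg = 50 ohm, Zo = 50 ohm, ZL = 50 ohm):
Vg, Rg, Zo, ZL = 1.0, 50.0, 50.0, 50.0
Ke = Zo / (Zo + Rg)             # emission coefficient at the generator
Kr_g = (Rg - Zo) / (Rg + Zo)    # reflection coefficient looking back into the generator
Kr_l = (ZL - Zo) / (ZL + Zo)    # reflection coefficient at the load
print(Vg * Ke, Kr_g, Kr_l)      # 0.5 V launched, both reflection coefficients are 0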
|
H: Estimating PCB Trace Delays
In the context of FPGA design, I sometimes need to estimate PCB trace delays between the FPGA and external devices to properly constrain the input/output timing. As those do not need to be accurate to the picosecond, I'm looking for generally accepted rules of the thumb rather than exact formulas. From what I've seen here, 150 ps/in seems to be a reasonable estimate. Which values do you use?
AI: As the delay depends on many variables (dielectric, trace geometry, microstrip vs. stripline), you should use AppCAD from https://www.broadcom.com/appcad.
Don't forget load impedance matching.
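If you just want a rough closed-form estimate rather than AppCAD, a small C sketch (assumed FR-4 numbers: er = 4.4, w/h = 2; the microstrip effective permittivity uses the common Hammerstad approximation for w/h >= 1) lands near the ~150 ps/in rule of thumb for outer layers and ~175 ps/in for stripline:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double er = 4.4;        /* assumed FR-4 relative permittivity  */
    double w_over_h = 2.0;  /* assumed trace width-to-height ratio */

    /* Hammerstad approximation for microstrip effective permittivity
       (form valid for w/h >= 1). */
    double e_eff = (er + 1.0) / 2.0
                 + (er - 1.0) / 2.0 / sqrt(1.0 + 12.0 / w_over_h);

    double c_in_per_ns = 11.803;  /* free-space speed of light, in/ns */
    double tpd_microstrip = sqrt(e_eff) / c_in_per_ns * 1000.0; /* ps/in */
    double tpd_stripline  = sqrt(er)    / c_in_per_ns * 1000.0; /* ps/in */

    printf("microstrip: e_eff = %.2f, delay ~ %.0f ps/in\n", e_eff, tpd_microstrip);
    printf("stripline : delay ~ %.0f ps/in\n", tpd_stripline);
    return 0;
}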
Example:
|
H: Can we use an integrator circuit as a Hilbert transformer?
I want to know about the hardware implementation of the Hilbert transform. Can an integrator circuit be used to introduce a -90° phase shift to the input signal?
AI: Phase shift? Sure. That's the easy part.
Or flat amplitude response. Trivial.
How do you do both at the same time? That's the tricky part.
Typical implementation is a chain of all-pass filters, thus constructing a phase shift with flat amplitude response, within some approximation margin with respect to both (i.e. phase and amplitude will not be perfectly flat, but bouncing between limited extents within the passband).
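As a numerical illustration only (not a complete Hilbert-transformer design), the C sketch below evaluates a single first-order all-pass section H(s) = (s - w0)/(s + w0): its magnitude is exactly 1 at every frequency while its phase swings from 180° towards 0°. A practical 90° splitter cascades several such sections, with staggered corner frequencies, in two parallel paths so that the phase difference between the paths stays near 90° over the band of interest. The 1 kHz corner frequency is an assumed example value.

#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    double f0 = 1000.0;               /* assumed section corner, Hz */
    double w0 = 2.0 * M_PI * f0;

    for (double f = 100.0; f <= 10000.0; f *= 10.0) {
        double complex s = I * 2.0 * M_PI * f;
        double complex H = (s - w0) / (s + w0);  /* first-order all-pass */
        printf("f = %6.0f Hz  |H| = %.3f  phase = %7.2f deg\n",
               f, cabs(H), carg(H) * 180.0 / M_PI);
    }
    return 0;
}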
|
H: NE5532 spice model on Pspice for ti
I've installed PSpice for TI but I can't find the NE5532 model in it. I'm searching for its SPICE model so that I can simulate it in LTspice. (I wonder why TI did not include models of its own devices in their PSpice; what is even included in this massive garbage application!)
AI: To add the NE5532 Model, follow the steps in my LM741 post (see link below), using the NE5532 model shown here below.
Special thanks to Uwe Beis who provided the TI 5534 updated model:
(http://www.beis.de/Elektronik/Electronics.html)
Adding the LM741 model to LTSpice
***** NE5532 Source: Texas Instruments NE5534
* C2 added to simulate compensated frequency response (Uwe Beis)
* NE5532 OPERATIONAL AMPLIFIER "MACROMODEL" SUBCIRCUIT
* CREATED USING NE5534 model from Texas Instruments AT 12:41
* (REV N/A) SUPPLY VOLTAGE: +/-15V
* CONNECTIONS: NON-INVERTING INPUT
* | INVERTING INPUT
* | | POSITIVE POWER SUPPLY
* | | | NEGATIVE POWER SUPPLY
* | | | | OUTPUT
* | | | | |
.SUBCKT NE5532 1 2 3 4 5
*
C1 11 12 7.703E-12
C2 6 7 23.500E-12
DC 5 53 DX
DE 54 5 DX
DLP 90 91 DX
DLN 92 90 DX
DP 4 3 DX
EGND 99 0 POLY(2) (3,0) (4,0) 0 .5 .5
FB 7 99 POLY(5) VB VC VE VLP VLN 0 2.893E6 -3E6 3E6 3E6 -3E6
GA 6 0 11 12 1.382E-3
GCM 0 6 10 99 13.82E-9
IEE 10 4 DC 133.0E-6
HLIM 90 0 VLIM 1K
Q1 11 2 13 QX
Q2 12 1 14 QX
R2 6 9 100.0E3
RC1 3 11 723.3
RC2 3 12 723.3
RE1 13 10 329
RE2 14 10 329
REE 10 99 1.504E6
RO1 8 5 50
RO2 7 99 25
RP 3 4 7.757E3
VB 9 0 DC 0
VC 3 53 DC 2.700
VE 54 4 DC 2.700
VLIM 7 8 DC 0
VLP 91 0 DC 38
VLN 0 92 DC 38
.MODEL DX D(IS=800.0E-18)
.MODEL QX NPN(IS=800.0E-18 BF=132)
.ENDS
http://jeastham.blogspot.com/2011/11/
|
H: What specifically causes damage when a transformer-coupled valve amplifier has no load across the secondary coil?
I'm new to this forum, so be gentle! I'm an electronics and computer systems engineering student, and I always thoroughly enjoy audio circuits, especially examining older valve based designs (I play guitar, who would've guessed!). My degree is very much not an electrical degree, we don't look at anything above around 12VDC. That being said, I have a cursory knowledge on transformers, as it was briefly covered during a module in first year, though I will add that valves were not even mentioned due to their niche use-case in today's world. Anyway, onto the question:
I understand that having an open circuit across the secondary of a transformer-coupled valve amp is dangerous, but I would like to know why. My (again, incredibly base-level) knowledge of transformers would lead me to believe it's due to the infinite impedance load being reflected into the primary, but again I'm uncertain as this was very hastily covered. If this is somewhere along the right lines, what does this actually do to the valves that causes damage?
I'm aware this is probably very painful to read, as I know some terms, but am probably using them wrong, but any answers would be very appreciated!
AI: For those unaware, tube amps require the use of transformers (except for certain extreme examples), because the plate voltage is quite high (~300V) and current quite low (~100mA), essentially useless into a low impedance speaker by itself. Tubes are also only "N type" as it were, so push-pull amplifiers can only be done by totem-pole arrangements (awkward to drive), or using a transformer for phase inversion (which, with the transformer being more-or-less mandatory anyway, is fine).
We can refer to these examples, and draw similar conclusions, noting the differences :
https://www.tutorialspoint.com/amplifiers/transformer_coupled_class_a_power_amplifier.htm
https://www.tutorialspoint.com/amplifiers/push_pull_class_a_power_amplifier.htm
Basic operation with a load is, the transformer's magnetizing inductance is charged up to quiescent current, and as the device throttles down, that (inductive) current flows into the load, pushing the plate voltage up. This happens in a controlled manner because the load impedance sets the voltage due to a given current. (Tube amps with local negative feedback, or triodes (which can be considered to have internal NFB), can still control this voltage without a load connected.) Without a load, the voltage is limited only by the impedance of the transformer itself -- which has an LC characteristic, peak voltage given by Iq * sqrt(L/C). Which isn't necessarily going to be destructive or anything, but, as is usually the case -- it depends, and it could lead to arc-over, typically at the tube socket or something like that. Rarely internally, causing device damage. (For the transistor case, avalanche may occur, more likely to cause damage.)
In short, particularly when overdriven, the circuit resembles a boost converter with no load, and the flyback pulse can reach dangerous voltages.
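To get a feel for the magnitude, here is a back-of-the-envelope C sketch of the ideal Iq*sqrt(L/C) ring; every value is assumed and purely illustrative, and in a real amplifier winding resistance, core loss and saturation will keep the peak well below this ideal figure:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double Iq = 50e-3;   /* assumed quiescent plate current, A        */
    double L  = 20.0;    /* assumed primary magnetizing inductance, H */
    double C  = 500e-12; /* assumed effective winding capacitance, F  */

    /* Peak of the unloaded LC ring, appearing on top of B+. */
    double Vpk = Iq * sqrt(L / C);
    printf("ideal unloaded flyback peak ~ %.0f V above B+\n", Vpk);
    return 0;
}

Even if losses knock this down by an order of magnitude, it is easy to see why socket or wiring arc-over is the usual failure point.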
For the push-pull case, note that one side turns on while the other turns off. With normally no quiescent current in the transformer (the currents in the two sides are balanced for zero net magnetization, and class B operation is possible so that peak signal current can be many times Iq), flyback isn't an issue, and for the transistor case at least, the opposite side acts to clamp that voltage.
We encounter a difference with tubes here: vacuum tubes are inherently [series] diodes (no current flow in reverse) -- the inverse of MOSFETs (inherent parallel diode, full current flow in reverse). (BJTs can handle some reverse current, but only when the base is forward-biased. They're kind of inbetween I guess you could say.) So we can have the situation that one side turns off and the other turns on, but the voltage dips too low, reverse-biasing the "on" tube. This doesn't result in dangerous voltages, so much, but with the high grid voltage (it's "on"), cathode current is demanded -- but there's no plate voltage to absorb it, and consequently a massive current flows into the screen grid instead (for tetrodes+; N/A for triodes, which are fine here). This can result in "toaster grid" and subsequent destruction.
There could also be instability where, due to the light load, feedback goes unstable and oscillation results; and with help of the above mechanics, a relatively large voltage could develop (i.e. more than twice B+ voltage). Which might go with toaster-grid operation too, so, all around not a good time. This does require that the circuit is poorly compensated, which as usual, depends.
|
H: In I2C communication, let's assume 10 bytes are to be transferred from Master to slave. What happens if NACK is received after transfer of 5 bytes?
Is only the erroneous byte sent again and communication resumed, or is the whole communication restarted? Similarly, what happens if data is being received by the master?
AI: According to the I2C specification, if the controller (as transmitter) sees a NACK during transmission, the receiver has either received the byte correctly but cannot accept more bytes, or has not received the byte at all (for any of several reasons). The controller must then generate either a STOP condition or a repeated START condition to begin a new transfer. What a NACK means, and how the devices then continue communicating, is up to them: it might just signal that a FIFO is full, or it might mean that a connector to a sensor is unplugged.
For the second case, where data is transferred from the device back to the host, it is the host that sends an ACK or NACK after each byte. If the host sends an ACK, it expects another byte, so the device should prepare to transmit the next one. If the device receives a NACK, the host has received the last byte it wants; the device then knows there is nothing more to transfer, stops driving the bus, and typically ignores further clocks while waiting for a STOP or START condition.
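As a sketch of the controller-as-transmitter logic only: the helper functions and the 0x48 address are hypothetical placeholders (stubbed out here so the sketch compiles and runs), not any particular driver's API.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bit-level helpers, stubbed to always "ACK" so the sketch
   runs; a real implementation would talk to the I2C hardware. Each
   returns true when the addressed device ACKed. */
static bool i2c_start(uint8_t addr_rw) { (void)addr_rw; return true; }
static bool i2c_write_byte(uint8_t b)  { (void)b;       return true; }
static void i2c_stop(void)             { }

/* Send a buffer; on any NACK, terminate the transfer cleanly.
   Bytes are not "resent in place" -- the whole transfer is simply
   retried later if the application decides to. */
static bool i2c_write(uint8_t addr, const uint8_t *buf, int len)
{
    if (!i2c_start((uint8_t)(addr << 1)))   /* address + W; NACK => no response  */
        goto fail;
    for (int i = 0; i < len; i++)
        if (!i2c_write_byte(buf[i]))        /* NACK after a data byte            */
            goto fail;
    i2c_stop();                             /* normal completion                 */
    return true;
fail:
    i2c_stop();                             /* or a repeated START, per the spec */
    return false;
}

int main(void)
{
    uint8_t data[10] = {0};
    printf("transfer %s\n", i2c_write(0x48, data, 10) ? "completed" : "aborted");
    return 0;
}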
|
H: How does my fan have active tacho (RPM) output even though the blades aren't turning?
Whilst working on a project to control and monitor an EC fan from a Raspberry Pi, I have encountered an anomaly that has me completely confused; the fan is still outputting pulses on the tacho line proportional to the input power when the actual fan itself is disconnected from its built-in control circuitry.
I've so far managed to implement an RPM monitor by listening to the pulses coming out of the fan's tacho connection; however, the maximum RPM observed was less than the manufacturer's claimed maximum RPM for the fan, so I decided to double-check how many pulses appear on the tacho line per revolution of the fan blade.
Whilst doing this, I disconnected the three wire connection (red, black, yellow) between the fan motor and its controller board. I then turned the bundled fan controller up and saw my RPM monitor respond as though the fan has spun up, which it had not.
Observations:
With the fan motor disconnected, adjusting the fan speed controller causes the tacho to respond as though the fan was turning.
RPM ramps up and down with the same profile as a genuine fan tacho signal, i.e. has ramp up and ramp down as if the momentum of a physical fan blade were involved.
With the fan motor reconnected to the control board, but the blades clamped stationary, the tacho line outputs no pulses, even when the speed controller is altered up and down.
My initial suspicion was that somehow the controller board was 'faking' the tacho signal, but then you have to wonder why the manufacturer would go to such lengths when a hall effect sensor is such a cheap and standard component, and how the physical characteristics of a real fan could be so accurately emulated if the signal were being faked.
I really hope someone who knows about these fans can shed some light.
Edit: Before anyone asks, yes the fan is absolutely, and without shadow of doubt, not turning.
AI: Traditionally a tach signal really is generated by a tachometer sitting on the shaft, but sensorless brushless motor drivers need to know the rotor position anyway to commutate the motor properly, and they detect that position electrically once the motor is up to speed, which makes external sensing redundant. The driver can simply synthesize a tach signal from its sensorless detection scheme.
Of course, this won't work at low speeds where the BEMF is insufficient for the sensorless scheme to work. This also means that on startup, sensorless drivers need to open-loop commutate the motor to get it up to speed so the sensorless detection can kick in. But that doesn't mean the driver can't still synthesize a tach signal based on its open-loop commutation.
hall effect sensor is such a cheap and standard component
Not only do you need THREE whole hall sensors, but the motor needs to be constructed to accommodate them so they can access the magnetic field; you can't do that from the outside. Otherwise you need a full-blown encoder assembly with hall sensors, magnets, and bearings, which sits on a rear shaft that the motor must now have. Even if none of this were required, cost cutting knows no bounds.
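For completeness, turning counted tach pulses into RPM is only a scaling. Here is a minimal C sketch with assumed numbers; the 2 pulses-per-revolution figure is typical for PC-style fans but is exactly what the measurement in the question set out to verify:

#include <stdio.h>

int main(void)
{
    double pulses         = 80.0; /* assumed pulses counted in the gate window */
    double gate_s         = 1.0;  /* assumed gate (counting) time, s           */
    double pulses_per_rev = 2.0;  /* assumed tach pulses per revolution        */

    double rpm = pulses / gate_s / pulses_per_rev * 60.0;
    printf("speed ~ %.0f RPM\n", rpm);
    return 0;
}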
|
H: Wien-Bridge Oscillator
First question - Why does the Barkhausen Criterion state the total phase shift from input to output and back to input have to be zero? I thought the loop gain phase shift was supposed to be -180 for oscillations to be sustained?
Second question - what exactly is the 'input' here? It's an oscillator. Why are they so interested in the voltage v2 ? It's not an input?
AI: Why does the Barkhausen Criterion state the total phase shift from
input to output and back to input have to be zero?
It can be 0°, 360°, 720° or any multiple of 360°.
I thought the loop gain phase shift was supposed to be -180 for
oscillations to be sustained?
That would be true if the op-amp were configured as an inverting amplifier; it's configured as a non-inverting amplifier, hence 0° or 360° is needed.
Why are they so interested in the voltage v2 ? It's not an input?
Every oscillator like this has an explicit output and an implicit input to which the phase-shifted signal is fed back. That input is marked v2; it is the non-inverting input referred to earlier.
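To put numbers on that, the C sketch below evaluates the feedback fraction v2/vout of the Wien network (series RC arm into parallel RC arm); the R and C values are assumed examples. At f0 = 1/(2*pi*R*C) the magnitude is 1/3 and the phase is 0°, which is why the non-inverting amplifier must supply a gain of exactly 3 and no extra phase shift.

#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    double R  = 10e3;                         /* assumed, ohm */
    double C  = 16e-9;                        /* assumed, F   */
    double f0 = 1.0 / (2.0 * M_PI * R * C);   /* ~995 Hz      */
    double f_list[] = { 0.5 * f0, f0, 2.0 * f0 };

    for (int i = 0; i < 3; i++) {
        double w = 2.0 * M_PI * f_list[i];
        double complex Zc   = 1.0 / (I * w * C);
        double complex Zs   = R + Zc;               /* series arm   */
        double complex Zp   = R * Zc / (R + Zc);    /* parallel arm */
        double complex beta = Zp / (Zs + Zp);       /* v2 / vout    */
        printf("f = %7.1f Hz  |beta| = %.3f  phase = %6.2f deg\n",
               f_list[i], cabs(beta), carg(beta) * 180.0 / M_PI);
    }
    return 0;
}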
|
H: How to apply two NOT gates sequentially in VHDL?
I have one signal:
signal 1 : std_logic := '0';
I want this signal to go through two sequential NOT gates:
signal 2 : std_logic := not(not(1));
Two NOT gates will be automatically converted to nothing, but I need two NOT gates to add some signal delay.
It will work if I only have one NOT gate; a single NOT gate is automatically added when I write signal 2 : std_logic := not(1);
Really appreciate it if someone can help.
AI: Well, what you want is some delay, not really NOT gates.
What do you want that delay for? Adding random gates isn't a reliable solution; logic synthesis will optimise the equations as needed. Depending on the layout, a pair of inverters may even take less time than the routing.
If you want some delay between signals, for example to sample data correctly with respect to another signal, you should place timing constraints, and perhaps resample these signals with a clock.
A typical use case for cascaded NOT gates is a ring oscillator; in that case, special attributes are given to the synthesiser to disable optimisation. These attributes are usually named "SYN_PRESERVE" or "KEEP".
|
H: Resonance vs Undamped response
Why does this book say that resonance only occurs in an RLC circuit?
If I have a two-pole complex-conjugate system where the damping factor is low, there will be a resonant peak in the frequency response of that system; it is not necessary that both a capacitor AND an inductor are present, right?
AI: You may create a 2 pole conjugate mechanical system with springs and weights: it doesn't have an inductor or a capacitor.
And resonance is also a characteristic of an electrical circuit with inductance and capacitance.
It's even possible to create mixed systems with a transducer and capacitance or inductance.
But it's not possible to create a resonant system unless you have out-of-phase energy storage. In particular, it's not possible to create a 2-pole complex-conjugate resonant system unless you have some form of out-of-phase energy storage.
The phase response may be due to a physical inductor or capacitor, or it may be due to any kind of storage with electronic phase control. You may build an 'inductor' out of op-amps and capacitors: this was standard for electronic filters before they were replaced with digital filters.
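A quick way to see that only zeta and wn matter is to sweep a generic two-pole transfer function; the C sketch below (illustrative values) shows a resonant peak appearing whenever zeta < 1/sqrt(2), regardless of what physically provides the two energy-storage mechanisms.

#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    /* Generic two-pole lowpass: H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2).
       Nothing here says "inductor" or "capacitor" -- only zeta and wn. */
    double wn = 1.0;
    double zetas[] = { 0.2, 0.5, 1.0 / sqrt(2.0), 1.0 };

    for (int i = 0; i < 4; i++) {
        double zeta = zetas[i], peak = 0.0;
        for (double w = 0.01; w <= 10.0; w *= 1.01) {
            double complex s = I * w;
            double complex H = (wn * wn)
                             / (s * s + 2.0 * zeta * wn * s + wn * wn);
            if (cabs(H) > peak) peak = cabs(H);
        }
        printf("zeta = %.3f  peak |H| = %.2f\n", zeta, peak);
    }
    return 0;
}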
|
H: Problem with the output of the engine control signal
This is a program to control a DC motor at three speed levels and display the speed (RPM) on a 7-segment LED display using an 8051, with external interrupt 0 used to read pulses from the encoder. It compiles in Keil C without errors, but in the Proteus simulation no control signal comes out of pin P2.5. When I delete the display function call in main, the motor becomes controllable, so I think the problem is in the display function.
Here is the code:
#include <reg51.h>
#define on 0
#define off 1
sbit led1 = P2^0;
sbit led2 = P2^1;
sbit led3 = P2^2;
sbit led4 = P2^3;
sbit in = P3^2;
sbit out = P2^5;
sbit low = P1^0;
sbit medium = P1^1;
sbit high = P1^2;
sbit stop = P1^3;
unsigned int count = 0, n, t = 0;
char so[] = {0xc0, 0xf9, 0xa4, 0xb0, 0x99, 0x92, 0x82, 0xf8, 0x80, 0x90}; // for number 0->9
void delay_ms (int time)
{
unsigned int i,j;
for (i = 1; i < time ; i++)
{
for (j = 1; j < 125; j++); //delay 1ms
}
}
void init()
{
in = 1;
IT0 = 1;
EX0 = 1;
EA = 1;
TMOD = 0x10;
ET1 = 1;
TR1 = 1;
}
void demXung() interrupt 0
{
count++;
}
void timer1() interrupt 3
{
t++;
TH1 = 0xfc;
TL1 = 0x18;
TR1 = 1;
if (t>=60)
{
n = count;
count = 0;
t = 0;
}
}
void display (unsigned int dem)
{
unsigned char nghin, tram, chuc, donVi;
int i;
nghin = dem/1000;
tram = (dem%1000)/100;
chuc = (dem%100)/10;
donVi = dem%10;
for(i=0;i<50;i++)
{
led1 = on; P0 = so[nghin]; delay_ms(100); led1 = off;
led2 = on; P0 = so[tram]; delay_ms(100); led2 = off;
led3 = on; P0 = so[chuc]; delay_ms(100); led3 = off;
led4 = on; P0 = so[donVi]; delay_ms(100); led4 = off;
}
}
void dung()
{
if (stop == 0)
{
low = medium = high = 1;
out = 0;
}
}
void thap()
{
if (low == 0)
{
medium = high = 1;
low = 0;
out = 1;
delay_ms(30);
out = 0;
delay_ms(70);
}
}
void trungBinh()
{
if (medium == 0)
{
low = high = 1;
medium = 0;
out = 1;
delay_ms(50);
out = 0;
delay_ms(50);
}
}
void cao()
{
if (high == 0)
{
low = medium = 1;
high = 0;
out = 1;
delay_ms(90);
out = 0;
delay_ms(10);
}
}
void main()
{
init();
out = 0;
while (1)
{
display(n);
dung();
thap();
trungBinh();
cao();
}
}
AI: Your display function has four 100 ms delays and loops 50 times - that works out to 20 seconds. Whilst that is happening, nothing other than your interrupts gets processed.
You need to re-arrange how you update your display. It looks like you are using multiplexing, so you might want to use a timer to generate regular interrupts, say every 10 ms, update one digit per interrupt, and you won't need delay_ms() at all in the display path. Once you do this, the rest of your code will be processed far more often than every 20 seconds.
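A minimal sketch of that approach in the same Keil C style, reusing the pin names and segment table from the question. It is not a drop-in file: it assumes a 12 MHz crystal, uses Timer 0 (interrupt 1, which the original code leaves free) with a reload of 0xD8F0 for roughly 10 ms per digit, and init() would additionally need TMOD |= 0x01; TH0 = 0xD8; TL0 = 0xF0; ET0 = 1; TR0 = 1;. main() then only calls display_set(n) plus the motor functions, with no blocking delays for the display.

#include <reg51.h>

sbit led1 = P2^0;              /* same digit-drive pins as the question */
sbit led2 = P2^1;
sbit led3 = P2^2;
sbit led4 = P2^3;

unsigned char code so[] = {0xc0, 0xf9, 0xa4, 0xb0, 0x99,
                           0x92, 0x82, 0xf8, 0x80, 0x90};

unsigned char seg[4];          /* precomputed pattern for each digit */
unsigned char cur = 0;         /* digit currently being driven       */

/* Call this whenever the displayed value changes (e.g. once a second). */
void display_set(unsigned int dem)
{
    seg[0] = so[dem / 1000];
    seg[1] = so[(dem % 1000) / 100];
    seg[2] = so[(dem % 100) / 10];
    seg[3] = so[dem % 10];
}

/* Timer 0, mode 1: refresh one digit per interrupt (~10 ms at 12 MHz). */
void display_isr(void) interrupt 1
{
    TH0 = 0xD8; TL0 = 0xF0;             /* 65536 - 10000 machine cycles  */
    led1 = led2 = led3 = led4 = 1;      /* blank (digits are active-low) */
    P0 = seg[cur];
    switch (cur) {
        case 0: led1 = 0; break;
        case 1: led2 = 0; break;
        case 2: led3 = 0; break;
        case 3: led4 = 0; break;
    }
    cur = (cur + 1) & 3;
}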
|
H: Intuitive understanding of relation between phase margin and peak overshoot and/or damping factor
Second, percent overshoot is reduced by increasing the phase margin, and the speed of the response is increased by increasing the bandwidth.
"Control Systems Engineering, Norman S. Nise"
The mathematical proof of this statement is available in the book but I am unable to understand it in a physical sense.
AI: It may become more understandable if you relate it to the open loop gain, k.
With regard to a simple 2nd order transfer function with 2 poles.....
Increasing k moves the closed loop system closer to the point of instability. Increasing k increases percentage overshoot and reduces rise time (faster response). Damping ratio, zeta is reduced.
So, increasing k moves the system closer to the point of instability and so phase margin and gain margin are reduced.
Interestingly, increasing k reduces the damping ratio, zeta, but increases the undamped natural frequency, Wn, and so changing k has no effect on settling time.
EDIT
The behavior of a closed loop system with just 2 poles (2nd order system) is totally determined by the combination of the values of zeta, the damping ratio and Wn, the undamped natural angular frequency. These two variables determine the position of the poles on the S-plane and so you can also say that the behavior of a 2nd order system is determined by the position of the poles on the S-plane.
Well, you might say, I've already said that closed loop system behavior changes with the value of K, the open loop gain. That is true and it is true because changing the value of K in the open loop transfer function alters the values of zeta and Wn in the closed loop transfer function. So ultimately, the behavior of the closed loop system depends solely on the values of zeta and Wn in the closed loop transfer function.
Consider a closed loop system with a low value of open loop gain, K, such that the value of zeta is greater than one (over-damping). The two poles on the S-plane will be real and spaced apart on the real axis. As the open loop gain K is increased, the poles move towards each other (converge) until they meet, still on the real axis. At this value of K the poles have the same value, zeta has a value of 1, and this is critical damping. If the open loop gain is increased further, zeta becomes less than one (under-damping), and the two poles diverge, moving away from each other at right angles to the real axis and parallel to the imaginary axis. The two poles now have an imaginary component; they have become complex conjugate poles, and the system is now oscillatory with some overshoot in response to a step input in the time domain. Oscillatory does not mean unstable: it means that, in response to a step input, the closed loop system will oscillate a bit before the output settles down to a steady state value.
This tracing of the movement of the poles of a closed loop system in response to the increase of some system parameter, such as open loop gain, from 0 to infinity is the basis of the Root Locus technique.
As K is increased and the poles move away from the real axis, parallel to the S-plane's imaginary axis, the distance from the origin to a pole equals Wn, the undamped natural angular frequency, and the cosine of the angle that line makes with the real axis equals zeta, the damping ratio. So, as K increases, the poles move further from the real axis, reducing the damping ratio (the angle gets closer to 90 degrees, so its cosine, and hence zeta, falls), while the distance from the origin increases, increasing Wn.
Hence we can see how the position of the poles is determined by the values of zeta and Wn.
Incidentally, the distance from the real axis to the pole, the distance along the imaginary axis, is equal to Wd, the damped angular frequency which is the frequency that the closed loop system will oscillate at in response to a step input before settling down to its steady state value.
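To attach numbers to the original statement, for the standard unity-feedback second-order system G(s) = wn^2/(s(s + 2*zeta*wn)), both the closed-loop percent overshoot and the open-loop phase margin are functions of zeta alone, so raising the phase margin necessarily lowers the overshoot. The C sketch below uses the textbook formulas, overshoot = 100*exp(-pi*zeta/sqrt(1-zeta^2)) and phase margin = atan(2*zeta/sqrt(sqrt(4*zeta^4+1) - 2*zeta^2)), and also prints the common PM ~ 100*zeta rule of thumb, which holds reasonably well up to zeta of about 0.6.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double zetas[] = { 0.2, 0.4, 0.6, 0.8 };

    for (int i = 0; i < 4; i++) {
        double z  = zetas[i];
        double os = 100.0 * exp(-M_PI * z / sqrt(1.0 - z * z));
        double pm = atan(2.0 * z /
                         sqrt(sqrt(4.0 * z * z * z * z + 1.0) - 2.0 * z * z))
                    * 180.0 / M_PI;
        printf("zeta = %.1f  overshoot = %5.1f %%  PM = %4.1f deg  (100*zeta = %2.0f)\n",
               z, os, pm, 100.0 * z);
    }
    return 0;
}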
|
H: What will be the equivalent resistance in this situation?
We have to find the equivalent resistance between A and B.
My attempt:
Let's simplify the circuit.
simulate this circuit – Schematic created using CircuitLab
Now,
$$\frac{1}{R_P}=\frac{1}{5+5}+\frac{1}{5+5}+\frac{1}{5}$$
$$R_P=\frac{5}{2}\Omega$$
However, this is not the correct answer; the correct answer is 3.125 ohm. Why is that?
Edit:
I was able to simplify the circuit correctly:
simulate this circuit
AI: This problem is a good example of why diagonal schematic symbols/connections can make it more difficult to interpret schematics correctly. Re-drawing the circuit without the diagonals should make things more clear:
Steps given here for solving:
|
H: Op-Amp complex impedance
I am having some trouble understanding the background of this question.
I think they are saying that as you sweep the frequency of Vin, the impedance Zin seen will be different as you pass through the different pole locations. For example, at DC the impedance should be purely resistive? A capacitor introduces a pole, so a -20 dB/dec slope and a -45° phase shift at the pole frequency, which eventually becomes a -90° shift.
Can someone just provide me some hints here to help me solve this?
AI: The op-amp has a phase margin of 45 degrees at the unity-gain frequency (UGF), so you can model it as follows:
|
H: Mixing current in Gilbert cell
I'm currently breaking my head over this Gilbert cell:
I understand the workings of this circuit; that is not the issue I'm having.
What I do not understand is, when looking at the current waveforms:
How do you get this resulting output? I understand it all the way to where the red arrows are drawn, but not how you get this result from combining those two. Where is the DC current offset?
AI: How do you get this resulting output? I understand it all the way to
where the red arrows are drawn, but not how you get this result from
combining those 2,
A - B is the special thing here. Signal A minus signal B.
The circuit produces an AC part of A that is \$\color{red}{\text{antiphase}}\$ (negative in polarity) to the AC part of B so, if those AC parts each have a magnitude of (say) "1", then when we do "A - B" we get an AC output of "2". In other words \$A_{AC} - B_{AC} = \color{blue}{1 - (- 1)} = \color{orange}{1+1 = 2}\$.
But the DC parts of A and B are near-enough identical, hence when we do A - B we get a near-zero DC output.
where is the DC current offset?
It is now zero due to subtraction i.e. A minus B.
|