H: How to calculate power supply output current from its three phase input current In my lab I have three-phase electric power according to the following diagram: I'm using a three-phase power supply TRIO-PS-2G/3AC/24DC/40 which delivers 24V and max current 40A. Using a Current Clamp I measured the input current on each phase to be 0.6A. My question is, how to calculate the output current according to this three-phase input current. From its manual the power supply efficiency is typ. 93 % (400 V AC). AI: To calculate the input power you need the power factor from the data sheet which is 0.77. \$ Input\,Power = 3 \times 230\;V \times 0.6\;A \times 0.77 = \sqrt 3 \times 400\;V \times 0.6\;A \times 0.77 = 320\;W \\[2ex] Output\,Power = Input\,Power \times Efficiency = 320\;W \times 0.93 = 297.6\;W \\[2ex] Output\,Current = \frac{Output\,Power}{Output\,Voltage} = \frac{297.6\;W}{24\;V} = 12.4\; A \$ But I wouldn't calculate the DC current this way. The power factor and efficiency will change with output load so this calculation can only give a rough guide. Your supply voltage is also likely to vary somewhat. You would be much better off getting a Hall effect AC/DC clamp meter which will let you read the DC current directly with more accuracy. Clamp meters tend to have high current ranges so will read 40 A directly. Make sure you get one that is DC. Many clamp meters are transformer-operated rather than using a Hall effect sensor and are therefore AC only. As a bonus you won't need to cut the DC power leads to insert the ammeter in circuit. If you have multiple leads running from the power supply you just need to put all the positive leads (usually) or all the negative leads through the jaws of the ammeter (but not both sets of leads which should give a zero reading). Check the jaws are large enough to accept all the wires you are using. It is also much safer to be making measurements on a 24 V circuit than on a 400 V three-phase circuit which typically have very high fault current levels if you make a mistake.
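A quick sketch of the same arithmetic in a few lines of Python, using only the figures quoted above (datasheet power factor and efficiency, measured phase current); any small differences from the hand calculation are just rounding:

```python
import math

# Figures quoted in the question/answer above; treat the result as a rough guide only.
v_phase = 230.0      # V, phase-to-neutral (400 V line-to-line)
i_phase = 0.6        # A, measured with the current clamp
pf = 0.77            # power factor, datasheet typical
eff = 0.93           # efficiency, datasheet typical at 400 V AC
v_out = 24.0         # V DC output

p_in = 3 * v_phase * i_phase * pf                  # per-phase form
p_in_check = math.sqrt(3) * 400.0 * i_phase * pf   # line-to-line form, nearly the same number
p_out = p_in * eff
i_out = p_out / v_out

print(f"Input power   : {p_in:.0f} W (check: {p_in_check:.0f} W)")
print(f"Output power  : {p_out:.0f} W")
print(f"Output current: {i_out:.1f} A")
```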
H: Should there be keepout zones between ground pins and ground fill? I have very limited PCB design experience. I've been using this program called Fritzing (it... has its pros and cons). Something I noticed is that when I do a ground fill with this software, it leaves copperless borders around ground pins except for traces I manually draw. For example, here is a rough draft of a power supply board I'm trying to make. The circuit, for reference, is: It's just a linear regulator, some capacitors, and some terminals. Now, take a corresponding PCB layout. This is a rendering of the copper mask (light gray) on the ground layer: I've drawn blue dotted lines around the ground pads. As you can see, there are borders around the ground pins, with only one edge of each pin connected. Most notably there's very minimal copper between the regulator's heat sink fin and the rest of the copper fill. I suspect that this is not a good idea. My gut tells me there should be copper all around those pins, if only for heat management, e.g. something like: So, my question is: Is my suspicion correct? Should those border areas around the pins also be filled with copper? Note: The above PCB is my first draft of this design; I'm aware that it might need improvements; e.g. routing practices, via and trace sizes, capacitor selection and placement, etc. But, no spoilers please! I'm saving that for its own question after I do a little research. For this question, I'm just wondering about those ground fill pin borders in general. AI: This is called thermal relieving. If pads are directly connected to a copper plane, it can be very difficult to hand solder the component, as all the heat from the soldering iron is transferred into the copper plane acting as a heat sink. To solve this, you use thermal relief. All CAD suites I've ever used have settings for this, where you can select how many "spokes" the pad should have, at what angles and how wide they should be. When it comes to SMD parts, if the boards are only reflow soldered, you can often get away without thermal reliefs. This has some advantages when it comes to heat transfer from power components, lower impedance, etc. But it can be a pain to repair such boards. When it comes to thru-hole parts, especially on multi-layer boards, one has to use thermal relieving or you will never be able to properly solder pins that are directly connected to copper planes.
H: The input buffer of a Sample and Hold In a Sample and Hold circuit, I know that the buffer amplifier after the capacitor keeps the capacitor from discharging because of its high impedance and causes the output voltage to be equal to the capacitor voltage. But... what is the function of the input buffer? Why do we need it there? AI: Because the bare sample circuit presents a varying load to its input. This means that if the impedance of the source driving the sample circuit is too high, then the sampling capacitor does not have time to settle out before the switch opens. For the sampling stage that's pictured, this has the effect of reducing the stage's bandwidth, because the voltage left on the cap when the switch re-opens is still largely what it was at the end of the previous sample. For something more complicated, like an ADC where the sampling is charging up a C-2C ladder, any residual voltage on the capacitor may be less predictable, and the resulting error introduced by a poorly-buffered signal would be worse than just low-pass filtering.
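A rough sketch of why the source impedance matters: the hold capacitor charges through the source resistance (plus switch resistance) with time constant RC, so the fractional settling error after an acquisition window t is roughly exp(-t/RC). The capacitor and acquisition-time values below are illustrative assumptions, not figures from the question:

```python
import math

c_hold = 10e-12          # F, hold capacitor (assumed)
t_acq = 100e-9           # s, time the switch is closed per sample (assumed)

for r_source in (50, 1e3, 10e3, 100e3):   # ohms, driving-source impedance
    tau = r_source * c_hold
    error = math.exp(-t_acq / tau)         # fraction of the previous value still left
    print(f"R_source = {r_source:8.0f} ohm  ->  settling error = {error:.2e}")
```

With a stiff (low-impedance) buffer driving the switch the error is negligible; with tens of kilohms it is no longer small, which is the bandwidth-reducing effect described above.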
H: Why do we need to use transmission line theory? I just started to learn transmission line theory. I'm still confused about a lot of things. Every source says that we need to use transmission line theory when the length of the wire is 1/4 of the wavelength of the sine wave or longer; some sources use a different number than 1/4. My question, which no one addresses, is: why exactly does that matter? Why do reflections happen only when the wire is that length? Is it true that every wire is considered a transmission line, but we ignore this effect when its length is shorter than 1/4 of the wavelength? If the answer is yes, does that mean we can technically use transmission line theory and terminations for circuits whose wires are shorter than 1/4 of the wavelength without affecting them? AI: Why do reflections happen only when the wire is at that length? Reflections always happen, it's just that when the line is really short compared to the frequencies involved the effect of the reflections becomes more and more as if the line were purely capacitive or inductive. Where it really starts to matter depends on the circumstances of the circuit, and, to some extent, the attitudes and methodologies of the circuit designers. I'd peg the "need to start thinking about it" threshold at around \$\frac{1}{10}\lambda\$; I'd consider \$\frac{1}{4}\lambda\$ to be the point where it becomes imperative to pay attention to transmission line effects. Note that the frequencies involved aren't always obvious. As a for-instance, the first time that I got bitten by transmission line effects with digital circuitry involved a clock line to a 74HCxxx part, and a clock that was in the hundreds of kHz; the clock line was somewhere between six inches and a foot (I can't remember), but coupled with no line termination at all, it meant that my part was seeing multiple clock pulses. Based on the length of the line, that means that my part was responding to a "clock" at between 500MHz and 1GHz. The actual clock was much less, but 74HC logic puts out really sharp clock edges, which meant there was enough energy at 500MHz that when the un-terminated transmission line was excited by that square edge, the ringing was enough to clock the receiving chip.
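As a rough sketch of where those thresholds land for digital edges, here is a small calculation. The 0.35/t_rise "knee frequency" estimate and the propagation velocity used below are common rules of thumb, not figures from the answer:

```python
# Compare wire length against lambda/10 and lambda/4 for the highest significant
# frequency content. For a digital edge, f_knee ~ 0.35 / t_rise is a common estimate.
v_prop = 2e8            # m/s, rough propagation velocity on a PCB / typical cable (assumed)

def critical_lengths(f_hz):
    wavelength = v_prop / f_hz
    return wavelength / 10, wavelength / 4

t_rise = 2e-9           # s, a sharp 74HC-class edge (assumed)
f_knee = 0.35 / t_rise  # ~175 MHz of significant energy, regardless of the clock rate
l10, l4 = critical_lengths(f_knee)
print(f"f_knee ~ {f_knee/1e6:.0f} MHz, lambda/10 ~ {l10*100:.1f} cm, lambda/4 ~ {l4*100:.1f} cm")
```

The point of the calculation is that the edge rate, not the nominal clock frequency, sets the length at which a line starts behaving like a transmission line.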
H: Is the LM399 temperature coefficient specified with the heater on or off? The LM399 datasheet specifies a temperature coefficient "\$\Delta\$T/\$\Delta\$Temp". The conditions state an ambient range but don't specify if that's with or without the heater on. Which is it? AI: If you read Note 3: "These specifications apply for 30V applied to the temperature stabilizer and –55°C ≤ TA ≤ 125°C for the LM199; and 0°C ≤ TA ≤ 70°C for the LM399." This means that the heater is ON and powered from 30 V. If you use a different heating power, the precision might vary.
H: How can a current be constant and there still be a voltage drop? I'm having trouble understanding how this can be possible. Obviously, I'm missing something but I'm not sure what it is. The amount of voltage at a point depends on the electrical potential energy at that point, which depends on the density of charge (electrons). So how can the voltage drop if the density of charge can't change, since that would cause a change in current? I've tried thinking about it two ways and run into problems each time… The charge density does increase but the speed decreases, causing the same current. But if this were the case, and the density did increase in the resistor, then the resistor would have a higher potential than the wire and charge would stop flowing. The charge density decreases across the resistor and the resistor acts like a funnel, creating a higher charge density at one end than the other. But the difference in charge density is counteracted by an increase in speed, allowing the current to remain constant. However, we know resistance slows down charge; it doesn't speed it up. AI: Maybe you're confusing electric field strength with electrical potential energy. It's true that the two ends of a resistor can have different potentials, endowing the charges at one end with more potential energy than charges at the other, but the electric field inside the resistor, between those ends, is uniform, its strength being equal at all points. There may be a voltage source somewhere, like a battery or capacitor, that has a non-uniform distribution of charges, where you may associate potential with charge density, but the field resulting from that potential difference is not confined to the battery. Any resistor across that source is effectively a uniform distribution of charges within a uniform field, with equal force on each charge in the conductive path, and no charges grouped up anywhere inside the resistor. Potential is a measure of how much work can be done by a charge in travelling from one point to another, consistent with the concept of work done equalling force times distance. The force on each charge in the resistor is the same throughout the resistor, because the field strength at all points is the same, but some charges will have to move further to get to the end with the lower potential. Therefore some charges are able to do more work than others (those nearer the high potential end), because they have to travel further, despite all being pushed by the same amount of force. You can't associate potential with charge density, in the same way you can't say that there are more blades of grass per square metre at the top of a hill than at the bottom because grass at the top has more gravitational potential energy. Soccer balls rolling downhill are not moving because there are more balls per square metre at the top of the hill, they are moving because they find themselves in a gravitational field. You can say that somewhere there may be a charge density imbalance giving rise to a potential difference, but charges in the circuit connected across that potential difference are merely passengers in its field.
H: Do I need Linux for UCB's SPICE(3)? I have been reading an SMPS book with many simulated examples. I want to recreate the sims in the book and follow along with the shown examples. The author states that he uses SPICE for all examples, which I hear was developed at UCB a few decades ago. I already had LTSpice on my PC, so I figured I'd just use it. Then, some circuit examples use components I cannot find in LTSpice (e.g. adders, gain blocks). It seemed like too much work to try and create circuits that are effectively the same in LTSpice (I'm not very skilled in LTSpice as it is). So I figured I'd search for UCB's SPICE. I quickly find the UCB SPICE page, and after enough clicks I find a page with downloads for SPICE. I was expecting executables, but found files with ".tar.gz" and phrases like "SPICE for Linux." Is it the case that UCB's SPICE software is available only for Linux? If not, could anyone currently using it guide me in placing it on my Windows machine, or suggest alternatives? Would I be better off finding workarounds in LTSpice? Thanks for your help! AI: You could try ngspice. This is effectively the "successor" of Berkeley SPICE, supports Windows, and is still maintained. From what I understand it does include effectively the original SPICE components, and has had the plotting and other parts ported to work with modern Linux or Windows. I think your only other good option is using an actual Linux install (either via a virtual machine or as a dual boot), though I would probably not resort to this unless you really want to run the original Berkeley SPICE.
H: Identify temperature sensor in thermostat I somehow managed to damage the temperature sensor in my thermostat while cleaning corroded battery contacts. Apparently the only fix is "buy a new thermostat", though it is a pity to replace this for a component worth a few pennies. This is a "British Gas" branded RF-linked Drayton Digitstat thermostat. The temperature sensor seems to be a two-pin ceramic device with blue body and white tip. As far as I can tell there are no other markings, and there is no standard colour scheme. Update: before desoldering I checked the connectivity, and it seems to be connected to the traces. After desoldering I measured its resistance; at room temperature I've got ~55 kOhm. Holding it between my fingers it goes to ~45 kOhm. AI: If I was a betting man, I would guess it's a 10K\$\Omega\$ \$\beta = 3977\$ or so thermistor. But there is really no way to know from the appearance. You could take it off and measure it, and refer to an R vs. T table to see if that's close. It should be 10K at 25°C if my guess is correct and about 4% lower or higher for every degree C the thermistor temperature is different from 25°C. Be sure to let the thermistor cool and avoid touching it if you are trying to make that measurement. If you get the base value right, chances are if the \$\beta\$ is a bit different it won't make much difference in a substitution, since thermostats are usually used very near to 25°C. Alternatively, if the thermistor is actually bad (in which case the measurement won't make much sense), then substitute fixed resistors for the thermistor and note the temperature reading for each resistor and compare with data sheets of likely potential substitutes. For example, if you substitute a 10K resistor and the display reads 25°C you have one point. If you then substitute a 5K (say 2x 10K in parallel) resistor and the reading is close to 41.5°C then my guess was correct. Edit: Based on your measurement, maybe it's a working 50K @25°C thermistor. Instead of looking at the sensor, I would take a close look at the power supply.
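A small sketch of the check suggested above, using the simple beta equation with the answer's guessed parameters (10 k at 25 °C, beta = 3977); it reproduces both the roughly 4 %/°C slope and the 5 k near 41–42 °C figure:

```python
import math

r25 = 10e3          # ohms at 25 degC (the answer's guess)
beta = 3977.0       # K (the answer's guess)
t0 = 25.0 + 273.15  # reference temperature in kelvin

def r_ntc(t_c):
    """Beta-model thermistor resistance at temperature t_c in degC."""
    t = t_c + 273.15
    return r25 * math.exp(beta * (1.0 / t - 1.0 / t0))

for t_c in (20, 25, 30, 41.3):
    print(f"{t_c:5.1f} degC -> {r_ntc(t_c)/1e3:6.2f} kohm")
```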
H: Electric field in a circuit and potential drop With reference to the above image: Here the battery produces a constant (will it be constant? please explain) electric field across the length of the wire, let's say its value is \$\vec E\$. This field exerts a force on the electrons in the wire and they start moving from higher potential to lower potential. First of all, what will be the value of this constant electric field (if it is so)? Will it be \$ |E| = \frac{\Delta V}{\Delta r}\$ where \$\Delta r\$ is the length of the whole wire across the circuit (including resistances)? Now I have been told that the potential of the +ve terminal of the battery and point A (in the image) are the same, which implies that there is no potential difference between the +ve terminal and A, and thus there should be no electric field! But the electric field exists; how? Also, if the electric field exists, then why is there no potential drop? Coming to resistances \$R_1\$ and \$R_2\$: this constant electric field enters \$R_1\$ and there is a drop in potential this time, i.e. the potential of point B is lower than that of A. Will the electric field across this resistor be the same as calculated above, i.e. \$ |E| = \frac{\Delta V}{\Delta r}\$, or will it change? Here Ohm's law states that \$\Delta V_1 = iR_1\$, so does it follow that the electric field inside this resistor will be \$|E| = \frac{\Delta V_1}{\Delta r_1}\$ where \$\Delta r_1\$ is the length of the resistor \$R_1\$? What will happen and which is true? What will be the electric field through resistor \$R_2\$? And what will be the field between C and the -ve terminal of the battery? Please explain all of this. AI: Here the battery produces a constant (will it be constant? please explain) electric field across the length of the wire, The electric field will be constant with respect to time, but it will not be uniform (with respect to location) throughout the circuit. The electric field obeys the microscopic version of Ohm's law. $$\vec{E} = \frac{\vec{j}}{\sigma}$$ where \$\vec{j}\$ is the current density and \$\sigma\$ is the conductivity. Only if the ratio of these two terms is uniform, for example in a uniform wire, will the electric field be uniform. This is unlikely in a circuit with lumped components. Now I have been told that the potential of the +ve terminal of the battery and point A (in the image) are the same This is a very good approximation. The voltage drop in the wire between the battery and resistor will be small. But it is not exactly true. The voltage drop will be non-zero. It depends on the resistance of the wire and the current flowing through it. so does it follow that the electric field inside this resistor will be \$|E| = \frac{\Delta V_1}{\Delta r_1}\$ where \$\Delta r_1\$ is the length of the resistor \$R_1\$? No, the electric field will follow the microscopic version of Ohm's law given above.
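To put rough numbers on the microscopic Ohm's law above (the values are illustrative assumptions, not from the question): with the same current and cross-section, the field inside a copper wire is tiny compared with the field inside a resistive material, which is why essentially all of the potential drop appears across the resistors rather than the wire.

```python
i = 0.1                      # A, same current everywhere in the loop
area = 1e-6                  # m^2, roughly a 1 mm^2 conductor (assumed)
j = i / area                 # A/m^2, current density

sigma_copper = 5.8e7         # S/m, copper conductivity
sigma_resistive = 10.0       # S/m, some lossy resistive composite (assumed)

print(f"E inside the copper wire: {j / sigma_copper:.2e} V/m")
print(f"E inside the resistor   : {j / sigma_resistive:.2e} V/m")
```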
H: Is a combinational logic circuit a Finite State Machine? Is a combinational logic circuit a Finite State Machine? In other words, is the class of combinational circuits a subset of the class of Finite state machines? AI: EDIT: This answer addresses the original question, which was edited later: "Can combinatorial logic be seen as a subset of FSMs?" A combinatorial circuit is a Finite State Machine. In a Mealy representation it will be an FSM with a single state and a self-transition on every input, with outputs depending solely on the inputs. For example, an AND gate can be represented as the following FSM: (Note about notation - the triangle indicates the FSM's "initial state".) In a Moore representation it will have a number of states corresponding to the number of possible outputs. For AND it will look like this: In the above notation, the square indicates the output in that state. The initial state can be any of these.
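A minimal sketch of the single-state Mealy view in code (my own illustration, not from the original answer): one state, every input self-transitions back to it, and the output is a function of the current input only.

```python
class AndGateMealy:
    """AND gate viewed as a Mealy FSM: one state, every input self-transitions,
    and the output depends only on the current inputs."""

    def __init__(self):
        self.state = "S0"          # the single state

    def step(self, a: int, b: int) -> int:
        self.state = "S0"          # S0 -> S0 on every input (self-transition)
        return a & b               # Mealy output: a function of the inputs

fsm = AndGateMealy()
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"{a} {b} -> {fsm.step(a, b)}")
```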
H: Will this work for power path? My goal is to disconnect the battery while the device is connected to the USB input. However, when the device is disconnected from USB power, the battery will be connected to the system load. I am using back-to-back NMOS transistors with their gates connected to USB power to turn on the LiPo charger when the device is plugged in to USB. When the USB power rail is low, the NMOS transistors should block current. I am trying to achieve the opposite effect with the connection of the battery to the load with back-to-back PMOS transistors. The diode is intended to prevent the battery from turning the NMOS transistors back on. My two questions: Will this work in theory assuming I select appropriate transistors? I'm not doing anything stupid, right? Is the pull-down resistor necessary for the PMOS transistors? Thanks, this is my first post so go easy on me. AI: For full enhancement and minimum Rdson ("saturation"), most power MOSFETs are specified at Vgs = 10 V. Logic level FETs are spec'd at lower Vgs, such as 5 V. In your circuit, the drive voltage for M1 and M3 is nowhere near this. Assume both FETs are ON, write the circuit voltage at each of the six pins on the drawing, and see if the required conditions are met. For high-side switching with an n-channel FET, the gate has to be 5 V or 10 V above the input voltage being switched. This is why most high-side power switch applications use p-channel FETs. Linear Technology, Maxim, and others make high-side driver chips with an onboard charge pump to produce the required gate voltage.
H: Split one DC supply into 4 independent DC supplies I want to take one DC source from a Rigol 832 and split that output into 4 adjustable, independent DC outputs that I can connect in series. Is this possible? The goal is to make a circuit that will emulate a 4S battery pack where I can adjust each independent output individually. AI: 4 shunt regulators connected in series. simulate this circuit – Schematic created using CircuitLab Depending on what you need to do with the outputs you may need something stronger than a TL431, e.g. figure 21 from the TL431 datasheet: simulate this circuit
H: Solder bridge pad spacing and scratch trace width To save cost on a board I'm designing, I'm considering replacing an 8x2 male pin header intended for jumpers (which previously replaced an 8-position DIP switch) with 8 pairs of rectangular solder bridge pads. My question is: How much of a gap should I leave between the pads in each pair? Also, if I want some of them to be closed by default, where you scratch away the trace to open them (and can rebridge them with solder later), how thick should I make the scratch trace? AI: There's no definitive answer to your question, because it depends: How often do you expect to need to bridge or un-bridge these? What is the accessibility of the area like? Are other components nearby making it difficult to work on the bridges? Who are these jumpers intended for? End users? Hobbyists? Engineers? Are you space-constrained? The bridges could be 0201 component size and require magnification to work on. I expect this to not apply, but what is the voltage? Higher voltages would require a larger gap. Have you hand-soldered 0402 components? The gap between pads of an 0402 is about 0.5 mm. On the larger end, 2.54 mm (0.1 inch) is the same distance as typical perfboard (proto board) and those are relatively easy to bridge with enough solder (of course the copper-to-copper pad spacing is probably closer to 1 mm). As long as you keep the gap wide enough for the fabricator tolerances (e.g. 0.15 mm or 6 mil is sometimes a minimum), and voltage is not a concern, select a gap that meets your criteria, similar to the ideas I listed. As for track width for closed-by-default jumpers, that depends primarily on current and how much effort you want to go through to cut the track later. If these are just short tracks for logic voltages, consider using an 0402 component with a 0.2-0.3 mm wide track connecting them. That's what I do for prototypes and it works well. Mind you, cutting tracks with a knife later is a bit messy. I prefer to use 0-ohm 0402 jumpers instead of tracks meant for cutting.
H: Request for circuit review: varactor-tuned AM radio receiver I apologize if this is not the place for newbie questions or circuit review requests, but I was hoping for some wisdom on an AM receiver circuit I have sketched out. From left to right it is intended to read as tuner/demodulator/amplifier. Actual component values aside, is there anything that jumps out as woefully incorrect in terms of how the circuit is structured? Varactor-tuned AM receiver and amplifier: AI: Basic block functions (tuner, demodulator, amplifier) are reasonable, and in correct order. However, each has its own problems: Tuner varactor should be biased so that resonator Q remains high. Tuner EARTH is not shown. Demodulator diode (D2 shown below) often has no DC bias. Audio amplifier should not be DC coupled from the demodulator in an AM radio. OPamp might benefit from a negative as well as positive DC supply, since its audio inputs and output is ground-referenced. simulate this circuit – Schematic created using CircuitLab A single DC power supply is possible, but requires extra components. C3 and R3 are needed so that the demodulator stage does not pass on its DC output voltage to the high-gain audio opamp. This simple diode (D2, C2, R2) demodulator generates a DC output voltage proportional to AM carrier amplitude, with audio riding along...you want the audio, but not the DC.
H: Why does reactive power injection mitigate sagging grid voltage? I understand that reactive power injection can be used by generators to support the grid voltage when sagging, but I’ve never understood why this works. I thought it could be charging transmission line capacitance or something, but I can’t make sense of it when I try to follow through with this starting point. I’ve seen similar questions on here with great answers. In Why is it desirable to inject reactive power into a transmission system? the answer explains how capacitive compensation on the transmission line can reduce the reactive power draw from a generator to power a partially-reactive load, ultimately explaining why reactive power is relevant in a power system. In Why does reactive power affect voltage? the answer from Olin explains how a reactive load can draw additional current from a generator and result in real power loss and voltage drop across the transmission line. I am guessing there is enough info in these answers that I could come to an answer to this question, but I think I need some spoon-feeding. The assumption in my question is that reactive power injection from a generator supports grid voltage. Is this generally true? If so, could someone help me understand how reactive power injection can be used to raise a sagging grid voltage, or the voltage at a local bus? Are there truths or assumptions made about the grid’s loads and lines in doing this? AI: The primary assumption about a grid is that a high percentage of the load energy is used by induction motors. Induction motors have a lagging power factor and thus require reactive volt-amperes, aka reactive power. Reactive VA is a measure of energy stored during one half-cycle and released during the following half-cycle of the current waveform. If the generator must supply the reactive VA, the current from the generator to the induction motor loads is higher than is necessary to supply the energy taken by the motor and converted to mechanical energy. If the reactive VA can be supplied from capacitors located closer to the motor, the voltage drop and lost energy between the generator and motor are reduced. Locating capacitors or "injecting reactive power" close to the loads having lagging power factor is essential. Doing that at the generator helps with losses and voltage drop inside the generator, but does nothing to help with the transmission system component losses. Reactive power can be "injected" anyplace between the generator and the reactive load, but the benefit increases as the injection point gets closer to the load. The word "injected" is really not very appropriate. It implies forcing something into a place against some opposing force. It is really more like providing an outlet: a temporary storage place or overflow reservoir. What is reactive energy? may be of interest.
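As a rough numerical sketch of the mechanism described above: the sag across a (mostly reactive) line impedance is approximately (R·P + X·Q)/V, so supplying the load's Q locally removes the X·Q part of the sag. The line impedance and load values below are illustrative assumptions, and the formula is only a short-line approximation:

```python
v_nom = 400.0                 # V, nominal bus voltage (assumed)
r_line, x_line = 0.25, 1.0    # ohms, line resistance and reactance (assumed)
p_load = 10e3                 # W, real power of the load
q_load = 7.5e3                # var, lagging reactive power (induction-motor-like load)

def sag(q_from_line):
    """Approximate voltage drop along the line for a given Q drawn through it."""
    return (r_line * p_load + x_line * q_from_line) / v_nom

print(f"Sag with no local compensation : {sag(q_load):.1f} V")   # ~25 V
print(f"Sag with Q supplied at the load: {sag(0.0):.1f} V")      # ~6 V
```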
H: How can I add soldering holes, which are attached to IC pins, in Kicad? I'm now almost done with a schematic in Kicad's Eeschema, but some links to the "outside world" are still missing, and I don't know yet how to add them to the schematic, so that they will appear as soldering holes in the PCB later. Here is the relevant part of my schematic: As you can see, it's all about 7 pins on a microchip (TDA7318 mixer): 3 of them belong to its I²C interface, and the other 4 ones are audio outputs, which shall lead to an amp residing on another PCB. Now I'd like to know the following: Which parts shall I add to the schematic, so that Pcbnew will add one soldering hole per pin? For the I²C link, I've already added a GS3 connector (sadly without knowing whether this is the correct part). In real life, I would attach a level shifter with a 3-pin 0.1" header here. For the 4 audio output pins, I would simply solder one short wire each, which would then lead to the amp PCB. Any suggestions? AI: Use Connector_Generic, Conn_01x03 and then for the footprint, choose Connector_PinHeader_2.54mm, Connector_PinHeader_2.54mm:PinHeader_1x03_P2.54mm_Vertical.
H: Designing a small footprint DC/DC converter This is my first time diving into DC/DC converters, so I'm learning a lot of new information and am aware that I may miss some important details. I'm hoping that others with more experience could point out and educate me about those, why they are crucial, and what they cause. A little bit about the circuit My input voltage is around 5V (I put 4.5 to 5.5V in the TI designer), with an output of a little over 3.3V. The expected load of the powered circuit is around 800mA, worst-case about 1.4A for a few minutes. For this purpose, I chose TI's LM2831XMF/NOPB (mostly because it was one of the only available 1.5A DC/DC converters I could find in stock near where I live due to the chip shortage). I changed some of the components from the original "compact" design provided by TI's generator, like the inductor (IHLP1212BZER3R3M11) and the "drain" diode (SL04-HE3-08), which should be decent replacements for the suggested parts. The schematic This is the schematic I came up with at the end: The PCB The board I use has 4 layers. The red layer is the top layer; below it I have a ground layer followed by a VCC layer and the bottom layer. I try to provide a good ground for the output capacitor, diode and input capacitor (as suggested in the switching chip's datasheet). I also put big pours as traces to reduce resistance between parts. Furthermore, I tried to reduce both fill and drain loops as much as possible, so this is the layout I came up with at the end: Without copper pour for better visibility: My existing concerns What I'm worried about the most here is the feedback line, which I routed not that far from the inductor and could potentially cause output ripple. Would it be better to route it around the other side (to avoid the inductor more)? Doing that, I'm worried I'll make it into an even bigger antenna and worsen the situation. Do you perhaps have any suggestions on how to fix/deal with such an issue, or is it a non-issue as-is? The solution: This is the final design (imgur.com) I'll use in production. I'll also update with some ripple data once I get the circuit and assemble the regulator. Update: this design produces somewhere around 5 mV of ripple, which should be good enough for most designs. AI: The SW copper pour sticks out too much beyond the L1 pin. This does not improve current carrying capacity much but increases noise. The footprints for capacitors look rather small. Did you cut the voltage rating too close? But the biggest problem is that C7 is way too far from the GND pin of U1 and connected to it through two sets of VIAs. I recommend moving it closer and connecting it via an uninterrupted GND pour. Something like this, perhaps?
H: Why does a voltage divider linearize the effects of a non-linear thermistor? I saw a design specification with an NTC thermistor in a voltage-divider configuration. The design method said that we must have $$R_1=\sqrt{R_{NTC\min}R_{NTC\max}}$$ to produce a linear output voltage for a given range despite \$R_{NTC}\$ being highly non-linear. (Why?) Isn't the effect of \$R_1\$ to make the seen resistance (the Thevenin resistance) close to \$R_1\$ if \$R_{NTC}\$ is sufficiently large? In other words, $$ \frac{R_1 R_{NTC}}{R_1 + R_{NTC}} $$ is not a linear equation, nor does it linearize the highly non-linear thermistor even for a given range. So we still have a non-linear input voltage at the op-amp. I can't see why the output voltage will be linear for any value of \$R_1\$. AI: No linearization takes place in the op-amp circuit, so you can ignore that in any analysis. They are just saying that to make the best of a bad bargain the series resistor should be the geometric mean of the end points of the range in resistance. Another way to look at it is that the \$\Delta V/\Delta T\$ is maximized at the temperature where the thermistor resistance equals the series resistor resistance. The output per degree decreases on either side of that value. By adding series resistance to a bridge circuit it is possible to get an S-shaped error from linear that is fairly good over a relatively narrow range of temperatures, at a cost of complexity and output; however, the fearsome nonlinearity of the thermistor wins out over a wider range, so schemes involving resistor networks with multiple thermistor elements are sometimes used. simulate this circuit – Schematic created using CircuitLab This works because the energizing voltage of the bridge proper varies in the desirable direction with the thermistor resistance (as the thermistor resistance increases at low temperatures, the energizing voltage goes up, increasing the output per degree). I played around with this a few days ago, mostly to see how easy it would be to do with modern tools (using Python scripts and libraries and extended Steinhart-Hart equations). You can see one result here. However, in 2021, a more sensible approach in most cases is to provide an ADC with a lot of dynamic range and deal with the nonlinearity in the digital domain.
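A quick numerical sketch of the geometric-mean rule (the beta-model thermistor parameters and the 0–50 °C range are assumptions for illustration, not from the question): comparing the divider output against a straight line through the range endpoints shows that R1 equal to the geometric mean gives noticeably less deviation than R1 equal to either endpoint resistance.

```python
import math

vs, r25, beta = 3.3, 10e3, 3977.0        # supply and thermistor model (assumed)

def r_ntc(t_c):
    return r25 * math.exp(beta * (1 / (t_c + 273.15) - 1 / 298.15))

t_lo, t_hi = 0.0, 50.0

def worst_nonlinearity(r1):
    """Max deviation of Vout(T) from the straight line through the endpoints,
    as a fraction of the full output span."""
    def vout(t):
        return vs * r1 / (r1 + r_ntc(t))
    v_lo, v_hi = vout(t_lo), vout(t_hi)
    dev = 0.0
    for i in range(501):
        t = t_lo + (t_hi - t_lo) * i / 500
        ideal = v_lo + (v_hi - v_lo) * (t - t_lo) / (t_hi - t_lo)
        dev = max(dev, abs(vout(t) - ideal))
    return dev / abs(v_hi - v_lo)

r_geo = math.sqrt(r_ntc(t_lo) * r_ntc(t_hi))   # the geometric-mean choice
for r1 in (r_ntc(t_lo), r_geo, r_ntc(t_hi)):
    print(f"R1 = {r1/1e3:6.2f} kohm -> worst-case nonlinearity "
          f"{100 * worst_nonlinearity(r1):.1f} % of span")
```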
H: Phase angle of induced voltage in transformer? Why does the voltage induced in the secondary coil of a transformer lag 90 degrees from the flux in the core? The secondary voltage is induced when the flux cuts it, but then, why does the induced voltage in the secondary lag 90 degrees behind the flux? What is the reason behind it? AI: Because of the laws of induction, V = -N dφ/dt. Notice the d/dt bit? That naturally shifts a sine wave by 90°. In other words, the secondary induced voltage is proportional to the rate of change of magnetic flux and the number of turns.
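For a sinusoidal flux this falls straight out of the derivative; writing \$\phi(t)=\Phi_{max}\sin(\omega t)\$: $$v_2(t) = -N_2 \frac{d\phi}{dt} = -N_2\,\omega\,\Phi_{max}\cos(\omega t) = N_2\,\omega\,\Phi_{max}\sin\left(\omega t - 90^\circ\right)$$ so the induced voltage is a sinusoid of the same frequency, shifted by 90° relative to the flux, with an amplitude proportional to both \$\omega\$ and \$N_2\$.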
H: How to (generally) identify a temperature sensor? What should be the approach to figuring out what a temperature sensor is, when taking apart an appliance or “reverse engineering”? This has come up in a couple scenarios for me recently: I am installing an alternator controller, with a dedicated 2-lead temperature sensor. (I wanted to try a replacement in case the original was damaged, as it had been crushed somewhat.) I am trying to figure out how my “automatic” range hood “works”. (It doesn’t really.) In the first case the sensor was encased in something and wrapped with heat shrink. In the second case it is a tiny black blob (smaller than a discrete transistor style blob). Is there a good approach/device/controller that could work through the most common types of thermocouples/RTDs/etc. to figure out what it is? If so, is there a safest order of testing given the sensor, and maybe its polarity, are unknown? AI: Many temperature sensors you'll run into with consumer gear are resistive and polarity does not matter. So the (two) leads are the same color (if any) and they measure the same resistance in either direction. They run the gamut from precision thermistors used for body temperature measurement to more-or-less linear bulk silicon resistive sensors used for high temperatures and base-metal RTDs used for temperature measurement in comfort heating. The more linear resistive sensors have relatively little sensitivity, which a bit of temperature change will illustrate. Some are identical in appearance to SMT resistors. Image: Wikipedia Image Newark Canada: Image Digikey: (glass bead) Thermocouples are necessary for higher temperatures such as kilns and are very common in industrial equipment such as molding machines and extruders. There will be two wires of different colors (or a special color-coded and polarized connector) and more complex circuitry. The colors may be a clue to the thermocouple type, although there are a number of standards and you may need to be a bit of a student of industrial history to guess the code that applies. They generate a small voltage when a temperature difference exists from one end to the other. An ordinary multimeter on the 199.9mV range will easily pick that up with a bit of heating. A type K thermocouple has an output of around 41\$\mu\$V/°C near room temperature so a 5°C temperature difference from a warm paw will give you a couple counts on the 200mV range. One use in consumer equipment is the "thermocouple" (actually a thermopile) used as a safety in gas valves. Image: Newark Image: Omega Precious metal RTDs are common in some industrial equipment and food equipment. Typical standards are 100\$\Omega\$ or 1000\$\Omega\$ at 0°C so they would measure around 8% more at room temperature. There are two standards, DIN (\$\alpha = 0.00385\$) and "American" (\$\alpha = 0.00392\$) for curve but the latter is rarely seen in new equipment. There may be two wires, three wires (common in industrial equipment) or four wires (common in scientific equipment). Usually the element itself has two wires and the extra wires are attached a short distance away. Image: Phigets Image: Digikey (thin film Pt type) Image: Axolia (wirewound Pt type) Semiconductor sensors are more likely to turn up soldered to a PCB than near something being measured. They're usually marked. They may be two wires or three wires. Maybe more, depending on the package. Common ones for remote use include the ancient and cantankerous LM35 and the digital one-wire Dallas DS18B20 and clones. 
There are quite a variety of other digital and analog sensors, but most are poorly suited for use off a PCB because of the packaging. Image: Wikipedia Of course, any given sensor may be enclosed in a tube or thermowell and you may not be able to easily tell what the sensing element inside actually looks like.
H: Why is mutual inductance coupling high in a microstrip PCB line? I am learning about crosstalk on PCBs from the high-speed signal propagation book by Howard Johnson. It says mutual inductance coupling is high in microstrip lines compared to mutual capacitive coupling. Why is that? AI: It is what it is. If the two tracks that make up a differential microstrip pair have a transmission impedance of 50 Ω then the ratio of inductance to capacitance is 50 squared or 2500. Regarding cross talk, the magnetic field from a single microstrip line easily couples to an adjacent single microstrip line just like wires easily cross couple magnetically. There is also capacitive coupling, but the "plates" are not facing each other in a microstrip scenario; hence, you can easily say that the orientation of the plates is not optimized to produce maximum cross capacitance. But, as always, the devil is in the detail and simplified overviews are a dangerous thing to make. They are what they are.
H: Matching via sizes to trace widths Let's say I've determined a trace width (\$t_w\$) for a trace, and now I want to determine the size of a via on that trace. My intuition is that to maintain the same amount of copper as the trace through the via (and assuming the plating inside the hole has the same thickness as traces do), then the via's minimum drill diameter (\$⌀_{min}\$) should be chosen such that its circumference is at least the trace width, and the minimum annular ring (\$r_{min}\$) should be at least half the trace width. I.e., given \$t_w\$: $$ \begin{aligned} ⌀_{min} = \frac{t_w}{\pi} \\ \\ r_{min} = \frac{t_w}{2} \end{aligned} $$ The total diameter being at least \$⌀_{min} + 2\times r_{min}\$. So, if I have a 24 mil trace, then \$⌀_{min}\approx\$ 7.64 mil and \$r_{min}=\$ 12 mil. However, vias are cylindrical, and physics is confusing. So my question is: Am I on the right track and, if so, do I theoretically have to make extra allowances (especially in \$⌀_{min}\$) to account for things like capacitance, inductance, and other EM effects inside the via hole? Is it different for power vs. signal traces? Are there maximum dimensions where weird things start happening? AI: Am I on the right track You are on the right track. However, the track may be 100s of mm long while the via is probably only 2 mm or less. You may be able to live with a small localized resistance at the point of the via. You might also use multiple small vias in parallel rather than one big via to carry a large current. do I theoretically have to make extra allowances (especially in ⌀min) to account for things like capacitance, inductance, and other EM effects inside the via hole? It's fairly rare to need to worry about both high current effects (which would drive a large via diameter to avoid over-heating) and electromagnetic effects. If you are running high current through the via, size the via to achieve an acceptable resistance and acceptable self-heating. If you are running RF signals (above maybe 700 MHz) through the via, size the via and its pads and anti-pads to maintain close to the desired characteristic impedance through the via structure. If you are running RF at high enough power to worry about self-heating, then you have a bigger challenge. You will notice that many high-power RF circuits avoid vias altogether. They also use ceramic (alumina, for example) substrates rather than fiberglass to improve thermal conductivity. Are there maximum dimensions where weird things start happening? For power? No, not really. It's common to just use a split plane rather than a track to carry high current power nets. For RF signals? Yes, a too-wide track won't meet the characteristic impedance requirements. Even if you move the reference plane away to meet the characteristic impedance requirement you might end up with a multi-mode transmission line, which would cause signal distortion and/or radiation loss.
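A small sketch of the equal-copper heuristic from the question, just to check the arithmetic for the 24 mil example (the heuristic itself is the question's own assumption, not a rule from the answer, and in practice several smaller parallel vias are often preferred, as noted above):

```python
import math

def via_for_trace(trace_width_mil):
    """Drill and annular ring sized so the plated barrel circumference matches the
    trace width, assuming equal plating/copper thickness (the question's premise)."""
    drill_min = trace_width_mil / math.pi      # barrel circumference >= trace width
    ring_min = trace_width_mil / 2             # annular ring on each side
    total_pad = drill_min + 2 * ring_min
    return drill_min, ring_min, total_pad

d, r, pad = via_for_trace(24.0)
print(f"min drill ~ {d:.2f} mil, min annular ring ~ {r:.1f} mil, pad ~ {pad:.1f} mil")
```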
H: SMPS turning ON from earth flowing back into GND output I've got a problem with a design using an SMPS. The input is AC phase and neutral, one wire going through a power switch. The output is 12V DC with the GND wired to the AC's earth. The issue is that when the neutral (or phase) is cut OFF, the SMPS turns ON briefly for 0.5 seconds every 20 seconds (outputs enough current to move a cooling fan and a relay). My guess is that there is some kind of coupling inside the SMPS because the GND output is wired to the mains earth and either the neutral or phase on the SMPS input, and it's charging a capacitor somewhere to the level at which the SMPS turns ON by itself. Is that an expected issue? I would like to avoid using a diode at the output (which would defeat the purpose of earthing the ground and mess with the levels). If anyone has an idea, it would be great help! I'm going to reverse-engineer the SMPS but if someone has a simple fix, I'll gladly take it! Notes: I have to wire the GND to earth to avoid some audio amp buzzing when someone touches some volume knobs/pots found later in the power-path. Phase and neutral can be swapped depending on how the user plugs it/wall outlet wiring so I cannot control if either the phase or neutral is cut by the power switch. The power switch is only for one wire (because, yes, it would be much easier to have a switch that cuts off both phase and neutral but I can't change this at this point). The SMPS is an unknown brand; I didn't reverse-engineer it yet. Thank you for your input! Edit: The question was deleted due to lack of clarity. I've got a perfectly fine answer from one member and it was indeed the issue and solved the problem I had. So no, this question does not lack clarity. AI: Almost all cheap LED lamps did this when they got to market. The problem is leakage current caused by capacitive coupling between wires, so AC voltage will gradually pump charge into the SMPS input cap until the voltage is enough for the SMPS to start, and the cap will then discharge quickly. I do not recommend modifying these yourself, unless you know what you are doing and can use suitable components rated appropriately for the job. In theory, there should be something like 1 Mohm of resistance at the mains input, or maybe across the main input capacitor, to have enough load on it so the capacitive coupling can't charge the input capacitor. In real life, and depending on your local mains AC voltage and your local safety regulations, a single resistor is typically not enough. Simply put, if you can't change the switch, change the SMPS to one that works. Make sure you get one that is meant to be used with live and neutral only, i.e. make sure your SMPS does not require earth to operate normally and safely if you don't have earth.
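A back-of-the-envelope sketch of the charge-pumping mechanism described above; all values are illustrative assumptions, picked only to show that a small leakage current plausibly gives a period on the order of the 20 seconds reported in the question:

```python
# Leakage slowly charges the SMPS bulk capacitor until the converter's start-up
# threshold is reached, then the stored charge is dumped in a short burst.
c_bulk = 10e-6        # F, SMPS input bulk capacitor (assumed)
i_leak = 50e-6        # A, leakage via coupling capacitance / the open switch (assumed)
v_start = 100.0       # V, voltage at which the converter briefly starts (assumed)

t_charge = c_bulk * v_start / i_leak
print(f"Time between bursts ~ {t_charge:.0f} s")   # ~20 s with these numbers
```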
H: Identify Bobcat miner component Today I tried to power up my helium bobcat miner with PoE. I used the TP-LINK POE10R PoE Splitter, but after 1 minute the miner turned off and didn't power up anymore... I opened the miner and discovered one component burned with the label "2132 BG". I have already searched for this component but I can't figure out what component it is. Can someone help me? AI: Probably an SMBJ13CA bipolar TVS, assuming it's an SMB package.
H: Breakdown voltage in diode and Zener diode, what are the practical differences? Until now, I've only studied diodes assuming they are ideal, and I could tell the difference between normal diodes and Zener diodes. Now, I started to study non-ideal diodes and I see that this non-ideal diode has a breakdown voltage like the Zener diode. What are the differences between these two? Does this mean that the normal diode can work at reverse polarization like the Zener one? AI: The biggest practical difference is that a Zener* diode is made to break down in a useful and predictable way, while a rectifying diode is made to predictably not break down. So if you buy a Zener diode that's designed to regulate to 12V, then you can figure that if you arrange for it to get the right current in the reverse direction, you'll get pretty close** to 12V out of the thing. On the other hand, if you buy a rectifying diode that's sold to operate up to 100V, you can expect that over all the other advertised operating conditions of the diode (temperature, mostly), the thing won't break down at or below 100V -- but if it breaks down at 200V, it's still within specifications. You can also figure that if you design your circuit such that the diode does normally go into breakdown, you may not have a guarantee that the thing won't change its characteristics over time***. * "Zener" diodes work by a combination of the Zener effect and avalanche breakdown, mostly depending on the voltage they're designed to regulate. But in the market (in English at least) they're usually all called "Zener" diodes. ** generally \$\pm\$ 5% or 10%, depending on how much you paid for it, how close your current is to the design current of the thing, temperature, etc., etc. *** A formerly-well-known hack for electronics hobbyists, back when we could all be counted on to have several dozen small-signal transistors in our collection, instead of a dozen microprocessor development kits of varying ages, is that if you connect base and collector of a small-signal transistor together, the base-emitter junction makes a pretty reliable Zener diode with a voltage of around 6-7V -- but I was always told that doing so damages the junction, and you shouldn't trust that transistor as a transistor thereafter.
H: Inductive coupling using earth ground instead of negative terminal in a power supply I was a TA for an introductory EE class when I heard this comment. We would help students through basic lab exercises. Helping them troubleshoot, the students would sometimes use the ground connection instead of the negative terminal of the power supply when using it as a voltage source. I would tell them you shouldn’t do this because the power supply is regulating the voltage across the + and - terminals, and not across + and earth ground, and left it at that (hard to expect much insight from an undergrad TA). Once, when the professor introduced the equipment to the students, they were explaining that you should use the negative terminal and not the earth ground terminal because “connecting to earth ground will introduce inductive coupling to the circuit.” It was the first week of the intro course, so no one really understood the prof’s point or inquired further. Why does this happen exactly? I am thinking that the loop area of the entire circuit increases when the return connects to earth ground and creates a significant inductance throughout the circuit path. Is this the idea? Is there more to it? AI: I don't think there would be any inductive coupling. Professors can make mistakes too; maybe someone gave the same explanation to the professor at the start of their studies, so the same mistake just gets passed on. The earth socket is just connected to mains earth that comes into the power supply via the power cord. The blue and red sockets are the negative and positive terminals of the isolated and thus floating output voltage, which has no conductive path to earth. It would simply not complete a circuit, so the powered circuit would not work if connected between the red and green terminals. Sometimes there is a removable link between the green and blue terminals. If it is present it can cause ground loops. If the device that is powered by the power supply is also connected to something else, like a desktop computer which has its common ground connected to earth too, then it will complete a circuit via the building mains earth wiring inside the walls. So by not using the green terminal on a power supply at all, it allows the powered circuit to float, so that it can be safely connected to other devices without ground loops.
H: Derivation of Formula for Power The power dissipated by a resistor can be written as $$V(t)I(t)$$ where both the voltage and current can vary with time. I don't understand why the formula is not $$\frac{dW}{dt}=\frac{d}{dt} (Vq) = \frac{dV}{dt}q+\frac{dq}{dt}V=\frac{dV}{dt}q+VI$$ where W is the work done by the resistor. I've seen it derived as $$\frac{dW}{dt}=\frac{dW}{dq}\frac{dq}{dt}=V\frac{dq}{dt}$$ But what makes the first expression incorrect? AI: See pages 1-2 of this 22 page reference: https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-071j-introduction-to-electronics-signals-and-measurement-spring-2006/lecture-notes/03_kirchhoff1.pdf Potential: $$V = \frac{dW}{dq}$$ This is also called voltage; it is the work per unit charge required to move a positive test charge from a specified point to another specified point through a conservative electric field. Note time is not a factor in the definition of work done in a conservative field. Current: $$I = \frac{dq}{dt}$$ is equal to the amount of charge q passing through a cross-section per second. The result for electrical power development at an instant of time is shown in your question. The product rule does not apply here because W is not simply the product Vq; the definitions of voltage/potential and current are already in the forms given. The potential is considered constant at each instant of time even if it varies from instant to instant, and the current is the amount of charge moving at each instant even if it varies from instant to instant. Therefore power development is defined at each instant.
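One way to see why no \$\frac{dV}{dt}q\$ term appears (a short restatement using the definitions above): the energy delivered up to time \$t\$ is the running integral of \$V\,dq\$, not the product \$Vq\$, $$W(t)=\int_0^{t} V\,\frac{dq}{d\tau}\,d\tau \quad\Rightarrow\quad \frac{dW}{dt}=V(t)\,\frac{dq}{dt}=V(t)\,I(t)$$ so differentiating the integral simply returns the integrand, \$VI\$. Writing \$W=Vq\$ and applying the product rule would only be valid if \$V\$ were constant over all of the charge moved.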
H: Powering a RPI board with a 22V battery pack I built a 22 V battery pack with 6 18650 cells and connected them to a 6s BMS. I will use this to power a portable audio amplifier. I also want to use the same battery pack to power a Raspberry Pi Zero W embedded in the device. According to the specs, the Zero W has a typical / max current draw of 150mA / 1.2A. My idea is to connect the second tap (7.4V) with a resistor and the B- on the BMS to the RPi 5V USB input. I have two questions: Will this work? Is the BMS designed to be used this way, drawing the full 22.2V from the whole array and 5V from only 2 batteries? In calculating the voltage drop resistor for the 7.4 → 5V conversion, which current draw shall I use? The max. rated 1.2A? Thanks in advance. AI: No and no. Neither are good ideas. Unbalanced draining of your batteries will lead to issues. Two cells will die sooner than the others in the best case. A resistor or a resistor divider is not a good way to drop the voltage for a non-constant load like an SBC. As the current varies the voltage drop will vary. You should just use a switching step-down power regulator. A common off-the-shelf 24 V car USB charger would do well here. Many 12 V car USB chargers accept 24 V as well. Or use any number of modules for sale online, or make your own from the numerous ICs available.
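A quick sketch of why the dropper resistor fails for a varying load like the Pi (the currents are the figures quoted in the question; the resistor is simply sized for the worst-case draw):

```python
v_in = 7.4                           # V, the 2S tap
v_target = 5.0                       # V, what the Pi wants
i_max = 1.2                          # A, worst-case draw from the question
r_drop = (v_in - v_target) / i_max   # resistor sized for the maximum draw (2 ohm)
p_r = (v_in - v_target) * i_max      # heat in the resistor at full load (~2.9 W)

for i_load in (0.15, 0.5, 1.2):
    v_pi = v_in - i_load * r_drop    # the drop shrinks as the load lightens
    print(f"I = {i_load:4.2f} A -> V at the Pi = {v_pi:.2f} V")
print(f"R = {r_drop:.2f} ohm, dissipating up to {p_r:.2f} W")
```

At the typical 150 mA draw the Pi would see over 7 V, which is exactly the varying-drop problem the answer describes; a small buck converter avoids it entirely.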
H: Fixed emitter current or does it change after connecting collector? The first picture shows the base current while the collector is disconnected. It shows 8.3 mA. While the second on shows the base current reduces a bit (maybe because of Early effect) and the collector current is about 374 mA. As far as I knew, whatever current is injected by the emitter into the base, the collector would take in the ratio of 1:beta. But when collector is disconnected, no one is there to take this 1:beta amount of current and so all the current injected by the emitter would come out from the base. The emitter current is fixed because the base circuitry is fixed. Now if we connect collector, emitter current should still be fixed(8.3 mA), but this time, the collector would take most of this 8.3 mA. But the total current is still be fixed (8.3 mA). But as seen in the picture it looks like the collector is drawing additional huge current from the emitter this is why now the collector current is 374 mA. What is wrong here? AI: Ebers-Moll The very first complete (valid in all four quadrants of operation) model of the BJT was described by J. J. Ebers and J. L Moll (both associates at the IRE at the time) in 1954. Their Appendix uses Green's theorem and is essentially unrecognizable, today. This early BJT description has since gone through development into various versions: the injection model, then the transport model, and finally the hybrid-\$\pi\$ model -- through a series of refinements. (Just changes in perspective -- all three versions are computationally equivalent, if not conceptually so.) All three versions are today lumped as the Level 1 model (having three versions.) In general, \$\text{EM}_1\$ is a DC model and its very good for figuring out the DC operating point of a BJT. Since that's the point here, we should apply it and see how things come out. It's also non-linear. Which means it can be a bit of a mathematical pain, at times. And it makes a number of assumptions (such as the requirement for instantaneous maintenance of a quasi-equilibrium condition obeying the Boltzmann relation.) Which is why there are later developments that include two additional levels, \$\text{EM}_2\$ and \$\text{EM}_3\$, then Gummel-Poon and its variants, then VBIC, and then the continually revised MEXTRAM. You can quite literally make a life's work out of keeping up on the BJT, if you want. Full non-linear versions aren't often taught, these days. Too many other subjects crowd the curriculum and it's relevance has been reduced to mainly its linearized form of the hybrid-\$\pi\$ version of \$\text{EM}_1\$ or \$\text{EM}_3\$, depending on whether or not the Early Effect is included. That need, the linearized small-signal model of the BJT, still continues to find its small place in the curriculum. The injection and transport models, despite early popularity into the mid 1970's, have mostly disappeared. They are useful if you want to re-connect back to the physics, though. Preface To focus on the residuals and their explanation, you really need to eliminate some of the confounding factors. In your problem case, you used a very large base current for your simulation. And because of that, too many factors are conflated into your results. So it becomes far more confusing to try and tease out what you want to really know, when you do that. (If you were an early experimental research physicist and you didn't already know the model and parameters, then you'd be stuck in this conflated, difficult-to-tease-out situation. 
And you'd have to come up with lots of different experiments to run.) So the first thing I'd recommend here is to remove as many of the conflating factors as you can, especially those you don't need to understand because you already do, so that what's left helps you focus on answering your more fundamental question. Since we all easily understand the idea of bulk resistance at each pin of the BJT as a series-resistor, let's get rid of these from the SPICE model so that these hidden values don't add further confusion. This means setting RC, RB, and RE very close to zero. Since I don't trust SPICE programmers not to make things easier on themselves by forcing a zero value to some arbitrary non-zero value I don't know about, I will use very small but non-zero values here so that they aren't tempted. In this case, I'd suggest setting them to \$1\:\text{n}\Omega\$. No measurable effect, then! Also, this means setting VA to some very large number, like \$1\times 10^9\:\text{V}\$. (To get rid of the Early Effect.) So let's use this model: .model MYNPN NPN( IS=1E-14 VAF=1E9 BF=200 BR=3 RB=1n RC=1n RE=1n IKF=0.3 XTB=1.5 CJC=8E-12 CJE=25E-12 TR=100E-9 TF=400E-12 ITF=1 VTF=2 XTF=3 ) That was "stolen" from LTspice's model of the 2N2222 and then modified according to my above recommendations and named MYNPN. If you are using LTspice in particular then you could also just write: .model MYNPN ako:2N2222 NPN(VAF=1E9 RB=1n RC=1n RE=1n) That just sucks up the existing model for the 2N2222 and modifies parameters, as shown above. Removing the effects of RC, RB, RE, and VA will allow the SPICE simulation to better illustrate what you want to see and understand. Also, since I'll use LTspice for the simulations, it's default temperature (\$27^\circ\text{C}\$, if memory serves) yields a thermal voltage of \$V_{_\text{T}}=25.865\:\text{mV}\$. I'll be using that value, where appropriate. I'll use the \$\text{EM}_1\$ non-linear hybrid-\$\pi\$ model. This is not the model used in SPICE programs, however. Their results will be slightly different for this DC case because they take into account many factors learned later on. But hopefully, this will allow predictions that mirror the SPICE results. This first step, the validation test, is needed before proceeding towards an understanding of the quantitative differences. If we can't match up well, then something serious is missing from our parameters under control and that means more work (or a better DC model.) If we match up well, then we can reasonably assume we've captured the important parameters under control and can then and only then hope to learn something (come to an experimental result) from the residuals between the two case examples and therefore explain why and by how much. So let's look at the model, itself. \$\text{EM}_1\$ Model Here's the hybrid-\$\pi\$ non-linear model diagram: simulate this circuit – Schematic created using CircuitLab I'll use this diagram in the following sections. 
The relevant equations for this model are: \$\frac{I_{_\text{CC}}}{\beta_{_\text{F}}} = \frac{I_{_\text{SAT}}}{\beta_{_\text{F}}} \cdot \left[ e^{\frac{V_{BE}}{V_T}} - 1 \right] \$ \$\frac{I_{_\text{EC}}}{\beta_{_\text{R}}} = \frac{I_{_\text{SAT}}}{\beta_{_\text{R}}} \cdot \left[ e^{\frac{V_{BC}}{V_T}} - 1 \right] \$ \$I_{_\text{CT}} = I_{_\text{CC}} - I_{_\text{EC}}, \rm{(generator \,\, current)}\$ \$ I_{_\text{C}} = \left( I_{_\text{CC}} - I_{_\text{EC}} \right) - \frac{I_{_\text{EC}}}{\beta_{_\text{R}}} \$ \$ I_{_\text{B}} = \frac{I_{_\text{CC}}}{\beta_F} + \frac{I_{_\text{EC}}}{\beta_{_\text{R}}} \$ \$ I_{_\text{E}} = -\frac{I_{_\text{CC}}}{\beta_F} - \left( I_{_\text{CC}} - I_{_\text{EC}} \right) \$ Note that all terminal currents point inward and must sum to zero. So for the NPN BJT, the emitter current will usually be negative. Disconnected Collector -- A Prediction Your disconnected collector case looks like this: simulate this circuit We know that \$I_{_\text{C}}+I_{_\text{B}}+I_{_\text{E}}=0\:\text{A}\$ (KCL) and as \$I_{_\text{C}}=0\:\text{A}\$, from equation 5 and 6 we can write: $$\begin{align*} \frac{I_{_\text{CC}}}{\beta_F} + \frac{I_{_\text{EC}}}{\beta_{_\text{R}}}&=\frac{I_{_\text{CC}}}{\beta_F} + \left( I_{_\text{CC}} - I_{_\text{EC}} \right) \\\\ \therefore \\\\ I_{_\text{CC}} &= \frac{\beta_{_\text{R}}+1}{\beta_{_\text{R}}}\cdot I_{_\text{EC}} \end{align*}$$ Also, from KCL and the fact that \$I_{_\text{C}}=0\:\text{A}\$ we know that the generator current is the same as current in \$D_{_\text{EC}}\$. So starting with equation 3: $$\begin{align*} I_{_\text{CT}} &= I_{_\text{CC}}-I_{_\text{EC}} \\\\ &=\frac{\beta_{_\text{R}}+1}{\beta_{_\text{R}}}\cdot I_{_\text{EC}}-I_{_\text{EC}} &&= I_{_\text{CC}}-\frac{\beta_{_\text{R}}}{\beta_{_\text{R}}+1}\cdot I_{_\text{CC}} \\\\ &=\frac{I_{_\text{EC}}}{\beta_{_\text{R}}}&&=\frac{I_{_\text{CC}}}{\beta_{_\text{R}}+1} \end{align*}$$ From the above, we know the ratio of these two currents and therefore we know the ratio of the two diode currents: $$\begin{align*} \frac{ \frac{ I_{_\text{EC}} }{ \beta_{_\text{R}} } }{ \frac{ I_{_\text{CC}} }{ \beta_{_\text{F}} } } &= \frac{ I_{_\text{EC}} }{ I_{_\text{CC}} } \cdot \frac{ \beta_{_\text{F}} }{ \beta_{_\text{R}} } = \frac{\beta_{_\text{F}}}{\beta_{_\text{R}}+1} \end{align*}$$ Now, I just happen to know from the Shockley diode equation that there will be a voltage difference of: $$\begin{align*} \Delta V &= V_T\cdot\ln\left(\frac{\beta_{_\text{F}}}{\beta_{_\text{R}}+1}\right) \\\\&=25.865\:\text{mV}\cdot\ln\left(\frac{200}{3+1}\right) \\\\&\approx 101.2\:\text{mV} \end{align*}$$ Since \$D_{_\text{EC}}\$ has the larger current, this will invert the polarity of the voltage across \$I_{_\text{CT}}\$ to the opposite you would expect, so this then subtracts from the base-emitter voltage you see outside the NPN BJT, causing the case with an open collector to appear to have a lower base-emitter voltage than the case where the collector is connected up. Note: I haven't yet bothered to calculate actual base voltages. You can do all that yourself using the Shockley diode equation. (Iteratively or else with a closed expression using the LambertW function.) All that I've done is make a prediction based upon the simplest complete NPN BJT model of what I would expect to see as a difference between base-emitter voltages in the two cases. Let's see. I've not run the test, yet. And I've never done this calculation before, as like many people I've not yet needed to look. 
So I'm frankly a little worried at this moment, hoping for the best but not knowing what LTspice shows me. We find ... ... that LTspice shows \$846.916\:\text{mV} - 746.484\:\text{mV} = 100.432\:\text{mV}\$!!! Well, I'm no longer shaking in my boots. And the world is right, again. Feel free to change \$V_{_\text{CC}}\$ in the schematic, such that \$V_{_\text{CC}}\ge 1\:\text{V}\$ (or thereabouts -- the main idea is to make sure it is above the base voltage.) The results will be the same because VA (Early Effect) is nulled out in the above LTspice run. So the results should be identical. If you are still curious, try changing BR from 3 to 4 (or 5, or whatever.) Then go back and try another run. See if the computed \$\Delta\,V\$ matches. (I have just tried a few times and it does match up every single time.) This is strong evidence that the experimental result (the conclusion reached from current theory and measurement from simulation) is correct. The generator current source voltage makes the difference. Summary First, take note that this voltage difference cannot be explained by the slight differences in base current. Not even remotely. I've also provided everything needed to do any and all of the computations you want. The entire four-quadrant model does a great deal for you. It is shockingly good, even given the fact that it still makes a fair number of assumptions. Note that there is now just a single reason exposed into view to explain the voltage difference at the base. By eliminating the confusing factors that might have otherwise made this a much more difficult exploration, we can isolate the single element that leads to the difference. It tells you how and it tells you by how much! It's not just a bunch of hand-waving, after all. I want to make something abundantly clear. There is a vast chasm of difference between hand-waving about possible "explanations" which sound plausible on the surface ("Loki did it") for an observed effect on the one hand and then instead providing a theoretical explanation that is both quantitative and predictive, as well as providing an explanation, on the other hand. Keep this in mind. If an explanation doesn't provide quantitative prediction, it's not really an explanation. It's mere hand-waving if it cannot be used to make quantitative predictions (and associated, quantitative boundary conditions.) P.S. I just decided to check to see what I get for the open collector voltage and it is also almost exactly \$8\:\text{mV}\$, which is what LTspice shows, above. Prediction Addendum I proposed a result from the model above. But I wanted to subject it to further testing. I'm curious if something had escaped my view above and I wanted to subject it to yet another test. So I modified the schematic to change the BR parameter over a range and then compare it with the formula I developed based upon the Shockley diode equation. Given that LTspice uses a more sophisticated model, I believe the results bear out the above, earlier conclusion.
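If you want to check the \$\Delta V\$ prediction numerically before (or instead of) running LTspice, the formula above is easy to evaluate. Below is a small C sketch (the parameter values are just the ones used above, BF = 200 and \$V_{_\text{T}}=25.865\:\text{mV}\$ at \$27^\circ\text{C}\$) that prints the predicted base-emitter voltage difference for a few values of BR:
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double VT = 25.865e-3;   /* thermal voltage at 27 C, volts */
    const double BF = 200.0;       /* forward beta from the model    */
    const double BR[] = { 3.0, 4.0, 5.0, 10.0 };

    for (size_t i = 0; i < sizeof BR / sizeof BR[0]; ++i) {
        /* delta-V between the open-collector and connected-collector cases */
        double dv = VT * log(BF / (BR[i] + 1.0));
        printf("BR = %5.1f  ->  delta V = %6.2f mV\n", BR[i], dv * 1e3);
    }
    return 0;
}
With BR = 3 this prints about 101.2 mV, matching both the hand calculation and the LTspice residual above; sweeping BR reproduces the addendum's comparison.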
H: Which op-amp parameter indicates the minimum amplificable voltage? I'm a newbie trying to measure μV-range voltages using cheap op-amps, with some software calibrations or error corrections, for education and practice. At first, I looked at some op-amp datasheets and looked for parameters with their unit defined as "mV" or "V", because I thought it should be indicated as volts, but didn't find anything which I could relate except Vio. I think that aforementioned parameter which I call "sensitivity" should be Vio (input offset voltage), because it's the minimum differential voltage that can make the output non-zero, and hence, amplify it. According to this document from TI it is defined as: The input offset voltage is defined as the voltage that must be applied between the two input terminals of the op amp to obtain zero volts at the output. So, is this unknown parameter, the Vio? If the answer is yes, the LM358 datasheet mentions a Vio of 7 mV maximum; this means that, for example, a differential voltage of 1 V and 1.007 V should be amplified. But the question is, does the output change when we change the 1.007000 V to 1.007001 V? Also, in some op-amps, the offset voltage is "nullable". Does this affect the sensitivity? If the answer is no, then which parameter indicates the minimum voltage? So we know if this op-amp is good to use in the μV or even nV range? Thank you for your time and knowledge. AI: Opamps are analog. There is no smallest step to them. Instead there are various noise sources at play, which will be given in datasheets..You weigh those against your signal magnitude to obtain a signal-to-noise ratio (SNR). When you know SNR you can calculate the necessary integration time needed to discern a certain voltage step. However, it is not possible to just integrate arbitrarily long to attain arbitrary precision. There is a fundamental precision limit. As you speak about nanovolt measurements I assume you have a low source impedance and current noise is inconsequential. In this case the base precision is given by the point where the 1/f slope of input voltage noise density crosses 1 Hz. E.g. for the OPA211: It has very low wideband noise, but due to 1/f noise, it is impossible to make a DC voltage measurement, more precise than 6 nV, regardless of how long you integrate. Due to 1/f noise the precision has a lower bound. So what you should be looking for is opamps with low 1/f voltage noise, e.g. zero-drift opamps. Alternatively, you can do AC voltage measurements, but I will not go into this as your question doesn't suggest you can modulate the source easily. About the LM358: The input offset voltage and current is inconsequential in my opinion. These are static offsets, that you can easily compensate in software unless you let them saturate the opamp due to ultra gains of 1000 or something. The noise discussed above is dynamic and cannot be simply removed in software. For the LM358, I could only find a plot down to 10 Hz: Without thinking too much, one can guess that the noise at 1 Hz will be about \$\sqrt{10}\$ times higher, so this will lead to the base precision of about 174 nV, for integration times that are at least a few seconds long.
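If it helps to see the 1/f limit numerically, here is a minimal C sketch of the usual noise-integration arithmetic. It assumes a density of the common form \$e_n^2(f)=e_w^2\,(1+f_c/f)\$ (white floor \$e_w\$ with a 1/f corner at \$f_c\$); the numbers below are placeholders, not values from any particular datasheet, so substitute the floor and corner frequency you read off your op-amp's noise plot:
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assumed figures, not from any particular datasheet:              */
    const double e_w  = 10e-9;  /* white noise floor, 10 nV/sqrt(Hz)    */
    const double fc   = 50.0;   /* 1/f corner frequency, Hz             */
    const double f_lo = 0.01;   /* lower edge ~ inverse of total observation time */

    /* e_n^2(f) = e_w^2 * (1 + fc/f).  Integrating from f_lo to f_hi:
     * v^2 = e_w^2 * [ (f_hi - f_lo) + fc * ln(f_hi / f_lo) ]           */
    for (double f_hi = 1000.0; f_hi >= 0.1; f_hi /= 10.0) {
        double white   = e_w * sqrt(f_hi - f_lo);
        double flicker = e_w * sqrt(fc * log(f_hi / f_lo));
        double total   = sqrt(white * white + flicker * flicker);
        printf("f_hi = %7.1f Hz  white %6.1f nV  1/f %6.1f nV  total %6.1f nV\n",
               f_hi, white * 1e9, flicker * 1e9, total * 1e9);
    }
    return 0;
}
Narrowing the measurement bandwidth (longer averaging) shrinks the white-noise term quickly, but the 1/f term barely moves, which is exactly the lower bound on DC precision described above.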
H: Where are the GPIO pins on Digilent Basys 3 FPGA Board? I do not have a great understanding of electronics circuits and their fabrication. So please pardon me for the noob-ness of this post. In one article that I am reading now, they have attached wires to 3.3V general-purpose I/O pins on the Digilent Basys 3 FPGA Board. Can anyone please point me to which pins on the board they are referring to? AI: Each section of the Basys 3 reference manual describes one subsystem (buttons, LEDs, and so on) and lists which FPGA pin each signal of that subsystem is connected to. If you are interested in external connectivity, look at the Pmod connector section; the Pmod headers are the board's 3.3 V general-purpose I/O pins.
H: FET: How to understand noise specification? I was reading an article about electret microphones and there was a link to a datasheet of the FET which was used for impedance matching. It is a popular FET which is used in many electret microphones. Here is the link to the datasheet: http://www.openmusiclabs.com/wp/wp-content/uploads/2011/03/2SK5961.pdf There is a noise specification which I am trying to understand. Here is what is written there: Output noise voltage Vno (Vin-0, A curve) -110 dB What does that mean? dB is specified as a ratio between two values. I am working with a high-gain amplifier and trying to evaluate the noise level in the voltages the FET generates. How can I recalculate that ratio into voltage? What does "A curve" mean? Thanks. AI: I would read that datasheet as -110 dB(V) noise output for that FET connected in the specified test circuit. That is, -110 dB below 1 V rms, or about 3 µV rms, A-weighted ("A curve" means A-weighting), for Vin = 0 V. The "specified test circuit" is given on page 1 of that datasheet, "Output noise voltage" near the top of page 2, and the V(NO) vs I(DSS) graph at the bottom of page 3. This graph shows how the noise voltage varies with I(DSS) for a FET connected as specified. This I(DSS) variation is a function of the threshold voltage variation during FET manufacture. NOTE that connecting the FET differently will produce different results: a higher load resistance will increase the gain but also increase the output noise voltage. There is no reference to the mic capsule in this datasheet; it may contribute additional noise.
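For the arithmetic itself: a level in dB relative to 1 V rms converts to volts as \$V = 1\,\text{V}\times 10^{\,\text{dB}/20}\$. A tiny C check of the -110 dB figure (and a couple of neighbouring values for comparison):
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Convert a level in dB relative to 1 V rms into volts rms. */
    const double levels_db[] = { -100.0, -110.0, -120.0 };

    for (size_t i = 0; i < sizeof levels_db / sizeof levels_db[0]; ++i) {
        double v = pow(10.0, levels_db[i] / 20.0);   /* 1 V reference */
        printf("%6.1f dBV  ->  %6.2f uV rms\n", levels_db[i], v * 1e6);
    }
    return 0;
}
-110 dBV comes out as about 3.16 µV rms, which is the ~3 µV figure quoted above.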
H: Why doesn't yaw calculating axis change after board reboot? I have an MPU9250 on my board and I want to change the yaw calculating axis after reboot depending on how this board is placed, like this: And I thought it would change if I reboot the board, but it doesn't. I'm not very good at this kind of math, but I really hope you can help me. Here is the code that I use to get φ, θ, ψ. In my main.c loop: Process_IMU(); q[0] = q0; q[1] = q1; q[2] = q2; q[3] = q3; a12 = 2.0f * (q[1] * q[2] + q[0] * q[3]); a22 = q[0] * q[0] + q[1] * q[1] - q[2] * q[2] - q[3] * q[3]; a31 = 2.0f * (q[0] * q[1] + q[2] * q[3]); a32 = 2.0f * (q[1] * q[3] - q[0] * q[2]); a33 = q[0] * q[0] - q[1] * q[1] - q[2] * q[2] + q[3] * q[3]; float sinp = a32; if (abs(sinp) >= 1) pitch = copysign(M_PI/2,sinp); else pitch = asin(sinp); //pitch = -asinf(a32); roll = atan2f(a31, a33); yaw = atan2f(a12, a22) * 2; pitch *= 180.0f / pi; yaw = atan2f(sinf(yaw),cosf(yaw)); yaw *= 180.0f / pi; roll *= 180.0f / pi; Process_IMU(): void Process_IMU() { //read raw data uint8_t data[14]; uint8_t reg = ACCEL_XOUT_H; uint8_t mpu_address = MPU9250_ADDRESS_DEFAULT; while(HAL_I2C_Master_Transmit(_MPU9250_I2C,(uint16_t)mpu_address,&reg,1,1000) != HAL_OK); while(HAL_I2C_Master_Receive(_MPU9250_I2C, (uint16_t)mpu_address, data, 14, 1000) != HAL_OK); /*-------- Accel ---------*/ Accel_x = (int16_t)((int16_t)( data[0] << 8 ) | data[1]); Accel_y = (int16_t)((int16_t)( data[2] << 8 ) | data[3]); Accel_z = (int16_t)((int16_t)( data[4] << 8 ) | data[5]); /*-------- Gyrometer --------*/ Gyro_x = (int16_t)((int16_t)( data[8] << 8 ) | data[9]); Gyro_y = (int16_t)((int16_t)( data[10] << 8 ) | data[11]); Gyro_z = (int16_t)((int16_t)( data[12] << 8 ) | data[13]); Accel_X = 10*(float)((int32_t)Accel_x - Accel_x_bias)/(float)accel_sensitivity; Accel_Y = 10*(float)((int32_t)Accel_y - Accel_y_bias)/(float)accel_sensitivity; Accel_Z = 10*(float)((int32_t)Accel_z - Accel_z_bias)/(float)accel_sensitivity ; Gyro_X = (float)(((int32_t)Gyro_x - Gyro_x_bias)/(float)gyro_sensitivity)*M_PI/180.0f; Gyro_Y = (float)(((int32_t)Gyro_y - Gyro_y_bias)/(float)gyro_sensitivity)*M_PI/180.0f; Gyro_Z = (float)(((int32_t)Gyro_z - Gyro_z_bias)/(float)gyro_sensitivity)*M_PI/180.0f; // Get data of Magnetometer Get_magnetometer(); MadgwickAHRSupdateFixed(Gyro_X,Gyro_Y,Gyro_Z,Accel_X,Accel_Y,Accel_Z,Mag_X_calib,Mag_Y_calib,-Mag_Z_calib); } Get_magnetometer(): void Get_magnetometer() { uint8_t raw_data[7]; uint8_t reg_ST1 = ST1; uint8_t mag_address = MAG_ADDRESS_DEFAULT; uint8_t reg = XOUT_L; while(HAL_I2C_Master_Transmit(_MPU9250_I2C,(uint16_t)mag_address,&reg_ST1,1,1000) != HAL_OK); while(HAL_I2C_Master_Receive(_MPU9250_I2C,(uint16_t)mag_address,raw_data,1,1000) != HAL_OK); if (raw_data[0] & 0x01) { while(HAL_I2C_Master_Transmit(_MPU9250_I2C,(uint16_t)mag_address,&reg,1,1000) != HAL_OK); while(HAL_I2C_Master_Receive(_MPU9250_I2C,(uint16_t)mag_address,raw_data,7,1000) != HAL_OK); // Read the six raw data and ST2 registers sequentially into data arra if(!(raw_data[6] & 0x08))// Check if magnetic sensor overflow set, if not then report data { Mag_x = (int16_t)((raw_data[1]<<8) | raw_data[0] ); Mag_y = (int16_t)((raw_data[3]<<8) | raw_data[2] ); Mag_z = (int16_t)((raw_data[5]<<8) | raw_data[4] ); } Mag_X_calib = (float)Mag_x * asax * mag_sensitivity - mag_offset[0]; Mag_Y_calib = (float)Mag_y * asay * mag_sensitivity - mag_offset[1]; Mag_Z_calib = (float)Mag_z * asaz * mag_sensitivity - mag_offset[2]; Mag_X_calib *=scale_x; Mag_Y_calib *=scale_y; Mag_Z_calib *=scale_z; } } 
MadgwickAHRSupdateFixed(): void MadgwickAHRSupdateFixed(float gx, float gy, float gz, float ax, float ay, float az, float mx, float my, float mz) { float recipNorm; float s0, s1, s2, s3; float qDot1, qDot2, qDot3, qDot4; float hx, hy; float _2q0mx, _2q0my, _2q0mz, _2q1mx, _2bx, _2bz, _4bx, _4bz, _8bx, _8bz, _2q0, _2q1, _2q2, _2q3, _2q0q2, _2q2q3, q0q0, q0q1, q0q2, q0q3, q1q1, q1q2, q1q3, q2q2, q2q3, q3q3; // Use IMU algorithm if magnetometer measurement invalid (avoids NaN in magnetometer normalisation) if((mx == 0.0f) && (my == 0.0f) && (mz == 0.0f)) { MadgwickAHRSupdateIMU(gx, gy, gz, ax, ay, az); return; } // Rate of change of quaternion from gyroscope qDot1 = 0.5f * (-q1 * gx - q2 * gy - q3 * gz); qDot2 = 0.5f * (q0 * gx + q2 * gz - q3 * gy); qDot3 = 0.5f * (q0 * gy - q1 * gz + q3 * gx); qDot4 = 0.5f * (q0 * gz + q1 * gy - q2 * gx); // Compute feedback only if accelerometer measurement valid (avoids NaN in accelerometer normalisation) if(!((ax == 0.0f) && (ay == 0.0f) && (az == 0.0f))) { // Normalise accelerometer measurement recipNorm = invSqrt(ax * ax + ay * ay + az * az); ax *= recipNorm; ay *= recipNorm; az *= recipNorm; // Normalise magnetometer measurement recipNorm = invSqrt(mx * mx + my * my + mz * mz); mx *= recipNorm; my *= recipNorm; mz *= recipNorm; // Auxiliary variables to avoid repeated arithmetic _2q0mx = 2.0f * q0 * mx; _2q0my = 2.0f * q0 * my; _2q0mz = 2.0f * q0 * mz; _2q1mx = 2.0f * q1 * mx; _2q0 = 2.0f * q0; _2q1 = 2.0f * q1; _2q2 = 2.0f * q2; _2q3 = 2.0f * q3; _2q0q2 = 2.0f * q0 * q2; _2q2q3 = 2.0f * q2 * q3; q0q0 = q0 * q0; q0q1 = q0 * q1; q0q2 = q0 * q2; q0q3 = q0 * q3; q1q1 = q1 * q1; q1q2 = q1 * q2; q1q3 = q1 * q3; q2q2 = q2 * q2; q2q3 = q2 * q3; q3q3 = q3 * q3; // Reference direction of Earth's magnetic field hx = mx * q0q0 - _2q0my * q3 + _2q0mz * q2 + mx * q1q1 + _2q1 * my * q2 + _2q1 * mz * q3 - mx * q2q2 - mx * q3q3; hy = _2q0mx * q3 + my * q0q0 - _2q0mz * q1 + _2q1mx * q2 - my * q1q1 + my * q2q2 + _2q2 * mz * q3 - my * q3q3; _2bx = sqrt(hx * hx + hy * hy); _2bz = -_2q0mx * q2 + _2q0my * q1 + mz * q0q0 + _2q1mx * q3 - mz * q1q1 + _2q2 * my * q3 - mz * q2q2 + mz * q3q3; _4bx = 2.0f * _2bx; _4bz = 2.0f * _2bz; _8bx = 2.0f * _4bx; _8bz = 2.0f * _4bz; // Gradient decent algorithm corrective step s0= -_2q2*(2*(q1q3 - q0q2) - ax) + _2q1*(2*(q0q1 + q2q3) - ay) + -_4bz*q2*(_4bx*(0.5 - q2q2 - q3q3) + _4bz*(q1q3 - q0q2) - mx) + (-_4bx*q3+_4bz*q1)*(_4bx*(q1q2 - q0q3) + _4bz*(q0q1 + q2q3) - my) + _4bx*q2*(_4bx*(q0q2 + q1q3) + _4bz*(0.5 - q1q1 - q2q2) - mz); s1= _2q3*(2*(q1q3 - q0q2) - ax) + _2q0*(2*(q0q1 + q2q3) - ay) + -4*q1*(2*(0.5 - q1q1 - q2q2) - az) + _4bz*q3*(_4bx*(0.5 - q2q2 - q3q3) + _4bz*(q1q3 - q0q2) - mx) + (_4bx*q2+_4bz*q0)*(_4bx*(q1q2 - q0q3) + _4bz*(q0q1 + q2q3) - my) + (_4bx*q3-_8bz*q1)*(_4bx*(q0q2 + q1q3) + _4bz*(0.5 - q1q1 - q2q2) - mz); s2= -_2q0*(2*(q1q3 - q0q2) - ax) + _2q3*(2*(q0q1 + q2q3) - ay) + (-4*q2)*(2*(0.5 - q1q1 - q2q2) - az) + (-_8bx*q2-_4bz*q0)*(_4bx*(0.5 - q2q2 - q3q3) + _4bz*(q1q3 - q0q2) - mx)+(_4bx*q1+_4bz*q3)*(_4bx*(q1q2 - q0q3) + _4bz*(q0q1 + q2q3) - my)+(_4bx*q0-_8bz*q2)*(_4bx*(q0q2 + q1q3) + _4bz*(0.5 - q1q1 - q2q2) - mz); s3= _2q1*(2*(q1q3 - q0q2) - ax) + _2q2*(2*(q0q1 + q2q3) - ay)+(-_8bx*q3+_4bz*q1)*(_4bx*(0.5 - q2q2 - q3q3) + _4bz*(q1q3 - q0q2) - mx)+(-_4bx*q0+_4bz*q2)*(_4bx*(q1q2 - q0q3) + _4bz*(q0q1 + q2q3) - my)+(_4bx*q1)*(_4bx*(q0q2 + q1q3) + _4bz*(0.5 - q1q1 - q2q2) - mz); recipNorm = invSqrt(s0 * s0 + s1 * s1 + s2 * s2 + s3 * s3); // normalise step magnitude s0 *= recipNorm; s1 *= recipNorm; s2 *= recipNorm; s3 *= 
recipNorm; // Apply feedback step qDot1 -= beta * s0; qDot2 -= beta * s1; qDot3 -= beta * s2; qDot4 -= beta * s3; } // Integrate rate of change of quaternion to yield quaternion q0 += qDot1 * (1.0f / sampleFreq); q1 += qDot2 * (1.0f / sampleFreq); q2 += qDot3 * (1.0f / sampleFreq); q3 += qDot4 * (1.0f / sampleFreq); // Normalise quaternion recipNorm = invSqrt(q0 * q0 + q1 * q1 + q2 * q2 + q3 * q3); q0 *= recipNorm; q1 *= recipNorm; q2 *= recipNorm; q3 *= recipNorm; } I have tried to find information about this, but couldn't. Only this book: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.110.5134&rep=rep1&type=pdf but I'm not sure that it will help me. So, the final question is - how to change the yaw calculating axis after board reboot? Thanks for any advice! AI: The x, y and z orientation of the device is fixed. Extract from the data sheet: Even when you flip the board into a different orientation and repower, these axes will remain the same relative to the device; they are not dynamically allocated at power on. What you need is a Flip value. This value would then be used to map the device axes to axes that match the orientation of the board. Using the NED orientation system, yaw is calculated as rotation around the z axis: So what you need to do is determine what your board axes need and map your magnetometer axes to this. For example, mag_x_flipped = mag_y_unflipped. Without knowing what orientation the magnetometer is on your board I can't say what values you would need for each of your three scenarios. The flip value needs to either be set as a user setting or, I can imagine it could be dynamically set at power on by looking at your accelerometer axes values. In each of your three scenarios, one axis will measure 1g while the others will measure 0g (or there abouts).
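To make the idea of a flip/orientation value concrete, here is a minimal C sketch (not drop-in code for your project; the orientation enum, the example rotations and the thresholds are assumptions) showing the two pieces: detecting which sensor axis currently sees gravity from the accelerometer at power-on, and remapping the sensor axes into board axes before the values are handed to MadgwickAHRSupdateFixed() inside Process_IMU(). Any remap must stay right-handed and must be applied identically to the gyro, accelerometer and magnetometer, after the magnetometer's own axis alignment that your code already performs.
#include <math.h>
#include <stdio.h>

typedef enum { FLAT_Z_UP, ON_SIDE_X_UP, ON_EDGE_Y_UP } board_orientation_t;

/* Decide the orientation once at power-on from the accel vector (~1 g total). */
static board_orientation_t detect_orientation(float ax, float ay, float az)
{
    float x = fabsf(ax), y = fabsf(ay), z = fabsf(az);
    if (z >= x && z >= y) return FLAT_Z_UP;     /* gravity mostly on sensor Z */
    if (x >= y)           return ON_SIDE_X_UP;  /* gravity mostly on sensor X */
    return ON_EDGE_Y_UP;                        /* gravity mostly on sensor Y */
}

/* Remap one 3-axis measurement from sensor axes to board axes.
 * Both example mappings are proper (right-handed) rotations; the exact
 * swaps and signs depend on how the MPU9250 sits on your board, so treat
 * them as placeholders to be checked against your hardware.              */
static void remap_axes(board_orientation_t o, const float in[3], float out[3])
{
    switch (o) {
    case ON_SIDE_X_UP:  /* board "up" is sensor +X: rotate -90 deg about Y */
        out[0] = -in[2]; out[1] =  in[1]; out[2] =  in[0];
        break;
    case ON_EDGE_Y_UP:  /* board "up" is sensor +Y: rotate +90 deg about X */
        out[0] =  in[0]; out[1] = -in[2]; out[2] =  in[1];
        break;
    default:            /* FLAT_Z_UP: keep the sensor axes as they are */
        out[0] =  in[0]; out[1] =  in[1]; out[2] =  in[2];
        break;
    }
}

int main(void)  /* small host-side check; on the target, call these from Process_IMU() */
{
    float accel[3] = { 0.98f, 0.05f, 0.10f };   /* example: gravity mostly on X */
    float mapped[3];
    board_orientation_t o = detect_orientation(accel[0], accel[1], accel[2]);
    remap_axes(o, accel, mapped);
    printf("orientation %d: mapped accel = (%.2f, %.2f, %.2f)\n",
           o, mapped[0], mapped[1], mapped[2]);
    return 0;
}
Store the detected orientation once at boot (or as a user setting, as suggested above) and then pass every gyro, accel and mag sample through remap_axes() before the Madgwick update, so yaw is always computed about the board's "up" axis.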
H: Kicad ERC error #3: Power input pin not driven when attached to transistor The ERC on my schematic keeps on failing due to a Type 3 error (VDD pin of an IC not driven) despite the fact that it does receive power via a voltage regulator and an NPN transistor. See the schematic attached: Pin #2 is configured as a +9 V power input. The current flows from the +12 V source in the upper left corner (which is attached to a power flag and input connector) through the DC regulator (U1) and the transistor (which runs a capacitance multiplier following advice I received in this forum; Q1; type BC547C) to the IC (U2A). Nevertheless, this construction fails the ERC: ErrType(3): Pin connected to some others pins but no pin to drive it @ (80.01 mm, 38.10 mm): Cmp #U2, Pin 2 (power_in) not driven (Net 50). Why not? Is this because of the transistor, whose emitter is not regarded as a power output? UPDATE: I do use power flags. See this screenshot: A look into the library editor revealed that the C and E pins of Q1 are configured as "passive". May this be the culprit? AI: KiCad has no idea of your intended usage nor where power comes from. It has ERC rules that check common issues, and checking that a power-in pin is driven by something that can provide power is one such check. Some parts have a power-out pin, but most of the time you must tell KiCad where the power comes from. You need to add a PWR_FLAG symbol to mark which nets are capable of providing power: https://docs.kicad.org/master/en/getting_started_in_kicad/getting_started_in_kicad.html#43 Such power flags MUST be on all nets that provide power. It isn't enough to place them only on the global power symbols (+5 V, +3V3), as sometimes ferrites are used and KiCad does not know the intent of the circuit. Looking at the picture from the OP, U2A pin 2 (Vdd) is connected to Q1 pin 3, so that could be the source of this additional power rail (or is it fed via the top trace that runs off-screen?). Nets that provide power must be marked as such.
H: TRS to USB microphone wiring I am trying to use a microphone pulled out from an old video recorder. I can use a TRS connector and plug it into the AV mic-in jack. I am curious if there is a way to use a USB-A interface. In short, is it possible to map Out and GND to the USB-A VCC, D-, D+, GND pins? Will Windows detect the device as a mic input if the wiring is correct? AI: No, USB is a digital interface that talks data. If you want to connect a microphone via USB to a computer, you need a USB sound chip (an audio interface) that presents a sound device to the PC and converts the analog signal to digital audio data.
H: Op-amp Colpitts oscillator I have a question about op-amp Colpitts oscillators. I am a hobbyist building a sine wave oscillator circuit for an experiment. The frequency of the output is one of the variables in the experiment so the actual value doesn't matter too much, but will be in the range of 10-100 kHz. I have tried the circuit below without success in getting it to oscillate. The single-supply voltage is 6 V. The problem is that the output produces a steady 5.7 V. It measures 5.7 V at the negative input pin as well. There is no detectable voltage drop with my DMM across any of the resistors so the current flowing must be very small. I have tried JRC4558D and NE5532 op-amps with the same result. I can get it to oscillate at the expected frequency with a 741 op-amp but the output is distorted at this frequency range, which won't do. I have tried adjusting the R3/R2 ratio as well as various values of R1 from 0 to 1 kΩ without any change in the output whatsoever. I have also tried removing R4 without any effect. I'm having trouble understanding where I am going wrong. AI: You are using a single supply, hence the DC voltage on the non-inverting input needs to be at mid-rail, or about 3 volts. You can't expect most op-amps to work like this without correctly biasing both inputs to mid-rail. You must also choose an op-amp that is capable of working on a single supply as low as 6 volts. Some 741 models might, but they really are poor devices to use in this type of circuit due to their many performance weaknesses. You will also find that an op-amp oscillator circuit needs gain stabilization, or else the output crashes peak-to-peak into the clipping points near the supply rails.
H: Switching loss calculation formula I read an article talking about the power switching loss formula, and I am curious about one thing. How can I use math to prove the 1/6 and 1/2 factors in these two waveforms? Could someone give me some idea? AI: These expressions let you calculate the theoretical losses when a power switch turns on or off with overlapping current and voltage. I have carried out the calculations in my book for the two scenarios, and the turn-off sequence in particular. The calculation is quite simple and involves an integral to average the instantaneous power \$p(t)\$ over a switching period. The below figure shows you a real shot taken on a flyback converter when the switch opens: Please note that these are idealized waveforms and switching losses are extremely difficult to evaluate theoretically. This is because many parasitics (components, layout and so on) are involved which can significantly affect the final waveforms. The same goes for simulation, which usually leads to wrong results for switching losses.
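For the idealized straight-line waveforms usually drawn in such articles (an assumption, since real parasitics reshape the edges, as noted above), both factors drop out of averaging \$p(t)=v(t)\,i(t)\$ over one switching period \$T_{sw}=1/F_{sw}\$. When voltage and current cross simultaneously during a single overlap time \$t_s\$ (both ramping linearly): $$\begin{align*} v(t)&=V\left(1-\tfrac{t}{t_s}\right), & i(t)&=I\,\tfrac{t}{t_s} \\\\ E_{sw}&=\int_0^{t_s} v(t)\,i(t)\,dt = V I \int_0^{t_s}\tfrac{t}{t_s}\left(1-\tfrac{t}{t_s}\right)dt = V I\, t_s\left(\tfrac{1}{2}-\tfrac{1}{3}\right)=\frac{V I\, t_s}{6} \\\\ P_{sw}&=\frac{E_{sw}}{T_{sw}}=\frac{1}{6}\,V I\, t_s\, F_{sw} \end{align*}$$ which is where the 1/6 comes from. For clamped inductive switching the overlap happens in two stages: the voltage swings to \$V\$ over \$t_1\$ while the inductor holds the current at \$I\$, and only then does the current ramp down over \$t_2\$ while the voltage stays at \$V\$ (the mirror image applies at turn-on): $$\begin{align*} E_{sw}&=\int_0^{t_1} V\,\tfrac{t}{t_1}\, I\,dt+\int_0^{t_2} V\, I\left(1-\tfrac{t}{t_2}\right)dt=\frac{1}{2}\,V I\,(t_1+t_2) \\\\ P_{sw}&=\frac{1}{2}\,V I\,(t_1+t_2)\, F_{sw} \end{align*}$$ which is where the 1/2 comes from: each stage is a ramp multiplied by a constant, and a ramp averages to half its peak.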
H: What is the name of a two-transistor touch sensor circuit? I remember a circuit from back in my earlier days of playing with Arduinos and electronics that would light up an LED when two leads are touched with a poor conductor (e.g. a finger). I don't remember the specifics, but I remember it involving almost nothing but two transistors, an LED, and some kind of power source. It was simple enough to solder it all together without even a breadboard and carry it to school in a backpack to get extra points with my physics teacher, heh. I've been able to find similar circuits with some rough googling but none of them seem to be it; there is a single MOSFET implementation, and a few that use other ICs that I'm not quite interested in. I'm mostly trying to remember the name of the circuit, which I remember being kinda unique (I want to say it was some engineer's name, and I think it ended in an -ie sound, like -ixie or -itsy circuit or something). Thanks! Update: I found this circuit diagram in the source for one of my REALLY old personal websites. I guess I really liked this circuit! Of course the image is helpfully labeled "circuit.png", but I am 99.9% certain this is the circuit in question. AI: You're likely thinking of a Darlington Pair, which is a high-gain cascade of two BJTs that gives very high current gain: The two transistors are connected in such a manner that a tiny current passed through the touch contacts will be multiplied by the transistor beta (a measure of its current gain), and then multiplied again by the beta of the second transistor, resulting in an impressive current gain able to at least weakly light the LED from leakage through the person touching the sensor. For example, the beta of a 2N3904 (a common NPN BJT) is on the order of 100, meaning that you can get 1 mA of load current (enough for a dim LED) from around 100 nA of leakage. Note to address the 'mosfet' tag on the question - this exact topology is a bit harder to realize with a pair of MOSFETs because FETs are voltage-controlled devices with very insulated gates. Rather than having a finite current gain, a FET is sensitive to the tiniest bits of charge deposited on its gate electrode, meaning that its gate will float around and it may flicker on and off even when the touch sensor is not touched. However, TimWescott's answer mentions an alternate topology that uses a MOSFET to realize a similar touch sensor function using a single FET in a way that doesn't suffer from the above issues.
H: Why did the output voltage of this circuit have the same value no matter what the value of resistor R2 was So here is the circuit: R2 is an NTC thermistor (its value changes with temperature) and I want vout, the output of this circuit, to be between 0.05 V and 4.95 V, where 0.05 V should correspond to R2 at 16k and 4.95 V should correspond to R2 at 5k. Every other value is in between 16k and 5k ohms. The image below is the description of the NTC thermistor that I used in the lab; I want to read temperatures between 15 and 40 °C. The problem that I faced was that no matter what the resistance value was (I kept changing the temperature around it with a heater), the output voltage was always around 1.4 V. In LTspice everything works OK: if I put R2 = 8k I get vout at ~3 V, which is what I want. Did that happen because of the way I dimensioned the circuit? Or did I just mess something up in the lab? Here are 3 pics of the circuit I built in the lab: The red and orange cables are from the NTC thermistor, the blue ones are from the power supply that provides 5 V. AI: The most likely reason: the breadboard is powered with a floating power supply, so there is no common ground reference between the MCU and the op-amps.
H: How do I make a circuit with 3 different counters that count 3 different things? The system requires 3 counters: one for the number of pills per bottle, one for the number of bottles filled, and one for the total number of pills in the filled bottles. AI: Hint: You can use a set of 4017B counter ICs and logic gate ICs for testing conditions/comparisons to answer your question. I don't have enough points to leave the hint as a comment, so I give it here.
H: Mistake in using KVL and KCL in ideal opamp-circuit I have the following circuit: simulate this circuit – Schematic created using CircuitLab And I used KCL and KVL to write the following sets of equations: $$ \begin{cases} \text{I}_0=\text{I}_1+\text{I}_2\\ \\ \text{I}_2=\text{I}_4+\text{I}_5\\ \\ \text{I}_7=\text{I}_5+\text{I}_6\\ \\ \text{I}_3=\text{I}_4+\text{I}_7\\ \\ \text{I}_0=\text{I}_1+\text{I}_3 \end{cases}\tag1 $$ And $$ \begin{cases} \text{I}_1=\frac{\text{V}_\text{i}-\text{V}_1}{\text{R}_1}\\ \\ \text{I}_1=\frac{\text{V}_1}{\text{R}_2}\\ \\ \text{I}_2=\frac{\text{V}_\text{i}-\text{V}_2}{\text{R}_3}\\ \\ \text{I}_4=\frac{\text{V}_2}{\text{R}_4}\\ \\ \text{I}_5=\frac{\text{V}_2-\text{V}_3}{\text{R}_5}\\ \\ \text{I}_5=\frac{\text{V}_3-\text{V}_4}{\text{R}_6}\\ \\ \text{I}_7=\frac{\text{V}_4-\text{V}_5}{\text{R}_7}\\ \\ \text{I}_7=\frac{\text{V}_5}{\text{R}_8} \end{cases}\tag2 $$ But when I tried to solve them for all unknowns I found that there are no solutions. This implies that my equations lead to a contradiction but I can't see where I am going wrong. Can someone show me where I took the wrong path? Thanks a lot. AI: Jan, I don't see any need for the opamp power sources and their currents in this schematic (in some others cases, I might.) So I don't think that's an important insight here. Here's the schematic without all those currents, which I absolutely do not need given that you are open to both KVL and also KCL. simulate this circuit – Schematic created using CircuitLab Let's use (freely available) SymPy: var('r1 r2 r3 r4 r5 r6 r7 r8 iout v1 v2 v3 v4 vin vout') eq1 = Eq( v1/r1 + v1/r2, vin/r1 ) # KCL node V1 eq2 = Eq( v2/r3 + v2/r4 + v2/r5, vin/r3 + v3/r5 ) # KCL node V2 eq3 = Eq( v3/r5 + v3/r6, v2/r5 + v4/r6 ) # KCL node V3 eq4 = Eq( v4/r6 + v4/r7, v3/r6 + vout/r7 + iout ) # KCL node V4 eq5 = Eq( vout/r7 + vout/r8, v4/r7 ) # KCL node Vout eq6 = Eq( v1, v3 ) # ideal opamp ans = solve( [ eq1, eq2, eq3, eq4, eq5, eq6 ], [ v1, v2, v3, v4, vout, iout ] ) tf = simplify( ans[vout] / vin ) pprint( tf ) -r₈⋅(r₂⋅(r₃⋅r₄⋅r₆ - (r₅ + r₆)⋅(r₃⋅r₄ + r₃⋅r₅ + r₄⋅r₅)) + r₄⋅r₅⋅r₆⋅(r₁ + r₂)) ──────────────────────────────────────────────────────────────────────── r₅⋅(r₁ + r₂)⋅(r₇ + r₈)⋅(r₃⋅r₄ + r₃⋅r₅ + r₄⋅r₅) All the solutions, if you want them, are: r₂ v₁ = vᵢₙ ⋅ ─────── r₁ + r₂ r₄⋅(r₂⋅r₃ + r₅⋅(r₁ + r₂)) v₂ = vᵢₙ ⋅ ──────────────────────────────── (r₁ + r₂)⋅(r₃⋅r₄ + r₃⋅r₅ + r₄⋅r₅) r₂ v₃ = vᵢₙ ⋅ ─────── r₁ + r₂ (-r₁⋅r₄⋅r₆ + r₂⋅r₃⋅r₄ + r₂⋅r₃⋅r₅ + r₂⋅r₃⋅r₆ + r₂⋅r₄⋅r₅) v₄ = vᵢₙ ⋅ ─────────────────────────────────────────────────────────── r₁⋅r₃⋅r₄ + r₁⋅r₃⋅r₅ + r₁⋅r₄⋅r₅ + r₂⋅r₃⋅r₄ + r₂⋅r₃⋅r₅ + r₂⋅r₄⋅r₅ -r₈⋅(r₂⋅(r₃⋅r₄⋅r₆ - (r₅ + r₆)⋅(r₃⋅r₄ + r₃⋅r₅ + r₄⋅r₅)) + r₄⋅r₅⋅r₆⋅(r₁ + r₂)) vₒᵤₜ = vᵢₙ ⋅ ──────────────────────────────────────────────────────────────────────── r₅⋅(r₁ + r₂)⋅(r₇ + r₈)⋅(r₃⋅r₄ + r₃⋅r₅ + r₄⋅r₅) (I'll not bother writing out iₒᵤₜ . It's longer and probably not interesting.)
H: I am planning to design a comparator based on an op-amp but I am not getting the exact pulse output in LTspice I am trying to simulate this but I am not getting the exact "RECTANGULAR" pulse output. Kindly help me to figure out where I am going wrong. I am going to use this circuit as a pulse generator for gating a MOSFET. AI: The LT1017 is a micropower comparator, not an op-amp. "Micropower" should clue you in that it is probably slow and not suitable for the input signal timing you are using. You will get better results if you use a pull-up resistor, since the built-in pull-up current is under 100 µA (pretty low for your requirements). You will be better off using a faster comparator like the LT1720 (about 100x faster than the LT1017 and included in the LTspice library) or something like the LMV7239 (about 10x faster than the LT1017, but you'll need to import the PSpice model). You may want to include some hysteresis to improve noise resistance.
H: What is the best way to convert 16 V DC to 4 V DC with close to 0 mA loss I'm trying to power an ATtiny2313A from 4 V (which achieves the voltage/current ratio I want) but the power source I have to derive the 4 V from is 16 V. So I'm wondering what some recommendations would be. I have tried a voltage divider with a ratio of 0.25 (R1 = 2k & R2 = 6k) and this can power the device, but the current being used by the resistors clearly shows limitations and the output of the ATtiny2313A isn't dependable. I have also tried a buck converter I bought off eBay, which draws 5.5 mA without even being connected to the ATtiny2313A. In case you wanna see the limitations from the voltage divider, here are two pictures, where the first is with a steady 4 V from my Analog Discovery and the second is 4 V from the voltage divider. Worth mentioning is that the signal you see is coming from the 16 V and controlled by the ATtiny via a transistor. Any ideas or suggestions? AI: I checked the datasheet for your MCU. It looks like a sub-mA device. Buck converters might have trouble with that. I tried to search for some interesting buck converter that would fit your description on Mouser, but they're all unreasonably expensive, and there is no guarantee they are stable at 1 mA output. So I looked at the linear regulators. I haven't found too many, but there is this one: ADP1720 (Datasheet). Micropower linear regulator with up to 50 mA output and special capabilities for ultra low power, such as (from the first page of the datasheet):
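One more point worth checking before worrying about efficiency: with a sub-mA load, the absolute power burned in a linear regulator dropping 16 V to 4 V is tiny. A quick C sketch of that arithmetic (the load currents are just example figures, not measurements of your ATtiny; add the regulator's own quiescent current from its datasheet to the total drawn from the 16 V rail):
#include <stdio.h>

int main(void)
{
    const double vin = 16.0, vout = 4.0;
    const double loads_ma[] = { 0.1, 0.5, 1.0, 5.0 };   /* example load currents */

    for (size_t i = 0; i < sizeof loads_ma / sizeof loads_ma[0]; ++i) {
        double i_a   = loads_ma[i] / 1000.0;
        double p_reg = (vin - vout) * i_a;   /* dropped across the pass element */
        double p_out = vout * i_a;
        printf("load %4.1f mA: regulator dissipates %6.1f mW, load gets %5.1f mW\n",
               loads_ma[i], p_reg * 1e3, p_out * 1e3);
    }
    return 0;
}
At 1 mA the regulator burns only about 12 mW, so the poor efficiency of a linear drop simply does not matter at these currents.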
H: RISC-V: How do you store specific values in large addresses in Venus (RISC-V sim)? I'm new to RISC-V and I'm having trouble understanding how one would store specific values in large addresses. For example, if I wanted to store the value 5 in 0x12312312, how would you go about that? AI: It requires two steps. "Load upper immediate": and add the 12 LSBits: Images source: "The RISC-V Instruction Set Manual Volume I: Unprivileged ISA" Example to build any constant/address, a5 = 0x12345678:
lui  a5,0x12345      # a5 = 0x12345000 (305418240)
addi a5,a5,1656      # 1656 = 0x678, so a5 = 0x12345678
For values with all 12 LSbits = 0, only lui is required. For small values, addi to register 0 is enough. Another example: add 65535 to a pointer stored at address (a3):
lw   a5,0(a3)        # load the base address, stored in (a3), into a5
lui  a4,0x10         # "build" the upper bits of the constant
addi a4,a4,-1        # 0x10000 - 1 = 0xffff, the final value of the constant
add  a5,a5,a4        # calculate the new address
sw   a5,0(a3)        # store the new address
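Applying the same two-step pattern to the concrete question (store the value 5 at 0x12312312): the low 12 bits of that address are 0x312, whose bit 11 is 0, so no correction of the upper immediate is needed. A sketch of the sequence (whether that particular address is actually mapped as writable data in the simulator is a separate question; the instruction pattern is the point, and pseudoinstructions such as li, where the assembler provides them, do the same bookkeeping for you):
lui  t0, 0x12312     # t0 = 0x12312000
addi t0, t0, 786     # 786 = 0x312, so t0 = 0x12312312
addi t1, x0, 5       # t1 = 5 (small constant built against x0)
sw   t1, 0(t0)       # store the word 5 at address 0x12312312
If the low 12 bits had been 0x800 or more, addi would have sign-extended and subtracted, and you would load the upper immediate incremented by one first.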
H: RF signal amplifier I bought a tado smart thermostat but it has very bad signal. the transceiver often loses connection with the controls fitted to the radiators. i have tried various ways of pointing / positioning / angling it but the signal never reaches all thermostats. It uses a 868 MHz frequency. Is it possible to buy a signal booster? I have seen some online e.g.: https://www.archiexpo.com/prod/eberle-controls/product-53224-1278761.html which is from a different system. Would these work automatically to boost the signals? Is there anything I would need to do to make sure it is working / boosting the right signal? Any other advice welcome. AI: In general, repeaters, especially in the radio world, do not just repeat anything they hear which just happens to be in the right frequency. In most cases, they need to actually understand the signal they need to repeat (at least at the lower levels) before regenerating it for retransmission. In some circumstances it goes further as the repeater needs to actually actively negotiate with neighbouring nodes so traffic is correctly routed, de duplicated, etc. So it’s quite unlikely you’ll be able to use a repeater from a different brand just because it’s in the right frequency band. It needs to use the same protocols as your devices. It is quite possible it could work if they are indeed compatible, but we lack information to know whether this is the case.
H: Is this "definition" of "independent loop" even logical / a valid definition? I'm currently studying the textbook Fundamentals of Electric Circuits, 7th edition, by Charles Alexander and Matthew Sadiku. Chapter 2.3 Nodes, Branches, and Loops says the following: A branch represents a single element such as a voltage source or a resistor. In other words, a branch represents any two-terminal element. The circuit in Fig. 2.10 has five branches, namely, the 10-V voltage source, the 2-A current source, and the three resistors. A node is the point of connection between two or more branches. A node is usually indicated by a dot in a circuit. If a short circuit (a connecting wire) connects two nodes, the two nodes constitute a single node. The circuit in Fig. 2.10 has three nodes \$a\$, \$b\$, and \$c\$. Notice that the three points that form node \$b\$ are connected by perfectly conducting wires and therefore constitute a single point. The same is true of the four points forming node \$c\$. We demonstrate that the circuit in Fig. 2.10 has only three nodes by redrawing the circuit in Fig. 2.11. The two circuits in Figs. 2.10 and 2.11 are identical. However, for the sake of clarity, nodes \$b\$ and \$c\$ are spread out with perfect conductors as in Fig. 2.10. A loop is any closed path in a circuit. A loop is a closed path formed by starting at a node, passing through a set of nodes, and returning to the starting node without passing through any node more than once. A loop is said to be independent if it contains at least one branch which is not a part of any other independent loop. Independent loops or paths result in independent sets of equations. It is possible to form an independent set of loops where one of the loops does not contain such a branch. In Fig. 2.11, \$abca\$ with the \$2 \Omega\$ resistor is independent. A second loop with the \$3 \Omega\$ resistor and the current source is independent. The third loop could be the one with the \$2 \Omega\$ resistor in parallel with the \$3 \Omega\$ resistor. This does form an independent set of loops. I don't understand the authors' definition of "independent loop": A loop is said to be independent if it contains at least one branch which is not a part of any other independent loop. This doesn't even seem to be logical / a valid definition. The authors attempt to define an independent loop by describing it as a loop containing "at least one branch which is not a part of any other independent loop." But this reasoning is circular, since, in order to understand the definition of an "independent loop," one must use/understand the definition of ... an independent loop. And so, since this explanation of "independent loop" appeals to independent loops, it isn't even a valid definition. Am I misunderstanding something here? AI: A set of independent loops contains loops such that each loop in the set contains a branch which is not part of any other loop in the set. Some circuits may be divided up into a sets of independent loops in multiple ways. Therefore, an independent loop is not independent in itself, but only in relationship to a set of other loops.
H: Do I need to switch power and ground on USB 3.0? I am an electronics novice, so before I start cutting my cable, I want to ask people who know more than me. I have a project that has a USB 3.0 hub inside, and I want a single switch to handle the DC power and the USB connection to the computer. I saw a video of someone who put an inline switch on a USB 3.0 cable, and he switched power and ground. Is that necessary, or can I get away with switching only one? AI: OPTION 1 - DO IT IN SOFTWARE If you can find the documentation for the upstream hub, there should be a command you can send the hub to turn off a downstream device in software. You can then avoid having to make any wiring modifications at all. OPTION 2 - REMOVE 5V POWER If it's not possible to find the documentation, or too complicated to implement the software, then you can disable the downstream devices by removing 5V power, and leaving GND (and all other signals) connected. As a practical matter, disabling 5V power is going to require cutting into the cable to access the 5V wire (usually red), and putting a switch/relay/transistor in line with it. USB cables typically have a metal braid that is used for RF shielding. If you cut into that braid or move wires around, you might degrade the signal quality, or turn the cable into a source of RF emissions. Be careful with this approach and be prepared for the possibility that the cable no longer works. OPTION 3 - TOTAL ISOLATION Removing GND is probably not a good idea unless you have some compelling need to totally isolate the device from the host. GND is used as the reference for all signals. So, if you were going to disconnect GND then you would need to disconnect ALL signals. Having anything else connected without GND connected could damage either the host or the downstream devices. Also, it is very important to maintain the integrity of the data signals, so you can't just throw any switching element in line with them and expect it to work. Given the above, I see two possible ways to switch out all the wires. Use a chip designed to switch USB 3.0 signals. Texas Instruments and others make these. But since you said you are a novice, using such a chip is probably beyond your abilities. Use a mechanical relay that is rated for operation in the GHz range. Coto Relay and others make such devices.
H: Case temperature MOSFET I am going to use an nMOSFET (NTMFS5C670NL) which will have a varying power dissipation. The current going from drain to source is determined by a control signal. The maximum power dissipation for the MOSFET is 1.29 W. The junction-to-case thermal resistance Rjc is 2.4 ℃/W, and the junction-to-ambient thermal resistance Rja is 41 ℃/W. The junction temperature formula is Tj = (Rja*Pd) + Ta. If the ambient temperature is 25 ℃, then Tj will be equal to 77.89 ℃. The case temperature Tc will be Tj - (Rjc*Pd) = 75 ℃. These values are only valid if the MOSFET is surface mounted on a 650 mm^2, 2 oz Cu pad. This is the only scenario the datasheet for the MOSFET gives. Is there any way to give an educated guess on what the case temperature will be with a bigger or smaller Cu pad? I have failed to find any white papers that can give a decent estimation. AI: Yes. You can estimate the thermal resistance of that copper geometry in \$^\circ C/W\$ per \$cm^2\$ for free (open) convection, and then add the thermal resistance contributed by the enclosure. Give each interface or layer its own subscript letter or number (internal ambient, external ambient, and so on) so that every term in the thermal chain is accounted for.
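As a starting point, split the datasheet's junction-to-ambient number into the die-to-case part (which does not change with layout) and the pad/board-to-ambient part (which does), then redo the arithmetic with whatever Rθja you can obtain for your actual pad, for example from a junction-to-ambient vs. copper-area curve if the vendor publishes one, or from a board-level thermal simulation. A small C sketch of that bookkeeping, using only the numbers quoted above plus one assumed alternative Rθja purely for illustration:
#include <stdio.h>

static void report(const char *label, double r_ja, double r_jc,
                   double pd, double ta)
{
    double tj   = ta + r_ja * pd;  /* junction temperature               */
    double tc   = tj - r_jc * pd;  /* case temperature                   */
    double r_ca = r_ja - r_jc;     /* pad/board-to-ambient contribution  */
    printf("%-30s Rth(j-a)=%5.1f  Rth(c-a)=%5.1f  Tj=%5.1f C  Tc=%5.1f C\n",
           label, r_ja, r_ca, tj, tc);
}

int main(void)
{
    const double pd = 1.29, ta = 25.0, r_jc = 2.4;

    report("datasheet 650 mm^2, 2 oz pad", 41.0, r_jc, pd, ta);
    /* Hypothetical figure for a smaller pad; replace it with a value taken
     * from the vendor's Rth-vs-area data or from a thermal simulation.     */
    report("assumed smaller pad", 60.0, r_jc, pd, ta);
    return 0;
}
The first line reproduces the 77.9 ℃ / ~75 ℃ figures from the question; only the 41 ℃/W term needs to be replaced to re-estimate Tj and Tc for a different copper area.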
H: CMOS active crystal oscillator load capacitance, to be used for PLL reference I do not have a clear idea of what load capacitance is. I am planning to use the ADF4106 PLL with a ~580 MHz VCO to synthesize the frequency. For the reference input to the PLL, I wanted to simply use a CMOS active crystal oscillator, directly connect it to the REFin pin, with only the AC coupling capacitor (that blocks DC). But now I noticed this term in the datasheet: "load capacitance". The oscillator I want to use is an SG7050CAN ~3 V CMOS active crystal oscillator which I will use at 24 MHz. The datasheet says: Load capacitance, L_CMOS: 15 pF max. I want to understand, in a simple, application oriented manner, what this load capacitance is. I mean, how do I control this load capacitance? What do I need to do? Can I just connect it directly to the REFin pin of the ADF4106 PLL with just the DC decoupling capacitor in between? The ADF4106 datasheet says that the input capacitance of its REFin pin is max. 10 pF. Does this mean I can simply connect the SG7050CAN output to the REFin pin, keeping the trace as short as possible on the basic FR4 PCB that I will use? NB: I am quite loaded with the project overall, so I am keen to just understand the connection and load capacitance controlling rule(s) to get the oscillator connected to the PLL reference input as simply as possible, and less interested in detailed theory or anything like that. And on the website of Epson, manufacturer of the SG7050CAN oscillators, I read that they incorporate loading capacitance or supporting circuitry like that internally in their oscillators. Thus, their oscillators can be used to run ICs directly, forming standardized components and simplifying the design process. AI: Clock trace length is not critical for the pF budget, but it is important for EMI reasons. Let me give you the long explanation first and the conclusion last. A CMOS output stage is generally designed as a balanced, roughly equal-impedance driver so that it gives equal load current for each state and equal rise/fall times. Yet, from process limitations and differences between the Pch and Nch devices, there will be offsets in Ron and Miller capacitance between the two states (0, 1). This results in a spec for the symmetry of the square wave, and it also produces operating current and drive current variations. Thus the standard way to define these variations is to specify them with a fixed load C in the parameter tolerances. A smaller load C reduces risetime, but the 15 pF figure is not necessarily a maximum load or an optimal load; a small load C is only really necessary for maximum frequency, which is rated with "no load". Understand that the slew-rate currents come from the nearby decoupling cap and are affected by the load C; keeping that loop small suppresses the supply ripple seen by nearby circuits and the radiated EMI from the loop area. This follows since \$dVc/dt = Ic/C\$, and for CMOS \$Z_{ol}=V_{ol}/I_{ol}\$ for the Nch and likewise \$Z_{oh}=(V_{DD}-V_{oh})/I_{oh}\$ for the Pch. These Z values (the RdsOn values) are also Vdd/Vgs dependent; they are often consistent within each family of devices rated at the same Vdd max, like 5.5 V or 3.6 V, but have wide process tolerances (25% to 50%). If the load Cin = 10 pF and the asymmetry test standard was 15 pF, then the next question is: how much trace length, and thus how much capacitance, can you work with? Background on stripline or PCB interconnects in general: trace impedance over a ground plane is lower than without one, and thus the capacitance is higher, up to about 0.5 pF/cm.
This reduces emissions. Unless it is a high-layer-count board with a very thin FR4 gap to the plane, the trace impedance might be at least 120 Ω; with a track w/h ratio of about 2 you are close to 50 Ω. For a short track, reflections are not a problem, so neither is Zo, but radiated noise might be. So a ground plane is wanted mainly to control emissions by reducing the current loop area. RdsOn is near 25 Ω for 3.6 V logic and 50 Ω for 5.5 V logic. Capacitance is roughly linear with track width when the gap to the ground plane is much smaller than the width, but when the w/h ratio of the track geometry is << 1 the pF/cm does not change much. (Recall the 1st, 2nd and 3rd order effects of a plane, a line and a point charge on the E-field.) So for 4-layer boards, with trace width w and a gap h to the 2nd-layer ground plane of w/h = 0.2 mm/0.9 mm, the trace capacitance is about 0.5 pF/cm. Thus a direct connection adding 5 pF of trace capacitance is 10 cm long! For a 2-layer board it is about 0.42 pF/cm, and 0.2 mm is almost an 8 mil wide trace. Conclusion: the 15 pF load is what you might encounter with different chip loads; it is not a "target load" but perhaps a practical limit at maximum frequency. It only affects the duty cycle slightly and also limits the risetime via the RC time constant. If you are using both clock edges for certain functions, that may affect timing margins at the maximum frequency; otherwise it is not a worry with ~0.5 pF/cm on a trace.
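To put those numbers together for the 24 MHz case: treat the driver's RdsOn (plus any series resistance) as R, and the REFin input capacitance plus trace capacitance as C, and estimate the 10-90% edge time as roughly 2.2·R·C. A short C sketch using the figures already quoted above (~25 Ω RdsOn for 3.6 V class logic, the 10 pF REFin maximum, 0.5 pF/cm of trace); these are rough estimates, not guaranteed values:
#include <stdio.h>

int main(void)
{
    const double r_on     = 25.0;      /* ohms, typical RdsOn for 3.3/3.6 V CMOS  */
    const double c_pin    = 10e-12;    /* REFin input capacitance (max, ADF4106)  */
    const double c_per_cm = 0.5e-12;   /* trace capacitance over a ground plane   */
    const double period   = 1.0 / 24e6;

    for (double len_cm = 1.0; len_cm <= 10.0; len_cm += 3.0) {
        double c  = c_pin + c_per_cm * len_cm;
        double tr = 2.2 * r_on * c;               /* 10-90% risetime estimate */
        printf("trace %4.1f cm: C = %5.1f pF, tr = %4.2f ns (%4.1f%% of the 41.7 ns period)\n",
               len_cm, c * 1e12, tr * 1e9, 100.0 * tr / period);
    }
    return 0;
}
Even at the full 10 cm / 15 pF the edge is still only about 0.8 ns, a small fraction of the 41.7 ns clock period, which is why the 15 pF figure is a characterization condition rather than something you need to hit, and why keeping the loop (and trace) short matters more for EMI than for the pF budget.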
H: Correcting MOSFET gate voltage I'm working on a project where a IRL520N controls some LEDs but I ran into a problem. In order to activate the MOSFET I'm using a microcontroller with an output of 5 V, but the datasheet says that the gate threshold voltage must be between 2-4 volts. So I wondered: would putting a 200 ohm resistance at the gate fix the problem? AI: The gate threshold voltage is specified between 2-4 volts (actually the datasheet I found for the closest match was 1-2 V). This means that the manufacturer guarantees that if you take any genuine MOSFET of that model and measure it, the threshold will be guaranteed somewhere in that range, so a 5 V signal is guaranteed to turn the transistor on. Being above the threshold is fine and having some margin here is actually good. Because the transistor VgsMax (maximum Vgs that the transistor can safely experience) is well above 5V, you have no concern of damaging the transistor by gate overvoltage. The resistance at the gate is a good idea for other reasons such as reducing ringing when using sharp gate drive edges or inductances in the gate wire. A pulldown resistor is also helpful there.
H: Tesla coil with flyback transformer and ZVS circuit I am new to the field of electronics and I might have some wrong concepts. I was thinking of designing an old fashioned 'spark gap' Tesla coil. Materials I have: a flyback transformer a cheap ZVS circuit a cardboard tube of dimensions - 3.5 in * 19.5 in If I want to make a Tesla coil, how many turns do I need on the primary and the secondary? What should be the gap for the spark gap? What capacitors should I use? What will be the frequency of the Tesla coil? I am confused by all these questions. Could someone help me out? AI: Take a look at http://classictesla.com/home/javatc3d/ . JavaTC is one of the most popular Tesla Coil design tools out there and I highly recommend it. It will help you determine resonant frequency, etc. based on your coil dimensions and wire size. As for the ratio between the primary and secondary, for a Tesla coil that is less crucial. Generally you should shoot for around 100 turns on the secondary. Primary turns are tuned to match the resonant frequency of the secondary and topload.
H: Unexpected voltage drop in linear dual supply power regulator I am attempting to implement a linear dual power regulator circuit using the 7805 and the 7905, as shown below. It almost works, except that the voltage difference from the ground rail to the positive rail sits at around 4.73 volts instead of the expected 5. The difference between the ground and the negative rail on the other hand is just as expected, 5 volts and change. Swapping out the 7805 for a new one changed nothing, nor did raising or lowering the voltage being fed into the circuit, nor changing the values of the capacitors. Could somebody shed some light on what might be happening here? AI: Since 0V is floating, it depends on the load ratio or quiescent current ratio of +/- LDO's. You need to buffer this with a biased Op Amp that can handle much (100x) more current than your single-ended load. Yet OA's tend to be differential loads, so that is not a big issue. simulate this circuit – Schematic created using CircuitLab
H: How do I properly interpret identification on opto-isolator IC? Lightning struck near my house and destroyed a bunch of things I had plugged in, including a treadmill. I swapped parts with another one and identified that it's the lower motor control board that is the problem, and on it I see an opto-isolator that had its top literally blown off. Since I have an identical board I know the part has the following label: CWN1 PC817 SHARP I don't know what the CWN1 means. TL;DR: in the above label, is the CWN1 important? Is it some sort of CTR rating? Or is it just like a manufacturer ID and I can substitute any PC817 for this one? Thanks. AI: The part number is always closest to the logo or company name; the CWN1 is a combination of CTR rank, factory and date codes. Recommendation: I recommend the PC817X3NSZ1B or PC817X4NSZ1B as a replacement. Then do a diode test on all other parts, or use a conservatively current-limited lab supply to bring up DC power externally to locate other defective parts. Decoding CWN1: C is the CTR rank, meaning CTR = Ic/If = 2 to 4 (Ic = 10~20 mA at If = 5 mA) at Vce = 5 V. But when used as a switch, the CTR, like hFE, always drops to well under 10% of that at Vce(sat). Proof from the datasheet: Ic/If = 1 mA / 20 mA = 0.05 at Vce(sat) = 0.1 to 0.2 V, so at Vce = Vce(sat) and Ic = 1 mA the CTR is 0.05/(2 to 4) = 2.5% to 1.25% of the linear value. Although the new datasheets do not show any distinction for saturated CTR across the different ranks, the differences do exist, and the saturated CTR will be around 10% of the max linear CTR for a fixed Vce(sat) max. Designs usually just overdrive the LED to ensure low Vce(sat), but your application might not. W is the factory code (these are now made by LITE-ON in China), and N1 is the date code for January 2001. Sharp still supplies PC817 parts to Mouser, but under new part numbers, and they are most likely DIP for the 5 kV high-voltage spacing; that isolation could easily have been compromised by dust and humidity over this period. No doubt the lightning strike will also have caused a cascade of secondary component failures. The new stock part number uses 3 in the X position, meaning Rank C (Ic = 10 to 20 mA): PC817X3NSZ1B in DIP, which is the only one in stock at this time. You could also use 4, meaning Rank D (Ic = 15 to 30 mA, both at If = 5 mA), which is slightly better and also a few cents cheaper, perhaps due to higher yield, and is also stocked: PC817X4NSZ1B https://www.mouser.ca/ProductDetail/Sharp-Microelectronics/PC817X3NSZ1B?qs=sGAEpiMZZMv0NwlthflBi%252B9cc83UOuhhP4dcSDiySOo%3D Other misc info: there are a few other optos rated for 5 kVrms, but no others at Mouser are ranked as tightly for CTR (onsemi: CTR = 1 to 6, Everlight: 1 to 2). Sharp also makes some parts only in Japan which are also good, maybe better quality: PC123X5YFZ1B, where 5 is Rank N (10 to 20 mA at 5 mA), similar to Rank C in the other part number, but it is non-stock. Nobody is better than Sharp in these IR optos. Sharp IR LEDs are now licensed to Vishay.
H: Can a 12 V LED dimmer be used with a 6 V incandescent load? Can a 12 V LED dimmer be used with a 6 V incandescent load? I'd be interested in knowing what factors make such an arrangement suitable or unsuitable. Specific example: I bought a 12 V, 60 W Dimmer Box because it was reduced to $5.00 and I thought I might be able to use it to build an inexpensive microscope illumination circuit. I suspect, but do not know, that it is a PWM dimmer. I have read that PWM dimmers can be used with incandescent bulbs. The bulb in question is 6 V, 5 A incandescent. The instruction sheet says that it can handle 12-24 V and 5 A max. I include pictures of the front and back of the circuit board: The component at Q2 has a stylized "ST" and a circle containing "03" on the top line. The next line is "BV32" and the last line is "GE 246". The IC at bottom right says "NXP", "PSMN016", "100PS", "PBm 1305 E5", "4970". I am wondering if this will function as a dimmer in a 6 V circuit? AI: Summary: This answer applies generically to operating incandescent loads from DC-input dimmers. In each case the points to watch are that bulb inrush current when cold does not cause problems, that both dimmer and bulb output voltage and current ratings are not exceeded, and that the dimmer input voltage specs are met. Procedures & limits: It is likely but not certain that the dimmer will work if it is always set to low at turn-on, is NEVER set above the 5 A, 6 V output position, and is operated from a 12 V supply. Detail: More detail re brand and model and weblink (if available) and bulb specs will help greatly. It is likely but not certain that it will work with a 6 V, 5 A bulb. At turn-on a bulb has a very low cold resistance, so the current drawn will be high if the dimmer is set to "high". Starting with the dimmer at low and turning it up should allow the bulb to heat and present a higher resistance. Maximum bulb voltage is 6 V, when 5 A will be drawn. The dimmer claims to allow 5 A out. In SOME cases the allowed output current may be lower at low voltages, but there is a reasonable chance that 5 A will be available at 6 V. Setting the dimmer above the 5 A, 6 V position may cause problems: the dimmer may overload rather than current-limit. Noting the 6 V, 5 A position and not exceeding it would be advisable. The 12 V rating MAY be an input voltage below which the dimmer will not function correctly. Operating it on a 12 V supply would avoid this. Testing at Vin below 12 V MAY destroy the dimmer (and may work) - unknowable.
H: AL5873QT16E-13 LED driver I have a requirement wherein strings of LEDs need to be driven with a maximum forward current of 250 mA, and the dimming should be analog, not PWM. I have chosen the AL5873QT16E-13, which has 3 channels that can independently regulate up to 250 mA each. If I have 5 V as Vin, can I only connect 1 LED with a 3.3 V forward drop, or can I connect more than one LED in series? How does it regulate current? The datasheet mentions current ratio blocks; can anyone explain in detail what's going on? AL5873QT16E-13 Datasheet AI: The AL5873Q is a simple linear LED driver. This means that the current regulation is done linearly, i.e. any excess voltage is simply dropped and turned into heat. The datasheet does not contain many details about the exact circuitry involved, but you can count on there being, in the end, an N-channel MOSFET connected between each LEDx pin and ground, which performs the constant-current control together with some current measurement circuitry etc. As this is a simple low-side linear driver, you must supply the LEDs with sufficient voltage (the total forward voltage of all series-connected LEDs, plus margin for some dropout in the driver). With only a 5 V supply and a 3.3 V LED, sure enough only one LED can be used in series.
H: Which components can be labelled as "BD" on a printed circuit board? In the process of repairing some consumer electronics, I encountered some SMDs labelled BD on the silkscreen: As this one has clearly blown, I'm interested in knowing what it could be. Here is another component on the board, having the same label, that seems to be in better shape: Note for people that might land here for a more general case: BD is a common abbreviation for "board". AI: They are ferrite BeaDs. The one that died probably did so because of a failure (short) elsewhere.
H: I don't know why the UCC27324 is getting hot under normal conditions I have a question about the UCC27324P MOSFET driver. I have experimented with a UCC27324D (8-pin DIP type) and a UCC27324P (SMD type) in this circuit. The DIP-type driver works fine with no problem under the same conditions. But with the SMD-type driver, when the 3.3 V PWM signal is applied, the temperature increases very fast. I don't know what I am missing. Here are the circuit and datasheet. UCC27324 data sheet AI: The most probable cause is the missing gate resistors. It also remains questionable whether the pull-down gate resistors are really needed, as they additionally load the driver.
H: Optimal LDA210 optocoupler circuit for mains detection I am trying to sense mains with an ESP32. For that I plan to use this circuit: I want to optimize the resistors setting the If current in order to lower the power (W) they dissipate so they don't get hot. According to the datasheet (page 43), the ESP32 has 45 kΩ pull-up resistors, so I was planning to use an external 10 kΩ to give it some margin. I was trying to calculate the minimum If current I can use, so as to use as little power as possible, both to reduce waste and to keep the resistors' temperature down, since I will place this on a small PCB. Following this article, I searched for a high-CTR bidirectional optocoupler (LDA210 datasheet) and tried to calculate the minimum If current needed: Optocoupler datasheet params: Minimum If current for typical CTR calculation My concerns: this current looks too tiny to me, but I cannot find any minimum forward current in the optocoupler (LDA210) datasheet. Am I missing something? Is the Input Control Current the minimum input current, or does it affect the equations in any way? AI: I'd consider using a capacitor dropper in series with a resistor that limits excessive surge currents. That capacitor value is chosen so that it has the correct impedance to supply sufficient current at the lowest AC line voltage (with a little margin to spare). Assume that the series resistor is somewhere in the region of 3300 Ω. I would also use a simulator to double check everything. From the opto data sheet I'd be considering a nominal mid-range input current of about 2 mA RMS as appropriate. From a subsequent comment: - How did you get to "2mA RMS"? I can't find anything in the datasheet that gives me a hint on what to use. Look at the graphs such as this one: - . The lowest LED drive current shown is 0.5 mA and that figure tallies with the maximum current transfer ratio graph (red mark-ups by me): - So, at your minimum supply voltage you should be aiming for between 0.5 mA and 1 mA. I added the RMS bit on the end to make it more convenient for any AC numbers you were considering. So, at maybe mid-range voltages, 2 mA is about right. What you don't want to do is run on the rising slope of the CTR graph, because you'll get odd results at different voltages.
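To put a ballpark number on the dropper itself (an assumption-laden sketch: 230 V, 50 Hz mains, aiming for roughly 2 mA RMS, and treating the ~3300 Ω series resistor and the LED drops as negligible next to the capacitor's reactance):
$$ X_C \approx \frac{V_{mains}}{I_F} = \frac{230\ \text{V}}{2\ \text{mA}} = 115\ \text{k}\Omega, \qquad C = \frac{1}{2\pi f X_C} = \frac{1}{2\pi \cdot 50\ \text{Hz} \cdot 115\ \text{k}\Omega} \approx 28\ \text{nF} $$
so an X2-rated capacitor somewhere in the 27 nF to 33 nF range would land the LED current in the useful window suggested by the CTR graph across the usual mains tolerance. Re-run the numbers with your own line voltage and frequency before committing, and confirm in the simulator as suggested above.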
H: Skim excess power from a resonant circuit without draining it? If you have a resonant circuit, oscillating at its self-resonant frequency, how would you extract the excess power without draining it fully? A high Q resonant transformer impedance matched to the circuit perhaps? Is there another way? I actually need DC output so perhaps rectifying before extraction is a good strategy? AI: One definition of Q is energy_stored/energy_lost_per_cycle. Increase the losses, to your pickoff, and you necessarily decrease the Q. If you have a parallel LC at some point in your oscillator, then a series diode to an output is a good way of getting a DC voltage out, while only loading the circuit at the conduction instants, and leaving it unloaded throughout the rest of the cycle. For instance simulate this circuit – Schematic created using CircuitLab Note that I've included the sustaining circuit, as resonant circuits don't just resonate by themselves, especially if you skim off the 'excess power'. I've also included its power supply, just in case anybody is so entranced by the idea of resonance they think they can get free energy out of it.
H: What are these connector threaded standoffs called? These threaded standoffs are used on DB connectors and seem to be flared onto the connector flange. They are used to secure two mating connectors together. What are they called? What kind of tool is used to install them onto the connector? AI: The magic search word you're looking for is "swage", from the associated metalworking process, "swaging". (The product is called a "swage spacer".) The usual manufacturers of standoff hardware will make them, along with the required swaging die (used with a hammer or arbor press) to attach the spacer to the PCB or connector. Example spacer Example die The die may also be called a "staking tool".
H: What is the difference between these soldering tips? What's the difference between the two tips? I know one is copper while the other isn't. What is the benefit of one over the other, and has anyone purchased the copper bit before? What's your experience? AI: Solder dissolves copper to a small extent. You'll never notice the small amount of copper that is dissolved from wires or PCB traces when you make a joint, but a bare copper soldering iron tip loses enough over time that you'll notice it fairly quickly. For this reason, good soldering iron tips have a copper core (to transport the heat better) and a special alloy plating that doesn't dissolve in solder. The silvery tips in the first photo are plated to protect the copper core. Depending on the quality of the plating, they should last a fairly long time before they begin to develop pits or cracks. The bare copper tips in the second photo are not plated. They will oxidize (turn black) almost immediately along the length of the tip, and the tip itself will begin showing pits and roughness after just a little bit of use. (Hours to minutes depending on temperature and the specific solder alloy used - lead free being worse than solder with lead.) You'll want to use plated tips from a good manufacturer. They cost more up front, but they last so much longer that the difference is worth it. The bare copper tips in the question are not some intermediate stage during manufacturing. They are a finished product that you can buy on AliExpress, Banggood and other places that sell cheap junk. My first guess was that the bare copper tips could be intended for wood burning (drawing on wood with heat), but the advertisements on AliExpress are quite explicit that the bare copper tips are intended for electronics work. I wouldn't buy them. You'll have to periodically (every few hours) reshape and tin them. I used to have to do that all the time with the cheap soldering irons I used as a kid (late 1970s to the end of the 1980s.) I made my own tips because it was cheaper than buying the correct ones for my cheap iron - and the purchased tips didn't last any longer than my hand made, pure copper tips. Those appear to be Hakko compatible tips. Find a supplier with a good name that sells proper Hakko tips for your Hakko (or compatible) iron. A new question about soldering to stainless steel brought some special solder to my attention. That's the Alu-Sol aluminum solder from Multicore/Loctite. Alu-Sol is a solder that you can use to solder aluminum parts together or to tin aluminum so that you can solder wires to it. It will also tin other metals, such as stainless steel, so that you can solder wires to the stainless. The really interesting bit is that the datasheet for the Alu-Sol recommends using bare copper tips when using Alu-Sol. The flux seems to do bad things to the iron plating on normal soldering iron tips. From the Alu-Sol datasheet: That'd be a legitimate use of bare copper tips in a soldering iron. Whether or not that's what the company selling the bare tips had in mind is a different question.
H: Simulation Error in Circuit Lab with 2 Dependent Sources simulate this circuit – Schematic created using CircuitLab I'm practicing some problems and came across this circuit. It asks for the Thevenin equivalent with respect to terminals A and B. I solved it by hand and have the correct answer and also verified it in the back of the book. I wanted to simulate it in Circuit Lab to verify the results but I keep getting a simulation error. The text says: Error building graph: TypeError: Cannot read property 'u' of undefined Simulations will not run and values will not be displayed, until you return to Build mode and fix these errors. Is there something blatant that I'm missing? I've attached a picture of the circuit in the text book. AI: (CircuitLab developer here.) Sorry for the cryptic error message. On CCCS1, you're specifying the control current as R5 when it should be R5.nA. With this change, the simulation runs: simulate this circuit – Schematic created using CircuitLab The .nA suffix is needed to uniquely specify the current going into a particular terminal of resistor R5. In this case, as drawn, it's the terminal on the left, so R5.nA picks up the current flowing left-to-right through R5, which matches your book's labeling for \$i_b\$. See also Labeling Voltages, Currents, and Nodes (specifically the "Terminal Voltages and Currents" heading). And many more working CircuitLab examples with dependent sources on Dependent (Controlled) Sources and Dependent Source Feedback.
H: How do I prevent all valves from activating when a single circuit activates? I'm setting up a watering system in my greenhouse. This is a 12 V DC system. I have three timers, each controlling its own solenoid valve plus a shared water pump. How do I prevent current from going to the other two valves when a single timer (and its valve) activates? I've tried experimenting with Schottky diodes, which I have plenty of - but with no luck. It's been mostly trial and error, since I am not very strong in the theory of this. A very (most likely naive) idea of how this should work: I had the idea / understanding that Schottky diodes could prevent current from "going back" / "backfeeding" to the other valves. However, this obviously does not work. The diodes I have tried to work with (1N5822): Timers: AI: simulate this circuit – Schematic created using CircuitLab Figure 1. Each timer drives its own solenoid and the pump. D1 to D3 prevent backfeed from one timer circuit turning on the other solenoids.
H: What exactly are IBIS simulation models? Can I use them like a normal simulation component? I read a lot about IBIS. "What I understood is that they basically model the input and output voltage and current data based on actual performance instead of describing them through equations like in SPICE. Please correct me if I am wrong. I cannot for the life of me understand where they are used. It is said to be supported by 'virtually any simulation software', but LTspice and TINA don't support it. Then I come across posts like these which say it does not work like a SPICE model. What exactly do they do? Can I connect, say, resistors around an IBIS model of an op-amp in non-inverting configuration and expect to see an amplification? Do I need to convert it to SPICE? I am very confused. AI: The main point of IBIS is to see how your digital ICs interface with analog circuitry external to those chips. Physical properties such as drive strength, pin leakage, and package parasitics can be vital in certain designs. IBIS is frequently used with PCB layout packages to validate signal integrity against the specifics of your copper traces, pours, substrates, etc. The main thing to understand about IBIS is its simplified input and output structures (see Analog Devices AN-715 AppNote), which are duplicated below (first image modified to fix position of package parameters): One of the most useful applications of IBIS (in my opinion) is to use the V/T data of GPOs to calculate a Thevenin equivalent source resistance for the driver pin. Then you can easily plug a voltage source and resistor to interface with your remaining analog circuity in SPICE, or simply use that resistance value for hand calculations. There are tools/software e.g. SPISim_IBIS which can directly convert the IBIS I/V or V/T curves to behavioral SPICE curves, but it's slower and not as straightforward if you want to work out some things by hand. There's also a difference between ramp data and waveform data, and some free tools only let you use one of them. I give an example below, so you can see for yourself. Something like this can be useful when interfacing digital outputs with gain circuits (think op amp or resistive DAC). You don't want to get burned by assuming an output pin's resistance is equivalent to 0Ω or something else. I prefer doing this calculation using my own lab measurements, but if I don't have hardware available (or I'm too lazy) I'll do it using the IBIS data. STM32 microcontrollers seem to be popular lately from what I hear, so I'll use the stm32g030_031_041_so8n.ibs file found on ST's website. This is the file for the SO8N package version of the STM32G030. 
I'm going to work with the following model found in the file, which includes a description of the specific configuration for this type of I/O pin: io6_ft_3v3_mediumspeed "GPIO-3.3V range - medium speed" If you scroll or search for this model in the text, you will eventually see where it starts which looks like this: |************************************************************************ | Model io6_ft_3v3_mediumspeed |************************************************************************ | [Model] io6_ft_3v3_mediumspeed Model_type I/O Polarity Non-Inverting Enable Active-Low Vinl = 0.990000V Vinh = 2.310000V Vmeas = 1.650000V Cref = 30.000000pF Rref = 100.000000M Vref = 0.0V C_comp 1.290400pF 1.272500pF 1.305600pF Scroll down a little further to where it says this: [Rising Waveform] R_fixture= 50.000000 V_fixture= 0.0 V_fixture_min= 0.0 V_fixture_max= 0.0 which means that this is Rising Waveform data when the output pin is connected to 0V via a 50Ω load. The idea is to analyze the ON state of the PMOS (top) transistor of the output structure. If we scroll to the end of the time series data for that section we see something like this: |time V(typ) V(min) V(max) | ... ... ... 29.949950nS 1.783884V 1.600996V 1.892836V 31.471471nS 1.788090V 1.602697V 1.901179V 31.631632nS 1.788290V 1.602898V 1.901984V 33.793794nS 1.791394V 1.605099V 1.910528V 34.914915nS 1.793096V 1.605499V 1.913845V 36.276276nS 1.794097V 1.605999V 1.917766V 39.879880nS 1.796100V 1.607300V 1.924400V The last row is where they determined the output settled after fully switching from low to high. Another way to look at this is: when the pin is set high and you connect it to a 50Ω load to ground, there will be 1.796V (typically) at the pin. We can use the following formula to calculate the Thevenin equivalent resistance of the output pin: $$ \begin{align*} R_{\text{th(high)}} &= R_{\text{load}} \cdot \Big( \frac{V_{DD}}{V_{\text{load}}}-1 \Big) \\~\\ R_{\text{th(high)}} &= 50 \cdot \Big( \frac{3.3}{1.796}-1 \Big) \\~\\ R_{\text{th(high)}} &\approx 41.871 \Omega \end{align*} $$ If your external circuitry is only sensitive to when the pin is being driven high, then you should be fine with the above value. If not, then we can't assume the two transistors are exactly symmetric so let's do the same for the... [Falling Waveform] R_fixture= 50.000000 V_fixture= 3.300000 V_fixture_min= 3.000000 V_fixture_max= 3.600000 Notice how we're using the batch of values for when V_fixture is 3.3 V (there's another data set for 0 V, so be aware!). This means that the 50 Ω load is now connected to 3.3 V so we can analyze the ON state of the NMOS (bottom) transistor. |time V(typ) V(min) V(max) | ... ... ... 
29.789790nS 1.589774V 1.421969V 1.901713V 30.830831nS 1.586260V 1.418657V 1.898195V 31.551552nS 1.584452V 1.416348V 1.895882V 34.754755nS 1.577525V 1.409723V 1.887940V 35.075075nS 1.576922V 1.409121V 1.887438V 35.315315nS 1.576621V 1.408619V 1.886935V 39.879880nS 1.571300V 1.403500V 1.880300V The formula is similar but slightly different (don't forget that exponent!): $$ \begin{align*} R_{\text{th(low)}} &= R_{\text{load}} \cdot \Big( \frac{V_{DD}}{V_{\text{load}}}-1 \Big)^{-1} \\~\\ R_{\text{th(low)}} &= 50 \cdot \Big( \frac{3.3}{1.571}-1 \Big)^{-1} \\~\\ R_{\text{th(low)}} &\approx 45.431 \Omega \end{align*} $$ You can use each resistance to build an output model using two voltage controlled switches with those two ON resistances, but a simpler approach would be to average the two resistances into one and use the result along with a simple independent voltage source: $$ \begin{align*} R_{\text{th(avg)}} &= \frac{R_{\text{th(high)}} + R_{\text{th(low)}}}{2} \\~\\ R_{\text{th(avg)}} &= \frac{41.871+45.431}{2} \\~\\ R_{\text{th(avg)}} &= 43.651 \Omega \end{align*} $$ One last caveat to point out is that usually the NMOS side has a lower resistance, and that didn't happen in this example. It could be because the time data ended without it fully settling to the final value...or it could be something else. This is one reason why I like doing my own measurements (calculations/formulas are the same), but hopefully this highlights a useful application of the IBIS model data. The main downside to the above V/T curve method is that the R_fixture used for capturing the IBIS data might not even be close to what you are interfacing to the output pin. Furthermore, in the above example the 50Ω that was used puts the driving transistors in a region where their \$R_{DS(on)}\$ is quite non-linear. So using the 50Ω data in a situation where you're interfacing something closer to 470Ω will get you incorrect results. A way around this is to instead look at the I/V data for the approximate current you intend to drive using the device. Just like before, you can use this for only the PMOS using the [Pullup] data, only the NMOS using the [Pulldown] data, or averaging the results for both. For this example (using the same IBIS model), let's aim for around 10mA and we'll look for "typical" values again. So we'll scroll to the [Pullup] data. IBIS convention is that positive currents flow into the component. Unless we're connecting our load to something higher than 3.3V we'll be sourcing current from the device and out of the pin. Therefore, we're looking for a negative current around 10mA. [Pullup] |Voltage I(typ) I(min) I(max) | ... ... ... -0.30 11.466000mA 11.979999mA 10.497975mA -0.20 7.604799mA 7.956100mA 6.973075mA -0.10 3.786300mA 3.967000mA 3.464625mA 0.00 0.230000pA 0.0A 0.643000nA 0.10 -3.671800mA -3.824800mA -3.370709mA 0.20 -7.144100mA -7.386300mA -6.596109mA 0.30 -10.421000mA -10.692000mA -9.677909mA <===== 0.40 -13.506000mA -13.751000mA -12.618008mA 0.50 -16.404000mA -16.568000mA -15.418008mA ... ... ... Skimming the data, I chose the line of data with the arrow pointing to it, which seems to have a typical current closest to our goal. Using the magnitudes of the voltage and current here along with Ohm's law gets us: $$ R_{\text{th(high)}} = \frac{0.3 \text{V}}{10.421 \text{mA}} \approx 28.788 \Omega $$ Doing the same for the Pulldown data, but looking for a positive current (sinking current; i.e. into the pin) gets us: [Pulldown] |Voltage I(typ) I(min) I(max) | ... ... ... 
-0.30 -10.656865mA -11.865999mA -8.724131mA -0.20 -7.110396mA -7.918200mA -5.825767mA -0.10 -3.558800mA -3.963500mA -2.915933mA 0.00 20.000000fA 0.0A 0.0A 0.10 3.478700mA 3.853300mA 2.862306mA 0.20 6.788800mA 7.476800mA 5.610805mA 0.30 9.930300mA 10.870000mA 8.245504mA <===== 0.40 12.903000mA 14.033000mA 10.766004mA 0.50 15.707000mA 16.964000mA 13.173003mA ... ... ... $$ R_{\text{th(low)}} = \frac{0.3 \text{V}}{9.9303 \text{mA}} \approx 30.211 \Omega $$ And finally, averaging the resistances of each transistor into a single value yields us: $$ \begin{align*} R_{\text{th(avg)}} &= \frac{R_{\text{th(high)}} + R_{\text{th(low)}}}{2} \\~\\ R_{\text{th(avg)}} &= \frac{28.788+30.211}{2} \\~\\ R_{\text{th(avg)}} &\approx 29.500 \Omega \end{align*} $$
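If you want to script this instead of punching it into a calculator, here is a tiny sketch in C that simply reproduces the V/T arithmetic above. The numbers are the typical settled values quoted from the stm32g030_031_041_so8n.ibs example; substitute your own fixture and waveform data.

#include <stdio.h>

/* Rough sketch: Thevenin-resistance estimates from IBIS V/T "settled" values.
 * vdd       - I/O supply
 * r_fixture - R_fixture used when the waveforms were captured
 * v_rise    - settled V(typ) of the Rising Waveform into a 0 V fixture
 * v_fall    - settled V(typ) of the Falling Waveform into a 3.3 V fixture */
int main(void)
{
    const double vdd       = 3.3;
    const double r_fixture = 50.0;
    const double v_rise    = 1.796;
    const double v_fall    = 1.571;

    /* PMOS (drive-high) source resistance from the rising waveform */
    double r_high = r_fixture * (vdd / v_rise - 1.0);

    /* NMOS (drive-low) sink resistance from the falling waveform */
    double r_low  = r_fixture / (vdd / v_fall - 1.0);

    printf("Rth(high) = %.2f ohm\n", r_high);
    printf("Rth(low)  = %.2f ohm\n", r_low);
    printf("Rth(avg)  = %.2f ohm\n", (r_high + r_low) / 2.0);
    return 0;
}

Running it prints roughly 41.9 Ω, 45.4 Ω and 43.7 Ω, matching the hand calculation; swapping in the I/V-table voltages and currents instead is just Ohm's law, as shown above.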
H: Altium schematic "snap grid" vs "snap distance" Could anyone give me a practical example of what the "Snap Distance" is in the Altium schematic editor? I read the page below, but when I place a component I can't see any difference in movement when I change the value of the "Snap Distance"; only if I change the "Snap Grid" value does the movement of the object change. I also used the same numbers shown in the example at the link below, but nothing changed. https://my.altium.com/altium-designer/getting-started/schematic-grids-and-preferences AI: Snap distance is how far away traces will snap together when they are near each other (it gets really annoying to route if this is more than a few mils), so you probably won't notice it as much. Snap grid is everything snapping to the grid. (Press G if you want to change it quickly.)
H: Should I use servos or stepper motors in a homebrew plotter? I'm making a plotter for drawing PCB designs on copper clad board. I'm planning on using rails from a 3D printer for the 2D drawing plane and smaller vertical rails to raise and lower one of 4 pen holders. The thinnest pen I'm using is 0.2 mm, so that's the target resolution. Accuracy trumps speed in this application. I'm going to use an ESP32 as the controller/WiFi. My work area will be 400 mm x 400 mm. I plan on writing the software myself. Should I use continuous rotation servos or stepper motors, and why, please? AI: Either one will work. Servos are closed loop: you drive them until they reach a position, based on feedback from a sensor (usually an encoder). Steppers are open loop: you provide steps and they move based on those steps, but you never really know if the motor actually got there - if an axis binds up, the software will keep executing commands regardless. I would probably go with steppers; most software and controllers for axis control (with G-code) are built to work with steppers.
H: IC with SDCard and LVDS interfaces for video playback, what are they called? I am looking to build a board that can connect to an LVDS LCD screen and play back videos from an SD card when triggered via a serial port command, or I2C, or similar. I have seen cards like that on the market, but can't find any ICs with that functionality. What would it be called? AI: The chips you seek are called Digital Media Processors.
H: Current meter doesn't short a circuit? I have a control card where two pins can be shorted with a push button to trigger an action (5V can be measured between the two pins.) If I briefly short the pins with a jumper, the action is triggered as expected If I short the pins with a current-meter: nothing happens. By that I mean that I'm trying to measure the current between the two pins. I would expect the current meter to short the circuit, trigger a current and measure it. My question: is that because the current-meter has too big of an internal resistance, or is it something else ? I reckon the approach is pretty naive but it got me curious now. Background of why I'm doing this: I intend to remotely control the action using an Arduino. My idea is to use an optocoupler that acts as a push button (and keep both circuits isolated.) Then I wondered if I would need a resistor to avoid frying the optocoupler's transistor so I tried to measure a current. Apparently it doesn't work. Also, should I be worried that my optocoupler will also have too big of a resistance to make the circuit work? EDIT: I feel very stupid, but I think my multimeter is simply broken :/ It does measure voltages, but setting it to current even fails to measure a current through a simple resistor. AI: Well, I feel stupid but I think my multimeter was simply broken :/ Trying to measure a current through a resistor also failed. The fact that it still measures voltages (with proper settings of course) got me confused.
H: Reverse voltage for diode in three phase circuit? I'm trying to understand what happens when a diode is not conducting in this circuit; more specifically, what's the reverse voltage applied to it? I know only one diode will be conducting at a time, but what about the other two diodes? If D1 is ON, and I assume a 0.7 V drop on it, that leaves VR = VRN - 0.7 V. Then if the voltage on the cathodes of the diodes is equal to VR, that means in the case of D2 the reverse voltage is: VD2 = - (VRN - 0.7 - VSN) Is that right? If that's true, then for D3, the reverse voltage is: VD3 = - (VRN - 0.7 - VTN) Is that the reverse voltage for the diodes in this circuit? AI: what's the reverse voltage applied to it? Consider 3-phase voltage waveforms from here: - When Phase_1 is at its maximum, the other two phase voltages are not at their negative maximum but at 50% of their peak negative voltage, so the maximum reverse voltage that any single diode can see is not twice the peak voltage minus one diode drop but somewhat less than that. However, it's not 1.5 times either; it's somewhere between 1.5 times and twice the individual peak voltages. If you've done 3-phase theory, you'll know that \$\sqrt3\$ is involved somewhere. In fact, if you plotted each instance of the black and red phase voltages in the picture above, you'd find that the peak difference is \$\sqrt3\$ higher than the peak of either. This is because if one diode is conducting (assume 0 volts dropped) the other diode is subject to the full "line voltage" and not the phase voltage. So, the peak reverse voltage rating for each diode has to be at least \$\sqrt3\cdot V_{PHASE} - V_{DIODE}\$. However, anyone designing a three-phase rectifier will choose a diode reverse voltage rating that is far in excess of this value in order to account for surges, drop-outs, imbalances and back-EMFs from loads.
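As a worked example (the 230 V figure is assumed purely for illustration - use your actual phase voltage): with a 230 V RMS phase voltage,
$$ V_{PHASE(pk)} = \sqrt{2}\cdot 230\ \text{V} \approx 325\ \text{V}, \qquad V_{R(max)} \approx \sqrt{3}\cdot 325\ \text{V} - 0.7\ \text{V} \approx 562\ \text{V} $$
which is why practical designs at this level typically reach for 1000 V-class diodes rather than anything rated close to the calculated minimum.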
H: How to explain the strange results from this circuit (op-amp: differential amplifier?) I have a differential amplifier. Is the problem the "gain" of the op-amp, or "how it is wired"? I have simulated it as a voltage-controlled voltage source (VCVS), as a generic amplifier, and as a good old LM741C. There is very strange behaviour when there is a "wiring" error, yet the results look almost right. What is happening? Precision: it is a so-called "Dynamic DC" analysis in Micro-Cap 12 (Spectrum Software), and the results are, I think, "numerically" correct. EDIT: to obtain "real" and correct results, one must run other analyses on these examples, such as a TRANsient analysis. So, when you have a circuit to be analysed, always try different kinds of simulation. And, in the end, build it in the real world on a breadboard and view its behaviour on a scope (or other tools). When you use "simulators" or "mathematical" tools, don't forget to verify some of the "answers" given. Be critical. AI: It appears that you have fallen into the trap of using a simulator with positive feedback around an op-amp and getting what appears to be a stable result. Circuits drawn incorrectly as differential amplifiers - i.e. circuits that actually apply positive feedback - will give conditionally stable outputs that appear valid; they are not. Whenever using a sim, you need to be sensible and not create this situation.
H: splitting ground plane I have a 4 layer board, where I have a 3.3V STM32 and some 3.3V components, and also a 24V H bridge to drive a 24V DC motor. The board is supplied with a 24V DC power supply; the 3.3V is created by a regulator on the board. I split the power plane accordingly, like this: You can see where the 24V power input is and where the other components are. The yellow is of course the 3.3V and the red is the 24V power plane. Now my question is regarding the ground plane. Should I "split" it like this (notice that it is not entirely split): Or leave it as a whole like this: I know that, generally speaking, it is not a good idea to split the ground plane because of the return paths. But if I don't "split" the plane, then I directly couple the 24V H-bridge noise onto the ground pins of the 3.3V electronics, which does not seem to be a good idea either. The motor eats 2A, by the way. So which one would you advise? AI: There is a 3rd option: change the layout so a gap in the GND is not required. The general rule is that "low frequency" return current will take the route of lowest resistance (typically a direct route) while "high frequency" return current will take the route of lowest impedance (typically roughly following the trace, as this realises the lowest inductance). With that in mind, and assuming the H-bridge is directly coupled to the 24V, you would want that flow not to cross any of the digital signals. Likewise for the 3v3 switcher (assuming a switcher): you do not want its return current interfering with the digital signals. By arranging similar to below, you keep the sensitive parts away from potential noise while keeping a contiguous ground plane. Why a contiguous ground plane? It mitigates creating a gap antenna. This might be a bit of a remote concern for your present board, but it is worth keeping the concept in mind, because when you do create one, gap antennas are the bane of EMI.
H: I need to implement DC and transient analysis of diode and transistor circuits. How can I do that? I am doing a MATLAB project at my university. I have tried to implement DC and transient analysis of diode and transistor circuits by linearizing them and using iterative methods, but the results don't converge. How do SPICE simulators do the magic? Links to resources would be very helpful. I have followed this website. AI: I strongly recommend that you get and read "The Designer's Guide to SPICE & SPECTRE" by Kenneth S. Kundert. The book covers every topic in which you are interested, and in detail. It will do more for you than any other single resource. (It certainly helped me.) In Chapter 2, on DC Analysis, he immediately dives into the problem in 2.2 DC Analysis Theory and then, 2.2.1 Solving Non-Linear Equations, and then 2.2.2 Convergence Criteria, and then followed by 2.2.3 Convergence, where he starts out by writing, "Failure of circuit simulators to converge is a serious problem. One large electronics company estimated that their circuit designers spent an average of two hours a day trying to cajole their simulators into converging." The author also discusses a key difference between SPECTRE and SPICE. SPECTRE uses KCL to determine convergence. The problem with this is that tiny parasitics (such as \$1\:\mu\Omega\$) can leave computations requiring better than an absolute voltage precision of \$10^{-18}\$ in order to converge and, often, this means that KCL is never satisfied in SPECTRE and it just won't converge. In contrast, SPICE decided not to use KCL as a convergence requirement. But as a result of that decision, SPICE can and does falsely converge where it should not. You'll also learn why it is that MOSFETs capacitance models in SPICE are not and cannot ever be made to actually model a MOSFET, properly. Charge conservation is vital with MOSFETs. But Meyer capacitances are incomplete and inconsistent. So there cannot ever be a charge function, when differentiated, that gives Meyer capacitances. The mapping doesn't exist. So it isn't mapped and people live with the problems in SPICE. (SPECTRE can be made to do it more readily, but then again it may not converge, either.) As I say here, you really need this first book. Also, go and get the primary document, "SPICE2: A Computer Program to Simulate Semiconductor Circuits" by Laurence W. Nagel, directly from Berkeley. This one is entirely free. Just click on the PDF link at that site. It's probably an essential base -- everyone knows about it, refers to it, and it is a primary resource by one of the primary people involved in developing SPICE. (The first book mentioned above provides the overall perspective you need, and is an essential read that doesn't lose sight of the necessary details, before diving deep into SPICE2.) Finally, go and get "The SPICE Book" by Andrei Vladimirescu. This book is also excellent and will help you a great deal, as well. But I'd place it as the 3rd book to get, if you can only consider two. You really do want to have the first two, for sure. But I think this one is almost just as important. This third book provides excellent examples of applying KCL nodal equations in preparing circuits for analysis. (Right away, in fact, in the very first chapter on What is SPICE?.) The clarity in these examples were what enabled me to develop the insights that I use today in my own KCL methods, which differ from those found in textbooks. 
These three books in particular were the ones that Mike Engelhardt, who is responsible for LTspice from Linear, recommended to me many years ago when I was struggling to learn how SPICE works, inside. And I can assure you that I was not in any way disappointed by his recommendations. I can pass them on with my grateful recommendation.
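Since the question was specifically about iterations that refuse to converge: at each DC (and each transient time-point) solution, SPICE-style simulators run a Newton-Raphson iteration on the KCL equations, and what usually separates "converges" from "exponential overflow" is how the step into the diode/BJT junction voltage is limited. Below is a deliberately minimal sketch in C for a single series resistor-diode loop. It is not SPICE's actual pnjlim algorithm (see Nagel's report for that), just an illustration of clamping the Newton step so the exponential cannot run away; the component values are arbitrary.

#include <stdio.h>
#include <math.h>

/* Minimal sketch: damped Newton-Raphson for Vs --R--> diode --> GND.
 * Solve for the diode voltage V such that Is*(exp(V/Vt)-1) = (Vs-V)/R. */
int main(void)
{
    const double Vs = 5.0, R = 1e3;        /* source voltage and series resistor */
    const double Is = 1e-14, Vt = 0.02585; /* diode saturation current, kT/q     */
    const double vstep_max = 0.1;          /* crude per-iteration step limit     */

    double V = 0.6;                        /* initial guess for junction voltage */
    for (int k = 0; k < 100; ++k) {
        double Id = Is * (exp(V / Vt) - 1.0);   /* diode current (companion model) */
        double gd = (Is / Vt) * exp(V / Vt);    /* small-signal diode conductance  */
        double f  = Id - (Vs - V) / R;          /* KCL residual at the node        */
        double df = gd + 1.0 / R;               /* 1x1 Jacobian                    */
        double dV = -f / df;                    /* raw Newton step                 */

        if (dV >  vstep_max) dV =  vstep_max;   /* step limiting: without this the */
        if (dV < -vstep_max) dV = -vstep_max;   /* exponential overflows instantly */

        V += dV;
        printf("iter %2d: V = %.6f V, KCL residual = %.3e A\n", k, V, f);
        if (fabs(dV) < 1e-9) break;             /* converged */
    }
    return 0;
}

SPICE2's pnjlim is smarter than a hard clamp (it limits the change in junction voltage using the thermal voltage and the previous iterate, as described in Nagel's report), but the principle - never let one Newton step push a junction voltage far enough to blow up the exponential - is the same, and it is usually the missing piece in homemade MATLAB solvers.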
H: How do I compute the angular resolution of a radar system? I hope this is the right place to ask a radar-related question; otherwise I would be grateful for a pointer to the correct forum. I want to compute the angular resolution of a radar system, but so far I have found different formulas and I am not sure which one to choose. I am also wondering why there is a range dependency in some formulas, while others I found just assume the angular resolution to be proportional to the 3 dB beamwidth - see the last equation below. Further, it would be nice to know where the following formulas come from and what their main underlying assumptions are. I know this might be a lot to ask, but a brief explanation would be great since all of these formulas are some kind of "rules of thumb" as far as I understand. One formula I found here: To start with, let us assume that we can build an antenna as large as necessary to meet our azimuth resolution requirement. The rule of thumb governing antenna size is dazimuth ≈ λR/L where dazimuth = resolvable distance in the azimuth direction, λ = wavelength of radar, R = range, L = length of the antenna. Here is another formula I found here: S_A >= 2Rsin(Θ/2) where Θ = antenna beamwidth (Theta), SA = angular resolution as a distance between two targets, R = slant range aim - antenna [m]. Another document I only have as a hard copy computes the angular resolution as: dazimuth ≈ kappa * BW where dazimuth = resolvable distance in the azimuth direction, kappa = a real-valued constant between 1 and 2, BW = the 3dB-beamwidth. Edit: I want to model the angular resolution of a uniform linear array with drone targets flying at approx. 100 km/h at low altitude above ground. AI: All three formulas are basically the same. The general relationship for azimuth resolution is range × beamwidth. In your first formula the beamwidth is approximated by λ/L (the reciprocal of the antenna size in wavelengths). In the second formula, for small beamwidths, sin(Θ/2) reduces to Θ/2. Then the formula reduces to RΘ, which is the same as the first formula. The third formula only uses the beamwidth and not range, so it only gives the angular resolution. For azimuth resolution as a distance you need to multiply by range. Then the formula becomes the same as the first two, except for the arbitrary constant kappa, which appears to be an empirical estimate to account for characteristics of a particular radar.
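As a worked example (all numbers assumed purely for illustration, since your band and array length aren't specified): take a 24 GHz radar, λ ≈ 12.5 mm, with a 100 mm aperture (physical antenna or ULA extent), looking at a drone at R = 200 m:
$$ \theta_{3dB} \approx \frac{\lambda}{L} = \frac{12.5\ \text{mm}}{100\ \text{mm}} = 0.125\ \text{rad} \approx 7.2^{\circ}, \qquad d_{azimuth} \approx R\,\theta_{3dB} = 200\ \text{m} \times 0.125\ \text{rad} \approx 25\ \text{m} $$
i.e. two drones at that range need to be very roughly 25 m apart in cross-range to be separated on beamwidth alone; the kappa (between 1 and 2) in the third formula and the particular array taper only push this figure around somewhat.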
H: What does 'rotating vector addition' mean in this context, and how does it generate AM and FM? I am currently learning about wave modulation. In communication theory, it is said that 'rotating vector addition' generates AM (amplitude modulation) and FM (frequency modulation). What does 'rotating vector addition' mean in this context, and how does it generate AM and FM? EDIT Chapter 2.6 Phasors and the Addition of Waves of Optics, fifth edition, by Hecht: Chapter 2.11 Twisted Light of Optics, fifth edition, by Hecht: AI: This 152 page reference may have relevant information concerning AM: https://web.sonoma.edu/esee/courses/ee442/archives/sp2019/lectures/lecture06_am_modulation.pdf Shows this slide on page 16: And this slide on page 19:
H: How does relative permittivity change at high frequency? I am trying to design RF radar optics for 60 GHz. However, I do not know how 'constant' the relative permittivity of a substance is. One material I am making an RF lens out of is PEEK, whose datasheet only quotes the εr measured at 1 MHz (link to datasheet below). My question is in 2 parts: Does anyone know of a compendium of materials with the εr measured at high frequency? Is it OK to assume that the εr is relatively constant from 1 MHz to 60 GHz? As a follow-up, what are my options to get an accurate value of εr? https://www.theplasticshop.co.uk/plastic_technical_data_sheets/peek_1000_technical_data_sheet.pdf AI: I doubt PEEK will work well at any EHF frequency, as there are no Dk and Df (loss tangent) specs at those frequencies, and nano-scale contaminants tend to be lossy. The attenuation constant is linear with respect to loss tangent and can be a significant contributor when tanδ is around 0.005 - 0.01. Getsinger's effective-permittivity equations are given in the link below, along with a supplier's recommended dielectric and ultra-smooth metal layer, Astra MT. Above a few GHz the effective Dk starts to decrease; then, due to the surface roughness of the conductor, the effective capacitance increases and with it the effective Dk. But the loss tangent is worse and more critical in the EHF range. https://www.isola-group.com/wp-content/uploads/PCB-Material-Selection-for-RF-Microwave-and-Millimeter-wave-Designs-1.pdf
H: Why is the output not zero for zero differential input? To start with, I am simulating a StrongARM latch. Surprisingly, the circuit is not giving a zero output for zero differential input. In the circuit, one clock cycle is 2 ns and VDD is 1.8 V. The MOSFET model used was BSIM3, HSPICE level 49, 180 nm technology. Transistor sizing: M1,M2: CMOSN l=0.18u w=50u ad=18e-12 as=18e-12 pd=100.72u ps=100.72u M3,M4: CMOSN l=0.18u w=10u ad=3.6e-12 as=3.6e-12 pd=20.72u ps=20.72u M5,M6: CMOSP l=0.18u w=25u ad=9e-12 as=9e-12 pd=50.72u ps=50.72u S1-4: CMOSP l=0.18u w=2.5u ad=0.9e-12 as=0.9e-12 pd=5.72u ps=5.72u Would it be due to an offset, or do these models have an offset automatically - or is it some simulation error? AI: In an ideal world, your circuit would indeed not put out any differential voltage between points X and Y because it is fully symmetrical. There are no random offsets in SPICE, it does exactly what you tell it to do. However, SPICE isn't perfect - it uses floating-point numbers and therefore necessarily has rounding errors. These rounding errors get amplified by the positive feedback loop in your circuit, causing you to get non-zero output voltage. In other words: The circuit you've built is metastable around its zero output voltage point, any kind of disturbance will cause it to fall into one of its stable states (it's a latch after all). The rounding errors of SPICE are enough to make this happen.
H: Is there a manual in English for the UYIGAO UA6013L Digital Capacitance Meter? The UYIGAO UA6013L meter I ordered arrived with an instruction manual in Chinese. I have been unable to obtain a manual in English for this device from the seller or locate one online. Does anyone know if an English manual for this device exists? AI: The UA6013L appears to be only a capacitance meter. 200pF-20mF Auto Range Tester Capacitor Capacitance Digital Capacitance Meter Specification: It is applied with CMOS double-bevel A/D convertor that is automatic in zeroing and polar selection, and makes instruction for beyond measuring range. Wide measuring range, covering 9 measuring sections from 0.1pF to 20,000μF that includes nominal value of any capacitance. Biggest display value: 1999(3 1/2). Reading: 2-3 readings/sec. Zeroing: There is a zeroing knob on the front, easy for operation. Data hold: Put the “HOLD” switch to "ON" and the DH sign will appear on the display. Press the yellow light button , the back light will turn on. Temperature for accurate measuring: 25℃±5℃ Temperature range: working temperature: 0℃ to 40℃ Storage temperature: -10℃ to 50℃ Relative humidity:<80% Power: a 9V laminated battery (NOT including) Dimensions: 143x74x35mm/5.6x2.9x1.4 Inch Material: Plastic I don't see an English manual for this model but you can easily point a smartphone at it for translation for the few points which might not be clear. For example, they suggest starting with 200nF for unknown capacitances, and to use the jack rather than the test leads on small value capacitances for better accuracy.
H: What is systematic offset? I am a bit confused about what exactly systematic offset is. I understand that, generally, offset can be due to two reasons: Random offset, which could be due to mismatch in the size of transistors or any other component. This will be like noise with zero mean. Systematic? It would be great if you could provide an example with a differential pair or any such circuit so I can strengthen my understanding. AI: Here is an example circuit with a systematic offset: simulate this circuit – Schematic created using CircuitLab The bias current of a bipolar op-amp such as the venerable LM741 is not randomly distributed around 0 nA, but rather always flows into the inputs (when balanced and under non-pathological conditions) and is typically 80 nA. Schematic from datasheet: If 80 mV typical (500 mV maximum) offset at the output is unacceptable, you can add a 500K resistor in the path of the non-inverting input and the circuit will no longer have a systematic offset. Since the bias currents don't match perfectly and the resistors have tolerances, there will still likely be an offset, just not a systematic one.
H: Why do silicon wafers look rainbow colored? (Image taken from Can somebody identify this 12" silicon wafer?) So this silicon wafer looks multicolored (and beautiful). But how does it get multicolored like a rainbow? What is the reason for this phenomenon? AI: If it's an old image, or an image of any wafer made by an old process or with large-node technology that does not have "dummy patterning", then the only thing that will make the scribe lanes show color is thin-film interference. Source However, starting at roughly 0.25 microns (early 1990s), dummy patterns were added everywhere (including the scribe lane) at the topographical layers (as opposed to implants) to handle problems with processes that suffered from microloading and other pattern-density-dependent effects. These included reactive ion etching and chemical-mechanical planarization (or polishing), as well as problems coating with the thinner photoresists necessary for 248 nm deep-UV photolithography, which was introduced around the same timeframe. While optical proximity correction is a different thing entirely, the adding of larger features like this is often folded into the overall OPC correction flow of mask pattern generation. If that was the case for this wafer, then this could be either thin-film or diffraction effects (a bit like a DVD produces). If it's thin film, then the film has to be thick enough to produce different colors at slightly different angles for this photo, which is often the case after an oxide passivation covered with a nitride cap. The top, often silicon-rich, nitride has an index of refraction of 2.0 or higher, so Fresnel reflection is higher and the air-nitride interface acts more like a top "mirror" than an air-oxide (or spin-on glass) interface of circa 1.5 index would. But with technology nodes of dimensions so much smaller than a wavelength of light, how could it be diffraction? True, once a feature is below a half-wavelength in air, you cannot see colorful diffraction patterns for that wavelength, even looking at it edge-on with a flashlight. But the dummy patterns don't always have to be at the minimum resolvable feature size - it depends on the details of the process and the step - and some can be much larger.
H: Is it possible to use single power supply for both op amp bias and powering the mic? Question as in the title. Can Vcc from identical source be used both for the op amp bias and powering the mic? AI: C5,C6,C7 are filtering the bias supply to the microphone so it should be fine to use the same supply for both purposes.
H: How to display my date in the 2nd line of the LCD? Below is the code I am using to display dates and times. Basically I use the function "write(hr1,hr0,min1,min0,sec1,sec0);" to display the time and "day_compute(day);" to display the day of the week for the input date. For simplicity, I only show the part that sends data to the LCD between these two functions. I want the time to be displayed on the 1st line of the screen and the day on the 2nd line. What command should I add between these two calls so that the day moves to the 2nd line? I referred to the datasheet but still could not work out how to do that.
void LCD_set();
void main()
{
    TRISA=0x00;
    TRISD=0x00;
    while(1)
    {
        initialize();
        write(hr1,hr0,min1,min0,sec1,sec0);
        day_compute(day);
    }
}
void initialize()
{
    PORTD=0x00;
    LCD_set();
    PORTD=0x0C;
    LCD_set();
    PORTD=0x06;
    LCD_set();
    PORTD=0x80;
    LCD_set();
}
AI: These commands set the configuration and position the cursor:
0x38 - 2 lines and 5×7 matrix
0x80 - force cursor to beginning of 1st line
0xC0 - force cursor to beginning of 2nd line
So after writing the time, send command 0xC0 before writing the day.
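Concretely, and assuming (as your initialize() routine suggests) that LCD_set() latches whatever is on PORTD into the display as a command, the main loop would become something like this - a sketch only, since the internals of write() and day_compute() aren't shown:

    while(1)
    {
        initialize();                             /* ends with command 0x80: line 1, position 0 */
        write(hr1,hr0,min1,min0,sec1,sec0);       /* time prints on the first line              */

        PORTD = 0xC0;                             /* command 0xC0: move cursor to start of line 2 */
        LCD_set();

        day_compute(day);                         /* day prints on the second line              */
    }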
H: Choose a varistor for a broken PC PSU My computer's PSU (Corsair RM 650X) has blown. I've searched a bit online and found that this is a common defect due to a low-quality varistor fitted in this kind of PSU before 2018. I know this is delicate stuff to handle, so after a few months (I am meanwhile using another PSU I had around in the PC) I opened it up and confirmed that the varistor is blown. I see that some people are replacing it with an SCK035, others are choosing an NTC 3D-11. I would like to choose the component with the higher quality / durability, but I don't know how to read the spec sheets. Can you help me make the right choice? AI: "Quality" can mean the stress margin of the part or the design for reliability. What you have presented here is an NTC thermistor, designed to reduce the massive >10x inrush surge of charging the bulk capacitors to the high rail voltage: its room-temperature resistance is about 10x higher than its resistance at the functional operating temperature with a load. Factors that weigh into this design are: the time it takes for power to reach full voltage before "power good" is asserted and a load is applied; the time constant of the thermistor to reach its low resistance; the surge-current ratio relative to the rated steady current for the NTC selected - this depends on the Q = CV storage charge in the supply needed to hold up the voltage for XXX watts through a 1-AC-cycle dropout; and the power supply's delay before it restarts from hot - often a couple of seconds as the power declines to off before an automatic restart can begin. When hot, Rt is low and a high start current will stress all the parts (caps, diodes and NTC). These metal-oxide parts are designed to operate at very high temperatures, and even though such simple parts are reliable, the Arrhenius law gives an exponential decline in reliability with rising temperature due to thermal aging. A different component material may operate at a lower steady-state temperature, making it more reliable, and still have a 10:1 resistance ratio. The problem here is that the NTC has failed to protect itself. Choosing a larger-diameter part means the thermal time constant will be longer, but it may be more robust against peak currents when the power-restart dwell time is too short to cool down. It is difficult to reason about the failure without guessing at the system design and how the part was qualified, and I have no precise datasheet for the original as the part number appears incomplete. I also do not know the replacement brand, but the datasheet looks impressive. Conclusion: Technically, it's still a crap shoot without a full reliability analysis, so follow the "herd's advice" online or consider alternatives. Personal experience: I do know TDK is a leader in this industry, and perhaps their part is better. At least the datasheet is more comprehensive, and they do not boast "Bigmaterialconstant(Bvalue),Smallremainresistance.·Highreliability." but rather just "Highly stable electrical characteristic". That may be just a cultural difference between JP and CN. https://www.digikey.ca/en/products/detail/epcos-tdk-electronics/B57236S0309M051/3913362
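To get a feel for the currents involved (an illustrative calculation only: 230 V mains assumed, and a 10 Ω cold resistance, which is the usual order of magnitude for this class of inrush NTC - the incomplete part number means these are not the exact datasheet values):
$$ I_{inrush(cold)} \approx \frac{V_{pk}}{R_{25^\circ C}} = \frac{\sqrt{2}\cdot 230\ \text{V}}{10\ \Omega} \approx 33\ \text{A} $$
whereas a hot restart, with the thermistor still sitting near 1 Ω, leaves little more than the bulk capacitors' ESR and the wiring to limit a peak roughly ten times larger - which is exactly the stress case described above where the NTC fails to protect itself.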
H: MOSFET simulation using LTspice For this MOSFET I'm having some problems with the simulation of the input and output circuit; what I get is always a straight line. AI: You should really take some notes, or follow some tutorials on basic SPICE usage. You'd see then that the source, as you have it, is not set up properly for your requirements: a sine with a frequency of 1 kHz (initially) and an amplitude of 10 mV, for .TRAN (as noted by G36). Your value of 6 AC 10m means the source is set up for an .AC analysis with a 10 mV AC magnitude and a 6 V DC offset. To correct it use: SIN(0 10m 1k). There's no need for an offset because of C1. Then you used kp=0.8m instead of 0.4m. Plus, lambda defaults to zero, so there's no need to write it out explicitly (but it won't hurt, either; it's just redundant). Then, if you're not using any .IC, or ic flags, or either of startup or uic, you're relying on LTspice to calculate the operating point, and it correctly determines it. Therefore you don't need to simulate for 3 seconds. Last but not least, a timestep of 0.01 s is ten times the period of the signal, which means it's useless. Try .tran 4m (4 periods of a 1 kHz signal). Adapt accordingly as you change the frequency to 10 kHz and 100 kHz, respectively. And, I shouldn't even have to say this, but the voltage to probe is clearly marked in the schematic, in your problem context.
H: Confusion in proportional controller I was revising proportional controllers from here and got confused by the following figure, which is also common in many books. The output of the controller is proportional (by a factor K) to the error signal, and the error signal and the output of the plant must have the same unit (otherwise the subtraction is not possible). This implies that the input and output of the plant should have the same unit - in other words, be proportional. But how is that possible when we know there is no such constraint on the plant, and the plant may have any possible input-output relation? Where is my reasoning wrong? 2. And if we assume the plant is an integrator, does that imply the proportional controller should be a differentiator, so that at least the variables make sense dimensionally? AI: In a purely theoretical presentation, everything is dimensionless. You get this in basic control theory because control systems are difficult and complicated, and trying to deal with everything at once means you just won't get there. So first you learn how to do it in mathemagic land, and then you learn (in a class or on your own) how to translate that to the real world. In the real world, as you point out, things have dimensions. A much more realistic block diagram is shown below. G1 would have an input in whatever domain the summation block is working (most likely digital numbers, but maybe volts or PSI or whatever). It would have an output in whatever the input of H is -- so, more digital numbers (if, for instance, you're modeling your DAC as part of "the plant"), or volts, or PSI, or liters of sewage per second, or whatever. H would, presumably, have some real-world output, i.e. meters of sewage* or volts or degrees C. G2 would take that output as its input, and its own output would be in the units of the summation block. So G1 may be in units of volts/LSB (where "LSB" is a stand-in for "1", and emphasizes that you're dealing with computer numbers), H may be in units of m/V, and then G2 would have to be in units of LSB/m. When you're actually modeling this stuff, a quick trip around the loop, checking that dimensions are consistent with gains, is a good way to avoid errors. simulate this circuit – Schematic created using CircuitLab * I'm trying to make control systems look as romantic as possible here.
H: How does power factor show itself in this data from sensor readings? Suppose I have a single-phase system and I have a sensor attached to measure both voltage and current. The sensor samples at 50 kHz and the data is saved to a file. When we then graph the file we get something like this: In the beer analogy, power factor is equal to true power divided by apparent power. Since the units are in kilowatts, I suppose we have to multiply the voltage reading with the current reading. What then is the result when we multiply the current and voltage data? Is it true power? Apparent power? I doubt that it is apparent power, since I have read that establishments are billed for power factor. If the result is the true power, where is the reactive power on the graph? If the result of the multiplication is already the apparent power, then is the meter incapable of reading reactive power? I don't think so, since there are digital meters and they should have information like the graph above. AI: I find the most illustrative way is to get an out-of-phase V and I graph, and multiply them together to get the instantaneous power graph. Modified from: https://learn.openenergymonitor.org/electricity-monitoring/ac-power-theory/introduction In the power graph, everything is real power flowing in one direction or the other at any point in time. You can see it is sometimes positive, sometimes negative. Here is the illustrative part: Add up (integrate) all the positive parts. That's all energy flowing from source to load. Dividing that by your time interval T gives you the power going from source to load. Add up (integrate) all the negative parts. That's all energy flowing from load to source. Dividing that by your time interval T gives you the power going from load to source. It's all real power flowing in the end, but it is not always being dissipated to do work. The difference between the two tells us the real power actually being supplied or generated. You can see that over time some power just flows to the load and back again, doing no work but still consuming ampacity. This is the reactive power. It's the power flowing from source to load which has an equal amount of power flowing from load to source with which to cancel out. If I have 1000W of real power flowing to the load over time interval T, and 300W of real power flowing back to the source over T, only 700W of real power is dissipated/consumed by the load to do work. The 300W flowing back and forth is the reactive power. If you average the power graph over some time interval, to get a single average power number, that is the same as the real power (or net power) delivered or generated. Apparent power is the maximum real power delivered we would get if we fiddled with the phase alignments and is just used as a reference number (the theoretical best). Power factor is a measure of how close we are to that number with our power delivery.
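To make that concrete for your 50 kHz capture, here is a small sketch in C of what a meter (or your own post-processing) does with the sampled data. The sample arrays below are made-up placeholders, not your data, and the averages should be taken over a whole number of mains cycles.

#include <stdio.h>
#include <math.h>

/* Sketch: real power is the average of v*i, apparent power is Vrms*Irms,
 * and power factor is their ratio.  Replace the placeholder arrays with
 * samples read from your capture file. */
#define N 8
static const double v[N] = { 0, 230, 325, 230, 0, -230, -325, -230 };
static const double i[N] = { 2, 4.2, 5.0, 3.0, 0.5, -3.5, -5.0, -3.8 };

int main(void)
{
    double p_sum = 0, v_sq = 0, i_sq = 0;
    for (int n = 0; n < N; ++n) {
        p_sum += v[n] * i[n];        /* instantaneous power, summed        */
        v_sq  += v[n] * v[n];
        i_sq  += i[n] * i[n];
    }
    double p_real     = p_sum / N;                        /* true (average) power */
    double s_apparent = sqrt(v_sq / N) * sqrt(i_sq / N);  /* Vrms * Irms          */
    /* For sinusoidal waveforms this is the reactive power; for distorted
     * waveforms it is really the total non-active power. */
    double q = sqrt(s_apparent * s_apparent - p_real * p_real);

    printf("P = %.1f W, S = %.1f VA, Q = %.1f var, PF = %.3f\n",
           p_real, s_apparent, q, p_real / s_apparent);
    return 0;
}

So multiplying the two sampled channels point by point gives instantaneous (real) power; its average is the true power, the Vrms·Irms product is the apparent power, and the reactive part only appears once you compare the two - it is never a separate trace on the v·i graph.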
H: Noise Figure of an Attenuator It is known that the noise figure of an attenuator is equal to its attenuation coefficient L. A proof of this statement is provided here. Basically, it uses the formula (Reference): $$F = 1 + \frac{N_a}{GN_i}$$ for a general block with gain G (equal to 1/L in the case of an attenuator), where Na is the output-referred noise power added by the attenuator. The first source evaluates Na as the noise power dissipated in the attenuator. I have some questions: Why does it evaluate Na as the dissipated noise power in the attenuator, instead of half of it? If an amount of power is dissipated and creates noise, why should it all go to the load (output)? I'd say it goes half to the load and half to the source (if both are 50 Ohm). Why doesn't it consider the dissipated signal power too? Dissipated signal power becomes noise, I think.... AI: The whole derivation of the N.F. of a perfectly matched attenuator can be done in a simpler way. Let's assume the attenuator is a transmission-line hybrid which is properly terminated. Every branch of it inputs and outputs noise power density kT; equivalently, over bandwidth B there's noise power kTB travelling in both directions in every branch, assuming nothing generates new noise - or, if it does, it also absorbs as much noise as it generates. That puts resistive attenuators on the same footing as transmission-line constructions. The signal input power is P. The input S/N = P/kTB. The output signal power is P/L, where L is the attenuation expressed as a ratio, not in decibels; for example, 6 dB of attenuation means L = 4. The output S/N = (P/L)/kTB. The noise figure = the input S/N divided by the output S/N = L. Your linked proof of the attenuator's N.F. starts from the formula for the N.F. of a 2-port circuit that adds its own noise. It leads to the same result when the added noise is chosen to match the known result. The formally correct expression for the added noise is: that part of the input noise which is not passed through the attenuator. That makes the input and output noise powers equal, as they were in my version, and so it gives the same result too. It is the formally right formula, but it easily leads readers to assume that whatever is removed from the input power in the attenuator comes out as noise at the output, provided no heat conduction to elsewhere or energy storage happens. You extended that idea by assuming that the lost input signal should also be ejected as noise. Both of these assumptions are pure nonsense. The removed part can be ejected from the attenuator through the 3rd port of the transmission-line hybrid, or converted to heat in the resistive parts of the attenuator. The temperature of the resistive parts can slowly rise due to the input signal, but the resistive parts do not immediately re-emit the absorbed input signal as noise. The resistive parts output noise power kTB, like all resistors, and absorb just as much if they are connected to other resistive parts directly or via transmission lines and the temperature everywhere in the system is T. The noise that the resistive parts of the attenuator output is not the absorbed noise signal reflected back "as is"; it is uncorrelated with the absorbed noise - only the average powers are the same.
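Written out, the matched-attenuator argument above is simply:
$$ F \;=\; \frac{S_i/N_i}{S_o/N_o} \;=\; \frac{P/(kTB)}{(P/L)/(kTB)} \;=\; L $$
and it holds because the available noise power at the output is still kTB: whatever input noise the attenuator removes is exactly made up by the noise its own resistive parts (at the same temperature T) contribute - or, in the hybrid picture, by what enters from the terminated third port.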
H: Why are low values of resistor in non-inverting configuration of opamp causing gain peaking?
I am trying to simulate the frequency response of a TLV07 in non-inverting configuration for a gain of 10, as shown below:
The frequency response shows peaking as shown below:
It is worse for smaller resistor values like 1 and 10:
The response is considerably better for 1k and 10k:
What is the reason for this? In my understanding, peaking is generally seen at higher values of resistance because of the influence of parasitic capacitances, but here I am observing the opposite behaviour.
AI: It is due to the opamp's open-loop output impedance. As you can see in Figure 21 of the datasheet, the open-loop output impedance is inductive from about 100Hz to 10kHz.
I modeled it using this circuit and got the same results.
* sham opamp
* GBW is 1MHz
* open-loop gain at DC is 240dB
* open-loop output resistance is 900Ω // 2mH
G1 0 1 vinp vinn 1
C1 1 0 159n
R1 1 0 1T
E1 vout1 0 1 0 1
R2 vout1 vout 900
L1 vout1 vout 2m
V1 vinp 0 DC 0 AC 1
R3 vinn vout 10
R4 vinn 0 1
An inductive output impedance and the load resistance form another pole in the loop gain. We usually want the loop gain to have only one pole before its magnitude reaches 1. Here the pole introduced by the opamp's output impedance is at about 800Hz, which means the loop gain will have almost 180° of phase shift by the time its magnitude reaches 1, which means ringing is likely.
As for why the open-loop output impedance is inductive at that frequency: this is a rail-to-rail output opamp. Its final stage is common-source, which has a large output impedance. But when people use opamps they usually want a small output impedance. Probably the designer added feedback around the output stage to lower the open-loop output impedance. However, the feedback works better at low frequencies, and thus the open-loop output impedance looks inductive.
The question is where to put the inductive part. For stability with a capacitive load, the open-loop output impedance should look resistive around the GBW. That explains why the output impedance is what it is from 100Hz up. For lower frequencies there is probably abundant loop gain, so there is no need to lower the open-loop output impedance further.
We can see that the simulation is correct, and the people at TI have done a good job designing the SPICE model and the component.
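A rough sanity check of that pole frequency in Python, assuming the open-loop output mainly sees the 11 Ω feedback string (R3 + R4) and ignoring the 900 Ω resistive branch:

import math

L_out = 2e-3          # inductive part of the modelled open-loop output impedance, H
R_fb = 10 + 1         # the 10 ohm + 1 ohm feedback string loading the output, ohms

f_pole = R_fb / (2 * math.pi * L_out)
print(round(f_pole), "Hz")   # ~875 Hz, in line with the ~800 Hz pole mentioned above

With 1k/10k feedback resistors the same calculation puts the pole well above the closed-loop bandwidth, which is why the peaking largely disappears.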
H: Simplest circuit to toggle through 3 colours on RGB LED
I would like to make a bracelet that has a coin cell battery, a button, some super simple logic, and an RGB LED. I'd like the user to be able to toggle through the LED being lit green, yellow, red, and off. I know I could do it using a decade counter and maybe even with shift registers, but I'm looking for something simpler than that! Thanks
AI: You could use a 2-stage Johnson counter, which cycles through exactly four states. You could build it from a dual flip-flop like this:
simulate this circuit – Schematic created using CircuitLab
H: 232RL as COM port: Recognize the device on commercial product - from the software's perspective
I am making a device that uses the FT232RL to translate from UART to USB, and Windows sees a COM port. The connections are like so:
simulate this circuit – Schematic created using CircuitLab
I want to find a way so that when the customer plugs in the product and opens their software, the software can find which COM port the product is connected to. I design both the hardware (product) and the software, so I have access to anything I might need to change.
My solution, which I think is not optimal: I solved this problem by opening the COM ports one by one (done automatically by the software, from COM port 0 to 255) until I find a COM port that sends a code that only the software and the product know. That way, I know I opened the correct COM port and can continue with the communication. (This takes less than a second so it works for me, I just feel there must be a better way.)
The question: Is there a faster, simpler way to overcome this issue? For example, can I program the 232RL so that it shows a specific name (vendor name) along with the COM port? I use the Win32 API and C/C++ for the software, but I can switch to another API or library if I need to. I can even change the 232RL to another IC if that is what it takes.
AI: You can list which serial ports exist in the system, so you don't have to blindly open every port that could exist, but only the ports that are actually connected at the moment.
You can also query the description of the serial ports, so you can skip the ports that don't look like FTDI USB ports and only try opening the ones that do. And finally, skip ports that are already opened by other programs.
FTDI also provides a tool for configuring the FT232RL EEPROM parameters, so it can be used to customize some things that might be helpful for detecting the port.
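To show the idea, here is a sketch in Python with pyserial (purely for brevity - the same VID/PID and description strings are available from SetupAPI or FTDI's D2XX library in C++). The product string is a hypothetical value you would program into the EEPROM yourself:

# Requires pyserial: pip install pyserial
from serial.tools import list_ports

FTDI_VID = 0x0403             # FTDI's USB vendor ID
PRODUCT_STRING = "MyProduct"  # hypothetical string programmed into the FT232RL EEPROM

candidates = [
    p.device for p in list_ports.comports()
    if p.vid == FTDI_VID and PRODUCT_STRING in ((p.product or "") + (p.description or ""))
]
print(candidates)   # e.g. ['COM7'] - open only these and then run your handshake code

This narrows the search to at most a handful of ports, so the existing "send a secret code" handshake still works but no longer has to sweep COM0 to COM255.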
H: How many instructions can a vector processor issue per cycle?
We are currently covering vector processors in class, and I wondered whether a vector processor (as an in-order processor) only issues one (vector) instruction per cycle. Maybe this question is stupid and it depends solely on the individual processor; however, it would be nice if someone could explain this in a bit more detail.
AI: A vector processor is usually¹ defined as a processor which has instructions that are meant to operate on vectors.
Whether or not these instructions take one or multiple cycles, whether they might even have a throughput that is higher than one instruction per cycle in some situations, whether or not there's even a pipeline at all: all up to the individual processor design.
¹ you need to check the definition used in your class, as with everything.
H: Propagation and contamination delays with different delays for rising and falling edges
In Digital Design and Computer Architecture by David Harris and Sarah Harris, the authors explain propagation delay and contamination delay in the following way:
The propagation delay \$t_{pd}\$ is the maximum time from when an input changes until the output or outputs reach their final value. The contamination delay \$t_{cd}\$ is the minimum time from when an input changes until any output starts to change its value. [...] \$t_{pd}\$ and \$t_{cd}\$ may be different for many reasons, including different rising and falling delays.
So I conclude from the bold text (emphasis mine) that for any circuit there is only one pair of these values. That is to say, if a circuit has different delays for its rising edge (transition from 0 to 1) and falling edge (transition from 1 to 0), \$t_{pd}\$ refers to the longer of them and \$t_{cd}\$ to the shorter.
The authors show such a circuit to define the critical and short paths:
and then expand on the aforementioned measures by stating that, taking into account both the critical and short paths, it is true for this circuit that:
$$t_{pd} = 2\,t_{pdAND} + t_{pdOR}$$
$$t_{cd} = t_{cdAND}$$
Let's suppose we encounter the very case described: the AND gate has different delays for its rising and falling edges. Does this mean these equations might be wrong? A falling edge is shown in the picture. So if this edge is quicker in terms of delay, it is either nonsense to use \$t_{pdAND}\$ here (because it refers to the slower rising edge, while we are talking about the falling one) or \$t_{pd}\$ is not relevant to the matter at hand at all (the falling edge). As far as I'm concerned, for these kinds of situations it would be reasonable to use two different pairs of values, one for a rising edge and one for a falling edge, but this is out of line with the bolded author statement.
What's the right way to address these possible delay differences in a circuit?
AI: Yes, these equations are all wrong. They are just rough models of circuit behavior. In fact, the propagation delay will be different for different input pins of the same gate, and may also change depending on the current state of other pins, the input rise/fall time, and the loading on the gate output.
As an engineer, the question you should ask yourself is whether the equations are useful. The equations are useful because they give you a quick estimate of the worst-case propagation delay through the circuit. If you want a better estimate, there are CAD tools specifically designed for this task, which is called "static timing analysis".
Whether or not any particular model is the "right way" is for you to decide. You might begin with a crude model, and if it says that the circuit is much faster than it needs to be then you can move on to analyzing a different circuit. If the crude model says that the circuit is close to failing then you probably want to run a more accurate analysis.
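A toy sketch of the "two pairs of values" bookkeeping in Python, with made-up gate delays and assuming non-inverting gates so rising inputs cause rising outputs:

# Track rise and fall arrival times separately, then collapse to one t_pd at the end.
# Gate delays (ns) are illustrative numbers, not from any datasheet.
tpd_and = {"rise": 0.9, "fall": 0.6}   # AND gate: output rising / output falling
tpd_or  = {"rise": 0.8, "fall": 0.7}   # OR gate

# Critical path of the book's circuit: two ANDs followed by an OR.
tpd_rise = 2 * tpd_and["rise"] + tpd_or["rise"]
tpd_fall = 2 * tpd_and["fall"] + tpd_or["fall"]

t_pd = max(tpd_rise, tpd_fall)   # the single number the book calls t_pd
print(tpd_rise, tpd_fall, t_pd)

Real static timing analyzers do exactly this kind of per-edge (and per-pin) bookkeeping, and only report the worst case as the headline number.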
H: Please help me clear up this confusion regarding gain margin and phase margin and unity feedback
I understand the physical significance of gain and phase margin. However, I would like some clarification regarding the mathematical aspects. Videos such as the following:
https://www.youtube.com/watch?v=ThoA4amCAX4 (time: 6:28)
https://www.youtube.com/watch?v=ThoA4amCAX4 (time: 7:12)
define phase margin and gain margin for unity feedback. Even in Matlab, when we plot the Bode plot of any transfer function, it assumes unity feedback. What's so special about unity feedback?
My next question is: if I want to find the gain and phase margin of a system with forward gain G(s) and feedback H(s), whose gain and phase should I plot among the following?
G(s)?
G(s)/(1+G(s))? (unity feedback transfer function)
G(s)/(1+G(s)H(s))? (closed-loop transfer function)
G(s)H(s)? (loop gain)
I feel it should be option 4, but I would like to confirm, since nothing seems to mention which Bode plot they are checking to obtain PM and GM, and unity feedback seems to be extremely popular for some reason.
AI: Unity feedback is the most critical case regarding the stability of a system with feedback. All other applications (higher closed-loop gain) are less critical. For this reason (and to get a "feeling" for the margins) some companies specify the gain/phase margin for unity feedback only. This enables the user to compare different opamps regarding their stability properties.
Yes - your feeling is correct. All the stability margins (gain, phase or vector) are defined for the loop gain function only. (The vector margin can be found in the Nyquist plot of the loop gain only.)
In this context, you should know that in some cases the inversion at the summing junction (negative sign) is included in the loop gain definition - and in some cases it is not. Therefore, there are two options for finding the phase margin: find the phase difference (at unity loop gain) with respect to -360deg (0 deg) or with respect to -180deg. However, in any case the phase difference between the phase at very low frequencies (0 deg or -180deg) and the phase at unity gain must not be larger than 180deg (when the closed loop is to be stable).
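A minimal numeric sketch of option 4 in Python - evaluate the loop gain G(s)H(s) over frequency and read off the margins - using a made-up third-order loop gain as an example:

import numpy as np

# Example loop gain L(s) = G(s)H(s) = 10 / (s (s+1) (s+10))  - illustrative only
def loop_gain(w):
    s = 1j * w
    return 10.0 / (s * (s + 1) * (s + 10))

w = np.logspace(-2, 3, 100000)
L = loop_gain(w)
mag = np.abs(L)
phase_deg = np.degrees(np.unwrap(np.angle(L)))

# Gain crossover: |L| = 1  ->  phase margin = 180 deg + phase there
i_gc = np.argmin(np.abs(mag - 1.0))
pm = 180.0 + phase_deg[i_gc]

# Phase crossover: phase = -180 deg  ->  gain margin = 1/|L| there, in dB
i_pc = np.argmin(np.abs(phase_deg + 180.0))
gm_db = -20.0 * np.log10(mag[i_pc])

print(f"PM = {pm:.1f} deg, GM = {gm_db:.1f} dB")   # roughly 47 deg and 21 dB for this example

Note that only the loop gain enters the calculation; the closed-loop expressions in options 2 and 3 are what the margins predict the behaviour of, not what you plot.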
H: Input image format to Verilog
I'm working on image processing using an FPGA. My aim is to begin by performing basic image processing such as brightness control and conversion to grayscale, and then later advance to implementing other functions as well. In what form is it preferable to supply the input image (.bmp, .bin, .hex)?
AI: Since your image processing unit is working on raw data, binary (.bin) format seems the best fit. That said, you will have to ensure that the resolution and pixel depth match what your IPU needs, which a binary file doesn't carry any information about. You'll need to keep track of that.
The .bmp format has that image-format meta-information, so your test bench could reference it and extract the binary.
Hex makes no sense really. Hex encoding expands the file size by more than 2x. Hex is usually used for making PROM files.
You'll probably want to investigate conversion tools so that you can accept a variety of formats, including JPEG, PNG, TIFF, RAW, piecewise-linear (PWL), etc. That is, have a setup that converts them to binary as needed for your block.
Finally, get a good understanding of color spaces and gamma. sRGB is the most common, but there are others to consider depending on the end use of your processing block. More here: https://www.anandtech.com/show/13054/at-101-understanding-laptop-displays/4
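One possible conversion step, sketched in Python with Pillow; the file name and the 640x480 resolution are placeholders for whatever your design actually expects:

# Requires Pillow: pip install Pillow
from PIL import Image

img = Image.open("input.png").convert("L")   # 8-bit grayscale; use "RGB" for 24-bit colour
img = img.resize((640, 480))                 # force the resolution your IPU expects

with open("frame.bin", "wb") as f:
    f.write(img.tobytes())                   # row-major pixel bytes, no header - the testbench
                                             # must already know width, height and bit depth

The testbench (or the board's memory loader) then streams frame.bin into the design, and the width/height/depth bookkeeping lives in your scripts rather than in the file itself.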
H: What is the purpose of this extra capacitor in a notch filter?
I understand notch filters; I can write transfer functions and draw Bode diagrams to see how they work. As far as I know, the notch filter only has C1, C2, C3, R1, R2, R3. The schematic I have here includes:
C1 = C2 = 0.05uF
C3 = 0.1uF
R1 = R2 = 40k Ohm
R3 = 20k Ohm
(It does not let an 80Hz signal pass.)
Additionally, there is C4 = 0.05uF. I have found the transfer function of the schematic with and without C4 - the Bode diagrams are the same. What is C4 for?
AI: You can't determine the effect of C4 without taking into account the output impedance of the previous stage.
C4 is directly across the input terminals - assuming your analysis used zero output impedance for the previous stage, C4 will have no effect. In a real-world system the previous stage would have a finite resistance, and C4 will then add a first-order low-pass filter to the overall response. That is probably the intent.
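A quick numeric check in Python with the values given; the 1 kΩ output resistance of the previous stage is an assumption purely for illustration:

import math

R = 40e3          # R1 = R2
C = 0.05e-6       # C1 = C2 (twin-T with R3 = R/2 and C3 = 2C)
f_notch = 1 / (2 * math.pi * R * C)
print(round(f_notch, 1), "Hz")   # ~79.6 Hz, the rejected frequency stated in the question

R_source = 1e3    # assumed output resistance of the previous stage
C4 = 0.05e-6
f_lp = 1 / (2 * math.pi * R_source * C4)
print(round(f_lp), "Hz")         # ~3.2 kHz first-order low-pass corner added by C4

With an ideal (zero-impedance) source the second corner moves to infinity, which is why the transfer functions with and without C4 came out identical in the original analysis.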
H: Ohm's law and the power equation
I have a question about electricity. I know that transmitting power over long distances is better in AC. I also heard that the voltage needs to be high to reduce losses. I want to calculate this for myself.
Imagine there is a windmill that produces 100 kW of power. This 100 kW needs to be transported over a 1 km wire that has 1 ohm of resistance. We can transfer this 100 kW at 1 kV or at 10 kV, and we are going to look at which one is more beneficial.
When we send the 100 kW at 1 kV: this means we have 100 A (Power/Voltage) with a loss of 10 kW (Current^2 * Resistance).
When we send the 100 kW at 10 kV: this means we have 10 A (Power/Voltage) with a loss of 0.1 kW (Current^2 * Resistance).
This shows that the losses are lower when the voltage is high than when it is low. However, I noticed something odd. When we send the 100 kW at 1 kV I calculated that the current is 100 A via the power equation (100 000 / 1 000). But when I calculate the current via Ohm's law I get 1 kA (1 000 / 1). What am I doing wrong?
AI: You're forgetting about the load. The 100 kW is (mostly) delivered to the load, not to the 1 ohm of the transmission line. Similarly, the voltage is (mostly) dropped across the load, not across the transmission line. Ohm's law applied to the 1 ohm wire relates the line current to the small voltage drop along the wire, not to the full 1 kV.
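Putting numbers on that in Python, using the question's own simplification that the line current is the transmitted power divided by the sending voltage:

P = 100e3       # power produced by the windmill, W
R_line = 1.0    # resistance of the 1 km wire, ohms

for V in (1e3, 10e3):
    I = P / V                 # line current set by the power being transferred
    P_loss = I**2 * R_line    # dissipated in the wire
    V_drop = I * R_line       # this, not the full supply voltage, is what Ohm's law
                              # relates to the 1 ohm of the wire
    print(f"{V/1e3:.0f} kV: I = {I:.0f} A, line loss = {P_loss/1e3:.2f} kW, wire drop = {V_drop:.0f} V")

At 1 kV the wire drops 100 V and the load sees the rest; at 10 kV the wire drops only 10 V, which is where the 100x reduction in loss comes from.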
H: Does putting a magnetic sensor in a metal housing affect the performance of the sensor?
I'm using an IMU sensor to read the Earth's magnetic field. Does a metal housing affect it? It might be a basic question, but I lack experience in this exact scenario. (My device needs a degree of reliability, so I'm asking.)
Currently the metal case is aluminium, about 0.8~0.9mm thick.
AI: Aluminium doesn't affect a magnetic field as long as the magnetic field is constant. Others have warned you that you should prevent any current from flowing in the aluminium housing, because a current creates a magnetic field which disturbs your measurements.
If the magnetic field is not constant but changes, there is some current in the aluminium, because a changing magnetic field generates a voltage (= induction, the basic principle behind generators) and a voltage causes current in metals. The total effect is that the induced current partially cancels the changing magnetic field.
If the magnetic field stays constant for a long time, then changes to a new value and stays at that value again for a long time, the sensor will see the change slowed down, but finally the new value is read correctly. If the magnetic field changes slowly enough the effect can be negligible, but it exists and must be examined carefully. We should know the spectral content of the change of the magnetic field. The skin depth at the highest significant frequency, compared to the thickness of the housing, gives some idea of the magnitude of the effect. See this: https://en.wikipedia.org/wiki/Skin_effect
If your magnetic field has no significant spectral components (due to the change) at, say, 100 Hz and above, the skin depth is about 10mm or more ==> a housing less than 1mm thick causes well below 1% cancellation.
If you plan a control system where the magnetic field is controlled by feedback, you have an extra RC-low-pass-like effect in series with the sensor, and that must be taken into account in the controller design.
Testing with and without the aluminium housing would be an excellent way to see the effect if the skin-depth analysis suggests that it cannot be neglected. I guess you must build a test lab, because the real conditions need the housing to protect the sensor.
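A quick skin-depth estimate in Python with textbook constants for pure aluminium; alloy housings have higher resistivity, which pushes the numbers up toward the ~10 mm figure quoted above:

import math

rho = 2.65e-8            # resistivity of pure aluminium, ohm*m (alloys are higher)
mu = 4 * math.pi * 1e-7  # permeability; aluminium is essentially non-magnetic

def skin_depth(f_hz):
    return math.sqrt(2 * rho / (2 * math.pi * f_hz * mu))

for f in (10, 100, 1000):
    print(f, "Hz ->", round(skin_depth(f) * 1000, 1), "mm")
# roughly 26 mm at 10 Hz, 8 mm at 100 Hz, 2.6 mm at 1 kHz - all much larger than a 0.9 mm wall

So for slowly varying fields like heading changes, a sub-millimetre aluminium wall is a small perturbation; it is fast field changes (or eddy currents from nearby switching circuits) that deserve a closer look.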
H: Voltage divider. Very basic question
In this basic voltage divider, in order to calculate \$U_4\$ I wrote \$U_4/U = R_A/R\$. However, the correct expression turns out to be \$U_4/U = R_A/R_B\$. While I see that the second is correct, I don't see why mine is not... How should I properly think about this in order to always write voltage divider expressions correctly?
AI: I presume \$R_A = R_3||R_4\$, and \$R_B = R_2+R_A\$.
The voltage across \$R_1\$ is the same as the voltage across the series combination of \$R_2\$ and \$R_A\$. \$R_1\$ doesn't affect the voltage dropped across that series combination. So you have to think only about how the voltage will now be divided between \$R_2\$ and \$R_A\$, i.e., why the second expression is the right one for \$U_4\$.
$$\frac{U_4}{U}=\frac{R_A}{R_2+R_A}=\frac{R_A}{R_B}$$
H: Why is the noise gain of these opamps 500? I feel it should be 250
I wanted to check the bandwidth of the first stage of an instrumentation amplifier configuration, and this is the circuit:
Since the noise gain of an opamp is (1 + Rf/R1), for each of the op-amps the noise gain should be 250. The GBW of the opamps is 1 MHz. This should give a -3dB bandwidth for each of the opamps of 1 MHz/250 ≈ 4000 Hz. However, on simulation I get the bandwidth as ~2 kHz, indicating a gain of 500. I don't get why. Yes, the overall gain of the first stage is 500; however, for the individual signals it should have been 250. Please tell me what I am missing here.
AI: It's a gain of 500 because R5 can be regarded as two 10 ohm resistors in series and, due to the balanced signal you are applying, that resistive centre point is naturally at signal 0 volts. Hence each amplifier sees only half of R5, and its gain is double R6/R5 (or double R7/R5).
H: How much voltage is too much for an LED?
I have an amplitude-modulation LED driver (a dimmable constant-current-reduction LED driver). It can power an LED up to 30W, 750mA, 6-40V. The LED I am connecting is rated for 750mA, 12V. Will it be too much voltage? I hear people say too much current will kill the LED and way too much voltage will wear it out with excess heat over time, but no one seems to spell out when it is too much. How much voltage is too much? What does it mean when the LED driver has a flexible range of voltage?
AI: LEDs are generally designed for a specific current; you are advised not to exceed that current.
What is important to know is that the voltage-current relation of LEDs is such that it is difficult to know what voltage to apply to get a specific current. In addition, a small change in voltage can result in a large change in current. This means that if we apply a voltage to the LED which is slightly too high, a lot more current than we want could flow. Also, the current-voltage relation depends on temperature and on the individual LED. LEDs made in the same "batch" are often very similar, but there can be significant differences between batches of LEDs even from the same manufacturer.
To deal with these uncertainties without risking overdriving and destroying the LEDs, we should not apply a voltage to the LEDs; instead we set the current. That means the voltage will vary over temperature and between LEDs, but that's OK as long as the current is kept under control. That current doesn't need to be accurate as long as it stays below the maximum current for the LED.
Your LED driver is actually OK for this LED. Your driver delivers 750 mA (same as the LED) and supports voltages between 6 and 40 V. Your LED has a nominal voltage of 12 V, which falls within that range. So with this LED the driver will deliver 750 mA (which is good) and the voltage will be determined by the LED at around 12 V (also good).
As this driver supports up to 40 V, you could even power up to 3 of these LEDs in series with it. In series, the voltages add up, so 2 LEDs = 2 x 12 V = 24 V and 3 LEDs = 3 x 12 V = 36 V, which is still below the 40 V that the driver supports. The current stays 750 mA, which is what is needed.
Also very important: keep your LED cool. 750 mA at 12 V means about 9 watts, so the LED will get hot and a heatsink will be needed. I would consider a heatsink of around 10 cm x 10 cm. The cooler you keep the LED, the longer it will last!
H: USB-C and >5V Power Supply
I have a question about power supplies and Power Delivery on USB-C. I'm trying to understand the USB-C specification - Power Delivery, UFP and DFP, DRP... - but this is a nightmare. There are no clarifying examples in the USB-C specification, only a ton of diagrams with 1200 configurations with different specs, and I find it hard to understand.
I have the following USB-C connector, configured as a 5V 3A supplier to my board.
My question is: what happens if I connect a 5V/9V/12V/20V supply? How will the negotiation between the power supply and my board go? I suppose that with the CC resistors it is a standard USB connection, and 5V will be chosen. Is that correct?
AI: By default, you get nothing out of the Vbus pins of a USB-C port until you plug something into it and the port detects that through resistors attached to the CC pin(s). To get the standard 5V Vbus supply, all that is needed is a pulldown resistor in your device. This combines with the pullup resistor to 5V in the host port to detect that a device has been connected and to advertise the amount of current available from the port at the default output of 5V:
(Image from https://www.silabs.com/community/mcu/8-bit/knowledge-base.entry.html/2016/09/26/what_s_the_role_ofc-kQYe)
The value of Rp identifies the current available at 5V as follows:
56kΩ: Default (500mA or 900mA)
22kΩ: 1.5A
10kΩ: 3.0A
As Rd is defined as 5.1kΩ, it's possible that Rp may be implemented as a current source rather than a fixed resistor to get the same effect.
Any higher voltage from Vbus requires negotiation through the CC pin, which means that any device can assume it will get a supply of 5V without it. This is essential to ensure that legacy devices will not be damaged by supplying them with higher voltages than they can accept.
If you do need that higher voltage, there are plenty of devices on the market you can build into your design to do the negotiation for you without having to worry about the details.
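As a rough illustration of how a sink reads that advertisement, a small Python sketch; the CC voltages are derived from the resistor values listed above (not quoted from the specification), and real sources may use current sources instead of pull-up resistors:

RD = 5.1e3   # sink pulldown on CC
for rp, label in ((56e3, "default USB current"), (22e3, "1.5 A at 5 V"), (10e3, "3.0 A at 5 V")):
    v_cc = 5.0 * RD / (RD + rp)   # simple divider between source Rp and sink Rd
    print(f"Rp = {rp/1e3:.0f}k -> about {v_cc:.2f} V on CC -> {label}")
# roughly 0.42 V, 0.94 V and 1.69 V - a sink measures CC to know how much it may draw at 5 V

So in your case, with only Rd pulldowns fitted, any compliant supply should simply give you 5 V and signal on CC how much current you are allowed to take; the higher PD voltages are never applied without an explicit negotiation.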