H: Microprocessor & sub 2.4GHZ transmitter for underwater application
I am looking for a device that has following properties:
Can transmit data underwater (sub 2.5 GHz, 1-2 m below surface)
Very small (max. 15mmx15mmx2mm)
In-built microprocessor
Able to read sensors
So far I found the XBee, which is too big, and the Bluegiga BLE121LR Bluetooth® Smart Long Range Module, which uses Bluetooth. I read that Bluetooth is really unreliable underwater. Is that true even for only 1-2m? http://www.silabs.com/products/wireless/bluetooth/bluetooth-low-energy-modules/ble121lr-bluetooth-smart-long-range-module
I have little experience when space becomes the limiting factor, that's why I am asking here.
Thanks!
AI: Depends on the water.
Saltwater is very conductive, so it will hardly work at all at any frequency above VLF.
Freshwater is better and it may work out to many meters at a couple of MHz.
2.4 GHz is out of the question, I'd suggest.
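To put rough numbers on this, here is a minimal sketch using the standard good-conductor skin-depth formula with an assumed seawater conductivity of about 4 S/m. It is only an order-of-magnitude guide; the approximation's validity limit is noted in the comments, and it does not describe 2.4 GHz, where dielectric loss dominates.

    import math

    MU0 = 4 * math.pi * 1e-7        # permeability of free space, H/m
    SIGMA_SEA = 4.0                 # assumed seawater conductivity, S/m (typical textbook value)

    def skin_depth(freq_hz, sigma):
        # Good-conductor approximation: valid only while sigma >> 2*pi*f*epsilon,
        # which holds for seawater at these frequencies but not for freshwater or at GHz.
        return 1.0 / math.sqrt(math.pi * freq_hz * MU0 * sigma)

    for f in (10e3, 100e3, 2e6):
        d = skin_depth(f, SIGMA_SEA)
        print(f"{f/1e3:7.0f} kHz: skin depth ~ {d:4.2f} m, ~ {8.686/d:5.1f} dB per metre")

At 10 kHz the skin depth is a couple of metres, while at 2 MHz seawater already costs tens of dB per metre, which is why only VLF is workable in saltwater.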
This amateur radio paper may help you with some ideas.
|
H: How to modify a TNY264-based SMPS for a different output current?
In one of the reference applications for the TNY264 switcher, we can see the following circuit.
and in part of the circuit description we read:
... The output voltage is determined by the
sum of the optocoupler U2 LED forward drop (~1 V), and Zener
diode VR1 voltage. Resistor R8 maintains a bias current
through the Zener diode to ensure it is operated close to the
Zener test current. A simple constant current circuit is implemented using the VBE
of transistor Q1 to sense the voltage across the current sense
resistor R4. When the drop across R4 exceeds the VBE of
transistor Q1, it turns on and takes over control of the loop by
driving the optocoupler LED. Resistor R6 assures sufficient
voltage to keep the control loop in operation down to zero volts
at the output...
But it is not very clear to me what component values I should change to get the same 5 VDC at the output but 2 A instead of 500 mA.
Does anybody with experience designing around this part know what I would need to change to achieve 5 VDC @ 2 A at the output?
AI: You need to change the chip. According to the datasheet, the TNY264 is only capable of 9W maximum output, even when adequately heatsinked and ventilated. Some of the other members of its family can handle >10W.
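As a rough power-budget check (the efficiency figure here is an assumed ballpark, not a datasheet value):

    # Rough power-budget check for the requested output
    v_out, i_out = 5.0, 2.0              # desired output
    p_out = v_out * i_out                # 10 W at the output
    efficiency = 0.75                    # assumed flyback efficiency
    p_in = p_out / efficiency            # ~13.3 W drawn from the input
    print(f"P_out = {p_out:.1f} W, estimated P_in = {p_in:.1f} W")
    # 10 W of output already exceeds the ~9 W ceiling quoted above for the TNY264,
    # so a larger family member (or a different controller) is needed, and the
    # transformer, output diode and sense resistor must be re-rated as well.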
|
H: Configuration problem with pic16F877A
I am building a program for a PIC16F877A. It seems to be running correctly in the simulator MPLABX. I'm using about half the available program and data memory.
During programming I receive the following message:
The following memory area(s) will be programmed:
program memory: start address = 0x0, end address = 0xfff
configuration memory
Debug Executive
Address: 1f00 Expected Value: 1170 Received Value: 3fff
Failed to program device
In the simulator I have looked at the program memory. Address 1F00 is not used and indeed has the value 3FFF. How should I proceed in finding the error?
The program is written in XC8.
AI: For hardware debugging an extra piece of code called the 'Debug Executive' is loaded into program memory from 0x1F00-1FFF. For some reason your programmer is not able to write to this area. Three possible causes I can think of:-
The Flash is Write protected.
Ensure that all program memory is writable by adding #pragma config WRT = OFF to your code.
You actually have a PIC16F877 (no 'A'), which has different configuration bits.
The 16F877 can be configured to write protect 0x1F00-1FFF while leaving other areas alone. Solution is to select the correct MCU and set the appropriate configuration bits for it.
The chip is faulty, or it's actually some other PIC that only has 4k words of program memory.
Try padding the program until it fills more than half the memory, then if programming fails above 0x1000 you know it only has 4k of working Flash!
|
H: Can a failed write to a Read/Write partition on an SD card cause corruption of a Read Only partition?
I have a project that uses a raspberry pi zero where the power will be cycled unexpectedly, so I configured the filesystem as Read Only. I also have a small amount of data that I want to update every few minutes, which I don't mind losing if it doesn't happen often.
I would like to create another partition on the SD card that is Read/Write for this extra non-critical data, with the expectation that if power fails during a write it will only corrupt the one partition.
I know that SD cards have their own controller built in that remaps addresses to the actual flash cells. So, are partitions on an SD card treated independently at the card level, or is there some internal process on the card that will cause data from both partitions to be modified, thus allowing potential corruption of a read-only filesystem?
AI: So, are partitions on an SD card treated independently at the card level…?
No.
The partition table on an SD card is just a specific piece of data written to the first sector. It doesn't have any implications for the hardware, and most flash controllers on SD cards will ignore it.
There are essentially no bounds on what a buggy flash controller can do if the card is powered down while a write is in progress -- in a worst-case scenario, internal flash translation data could be corrupted, "bricking" the card. Even in less dramatic scenarios, data can potentially be corrupted in any location on the card.
In practical terms, keeping your read-only data isolated to a separate partition will usually be safer than keeping it on the same partition. But there are no guarantees, especially if you're using cheap cards.
|
H: Why does the anode attract anions if the anode is positively charged and anions are negatively charged?
Why does the anode attract anions if the anode is positively charged while anions are negatively charged? The same goes for the cathode: the cathode is negatively charged and attracts positively charged cations. Why is that?
AI: I did a search for you and found this page
http://electronics.stackexchange.com/questions/29992/why-is-the-anode-positive-if-anions-are-negative
Here is the answer by a member named clabacchio:
In the diode, and specifically in the so-called depletion region, there is diffusion of carriers (electrons and holes) from one region to the other. Since the Anode is positively doped, it will attract electrons from the cathode, and this will cause the formation of Anions in its side of the depletion region.
|
H: Driving a 5A current through a coil
To many electronics engineers this might seem a stupid question, but my boss gave me the task of finding a solution to drive approx. 5 A through a coil.
The coil's resistance (not considering inductance which would be ~15mH) is about 90 Ohms. The signal is sinusoidal with a frequency of about 3kHz. Since he made this sound pretty easy I am a little bit unsure of how to proceed.
Considering Ohm's law, driving 5 A through 90 Ohms would result in 450 V, which doesn't seem so trivial to me.
I have a feeling that I'm overlooking some fact but I cannot figure out which one.
A possible idea would be to use an OPA549, which gets at least roughly into that area (bandwidth, output capability), but again, I think that I'm overlooking something. All power amplifiers I can find are intended for audio applications and typically drive a 4 Ohm or 8 Ohm load.
I'd be really happy if you could tell me where I'm wrong or if this actually isn't such an easy task.
AI: The inductance of the coil at 15mH gives you a reactive impedance of about 280 ohms at 3kHz. That's a larger impedance than your resistance, so it will dominate the voltage you need across the coil: 5A * 280ohms = 1400 volts, before you add the extra voltage for the resistance.
Assuming the 5A is 5A rms, you will be dissipating \$I^2R\$ = 2250 watts in the resistance of the coil, no small amount, and way out of the league of anything like OPA549.
I would suggest that you reduce the VA you need to drive the coil by cancelling the series inductance with a series capacitor. To resonate 15mH at 3kHz needs a capacitor of 180nF. You will still need to supply the full 5A, but only supply the 450v needed to drive it through the resistance. The alternative parallel tuned circuit connection ideally needs a current drive (or an inductor) and must supply the full 1500v resonant voltage, but at a lower current. Obviously, the series connection is easier on two counts.
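A quick sanity check of those numbers in a few lines (the ~188 nF result is what the answer rounds to 180 nF):

    import math

    R, L, f, i_rms = 90.0, 15e-3, 3e3, 5.0       # values from the question

    x_l = 2 * math.pi * f * L                    # inductive reactance, ~283 ohm
    v_l = i_rms * x_l                            # ~1.4 kV across the inductance alone
    v_r = i_rms * R                              # 450 V across the resistance
    p_r = i_rms ** 2 * R                         # 2.25 kW dissipated in the winding
    c_res = 1 / ((2 * math.pi * f) ** 2 * L)     # ~188 nF to series-resonate at 3 kHz
    print(x_l, v_l, v_r, p_r, c_res)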
Finally, you need a voltage source. One option is to buy a 3kW audio amplifier, and use a transformer to match its output drive to the requirements of the tuned circuit. Obviously this transformer will need to handle 2.25kW, but at 3kHz you will be able to use a much smaller core than an equivalently rated mains transformer. At 3kHz it cannot be a conventional iron-cored mains transformer; you will need to use ferrite. At that low frequency, ferrite heating losses will be low, so you will be able to run up near the saturation field.
Another option that's just within reach is to use a 450v H bridge. Here, although the voltage supplied to the resonant circuit will be a square wave, the current flowing, and the voltage across the coil, will look very sine-like. If you can tolerate the waveform distortion, and engineer the high voltage H bridge, then this will be cheaper than a 3kW amplifier and a transformer.
|
H: Efficiency of obtaining differential signalling from single-ended manufactured transducers
This will be a conceptual question. I'm sometimes dealing with data acquisition from transducers like strain gauges, accelerometers, and similar sensors.
Most of these sensors have their own precision amplifiers. So what I mean by the transducer output is the amplified sensor signal.
These signals then go to the data-acquisition's input amplifier, which is simply a differential amplifier, etc.
But most of the time the transducer outputs are single ended.
Sometimes I encounter all sorts of noise, common-mode noise, etc.
Since differential signalling is more immune to noise, I thought about converting single-ended signalling to differential signalling as below (I want to implement Figure 2):
So here are my questions.
1-) Some transducers are manufactured and sold as differential signalling transducers. So they are ready to be wired to a differential amplifier.
But if one has a transducer and wants to use it for differential signalling as in my Figure 2, would that be a wrong treatment?
I'm asking because if I invert the signal myself to obtain differential signalling as in Figure 2, I might introduce noise on the inverted input through the inverting op-amp circuit,
and that noise will not be common to both signals.
So my first question is: is it common practice to convert single-ended signalling to differential signalling (for noise immunity) when the transducer was actually designed for single-ended signalling?
2-) If this method makes sense. Here is the typical inverting opAmp configuration:
I would choose R1 and R2 to be 10k. How does the input impedance of the data-acquisition's differential amplifier affect the choice of R1 and R2 here?
I want the inversion to be as precise as possible. Is there an op-amp category for that? An example would be great. I don't want to use an LM741, for instance.
AI: Since differential signalling is more immune to noise
Any signalling is susceptible to noise - it's how your receive amplifier handles those received signals that determines how much immunity can be acquired.
However, you can have a perfect differential amplifier attached to a single-ended source (via a properly balanced cable) and still have problems. If the output impedance of the hot wire is several tens of ohms compared to the impedance of the 0 volt transmit reference, you have what is known as "earth impedance imbalance". Note that I said imbalance.
If noise comes along and "hits" the cable, it will develop a larger signal on the hot output than that developed on the 0 volt reference signal. Here's what I mean for a good scenario: -
The signal source is "perfect" in that it presents the same low impedance for hot wire as 0 volt reference. Clearly, if any noise comes along then it hits both wires in the cable and, because both wires have equal impedance balance to ground, the noise received by the diff amp is equal and can be quite easily cancelled.
If the signal source has an output impedance that isn't zero then there could be a problem that can be overcome by this: -
Now, the impedances are largely the same - the added resistors are chosen to be identical and "swamp" the difference in impedance between hot wire and 0 volt reference. Earth impedance balance will be good and noise will be the same on both received wires (providing your input amplifier has good input earth impedance balance as well).
Adding an inverting stage can make things worse - keep the earth impedance balance at the sending end good and you minimize problems without adding an amplifier. Of course, in extreme circumstances you have to transmit a bigger signal and this can be done (carefully) with a balanced buffer. To keep "balance" (the same for both signals) use an inverting amplifier and a non-inverting amplifier - this largely ensures that the impedance at high frequencies will be equal.
You cannot achieve this using the "original" signal and a buffer amplifier because you have no way of controlling the impedances relative to each other. If it works it's just luck and that's not good engineering.
|
H: Why are many IR receivers in metal cages?
I'm guessing it's a Faraday cage around the receiver, but don't know why they might need one. Is there some sort of common interference around 38kHz (their operating frequency)?
It's the only component I think I've used that gets this special treatment. A larger cage may be around one in a VCR,
and a little baby cage sometimes appears around the standalone PC mount component:
Thanks for your insight!
AI: [ added 2_D resistor_grid methodology for exploring shielding topologies ]
You want that IR receiver to respond to photons, not to external electric fields. Yet the photodiode is a fine target for trash from fluorescent lights (200 volts in 10 microseconds) as the 4' tube has that restrike-the-arc action 120 times a second. [or 80,000 Hertz for some tubes]
Using the parallel-plate model of capacitance, $$C = \epsilon_0 \epsilon_r \frac{Area}{Distance}$$
with a diode area of 3 mm × 3 mm and a distance of 1 meter, the capacitance is
$$C \approx 9\times 10^{-12}\,\mathrm{F/m} \cdot 1 \cdot \frac{0.003 \cdot 0.003}{1} \approx 10^{-16}\,\mathrm{F}$$
What current from a fluorescent light, at a 20 million volts/second slew rate?
$$I = C\,\frac{dV}{dt} = 10^{-16}\,\mathrm{F} \cdot 2\times 10^{7}\,\mathrm{V/s} = 2\,\mathrm{nA}$$
That 2 nanoamps apparently is a big deal (the edge rate, 10 µs, is close to a half period of 38 kHz).
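The same coupling estimate as a tiny sketch, if you want to plug in other geometries or slew rates (the result differs slightly from the prose only because the text rounds the capacitance up to 1e-16 F):

    EPS0 = 8.854e-12                       # permittivity of free space, F/m

    def coupled_current(area_m2, dist_m, dv_dt):
        """Displacement current injected through a parallel-plate coupling capacitance."""
        c = EPS0 * area_m2 / dist_m
        return c * dv_dt

    # 3 mm x 3 mm photodiode, 1 m from a fluorescent tube slewing 200 V in 10 us
    i = coupled_current(3e-3 * 3e-3, 1.0, 200 / 10e-6)
    print(f"{i*1e9:.1f} nA")   # ~1.6 nA (rounding C up to 1e-16 F gives the ~2 nA quoted above)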
The metal cage protects by attenuating the Efield in an exponentially improving manner; thus the further the cage is in front of the photodiode, the more dramatic the Efield attenuation. Richard Feynman discusses this, in his 3-volume paperback on physics [I'll find a link, or at least a page #], in his lecture on Faraday cages and why the holes are acceptable IF the vulnerable circuits are spaced back several hole-diameters. [again, exponential improvement]
Are other Efield trash sources near? How about digitally noisy logic0 and logic1 for LED displays; 0.5 volts in 5 nanoseconds, or 10^8 volts/second(standard bouncing of "quiet" logic levels, as MCU program activity continues). How about a switching regulator, inside the TV; regulating off the ACrail, with 200 volts in 200 nanoseconds, or 1Billion volts/second, at 100 kHz rate.
At 1 billion volts/second, we have 100 nanoAmps aggressor currents. Of course, there should be no line-of-sight between a switchreg and the IR receiver, is there?
Line-of-sight does not matter. The Efields explore all possible paths, including up-and-back-down or around-corners.
simulate this circuit – Schematic created using CircuitLab
HINT TO BEHAVIOR: the Efields explore all possible paths.
================================================
From the master of clear-thinking himself, in his own words, I offer the explanation of Mr "Why did the space shuttle explode high over Cape Canaveral?", the gleeful Dr. Richard Feynman.
He provided a 2 year introduction to physics at Caltech, approximately 1962. His lectures were transcribed, very carefully to serve as reference material,
[it's worth getting these 3, and re-reading them every 5 years; also, the curious teenager will savor the real-world discussions in Feynman's style] and published in 3 paperback volumes as "The Feynman Lectures on Physics". From Volume II, focused on "mainly electromagnetism and matter", we turn to Chapter 7 "The Electric Field in Various Circumstances: Continued", and on pages 7-10 and 7-11, he presents "The Electrostatic Field of a Grid".
Feynman describes an infinite grid of infinitely long wires, with wire-to-wire spacing of 'a'. He starts with equations [introduced in Volume 1, Chapt 50 Harmonics] that approximate the field, with more and more terms optionally usable to achieve greater and greater accuracy. The variable 'n' tells us the order of the term. We can start with "n = 1".
Here is the summary equation, where 'a' is the spacing between grid wires:
$$F_n = A_n\, e^{-z/z_0} \quad\text{where}\quad z_0 = \frac{a}{2\pi n}$$
At distance z = a above the grid (thus 3 mm above a grid spaced at 3 mm), and using only the "n = 1" part of the solution, we have
$$F_1 = A_1\, e^{-2\pi \cdot 3\,\mathrm{mm}/3\,\mathrm{mm}} = A_1\, e^{-2\pi}$$
Since this F_1 is a factor of e^{-6.28} smaller than A_1, we have rapid attenuation of the external electric field.
With 2.718^2.3 = 10, 2.718^4.6 = 100, and 2.718^6.9 = 1000, e^{-6.28} is about 1/500 (1/533, from a calculator).
Our external field A_n has been reduced by a factor of about 500, to 0.2%, or 54 dB weaker, 3 mm inside a grid spaced at 3 mm. How does Feynman summarize his thinking?
"The method we have just developed can be used to explain why electrostatic shielding by means of a screen is often just as good as with a solid metal sheet. Except within a distance from the screen a few times the spacing of the screen wires, the fields inside a closed screen are zero. We see why copper screen---lighter and cheaper than copper sheet---is often used to shield sensitive electrical equipment from external disturbing fields." (end quote)
Should you seek a 24 bit embedded system, you need 24*6 = 144dB attenuation; at 54dB per unit_spacing, you need to be 3*wire-wire spacing, behind the grid. For a 32 bit system, that becomes 32*6 = 192 dB, or nearly 4*wire-wire spacing, behind the grid.
Caveat: this is electrostatics. Fast Efields cause transient currents in the grid wires. Your mileage will vary.
Notice we only used the "n = 1" part of the solution; can we ignore the additional parts of the harmonic/series solution? Yes. With "n = 2", we get the attenuation * attenuation, and "n = 3" yields atten * atten * atten.
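As a quick numeric check of that estimate, here is a small sketch (using the n = 1 term only, as above) that evaluates the attenuation in dB at a few distances behind the grid:

    import math

    def grid_attenuation_db(z_over_a):
        """First-harmonic (n = 1) attenuation of the external field;
        z_over_a = distance behind the grid in units of the wire spacing."""
        return 20 * math.log10(math.exp(2 * math.pi * z_over_a))

    for k in (1, 2, 3, 4):
        print(f"z = {k} * spacing: ~{grid_attenuation_db(k):5.1f} dB")
    # ~54.6 dB per spacing, so ~3 spacings for 144 dB (24-bit) and
    # ~4 spacings for 192 dB (32-bit), matching the estimate above.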
=================================================
EDIT To model more common mechanical structures, to determine the ultimate trash levels as an Efield couples into a circuit, we need to know (1) the impedance of the circuit at the aggressor frequency, and (2) the coupling from a 3_D trash aggressor to a 3_D signal chain node. For simplicity, we'll model this in 2_D, using the available grid_of_resistors
simulate this circuit
|
H: Input circuit for GPIO of a 3.3v tolerant microcontroller
The microcontroller I'm using detects low for 0-1.8 V and high from 1.9-3.3 V.
The levels I need to detect: if the input is from 0 V to 5 V, the microcontroller pin should stay low; from 5 V to 24 V, it should go high (above 1.8 V).
I cannot use a voltage divider here, as the high range has to work from 5 V up to 24 V while giving an output of 2 V-3.3 V.
I tried using a basic comparator circuit, but there the output becomes Vcc. I need the output to be 2 V-3.3 V.
Output State needs to be digital and not analog.
EG:
0-5v should be LOW and 5v-24v should be HIGH
I have a 3.3v power source for the microcontroller.
How could this be achieved?
Reference attached for comparator.
AI: Use an open-collector comparator like an LM339.
simulate this circuit – Schematic created using CircuitLab
|
H: Choosing an equivalent relay
I have to replace a faulty SPDT 895-1C-C relay, so now I'm looking to buy the product. The problem is that this model isn't the most popular in the world, so I thought I might use something equivalent instead.
I wrote a list with a few properties that I consider important when searching for the equivalent relay:
to have the same PCB terminal type (eg. 1C => SPDT)
to support the same resistive load (eg: NC=10A/14VDC, NO=20A/14VDC)
to support the same coil voltage
max carry current per unit of time
Other properties that I regard as optional are insulation type, shock resistance, or life expectancy.
Since the relay will be soldered on a PCB, its size should also be considered, for obvious reasons.
Btw: on the PCB there are 4 identical relays. I want to replace only one. This should also be taken into account.
What do you think that is most important when searching for such equivalents (specifically for relay equivalents) ?
AI: You're on the right track...
Relay Type (Form A, Form B, Form C)
Footprint
Coil voltage
Switching voltage Max (greater than or equal to 895-1C-C)
Switching current max (greater than or equal to 895-1C-C)
Once you have these parameters, a quick parameterized search on your favorite Vendor's website should give you any/all matching parts.
Keep in mind you really only need to match the switching voltage/current for your specific application in this case. For instance, if you are switching a motor, the motor switching ratings are important. Otherwise you can disregard them.
|
H: Is 'grounding a desktop PC via 1MΩ resistor' unsafe when working inside the case?
Activity: maintenance (installing/replacing hardware parts, like motherboard, CPU, DIMMs, graphics cards) inside an ATX case. Goal: ESD protection and user safety.
Original question: Is 'grounding a desktop PC via the power cable' safe when working inside the case? Answer: in theory, maybe. In practice: NO!
Edited question: Is 'grounding a desktop PC via low impedance (less than 1Ω) path' safe when working inside the case? (Considering the PSU is completely disconnected from live and neutral.) Answer: if a dissipative antistatic mat should be grounded via 1MΩ resistor, then a good conductive surface like the unpainted inside of an ATX case should too.
New question: Is 'grounding a desktop PC via 1MΩ resistor' unsafe when working inside the case? (Considering the PSU is completely disconnected from live and neutral.) Answer: probably not as PE (Protective Earth) is only needed when an appliance is plugged into a wall outlet.
People often advise 2 things to prevent ESD damage:
Antistatic wrist strap, connected to the case of your desktop PC.
The PSU should be turned off, but still plugged into the wall outlet.
I do understand a low impedance path is desirable in case of a residual-current. But when a PC is powered on, you should not be working inside the case anyway.
If the power cable is disconnected, then is a low impedance path to mains earth safe when you are building a PC or doing maintenance (installing/replacing hardware parts) inside the case? Can the case become a shock hazard in a worst case scenario?
After all, you're probably going to touch the unpainted conductive inner surface of the case at least a few times, while (un)screwing things. Should there be another resistor?
AI: NO! You really don't want to work inside the case with the power cable plugged in period. You are relying on a switch for safety, which is simply not enough isolation even if it is open.
PC should be unplugged and grounded through a standard anti-stat cable with alligator clip end.
Wrist-strap should go direct to ground, not through PC case.
|
H: NE555P Frequency Drop to 0 after a couple of minutes?
I'm trying to build a frequency measurement circuit using NE555P and Atmega328P. The circuit is the following :
I use this library https://www.pjrc.com/teensy/td_libs_FreqCount.html
The system always works fine at the beginning (3 to 4 minutes), and then the frequency values start to drop until they reach 0.
Does anyone know what causes this strange behaviour?
Thanks in advance.
AI: It is very difficult to see, but as far as I can tell it looks like you have the wrong pinout for the 555 timer. The pins on the right-hand side should be numbered 5-8 from the bottom up, not from the top down:
If you have it connected the way the picture shows, I expect that's why it's giving you problems.
|
H: A question on Vb Ic characteristics of an NPN transistor
I wanted to plot a Vb Ic curve of an NPN transistor in LTspice as follows:
Vbe is swept from 0 to 2V.
But all the tutorials I encounter show this curve as the following:
Why do these characteristics always omit the settled part (I guess the saturation part), or am I doing something wrong?
Edit:
Here when I set series source resistance to zero:
AI: This is the standard 2N2222 model in LTSpice:
* Copyright © 2000 Linear Technology Corporation. All rights reserved.
*
*
.model 2N2222 NPN(IS=1E-14 VAF=100
+ BF=200 IKF=0.3 XTB=1.5 BR=3
+ CJC=8E-12 CJE=25E-12 TR=100E-9 TF=400E-12
+ ITF=1 VTF=2 XTF=3 RB=10 RC=.3 RE=.2 Vceo=30 Icrating=800m mfg=Philips)
As you can see, there will still be about 0.5 Ohm (RC+RE) that will limit the current through the transistor.
Furthermore, there is a base resistor of 10 Ohm in that model. This will limit the base current to some 100-120 mA when your input source is 2 V. On top of that, the DC current gain will not stay constant at BF = 200, but will fall at high collector currents due to parameter IKF. You can expect BF to fall to 20 or so (something similar can be seen in the actual device datasheet).
Hence you'll have 20 x 120 mA = 2.4 A collector current @ 2 V input. However, a 2N2222 will blow well before reaching that current.
|
H: Is there a simple way to transform an analog signal into its first derivative?
I have an analog signal in the range 0.5 ~ 4.5 V. I'm less interested in the signal's absolute value than I am in its rate of change. Is there a relatively simple passive circuit that I can put between the signal and an ADC that will do this? Is it as simple as throwing in a capacitor? The input range of the ADC is 0 ~ 3 V.
AI: simulate this circuit – Schematic created using CircuitLab
Using an ideal op amp (the part number was just the default), the difference between the output and ref will be proportional to the rate of change multiplied by R and C, but inverted in sign. Multiplying voltage/time by capacitance yields current, and multiplying current by resistance yields voltage.
Depending upon the amplifier, it may be necessary to add some capacitance in parallel with R1, and depending upon the characteristics of what's driving the input it may be necessary to add some resistance in series with C1. Such changes would shift the behavior of the circuit away from reporting a pure derivative, but if the series resistance is small relative to r1 and the parallel capacitance is small relative to C1, they may help make things more stable.
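To make the scaling concrete, here is a minimal numeric sketch of that relationship. The component values are assumptions picked for illustration (not from the schematic above), and it only demonstrates the ideal Vout = Vref − R·C·dVin/dt behaviour of an inverting differentiator:

    # Minimal numeric check of the differentiator relationship
    # Vout - Vref = -R * C * dVin/dt   (ideal op amp; values assumed)
    R = 10e3          # feedback resistor, ohms (assumed)
    C = 100e-9        # input capacitor, farads (assumed)
    dvin_dt = 4.0     # input ramping at 4 V/s (e.g. 0.5 V -> 4.5 V in 1 s)
    v_ref = 1.5       # mid-scale reference for a 0-3 V ADC

    v_out = v_ref - R * C * dvin_dt
    print(f"Vout = {v_out:.4f} V")   # only 4 mV below Vref here; choose R*C to scale dV/dt into the ADC range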
|
H: What are the extra microstrip elements on this RF amplifier board for?
I am planning to build an amplifier with the HEMT CGH40010. In its datasheet (http://www.wolfspeed.com/downloads/dl/file/id/317/product/117/cgh40010.pdf), there is a picture of an evaluation kit on page 8, and an even better picture is on page 9. There, one can see several small 'copper islands' near the microstrip lines from the connectors to the FET; near C1, for example, is a tuning stub (the one which is bent towards the left-hand side), and at the end of this stub there are also a few of these strange copper islands.
I wonder what the purpose of these small copper pieces is. I guess it is either some kind of resonator, or it is for tuning. But in the second case, how does it work? Is the idea to 'scratch off' some of the copper with a scalpel to tune the stub, or what exactly is it?
(Source: wolfspeed.com)
(Source: wolfspeed.com)
AI: Those small islands are there for tuning the input and output impedances seen by the power amplifier. They're not to be removed! Another example (it's common practice, in fact):
You can solder several of them together in order to build a stub of the length you need, placed at the distance you need for impedance and/or noise matching.
There are also sections of different width at the output. You can tune the length of each section by soldering some of those islands together.
|
H: Do dual-supply switch ICs provide DC Offset internally?
I am using analog switch ICs for the muting of audio sources, and am looking for some clarity on what I think I understand about them.
Below is a simplified comparative schematic of how I believe these switches can be used to yield the same function.
Where I Need Some Clarification:
Does the GND pin on the dual-supply IC provide DC offset, eliminating the need to do so for each switch channel?
If so, do the dual-supply switches still require AC coupling, or can they be used like a mechanical switch?
Bonus question: What is the purpose of VL? I read about it somewhere on EE.SE, but can't locate the question again. If it's not related enough, I'll keep looking.
I'm looking to minimize component count, as well as understand the differences in single vs dual-supply switches so that I can select between them. I've looked at the internal schematics and block diagrams in data sheets, and I see the connections, but am not 100% clear on what they are doing and how to exploit them.
EDIT: Forgot the links to Data Sheets
CD4066
DG412
AI: You don't need to AC-couple your signal in or out of a dual-supply switch, and no DC-offset biasing is necessary - as long as your signal's voltage range falls within the switches' allowable range (determined by the supplies you give it).
Your first circuit with a CD4066 has AC-coupling and DC-offset because it's not a dual-supply switch.
You need to shift your input signal up above ground for it to pass through the switch, and then AC-couple it on the output to get it back where it was.
VL is the Logic power supply. Tie it to whatever supply is used to drive the switches' control inputs. It doesn't have any effect on your signal through the switch.
|
H: What control scheme do ESCs for RC planes use?
I have just built an electric skateboard based on a Hobbyking 50A ESC, controlling a 400Kv outrunner-type brushless motor. The skateboard requires a little nudge at the beginning to run regardless of the speed setpoint, otherwise it struggles and vibrates. This is not an issue of torque capability, as it can carry me up a slope without issue once it has non-zero speed.
I'm starting to think, in spite of discussions I find on the web saying these sorts of controllers use sensorless position feedback algorithms, that the stator rotating field is set to rotate at a given speed (i.e. fixed frequency) regardless of the rotor position - resulting in very little torque when driving very high load inertia at zero speed. I was expecting some kind of closed loop control of the stator field with respect to the rotor field (90°), wouldn't you?
AI: I was expecting some kind of closed loop control of the stator field with respect to the rotor field
Indeed, there is. The ESC senses the back-electromotive force (back-EMF) produced by the motor, and keeps track of the rotor position using that information.
The back-EMF is a term for the voltage induced into the stator windings due to them being exposed to a time-varying magnetic field, in turn created by the spinning permanent-magnet rotor. In essence, the back-EMF is the voltage that would be present at the terminals of an externally spun motor without anything being connected to it.
If the motor is at rest, the rotor isn't spinning, so there is no back-EMF to sense. The only thing that the ESC can do is to blindly spin the rotor by driving the motor phases open loop, until back-EMF builds up enough for the ESC to figure out the true position of the rotor. This works acceptably with just a propeller or the inertia of a lightweight transmission and chassis of an RC car as a load, not so much with a highly geared motor driving a human.
|
H: Relays: understanding "Rated carry current" and "Maximum contact current"
I'm trying to understand the limitations of connecting multiple relays, and how it relates to current.
Consider a relay with the following specs (rounded for easier math):
Rated current: 100mA
Rated voltage: 12V DC
Rated carry current: 3A
Maximum contact current: 3A
Let's also consider a power supply of 12V DC and 10A.
Is the following correct?
I could wire 100 (10A / 100mA) of these relays in parallel and be at the theoretical limit of my current.
I could wire 30 (3A / 100mA) of these relays in series and be at the theoretical limit of the relays themselves. At that point, the contacts would get too hot or the device would fail in some other way.
The power supply wouldn't really care how the relays are connected, as long as the total current draw is under its maximum. That is, "Rated current" * "Num Relays" <= "Power Supply Current"
I could mix and match parallel and series connections as long as stay under the limits in #1 and #2.
I'm new to hardware and I'm trying to not kill relays or burn down my house.
AI: You seem to be mixing up the ratings on the coils and on the switching contacts. Notice the ratings you quoted come from two different tables in the datasheet:
If your power supply is being used to drive the coils, then you need to consider the required coil current to switch the relay. As your item #1 says, a 10 A supply could potentially supply 100 100-mA coils simultaneously.
If your power supply is being used to drive the load (switched by the relay) then you need to worry about what current your load draws when powered by 12 V. If it does indeed draw 3A, then you could only power 3 such loads with a 10 A supply.
If one relay is being used to switch current to the coils of a bunch of other relays, then you get the limit of your item #2. The 3 A allowed through the contact of the first relay is enough to power 30 of the 100-mA coils that are its load.
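The arithmetic behind items #1 and #2, for reference (a trivial sketch using the rounded figures from the question):

    supply_ma, coil_ma, contact_ma = 10_000, 100, 3_000   # figures from the question, in mA
    print(supply_ma // coil_ma)     # 100 coils straight off the 10 A supply (item #1)
    print(contact_ma // coil_ma)    # 30 coils switched through one 3 A contact (item #2)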
Also, if you want your system to be reliable for a long time, you'd probably want to de-rate all of these specs rather than operate the power supply or the relays at their maximum current limits. (The coils will operate at roughly the spec'ed currents due to the coil resistance and Ohm's Law)
|
H: Replacement for unknown broken transistor - bipolar or MOSFET?
I am trying to repair the receiver board of a Syma X8 series quadcopter. One of the four DC motors is not driven properly, i.e. it does not turn on after the signal from the transmitter (the other motors do). The motor works if it's connected to the other outputs.
Using an oscilloscope I found out that all 4 big MOSFETs at the right part of the board are OK, but one of the four SMD transistors marked Y1 (circled in green), connected to the gate of one MOSFET, is dead.
I have no previous experience with SMD parts, but I found that the Y1 marking is usually used for Zener diodes or transistors. I think this could be the latter, because I see a square signal at the "input" pin 1 (base/gate) of the other 3 transistors, which is amplified at their "output" pin 2.
Using the diode test on the multimeter I measure a 797 mV drop from pin 3 to 1 and 795 mV from 1 to 2. So I think this could be an NPN or a MOSFET.
I tried an NPN like the BC547, but it heated up within a few seconds of starting and the motor still did not work. When I tried a PNP like the BC556, it did not heat up, but the motor still did not turn on.
How can I find the right replacement for the broken transistor?
What should I measure to find out the correct type?
AI: Your diode test results and operating waveforms indicate that it is an NPN bipolar transistor (if it was a FET the Gate drive voltage would be higher, and the Gate would measure open circuit to both Source and Drain).
The circuit probably looks something like this:-
simulate this circuit – Schematic created using CircuitLab
The transistor should not run hot, because both input and output currents are limited by R1 and R2. So there must be a short somewhere, either in a component or between tracks on the PCB. The most likely culprit is FET1, which may have burned out and is now a short circuit from Drain to Gate and open circuit from Drain to Source. If this happens then Q1 will try to drive the motor directly, but won't be able to because it can't supply enough current.
Remove the FET from the board and measure resistance between Gate and Drain. It should be infinite. If it reads low resistance then the FET is shorted. You should also test the Schottky flyback diode (D1). If this shorts out the FET usually follows.
|
H: Oscillator circuit analysis
I am working on an oscillator circuit. Here's the diagram [See below for output diagram]:
Since it's a bistable circuit I will have to assume two separate conditions that each transistor might be in. Here's my analysis.
My solution:
First, I assume Q1 is saturated and Q2 is cut off. Therefore I will go and simplify the circuit and prove whether my guess was true. Then I will consider the other case and work it out. I don't want to go through the equations; I'll just let you know what I got on paper:
Case I
..........
Part State
Q1 Saturated
Q2 Cut-off
..........
Results:
I(R1)=1.79mA
V(B1)=Vout=2.39V
V(C1)=2V
V(E1)=1.1V
As for Q2: since C1 has just started to charge up, Q2 is certainly in cut-off. As you can see, my initial assumption looks correct and Q1 is in saturation. Now let's consider the other situation, where I assume they change states:
Case II
..........
Part State
Q1 Cut-off
Q2 Saturated
..........
Results:
V(C2)=Vout=0.2V
V(C1)=3V
If you look, my analysis sounds more or less reasonable. The question I have is: when C1 charges up to 3 V, what happens to it so that it suddenly discharges to 2 V?
Diagrams
A.The capacitor voltage: The green shows the upper end.
B.The output voltage.
AI: This is a classic example of a circuit where re-drawing it properly helps to understand it better (in my mind, anyway.) Here's how I'd draft the schematic, were it me doing it:
simulate this circuit – Schematic created using CircuitLab
It's the exact same circuit. But here you can very easily see a few things now, which were perhaps more obscured before. Any signal at the input of \$Q_2\$ would be inverted (\$180^\circ\$) at its collector (which is also the input to \$Q_1\$). The collector of \$Q_1\$ inverts that one more time. So we are back to \$0^\circ=360^\circ\$ at the collector of \$Q_1\$. However, the emitter of \$Q_1\$ is \$180^\circ\$ and this is fed back as negative feedback to the input of \$Q_2\$ via \$R_2\$. So, without \$C_1\$'s feedback also present, the circuit would be stable with the collector of \$Q_2\$ approximately two \$V_{BE}\$'s above ground and about one \$V_{BE}\$ across \$R_3\$. I'm going to guess this would be a quiescent current of about \$680\:\mu\textrm{A}\$ through \$R_1\$ and \$R_3\$ and therefore about \$820\:\mu\textrm{A}\$ through \$R_4\$ and into \$Q_2\$'s collector.
[Using \$R_1\$ as the source impedance for computation purposes, I gather the voltage gain is about 3.5, or a little more. More than 1, at least. It can oscillate.]
That's the set up. It's not complicated. It's just a couple of NPN BJTs, some modest negative feedback, and biasing by \$R_4\$ and through \$Q_1\$'s \$V_{BE}\$ junction, through \$R_2\$, and through \$Q_2\$'s \$V_{BE}\$ junction. I've placed my guesses about the quiescent voltages in blue on the schematic, assuming that \$C_1\$ is not present, and I've placed little red arrows to indicate my small-signal directions. (In the following discussion after the positive feedback is added via \$C_1\$ you should completely ignore the blue quiescent values, though. They will no longer apply.)
Now \$C_1\$ is added to provide positive feedback to the input to \$Q_2\$ (and in the start by also pulling hard on \$Q_2\$ via \$R_1\$.) Clearly, a small change at the input of \$Q_2\$ will be replicated (with substantial gain) at the collector of \$Q_1\$, in phase. So \$C_1\$ represents positive feedback and way more than enough to overwhelm (for a while) the modest negative feedback that exists with \$R_2\$. In fact, the key here is that \$R_2\$'s negative feedback is constant while \$C_1\$'s positive feedback is time-dependent (stronger earlier, weaker later.)
To start, the voltage across \$C_1\$ is zero and since \$Q_2\$'s base-emitter
junction isn't likely to move much away from ground, most of the ground-related voltage change will occur at the collector of \$Q_1\$ as \$C_1\$ charges through \$R_1\$. \$Q_2\$ will be very hard on and this will mean that \$Q_1\$'s emitter will barely be above ground and it will be slightly pulling away current through \$R_2\$, but nothing like the current flooding in through \$R_1\$ via \$C_1\$. (\$Q_1\$'s collector current is close to zero during this time.)
But as \$C_1\$ charges up, \$Q_1\$'s collector rises towards the rail and the current in \$R_1\$ declines rapidly. Very soon, the remaining currents continuing to charge \$C_1\$, small as they become, are enough to start pushing \$Q_2\$'s base downward enough that it can no longer support its collector current through \$R_4\$ and the base of \$Q_1\$ starts to rise. As that happens, \$Q_1\$ begins to turn on and this pulls its collector downward.
This downward direction pulls that end of \$C_1\$ downward, too, and this just pushes downward that much more on \$Q_2\$'s base, turning \$Q_2\$ off even more (that's the positive feedback.) Again, this allows \$Q_2\$'s collector to rise still more, causing \$Q_1\$'s collector to fall still more.... and so on, until \$Q_2\$'s base is literally driven slightly below ground (\$C_1\$ will have a little more than \$2\:\textrm{V}\$ across it and there is no possible way that \$Q_1\$'s collector can go below about \$2\:\textrm{V}\$.)
Now the \$Q_1\$ collector side of \$C_1\$ is at \$2\:\textrm{V}\$ and the other side is slightly below ground. \$R_1\$'s current (almost \$2\:\textrm{mA}\$) is almost entirely going through \$Q_1\$'s collector (and not into \$C_1\$.) But \$Q_1\$'s emitter is now close to \$2\:\textrm{V}\$ and is supplying current through \$R_2\$ to discharge \$C_1\$. As the collector and emitter of \$Q_1\$ are close to each other, this should nearly discharge \$C_1\$ if allowed to continue long enough.
That won't happen, though. As \$C_1\$ discharges, its voltage diminishes and this begins to pull upward on the base of \$Q_2\$. When the capacitor \$C_1\$ gets back down to about \$1.4\:\textrm{V}\$ (remember, \$Q_1\$'s collector is still pulled down to about \$2\:\textrm{V}\$), the base of \$Q_2\$ has returned to about \$600\:\textrm{mV}\$ and this means that \$Q_2\$ starts to turn on. There's still current arriving through \$R_2\$ (the emitter of \$Q_1\$ hasn't yet fallen much below \$2\:\textrm{V}\$) so \$C_1\$ continues to discharge more and this pulls \$Q_2\$ towards being on, yanking down on the base of \$Q_1\$ and reducing its emitter voltage as well as letting its collector voltage head back upwards (pulling \$C_1\$ up, again, adding still more to the forward base voltage of \$Q_2\$.)
I'd guess that in equilibrium, the smallest voltage across \$C_1\$ should be about \$1.4\:\textrm{V}\$ and that its peak voltage should be about \$2.3\:\textrm{V}\$.
In very round numbers, it charges via \$R_1\$, which is about \$I_1=\frac{3\:\textrm{V}-\frac{2.9\:\textrm{V}+2.0\:\textrm{V}}{2}}{560}\approx 1\:\textrm{mA}\$ (less the average bleed via \$R_2\$ which is about \$I_2=\frac{600\:\textrm{mV}}{3.5\:\textrm{k}\Omega}\approx 200\:\mu\textrm{A}\$.) And discharges via \$R_2\$, which is about \$I_3=\frac{2\:\textrm{V}-\frac{2\:\textrm{V}-2.3\:\textrm{V}+600\:\textrm{mV}}{2}}{3.5\:\textrm{k}\Omega}\approx 500\:\mu\textrm{A}\$. So I'd guess the timing would be about:
$$\begin{align*}
\Delta t &= C_1\cdot\Delta V\cdot\left(\frac{1}{I_1-I_2}+\frac{1}{I_3}\right)\\\\&=330\:\mu\textrm{F}\cdot 900\:\textrm{mV}\cdot\left(\frac{1}{1\:\textrm{mA}-200\:\mu\textrm{A}}+\frac{1}{500\:\mu\textrm{A}}\right)\\\\
&\approx 970\:\textrm{ms}
\end{align*}$$
The above discounts the fact that charging and discharging currents aren't linear, but actually follow a curve. But it should get somewhere into the right ballpark, anyway.
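For reference, the same ballpark arithmetic in a few lines (the currents are the rough estimates above):

    # Reproducing the ballpark period estimate from the currents above
    C1 = 330e-6                             # F
    dV = 0.9                                # V, swing on C1 (~1.4 V to ~2.3 V)
    I1, I2, I3 = 1.0e-3, 0.2e-3, 0.5e-3     # charge, bleed and discharge current estimates

    dt = C1 * dV * (1 / (I1 - I2) + 1 / I3)
    print(f"period ~ {dt*1e3:.0f} ms")      # ~0.97 s, matching the estimate above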
You can also play with the duty cycle. Perhaps the easiest way is to adjust \$R_2\$. Making it smaller will reduce the discharge time, leaving the charging time alone. Making it larger will do the opposite.
|
H: Using register retiming to pipeline a module
From my reading I understand that modern synthesis tools are able to perform register retiming where registers are moved between combinational logic to meet timing constraints.
So for example in your HDL you would describe comb -> reg1 -> reg2 -> reg3 and the tools would move those registers to get comb1 -> reg1 -> comb2 -> reg2 -> comb3 -> reg3.
When would one manually pipeline logic rather than depending on register retiming?
AI: Well there are at least two scenarios where I would opt for manual retiming:
Where I know there is a specific optimal geometry, for example a logic tree, and I don't want the synthesizer to do this alone, since it could make a suboptimal selection.
Synthesis running times can be long. I may prefer to make these decisions myself instead of letting the tool make them, in which case I may have to check what it did and possibly rerun synthesis.
|
H: Choosing the right motor for a vibration platform
The following photo depicts a vibration testbed that uses a motor, springs and some wood. The purpose of this effort is to make a test bed vibrate the attached kinetic energy harvester at the movement frequency controlled by the motor.
The vibration plane is connected to a stable surface by means of two springs. On the vibrating plane is a motor that has a weight attached to it. This weight is fixed towards one side of the motor. This is similar to the approach used in mobile phone vibrators, except in this case I want the motor to drive a larger weight.
As the motor drives (rotates) the weight, the vibration surface will start moving up and down at a particular frequency. This is what I want. In order to limit other directions of motion, I will make sure the moving platform has 1 degree of freedom, i.e. it can only move up and down.
By doing this, the attached kinetic harvester will also move up and down at the same frequency as the moving surface. This is the whole purpose of this setup; a testbed for my research.
I want to have a frequency ranging approximately from 0.5Hz - 4Hz. My mass may range from 100g to about 600g.
In that case, can someone please propose a type of motor that I can use for this purpose? I think it should be a DC brushless motor, but I am not sure. Could someone please be kind enough to tell me what properties my motor must have? Any additional information will be highly useful. Thank you for your time.
AI: How much weight you want to rotate is in itself irrelevant. What you need to know is three things:
what is the vibrating mass, vibration frequency and how fast you need to change it
what amplitude of vibrations you need
how much energy you expect your "harvester" to absorb
Vibration amplitude will give you an estimation of torque. For example, a vibration plane of 100 kg vibrating at ±1 cm amplitude will produce a force \$F=mr(\frac{2\pi}T)^2≈40N\$. That force has to be countered by your rotating weight, e.g. 1 kg weight attached to a 1 m arm. That weight would have the moment of inertia \$I=mr^2\$, and you will need a torque \$\tau=\frac{2\pi I}{T*T_x}≈6.28 N*m\$ to get it rotating at \$\frac1T\$ frequency in \$T_x\$ time (assuming \$T=T_x=1\space s\$).
The power of your motor will need to allow that torque to be applied up to the operating speed (60 RPM for a 1 Hz vibration), plus cover whatever power you expect your harvester to absorb: \$P=\frac{\tau*RPM}{9549}+P_{harv}\space(kW)\$, which is about 0.04 kW + \$P_{harv}\$ in my example.
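A minimal sketch reproducing those estimates (spin-up torque only; friction and the harvester's absorbed power are ignored):

    import math

    # Estimates for the example above (1 Hz vibration, 1 s spin-up time)
    m_plat, amp, T, Tx = 100.0, 0.01, 1.0, 1.0    # platform mass (kg), amplitude (m), period (s), spin-up time (s)
    m_w, r_w = 1.0, 1.0                           # rotating weight (kg) on a 1 m arm

    F = m_plat * amp * (2 * math.pi / T) ** 2     # ~40 N force to shake the platform
    I = m_w * r_w ** 2                            # 1 kg*m^2 moment of inertia
    tau = 2 * math.pi * I / (T * Tx)              # ~6.3 N*m to spin it up in Tx seconds
    rpm = 60.0 / T                                # 1 Hz -> 60 RPM
    P = tau * rpm / 9.549                         # ~40 W mechanical, before harvester load and losses
    print(F, tau, P)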
Hope that helps somehow.
|
H: What kind of IC is "ATMLU324" "16B 1" "Z8J0534B"?
I recently bought what I thought would be an ATtiny85. After a few unsuccessful tests, I discovered (guess what?) that a true ATtiny85 has "ATtiny85" printed on it, like this one:
Mine is this one:
The printing says:
ATMLU324
16B 1
Z8J0534B
I searched for a datasheet, but all I found was datasheet sites searching for it too. What is this chip?
AI: It looks for all the world like an Atmel AT24C16B, a 16kbit two-wire serial EEPROM chip.
In particular, page 14 of the datasheet has this diagram explaining the markings on the DIP version of the chip:
A T M L U  Y W W     <- first line: ATMLU, then Y = seal year and WW = seal week
1 6 B  (lot)          <- second line: device code (24C16B) and lot number
*                     <- pin 1 indicator (dot)

Y (seal year): 6 = 2006, 7 = 2007, 8 = 2008, 9 = 2009, 0 = 2010, 1 = 2011, 2 = 2012, 3 = 2013
WW (seal week): 02 = week 2, 04 = week 4, ..., 50 = week 50, 52 = week 52

Read that way, the "324" on your chip would decode as seal year 3 (2013), week 24, and "16B 1" as the 24C16B device code with lot number 1.
Maybe you were simply sent the wrong part by mistake. Or maybe the seller tried to scam you with a cheaper (or counterfeit) part.
|
H: What can you put in a cable to lower a voltage from 5V to 3.3V?
When charging an iPod shuffle using this cable, https://www.google.com/search?q=ipod+shuffle+charger&client=ms-android-verizon&prmd=sivn&source=lnms&tbm=isch&sa=X&ved=0ahUKEwi9odvBqoHTAhVlxlQKHTpTBcIQ_AUICCgC&biw=360&bih=559# , one typically plugs the cable into a 5 Volt AC/DC converter (like a cell phone charger). Alternatively one could use the 5V output of a USB port on a computer.
However, when measuring the voltage at the end of this cable, I get 3.3V. Why is that?
AI: There is a 3.3V voltage regulator circuit board built into the plug on the cable.
|
H: How to generate complementary PWM signal using an IC (integrated circuit)?
I am outputting a PWM signal, but the problem is that I can't generate a complementary PWM signal with dead time because this timer channel doesn't support it. So I want to use/buy an integrated circuit (IC) that generates a complementary PWM signal if I feed the PWM signal into it (with dead time). Is there such an IC available on the market?
AI: A typical way of producing what is required (dead time & complementary level) is via an R-C network (to produce a delayed waveform) and then feeding the two waveforms into suitable logic gates.
For completeness, here is a similar reply by Andy aka.
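Below is a rough, discrete-time sketch of that idea (all values are illustrative, not from any particular part): the PWM input is RC-delayed, and gating the original and delayed waveforms gives each output a delayed turn-on but an immediate turn-off, so the high-side and low-side drives never overlap.

    import numpy as np

    # Discrete-time sketch of the RC + logic-gate dead-time scheme described above.
    # Assumed values: 20 kHz PWM, RC chosen for ~1 us of dead time.
    dt = 10e-9
    t = np.arange(0, 100e-6, dt)
    pwm = ((t % 50e-6) < 25e-6).astype(float)      # 20 kHz, 50 % duty input

    rc = 1.4e-6                                    # RC time constant (assumed)
    delayed = np.zeros_like(pwm)
    for i in range(1, len(t)):                     # first-order RC response
        delayed[i] = delayed[i-1] + (pwm[i] - delayed[i-1]) * dt / rc

    d = delayed > 0.5                              # gate threshold at mid-supply
    hi = (pwm > 0.5) & d                           # high-side: turns on only after the RC delay
    lo = (pwm < 0.5) & ~d                          # low-side: turns on only after the RC delay
    print("overlap between high and low drive:", bool(np.any(hi & lo)))   # False -> dead time present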
|
H: Can I rectify a TRIAC dimmer's AC output to DC?
I have been playing with a TRIAC dimmer for the past few days; by firing the TRIAC at different phase angles, the overall output can be lowered.
But I was wondering if I can use a diode rectifier and a filter capacitor to get DC out of the dimmer. I know it will be a non-isolated topology.
Transformers are always an option, but I am wondering if I can take the TRIAC dimmer output and rectify it to DC.
AI: You are free to rectify the output of the TRIAC. You should of course ensure that whatever circuits the TRIAC is supplying can accept the dc output. You don't actually say what it is you intend to drive.
To give one real-world example, rectifying TRIAC output has often been done with universal motor controllers. This can reduce the size and weight of the motor, and decrease wear. Torque ripple can be reduced, as well as low frequency noise. This is due to the reduced rms and peak to peak current.
|
H: Second Reflow with BGA
For a DIY project, getting BGA reflow soldering done commercially is really expensive. I approached a university which allowed me to use the help of their staff, but they said they would only do the BGA part.
I thought of getting the other parts done through commercial means, since they don't cost much, and then getting the BGA done. Would this work?
I checked the datasheets and some components have second-reflow capability, whereas it's not specified for others. Can I use something like epoxy glue to hold the others in place while the BGA is being reflowed?
It's a pretty tightly packed board, with about 300 components in 7 cm x 4 cm, so I'm pretty concerned about them shifting during the second reflow.
AI: It would probably be better to use a hot air rework station to replace the BGA. You still have to be careful with components with this method, but it can be done. I would find a service to do this for you.
If not, then even if the other components move around (which they shouldn't move too much because the solder has surface tension that keeps the part in place), you should be able to fix any problems with a hot air rework gun (and flux) or two soldering irons for passives.
No matter what you do it's going to be expensive; BGAs are hard to deal with and the process has to be done right, because you can't see the connections unless you have an X-ray machine.
|
H: Bypass capacitor vs low-pass filter
I'm trying to get my head around two intermingling factors relating primarily to an RC low-pass filter and bypass capacitors, which provide a low-impedance path for high-frequency AC signals, essentially filtering them out.
I was initially confused by the need for a resistor within an RC filter. But the following picture explains how the input port matches the output port. (Actually taken from another Stack Exchange question.)
But then looking at bypass capacitors:
I understand these can provide voltage if it dips, but I have not found a reasonable explanation as to why an RC filter requires the resistor while a bypass capacitor can take out high-frequency signals without one. Isn't that essentially the same thing, low-pass filtering?
AI: All filters are voltage dividers, with Zin and Zshunt. Sometimes the Zin is hidden, or just part of the wiring. In an RC LowPass, we have the R*C timeconstant; invert that to find radians/second at the 0.707 halfpower point (also the -3dB, 45 degree phaseshift point); divide that by 2*pi and you have frequency in Hertz.
Thus the RC filter gives predictable corner frequency; 1MegOhm and 1uF is 1second tau, 1 radian/second frequency, and 0.16 cycle-per-second (Hertz).
Another valuable feature of RC filters is the built-in dampening. Our circuits always have inductance; my default rule-of-thumb is 1nanoHenry/millimeter for wire or skinny PCB trace over air. If wire scotch-taped atop a metal sheet, or PCB trace over GND/VDD plane, I use 100 picoHenry/millimeter.
Our capacitors always have some inductance; any non-zero length of circuit has some inductance; hence every capacitor has the L+C to ring; we should think about dampening that ringing, with resistive losses R = sqrt(L/C).
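As a quick numeric companion to the two formulas above (the 1 MegOhm / 1 uF corner frequency, and R = sqrt(L/C) damping, using the roughly 30 nH of trace/lead inductance and the 0.1 uF capacitor that appear in the example further below):

    import math

    # Corner frequency of the 1 MegOhm / 1 uF RC example: tau = 1 s -> ~0.16 Hz
    R, C = 1e6, 1e-6
    print(1 / (2 * math.pi * R * C))                       # ~0.159 Hz

    # Damping for a bypass network, R ~ sqrt(L/C)
    L_par, C_byp = 30e-9, 100e-9
    print(1 / (2 * math.pi * math.sqrt(L_par * C_byp)))    # resonance near 2.9 MHz (the "3 MHz peak" below)
    print(math.sqrt(L_par / C_byp))                        # ~0.55 ohm damping resistance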
We often place two capacitors in parallel for VDD bypassing; we have just formed a PI resonator, with peaks and nulls of filtering. Examine this simulation, with 10 milliVolts (typical ripple levels) into a CLC PI filter; C1 = 100uF; L is PCB inductance of 10nH; C2 = 0.1uF; the source includes 100nH (4" wiring) and 1milliOhm. The rightmost 3 stages show the ideal C_L_C, and are de-selected from the simulation; right after the source are the CLC used in the simulation, checked to be active. Note the horrific peaks and nulls in the bottom plot of frequency response.
How can we have such peaks and nulls? Because all resistors (in source, in each cap of value 100uF and 0.1uF, and in the top middle PCB inductance) are only 0.001 Ohm.
What does the peaking do? We have 23dB peaking at 50KHz, or 140 milliVolts of ringing. We have 26dB peaking at 3MHz, or 200 milliVolts of ringing. Unfortunately, 3MHz is near SwitchReg clocking and ringing frequencies.
Lets increase the resistors (in 10mV voltage source; in cap#1 100uF, in top middle PCB inductance, in cap#2) to 10 milliohm. Here is our BODE:
We STILL have no filtering at 3MHz. What to do? We need to dampen that 3MHz peak. Lets increase the top middle Resistance from 0.010 to 0.100 ohms;
Some attenuation (-10dB, or 0.316X). Can we improve this? Lets compute!
Using sqrt(L/C) as sqrt( (10+10+10nH) / 100nF) = sqrt(30/100) = sqrt(0.3) = 0.55 ohm, we increase top middle R to 0.55 Ohm:
What is the final circuit?
simulate this circuit – Schematic created using CircuitLab
But there is more. Lets use many 0.1UF, and place 0.55 ohm in series with some.
Thus the final final circuit has NO series R in VDD line, preserving VDD headroom, but does dampen.
simulate this circuit
Notice we've done nothing to improve the low frequency filtering: 60Hz, 120hz.
(1) Large R and C are needed, using up the headroom of VDD and making OpAmp VDD vary as the load current varies. (2) LDOs help with 60/120 but add their own ThermalNoise (some inject a millivolt of random noise between DC and 100KHz; others inject just a microvolt but have high Iddq; LDOs also fail at high frequencies because the PSRR(1MHz) is near 0dB just like many OpAmps. (3) Use inductors, large inductors, in the VDD path. Instead of the 100nanoHenry, use 100milliHenry.
Another way to provide dampening brings ferrite beads into the schematic; these require low or moderate current levels to remain effective; at 3 MHz or 30 MHz, consider a bead. Examine the loss level (the "resistance") and test it with your capacitor(s) of choice. Watch out for temperature effects. (This is why I suggest resistors for dampening.)
Summary: for high-precision and high-SNR measurements, you must also design the VDD networks. For high-gain, with multiple OpAmps sharing a supply, you must now design a VDD Tree, to avoid feedback and oscillation or delayed settling.
|
H: Spark when connecting battery module to DC power supply
I am using a 60 V DC power supply to charge a 48 V battery module. I set the current and voltage limits first, turn off the DC power supply, and connect the positive lead to the battery's positive terminal. When I try to connect the negative side there is a spark. I wonder why this happens and how I can get rid of it.
AI: The spark is caused by your battery charging the output capacitor of your power supply, which is at zero volts when you start and has very low impedance, so the current drawn from the battery is huge the moment you make contact, i.e. a big spark.
The easiest way to get rid of it is to place a suitable diode in series with the power supply and increase the voltage by 0.7 V to compensate, or 0.4 V if you go with Schottky.
Source: designed battery chargers for a living for six years.
|
H: BLDC PWM frequency
I am driving a BLDC motor using a 6-step commutation table at a 40 kHz PWM frequency, and I am at a loss choosing the optimal frequency. I understand that the max frequency depends on the motor's R/L ratio and the MOSFETs' dead time.
According to my motor's datasheet:
the phase to phase terminal resistance is 0.0686Ω.
the terminal inductance is 0.0811mH.
it has 7 pole pairs.
operating voltage is 48V.
nominal current is 3.59A.
71.1 A max power motor current (48.5 A DC link current at max power, 198 A stall current).
5300rpm no load speed.
What's the relationship between these numbers and the max frequency?
I am even more confused regarding the MOSFETs' dead time. Is it the sum of the turn on delay time and the rise time? I am presuming that the PWM period can't be smaller than it, is this correct?
AI: L/R determines the minimum PWM frequency. To avoid excessive power loss the L/R time constant should be much longer than the PWM period, so that most of the voltage is dropped across the inductance rather than the resistance. It also smooths out the current flow, which lowers peak current and reduces losses in other parts of the circuit.
Taking your motor as an example, the equivalent circuit looks like this:-
simulate this circuit – Schematic created using CircuitLab
At 50% PWM the motor receives an average voltage of 24V. As it spins it generates a voltage which is slightly less than 24V due to voltage drop across its internal resistance. When SW2 is switched on current builds up in the inductance, and when it switches off the current decays as the magnetic field collapses. The L/R time constant is 81.1uH / 0.0686Ω = 1.18ms. At 40KHz the PWM period is 25us, much smaller than the L/R time constant.
The resulting motor current waveform looks like this:
Average motor current is 3.33A, while rms current is a bit higher at 3.5A. This causes about 10% more loss in the winding resistance than a smooth DC current, which is probably acceptable.
However if the PWM frequency was lowered to 1KHz, current would climb to 120A during PWM 'on' time and drop to zero during 'off' time. To get the average current back down to 3.33A you would have to lower the PWM ratio to ~11%, and then the rms current would be 8.4A and the waveform would be a series of spikes peaking at 32A! This would greatly reduce efficiency as well as making the speed control very non-linear.
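A rough sanity check of those numbers (a sketch only: it models one PWM on-time as an exponential current rise from zero through the R-L branch with 24 V across it, which is the simplified picture used above):

    import math

    V_drive = 48.0 - 24.0     # volts across the winding during 'on' time (50% PWM case above)
    R = 0.0686                # phase-to-phase resistance, ohms
    L = 81.1e-6               # phase-to-phase inductance, henries
    tau = L / R               # ~1.18 ms

    def on_time_peak(f_pwm, duty=0.5):
        """Current reached by the end of one PWM on-time, starting from zero (rough ripple bound)."""
        t_on = duty / f_pwm
        return (V_drive / R) * (1 - math.exp(-t_on / tau))

    print(f"tau = {tau * 1e3:.2f} ms")
    print(f"40 kHz: ~{on_time_peak(40e3):.1f} A rise per on-time")   # a few amps of ripple
    print(f" 1 kHz: ~{on_time_peak(1e3):.0f} A rise per on-time")    # ~120 A, as stated above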
Maximum PWM frequency is generally limited by MOSFET switching losses. While switching the FETs have both voltage and current across them, so they dissipate high power. These spikes only occur for a short time, but at higher switching frequency there are more of them so average power dissipation increases. The dissipation limit is usually reached well before switching time encroaches on PWM period.
Dead time is more about turn off time than turn on time. If one FET has not turned off by the time the other one turns on then current will 'shoot through' both FETs causing very high dissipation. The FET will usually start turning on well before Gate voltage reaches maximum, and not completely turn off until below the threshold voltage. Therefore it tends to take longer to turn off than to turn on, which is the opposite of what you want. The amount of dead time required depends on how quickly the driver can transition the Gate voltage (which depends on driver strength, Gate capacitance, Gate threshold voltage and power supply voltage) as well as intrinsic FET turn on and off times.
However, dead time is really only required for 'active freewheeling' where the lower and upper FETs are switched on alternately. If PWM is only applied to the lower (or upper) FET then you effectively have 100% dead-time. During 'off' time the upper FET's body diode takes over the job of recirculating current through the motor. This is slightly less efficient because the diode drops ~0.7V whereas a turned on FET drops 0.1V or less. In a high voltage system this slight voltage loss is hardly significant, but it does cause the upper FETs to heat up a little more.
|
H: A mechanical switch current/voltage rating
I have a mechanical switch, and it has been rated 5A at 125V, and 3A at 230V.
I think it is not because of power, because 5 × 125 is not equal to 3 × 230. Why does the current rating change depending on voltage?
Both ratings are for AC
AI: When a mechanical switch is turned off the contacts separate, but the current tries to keep going (as an arc) until the gap is too large - then the current stops. The higher the current you're switching and the higher the circuit voltage, both will tend to keep the arc going for longer. So for a particular switch you can balance voltage and current ( as in your example ) - lower voltage lets you use higher current and vice versa.
If we assume the above is AC then you may also notice that the DC rated current for a particular switch is lower than the AC rated current - this is also due to how switching arcs develop. In AC the current drops through zero 100 or 120 times a second which helps the current to turn off, but DC doesn't have this 'help' and so the current rating for DC has to be lower.
|
H: How does this circuit control motor speed?
I was curious to see how the foot pedal of a sewing machine drives the universal (?) motor inside the appliance. I had expected a simple variable resistor like the ones in vintage machines but that's not what I found. My experience is limited to digital circuits, so I would appreciate any explanations you can offer.
From a mechanical point of view, the R4 variable resistor is the main control input, and the switch S1 is linked so that it opens as R4 is pushed to one of its endpoints thereby breaking the circuit and resetting the triac.
There is one unknown component on the PCB- it looks like the other 0.5W resistors, but only has a single black band going down the middle.
My (naive) assumption would be that C3 or C1-R1 provide a low impedance path for the AC, so how can the rest of the circuit affect the motor speed? Also, is L1 part of an LC circuit or does it serve a different purpose? The appliance is rated at 0.9A @ 110V, how can they get away with only 0.5W resistors in that case?
AI: It's a dimmer circuit, electronics-tutorials has a really great article on it: http://www.electronics-tutorials.ws/power/diac.html
"If we wish to control the mean value of the lamp current, rather than just switch it “ON” or “OFF”, we could apply a short pulse of gate current at a pre-set trigger point to allow conduction of the SCR to occur over part of the half-cycle only. Then the mean value of the lamp current would be varied by changing the delay time, T between the start of the cycle and the trigger point. This method is known commonly as “phase control”."
Looks like L1 is just there as a low-pass filter. It's hard to tell without values, but the switching of the diac/triac probably creates a sharp rising edge that would create unwanted audible noise in the circuit without it.
|
H: What's the difference between the LM2917 and the LM2917-N?
The schematic I'm looking at says to use an LM2917. The specs from the TI site use both LM2917 and LM2917-N and the shop on ebay uses both names too.
http://www.ebay.com/itm/like/270864373784?lpid=82&chn=ps&ul_noapp=true
http://www.ti.com/lit/ds/symlink/lm2907-n.pdf
Is this some kind of naming convention I should know about for the future?
AI: N is TI's designation for their Plastic DIP through hole package. Note that the SMD SOIC package designation is M.
Note, these are old ICs first designed by National Semiconductor in the 1970s (this app note is from 1976) and later cloned by everybody. TI eventually acquired NatSemi in 2011 and absorbed their ICs into its product lines.
I believe, but can't find the original first-revision datasheets to confirm, that these ICs first came out when ceramic DIP packages were still common, so the N designator for moulded plastic DIP was important to point out the difference. The -8 at the end also designated the 8-pin version, so LM2917N-8 is the 8-pin PDIP.
|
H: How can I modify this switch?
Power > toggle > light > extension plug
When the toggle is ON there is power to the light and power to the extension's plug
When the toggle is OFF there is no power to the light and no power to the extension's plug
How can I make it so that the light has power when the toggle is OFF (no power to the extension's plug) and vice versa?
AI: You can't easily or safely do what you ask.
However, you can make it so the little light shown in your picture is on all the time. It looks to be just a neon bulb with series resistor. Those take very little power. This was a common way to make old night lights, which were basically on all the time.
Think about what the point really is in turning off the light when the switch is on. Presumably you want the switch to be lighted when off, since the room may then be dark or something. You don't need the light when the switch is on, but it probably does little harm either.
The additional power is minuscule too. You already want the light on when the thing being switched is off. When the thing being switched is on, it likely takes many times more power than the neon light.
Apparently the input power comes in at top right in your picture, and the switched power goes out at bottom left. If so, move the lower connection of the neon bulb assembly from the left pin of the switch to the right one.
|
H: STM32 clamping diodes - what is the maximum input voltage?
As you can see on the attached picture from a reference manual of STM32F7, GPIO pins have internal clamping diodes to protect from overvoltage.
But what maximum voltage can I put into the pin? I know that 5V is max, and 4V is max for VDD. But what would happen if I put, let's say, 10V into the pin? Shouldn't the diodes clamp this overvoltage?
What parameter of a diode decides how many Volts can be clamped safely?
If I want to use external clamping diodes, let's say 1N4148, then what will be maximum voltage that I can put into the pin?
AI: The answer to "Shouldn't the diodes clamp this overvoltage?" is yes and no. It really depends on the output impedance of whatever is feeding it and the strength of the power rail.
If you happened to connect that pin to a 10V power supply, what do you think is going to happen? Will the diode pull down the 10V supply, or will Vdd be pulled up to 10V minus a diode drop? Or will the diode just burn out?
It really all depends on the rest of the circuitry. But as crude as that example is, you can perhaps grasp the idea that the diode has a limit on how far it can go in the role of signal clamping.
The pins ALSO have a current limit. The diode will only survive as long as you do not exceed that current limit.
Will adding an external diode help?
Sometimes the internal clamping diode is not actually a diode but an FET type circuit. An external diode that clamps at less than the internal protection voltage will allow you to dump more current.
If you can't find a diode that clamps to less than the internal circuit, then there is no point. The chip may fry before your diode ever really kicks in.
You still can't short to the other rail though.
In summary, when attaching non-device level signals care needs to be taken to ensure the source impedance is high enough to not over-rate the device and not swamp the rail you are clamping to. Adding an appropriately sized series resistance is generally required.
Addendum: The input protection diodes are really there for "just in case" protection. They should not normally be relied upon to be a functional part of your design. Proper signal preparation before injection into the pin is the better design method.
Generally I prefer this approach.
R1 needs to be chosen to limit the current to the Zener's specified reverse current at the indicated Zener voltage. That is, the Zener may be rated at say \$3.1V @ 5mA\$.
So \$R1 = (V_{signal} - V_{zener}) / I_{zener}\$
simulate this circuit – Schematic created using CircuitLab
|
H: How does armature reaction affect brushes in DC generators?
"Due to armature reaction, the natural direction of the magnetic field of the poles is distorted. Thus the neutral plane is also altered. Therefore, the armature conductors are not at zero potential when they come in contact with the brushes. This leads to sparking across the brushes and loss of power."
I've seen that statement in various different videos regarding the subject. However I'm confused. I understand how the neutral plane shifts due to armature reaction, but I'm wondering why the armature needs to be at zero potential when it comes in contact with the brushes?
Aren't brushes supposed to transfer the current generated in the armature to the load? Why would the armature have zero potential, when in contact with the brush?
Any help would be appreciated, thanks!
AI: I'm not familiar with the theory of DC generators, but what your sources are trying to explain sounds fairly straightforward just on general electrical principles.
Look at the brush/commutator connection as a switch. In any kind of switch, when contact is made or broken, there will be a short time period of very high resistance (whether this involves arcs or merely a small contact area). That resistance will dissipate power uselessly, and arcing will cause wear.
If you instead arrange the timing of the commutator so that the contact is only made or broken when there is no potential difference and therefore no current — or, realistically, just very little — and the majority of the current flows while the commutator is “fully on” (contact over the entire area of the brush) then there will be no/little dissipation or arcing, so the generator will be more efficient.
That point would be when the magnetic field and armature are aligned so the magnetic field does not induce any voltage (the “neutral plane”) — the same position as when, if it were an AC generator without a commutator, the output voltage would cross zero.
|
H: Current gain in MOSFET CS Source degeneration
There are always formulas for calculating "Au" or "Gu", but equations for the current gain of MOSFETs are never given. I know the current gain is almost infinite in a normal CS amplifier, but can that value be shown as an equation?
And what happens when "Rs" is added to the circuit? Since the voltage gain is explained, I know that "Rs" decreases the voltage gain. But what happens to the current gain in a common-source circuit with source degeneration? Is it also decreased? Can that value be calculated through an equation that defines current gain ("Ai" & "Gi")?
AI: Current gain for MOSFETs is never referred to because it is meaningless. The drain current is not controlled by the gate current. (It isn't in BJTs either). When used in a circuit with other components the current gain can be relevant but it depends almost completely upon those other components.
With your example adding source degeneration (Rs) the voltage gain will decrease and so will the current gain because more current will be required to drive that voltage into R1 and R2.
|
H: Why is my sound level LEDs bar indicator not working?
I want to amplify a microphone signal and use the output to drive a speaker and an LEDs bar, all works nicely except for the LEDs bar:
simulate this circuit – Schematic created using CircuitLab
The LEDs bar is supposed to change level (number of illuminated LEDs) in accordance with the sound level, but all it does is either light all LEDs at once or none. How do I make it properly reactive to sound, like this one on YouTube?
AI: You have 8 comparators with evenly spaced (in voltage) thresholds, but you really want a log scale. Try making each resistor in the series chain half the value of the one above it; this will give you 6 dB per LED, which is likely to be more satisfactory.
Add a diode, a cap and two resistors to give the thing some dynamics.
Or use a LM3914 or such, lower parts count and that part is made for this job.
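To see why halving each resistor gives roughly 6 dB per LED, here is a small sketch of the tap voltages such a chain produces (a hypothetical 8-resistor chain fed from a 5 V reference; your reference voltage and LED count may differ):

    import math

    Vref = 5.0                                   # hypothetical comparator reference
    r = [2.0 ** -i for i in range(8)]            # resistor chain, top to bottom: R, R/2, R/4, ...
    total = sum(r)

    # Threshold at each internal tap = Vref * (resistance below the tap) / (total resistance)
    taps = [Vref * sum(r[k:]) / total for k in range(1, len(r))]

    for hi, lo in zip(taps, taps[1:]):
        print(f"{hi:.3f} V -> {lo:.3f} V  step = {20 * math.log10(hi / lo):.1f} dB")
    # Most steps come out at ~6 dB; only the bottom step is larger because
    # there is just a single resistor left below the last tap.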
|
H: Why is transistor needed when using a relay?
I will start building a relay based on this answer:
https://electronics.stackexchange.com/a/464345/56969
Why is T1 needed? Every other component has an important purpose. But does T1 have an important purpose as well? Can I just remove it?
What would happen if I remove that transistor?
Edit
I know Arduino cannot supply more than 40mA on each pin and that is the reason why there is a separate power supply to turn on the relay. I guess my question should have been "Can the optocoupler supply 100mA of current?" If so, that means I can remove the transistor and have fewer components.
AI: Image from linked question: -
Basically the opto-coupler cannot provide enough current at low enough voltage drop to turn on the relay coil shown. The transistor acts as a power buffer and it "delivers the goods" with a small input signal power from the opto-isolator.
Addition
You may be able to replace the opto, the transistor and the relay by using Panasonic's PhotoMOS product range: -
Pick the DC/AC contact rating you need and the isolation voltage and if it fits your application then you're good to go.
|
H: Why is a capacitor connected between two ground terminals and what difference does it make?
I was referring to a datasheet where I found a capacitor connected between two ground terminals. Are these really ground terminals, or am I reading the datasheet the wrong way? Can someone help me understand what this part of the datasheet means and why this type of connection is used?
See the "Page 27" of the below datasheet
Reference datasheet: http://www.ti.com/lit/ds/symlink/lm3450.pdf?ts=1591116185629
AI: Figure 1. The circuit has two grounds. The hollow ground symbol is used on the mains (live) side of the isolation. The solid ground symbol is used on the low-voltage DC side of the isolation.
To suppress the high-frequency common mode it is necessary to put capacitors between the input and output side of the power supply with a capacitance substantially higher than the capacitance in the flyback transformer. This effectively shorts out the high frequency and prevents it escaping from the device. Source: What does the Y capacitor in a SMPS do?.
Links:
Safety Capacitors First: Class-X and Class-Y Capacitors.
|
H: Why does the effect shown in the video below show left-right asymmetry?
I made this video which shows what happens when I move the point of a wooden chopstick over my TFT-LCD screen, with firm pressure.
It's remarkable that one can see that there is a difference between moving the chopstick to the right and to the left (with about equal pressure).
When moving the pressure point to the right the blue blob is positioned on the right side of the pressure point (and between the two red "wake" circles), and while moving it to the left it's also on the right side (so the blobs are on the opposite sides of the pressure point's direction of moving).
The pattern has a different colored "tail", dependent on the direction of motion, being green when moving to the left and pinkish when moving to the right.
What's a pronounced difference too, are the yellow structures: When moving to the right there is one in front and one behind the pressure point. When moving to the left there are two yellow concentric "circles" of which the outer one is much fainter. And so on.
I'm sure it's got something to do with the distance between the two glass plates in the LCD part (so the polarization vector of the light is changed, which changes the color). It's strange that the pattern is direction-dependent, though on second thought, when moving the pressure point over a flexible, smooth, and flat structure, the pressure is of course not symmetrically distributed around the pressure point. So the pressure on the upper plastic sheet is asymmetrical with respect to the moving pressure point, and those nice patterns form because the pressure varies in a continuous way, making the distance between the sheets smaller in a continuous way too. I just wonder why the color distribution is different when going in different directions. The pressure distribution will be the same when moving in both directions (see the answer below, where a comparison is made with the movement of the chopstick on a rubber sheet, which is obviously direction-independent), won't it?
So why is the pattern dependent on the direction in which I move the pressure point with the chopstick? Shouldn't the same bulge pop up in front of the moving chopstick, and the same stretching behind it, no matter in which direction I move the chopstick? In other words, why does the effect show no left-right symmetry?
AI: Imagine you were dragging on a rubber sheet instead of the LCD screen. The sheet would stretch out a bit behind the stick and bunch up slightly in front. The pattern would look exactly like what you're seeing on the LCD.
A LCD screen works by changing the polarization of light as it passes through a very narrow gap between 2 plastic sheets. This gap must be very precise, because if the gap changes the polarization will change. As you drag the chopstick around on the screen, you're disturbing the top sheet, which is changing the thickness of the gap. This causes a visual disturbance.
|
H: A warning on power pins of MCUs, explanation?
I came across with this warning:
Most Chinese development boards do not have any kind of protection on
the +5V rail. This means that the +5V pin of the USB connector is
directly connected to any +5V/VIN pin on the development board. Always
check if this is the case when you’re connecting your development
board to an external power source while using the USB port.
I don't fully understand what the danger is. What would the consequences of this be?
Thanks.
AI: Because when you connect two 5V sources together, they fight each other since no two 5V sources are identical. What happens if your computer's USB is 5.1V and your external power source is 4.9V? You are left with a 0.2V difference across what is effectively a short circuit between the two supplies, so a large current flows from one source into the other, potentially damaging the USB port, the external supply, or the board.
|
H: Female terminal pin
What is this female terminal pin called? Everywhere I look I can only find slightly similar ones, but no matches. Even a reverse Google Image search pulls up some weird alternatives.
The only ones I can find are these
I need this specific type because the lip at the end of the female terminal locks into a Peg-Perego battery connector... Peg-Perego is a battery connector for a kid's "plastic" 4-wheeler. I don't have the slightest clue where to start looking for connectors besides Amazon and eBay, so any information would be greatly appreciated.
This is the other end of that connector
AI: This is "TAPP12V Connector" and if you google it you will see literally hundreds of sources for them. Here are some examples: one, two, three.
They mostly come pre-wired with slightly more common connector on the other side. You can either use that connector or cut it off and splice the wires.
Main point is - do not waste time looking for pins, buy "Peg Perego battery harness" and use it to replace the connector.
Another option (which I don't like) is to crimp the wire where it is supposed to go - into the second pair of tabs. Since you would not be able to crimp the insulation as well, chances are it will break off again eventually.
|
H: Required timer count for a number of same frequency different duty cycle PWM signal
Ok, it may look a bit confusing. I am specifically talking about STM32 MCUs, or even more specific, STM32F103C8T6. I did some amount of reading, but could not find the answer to this.
Let's say I want 4 PWM signals each at 50 Hz but all with different duty cycles. In such a case will I need 4 distinct timers, or can I use one timer with 4 channels? I mean, can each channel of the same timer be configured with a different duty cycle?
AI: That's why timers have multiple channels: to generate multiple PWM signals with the same timer. Each channel has its own capture/compare register, so each channel can have its own duty cycle, while the period (and therefore the frequency) is shared across the timer. If you use multiple separate timers, their outputs may not be synchronized.
|
H: LM317LM as a current source
I have been using LM317L as a current source for generating 2-50mA current for a load, by following the datasheet, P14:
So far so good, but now I have come across the LM317L-N, particularly the LM317LM/NOPB, which also comes in an 8-pin SOIC package whose datasheet states that 4 of the pins are VOUT.
My question is: is it possible to use those 4 outputs to generate 4 different current sources by using a different resistor on each line, given that each is in the allowed range and all together they will not exceed 100 mA? It doesn't seem right, or am I missing something?
My goal is to have multiple current sources within the given range, so if I can have less IC, it would be great.
datasheet
Thanks and regards,
Pat
AI: Figure 1. The 8-pin package from page 3 of the datasheet.
Since there is only one ADJ pin, your hopes are dashed. It's one LM317 inside with VOUT connected to four pins; they'd be labelled differently otherwise.
There are a couple of other clues:
There's only one ADJ pin. You can only set one voltage / current.
To do what you were hoping for you'd need 4 × ADJ, 4 × VOUT and VIN. That's nine pins, the chip only has eight and two of them are NC (not connected).
Sorry.
|
H: How can I trigger an AC output ON when an AC input is OFF?
I have a low power 24V AC circuit powering some motorised ball valves for switching water on and off. Several of the valves are 2-wire auto-return meaning that presence of an AC signal will open the valve, and absence of an AC signal will close the valve (ie. via a spring).
One valve however is 3-wire and requires an AC signal between one wire & ground to open and an AC signal between the other wire & ground to close.
In effect, I'm trying to replace the relay in the following circuit with discrete components:
simulate this circuit – Schematic created using CircuitLab
The closest I've been able to get to a solution here is to take the switched AC input, convert to DC (ish) with a Diode Rectifier & Capacitor, then use this to trigger an inverted Transmission Gate but this feels overcomplicated and I'm sure there's a better way.
AI: OK, just as an academic exercise, here's one way to achieve what you want. The transistor circuit inhibits passing current to the motor if there is also voltage on the control input (R1). Otherwise, the pass transistor behaves as a \$\frac{100 \Omega}{\beta}\$ resistor. Two bridge rectifiers allow it to work on both halves of the AC cycle.
simulate this circuit – Schematic created using CircuitLab
Select a transistor for Q2 that can handle the motor current.
Equivalent circuit for positive half-cycle:
simulate this circuit
Equivalent circuit for negative half-cycle:
simulate this circuit
|
H: Which formula is correct for a Digital-Analog-Converter?
I am looking for the formula for an ideal DAC: a device that takes in a digital code and returns an analog value
rfwireless-world.com claims
$$V_{out} = D\cdot V_{ref}/(2^N-1)$$
where D is the digital code and N is the resolution of the DAC.
However, sciencedirect.com claims it is
$$V_{out} = D \cdot V_{ref}/2^N$$
Which is correct? The second equation seems to imply that \$V_{out}\$ will never equal \$V_{ref}\$ since the highest value \$D\$ can take is \$D_{max} = 2^N-1\$
AI: This is the classic 'fencepost problem'. \$2^N\$ is the number of codes, and \$2^N-1\$ is the number of steps between codes. Since the MSB transition is defined as \$V_{ref}/{2}\$, the full-scale code falls just one step short of \$V_{ref}\$.
edit:
Here's an example, taken from the MAX541 datasheet (full disclosure: I am an applications engineer at Maxim Integrated):
https://datasheets.maximintegrated.com/en/ds/MAX541-MAX542.pdf
Note the full-scale value for analog output is 65535/65536 of VREF, or 32767/32768 of VREF, depending on configuration.
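A quick way to see the fencepost effect numerically (a generic sketch, not tied to any particular DAC):

    Vref = 5.0
    N = 16
    codes = 2 ** N                 # number of codes
    lsb = Vref / codes             # step size implied by Vout = D * Vref / 2**N

    full_scale = (codes - 1) * lsb # output at the highest code, D = 2**N - 1
    print(f"1 LSB      = {lsb * 1e6:.2f} uV")
    print(f"Full scale = {full_scale:.6f} V")
    print(f"Short of Vref by {(Vref - full_scale) * 1e6:.2f} uV, i.e. exactly one LSB")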
|
H: Do I need to put a resistor in series with relay coil?
I need to design a circuit to drive a relay and I have some doubts.
Do I need a resistor in series with the relay coil, i.e. between the transistor and the relay coil?
Why do I always find relays rated at 12 V? Can I use a 5 V source to energize the coil?
Unfortunately, I don't have a current specification or a relay model. My doubt is about how the circuit should work; I'm afraid of frying the relay or the transistor.
simulate this circuit – Schematic created using CircuitLab
:
AI: You only need a resistor in series with your relay coil if your are applying a voltage that is different than what your relay coil is rated for. At the rated voltage, the coil resistance will be sufficient to limit the current to the levels necessary to energize the coil.
However, if you apply a larger voltage than what the coil is rated for, the coil resistance is insufficient to limit the current to safe levels. In that case you must add enough series resistance so that the rated voltage appears across the coil and the rated current runs through the coil, even though you are applying a larger voltage.
This is easy enough to calculate using V=IR along with your applied voltage, and two of the following: the coil's rated voltage, current, and resistance from the datasheet
If you are applying a voltage less than the relay's rated coil voltage, you do not need a resistor but the relay will also not switch since there is not enough current.
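For example (purely hypothetical numbers, since the relay in the question is unspecified), take a 12 V coil with 400 ohm resistance run from a 24 V supply:

    V_supply = 24.0      # available supply (hypothetical)
    V_coil = 12.0        # relay's rated coil voltage (hypothetical)
    R_coil = 400.0       # coil resistance from the datasheet (hypothetical)

    I_coil = V_coil / R_coil                    # rated coil current: 30 mA
    R_series = (V_supply - V_coil) / I_coil     # resistor needed to drop the excess voltage
    P_series = I_coil ** 2 * R_series           # power the resistor must dissipate

    print(f"Coil current   : {I_coil * 1e3:.0f} mA")
    print(f"Series resistor: {R_series:.0f} ohm")    # 400 ohm
    print(f"Resistor power : {P_series:.2f} W")      # ~0.36 W, so pick at least a 1/2 W part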
|
H: Understanding the operation of LM219 Comparator
I'm trying to use the LM219 (equivalently LM119 or LM319) comparator in a project I am working on (I chose this over other comparators because it has a fast response time for the desired supply voltages). I am using supply voltages of +-12 volts, and I am generating a square wave output from a triangle wave input (+1 to -1) and DC bias (for duty cycle control). Essentially if the difference in the inputs are greater than 0, it should rail to +12, and rail to -12 otherwise.
However, all of the datasheets for the LM219 I have looked at don't have a simple comparator application circuit (all typical application circuits don't intuitively explain how the comparator works), and I don't completely understand how to wire it up. How does the part decide whether to rail to +12 or -12 based on the connections? Would the circuit below suffice for my application?
A related question, all SPICE models for the part I found online don't seem to work well. Who can I contact to find a SPICE or TINA-TI model for this part?
AI: The LMx19 has an open-collector output. If you look at Fig 6.1 - Functional Block Diagram in the TI datasheet, you will see that the output transistor can only pull the output towards Ground when Low, and lets the output float when not Low. It cannot drive the output to either supply (well, if V- is tied to Ground it can drive the output to V-/Ground).
|
H: Zero Crossing Detector using comparator
I followed SNOA999 for a zero crossing detector using TLV7011 as a comparator. I am not sure about short-circuiting grounds between AC and DC power sources. My concern is creating negative currents on the DC side and thus damaging some more sensitive components.
I tweaked the circuit from the application note to achieve the following:
simulate this circuit – Schematic created using CircuitLab
Where D1 and D2 are schottky diodes.
Vout works as planned in a SPICE simulation. But my doubt is whether not short-circuiting V- with the inverting input will affect the stability of Vout as a zero-crossing reference.
AI: Shorting grounds will result in safety hazards and most likely burnt components if the voltage source is AC mains (120 V or 220 V). One reason is that both neutral and hot can carry current (especially in the event of a fault). An isolation step-down transformer should be used between the mains source and the comparator. For example, a 10:1 step-down would give a max voltage of 11 V and provide isolation for safety.
Or if you just need to detect the cycle you can use a circuit like this with an opto-coupler
|
H: Capacitor does not hold charge when switching power
I'm trying to create a power source switch using a relay. This will supposedly be used for my Raspberry Pi. So what I want to do here is:
Create a backup power source with a battery that will replace the main power if it fails or is switched off
Add a capacitor to hold charge, so that while the relay is switching, the power will not be completely cut off
Here's my schematic:
I already tried to create this, but the problem is that when the main power is cut off, the output still drops and then comes back up to 5 V. So I assume the capacitor is not doing its job.
But strangely, if I cut both power sources, the output still shows 5 V (and slowly falls), so this means the capacitor actually holds a charge, right? Then why does the power still momentarily drop when the relay switches? Did I do this incorrectly?
AI: The cause of the problem could be the relay's switching characteristics.
The 'pull-in' voltage of a relay would be closer to its nominal voltage whereas its 'drop-out' voltage would be much lower.
After occurrence of a power failure, the relay's 'drop-out' characteristic would ensure that it stays on in spite of the decaying power supply voltage.
Hence the capacitor would also get discharged before switch-over to the battery could occur.
Here's an alternative scheme using a Schottky diode for switching.
The PSU would predominate, with its voltage set marginally higher. In any case the probability of the PSU voltage exceeding that of the battery is quite high.
With availability of Schottky diodes having a forward voltage as low as 150mV, voltage drop during backup should not be an issue.
|
H: Basys 3 400MHz Logic
The Basys 3 advertises "internal clock speeds exceeding 450 MHz," but the default clock pin is connected to a 100MHz oscillator. Is it possible to configure the Basys 3 to use a 450MHz clock?
AI: The circuits on the board have internal PLLs. These are blocks that can generate a signal (like a clock) that is related to the reference (in this case the reference is the 100 MHz oscillator), but not necessarily equal. There are various ways to implement these blocks, but almost all big digital circuits will contain one (or more) to create clocks based off a reference.
One of the benefits is that a PLL can usually change the ratio between the generated signal and the reference. This means you can use a single, fixed reference yet still change the clock speed of the digital circuit (for example, to achieve lower power).
|
H: How to drive the gate of a MOSFET with a PWM like signal
I'm trying to use a MOSFET as a driver for my 10 W LED with a 12 V supply. This LED will be used as a transmitter, with the signal coming from an Arduino Nano. I'm using an IRFZ44N MOSFET with Vgs(th) between 2 V and 4 V. The problem is that whenever I apply the transmitting signal to the gate of the MOSFET it doesn't turn on fully: the voltage drop across the LED is 6 V and across Vds it is 6 V. When I checked the drop across Vgs the DMM read only 2 V. From what I've read, the DMM averages the value for such signals, but that should still mean the gate is approximately 5 V when high, and hence the MOSFET should turn on, yet it doesn't. I then tried giving the blink program's 1 second ON / 1 second OFF signal to the gate. What I noticed was that the MOSFET turned on completely, with a very small Vds and the maximum voltage drop across the LED, but the same isn't true at higher frequencies. I would like to know how to drive the MOSFET with a high-frequency PWM-like signal.
AI: Summary of solution
The 10 k series resistor will restrict the ability to drive sufficient gate voltage to most MOSFETs - this should be replaced by (say) 10 ohm or 100 ohm.
This will also improve the high frequency activation of the MOSFET due to the low drive impedance charging gate capacitance up a lot quicker.
The LEDs will look dimmer at a high frequency 50:50 drive compared to being switched on at a slow rate due to the persistence of the retina
Using a DVM to measure 6 volts across the LEDs indicates that at a high frequency, the actual wave form is a fast on-off 12 volt signal that of course, averages to 6 volts. Ditto the voltage across the MOSFET.
Picture originally posted by OP: -
For a start the series 10 k resistor that connects to the Nano's output can be reduced to circa 100 ohms. At the moment it halves the drive voltage from the Nano and this will mean poor performance from the MOSFET in terms of switch on resistance.
If the Nano can produce 5 volts logic drive then you should be OK with the IRFZ44N: -
At higher frequencies, gate-source capacitance will degrade the signal seen at the gate if the driving current isn't sufficient. The gate source input capacitance is nearly 1.5 nF and with an effective 5 kohm source, this forms an RC low pass filter of cut-off 21.2 kHz. Significantly higher drive frequencies will turn into a mushy DC level at the gate. Try removing the series 10 kohm resistor and replacing it with 100 ohms for a start.
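A quick check of how the series gate resistance interacts with that ~1.5 nF input capacitance (a sketch using the figures quoted above):

    import math

    C_iss = 1.5e-9    # approximate gate input capacitance of the IRFZ44N

    def f_cutoff(R_source):
        return 1 / (2 * math.pi * R_source * C_iss)

    print(f"5 kohm effective source: {f_cutoff(5e3) / 1e3:.1f} kHz")   # ~21 kHz, as above
    print(f"100 ohm series resistor: {f_cutoff(100) / 1e6:.2f} MHz")   # ~1 MHz, fine for typical PWM rates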
|
H: Why do I need phase margin if I know the transfer function?
What is the point of examining the phase margin (or gain margin) for a closed-loop system if I can just solve for the transfer function? The transfer function will give the poles and zeros, which can be used to know whether your system is stable, the step response, etc.
In fact, the Q of a two-pole system can be solved in terms of phase-margin, and vice versa.
AI: Why do I need phase margin if I know the transfer function?
Short answer: you don't
But if all you have is a real device that may become unstable then a physical measurement may be all you can do. The physical measurement may also hint where the poles might be but for sure, the physical measurement will deliver phase margin or gain margin.
|
H: Is there a component that will OR the last 1-2ms of its input?
I need a component/circuit which will output 1 if a 1 has been input at any time in the last 1-2ms.
I could make such a circuit out of several shift registers, an oscillator and several OR gates, or I could use an MCU.
Is there a simpler (maybe analogue) alternative? I've considered:
bus -->|--+--+-- out
| |
= R
| |
_ _
- -
_
>| diode = capacitor - GND R resistor
but the 1V diode drop is quite big, making the circuit very sensitive to the bus being driven at less than 3V3. There may be slew issues, with the output component not recognising an edge.
Maybe a 555 can do this?
The required logic looks like:
_ ____ _ __
bus __| |________________________| |_| |_| |______________________
_____________________ _________________________________
out __| |____| |__
|<----- 1-2ms ----->| |<----- 1-2ms ----->|
AI: You need a re-triggerable monostable and this can be made from a 555 timer. Waveform: -
555 circuit using a BJT to keep the timing capacitor discharged: -
|
H: Button debounce ringing
I have a circuit where I am using a tactile switch to pull 3.3v down to ground and investigating debouncing the button. My relevant part of the circuit is
The 100 nF cap doesn't work here: my signal has ringing that passes through the undefined region and causes a few false clicks. Increasing the capacitance to 1 uF makes it worse. I tried using 10 nF, which was better: it reduced the ringing time but did not remove it.
Image of scope trace below for 10nF.
This ringing is not really a switch bounce contact problem, is it?
What causes this ringing? There is capacitance on the power rail; could it be that?
My best results are with removing the caps entirely.
I know I can use software to solve this issue so it is not a big issue but I am using this as a learning exercise.
My easyeda design:
Top:
Bottom:
AI: The ringing is caused by poor layout (possibly breadboarding is the worst culprit) AND the fact that your button's switch-contact is shorting out a charged capacitor - the instantaneous current flow is (practically) tens of amps and that surge (along with poor wiring and layout creating excessive loop inductance) causes the ringing.
Try putting a 10 ohm resistor in series with each capacitor.
|
H: The meaning of leads in equivalent resistances
Following on this question (How to combine two resistors with a voltage source) and faced with a similar example, it would seem the opposite applies. I am given the circuit below:
simulate this circuit – Schematic created using CircuitLab
and told Rth is (R1||R2)+(R3||R4). In this question (How to combine two resistors with a voltage source), it seemed the terminals were implied to be meaningless, implying that here, R1––R2 and R3––R4. What am I missing?
EDIT
In response to some of the answers below, I will redraw the circuit to better illustrate my confusion:
simulate this circuit
What is the role of the red leads? What is the meaning of asking for a resistance between these vs "the whole circuit"?
AI: You can't just take away the red nodes as you did. You want to find the equivalent resistance as seen at these two nodes, so if the nodes are removed then you can't get a meaningful solution anymore. In problems like this we assume that something will eventually be connected at the red nodes.
And, when you combine circuit elements in series the node that previously existed between them is no longer part of the circuit....it is buried somewhere inside the single new element that is equivalent to the two original elements.
|
H: Altium->Coupled Inductors
I'm glad to be with you. I have a question about coupled inductors in Altium. I want to make a flyback converter and first tried it as in Image 1. I converted 310 V DC to 12 V DC as you can see. I wrote a SPICE model for T1 (the transformer).
Then I wanted to add more outputs, as you can see in Image 4.
But the simulation output came out like in Image 5.
Here is the SPICE code that I wrote for the 3-output transformer.
What is wrong? I think the 3-output transformer SPICE model is wrong, but I couldn't figure it out. Please give me some advice about that.
With my best regards...
AI: What is wrong?
You've made the beginner's error of not understanding the importance of dot-notation when implementing a flyback transformer design. In your first circuit you have, in effect, got lucky with the right output voltage but it's not acting as a flyback converter but more like a regular forward converter and using the transformer as a regular step-down device.
This is not how flyback designs work.
The lower picture shows how I've altered the secondary coil to suit it working as a flyback converter. Flyback converters work by charging the primary coil with the secondary diode reverse biased, then turning off the primary MOSFET so that "fly-back" occurs and the energy stored in the transformer's magnetic field is released and forward biases the diode. It's a two-phase operation.
Correct use of transformer secondary phase relative to primary phase: -
Notice the change in the position of the dot on the secondary - you can keep the dot as per your original diagram but you then need to connect the secondary diode to the non-dotted end of the secondary (as shown in my amendment to your original diagram).
Picture from A Guide to Flyback Transformers by CoilCraft.
Other reading: Mean Well - Flyback converter: -
|
H: Why should the smallest bypass capacitor be placed nearest to the IC?
I have found and read many papers about bypass capacitors, but there is still one thing I am really struggling with. Let's say I have an IC and I place 3 bypass capacitors (eg. 10µ, 1µ and 100n). I am familiar with the idea that the smallest should be placed nearest, but I am still not certain why. From the papers I caught these 2 reasons:
The smallest capacitors are faster; thus, they can react fastest.
The goal of the smallest capacitor is to "filter" higher frequency noise. (This one is the one where I struggle.)
From what I've read, the reason to place the smallest closest is that high frequencies are affected by the length of the trace more than lower frequencies. Could someone explain this part to me?
Edit:
When I consider the typical graph of capacitor impedance versus frequency (the V-shaped graph), does it mean that the inductance of LONG traces shifts the point of lowest impedance to the left (toward lower frequencies), so the high frequencies are no longer filtered?
AI: Because inductive impedance increases with frequency $$Z=j\cdot2\cdot\pi\cdot f\cdot L$$ you need to have lowest inductance (=shortest trace) for highest frequency filtering.
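To get a feel for the numbers (a sketch using the roughly 1 nH/mm rule of thumb for a trace, not measured values):

    import math

    def z_inductive(f, L):
        return 2 * math.pi * f * L     # magnitude of the inductive impedance

    # 10 mm versus 50 mm of trace at roughly 1 nH/mm
    for L in (10e-9, 50e-9):
        for f in (1e6, 100e6):
            print(f"L = {L * 1e9:.0f} nH, f = {f / 1e6:>5.0f} MHz -> |Z| = {z_inductive(f, L):.3f} ohm")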
|
H: Why are tapped transformers needed in RF amplifier circuits?
I'm not really sure why, but I have seen many different RF circuits in which the input to a transistor is taken from a center tap instead of across the entire secondary, such as this example:
Why is the input taken from the tap instead of across the entire secondary? I don't understand the need for the tap.
Sometimes also the output might be connected to the tap instead of the bottom side of the primary like this example:
And in some extreme cases, like this IF amplifier, only the tap is used, the outer end of the primary is left disconnected.
What is the logic behind this?
AI: Impedance conversion.
Tapping the top of a tuned cct tends to load it, reducing its Q and broadening the bandwidth where you want it narrow to reduce interference. In your first example the FET is a high-resistance load, but it still has input and Miller capacitances which you don't want across the tuned circuit.
Tap at 1/3 of the inductor, and you transform the equivalent impedance across the whole tuned cct by 3^2 = 9.
Same in the 2nd cct: while the transistor approximates a current source, it adds a high but not infinite resistance across the coil; by connecting it to a tap, the impedance seen across the whole coil is increased. (Also the transformer steps up the resulting voltage, increasing gain).
In the third circuit I suspect it's merely a way to get one of several step-up ratios (and impedance transformations) out of a standard transformer. I see part of a second one : was that connected differently?
Sometimes it can be used to convert from a standard impedance like 50 or 75 ohms from an antenna or transmission line to a higher one for tuning.
"Impedance conversion" by adding resistors would add Johnson noise generated by the resistor : impedance conversion by transformer does not, as well as stepping up the voltage when an impedance increase is desired.
|
H: Is ther a standard circuit/component to control the temperature 500W resistive heating element from a PCB?
I have a 500W (230V) cartridge heater element and a K-type thermocouple. I want to use the feedback from the thermocouple to be able to vary the temperature of the cartridge heater. I have no concerns about being able to implement the thermocouple circuitry and software, however I have never used a high power heating element before.
I'm looking into the control of the element. I personally don't have an issue with an on-off control (as in, I'm okay with applying some hysteresis and having, say +/- 10°C from the desired value, or using PWM), however maybe there are some other considerations as to why this wouldn't be a good idea (switching life cycles for example). If there is a component to safely vary the power to the heating element, so I can implement PID for example, then I would also be interested in that.
As I want to control the heating elements from a microcontroller (preferrably 3v3, but can level shift to 5v if needed), I've been looking into relays. However, I am struggling with the power requirements - from what I find, 500W seems to be a lot of power for a PCB based relay (although since I've never used a relay before, maybe I am misinterpreting the specifications). I'm also concerned about the actual traces on the PCB - I'd much prefer to have the relay controlled by PCB traces but the heating elements just connected to the relay by wires, if such a device exists.
I've looked into latching, non-latching, solid state relays, SCRs, triacs, but I feel like the more I look into it, the less sure I am. I'm now at the point where I feel like there must be a standard way to do this, but I'm just getting lost in the sea of options.
So, is there a standard circuit/device to safely control the temperature of 500W (230V) heating element using microcontroller-level voltages? The cheaper/simpler method is usually better in my eyes, as long as it's safe.
EDIT:
Thanks all for the comments and answers so far. Looks like I'll go with some type of relay, and possibly some backup protection if I want to use a latching relay. One of the main issues I'm having with finding a suitable relay is geting the specifications correct. For example, the following is taken from a relay datasheet:
The first thing I see is that the max switching voltage and max switching current are both high enough - great. Then I saw the rated load box, which confirms this. However, what confuses me is the 'Max. switching power' box. I'm already a little hazy on the difference between VA and watts, but the fact that it has a maximum of 300 W, while my element is rated at 500 W, confuses me. Can anyone help explain this?
AI: 500 watts at 230 volts is only 2.17 amps. If you don't want to have the power continuously on at that power level, you might want to design for 2.5 to 3 amps. A PCB-mounted electromechanical relay for that current rating and a resistive load should not be difficult to find.
You should also consider using a solid-state relay with zero-crossing control. The idea would be to pass a certain number of full cycles of AC line current and block a certain number, varying the on/off ratio under PI or perhaps PID control. Regulating to only +/- 10°C seems like quite a pessimistic expectation. The old-fashioned thermostat suggested by @Hearth would do better than that.
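One common firmware pattern for the "pass some cycles, block some cycles" idea is simple time-proportioning control. A rough sketch (the function name and window length are purely illustrative, and the PI/PID loop that produces the power demand is not shown):

    def cycles_on(power_fraction, window_cycles=50):
        """How many full mains cycles to pass out of a fixed window (time-proportioning)."""
        power_fraction = min(max(power_fraction, 0.0), 1.0)
        return int(power_fraction * window_cycles + 0.5)

    # e.g. the control loop asks for 37 % power over a 50-cycle (1 s at 50 Hz) window:
    print(cycles_on(0.37))   # -> 19 cycles on, 31 cycles off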
|
H: What is 1.5%rdg + 4dgt for 15,23 V?
Where "Rdg"is for reading and "dgt"is for digits.
So, 15,23 * 0,015 = 0,22845.
Rounding to most significant digit it's 0,2.
So is this correct?
$$15,23 \pm 0,2$$
Since 4 dgt is smaller, we ignore it. Or am I doing this wrong? I need to find the uncertainty in the way I did for a physics lab.
AI: Why did you round to 0.2, i.e. to just one significant digit? Nothing anywhere in the question has only one significant digit; the operand with the fewest significant digits (the 1.5%) has two.
\$ 15.23 \times 0.015 = 0.23\$ rounded to the same number of significant digits as the operand with the lowest number of significant digits (which is the 1.5%)
Then \$\pm4\$ digits on top of that means \$\pm4\$ counts of the least significant digit of your reading, which was 15.23; the least significant digit of that reading is 0.01. You use the digit size from the 15.23 reading, and not from the 1.5%, because the digit error is quantified relative to your reading. It makes more sense if you remember that your meter must have a finite minimum absolute error; if the error were purely a percentage, the absolute error would become infinitesimal as the reading approached zero.
Therefore \$ \pm0.23 \pm0.04 = \pm0.27\$
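Putting the numbers together (a small sketch using only the values from the question):

    reading = 15.23       # V
    pct_err = 0.015       # 1.5 % of reading
    digit_err = 4 * 0.01  # 4 counts of the least significant displayed digit (0.01 V)

    uncertainty = reading * pct_err + digit_err
    print(f"{reading} V +/- {uncertainty:.2f} V")   # 15.23 V +/- 0.27 V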
|
H: Voltage induced accross an inductor, conceptual confusions
simulate this circuit – Schematic created using CircuitLab
OK, I always have a hard time to understand inductors. Now, I do know that inductors will not let current through them to change instantaneously. And the voltage induced across them is given by the formula:
$$
V_{ind} = L \frac{di}{dt}.
$$
Suppose, the circuit was at steady state. So the current through the inductor is 1 A at t = 0-. At t = 0, I move the SW1 to position B. Now, using the above formula, the magnitude of the induced voltage across the inductor should be 1 V. And the polarity is such that
$$
V_{node\, C} = GND - 1 V = 0 - 1 = -1 V.
$$
So node C is at -1 V now. I know that the inductor will try to keep current flowing from C to GND, but the polarity forces me to think the other way around, i.e. that current should be flowing from GND to C from both sides (also from B to C). Then I get confused: node C looks like a new ground, a sink for current.
So, it is clear that I am having some hard time on this concept, any help would be appreciated.
Thanks.
AI: When you throw the switch, the inductor circuit is changing from "being a motor" to "being a generator" and it tries to keep the +1 amp flowing by altering the node C voltage (the only node that it can alter) to ensure that +1 amp still circulates at that instant following switch-over. The only viable voltage at node C that ensures this is -1 volt.
This forces 1 amp (at that instant) to flow through both the resistor and the inductor, in the same direction as prior to the switch changing position. The voltage at node C clearly has to be -1 volt across the resistor to satisfy Ohm's law for 1 amp flowing. This is because one side of the resistor has been connected to 0 volts by the switch changing position.
At the instant the switch changes over, you can assume the inductor to be equivalent to a constant current source of 1 amp and that means that whatever load impedance is connected across it (\$Z_{EXT}\$ = 1 ohm in your example), the voltage produced is 1 amp x \$Z_{EXT}\$. But only for that instant.
There is also one more thing that can be said at that instant: because we know that Faraday's equation is true at all times for an inductor (\$V = L\frac{di}{dt}\$) AND because the inductor voltage has to be -1 volt, the rate of change of current is now -V/L, i.e. -1 volt / inductance. So we know the terminal voltage expressed by the inductor, the current, and the rate of change (fall) of current that will happen at that instant.
What happens hereafter is an exponentially decaying current best described by this picture: -
Picture taken from this slide show (Physics 121 - Electricity and Magnetism, Lecture 12 - Inductance, RL circuits)
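To put numbers on the decay, i(t) = I0·exp(-t/tau) with tau = L/R. A small sketch (the 1 H inductance is an assumed value purely for illustration; the question does not give L):

    import math

    I0 = 1.0       # A, current flowing at the instant the switch moves to B
    R = 1.0        # ohm, the resistor the inductor now discharges into
    L = 1.0        # H (assumed for illustration only)
    tau = L / R    # time constant

    for t in (0.0, 0.5 * tau, tau, 3 * tau, 5 * tau):
        i = I0 * math.exp(-t / tau)
        v_c = -i * R                  # node C voltage, starts at -1 V and decays toward 0 V
        print(f"t = {t:4.1f} s  i = {i:.3f} A  V(C) = {v_c:+.3f} V")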
|
H: How to obtain a Norton current (I_N) with resistors in the way
How is the Norton current I_N (over the red terminals) obtained in a circuit where the introduced short cannot return to a given current source I without going through some resistance?
simulate this circuit – Schematic created using CircuitLab
EDIT
The hate is strong for this question. If your concern is similar, then with a bit of time and bravery, you should find this post useful.
AI: simulate this circuit – Schematic created using CircuitLab
The approach is to take into account the resistance faced along way from I, through the short I_N and back to I. Due to current division, $$I_N = {\frac{R_1||R_2}{R_{T}}}I$$ where $$R_{T} = R_1||R_2 + R_3||R_4$$
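As a numerical sanity check of that divider expression (the element values here are hypothetical, since the question does not fix them):

    def parallel(a, b):
        return a * b / (a + b)

    I = 2.0             # A, source current (hypothetical)
    R1 = R2 = 4.0       # ohm (hypothetical)
    R3 = R4 = 2.0       # ohm (hypothetical)

    Rt = parallel(R1, R2) + parallel(R3, R4)   # 2 + 1 = 3 ohm
    I_N = parallel(R1, R2) / Rt * I            # 2/3 of the source current
    print(f"Rt = {Rt} ohm, I_N = {I_N:.3f} A")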
|
H: Thevenin's theorem
So I watched a couple of videos on YouTube about Thevenin's theorem and I found 2 ways to do this circuit, but I get two different answers and I'm confused now (for finding \$V_{th}\$).
First method is
\$V_{th} = \frac{V_s}{R_1+R_2}R_2\$ and I get 24 V
The second method is using KCL
so, \$I_1-I_2+I_3=0\$ and I get \$V_{th}\$ = 30V
Are both ways right and I'm just calculating wrong, or can I not use method 1 for this circuit? Can someone explain, please?
AI: I recommend you use Thévenin's and Norton's theorem in each of the elements instead. Start from your circuit, and apply Norton to the 32V supply and the 4 Ohms resistors.
simulate this circuit – Schematic created using CircuitLab
Then add the two current supplies and find the parallel equivalent of the 4 and 12 Ohms resistors (which is 3 Ohms).
simulate this circuit
Now all you have to do is apply Thévenin again and you pretty much have your answer.
simulate this circuit
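Numerically the steps look like this (a sketch: the 2 A value for the other current source is assumed purely for illustration, since the original circuit isn't fully shown here):

    def parallel(a, b):
        return a * b / (a + b)

    # Step 1: Norton equivalent of the 32 V source with its 4 ohm series resistor
    I_norton = 32.0 / 4.0          # 8 A in parallel with 4 ohm

    # Step 2: add the other current source (assumed 2 A, for illustration only)
    I_total = I_norton + 2.0       # 10 A

    # Step 3: combine the 4 ohm and 12 ohm resistors in parallel
    R_eq = parallel(4.0, 12.0)     # 3 ohm

    # Step 4: transform back to a Thevenin equivalent
    V_th = I_total * R_eq
    print(f"Vth = {V_th} V, Rth = {R_eq} ohm")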
|
H: How much time to I have to switch outputs on a CD74HC4067 multiplexer?
Looking at this datasheet, how do I determine the amount of time I have to switch between one connection and another?
It seems that as I switch from one connection to another I will go through several intermediate states as my software sequentially toggles the select pins to high and low appropriately, in principle selecting several undesired connections.
This is obviously not a problem, but how do I verify this from the datasheet? How slowly could my processor set the pins and still work?
AI: There is no speed requirement. So long as rise and fall times are not excessive, the sequence of channel switching is of no concern to the mux (if they are excessive, the chip will tend to draw extra Vdd current during the time the inputs sit at intermediate voltages).
Normally you'd want to put the 4 or 5 control inputs on a single MCU port so they would all change at once anyway.
That said, whatever you connect it to may care.
|
H: Controlling Proportional valve with NPN transistor (TIP 122) using PWM
We are working to control a proportional solenoid valve (9C, 2W) (VSO by parker, datasheet) by changing the voltage across it. For this we are using PWM from the controller, thereby changing the average current through the valve.
The controller (i.MX RT1060 from NXP) can output up to +3.3V. We are using a TIP122 NPN Darlington transistor to switch the 9V supply. The valve only turns ON when the duty cycle is between 95% and 100%; below 90% duty cycle the valve stays completely OFF.
We are using a 5 kHz PWM frequency with 16-bit PWM resolution. The experimental results are shown below.
Note the sudden increase in voltage from 95% duty cycle onward.
We only get controlled flow between 95% and 99%. If the circuit worked correctly, it should work over the whole range.
Initially we used a base resistor of 390 ohms. We thought that at low duty cycles the voltage from the controller to the base of the transistor might be too low, so we decreased the resistor from 390 to 190 and then to 47 ohms, but still saw no significant change in the solenoid voltage at low duty cycles. The flow vs. duty cycle graph does not match the datasheet.
AI: You need a diode across the valve coil so that current continues to flow when the transistor is off.
When you do get consistent current, compare with the valve datasheet values (should be included here if you want us to comment on the system behavior).
|
H: Converting a 4-20mA 24V signal to 0-5V?
How do I convert a 4-20mA 24V signal to a 0-5V signal, suitable for A-D conversion with a 5V microprocessor? Can this be done while isolating the 24V from the 5V output?
AI: The simplest way is to use a 250 ohm resistor and remove the offset digitally.
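For example, with 250 Ω the 4-20 mA span maps to 1-5 V (4 mA x 250 Ω = 1 V, 20 mA x 250 Ω = 5 V). A minimal sketch of the digital offset removal, assuming a 10-bit ADC with a 5 V reference (adjust the constants to your hardware; the function is illustrative, not from any particular library):

```c
#include <stdio.h>

#define ADC_FULL_SCALE 1023.0   /* 10-bit ADC - an assumption          */
#define ADC_VREF       5.0      /* ADC reference voltage in volts      */
#define R_SENSE        250.0    /* sense resistor in ohms              */

/* Convert a raw ADC reading across the 250 ohm resistor into loop
   current and percent of the 4-20 mA span (the offset removal). */
int main(void)
{
    unsigned int raw = 512;                              /* example reading    */
    double volts   = raw * ADC_VREF / ADC_FULL_SCALE;    /* 1 V .. 5 V         */
    double loop_mA = volts / R_SENSE * 1000.0;           /* 4 mA .. 20 mA      */
    double percent = (loop_mA - 4.0) / 16.0 * 100.0;     /* remove 4 mA offset */

    printf("%.2f V = %.2f mA = %.1f %% of span\n", volts, loop_mA, percent);
    return 0;
}
```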
To get an isolated input might be done by creating an isolated supply and digitizing the signal, then transferring it over an isolation barrier digitally. There are other approaches, of course.
|
H: Is there a convention as to placing a resistor before or after an LED?
We know that it makes no difference whether a resistor is placed "before" or "after" an LED in a circuit.
Changing the resistor to be in front or behind an LED doesn't affect brightness?
Does the resistor have to be before or after the component
Should Resistor be before or after an LED series
Is there a commonly used convention that encourages one or the other?
AI: Consider the possibility of a momentary short from one of the LED leads to ground (ground could be connected to the chassis)- perhaps it causes no problem if there is a remote resistor connected to Vdd but, if the LED is connected directly to Vdd, the short instantly destroys the LED. Or it shorts out the power supply.
simulate this circuit – Schematic created using CircuitLab
There may be other similar situations you can come up with. But really electrically it makes little or no difference one way or the other until you invoke such possibilities.
|
H: How to simulate two 12W lightbulb in series to 220VAC?
How can I simulate two light bulbs of 12 watts each in SPICE?
In the Proteus simulation, there is only a lamp with adjustable nominal voltage and resistance. The default is 12V nominal and 24 ohms resistance. If I want to simulate two light bulbs (12W) in series across 220VAC, am I correct to leave the nominal at 12V and adjust the resistance to:
$$R= \frac{(220V/\sqrt2)^2}{12W}=2k\Omega$$
And since there are two bulbs, the total resistance is 4kohms ?
OR
Since, two light bulbs of 12 Watts each will draw 24 Watts in total, the resistance is:
$$R=\frac{(220V/\sqrt2)^2}{24W}=1k\Omega$$
Which of the two resistances is correct for simulating two 12-watt light bulbs in series?
AI: Since, two light bulbs of 12 Watts each will draw 24 Watts in total ... Which of the two resistance will be correct in simulating two 12Watts lightbulb in series?
Two 12 W lightbulbs will only draw 24 W in total if you connect them in parallel.
If you connect them in series, they will draw less power than a single bulb.
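As a rough illustration (assuming each bulb is rated 12 W at 220 V RMS - note that lamp ratings normally refer to RMS voltage, so the division by \$\sqrt2\$ in the question is unnecessary - and ignoring the strong temperature dependence of a real filament):
$$R_{bulb}=\frac{(220\:V)^2}{12\:W}\approx 4\:k\Omega,\qquad P_{total}=\frac{(220\:V)^2}{2\times 4\:k\Omega}\approx 6\:W$$
so the series pair dissipates only about 6 W in total (roughly 3 W per bulb), not 24 W.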
|
H: Tantalum Capacitor Explodes When Engine Starts
I designed a circuit using a SMPS voltage regulator based on the tps65261rhbr triple synchronous buck converter. The circuit is rated for up to 18V. It is connected to a 12V lawnmower battery, which also starts and powers a gasoline engine that returns charge to the battery with an alternator.
The circuit has worked perfectly over many hours of testing and switching power on and off before I connected and started the engine. Immediately upon starting the engine it failed: one of the 25V 47uF tantalum capacitors (TPSD476K025R0150) feeding the SMPS exploded.
My oscilloscope has a max of 10V, so I can't see what the input voltage waveform looks like when the engine starts and runs. I tried anyway, and I see that when the engine is started the voltage briefly dips under 10V, but outside of that brief blip it is clipped to 10V so I can't tell what's happening. I assume the voltage must have exceeded 25V for the capacitor to explode.
I'm considering switching to higher voltage (50V?) aluminum polymer bulk capacitors and a high input voltage rated 12V LDO before the SMPS to protect against input voltage spikes.
Does this seem like a good approach? Should I get a better scope or build a voltage divider to see what's really happening? Does anyone have any experience with powering circuits from an engine alternator and battery in parallel that can weigh in on this power supply design?
AI: Tantalums are very sensitive to overvoltage so you have to derate them if you want to use them. They are already typically derated by 30-50% in normal use, but you are connecting them directly to a gasoline engine. Gasoline engines are a very harsh source of power, so you should be installing transient suppression anyway, such as TVS diodes or MOVs to suppress voltage spikes. Regardless, tantalums probably shouldn't have been chosen as the input decoupling capacitors in the first place, knowing they would be directly exposed to something as harsh as an engine and alternator.
No LDO, or any other type of linear regulator for that matter. A linear regulator defeats the efficiency benefit of an SMPS, and linear regulators are too delicate for the protection task anyway. Furthermore, dropping 18V to 12V with a linear regulator produces too much heat at any moderate current.
Get a big TVS diode with a working voltage (not a breakdown voltage) as close to, but greater than, the battery voltage at full charge. It would help if you could scope the startup transients, and the transients in general, to see what they look like. There's a chance a TVS diode won't be able to handle the power, in which case you need to go with a metal oxide varistor (MOV). But if a TVS diode can do it, a TVS diode is better. MOVs do not clamp as well as TVS diodes and have an inherent wear-out mechanism each time they conduct, so you don't want to accidentally undersize one if you expect it to be constantly taking hits, or it will wear out early; but they can be made a lot bigger (like bricks!) so they can be found at much higher power levels.
And go ahead and toss in that 50V aluminum polymer. You probably don't need quite so high as 50V though. Aluminum polymers don't need very much derating.
Might as well toss in a fuse while you're at it.
|
H: Could Cat5e be used for RS485
I'm working on a project that will be using RS485 to communicate between a bunch of modules and a master controller. Low baud rate, 9600, but a decent distance, like a couple of hundred feet. I'm wondering if I can get away with reliably using Cat5e for this, and whether or not I need it to be shielded and connected to ground?
If that is the case, would I be able to use the remaining 3 pairs as the ground instead of getting cable with actual shielding?
AI: Short answer: yes.
That should work just fine.
It is quite common to use CAT5 for RS485.
More detailed:
RS485 requires a common ground reference for all devices. Yes, you can use spare pairs in the cable as ground reference. Proper shielded cables provide better noise immunity. But it is not by any means necessary for RS485 communication to work. Depends on your environment. Lots of high power electrical motors and welders around? Then you may want to use shielded cables.
Depending on your exact cable length requirements, you may want to consider thicker gauge CAT5. Up to ~100m/300ft any CAT5 should do the job. Beyond that, the resistance of the copper conductor can start playing a role in attenuating your signal. Thinner conductor=higher attenuation. RS485 cables designed for 1000m/3000ft have really thick copper wires.
|
H: Short Circuit Output Current - IC Definitions
What does "Short Circuit Output Current" mean? Does it mean that the voltage output will be to 0V at this specified current? So what would be the maximum current sourced without having a large impact on the voltage output? There is no graphic, it would be nice to have the voltage output in function of the load!
Here is the datasheet: https://datasheet.octopart.com/FAN4174IS5X-Fairchild-Semiconductor-datasheet-8824583.pdf
Thank you very much and have a nice day!
AI: These are Rail-to-Rail I/O, CMOS Amplifiers.
Datasheet indicates Vs=5V. In this context, short-circuit implies operating the CMOS FET outputs with RdsOn at any output voltage rail to rail.
Thus +/-33 mA means RdsOn = 5V/33mA = 150 Ohms typical equivalent resistance. This affects the output rise time for step pulses into a known C load. However, when not shorted, Zout is reduced by the feedback gain in linear mode.
added:
Under different conditions the table also says:
RL=10k to Vs/2 (for Vs=5V): Vo = 0.01 to 4.99V, I = 250uA, thus RdsOn ≈ 40 Ω
RL=1k to Vs/2 (for Vs=5V): Vo = 0.10 to 4.90V, I = 2.5mA, thus RdsOn ≈ 40 Ω
proof by Simulation
Using KVL as follows;
simulate this circuit – Schematic created using CircuitLab
Only in 3.6V logic do they make a lower RdsOn of 25 Ohms ±50% in order to shoot-thru and yet have high speed.
|
H: Floating ground when mosfet is off - is this a good design choice?
I am designing a simple circuit for a weight controller and I am not sure where I should connect the drain of the N-channel MOSFET. My goal is to design a circuit that, when powered off (when both button BN1 and pin B6 are low), is inactive and draws nearly zero current (a few microamperes is fine).
One of my attempts looks like this:
+12V and FGND is where a 12V acid battery will be connected, and VCC (3.3V) and +5V are outputs of voltage regulators. R25 and C23 form a low-pass filter to prevent double clicks when pressing the buttons. R22 is there just to provide a way to discharge capacitors C28/C30/C27/C25, but I am not sure if it is helping much.
I am not confident about this design choice. In particular, what happens when both MOSFETs are turned off and ground slowly rises towards 12V - what exactly will happen to the outputs of the voltage regulators? Will the PD across VCC/GND go negative? Can this negative PD damage the unipolar capacitors C27 and C28? How can I improve this design?
My second design, which looks a little better to me:
12V battery is connected to +12V/GND, RelativeGND is connected to the voltage regulators.
I don't like how the voltage regulators work with two different grounds. For input, the relative ground is used, and for output the real GND should be used, but I feel like this is not how the voltage regulators work, and that this may give me slightly increased PD across VCC/GND and 5V/GND pairs than the datasheet describes. Any feedback?
AI: Disconnecting (part of the) ground to switch on/off your circuit is really asking for trouble.
It is a really really bad idea.
Also, it will very likely not work and result in unexpected behavior.
For example: there are ESD diodes between all pins of all ICs. In the AMS1117 circuit, there will be a diode between pin 3 (out) and 2 (adj) of the IC. This diode will conduct when you want to switch the circuit off. So the circuit will not be off even when you think you're switching it off (by disconnecting the ground). There will still be a connection to ground via an ESD diode and that will be supplying power to your circuit.
This is just one of the reasons that no sane designer does this.
Instead what experienced designers do and what has been proven to work (these solutions are used in all your gadgets) is:
Use enable/disable pins on ICs that have them.
If your regulator does not have an enable pin, use a different regulator.
In some cases using a PMOS in series with the positive supply rail (not ground) can work, but you need to be careful. Take into account all ESD diodes that are in all ICs, watch this video to learn more.
put microcontrollers etc. in standby / low power mode, there is no need to disconnect the supply if you configure a uC properly.
|
H: Equivalent baseband communication channel
I was reading on my lecture slides (I do not put them here since they are not in English) this statement about equivalent baseband channel models:
In typical wireless applications, communication occurs in a passband [fc - W/2; fc + W/2] of bandwidth W around a center frequency fc. However, most of the processing, such as coding/decoding, modulation/demodulation, synchronization, etc., is actually done at the baseband. Therefore from a communication system design point of view, it is most useful to have a baseband equivalent representation of the system.
(The slides then derive the baseband equivalent representation through a sequence of equations shown as images, which are not reproduced here.)
I do not understand what this is getting at, and why. Precisely:
what is meant by "equivalent channel model"? I'd say that it means a channel with the same input, same output, and same transfer function
which of the last two pictures is the equivalent baseband channel model?
are baseband signals really used inside a channel?
AI: For many cases where you have a passband signal (meaning, a signal with some carrier frequency, \$f_c\$, and a signal with bandwidth \$W\$ around it, typical in communications systems), analysis is more convenient working with the so-called baseband equivalent version of that signal. It is called baseband equivalent, because it takes out the dependence on \$f_c\$, and can be viewed as a sort of baseband signal that is equivalent to the passband signal in a way. In your example, in the frequency domain, the passband is \$S(f)\$ and the baseband equivalent is \$S_b(f)\$.
Similarly, if you pass the passband signal through a channel, then you'd need a baseband equivalent channel model, that you can pass the baseband equivalent signal through. The whole system is known as the baseband equivalent representation of the system (that is referenced in the quotation you provided).
What is the baseband equivalent of the passband signal? Your example shows a baseband equivalent signal, \$s(t)\$ being derived. Notice that it is no longer real-valued, but complex valued in general. However, it is not a function of \$f_c\$ anymore, which helps with analysis. At the output of the system, you can always convert back to passband if desired.
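For reference, one common convention (texts differ by factors of \$\sqrt2\$) relates the two as
$$s(t)=\Re\left\{s_b(t)\,e^{\,j2\pi f_c t}\right\},\qquad S_b(f)=2\,S(f+f_c)\ \text{ for } |f|\le W/2$$
and a baseband equivalent channel \$h_b(t)\$ can be defined analogously from the passband channel response, so that (up to a convention-dependent constant) the received baseband signal is simply \$y_b(t)=(h_b * s_b)(t)\$ plus noise.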
|
H: Identify this SMT device
Can anyone tell me what these SMT devices are, especially the one with a square drawn at the centre?
AI: They are zero ohm links. Maybe mainly used as jumpers on this single sided board to allow for a track to run under them, but they have other uses, see What is the usage of Zero Ohm & MiliOhm Resistor?
|
H: If two identical transformers are connected in parallel, referring to LV(secondary) side, any change?
Assume transformers are 10/0.4 kV 500kVA transformers. Primary is high voltage side and secondary is the low voltage side.
Normally we do Z_secondary(LV) = Z_primary(HV) * (Vrated2(LV)/Vrated1(HV))^2 to refer impedances from primary to secondary in case of a single transformer. My question is, if instead, two identical transformers were connected in parallel, will the referring be the same? Or will there be any changes in the above formula?
Thanks.
AI: Theoretically, with ideal transformers with infinite magnetization (primary) inductance, the formula for impedance transformation is: -
$$Z_{primary} = \left[\dfrac{N_p}{N_s}\right]^2\cdot Z_{secondary}$$
But because there is magnetization inductance (\$X_L\$) the above formula becomes: -
$$Z_{primary} = X_L||\left[\dfrac{N_p}{N_s}\right]^2\cdot Z_{secondary}$$
And with two identical parallel transformers, the formula becomes: -
$$Z_{primary} = \dfrac{X_L}{2}||\left[\dfrac{N_p}{N_s}\right]^2\cdot Z_{secondary}$$
Just switch this formula around to solve for the secondary impedance.
And, just in case there is any doubt about what \$X_L\$ is, I include the equivalent circuit for a transformer (to avoid ambiguity): -
If you need to consider leakage inductance then \$L_P\$ and \$L_S\$ are shown above in their circuit positions.
|
H: Clearance and creepage for AC Optocoupler for 230V line detection
I was wondering how I need to handle clearance and creepage in this scenario.
It's AC mains line detection circuit.
I need to achieve around 2.5mm creepage.
AI: I also posted the question on reddit and got the answer there:
So a few things before the actual answer:
What's the mains voltage? 2.5mm sounds like 240V if I'm not mistaken.
(correct)
In situations like this, you can increase creepage distance by adding a board cutout (slot) between the pins. You can also reduce creepage requirements by adding a conformal coating.
It would be weird if the datasheet of a component intended for AC mains use violated clearance/creepage requirements, and made no mention of the above, right? That usually means something else is going on.
In this case (real answer now) you are looking in the wrong place. Because you have two LEDs that are anti-parallel, one of them will always be on. That means the voltage between the pins is not 120/240/whatever but just the forward voltage drop of the LED. You don't have mains voltage between the pins, you have 1.4V (per the datasheet). So that spacing is perfectly fine.
What you do need to consider is that nearly all of that voltage is actually dropped across R2. That is where you need to worry about creepage, and in this case the two SMT pads of that resistor do look way too close together. I also do not see any current limiting, meaning that if that resistor fails short or a short otherwise develops, you're in for a bad time.
In this case just choose a larger resistor package to maintain the requisite spacing requirements. You can also add a slot between the resistor pads to improve the situation slightly, but you'll need to upsize either way as the resistor body itself still provides a bridge.
220k is also quite a large value, make sure it's enough to actually turn the output on sufficiently. That's 1mA at 240 which is probably enough but double check. And of course power rating and all that which you're already aware of.
Credit: u/happyhappypeelpeel
|
H: How to bond flex to rigid pcb
I have a rigid PCB and a flex (4 layer) PCB. I want to bond them instead of using a connector (although if what I am expecting is impossible I will use a connector).
What methods are available for bonding a flex to the surface of a rigid PCB? Can we back them with solder like an SMT component and then bake them?
AI: This can be accomplished using soldering.
You basically create a set of pads on the rigid PCB which line up with a set of pads at the end of you flex PCB.
You can either join such that the pads extend right to the edge of the flex PCB and you solder with both sets of pads facing upwards. In this case the pins will be wired 1-to-1.
In this approach the pads are both tinned, flux is applied, then a soldering iron used to drag the solder up from the rigid PCB onto the pads of the flex PCB to make a join. This is not the strongest way to join the boards, so some additional flexible glue between the flex PCB and rigid PCB might be called for.
A second soldering option is to have pads on the bottom of the flex, not quite at the end of the board, and on the top of the rigid PCB. Then the pads are joined with solder sandwiched between them, like a QFN-style connection. In this case the pads on one board are mirror-imaged, as the boards will be bonded face-to-face (pin n on one lining up with pin 1 on the other).
For the second option, to solder, tin both sets of pads and once cooled add some extra flux. Alternatively apply solder paste. Then line the pads up, and heat the joints applying gentle pressure.
|
H: How can I determine what type of logic structure this circuit represents?
I have the following circuit:
How can I determine which one of the following functions it represents:
F = (AB+CD)'
F = ((A+B)(C+D))'
F = AB+CD
AI: You can determine the functionality by analysing which transistors are controlled by which inputs.
Make a list of which are turned on or off when a given input is high and low.
Then make a list of which transistors need to be turned on in order for the output to be high or low.
Compare the two and you'll be able to build a Karnaugh map for the circuit.
From the Karnaugh map, you can get the SOP terms, and simplify them using boolean algebra. This will give you the answer which is indeed one of those three options.
|
H: Calculation of MOSFET gate drive current
I know that there have been many posts on this topic, but I'm confused about calculating the gate current of a MOSFET.
I have calculated the gate current as Igs = Qg/t. For example, I want to drive an IRF540N with PWM at 100 kHz. It has Qg = 94 nC, and the period at 100 kHz is 10000 ns.
If I use Igs = Qg/t then Igs = 94/10000 = 0.0094 A. I think it's quite a small current.
Are my calculations correct or not?
AI: The method you are pursuing calculates the average current needed to switch a MOSFET. This is only one part of the calculation, as the other key figure is the peak current needed to ensure you are switching fast enough.
what the best way to calculate Rg gate driver for Mosfet
As far as your calculations are concerned, this is only half of the calculation.
Every switching period has two switching edges
Turn on
Turn off
Each edge requires the transfer of charge (in your case 94nC). If you halve the period you will be closer to the average current flowing.
The other approach is to calculate the rms current
\$I_{rms} = \frac{1}{R_g} \sqrt{ \frac{1}{T}\int_{0}^{T}\left(V\cdot e^{-t/(R_g C_g)}\right)^2\,dt} \$ where \$T\$ is the switching period.
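Putting numbers to this (a sketch only - the 12 V drive amplitude and the 10 Ω gate resistance are assumptions, since the question gives neither), following the two-edges-per-period reasoning above:
$$I_{avg}\approx 2\,Q_g\,f = 2\times 94\:nC\times 100\:kHz\approx 18.8\:mA,\qquad I_{pk}\approx\frac{V_{drive}}{R_g}=\frac{12\:V}{10\:\Omega}=1.2\:A$$
The driver and its decoupling must be sized for the peak, not just the average.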
|
H: Characterising an Audio Isolation Transformer
Rummaging around the junkbox in an effort to find a 600:600 ohm audio isolation transformer, I found a possible candidate rescued from a store closure some time back. It's basically a cheap, low power unit that looks something like this:
Unfortunately, it came without a datasheet, but I had marked it 1:1 (probably after some quick measurements). Now the 600 ohm termination resistances are significant and I'm wondering how I might determine if this is actually a 600:600 audio isolation transformer. The inductance of both primary and secondary are both around 395mH, whilst the winding resistances are 40 ohms and 50 ohms (oddly, not quite the same).
What's the best way to determine if this audio transformer is designed with 600 ohm terminations in mind?
AI: What's the best way to determine if this audio transformer is designed with 600 ohm terminations in mind?
Probably the only clear-cut way is to drive one winding from a signal with 600 ohm source resistance and load the other winding with 600 ohm and plot the frequency response.
If you get a decent flat frequency response then that tells you that it is good for 600 ohms in your target application. But it might also be good for lower impedances like 100 ohms. The thing is about transformers is that they aren't suited necessarily to one impedance regime - they will work across a wide range of circuit impedances. However, the lower you go in impedance the more it is likely that higher frequencies will become attenuated due to leakage inductances in each winding.
So, even if a transformer is good for 600 ohms across a flat frequency range of (say) 100 Hz to 10 kHz, it might only deliver 100 Hz to 2 kHz in a regime where the impedance of the load is 100 ohm.
Additionally, with a 100 ohm regime and using a 600 ohm transformer, you might get slightly improved lower frequency response.
the winding resistances are 40 ohms and 50 ohms (oddly, not quite the same).
This might be because one winding is wound beneath the other i.e. the winding wound closest to the core will use less copper wire for the same number of turns.
|
H: Continuous Reset generated by LDO to Microcontroller
I have the below schematic
Datasheet of the LDO - L4995 -SSO24 package
LDO Specification :
Input Voltage : 12-16V
Output Voltage : 5V
Load Current : 100mA
Issue :
My LDO is generating RESET pulses continuously.
Objective:
I am trying to program my microcontroller initially. But since, the RESET is continuously generated by this LDO, my debugger is not able to program the microcontroller.
Steps taken :
I removed the R2705 resistor. My microcontroller cannot provide the watchdog pulse since it has not been programmed yet.
I tried removing the C2704 capacitor, C2703 capacitor each one at a time and tested. But still RESET is generated.
Below is the waveform obtained after removing R2704 and C2704.
Measured at pin 19, 20 and 21.
Questions:
What is the reason? The time period and the delay are set by the capacitors C2704 and C2703; even without them, how is the LDO generating a RESET? Can someone please explain?
And, what should I do to make the RESET line high so that I can program my microcontroller through the debugger?
AI: What is the reason? The time period and the delay are set by the capacitors C2704 and C2703; even without them, how is the LDO generating a RESET? Can someone please explain?
The output voltage has dropped more than 6% below the reference (Vo_ref), 5 Volts in this case.
The low time period has a guaranteed minimum, either you are overloading the output or the input/output is unstable.
Or you're having a fight with the watchdog.
And, what should I do to make the RESET line high so that I can program my microcontroller through the debugger?
Fix the regulator operation. Or remove R2704.
For production, add the option to feed the wdt with your programmer. Inhibit reset to your MCU, or make a bootloader image to flash within the watchdog time.
|
H: How to connect photo diode to trans impedance amplifier?
transimpedance example here
I need to design a trans-impedance amplifier to amplify the signal from a photodiode to make a PPG sensor
I need to acquire a signal from the wrist.
photodiode I am going to use is SFH 7060
I don't understand how to connect anode and cathode of SFH 7060 to the amplifier input pin.
Is it as follows:
connect pin 6 of the SFH 7060 (photodiode anode) to the inverting input of the OPA170,
and connect pin 7 to GND?
do I need an instrumentation amplifier for this?
SFH 7060 data sheet
AI: Photo-current flows in the opposite direction to standard diode forward current so, if you want the op-amp output to rise when photo-current flows, you connect the anode to ground. If you want the op-amp output to fall when photo-current flows, you connect the cathode to ground.
Vout will rise in the schematic below when light hits the photo-diode: -
Equivalent circuit of a photodiode from here: -
|
H: I2S Data Structure (Inter-IC Sound)
There are tons of electrical descriptions of I2S; however, I cannot find information about how the data is formatted/structured. What I mean by format is, what does a value on the serial data line mean? Do these values have information on volume, pitch, or something else?
As an application example, I have a .wav audio file. I unpacked the file and obtained a stream of data samples. How should this sample data be transmitted on I2S so that a receiver can play the audio? 0x 0011 2233 4455 6677 8899 AABB CCDD EEFF 0011 2233 4455 ... and so on. Should these values be formatted in some way that is meaningful to the receiver? Maybe something like 0x0011____, 0x2233____, where blank means other miscellaneous information?
Let's assume for the case of I2S Standard, 24 bits per sample. Any information would be helpful. Thanks.
AI: I2S data for each sample is the amplitude of that sample. It is sent MSB first. Typically data frames have 16, 24, or 32 bits per sample, and both devices must be configured to a matching format. If you have 24-bit data from a WAV file but 32-bit frames, then the data must be expanded correctly with zeroes. The WAV file most likely stores the least significant byte first, so some shifting or swapping may be needed before sending it. It should be pretty straightforward but depends on which MCU or DAC you are working with.
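A minimal sketch of that byte shuffling, assuming 24-bit signed PCM in the WAV file and a 32-bit I2S frame slot (the function name is made up for illustration):

```c
#include <stdint.h>

/* Pack one 24-bit signed PCM sample, stored little-endian in a WAV file,
   into a 32-bit word for a 32-bit I2S slot: MSB first, low byte zero-padded. */
uint32_t wav24_to_i2s32(const uint8_t *p)
{
    /* WAV stores the least significant byte first; rebuild the 24-bit value */
    uint32_t sample = (uint32_t)p[0]
                    | ((uint32_t)p[1] << 8)
                    | ((uint32_t)p[2] << 16);

    /* Left-justify: bit 23 of the audio sample becomes bit 31 of the slot,
       and the unused low 8 bits are expanded with zeroes */
    return sample << 8;
}
```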
|
H: What is wrong with my buck regulator circuit using the RY8310 IC?
A component in a larger circuit I've designed is a step-down regulator circuit. It provides an output voltage of 18V from an input of up to 29V. For no other reason than the PCB manufacturer had it in stock, I chose the RY8310 buck converter.
I read the datasheet and believe I followed the 'recipe' to obtain 18V output. However, when I ran my initial test using 20V input, there was a brief moment after I closed the Enable switch, where I saw around 300mA flow and then nothing. I'm pretty sure that I've toasted the circuit, but since it's built using SMDs it's going to be pretty difficult for me to isolate which component(s) failed.
Since it's a pretty simple circuit, I'm hoping someone might be able to spot where I might have gone wrong. Here's a link to the datasheet for the RY8310, and I've included an image that shows
the typical circuit from the datasheet
the circuit I built
the calculations I used to determine the voltage control resistor
the values I used for the caps, inductor and resistor.
I appreciate any help with figuring this out.
As requested, here are the details of the caps and inductor in the circuit.
Based mainly on feedback from @Adam Lawrence, I've made changes to my schematic and posted a copy of this. I've also taken a closer look at the failing PCB and can clearly see that a side of the tantalum cap has blown off, and the Vcc track between the buck converter IN pin and the cap has been overheated. I've attached a picture of the power circuit part of the failed PCB, together with the PCB layout image.
I'm still puzzled by the need for a bypass (1uF?) ceramic cap on the Vin pin of the converter - there's already a 47uF cap between this pin and ground, so is the 1uF still required? Does it serve a different purpose to the larger cap?
I'd appreciate feedback on the changes I've made, which comprise:
change inductor to be a power inductor
change caps C5 and C7 to ceramics, by aggregating multiple caps
added pulldown resistor to EN pin and introduced voltage divider
If the basic components I've changed are deemed ok, I'll then populate the PCB and share this for further comment on component placement before I proceed with re-manufacture. I will also look at downstream protection for the board once I've got consensus that the power circuit is likely to work.
Many thanks for feedback.
The damaged board.
My revised schematic, showing new part details.
The old PCB layout - I've highlighted in bright green the track that failed and the side of the cap that failed
AI: I will offer some general troubleshooting advice:
Share your PCB layout. It's critical that the physical arrangement of the powertrain components be as tight as possible to avoid EMC issues and potential noise problems with the rest of the circuitry. This is a 1.4MHz device, so many things can go wrong.
You should have ceramic decoupling capacitors as close as possible to the Vcc and ground pins of the buck. This isn't shown on the reference design schematic but is mentioned elsewhere in the datasheet: Bypass ceramic capacitors are suggested to be put close to the Vin Pin. The exact values are often empirically determined but I would start with 1 microfarad and see how it performs.
Your output capacitor is tantalum, which the datasheet warns against: Tantalum capacitors are less desirable than ceramic for use as output capacitors. Because this device operates at 1.4 MHz, ESR and ESL are going to play huge roles. Parallelling multiple ceramic capacitors with good dielectric (X7R) is likely a better choice. Proximity to the inductor is also crucial.
You really need a pull-down on the EN pin. If you leave it floating, any stray signal could unexpectedly turn the controller on and cause you trouble. Per the datasheet the minimum turn-on threshold for this pin is 1.05V. Add a resistor to ground and have your enable switch in series with another resistor to create a divider on EN.
I will not repeat the previous advice re: the output inductor, but it is all correct - the part needs to be power-supply appropriate and able to handle the load current without saturation.
Is your downstream circuitry OK if the buck IC fails and the input rail gets connected to the output rail (i.e. if the device fails short-circuit)? I would guess "no", if so you should have an SCR crowbar and some form of fuse or PTC device to disconnect the input power in event of an overload.
These integrated-compensation devices sometimes oscillate or behave weirdly; because the compensation is internally fixed, you have limited control (only the powertrain components and resistor divider values on the feedback pin). You really need to capture some switching waveforms with an oscilloscope: a good place to look is the SW node with respect to GND. The pulses should be regular and smoothly changing in duty cycle at start-up, and under steady load should be completely periodic (no fluctuation). If they aren't, you're in for a fight - could be part values, could be layout-related.
You should consider adding the feed-forward (Cff) capacitor in your circuit - leave it no-pop to start with but add it if you find the dynamic response of the buck a little slow.
Further comments based on your edit:
Decoupling capacitors are critical not only for bulk charge but for frequency response. A ginormous electrolytic capacitor likely will not perform as well as several small paralled ceramics for instantaneous energy supply, again due to ESR and ESL. So, yes - a 1 microfarad decoupler serves a much different purpose vs. a 47 microfarad.
Your old layout needs rethinking. Both the IN and SW traces seem very small to me and the switching node (SW / L2 / C7) looks very loose (lengthy). The IN trace should be larger, and the nodes between SW / L2 and L2 / C7-C8-C9 should be as compact as possible to minimize EMI and avoid excessive ringing due to parasitic inductance.
Bonus info: TI has a good reference to buck converter layout - refer to SLYT614 for some guidance. Below are some excerpts. Your application is slightly different with a low-pin-count device but pay attention to the sections on powertrain layout and single-point grounding. Start with the most crucial elements - input cap, inductor and output caps - then add the rest of the circuitry. Keep noisy ground areas away from sensitive areas like the feedback divider.
Step #1. Place and route the input capacitor
Step #2. Place and route the inductor and SW-node snubber
Step #3. Place and route the output capacitor and VOS pin
Step #4. Place and route the small-signal components
Step #5. Make a single-point ground and connect to the rest of the system
|
H: Lithium-ion battery charger using LM317
I have come across the following circuit in a datasheet for a battery charger:
I have also seen a similar use of the charging circuit for use as a lithium-ion battery charger, e.g. here.
But is this circuit implementation actually suitable for a lithium-ion battery? (I know the 6 V circuit would require the voltage output of the LM317 (R1 and R2) to be set depending on the attached lithium-ion battery voltage.)
I have read that lithium-ion batteries need the following in order to charge safely and correctly (assuming the battery is not below its discharge condition, in which case trickle charging is required first):
Constant Current (CC)
Constant Voltage (CV)
The constant-current stage requires the battery to be charged to its max. (fully charged) voltage with a fixed current. The max. charge voltage is set by R1 and R2 in the top picture. R3 and the BJT act as a current control.
The constant voltage stage requires the battery to be maintained at its max. (fully charged) voltage, whilst the output current is steadily dropped. The battery is deemed "charged" when the output current is below a very small current limit, which is specific to the lithium battery. This stage initiates when the voltage across R3 gets close to around 0.6 V: the BJT starts to turn on, which shorts the adjust pin and drops the LM317 output voltage until it is stable at around 1.25 V, which then reduces the output current to the battery.
Questions:
Does R3 affect the LM317 output voltage as it's in series with R2?
How do the BJT and resistor precisely work in this setup? Won't the battery initially try and draw as much current as it can, meaning 0.6 V would develop across R3 before the battery is sufficiently charged to transition into the CV stage?
Wouldn't the 0.6 V developed across R3 mean that the potential "seen" by the battery is approx 6.3 V? (6.9 V LM317 output, subtract 0.6 V). Wouldn't this be insufficient to charge the battery fully?
When the BJT turns on, the voltage output of the LM317 will drop to 1.25 V as the adjust pin is shorted (which bypasses the resistors). Surely this is not suitable for the CV stage of charging, as the potential drops to way below the battery charging voltage. How is current output to the battery affected during this voltage drop?
Is the BJT even required for the CV stage? If just the resistor was used to limit the current, wouldn't the current to the battery decrease as it reaches full charge anyway?
AI: Does R3 affect the LM317 output voltage as it's in series with R2?
If the battery is not connected (or, equivalently, if there is only a negligible current through the battery), the circuit operates as a constant voltage source with an output voltage of \$V_{0}=V_{ref}\cdot \left( 1 + \frac{R_2+R_3}{R_1}\right) + I_{Adj}\cdot (R_2+R_3)\$. Since \$R_3\$ is typically much smaller than \$R_2\$, \$R_3\$ has only a negligible influence on \$V_0\$. On the other hand, if a battery is connected, as long as the battery voltage is lower than \$V_0\$, the circuit works as a constant current source and the current through the battery will be approximately \$\frac{0.6V}{R_3}\$. In this case, the voltage across the battery will automatically be adjusted such that the current remains constant. Once the battery voltage reaches \$V_0\$, the current can no longer be kept constant. In this case, the voltage across the battery will be constant, namely \$V_0\$, independent on the current that flows through the battery.
How does the BJT and resistor precisely work in this setup? Won't the battery initially try and draw as much current as it can, meaning 0.6v would develop across R3 before the battery is sufficiently charged to transition into the CV stage?
Yes, the battery will try to draw as much current as it can, but it will not succeed, because the higher the current through the battery, the higher the voltage across \$R_3\$. When this voltage reaches 0.6V, the transistor starts to conduct and the output voltage of the regulator will decrease. Therefore, the current through the battery is automatically limited to approximately \$\frac{0.6V}{R_3}\$.
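As a numeric illustration (the 500 mA charge current is just an example, not a datasheet value): for a constant-current phase of about 500 mA you would choose
$$R_3\approx\frac{0.6\:V}{0.5\:A}=1.2\:\Omega$$
which dissipates roughly \$0.6\:V\times 0.5\:A = 0.3\:W\$ while the current limit is active.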
Wouldn't the 0.6V developed across R3 mean that the potential "seen" by the battery is approx 6.3V? (6.9V LM317 output subtract 0.6V). Wouldn't this be insufficient to charge the battery fully?
Note that the battery is connected between the output of the voltage regulator and the base of the transistor, thus the voltage across \$R_3\$ has no influence on the battery voltage. The voltage regulator makes sure that the voltage across the battery will not exceed \$V_0\$.
When the BJT turns on, the voltage output of the LM317 will drop to 1.25V as the adjust pin is shorted (which bypasses the resistors). Surely this is not suitable for the CV stage of charging as the potential drops to way below the battery charging voltage? How is current output to the battery affected during this voltage drop?
The transistor will not conduct completely. The control loop ensures that the current through the transistor is just large enough to maintain a constant current. During charging with a constant current, the voltage of the battery will slowly increase. Once it reaches \$V_{0}\$ the current can no longer be kept constant and the voltage will remain at \$V_{0}\$.
However, when charging lithium batteries, switching from CC to CV must take place at a precisely defined voltage. Therefore, \$R_2\$ should be made adjustable in order to set \$V_0\$ to the required level. This circuit is interesting to analyze and it might be a cheap solution, but there are integrated circuits that do a better job.
The simulation (using a rather crude battery model) shows the relationship between battery voltage (green) and current through the battery (blue). The constant current source does not show ideal behaviour.
|
H: Calculation of switching transformer at 100kHz
I am not able to design a transformer for a switching frequency of 100 kHz. It's easy to design a usual transformer for 50 Hz, but I'm totally lost when it comes to switching transformers. What material should I use? I'm not sure whether a ferrite E55 core is enough.
My desired parameters are: approx. 315 V primary (after rectifying 230 V), 12 V secondary and 1000 VA of power,
and full-bridge switching mode.
AI: I'm totally lost when it comes to switching transformers. What material should I use? I'm not sure whether a ferrite E55 core is enough
In the example below I used an E55/28/25 core set and 3C90 core material just to take a stab at the viability of the design. I'm not really focusing on the numbers; rather I'm taking you through a form of procedure that gets you somewhere close to being able to wind the primary winding. My figures below are just as they come out of my calculator and I'm not vouching for them being correct when I first post this answer. They need double checking and any errors pointed out to me will result in this answer being corrected.
Regular AC power transformers (50 or 60 Hz) have a primary inductance. After all, the primary is just a coil of wire wrapped around a magnetic core and, you don't want the current taken by that inductance (secondary unloaded) to be "tens of amps". So, you wind the primary with enough inductance so that the residual current taken (called magnetization current) is not excessive.
If it is excessive then the core will saturate and you end up in a mess.
It's the same for any transformer operating at any frequency - you have to avoid core saturation and it's the magnetization current that causes core saturation. It's got nothing to do with load current because load current magnetic fields produced by primary and secondary cancel out.
In effect, the magnetic field in the core is ONLY due to magnetization inductance and the current that is taken by that inductance due to the applied primary voltage.
This is what you design first and subsequently, the secondary winding has the right number of turns to give you the right output voltage based on the number of turns on the primary.
So say you chose this core set: -
And let's say you chose an un-gapped core. It has an \$A_L\$ value of 8000 nH/turn. This means that 1 turn of wire would have an inductance of 8 μH. If you applied two turns it doesn't change to 16 μH but it rises as the square of the turns i.e. with 2 turns you get 32 μH.
But how much inductance do you need?
This sometimes takes a little bit of trial and error. Going back to a regular 230 volts 50 Hz transformer, it might have a primary magnetization inductance of (say) 10 henry. That's at 50 Hz so, for 100 kHz we should be ball-parking around 10 henries * 50/100,000 = 5 mH.
That would need about 25 turns i.e. 25 squared x 8 μH = 5 mH. This is just a ball-park estimate to see how things shape up. The next thing to calculate is how much peak magnetization current we get when voltage is applied over 5 μs. I say 5 μs because that is what half a cycle at 100 kHz is.
We can also say that because the circuit uses a full-bridge driver, the current will start at -Ipk and rise to +Ipk over 5 μs. This means that for 2.5 μs the current changes from 0 to +Ipk. We know the applied voltage (315 volts DC) and we know the ball-park inductance (5 mH) and we know Faraday's law of induction: -
$$V = L\cdot\dfrac{di}{dt}$$
So, \$\frac{di}{dt}\$ = 315 volt / 5 mH = 63,000 amps per second. We know time (2.5 μs) hence the peak current will be 157.5 mA. It will alternate as a triangle waveform shape between -157.5 mA and +157.5 mA.
So now we know peak current (157.5 mA) and we know the number of turns (25). That gives us the MMF (magneto motive force) of 3.9375 At. We also know the mean length of the core (specified in the picture above in purple). So, MMF / core length tells us the H-field. With a core length (\$\ell_e\$) of 123 mm, H = 32 At/m.
Will this saturate the core?
This can be found by looking at the BH curve for the 3C90 material I chose: -
I reckon the peak flux density will be about 250 mT and that's OK in this application.
However, the secondary winding is only going to be about 1 turn i.e. 315/25 = 12.6 volts and, after rectification, this produces circa 12 volts. But this is just at an input voltage of 315 volts (~230 volts AC). If you need this to work at lower voltages the secondary turns need to be higher and you need to control the transformer primary with PWM to make the average voltage smaller at the higher supply voltages (requires a secondary inductance to average the voltage out).
Anyway, that's how I'd start to design the transformer then, I'd look at the range of supply voltages that the output is meant to stay regulated over. Then I'd look at what PWM chip to use to control it and what output inductor is required.
Will you get 1000 VA through an E55/28/25 core set like the one above - my gut feeling is more closer to 300 VA.
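To recap the arithmetic above as a quick sanity-check script (a sketch only - the 5 mH target and the core data are the example values used in this answer, and the final flux-density check is still read off the 3C90 B-H curve):

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double AL   = 8000e-9;  /* un-gapped A_L in henries per turn^2      */
    double L    = 5e-3;     /* target magnetisation inductance, H       */
    double Vdc  = 315.0;    /* rectified mains voltage, V               */
    double f    = 100e3;    /* full-bridge switching frequency, Hz      */
    double le   = 123e-3;   /* effective magnetic path length, m        */

    double N    = ceil(sqrt(L / AL));   /* primary turns                    */
    double didt = Vdc / L;              /* A/s while the voltage is applied */
    double Ipk  = didt / (4.0 * f);     /* current ramps 0 -> Ipk in T/4    */
    double H    = N * Ipk / le;         /* peak H-field, A/m                */

    printf("N = %.0f turns, Ipk = %.1f mA, H = %.1f A/m\n", N, 1e3 * Ipk, H);
    return 0;   /* read Bpk off the B-H curve at this H to check saturation */
}
```

With the example values this prints N = 25 turns, Ipk = 157.5 mA and H = 32 A/m, matching the figures worked out above.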
|
H: Why does a channel need to have bandwidth requirement for digital data transmission?
I'm not from EE background and try to understand how digital is sent in a channel as a non-technical audience.
Let's say we use fiber optics to transmit digital data 010101....,
Since fiber optics use light to transmit information, I can picture something like this:
A torch can flash light
a flash means 1
no flash means 0
this torch is connected to one end of the fiber optics cable
as long as the torch's flash rate is fast enough like flash/no flash X 100 million times per second
theoretically we can transmit as many bits as we want if we can have a torch with unlimited flash rate
how many bits we can transmit is really determined by the torch, not the bandwidth of the channel (fiber optics)
I know my assumption is wrong, could someone give me an intuitive example to explain why it doesn't work like this with little EE terminology?
AI: so how many bits we can transmit is really determined by the torch, not the bandwidth of the channel(fiber optics)
The "channel" includes everything: sender, receiver, and medium. Realistically light communications are limited by how fast you can turn the light on and off (known as "rise time" and "fall time"), and how fast the sensor can respond to that.
Diffraction may become important as well; the front of the wavefront can interfere with itself, so what starts out as a clean sharp rise at the transmitter may smear out along the way.
|
H: Will AC switch work for DC
I have an on-off switch that I'll use as a kill switch for my car, cutting power to the fuel pump. The switch says 6A/125V 3A/250V 22 AWG.
For my purpose it needs to handle 12V 20 Amps max.
AI: You cannot use a 6A switch with a 20A load. You may also run into issues with lower voltage rating at DC, but that hardly matters as you are already so far beyond the current rating.
|
H: Real consumption of SMD5050 Led Strips
I'm building my home lighting with SMD5050 LED strips, and I'm completely lost after comparing the spec consumption with real measured data.
I have 5 strips and a 1-to-5 splitter from my controller:
2 legs for 10m each, both sides fed from the splitter.
5th one is single-feed from a same 1-to-5 splitter.
So that's a properly built scheme and I don't expect problems there.
In total: 10 m of 30 LEDs/m at 7.2 W/m and 15 m of 60 LEDs/m at 14.4 W/m. That should be 288 W, i.e. 20 A x 14.4 V!
I have a wattmeter before the 220V input of my 15A 12V LED power supply, and I measure only 90 W of consumption in the HARDEST case, by which I mean 14.4 V output of the supply and permanent white colour on the strips. That's 3 times less than expected (not even counting the imperfection of the 220 V to 12 V conversion).
Why so? Does it mean that I can connect more and more strips, and the power supply will handle it? (Specs for supply is 180W max consumption, so currently it's loaded just 50%).
How do I figure out where the real consumption went? I have no reason to distrust my wattmeter - I've used it with everything in my home from phone chargers (4 W) to an AC unit (800-1200 W), and the collected data always seemed realistic.
AI: This is the difference between the maximum current (20 mA per channel per diode) and the actual current. If you look at the resistor values on your strip you can calculate the actual current the diodes will receive. In this case it is obviously a lot less than the nominal 20 mA maximum for 5050 diodes.
And yes, if you have strips that drive each diode at 10 mA instead of 20 mA, you can have twice as many diodes per power supply.
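To estimate the real drive current: each segment of a 5050 strip is typically three LEDs in series with one ballast resistor per channel, so per channel
$$I\approx\frac{V_{supply}-3\,V_f}{R}$$
For example (illustrative values only - read \$V_f\$ and \$R\$ off your own strip), a channel with \$V_f\approx 3\:V\$ and a 470 Ω resistor on 12 V gives \$(12-9)/470\approx 6.4\:mA\$ instead of the 20 mA maximum, which is the kind of ratio that would explain 90 W measured against 288 W nominal.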
|
H: Why this Plier is advertised with conductive handle?
See the product here.
Ok, I cannot think why I would want the handle to be conductive. What is the deal?
AI: The handle is "dissipative" so it's very slightly conductive and won't allow charge from the user to damage the components (and they won't allow a live circuit to shock the user).
I don't have any of that brand, but the cheap US-made Xcelite 175D nippers I have only conduct about 0.9nS when lightly held in the hand (a bit over 1G\$\Omega\$).
|
H: Questions about coil voltage and phase shift in the WPT system
I performed simulations of wireless power transfer in matlab simulink. It was powered with DC voltage which was then converted into a rectangular voltage with a frequency of several dozen kHz. The figures show a schematic diagram and a graph of the voltage obtained at the inverter output (red) and the voltage obtained at the L1 coil (black).
Why are there voltage surges in the voltage obtained on the coil? I noticed that these jumps occur when switching pairs of transistors in the inverter, and that the difference between the two sinusoids forming the voltage on the coil is the value of the set voltage.
Why is there a phase shift between primary and secondary voltages in this system?
I apologize in advance if this question has already been asked on the website or it is common knowledge that I did not assimilate. Please help.
*edit
V3-measuring voltage at the inverter output (yellow)
V1-capacitor voltage measurement (red)
V2-measuring coil voltage (blue)
AI: Because you are measuring the voltage directly across R and L, the capacitor acts like a short circuit for the square wave drive voltage edges and, those "edges" pass through to R+L unhindered; just like in this simulation of a tuned RLC series circuit that I "threw together": -
Why is there a phase shift between primary and secondary voltages in
this system?
If the two coils are not perfectly tuned (or if they are but are close together) you will have a detuning effect that produces significant phase shifting. It's not a problem.
|
H: I would like to know if the circuit is correct?
simulate this circuit – Schematic created using CircuitLab
I have used the voltage follower concept in order to keep the output logic non-inverted. I would like to know what issues I might have if I connect the output pin to the micro.
I am aware that the output will be (3.3 V minus the drop across the NMOS). I am concerned about the output impedance of the circuit and how to select the value of the R7 resistor.
AI: There is no one "right" value; it is a collection of trade-offs.
When configured as a general purpose digital I/O pin, most microcontrollers have a pretty high input impedance. R7 wants to be no more than 10% of that value. If the datasheet input impedance is 100K, R7 should be 10K or less, etc. A minor consideration is the static current through R7 when the circuit output is high. Another consideration is that the higher the R7 value, the more susceptible the uC input is to external noise when M2 is off. Another point in a high volume production situation is that the fewer number of distinct parts, the better. If increasing R7 to 10K eliminates the only 2.2K resistor in the entire assembly, that's a good thing.
|
H: Should via use be used when it is not essential?
I am working on characterising a transistor at approximately Vds=1000V and Ids=100A pulsed at around tON=20ns.
Should the power and ground paths (Top and Bottom layers) have vias connected to the respective groundplanes? Will this reduce signal integrity?
From my knowledge, inductance increases when vias are added to the power planes which would compromise signal integrity. However, back-drilling to remove the stubs of the via (e.g. if connected from Top layer to Top groundplane) will improve SI compared to a standard via. Would this give a SI improvement compared to just having a Top layer conductor path?
More specific details can be provided if needed. However, a generalised answer would suffice.
AI: Vias have inductance and resistance; if you want to find out how much, and whether it affects your design, use a calculator tool. In short, via inductance is usually a few nH and a via adds a few mΩ of resistance.
For high frequency signals (50Mhz+) the via inductance will need to be factored into the transmission line to reduce reflections.
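For a rough feel, a commonly quoted rule-of-thumb for the inductance of a single via (an estimate only, not a field-solver result) is
$$L\approx\frac{\mu_0\,h}{2\pi}\left[\ln\!\left(\frac{4h}{d}\right)+1\right]$$
where \$h\$ is the via length and \$d\$ its barrel diameter; a 1.6 mm long, 0.3 mm diameter via works out to roughly 1.3 nH.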
|
H: Help Identifying Inductor From a Schematic
I’m repairing a laptop but I don’t know what value inductor to use/get. The schematic has values, I just don’t know how to interpret other than it’s a 2mm inductor. It’s labeled 5A_Z120_25M_0805_2P on a 1.35V line.
AI: Looks like something with 120 Ohms Impedance (z120) at 25MHz (25M) that can take 5 Amps (5A) in a 0805 Case (0805)
Close but not quite would be a Würth 742792025.
|
H: Explanation for parallel darlington behavior
I've built and simulated the circuit below in real life, LTspice and Stack-Exchange-Spice. It behaves in a similar fashion in all of these environments. When the "LAMP2" load is disconnected, the base currents become unbalanced, with most going through Q2 - this, in turn, turns Q1 all but off, since only a very small current is going through its base:
simulate this circuit – Schematic created using CircuitLab
Now I drew what I believe is the equivalent circuit of those Darlingtons:
I suspect that when the top load on the right side is disconnected, the base current of Q2 turns on the "main" NPN of Q2 and allows current to flow through the first BC diode, increasing consumption. But why does this not happen with the load connected as well? What is happening here?
AI: The double-diode model for transistor is little use here.
What happens is that when you disconnect the load at Q2, there can be no collector current available for the first transistor in the darlington pair Q2, so all the current to drive must come via base terminal.
And therefore, as there must be equal voltages at the base of Q1 and Q2, the Q2 needs more base current to match the VBE drop of Q1.
This is the reason you should not parallel the bases, but have separate base resistance for each transistor, as they may have different VBE drops due to manufacturing tolerances, or different VBE drops due to the collector current.
Same thing with LEDs, you can't parallel a green and red LED with single resistor, only the red one will light up as they have different forward voltage drops.
|
H: Identify the symbol
This probably has been asked a bunch of times but I couldn't find anything with image search.
The component is designated as R1, and the only other piece of information is "500 mA" written on top of it. It is placed right after the power input. Due to its position I thought it was either a fuse or a ferrite bead, but as far as I can tell this is not a standard symbol for either.
Any help?
EDIT
The schematic is the ReSpeaker Raspberry Pi hat from seeed:
https://files.seeedstudio.com/wiki/ReSpeaker-4-Mic-Array-for-Raspberry-Pi/src/ReSpeaker%204-Mic%20Array%20for%20Raspberry%20Pi%20%20v1.0.pdf
The current flows to a reverse polarity protection circuit and a stack of LEDs afterwards.
AI: I've not seen that symbol before. My best guess is that someone or something got messed up in the library (Altium/Eagle/whatever) and it's been drawn incorrectly.
Since it's on a voltage input and specifies a current amount, it's most likely a fuse. Because it's designated as a resistor, it may well be a resettable fuse, such as a PTC.
Here are some similar symbols for fuses. It may be that the symbol you show is a badly drawn version of one of these:
(Source)
|
H: What if everyone uses voltage stabilizers?
A hypothetical question to improve my understanding of stabilizers and power supply.
Consider a rural village where low-voltage problems happen frequently.
Now, if everyone there uses stabilizers to overcome the low-voltage issue:
Does it put more load on the transformer that supplies the electricity?
Why should the load be higher? V × I will be almost the same, since the lower voltage is compensated by a higher current drawn by the stabilizer, so the sum of voltage × current over all the houses should present the same load to the transformer.
Is there a possibility of the main transformer burning up because of the high current (from all the houses) caused by the low voltage?
Would most of the stabilizers simply cut off because of the extremely low voltage?
What happens to a home which doesn't have a stabilizer?
AI: When the demand current rises because of a drop in voltage, the stabilizer increases the demand further and thus causes an even larger drop on the shared network.
And if everyone has a stabilizer, the distribution becomes more unstable and probably unusable. Effectively, AC stabilizers present a negative incremental resistance to the supply.
It is better to regulate the source with manual or automatic tap changers, and then ensure that the rated load does not drop the voltage by more than 10% in each distribution; normally this is budgeted as 5% for generation and 5% for distribution, worst case, in well-designed networks.
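To see the "negative incremental resistance" effect in numbers, here is a small sketch of a constant-power (stabilized) load fed through a resistive feeder. The 230 V source, 0.5 Ω feeder resistance, and load powers are purely assumed example figures, not data for any real network:

    import math

    V_SRC = 230.0     # transformer/source voltage, volts (assumed)
    R_FEEDER = 0.5    # feeder series resistance, ohms (assumed)

    def load_voltage(p_load):
        """Load voltage for a constant-power load, from V^2 - V_src*V + P*R = 0."""
        disc = V_SRC ** 2 - 4 * p_load * R_FEEDER
        if disc < 0:
            return None   # the feeder cannot deliver this power at all: voltage collapse
        return (V_SRC + math.sqrt(disc)) / 2

    for p in (1_000, 5_000, 10_000, 20_000, 30_000):
        v = load_voltage(p)
        if v is None:
            print(f"{p/1000:>5.0f} kW: voltage collapse, feeder cannot supply this load")
        else:
            i = p / v
            print(f"{p/1000:>5.0f} kW: load voltage {v:6.1f} V, current {i:5.1f} A, feeder drop {V_SRC - v:5.1f} V")

As the total stabilized load grows, the current rises faster than the power, the feeder drop grows disproportionately, and beyond a certain point (here around 26 kW) there is no operating point at all, which is the instability described above.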
|
H: KiCad footprint or supplier footprint?
I download symbols and footprints from the component supplier into a project's library to save time, but I have realized that many of the standard component footprints are slightly different in KiCad (for example the SOT-23-6). So my question is: is it better to stick to the KiCad footprints, or to just use the suppliers' footprints?
AI: There is more to a good footprint than just matching the pads to the pins. Depending on the dimensions and distances, the results of soldering can vary strongly. Good footprints can reduce the risk of shorts and tombstoning, for example.
From my personal experience, I strongly advise using the IPC-7351 footprint recommendations, because the standard library that comes with KiCad did not really live up to expectations (at least back when I started), and even the footprints described in datasheets sometimes resulted in less than optimal (reflow) soldering for me.
There was a rather prominent repository of a KiCad library derived from the IPC-7351 recommendations, but I cannot find it anymore. However, it seems there is still a version around: https://github.com/alexisvl/kicad-pcblib . I typically use the "Least" variant, but only because I do a lot of tiny, crowded PCBs with reflow soldering. The "Most" variant should be adequate for everyone with decent knowledge about hand soldering.
|