H: Implementing timeouts/timers in VHDL
What is the best way to implement a timeout, written in VHDL?
The purpose of this is to reset a state machine to an IDLE state if a certain amount of time has passed (a few seconds at most), to avoid a deadlock.
The only thing that comes to my mind is a nasty clock counter with a big integer range, but my concern is LUT usage and optimization here. And I'm assuming my FPGA might have trouble fitting the logic of a 16-bit integer counter at full speed.
Thanks for your help
AI: Here's a flexible pattern I've used a lot for this and similar purposes.
I prefer to use the actual clock period and delay values to generate the count values, rather than calculating magic numbers.
It will generate a counter - but a 16 bit counter is invisibly small in any FPGA you're likely to find today. Beyond about 24 bits it may start to impact speed, then you can break it into two smaller counters, using the first as a prescaler, generating a clock enable for the second.
And the pattern shown re-uses the same counter, no matter how many different delay values you need - unless you need more than one delay simultaneously.
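Here is a minimal sketch of that idea (the entity, generic and signal names are made up for illustration, and the 100 MHz default clock is an assumption): the count limit is computed from the clock period and the delay instead of being a magic number. Some synthesis tools are fussy about time-valued generics; if yours is, the same arithmetic can be done with integer generics (a clock frequency and a tick count).

-- Minimal sketch of a timeout counter whose limit is derived from the clock
-- period and the desired delay (illustrative, not the original answer's code).
library ieee;
use ieee.std_logic_1164.all;

entity timeout_timer is
  generic (
    CLK_PERIOD : time := 10 ns;    -- actual clock period (100 MHz assumed)
    TIMEOUT    : time := 2 sec     -- desired timeout
  );
  port (
    clk     : in  std_logic;
    restart : in  std_logic;       -- pulse high to (re)start the timeout
    expired : out std_logic        -- high once the timeout has elapsed
  );
end entity;

architecture rtl of timeout_timer is
  constant MAX_COUNT : natural := TIMEOUT / CLK_PERIOD;
  signal   count     : natural range 0 to MAX_COUNT := 0;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      expired <= '0';
      if restart = '1' then
        count <= 0;
      elsif count < MAX_COUNT then
        count <= count + 1;
      else
        expired <= '1';            -- the state machine can use this to force IDLE
      end if;
    end if;
  end process;
end architecture;

The state machine then only has to pulse restart when it leaves IDLE and watch expired.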
|
H: Are RC circuits suitable for use as power sequencers?
Are RC circuits suitable for use as power sequencers with EN pins on LDOs?
I know that there may be a problem with the reverse sequence on power-off, but some devices allow all supplies to go down together or in reverse order on power-down, so simply cutting the power would solve the power-off problems related to RC circuits.
AI: Yes, you can use RC networks on the enable pins of a lot of regulators. You need to read the specification of each to understand the threshold/trigger points because these may not be similar from one manufacturer to another.
Cutting the power can be a problem but if a reverse connected diode is placed across the resistor feeding the grounded capacitor, then discharge is fairly quick on all affected enable lines.
|
H: Measure both positive and negative voltages using ADC
I am looking into measuring both positive and negative voltages using an ADC. My input voltage is in the range from -55 to +55V, which is a total span of 110V. The ADC I am using is the MCP3424, since it is relatively easy to pair with a Raspberry Pi. The MCP3424 is a differential ADC with positive and negative inputs for each channel. I am looking to connect the negative (-) input to GND, so I feed the voltage into the positive input. This gives me a range from 0 to 2.048V (the ADC readings are 18-bit).
I am looking to convert the -55V to +55V range into the 0 - 2.048V range. With some googling and my limited electronics knowledge I came up with the following circuit:
simulate this circuit – Schematic created using CircuitLab
Questions/problems are:
would this work fine?
to which voltages should I connect the op-amp supply rails? OA1 to +5 V and GND, and OA2 to +55V and -55V? In that case, I would need a high-voltage op-amp, for example the LTC6090?
do I need protection diodes at input or at output?
Are there any other issues I am not seeing?
AI: @Andyaka's answer is basically the same as what I was going to put, but he used the inverting amp topology. So +1 to his answer.
Anyway, given I've already done the calculations, I'll post this too. The following non-inverting topology circuit should also work:
simulate this circuit – Schematic created using CircuitLab
The values of R3 and R4 can be calculated based on your supply voltage (simple potential divider). The values of R1 and R2 I have calculated based on E12 resistors. It may not be as accurate as you want - yielding a 0.175 to 1.94V output range for +/-55V input range.
If you use higher accuracy resistors, you can get closer. For example, the corresponding E48 series values (1%) will be 133k and 2.49k for R2 and R1 respectively. For that you need to generate a 1.04V reference using a potential divider of R3 and R4. You then get quite close to your desired range, getting an output range of 0.01V to 2.031V for a +/-55V input.
|
H: IAR ARM Place const symbols in flash
I have the following flash section defined in my linker file:
keep { section .SomeConst };
place at address mem: 0x0003F800 { readonly section .SomeConst };
In my code I have the following symbols defined:
#pragma location = ".SomeConst"
__root const uint8_t _value1[16] = {...};
#pragma location = ".SomeConst"
__root const uint8_t _value2[16] = {...};
#pragma location = ".SomeConst"
__root const uint8_t _value3[16] = {...};
Is it safe to assume _value1 will always be at 0x3F800?
Is it safe to assume _value2 will always be at 0x3F810?
And so on? I assume it's not, but wanted to check with the experts.
This is using EWARM
Thanks
Bret
AI: Your code only tells the compiler to place the variables _value1, _value2 and _value3 in the section starting at 0x0003F800. It is not safe to assume they will always be placed in that order, especially if alignment comes into play.
If you need to force the compiler to store a variable at a given address, you should say so explicitly:
#pragma location=0x3F800
__root const uint8_t _value1[16] = {...};
#pragma location=0x3F810
__root const uint8_t _value2[16] = {...};
#pragma location=0x3F820
__root const uint8_t _value3[16] = {...};
You could also put your constants in a structure, place that structure at the desired address with #pragma location, and control padding with #pragma pack. This would guarantee the order of the variables inside the structure as well as their alignment, so you could effectively rely on the offsets of individual fields being constant.
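To illustrate that last point, here is a sketch (the type and field names are invented, not taken from your project) of one packed structure placed at the start of the section, so that every field sits at a fixed, compile-time-known offset:

#include <stdint.h>

#pragma pack(push, 1)               /* no padding, so offsets are exactly 0x00/0x10/0x20 */
typedef struct
{
    uint8_t value1[16];             /* 0x3F800 */
    uint8_t value2[16];             /* 0x3F810 */
    uint8_t value3[16];             /* 0x3F820 */
} some_const_t;
#pragma pack(pop)

#pragma location=0x0003F800
__root const some_const_t some_const =
{
    { 0 },                          /* value1: fill with real data */
    { 0 },                          /* value2 */
    { 0 },                          /* value3 */
};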
|
H: Why does a current mirror have low input resistance?
I know that the output resistance needs to be high otherwise the output current will vary depending on the load. However, I don't understand why the input resistance needs to be low?
AI: The input of an ideal current mirror should look like a short to the input circuit, since it is connected in series with it. Otherwise it will load that circuit and affect the current.
The output of the ideal current mirror should be an ideal current source, which, as you correctly said, should have infinite resistance (this can be seen in the case of mirroring a 0 A current, where the output effectively becomes a disconnect).
|
H: Is there a relay to digitally check if mains is available
I am working on a system that will remotely turn an appliance on/off (a Raspberry Pi running a web server, sending input to a relay). One problem that I still need to sort out is how to check whether mains electricity is available. Is there some kind of relay (or other component) that I can use to tell my Raspberry Pi that electricity is available for the appliance to be turned on/off?
AI: There are many ways to do this, some more dangerous than others. How about just taking a 3 volt DC wall supply and connecting its output to your Raspberry Pi? When there's power available, the output will be at 3 volts, and when not, it will be at zero.
|
H: Heating a wire with DC current; why is it hottest in the middle?
I am putting DC current through a wire to heat it. I would think the wire would heat up evenly, but I have found that it is hotter the closer I get to the middle and, correspondingly, cooler the nearer I get to the clamps. Can anyone explain this?
AI: There are two effects going on: the heat-sinking effect of the connections and the temperature coefficient of the wire.
Initially the wire is all at the same temperature.
You turn the power on and it starts to heat up.
The heating is determined by the electrical power dissipation in the wire: for any given section of the wire, Power = Current * Voltage. All parts of the wire carry the same current. For a given length, Voltage = Current * Resistance, giving Power = Current^2 * Resistance.
Initially all the wire has the same resistance and so the heating is even along the length of the wire.
Heat flows from hotter objects to cooler ones (this is the second law of thermodynamics). In this case the connection points are cooler, so heat flows from the ends of the wire to the connectors, cooling the ends slightly. Since the very ends are cooler, the bits of wire near them cool slightly as well, and so on along the length of the wire.
This results in a very small temperature gradient across the wire with the middle slightly warmer than the ends.
Copper has a positive temperature coefficient of about 0.4 percent per degree C. This means that the warmer the wire the higher the resistance.
The middle of the wire is hotter which means its resistance increases. From the above equations this means more power is dissipated in the middle of the wire than in the ends.
More power means more heating in the middle than the ends and you get a positive feedback effect. The middle is hotter which means it has a higher resistance and more power is dissipated there which means it gets hotter...
This continues until almost all the power is dissipated in the middle of the wire, you never get all of the power in a single point because the heat conduction along the wire means that the sections near the middle also have reasonably high resistance. Eventually you reach an equilibrium where the thermal conductivity spreads the energy enough to balance the positive feedback effect.
The best example of a positive temperature coefficient is an old-style incandescent light bulb. If you measure the resistance when cold it will be a fraction of the value you would expect from its power rating: the filament operates at about 3000 degrees, so the cold resistance is about 1/10th of the normal operating resistance when on. Filaments are made of tungsten, not copper (copper would be a liquid at those temperatures), but the temperature coefficient is about the same.
|
H: Is the potential difference between emitter-base and collector-base constant, i.e. approx. 0.6 V or 1.2 V?
I was solving questions on BJTs, and I came across a question where I had to find the voltage across the emitter and collector, and the answer was 2 V. But I have read that it's approximately 0.6 V across both the emitter-base and collector-base junctions, so how does it amount to 2 V?
AI: Your answer was for a specific load (voltage and resistance) and drive current. Depending on the circuit, the collector-base voltage can go from the collector breakdown voltage to about -0.6 volts or less. The first level occurs when the input base current is zero and the collector is at breakdown, and the second occurs when the transistor is in hard saturation, with a Vce on the order of 0.1 volts. Since in some high base-drive situations the base-emitter voltage can get larger than the nominal 0.7 volts, the voltage difference can be greater than 0.6 volts.
|
H: What is the cut off frequency of the RC filter in this circuit
I have the circuit of a heartbeat sensor module from pulsesensor.com, but I can't seem to understand the kind of filter in the schematic. They said it contains a low pass RC filter with R = 100 ohm and C = 47uF, but from my analysis I see an HPF. What kind of filter is formed by R2, C2, C1, C3 and C4, and what is the cutoff frequency of the filter?
AI: "They said it contains a low pass RC filter with R = 100 ohm and C = 47uF."
No, they said:
"we designed a fairly universal Low Pass Filter for the output (passive RC. R: 100 C: 4.7uF)"
but...
"We made some changes to the original Pulse Sensor circuit"
There are two filter stages in this circuit. The first is a passive low-pass filter formed by R2 and C2/C1. It has a cutoff frequency of ~5.64Hz and stop-band attenuation of 20dB per decade. However the load impedance in combination with R2 and C2 will form a high pass filter, so the practical result is a band-pass filter.
(Note: the 14K resistor in the simulation below is a simplification of the complex load impedance in your circuit).
simulate this circuit – Schematic created using CircuitLab
The second stage (C3, C4, U2 etc.) is an active band-pass filter with gain. In isolation it would have 47dB gain at a center frequency of about 2.3Hz, but with the two stages combined the overall result is 37.5dB gain at a center frequency of about 2.6Hz.
Analyzing a multistage filter circuit like this is not easy, due to the complex interaction between stages. Rather than trying to do the calculations by hand I simulated your circuit in LTspice. Here's the schematic:-
|
H: ULN2803 - do I need COM (VSUP) for relays?
I'm building a relay board with SRD-03VDC-SL-C relays and an ULN2803 Darlington transistor array. As I am a beginner I don't understand the purpose of the COM pin?
In the datasheet VSUP (Coil Supply Voltage) is rated from 12 V to 100 V. Since I'm working with 3.3 V, will this be an issue?
AI: simulate this circuit – Schematic created using CircuitLab
Figure 1. When switching an inductive load with a transistor it is necessary to use a protecting diode to conduct the inductive "kick" of the inductor as the current falls to zero. When Q1/2 is switched off current continues to flow through L1 as shown. With the addition of D1 this current can "free-wheel" around the diode until it decays to zero. This prevents high voltages being generated and protects the transistor.
Figure 2. The ULN2803 Darlington transistor array includes the diodes on the chip. The only constraint is that they share a common positive.
It would be unwise to leave the diodes out in a simple transistor application. It would be mad to leave them unconnected when using the 2803 as they have made it so simple to use.
Since I'm working with 3V3 will this be an issue?
Oddly enough it may be more of an issue at 3.3 V. For a given relay size the power required will be the same for the various coil supply voltage options. Since \$ P = VI \$ we can see that if V goes down then I must go up to maintain power. At 3.3 V the current will have to be much higher than for, say, a 12 V or 24 V relay. Without the free-wheel diode the voltage may still go high enough to damage the transistor switch.
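As a rough worked example (0.36 W is a typical coil power for this relay family, assumed here rather than read from its datasheet):

$$I = \frac{P}{V}:\qquad \frac{0.36\ \text{W}}{12\ \text{V}} = 30\ \text{mA} \quad\text{versus}\quad \frac{0.36\ \text{W}}{3.3\ \text{V}} \approx 109\ \text{mA}$$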
Figure 3. Note the rather high collector-emitter saturation voltage. Source: ULN2803A datasheet.
You will likely run into a problem due to lack of voltage on your relay coil. Because the output transistor is driven by another one in the Darlington arrangement the output transistor doesn't saturate as well as with a single stage output. This means that the output voltage may be up to 1.6 V at 350 mA. This in turn leaves only 3.3 - 1.6 = 1.7 V for your relay.
A better solution would be to power the relays from a higher voltage supply - either the 5 V, if available, or whatever unregulated supply is powering your device. This also reduces power dissipation in your 3.3 V regulator. In this case (of higher voltage) the ULN2803 COM should be connected to the same supply as the top of the relays.
simulate this circuit
Figure 4. A better way. Use a higher voltage to power the relays. The ULN2803 will interface between the micro logic level and the higher voltage.
|
H: A circuit to find 3D position of an object using changes in magnetic fields
There is a project at
http://www.instructables.com/id/DIY-3D-Controller/
to make a DIY 3D controller with capacitors to infer the 3D position of the object being tracked (a hand) without any wiring to the hand. My question: is it possible to build a 3D sensor that uses changes in magnetic fields and inductors (instead of the capacitors) to also make a hands-free 3D tracker? Probably the hand would need a metal object, possibly a finger ring, to create a change in the three or more magnetic fields.
So, if possible, what's the circuit for the 3D sensor using changes in magnetic fields instead? Also, is there a simple circuit to do it, like the known capacitor circuit mentioned above?
As a guess it could probably be done with two magnetic fields if it works something like 2d vision tracking where two 2d pictures (in suitable planes and orientations) can be used to determine the 3d position of the object.
AI: You could use three copper coils that are physically separated and have the user wear an iron or steel ring as you suggested.
As the ring moves nearer each coil the inductance will increase very slightly. And as it moves away it would decrease.
You would need to characterize how much the inductance actually varies with position. Unless the distance between the coil and the ring was small (say less than a few inches), my guess is that the inductance would vary by less than 1% of its initial value, which means that your detector circuit would need to be very sensitive.
To measure the inductance change you could run a square wave or sine wave through the inductor and a series resistor of known value. Use a peak detector circuit to measure the peak inductor voltage. The measured voltage should be approximately proportional to the inductor value when considering such small changes.
By recording the voltage measurements for various positions you could build a rough lookup table of value vs position (say 64 entries). From the rough table you can determine position for any voltage readings by interpolating the table entries.
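A minimal sketch of that lookup-and-interpolate step is below (the calibration numbers and names are invented for illustration; you would repeat it per coil and combine the three distances to estimate position):

#include <stdio.h>

#define N_ENTRIES 5                       /* a real table might hold ~64 entries */

/* calibration pairs sorted by voltage: { peak-detector volts, distance in cm } */
static const float cal[N_ENTRIES][2] = {
    { 0.80f,  2.0f }, { 0.85f,  4.0f }, { 0.90f,  8.0f },
    { 0.95f, 16.0f }, { 1.00f, 30.0f }
};

static float voltage_to_distance(float v)
{
    if (v <= cal[0][0])           return cal[0][1];
    if (v >= cal[N_ENTRIES-1][0]) return cal[N_ENTRIES-1][1];
    for (int i = 1; i < N_ENTRIES; i++) {
        if (v <= cal[i][0]) {             /* interpolate between entries i-1 and i */
            float t = (v - cal[i-1][0]) / (cal[i][0] - cal[i-1][0]);
            return cal[i-1][1] + t * (cal[i][1] - cal[i-1][1]);
        }
    }
    return cal[N_ENTRIES-1][1];           /* not reached */
}

int main(void)
{
    printf("%.1f cm\n", voltage_to_distance(0.87f));   /* prints 5.6 cm */
    return 0;
}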
The capacitive sensor you linked is probably a lot easier since the human body makes a pretty good capacitor and therefore you can get a pretty good reading of the proximity to a plate.
This inductance to digital converter chip may also be useful.
LDC1612
http://www.ti.com/lit/ds/symlink/ldc1612-q1.pdf
|
H: Monte carlo analysis cadence
I tried to do a Monte Carlo analysis of the opamp that I have designed. I ran the simulation for 100 samples with matching and process variation; surprisingly, the results I get are very bizarre. I have gains that are not even 1 dB, which is something I did not see at all with a normal corner/process variation. I know that the reason for this bizarre result is that the simulation takes mismatch into account (shown in the image below with the configured simulation). But my main question is: how can I improve this result while keeping my design as it is? Is there a way through which I could tune my MOSFET dimensions such that I get improved results?
Simulation Configuration
Simulation results with mismatch included as shown in the simulation configuration.
Simulation result without mismatch but with only process.
Circuit diagram, with the resistor divider being the outputs from the bandgap reference.
AI: Most likely the problem is not your design but the testbench that you are using. The low gain and the high-pass behavior of your simulations suggest that the operational amplifier does not work properly, because the DC operating point is not set correctly.
You need to make sure that the DC operating point is such that your input signal is within the input common-mode range of the opamp and the output signal is not driven too high. With an offset in the range of a few mV and a gain on the order of 60dB or more this can happen quite easily.
The Monte-Carlo simulation is done with a fixed seed. So you will get the same results whenever you run a simulation. This helps to isolate the problem. Pick a run which is completely off and resimulate only this run. Check the DC operating point and fix your testbench.
This can be done by setting "Starting Run Number" to the number of the run you want to simulate and setting the "Number of Points" to one.
Update: After looking at the schematic I am sure that it is the testbench (or the lack thereof). As a quick fix you could feed back the output to the inverting input using a low pass filter with a very low cut-off frequency. Then you should make a real testbench.
|
H: Sending 120VAC to a device for a certain amount of time after a single button press
I am trying to use a push button switch to send 120VAC for a certain amount of time after it has been pressed. I want it to only be pressed once, and not have to be held in the closed position. Is there some sort of timer/timed relay I could use to accomplish this?
AI: There are industrial timing relays that can do this, in addition to the pneumatic ones that Majenko mentioned. From the outside, they appear to work similarly to traditional relays, except that there's a (usually programmable) delay between the "coil" receiving power and the contacts moving. Here's one possible way to wire one:
simulate this circuit – Schematic created using CircuitLab
When you push the button, the load and both coils get power. The normal relay closes immediately and maintains power when the button is released.
The timer relay does nothing for the programmed time. When it finally moves, it removes the normal relay's ability to hold the circuit on, and so everything resets.
|
H: ADC and DSP Requirements for FMCW Radar
I am building an FMCW radar that has a bandwidth of 1 GHz. The IF frequency is up to 1 MHz. Would the ADC requirement simply be that it has to be 2 times the IF frequency of 1 MHz? If that's the case, can I simply put in a 5 MHz ADC and not worry about anything else? What are some things I should look at for DSP requirements? I'm not so clear on how range, velocity, and resolution affect the requirements. Thanks for any help. I expect to edit this question, as I am sure I left some parts unclear. Thanks
AI: FMCW radar is a bit tricky, and your "IF frequency is up to 1 MHz" is a bit vague.
In an FMCW system, a target returns a signal that is delayed based on its range. The frequency difference between the transmitted signal and the received signal is a function of that delay and how fast the transmit signal frequency is changing. This frequency difference swings positive and negative, but has a zero mean if the target is stationary. If the target is moving toward or away from the radar, then the mean IF frequency shifts negative or positive, respectively.
The key point is that in order to extract the maximum information from the signal (range rate as well as range), you need to be able to distinguish positive and negative frequencies in the IF, which means that you really need to cover a range of -1 MHz to +1 MHz, or a total bandwidth of 2 MHz.
Nyquist requires that your sampling rate be at least 2× your bandwidth, so a 5 Msps ADC would probably be considered "barely adequate". 8-10 Msps would probably make your life a lot easier.
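To put rough numbers on how range enters the requirements (the 1 ms chirp time below is an assumed example value, not something from your system): the beat frequency of a stationary target is \$f_b = 2RS/c\$ with sweep slope \$S = B/T_{chirp}\$, so a 1 MHz IF limit corresponds to

$$R_{max} = \frac{f_{b,max}\,c\,T_{chirp}}{2B} = \frac{10^{6}\cdot 3\times 10^{8}\cdot 10^{-3}}{2\cdot 10^{9}}\ \text{m} = 150\ \text{m}$$

while the range resolution set by the 1 GHz sweep alone is \$\Delta R = c/2B = 0.15\$ m.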
|
H: How much surge protection do external power supplies provide?
This weekend a very close lightning strike took out several devices in my house. Among the casualties were:
2 TVs
1 cable modem
1 cable box
1 garage door opener
One of the TVs, the cable modem, and the cable box were all plugged into power supplies that reduced 120 VAC line voltage to 12 VDC (or similar). I tested all the transformers and they are all fried (none of them produce anywhere near their rated output now).
I replaced the cable modem and cable box, but the TV is an expensive item I'd like to salvage if possible. I can replace the power supply for about $15. Is there any chance that the supply "took the hit" and provided enough protection to save the TV? I don't have an easy way of testing it without just buying the replacement supply.
As a more general question, how much protection do consumer grade power supplies provide against voltage fluctuations? I know there's nothing consumer grade that will stop a direct lightning strike, but do they provide effective protection against voltage spikes and drops caused by other disturbances?
EDIT: As it turns out, the $15 power supply did "take the hit" and protect the TV.
AI: There are two kinds of surges, differential-mode and common-mode.
A differential-mode surge means that the voltage difference between line and neutral rises to an abnormal value. This type of surge can be caused by a lightning strike on a distant high-voltage feeder line, which then gets transformed by your local distribution transformer to a correspondingly high voltage at your service entrance.
A common-mode surge means that all of the wires experience the same abnormal voltage at the same time. This type of surge can be caused by a nearby lightning strike, which can cause the "ground" in the vicinity of your house to have a much higher voltage than "ground" much farther away. It can also cause large currents to flow in any wiring "loops" in your house, which also induces common-mode voltage shifts.
So, taking your more general question first, a typical wall-wart power supply will generally protect and/or "take the hit" for any differential surges. There are relatively few mechanisms by which a primary-side overvoltage would be coupled to the DC output. Most power supplies have spark gaps and/or MOVs to make sure that the primary voltage doesn't exceed the isolation rating between primary and secondary. (In fact, I'm fairly certain that such protection — at least up to a certain energy level — is a requirement in order to get a safety rating such as UL.)
However, these mechanisms cannot do anything for common-mode surges. A common-mode surge could easily exceed the isolation rating of the supply and cause the DC connection to the device to also experience a common-mode surge. If the device is otherwise isolated, it might survive this, but TVs (and cable boxes, etc.) tend to have another connection — the signal cable that comes from the antenna or cable company.
Now, the cable shield is supposed to be bonded to the same ground as your AC power at your service entrance, and if this is the case, then it should experience the same common-mode surge as everything else, preventing large currents or voltages from appearing. But if it is not, then large currents can flow through the power supply, the TV and the cable shield. The TV is not likely to survive this.
Also, as I alluded earlier, the path from your power service entrance, through your house wiring to the outlet, through the power supply, the TV and the cable connection forms a large "loop" with a significant amount of area. A lightning strike that's close enough can induce a large common-mode current in this loop even if the cable is properly grounded at the service entrance.
So, in spite of all of that, the bottom line is that no one can say for sure one way or the other whether or not your TV survived. For $15, it's certainly worth a try. If you're an electronics hobbyist (I presume you already have a multimeter of some sort), then investing in an adjustable bench power supply would be worth your while, because in addition to all of its other uses, you could use it to test the TV before committing the money for a new dedicated supply. Units that can produce 0 - 30 V at up to 3 A are readily available at very reasonable prices.
|
H: Boost converter drops to 4.5V when trying to achieve 6V with an 18650 cell
I don't have a lot of experience with this, but I'm working on a prototype device which I want to run off an 18650 cell. I need to boost the voltage from 3.7V to 6V using a boost converter based on an LM2587 chip that I bought on Amazon.
When I connect the load, the voltage drops from the 6V that I need to 4.5V.
The cell is rated for 6.5A continuous and the boost converter is rated at 3A continuous.
Cell: https://www.imrbatteries.com/panasonic-ncr18650a-18650-3100mah-3-7v-protected-flat-top-battery/
Boost Converter: https://www.amazon.com/gp/product/B00J2PT83E/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1
At 4.5V, the circuit is drawing around 1.8A. What am I missing?
AI: Specification sheet claims are usually "best case" with optimum Vin, Vout and power. The conditions you require do not match the converter's optimum operating conditions.
You are seeking to achieve 6V across 3 Ohms.
Power = V^2/R = 36/3 = 12 Watts.
At, say, 12V in you would need roughly 1.3 A average input at 75% efficiency to get 12 W out - and you could easily achieve 12 Watts output.
At 6V in you'd need 2.6A average and still doable.
At 4V in you need 3.9A average in and with duty cycle considerations you are near or above the limit for 12W out depending on overall achieved efficiency.
A single LiIon 18650 cell has a max of 4.2V, average operating voltage of 3.6V and useful voltage range of about 3V to 4V.
Let's see what we can expect.
The LM2587 has a 5A peak internal switch.
Duty cycle: Toff : Ton ~= Vin : Vout/efficiency.
Say 3.5 : 6/75% = 3.5 : 8, i.e. roughly 30% off, 70% on.
Max switch current = 5A.
Available Iin avg ~= Imax/2 x ton/tcycle = 5/2 x 70% = 1.75A.
Power in max = 3.3V x 1.75A = 5.8W.
Power out max = Power in x efficiency: say 5.8 x 75% = 4.3 W.
Available V into 3 Ohms:
Power = V^2/R, or V = sqrt(Power x R) = (4.3 x 3)^0.5 = 3.6V.
You are getting somewhat better than that - the converter is presumably operating at better efficiency than the 75% I used.
BUT, while the converter is capable of providing more power under the best Vin, Vout and load combinations, it falls somewhat short in this case.
E&OE. YMMV.
|
H: Does fixed point number processing require ASIC hardware?
I've recently started diving deeper into DSP and have come across the term 'fixed point number'. The idea of a fixed point number is simple enough and makes sense to me; however, I'm somewhat curious as to how fixed point operations are carried out.
Are fixed point operations carried out by processors with instructions to perform fixed point operations? Or is there usually a dedicated chip?
Thanks!
AI: Fixed point operations are carried out as integer operations, possibly with bit shifts to get things lined up. No special hardware is required to use fixed point math, certainly not an ASIC unless you're trying to do something very specialized. The whole point of fixed point is to get some of the benefits of floating point math without the performance penalty, especially on CPUs that do not support hardware floating point math.
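As a small illustration (a sketch, not any particular DSP library's code; the Q15 format and the names are just examples), a fractional multiply is done with ordinary integer instructions plus a shift:

#include <stdint.h>

typedef int16_t q15_t;                        /* 1 sign bit, 15 fraction bits */
#define F_TO_Q15(x) ((q15_t)((x) * 32768.0))  /* only valid for |x| < 1 */

/* Addition is plain integer addition (the programmer must guard against overflow). */
static inline q15_t q15_add(q15_t a, q15_t b) { return (q15_t)(a + b); }

/* Multiplication: widen to 32 bits, multiply, then shift right by 15 to
   re-align the binary point - the "bit shift to line things up". */
static inline q15_t q15_mul(q15_t a, q15_t b)
{
    return (q15_t)(((int32_t)a * (int32_t)b) >> 15);
}

int main(void)
{
    q15_t x = F_TO_Q15(0.5);    /* 16384 */
    q15_t y = F_TO_Q15(0.25);   /*  8192 */
    q15_t p = q15_mul(x, y);    /*  4096, i.e. 0.125 */
    (void)p;
    return 0;
}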
|
H: Measure voltage of 2 different batteries on Arduino
Currently I'm using a resistor divider to measure the voltage of a 6v battery that is connected to an Arduino via a 5v power regulator. I also want to be able to measure the voltage of another battery (7.2v) on a separate circuit with the same Arduino.
The issue is that the batteries will be on separate circuits, and thus, have no common ground (batteries are not in series or parallel). Is this possible?
AI: The simplest solution is to create a common ground. Since you want to measure two batteries, you have to connect their grounds to the Arduino ground. If you do that to both, the Arduino ground has become the common ground.
|
H: Dual power supplies using 2 male to 1 female 120V AC power cable?
I've got a computer that needs redundant power supplies. Unfortunately, it only has one PSU. So I want to plug my computer both into the wall and into my UPS (a battery) that's plugged into the wall. I'm thinking I could buy or splice a cable that puts the AC power supplies in parallel. As long as the frequency and phase shift of the AC supplies are equal, it should be fine, right? If so, are my UPS and house expected to supply equal frequency/phase, even if the power goes out (leaving just the UPS running) then comes back? I live in the United States.
I can't find this anywhere because it's too similar in keywords to a male-to-male AC power cable... and maybe because it's a horrible idea? Hope not.
AI: A simple sketch immediately shows why this is not going to work.
simulate this circuit – Schematic created using CircuitLab
You can't assume that the UPS will be the same frequency as the mains. If they were even slightly off the UPS output circuitry would be destroyed.
When the mains turns off the UPS will be feeding the rest of the house, the street and itself.
As well as overloading the UPS, you would be putting power back onto the grid, which is not allowed.
|
H: N-Channel MOSFET as on-off switch between battery and load
I'd like to use a 3V microcontroller to activate a small fan after the microcontroller wakes from sleep. I'm not quite clear on the placement of the load (i.e. the fan) and the LiPo battery in a MOSFET schematic. My understanding is that I can use an N-channel MOSFET to create a closed circuit between the LiPo and the fan by supplying voltage to the gate via a GPIO. Am I understanding this correctly? Below is my circuit diagram. Thanks for any feedback.
AI: Your general idea is correct. I would like to clarify a few points:
The FAN connector is the other way round (the top pin is connected to the "+" of the battery, thus should be labelled "+"), while the bottom pin will be brought to ground when the MOSFET closes.
By "3v microcontroller" I suppose you mean "3.3v". In any case, the MOSFET you choose must have a threshold voltage lower than 3v. For example, the FDN338P MOSFET has a 2.5v threshold voltage.
EDIT: apart from the threshold voltage, you have to make sure that the MOSFET will be able to handle the current that goes through your fan (this one has 1.6A maximum continous drain current, which should be fine for a small fan), and should also have a low conducting resistance (Rds) while being driven from 3.3v. This one has 155mΩ at 2.5v, which is great. Thanks to Spehro Pefhany for pointing this out.
Powering the fan from a separate power source (battery) is a bit strange: if your microcontroller is powered by another battery because the fan requires a different voltage, it would be a better idea to use a 3.3V LDO to bring stabilised power to the microcontroller from the same battery. This way, you will have exactly 3.3V powering the microcontroller, instead of the fluctuating voltage it gets by being powered directly from a battery.
EDIT: To expand on my second point, I would like to add that a MOSFET might not be necessary in your circuit, and could be replaced by a regular NPN bipolar transistor (BJT), since a MOSFET with such a low threshold voltage might be hard to find.
simulate this circuit – Schematic created using CircuitLab
Since the BJT, unlike the MOSFET, "opens" when there's a current between the base and the emitter, we use a resistor (R1) to limit the current.
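A ballpark sizing for R1 (the 200 mA fan current and a current gain of about 100 are assumptions for illustration; the factor of 5 overdrives the base so the transistor saturates fully):

$$I_B \approx 5\cdot\frac{200\ \text{mA}}{100} = 10\ \text{mA},\qquad R_1 \approx \frac{3.3\ \text{V} - 0.7\ \text{V}}{10\ \text{mA}} \approx 260\ \Omega$$

so a few hundred ohms (e.g. the standard 270 Ω value) is a reasonable choice.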
EDIT: Another valid point (thanks to Icy) is that it would be a good idea to add a flyback diode across the fan - when the motor (or any inductive load) is turned off, it becomes a generator for a short time, because the magnetic field induces a current back into the coil, causing a huge momentary spike. The diode will suppress those spikes.
|
H: Op. Amp instead of Push pull stage as power amplifier
I have been going through the push-pull stage as a power amplifier. Even though the gain in this topology is less than 1, people still use it. I am assuming this is because in the emitter-follower or source-follower topology the input impedance is high and the output impedance is low, which is why we use these stages as power amplifiers.
BUT, operational amplifiers also exhibit the same characteristics, and hence I believe they can be used as power amplifiers.
So why does the book not discuss op-amps as power amplifiers? Do op-amps lack something needed to act as a power amplifier?
Can anyone be kind enough to throw some light on the above thinking?
AI: The main power contribution of an emitter follower push-pull amplifier is from its current gain. This can be quite substantial.
Op amps generally have low output current, on the order of 20 mA or so. The most common types are much better suited to voltage amplification.
If you put the two together you can produce large amplification of small signals. The op amp can be used as a preamplifier, to bring a signal of say millivolts up to volts (but milliamps of current), followed by the push/pull stage that keeps the voltage about the same but amplifies the current to produce watts of output.
There is such a thing as power op amps intended for use without a separate power stage following them. The LM675 (datasheet here) is one example, but a search on the term "power op amp" will bring up many others.
The closed-loop output impedance of an op amp is generally already quite low, so it's not ruled out strictly for that reason. Rather it's the fact that it "saturates" (i.e. cannot deliver any more current) at a milliamp level and would generally start behaving badly (e.g. increased distortion) if used to drive a low-impedance load.
|
H: Setting time of a LO
When we say the term 'setting time' of an LO, what exactly do we mean by setting time here?
AI: The 'setting time', or more often the 'settling time', means exactly what you want it to mean. Different applications have different requirements, and it's only in the context of the application that you can derive a suitable specification.
For instance, for a narrow-band voice FM transmitter or receiver synthesiser, settled to within 1kHz error would probably be sufficient.
For a wideband system like the 5MHz channels in 3G, where the demodulation is done digitally, and large doppler has to be accommodated, I would expect 10s of kHz error should be tolerable, but if the error budget has been written to allocate it all to doppler, then it could be less.
I worked on a coherent LO recently where the specification was to settle to within 0.1 radians of its final phase.
Depending on the detail of construction of a particular VCO or synthesiser, you might have rapid settling to within one spec, and then a longer 'tail' towards a tighter spec. Just because one LO is faster than another to one spec does not mean the same will be true at another spec.
|
H: Controlling NiMH charging with power MOSFET: MOSFET before or after the linear regulator?
I am starting a project where I want to make a simple, Arduino-controlled NiMH battery charger. I imagine several iterations, but the first one will be really simple. It will charge at a constant current of around 0.1C (where C is the battery capacity) and will monitor the voltage and charging current. For the constant current, I will be using a 7805 regulator in constant-current mode (i.e. with PIN 2+3 connected with a resistor and the battery itself connected to PIN 3 and GND), figure 4 here:
https://www.sparkfun.com/datasheets/Components/LM7805.pdf
For the voltage measurement, I want to charge the battery in cycles of 1 second, where after each second I stop the charge for some milliseconds to measure the voltage. The question is how to stop the charge. My thought is to use a power MOSFET (n-channel, enhancement mode). I have the following and it appears it should do the job.
http://www.vishay.com/docs/91291/91291.pdf
The gate of the MOSFET will be controlled by the Arduino. The question is then: Should the MOSFET come "before" the 7805 regulator i.e. so that the regulator would get switched on-off also during the cycle? Or should I put the MOSFET after the regulator, so it is only the power to the battery that gets cycled? Or doesn't it really matter?
Can the 7805 get "worn out" from being cycled so many times as would happen in this application (if the MOSFET is before the 7805)? It would be cycled once every second.
AI: It's not a good idea to do the switching after the regulator, since its COM pin will be left floating. The regulator might start to oscillate.
I would suggest putting the MOSFET after the 0.33u capacitor, before the regulator, and adding a 100n capacitor after the MOSFET. This way you won't have current spikes on each pulse to charge the big capacitor, but you'll still have something that filters high-frequency oscillations.
The 7805 won't "wear out". Being cycled every second is a very long interval, and I think it will be fine for the regulator. If you have shorter pulses (10ms or less), you might start having trouble.
|
H: UHF band repeater
I want to make a device which receives the TV antenna signals and transmits them again. Is there any module available on the market which simplifies construction, or should I make it from scratch? Since my background is in software, please explain simply.
AI: What you describe definitely contains an amplifier; small-amplitude signals go in, high-amplitude signals go out.
Now for problems:
If you're on the same frequency as the received signal, your amplified signal will interfere with the received, weak signal. Hence, same-frequency forwarding can only work if the "end receiver" only gets one of the two signals, not a combination of both. For cable TV, that is the case, so that's why you can get very cheap UHF amplifiers. For antenna TV, that's not the case, so you can't have same frequency relaying.
If you need to change the frequency while relaying, you need what is often called a transverter: The received signal is pre-amplified, mixed with a single tone that shifts everything in frequency, and transmitted through a power amplifier. This is significantly more complex.
In either case, you'll need proper input and output filters (which will be very complex, considering the TV spectrum isn't necessarily one continuous block) and you will be operating illegally, with a device that, per definition of what it does, is easy to find. I don't know where you live, but I don't think there's any place on earth I'd do this.
Regarding the question of building such a device yourself:
Well, it's certainly not a starter project for electronics beginners; TV signals, even the ones that "look bad on screen" are sent with relatively high power by the "official" TV towers, compared to the powers that e.g. are in the cell phone signals that reach your telephone. So this will be a challenge in amplification, testing, power supply, filtering, mixing, tone generation and general RF design, which is a discipline on its own.
|
H: Device Driver function behavior on interrupt
Suppose an embedded system is running FreeRTOS and an application program makes calls to a device driver interface (let us assume I2C). What exactly happens when this is interrupted by an external interrupt? In addition, how much sense does it make to implement the driver functions as tasks?
AI: What exactly happens when this is interrupted by an external interrupt?
How interrupts are handled depends on the system. But a driver is just like any other code, it will stop execution temporarily in favour of the interrupt.
In addition, how much sense does it make to implement the driver functions as tasks?
A properly written driver consists of two modules: a hardware abstraction layer which is what your application calls, and the actual driver, which is system-specific and non-portable.
Ideally the driver should be completely free of any application-specific things, so it makes no sense to put a driver in an OS task. A driver is/should be at a much lower level than things like operating systems. You could however put all the application code communicating with a certain driver in a task of its own, for example a protocol encoder/decoder.
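As a rough sketch of that split (the names and the I2C write signature are illustrative, not from FreeRTOS or any vendor library):

#include <stddef.h>
#include <stdint.h>

/* Portable interface: the only thing application code and tasks ever see. */
int hal_i2c_write(uint8_t addr, const uint8_t *data, size_t len);

/* Target-specific implementation: the only part that changes with the MCU. */
int hal_i2c_write(uint8_t addr, const uint8_t *data, size_t len)
{
    /* talk to the actual I2C peripheral registers here; return 0 on success */
    (void)addr; (void)data; (void)len;
    return 0;
}

A task implementing, say, a sensor protocol then calls hal_i2c_write() without knowing anything about the underlying registers or interrupt handling.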
|
H: Bias-tee for adding multiple frequencies to a DC line
Objective - To add multiple frequencies on sine waves onto a \$24V\$ DC Line.
Each frequency block (\$F_1, F_2, ...\$) generates a sine wave of unique frequency within the given range. When the \$n^{th} \$ frequency block \$F_n\$ is switched on, its frequency must be added to the 24V line regardless of any frequency present along that line.
I have used bias-tee circuit to combine the frequencies. Each yellow block has a \$100nF\$ coupling capacitor.
Sine wave generation method : Square wave to sine wave converter
Frequency Range : \$8KHz - 24KHz\$
Number of Frequency Blocks \$N\$ : \$10\$
Current\$/\$Frequency Block : \$10mA\$ at \$24V DC\$
L1 Inductor Rating :
Inductance : \$470\mu H \pm 10\%\$
DC Current Rating : \$420mA\$
Self-Resonant Frequency : \$100KHz\$
Coupling Capacitor : \$ 100nF, 50V\$
Issues -
There is a lot of high frequency distortion when detecting the signal over the power-line. (The power-line wire is \$2\$ core, \$1.5mm^2\$, \$25m\$ between each device) How do I resolve this?
The sine waveform (from one block) gets distorted when more than one frequency block is connected. How do I isolate each block while still allowing the signal to flow through?
I have used a buck-converter to convert \$24V - 5V DC\$ within each frequency block. It has huge capacitors in its input. Is that the problem? Will a diode at the input do the trick?
I don't get any sine-waves (added) at the power-line, for more than 2 blocks. Do I need to change the \$L1, C\$ values?
Do I need an end resistor after the \$N\$ blocks, connecting the positive and negative?
If this design could be improved, kindly suggest so.
Thank You.
AI: What you need is an inductor going into each frequency generator block.
Each of those block looks like a load for your frequency generator, the same as the 24V power supply does.
Everything that uses DC from the 24V line or provides DC to the line needs an inductor.
Everything that puts a signal on the line needs a capacitor.
High frequency noise or distortion? I expect you mean noise. Twist the wires to reduce received noise. You might also add ferrite beads to the lines, but I think the answer I gave above will get rid of most of it.
You are mixing the various signals on one line. If you put two signals on the line, then you will see a mixture of the two. This may be the "distortion" you are seeing. You might also have problems with the different generators outputs being connected directly together. In this case, a resistor in series with the coupling capacitors may help.
See above. There's a very good chance that the power inputs to the frequency block short out the outputs when you get enough in parallel. Again, an inductor in the power feed to each block will help.
You shouldn't need a termination resistor at the end of the string. The frequency blocks put a load on the 24 volt line, so for the frequencies you are using you shouldn't need any kind of termination.
I think your inductors are too small. At 8kHz, they have an impedance of only 23Ohm - that doesn't do much for you. 100mH inductors would have an impedance of 5kOhm, which would do much more for you in terms of keeping the signals away from the power consumers/providers.
|
H: Why use a localized ground plane?
It is known that the return current starts following the conductors as the frequency increases:
So as long as one keeps the distance between the conductors large enough there shouldn't be the need for local ground planes.
Why then do some people suggest the use of a localized ground plane, like the answer to this question: https://electronics.stackexchange.com/a/15143/4512 It is suggested that the ground plane will work as a patch antenna.
But the current on the ground plane will closely follow the conductor, resulting in a small loop area. And the voltage induced around that loop is given by Faraday's law:
$$\oint_{\partial A} \mathbf{E} \cdot d\mathbf{r} = - \frac{\partial}{\partial t} \iint_A \mathbf{B} \cdot d\mathbf{s}$$
By using a smaller localized ground plane the generated EM radiation will be the same, because the loop will be the same. Additionally, the localized plane can introduce plane resonance, and one needs to avoid crossing it with high frequency signals.
So, what are the benefits of using a localized ground plane?
AI: It's not just all about loop area. Small loop area is important to reduce radiation and susceptibility, but I expect you want your circuit to do more than not radiate anything.
Currents across a ground plane cause offset voltages. That's bad since another job of a ground plane is to provide a common reference voltage to all parts of the circuit. In a purely digital circuit, you might be able to tolerate a few 100 mV offset. If the circuit contains analog components, much smaller offset could be bad.
Another problem with offsets is that they can excite the ground plane to resonate. A microcontroller in the middle of a board with part of its power/ground loop running across the ground plane can turn the whole thing into a center-fed patch antenna. The current loop may be small, but it causes voltage offsets at either end, which can cause even higher voltages at the edges due to resonance.
The more you use a ground plane, the less good of a ground plane it becomes. You have to make a tradeoff somewhere that results in the best overall characteristics taking all the competing demands into account.
|
H: Complex power, real power
Given:
\$U_q = 220 V, f = 50 Hz, R_i=10 \Omega, R_a=40 \Omega, L_a=95.5 mH\$
I'm asked to determine the real power transformed in \$Z_a\$. Here I break down the formulas that I use:
\$Z_{total}=Z_{La}+Z_{Ra}+Z_{Ri}\$
\$I={U_q\over Z_{total}}\$
\$P = Re[U_q . I^*]\$
I'm really sure that my calculation is right using that, but it gives a really different answer than the answer key. Can you see what's wrong with my formulas? Can I use \$U_q\$ directly as \$U_{load}\$?
I thought that since the elements are in series the current must be the same, and that the voltage would be the same as well. So how can I determine the power only in \$Z_a\$?
EDIT:
Here is my calculation:
\$Z_{total}= j\omega L_a+R_a+R_i=(50+30j)\Omega\$
\$I= {220\over 50+30j}={55-33j\over 17}A\$
\$P=Re[220({55+33j\over 17})]=711 W\$
And the correct answer is 284.7 W.
AI: I believe there are two issues preventing you from matching the answer key.
The first is that they really only want the power in Za, so you need to calculate U. Just take your current (I agree with that calculation) and multiply it by Za
U = I \$\cdot\$ (40 + j30)
Then calculate S = I \$\cdot\$ conj( U ). The real part of this should be 569 W. Now, that's still double what the book has...
So I would assume that the Uq given is peak voltage instead of the usual default of RMS. That reduces I_RMS by \$\sqrt2\$ and U_RMS by \$\sqrt2\$, reducing the product by a full factor of 2.
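Putting both corrections together (power in \$Z_a\$ only, plus the factor of two from treating the 220 V as a peak value):

$$P_{Z_a} = \tfrac{1}{2}\,\mathrm{Re}\!\left[\underline{U}_{Z_a}\,\underline{I}^{*}\right] = \tfrac{1}{2}\cdot 569.4\ \text{W} \approx 284.7\ \text{W}$$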
|
H: Heat sink calculation for TDA7376B
After years of programming, I'm trying to do some electronics again.
So I wanted to build an amplifier. I think I now have most of the information, but I'm stuck with the heat sink.
So, basically, I figured out that the calculation goes something like:
temperature increase = power dissipation * total temperature resistance
This particular amplifier IC has the following characteristics (datasheet):
max output power: 40W
Rth (junction-case): 1.8°C/W
max junction temperature: 150°C
max operative ambient temp: 105°C
So, when I have a room temperature that shouldn't go over 30°C, my guess would be that I have 120°C of 'room'. But an average heatsink adds a few temperature resistance units, easily going over the maximum temperature. So I started wondering what the actual power is I need to dissipate.
I am using 4 ohm speakers. I don't intend to drive the volume too high (the speakers themselves are 35W, and while I only have a vague idea of how loud that is it seems much louder than I would ever listen to).
This is a graph from the datasheet.
So I do have multiple questions actually:
Are my assumptions and calculations correct?
What is the maximum temperature I should use for the junction, 150°C or 105°C?
How much heat does the chip produce? I have trouble interpreting the graph.
And what kind of heat sink would be suitable for that? It appears this is measured in W/°C.
EDIT:
I've decided to go with a different IC, as this one produces a bit too much heat. Either one that is much lower in power (and has a less confusing datasheet), or this class D amplifier, which doesn't even need a heat sink and is thus much smaller. Thanks for the help and explanation; I think I have a better grasp on calculating heat sinks now.
AI: There are a few issues here you should understand to properly apply this amplifier IC to your project.
First, there is a distinction between the power delivered to the speakers and the power dissipated in the IC package.
Second, there are two channels. The data sheet doesn't seem terribly clear to me on this point, but Absolute Maximum Ratings Table 2 in Section 3.1 gives a Maximum Power Dissipation of 36 Watts. This implies 18 watts of dissipation per channel. That's 18 watts of power/channel dissipated in the IC, not power delivered to the speakers.
Third, looking at the graph, which is labeled "Dissipated Power & Efficiency vs Output Power", I take it that the "Ptot" vertical axis and the plot line are dissipated power, whereas the X-axis "Po" is output power - power delivered only to the speakers. So, for 28 watts delivered to one speaker (the maximum value shown on the "Po" axis), the Ptot plot line shows approx. 18 watts of dissipation (within the IC) on the left-hand Ptot axis. This seems to jibe with my point above that these power figures are "per channel". However, I'm not sure how the efficiency plot is calculated, as it doesn't seem to agree with my interpretation of the power plot and axes. So, for two speakers run to the max capabilities of this IC, you will be delivering 56 watts to the speakers and you will need to dissipate 36 watts from the IC package.
Even though the package is rated at "36 Watts" in Table 1, from a practical standpoint getting 36 watts of heat out of a package this size (basically a 3/4" square footprint) while maintaining a die temperature of less than 150 deg-C (the "Maximum Junction Temperature") will require a pretty remarkable heat sink! You will need an immaculate assembly technique to ensure thorough and intimate contact between the back of the IC package and the mating surface of the heat sink.
To answer one of your questions about the distinction between the specified 150 degC and 105 degC values, you should interpret these as follows. 150 degC is the absolute maximum temperature the die of the IC can withstand without permanent damage. In practice you want to stay well away from this unmeasurable temperature. You should design for 130 or 135 degC max.
The 105 degC value means that if the IC is ideally attached to an ideal heat sink, and the die temperature is on the verge of destruction at 150 degC, the IC package surface will be at a temperature of 105 degC ( another temperature which is difficult or impossible to measure or verify because the package surface in question is bolted against the heat sink). Nonetheless, you want to select/design a heat sink which can theoretically maintain the package surface at 105 degC under maximum dissipation power conditions (28 watts into each speaker). However, you will want to derate this target temperature to 85 or 90 degC so that the die temperature stays at the targeted 130 to 135 degC temperature described above.
How do you choose a heat sink for this application? Here's my practical method. Start with one that will give you twice the dissipation capability you will need. You have 80 degC on the surface of the IC and a 30 degC ambient; that's a 50 degC drop the heat sink has to provide. You are dissipating 36 watts, so you nominally need a heat sink with a dissipation factor of 36/50 = 0.72 watts per degC. Start with one that can provide twice that, about 1.44 watts per degC. When you get the whole amplifier circuit humming along with a 28 watt per speaker load, measure the surface temperature of the chip right where the package's side meets the heat sink. (Use a digital thermocouple type thermometer to do this. The thermocouple tip is very small and you can get a pinpoint temperature with it.) If you find that you have some temperature margin to play with (e.g. you measure 70 degC at the IC), you can cut away some of the purposefully oversized heat sink. Take away symmetrical sections of the heat sink, cutting off 10% of the total mass per trial.
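Expressed as a thermal resistance, which is how heat sinks are usually specified, the same numbers are:

$$\theta_{heatsink} \le \frac{80\,^{\circ}\text{C} - 30\,^{\circ}\text{C}}{36\ \text{W}} \approx 1.4\ ^{\circ}\text{C/W},\qquad \text{oversized starting point} \approx 0.7\ ^{\circ}\text{C/W}$$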
Another way to judge the amount of trimming needed is to get the amplifier humming under full load. Then, take multiple temperature measurements across the length and width of the heat sink. You will notice that this measured temperature drops as you move the probe further away from the IC in all directions. At a far distance you will find the surface temperature of the heat sink is very close to the ambient air temperature. This tells you the distance from the IC at which the heat sink is ineffective. This is simply due to the heat sink's ability to conduct heat over the increasing distance from the IC heat source. You can safely cut away any parts of the heat sink lying beyond this rather fuzzy boundary.
Simple rule of thumb: From a thermal perspective you can never make a heat sink too large! From a packaging perspective there are always size limits. So, use as large a heat sink as you can accommodate within your packaging constraints. The cooler the IC runs, the longer will be its life.
|
H: Differential Frontend Design for Mains Voltage Measurement
I am building a digital wattmeter intended for measurements on AC power lines. I have gone through a lot of application notes and example designs, and stumbled upon a slightly confusing motif recurring in a number of designs from Texas Instruments' TI Designs.
They employ a similar frontend for voltage measurement, where the mains voltage is first scaled down using a voltage divider. The scaled-down voltage signal is then taken as differential and passed through some low pass filter before being fed to a differential-input amplifier or directly to a high-resolution differential-input ADC.
Question 1
What is the purpose of the connection from Neutral to the reference ground, if it results in an impaired differential signal?
(simplified diagram of the input section from SimpleLink™ Wi-Fi® CC3200 Smart Plug Design Guide)
After running a simulation of this input circuit, I have found that the ground connection on the Neutral line renders the V_SENSE- signal thousands of times smaller than the V_SENSE+ signal, and causes a non-180° phase difference between them, resulting in a suboptimal differential signal. On the contrary, if the connection is removed, V_SENSE+ and V_SENSE- are of the same magnitude and 180° out of phase with each other, creating a perfect differential signal.
What is the purpose of this connection from Neutral to the reference ground?
Question 2
What is the purpose of using a smaller value for R5? Why does the previous configuration use the same value for both R5 and R6? How do I know when to use which?
(simplified diagram of the input circuit from Smart Plug with Remote Disconnect and Wi-Fi
Connectivity)
This is largely the same as the previous configuration. Apart from the differences in component values, the most intriguing difference is in R5, which is 100Ω as opposed to 1kΩ for R6. According to the document, the justification for the smaller value is to make up for the much larger impedance at V_SENSE+. However, simulation results show that the outputs are almost identical to those of the previous configuration. Moreover, the magnitude mismatch and phase angle problems still exist.
What is the purpose of using a smaller value for R5? Why does the previous configuration use the same value for both R5 and R6? How do I know when to use which?
AI: Despite the galvanically isolated supply, you cannot have ground floating around with respect to the line; it has to be connected (more or less) to one side of the power, specifically the side with the current shunt connected to it.
The sense voltage at the low end of the voltage divider would be expected to be almost zero.
|
H: Is There an Alternative to Resistor based Voltage Divider for an DC Audio Signal with Less Noise?
I have an LMC555 which generates an audio signal between 440 and 880 Hz. The signal after the chip is DC, between 0 and 3 V. I want to use it as the input to an amplifier. The amplifier, an LM386N-1, starts clipping the signal quite early, somewhere around 20 mV. Currently I use this solution:
simulate this circuit – Schematic created using CircuitLab
I lack a really deep theoretical understanding. As far as I know, each resistor in series will increase the noise of the signal, and higher resistance will cause more Johnson–Nyquist noise.
Here I have a 1MΩ resistor in front of the amp input, so I wonder...
...is there an alternative solution to this which causes less noise?
AI: Lower resistance will give you less Johnson noise. I seriously doubt your 555 output is going to be affected noticeably by Johnson-Nyquist noise. It is 'white' noise which sounds like old time TV static.
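To put rough numbers on it, here is a minimal sketch of the Johnson–Nyquist estimate (assuming room temperature and a 20 kHz audio bandwidth); it comes out around 18 µV rms, negligible next to a volt-level 555 output:

```python
from math import sqrt

k_B = 1.38e-23   # Boltzmann constant, J/K
T = 300.0        # assumed room temperature, K
R = 1e6          # the 1 Mohm series resistance
B = 20e3         # assumed audio bandwidth, Hz

v_noise_rms = sqrt(4 * k_B * T * R * B)   # Johnson-Nyquist open-circuit noise voltage
print(f"{v_noise_rms * 1e6:.1f} uV rms")  # ~18 uV rms
```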
You may be able to avoid some wiper noise (when the pot is turned) by capacitively coupling to the divider since the 555 output is unipolar and thus effectively has an offset. That would be a scratchy noise when the pot is adjusted.
Since you are drawing the signal from a digital circuit, any noise on the power supply or ground can cause noise in the output. If you are hearing hum that is mains-related (50Hz/60Hz/100Hz/120Hz) that is probably the source.
Regarding your title- you can substantially reduce the noise in a 'DC Audio Signal' by shorting the signal to ground, since we can't hear DC.
|
H: Relation between built in potential and doping
What is the relationship between the built-in potential and the doping concentration of a pn junction diode? I could only find the relationship between the depletion region width and the doping concentration.
AI: I don't know how you missed the first formula for the built-in voltage that I can find.
$$
V_{bi} = V_t\ln\left(\frac{p_p}{p_n}\right) = V_t\ln\left(\frac{n_n}{n_p}\right)
$$
$$ p_n = \frac{n_i^2}{n_n}$$
$$ n_p = \frac{n_i^2}{p_p}$$
and last but not least:
$$
n_n = N_D - N_A
$$
with Nd and Na being the donor / acceptor doping in the n-region
$$
p_p = N_A - N_D
$$
with Na and Nd being the acceptor / donor doping in the p-region
Assuming you know algebra you can easily express the built in voltage in terms of the acceptor and donor concentrations.
$$V_0 = V_t \cdot ln\Big(\frac{N_d N_a}{n_i^2}\Big)$$
This equation is what I missed.
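Plugging in representative numbers gives the familiar result. A minimal sketch (assuming silicon at room temperature, n_i ≈ 1e10 cm^-3, and N_A = N_D = 1e16 cm^-3):

```python
from math import log

V_t = 0.0259    # thermal voltage at ~300 K, volts
n_i = 1.0e10    # approximate intrinsic carrier concentration of Si, cm^-3
N_a = 1.0e16    # assumed acceptor doping, cm^-3
N_d = 1.0e16    # assumed donor doping, cm^-3

V_bi = V_t * log(N_a * N_d / n_i**2)   # built-in potential from the formula above
print(f"V_bi = {V_bi:.3f} V")          # ~0.72 V
```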
|
H: How DRAM refresh cycles work?
The nano-capacitors in RAM act like leaky buckets and continuously lose charge, so the RAM has to be refreshed periodically in order to recharge those capacitors. "During the refresh cycles the memory controller reads the data of each cell and recharges it." This puts me in doubt. If the memory controller reads data during a RAM refresh cycle, then that data has to be stored temporarily somewhere else. Suppose I have 4 GB of RAM and 80% of it is being used. "So the memory controller requires at least 2 GB of space to temporarily store the data of the RAM during its refresh cycles." Does the memory controller also have its own memory? If the memory controller reads the data, then who writes it back after the refresh? I used to think that "sense amplifiers" are used to read and write data in each column of RAM.
AI: Simplifying a little bit, think of DRAM as being a 2D array of memory cells [1]. Each cell in the array is a minuscule capacitor.
Along one edge of that array, we have a set of sense amps. There's one sense amp for each cell along that dimension. For either a refresh cycle or a normal read, we activate an entire row (or column, if you prefer to look at it the other direction). When we do that, we read the charge from that row (column) of capacitors into the sense amps. In doing that, we've drained (at least most of) the charge out of the capacitors that make up the memory cells themselves.
That will typically give somewhere in the range of a few hundred to a few thousand bits of data that are sitting in the sense amps. We can then read some of that data out of the sense amps to send to the outside world (if this was a read cycle), or we can just write it back to the memory cells (if it was a write cycle). Or, in the case of a refresh cycle, we read the data from the cells into the sense amps, then turn around and write it back out from the sense amps to the cells.
If the array were square, this would mean the amount of auxiliary storage needed (i.e., the number of sense amps) would be approximately the square root of the total number of bits. In reality, however, the array doesn't have to be square--we can choose a number that's convenient, and set the size in the other dimension to give the total amount of storage desired.
The number of sense amps does have some ramifications with regard to speed though. In particular, it's relatively slow to read data from the capacitors into the sense amps, and much faster to read from the sense amps to the outside world. This is what leads to the typical speed profile for DRAM sticks and such. Most will have a burst length of (say) 8, with a transfer profile that looks something like 17-1-1-1-1-1-1-1. That is, the first word will be transferred 17 cycles after the DRAM receives the (last element of) the address, then another word will be transferred each subsequent clock until the entire burst of 8 is completed.
The number in that initial position varies widely depending on the vintage of DRAM you're talking about. The reality is that the transfer from the capacitors to the sense amps has remained fairly close to constant for quite a long time. As clock speeds have risen, the number of clocks has risen with it to allow for the total time necessary for that transfer.
So, that first (long) transfer time is basically telling us the amount of time needed to transfer the data from the capacitor array into the sense amps. The subsequent transfers are reading data from the sense amps and sending them to the external bus. Of course, in the case of a refresh cycle the data is never written to the external bus. It's just read from the capacitors into the sense amps, then written back from the sense amps into the capacitors [2].
That being the case, in a typical commodity DRAM, maximizing speed means ensuring that there are at least as many sense amps as there are bits in a burst. If we're dealing with 64-bit words and 8-word bursts, we want (at least) 64x8 = 512 bits of sense amps. Having more sense amps than that isn't necessarily going to gain a whole lot of speed.
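As a quick back-of-the-envelope illustration of those two numbers (assuming a 64-bit word, a burst of 8, and the 17-1-1-… profile mentioned above):

```python
word_bits = 64
burst_len = 8
min_sense_amp_bits = word_bits * burst_len   # 512 bits, per the argument above

first_word_clocks = 17    # capacitor array -> sense amps (the slow part)
per_word_clocks = 1       # sense amps -> external bus (the fast part)
total_clocks = first_word_clocks + per_word_clocks * (burst_len - 1)

print(min_sense_amp_bits, "sense-amp bits minimum")   # 512
print(total_clocks, "clocks for the full burst")      # 24
```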
So that argues in favor of the DRAM array being arranged as an Nx512 bit array, with the sense amps along the 512-bit side. A read (or refresh) then consists of activating one of those 512-bit rows/columns in the array, transferring that data to the sense amps, transferring the result out the external bus (if it's a read) and writing the data back to the capacitors.
For current large memory systems, things are a little more complex than that. Rather than a single array of N x 512 bits, the memory is further broken up into banks. For example, a 1 gigabyte DRAM might consist of, say, 8 banks of 1/8th gigabyte apiece (offhand, I don't remember the standard size for a DDR3 or DDR4 bank, but that's at least in the general ballpark).
[1] In reality, in a modern system, there are typically (what could at least be thought of as) more dimensions than that, but 2D is enough to explain the basic idea.
[2] Sense amps are analog amplifiers. They're typically differential amplifiers. In a typical case, the DRAM will contain some dummy cells. To start a read cycle, you pre-charge those dummy cells to (approximately) half the voltage initially stored in a normal cell (or sometimes, to the full voltage, but the dummy cells have half the capacitance). You then read in the charge from the dummy cell and the normal cell, and amplify the difference between the two, so if the cell contained less than the dummy cell, the result will be driven (close) to 0, and if it's greater than the dummy cell, it'll be driven (close) to Vcc. That (now digital) value is then stored into a flip-flop, latch, etc. The dummy cells are used (instead of, for example, just feeding Vcc/2 directly to one of the inputs of the sense amp) to semi-automatically compensate for things like losses in the bit lines, which can vary both across a chip and with the current operating conditions.
|
H: Charging 2 li-ion batteries in series
For my project I need the voltage of 2 li-ion batteries in series. For charging them I would use this charger (one for each battery):
Could I connect those two chargers (with one battery on each charger) in series and plug them into a 9/12V adapter? Or should I use a 5V voltage regulator on each charger?
Another possibility would be to use a 7.4V charger like this one.
I would use two identical batteries, so I think the batteries couldn't become unbalanced.
EDIT:
Could I use the protection board in the second link and an 8.4V adapter like this one?
Would a 9V adapter be OK?
AI: You cannot use 9V/12V and plug chargers in series. You need to use 5V and connect them in parallel.
Imagine that you connect them in series to 10V, and one of the batteries is fully charged and the second one fully discharged. Then the first charger won't take any current while the second one will need 1A. It is not going to work, because the first charger will block the current and almost the whole 10V will appear across that charger.
So you have to use that 5V regulator.
UPDATE:
Your second link is for 2 cells, so yes, that is a better solution than 2 separate chargers.
|
H: Regenerating the Gate of a Gated Pulse Wave
I'm looking for suggestions for alternative solutions to a design problem I have: basically, converting a gated pulse wave back into its enabling gate. The bottom of the post lists the solutions I've come up with so far.
Background: Immediately following a user input to a black box, the box outputs a pulse wave @ 250Hz, 10% duty cycle for a duration of 70ms (all approximate values).
I need to process this pulse wave output to create a gate output. A constant output gate length of anywhere under 70ms would work, but a cost effective solution for a user adjustable gate length (from say 30ms to 500ms+) is preferred. I sketched up a crude timing diagram to illustrate.
I'll need to implement the circuit 8 times per unit (8 channels of user input, 8 pulse wave outputs -> 8 channels of gate outputs,) so a priority is keeping things compact and cheap.
So far I've tried:
Envelope follower into a Schmitt trigger - cheap and easy for constant gate length.
Non-retriggerable monostable - easy to implement adjustable gate lengths, but $pendy, I haven't found any in quad or hex packages.
I've considered using a MCU; I'll already be using one elsewhere in the design to convert inputs on 8 channels to serial data (requires 9 pins or 2 + a mux). Add to that 16 more IO pins, or 9 + a mux, or 2 + mux + serial to 8bit converter. - This seems like the most spendy solution in terms of time investment and PCB real-estate.
Does anyone have any suggestions for other solutions I could look into?
Cheers!
AI: Honestly, in this day and age, I'd go with the MCU – something with 8 more inputs and 8 more outputs can't be that expensive, especially when you then avoid having more separate components and board population doesn't come free. Any sufficiently sized Cortex-M implementation will probably do. In fact, for example, many chips come with PWM units that can be very easily programmed to output adjustable length "on" periods in a one-shot manner.
In analog terms: yes, getting the envelope should work. Assuming the drive strength of your 8 black boxes is reliable, you could do
black box 0 --> low-pass filter (RC) 0 --> Comparator 0
^
black box 1 --> low-pass filter (RC) 1 --> Comparator 1
|^
black box 2 --> low-pass filter (RC) 2 --> Comparator 2
||^
black box 3 --> low-pass filter (RC) 3 --> Comparator 3
|||^
black box 4 --> low-pass filter (RC) 4 --> Comparator 4
||||^
black box 5 --> low-pass filter (RC) 5 --> Comparator 5
|||||^
black box 6 --> low-pass filter (RC) 6 --> Comparator 6
||||||^
black box 7 --> low-pass filter (RC) 7 --> Comparator 7
MCU PWM --> low pass filter (low cutoff) ---+++++++^
You can get 4-channel analog comparators for but a couple of cents. Using two of them, you get your eight channels.
By adjusting the duty cycle of your MCU-generated PWM, you adjust the voltage that the negative input pins of your comparators see.
When an impulse train reaches a low pass filter, that filter will slowly raise its output voltage, and after the last pulse has passed, the filter's output voltage will fall again. If you design your filters to have roughly the bandwidth of 1/(impulse train duration/2), then you can choose any output duration smaller than the length of the impulse train. If you need a longer output than your impulse train lasts, you'll need to build in some hysteresis.
Regarding the 8 RC filters you'll need for your black boxes: use a resistor array. They aren't very expensive (again, you might be paying per component placed, and space wasted), nicely matched, and easy to solder. Same goes for capacitor arrays, where the matching actually is harder to get if using individual caps. I'd go for a 10nF network like this one and either eight individual 560 kOhm resistors (cheaper) or this array.
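A quick check that these values line up with the bandwidth rule of thumb above (assuming the ~70 ms pulse-train duration from the question):

```python
from math import pi

burst_duration = 70e-3                 # pulse-train length from the question, seconds
target_bw = 1 / (burst_duration / 2)   # rule of thumb above: ~28.6 Hz

R = 560e3                              # suggested series resistance, ohms
C = 10e-9                              # suggested capacitance, farads
f_cutoff = 1 / (2 * pi * R * C)        # single-pole RC cutoff: ~28.4 Hz

print(f"target ~{target_bw:.1f} Hz, RC gives {f_cutoff:.1f} Hz")
```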
For the PWM low pass filter, simply use whatever resistor and caps you already have on your board and that give you a cutoff frequency sufficiently below your PWM frequency.
I quickly scratch-built a simulator of this whole idea; it looks like this:
As you adjust the reference voltage, the duration of the output pulses changes;
the underlying GNU Radio Companion/GNU Radio Flowgraph looks like this:
The left half of the flow graph is just generating the test impulses (blue in the visualization), which are then low pass filtered and compared to the reference voltage.
If this kind of job (counting impulses, synchronizing some logic output shape to some logic input shape, reacting to digital signals) really happens a lot on your board, considering CPLDs or even small FPGAs does make sense – and, of course, using the Free & Open Source Icestorm toolchain to program your own FPGA images without any vendor tools has a lot of designer street cred potential :D
|
H: USB powered and battery powered on the same connector
I need to make this Arduino Nano-like board with one single connector: a mini USB type B which will of course be connected to the PC for programming and interface. Additionally, as it will be used away from the PC, I need to be able to connect a 6V+ battery pack using the same connector. There's no way I can fit a second connector for this.
Usually the FT232 (or similar) won't take more than 5.5V, so I've thought of placing an LDO between the USB-VCC input and the rest of the circuit. The LDO would drop about 0.3V but solve my overvoltage problem. It could take 9V easily.
My question is: will the voltage drop cause problems with the USB communication (since the FT232 will get power from a slightly lower voltage source)?
AI: If you are using a FT232R chip, this should work, provided you have a minimal voltage of 4.0V at the output of your LDO with all conditions. Other FT232 chips might have similar properties, check the datasheets.
The datasheet of the FT232R states for VCC on page 11: +3.3V to +5.25V supply to the device core. However, in Note 1 (page 12), they specify that to use the internal oscillator (I suppose you want to save the space for a crystal) you need a minimum VCC of 4.0V.
Thus, the manufacturer ensures that the FT232R will reliably work, without an external crystal, with a VCC between 4.0V and 5.25V. This of course includes the USB communication with the USB host, which is of course an essential capability of the chip.
If you look at Chapter 6.2 of the datasheet (self-powered configuration), you can see that the self-powered circuit can be supplied with any voltage between 4V and 5.25V and still reliably communicate with the host. There's no reason this would not remain true if the power is derived from the USB supply voltage instead.
Although not mentioned in your question, another point you need to consider is the power supply voltage of the ATMega chip (I guess you will be using an ATMega168 or ATMega328, if you want to be compatible with the Arduino Nano). Everything will be fine with your 6V external power supply, as you will end up with 5.0V power. However, if you power from USB, the LDO will introduce a drop on the VBUS voltage, which is 5.0V nominal. If you use a 0.3V drop LDO, you will have 4.7V on the microcontroller, which is still fine. However, if VBUS goes below 4.80V, your VCC will drop under 4.50V, which is the minimum required voltage to operate at full speed (20 MHz), and the maximum frequency decreases with the voltage (see Chapter 33.3 of the datasheet for the ATMega168P). In this case, you will have to operate at less than 20 MHz. If you use the 16 MHz typical of Arduino boards, you should always be in the safe area.
|
H: How to provide separate voltages from a common power source
Assuming the input voltage is regulated, how do I provide two different, specific voltages to the below devices? I specifically want 4[V] in the left part and 3[V] in the right part of the parallel sub-circuit on the right.
I thought a voltage-divider circuit would help me get 4[V] after the first resistor, but all the components in the middle have got me doubting myself.
simulate this circuit – Schematic created using CircuitLab
The devices are an analog temperature sensor (such as the LM35) with an operating voltage in the range 4~35[V], and a microcontroller with an operating voltage in the range 2~4[V]. The sensor output is to be connected to one of the pins of the microcontroller. The sensor is on the left and the controller is on the right.
AI: As your question is not very precise, and according to the few pieces of information you gave in the comments, I will assume:
You need to lower the voltage to 4V and 3V to supply the main POWER of 2 devices
The left device is a sensor, which requires a power supply of 4V
The right device is a microcontroller, which requires a power supply of 3.3V (common voltage for a microcontroller, but if you really want 3V, you can adapt my answer with 3V)
As we only talk about powering devices, a voltage divider won't work properly, whatever values are used for the input voltage or the resistors. This is because one of the main assumptions of a voltage divider is that there is no current at the output. This is not the case here, as both the microcontroller and the sensor are consuming current: they are considered as "loads" for the circuit.
As explained in the comments, the solution is to use voltage regulators. But, because you need 2 different voltages, you will need 2 different regulators: one with an output of 4V, to power your sensor, and a second one with an output of 3.3V (or 3V), to power the microcontroller. The inputs of the 2 regulators can be connected to the same main power source, which could be a battery, for example.
This brings us to the second advantage of a power regulator: they can accept a large range of input voltages (with limits given in the datasheet), and the output voltage will always be regulated at the same value. When using a battery, let's say a 2-cell lithium-ion, the voltage is not constant: it will vary from about 6V when discharged to around 8.4V when fully charged. Using a regulator will allow you to run your devices whatever the battery charge.
Note: there are a few cases where you could use resistors to lower the voltage for a power supply. However, I would not recommend doing so in your case, as it requires a good understanding of the behaviour of your loads to ensure the power supply voltage will always stay within the limits they can tolerate, whatever the situation.
|
H: GPS time accuracy (SIM808 or similar) - process between NMEA sentences and AT command response
In an SIM808 chip or similar you can get the GPS information with this AT command:
AT+CGNSINF
+CGNSINF:1,1,20150327014838.000,31.221783,121.354528,114.600,0.28,0.0,1,,1.9,22,1.0,,8,4,,,42,,
the UTC time is in the third data field as: yyyyMMddhhmmss.sss
However in the manual the function of AT+CGNSSEQ is described as: Define the last NMEA sentence that parsed.
My question is in general:
how accurate is this time when I receive it from the serial interface of the chip?
Sub questions:
Should I assume that the time between the AT command answer and the last NMEA message that was received by the GPS unit is variable?
Can I compensate for delays in the systems and how?
Delays I expect:
Difference between NMEA sentence coming in and AT command?
Time to process NMEA sentence
Time to read AT command and prepare response
Serial communication time
AI: The message reporting the time issued by a GPS receiver generally comes out a few hundred milliseconds after the actual time stated inside the message itself. That reported time is actually the beginning of the current measurement cycle — and if the receiver has a 1PPS output, it would correspond to the leading edge of that pulse.
Unless you take special care, the polling you do with your AT commands runs asynchronously with respect to GPS time — and the times at which the messages are issued by the receiver. Therefore, you'll always have an uncertainty that's equal to your polling period added to whatever other communication delays are in the system.
Therefore, if you're polling once a second, the time you eventually see in the message is going to be "stale" by anywhere from about 0.1 second to 1.1 second. You can reduce the upper bound by polling more often and paying attention to when the time value changes from one result to the next.
|
H: PI filter (on PSU) and electrolytic capacitors after
I am designing a small DAC and I want to clean the power supply (it's noisy). I used a PI filter.
However, after the PI filter, I need an electrolytic capacitor (330 µF) to help with some transients. The problem is that the output of the PI filter already has a capacitor whose value was chosen together with the L, so adding this electrolytic capacitor will change that relationship. Any pointers on what to do?
AI: The extra capacitance will obviously change the characteristics (i.e., frequency response) of your Pi filter, but the key is, it won't make it worse for your application in any way. Go ahead and hook it up.
|
H: Why does my UART connection still work without connecting ground?
On my prototype PCB, I have a microcontroller with reserved pins for UART communication with a debug terminal. I'm curious from a theory perspective about why my terminal works fine whether or not I connect ground to my USB UART adapter. I come from a computer science background, so assume my EE fundamentals are lacking.
AI: Check whether your connector's shields (e.g. Micro-USB) are doing the job of linking the grounds of the two systems. Depending on how the USB to UART converter board you're using was built, the GND might be connected through the connector shielding and along the cable shielding.
I'd start by measuring continuity between the systems' grounds with everything powered off.
|
H: Connecting atmega328 to 9v battery
I'm trying to connect an ATmega328P-PU microcontroller to a 9V battery. I added a 20 MΩ resistor and got the voltage down to 4.8V, which is in the range of the ATmega, but it doesn't turn on. I replaced the ATmega with an LED and the LED is very dim, even though I'm getting 4.8V. When I try to measure the current, I guess it's too weak, and the LED doesn't glow at all.
As far as I understood, the resistor is stripping away current as well. Why is this happening? How can I keep the original current capability and reduce only the voltage?
AI: 1) Your ATmega is an active load (a variable resistor). Let's say it draws 20mA (since I don't know, but you can run through the calculations again if you measure it); then at 5V it's like a 250 Ω resistor.
$$ V = I*R $$
What if it draws 40mA when more transistors turn on? Then it's like a 125 Ω resistor. If you put a 20 MΩ resistor in series with a 250 Ω resistor, your total resistance in the circuit is 20,000,000 + 250 = 20,000,250 Ω. The total current through both resistors, from a 9V battery, would be about 0.45 µA. The voltage that your ATmega will see is I*R = 0.45 µA * 250 Ω ≈ 0.11 mV. That isn't anywhere near enough voltage for it to function. Resistors do a lousy job of regulation. So you need a way to regulate the voltage, like a 7805 (5V) regulator. They also make DC-to-DC converters that are compatible, and there are countless circuits available on the internet if you have reasonable searching skills.
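The numbers above (with the assumed 20 mA / 250 Ω load model) work out like this:

```python
V_batt = 9.0       # battery voltage
R_series = 20e6    # the 20 Mohm series resistor
R_load = 250.0     # ATmega modelled as ~250 ohm (the 20 mA at 5 V assumption above)

I = V_batt / (R_series + R_load)   # total loop current, ~0.45 uA
V_load = I * R_load                # voltage left across the ATmega, ~0.11 mV

print(f"I = {I * 1e6:.2f} uA, V across ATmega = {V_load * 1e3:.3f} mV")
```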
|
H: CD4017 Decade Counter Based LED Flasher Question
I am creating a simple LED flasher with an NE555 and a CD4017. I am actually dividing the pulse frequency of the NE555 by five, because I have merged each pair of the output pins to finally get 5 outputs.
I have attached 4 pieces of 3V LEDs (each drawing approx. 200mA) in series to each of the 5 outputs.
I couldn't find any error in the circuit, but when I test it, I measure only 3V-5V at the five output pins. As a result the LEDs do not glow. What could be wrong in the circuit? My question is specifically around the values of the resistors. Are they too high to allow sufficient current?
The circuit diagram (partial, showing only 1 of the 5 outputs) is given below.
AI: You can't connect 2 outputs together. When 1 output is high, the other will be pulling it low. This explains why you are getting a low voltage on the outputs. You could use 2 diodes to connect 2 outputs together, but you will then need a pull-down resistor on the transistor base.
I would have thought you would be better off just using the first 5 outputs and then connecting the 6th to the reset pin.
|
H: Is there a general method for switching between PCB power sources?
I'm working on PCB design that will typically be battery powered. However, the device needs to be programmed and thus will be plugged into a computer occasionally.
I'm running into the problem of switching between a battery supply and a PC power supply, and specifically making sure I won't be applying power from both sources at the same time.
Is the general method to use relay logic to switch between power supplies? I assume I could design something that disconnects the battery supply when the device is connected to the PC. However, I assume this is a common problem and am curious if there are other methods for addressing this.
AI: A relay would be a nice easy solution. The circuit below would power the load from the battery until you applied PC power. Keep in mind there will be a brief changeover period where no power is applied; as long as this is OK, this is an easy way.
simulate this circuit – Schematic created using CircuitLab
|
H: I2C pullup resistors not working in simulation
I am trying to create a small project on a PIC16F877A MCU, in which I am showing the lifetime of this project using an RTC module and the temperature read from an LM35 on an LCD.
Working with PULLUP Resistor
Not working with 2k pullup resistors
The simulation of this project works fine in the Proteus software when I replace the pullup resistors for the I2C with the component "PULLUP" from the Proteus library. But my ultimate goal is to implement this in hardware, so from the I2C pullup resistor calculation formulas, the values calculated for this MCU are as follows:
Rp(min) = (5V-0.6V)/8.5mA = 517.6 ohm
Rp(max) = tr / (0.8473 * Cb) = (1000 * 10^-9) / (0.8473 * 400 * 10^-12) ≈ 2.9 kohm
using these values from the MCU datasheet:
tr = 1000ns,
Cb = 400pF,
Iol = 8.5mA,
Vol = 0.6V,
Vcc = 5V
The I2C module is working at 100KHz.
The issue is that I have tried various values in this range, but the time shown on the LCD remains 00:00:00. Can someone please guide me on what I am doing wrong?
AI: You have answered your own question! The fact that the PULLUP is a modelling primitive tells you that it has a logical function and is not a physical component, i.e. it tells the simulator something, but it's not a real resistor.
|
H: Purpose of diodes on ADC inputs in DRV8305 example circuit?
All sense pins in this schematic are used with diodes before the ADC input (on the MCU). Can anyone explain why? And what diodes will work here? There is no info in the datasheet.
AI: They are protection diodes (aka clamping diodes) which are there to ensure that if the voltage starts to go above a certain amount (in this case the Vcc supply rail), they will start conducting and try to prevent it rising any further.
Motors are noisy, and during switching you can get spikes at the centre of the h-bridges that are much higher than the motor supply rail (they can be on the order of tens or even hundreds of volts if not careful). This means that the spikes would cause the voltage at the sense pins to rise up much further than they are rated for.
By adding the diode, if the voltage rises above Vcc, the diode conducts and increases the voltage drop across the upper resistor in the potential divider, clamping the voltage to no more than Vcc plus the diode's forward voltage. A Schottky-type diode is used, as indicated by the box-like ends of the cathode bar in the symbol, because it has a much lower forward voltage than a standard PN diode, which in turn reduces how far above Vcc the sense pin can reach.
|
H: 2x2 Mimo antenna layout: Is it a requirement to length match the antenna traces
I am currently laying out a 2x2 MIMO antenna configuration for WiFi, using on-board ceramic antennas.
While the traces are naturally impedance matched, is it important to match the traces to the uFLs in length?
Is there any advantage to a length matched antenna pair?
This design is required to specifically measure RSSI.
I have seen some references to performance differences, but no application note which indicates length matching as a standard.
AI: It's not necessary to length-match the antenna's traces.
The whole point in MIMO is that an algorithm uses the fact that the two antennas are receiving different signals to do up to two things:
Diversity gain (channel influences and noise –hopefully– being uncorrelated between the two receive chains)
subchannel construction by finding Eigenvectors in the \$n_\text{TX}\times n_\text{RX}\$ channel matrix
For either thing, the actual phase of the received signal doesn't matter - the only thing that matters is that the two antennas are positioned in a manner that allows them to receive different signals (i.e. they don't act as part of an antenna array).
Is there any advantage to a length matched antenna pair?
If you interpret that algebraic subchannel-extraction spatially, you end up doing digital beamforming. Same thing, different kind of perspective.
If you then can relate the phase seen at one antenna to the phase of the same signal seen by the other antenna(s), you can actually deduce an incident angle, or, inversely, send a beam into a specific direction.
Again, this doesn't demand equal, but known length traces. But matched will do as "known" :D
This design is required to specifically measure RSSI.
RSSI doesn't actually mean anything unless you specify which signal and which measurement methodology you use!
So, assuming you just simply want to maximize SNR, make sure you place the antennas more than half a wavelength apart, and if possible, use different polarizations, directivities etc to increase the independence of the observations.
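For reference, the half-wavelength spacing at the 2.4 GHz Wi-Fi band works out as follows (a quick calculation; 2.4 GHz is an assumed carrier, the 5 GHz band would give roughly half this spacing):

```python
c = 299_792_458   # speed of light, m/s
f = 2.4e9         # assumed Wi-Fi carrier frequency, Hz

wavelength = c / f
print(f"lambda/2 = {wavelength / 2 * 100:.2f} cm")   # ~6.25 cm
```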
|
H: Are SPI slave select lines hardware enhanced?
I have a question regarding SPI communications. I feel like I have a good understanding fundamentally of how SPI works. However, I'm often confused when implementing the slave select line of SPI.
Is the slave select line on a microcontroller, in general, enhanced via hardware? That is, is there anything different between a slave select output and an output controlling an LED? Do microcontrollers allow certain I/O to be toggled faster when used for SPI?
AI: No difference, as far as drive strength or transition speed are concerned.
Some uC's I've worked with don't even have a dedicated SS pin. You can implement it in code using whichever pin is convenient.
However, some microcontrollers will toggle the SS line for you (without you having to toggle the pin in code). This can reduce the dwell time between SPI transactions, decreasing the total time elapsed during multiple-transaction transmissions.
Also, if you are designing an SPI slave device, it is very convenient to use a uC that has a dedicated SS pin, which is used by the uC's internal SPI module.
The STM32F1 ARM-based microcontrollers, for example, have a dedicated SS pin for each of their SPI busses, with the option to disable the SS functionality and free up the pin for general use.
|
H: Earth decoupled power supply
I have an electrocardiogram sensing circuit that uses an INA321 amplifier for common mode rejection on two measurement electrodes. The device is meant for hand-to-hand measurement of the heart rate and powered by a low voltage.
If supplied by batteries, the circuit works well. However, the device is now connected to a small computer and screen for demonstration purposes. The switching power supplies of such devices tend to couple about one half of the mains voltage into the common ground of the devices. I guess they use two equally large capacitors between ground and the two mains supply lines for some unknown purpose.
The INA321 can of course never reject 110V with respect to a heart signal of about 1mV while being powered with 3 to 5 volts.
Even if I connect the system ground to the mains protective earth, there is still a voltage of up to 50V between mains protective earth and a body standing on the ground.
So is there any way to supply computer and screen without having them tied to some hefty earth capacitance?
AI: I would think about keeping the device powered as it is, and getting the data to the computer using an optically isolated link so that there is no galvanic contact between the two.
|
H: Enable-A and Enable-B of H-Bridge L293D (or L298N)
I am using both the L293D and L298N for controlling a NEMA17 stepper motor. It rotates and stops as expected. I am not sure about the usage of the enable pins provided by the L293D bridge. In the attached file,
pins 1 and 9 are used as enables. I am currently keeping both these pins high all the time and the motor rotates. I thought that the enable pins could be controlled dynamically. For example, can I set pin 1 low when I am not using inputs 1 and 2? I thought this way the outputs are not always enabled, which would reduce the heat in the H-bridge. Please let me know if that works.
AI: When L293d enable pins are low, the outputs are high-impedance ('Z' on the L293D function table).
This means no current is flowing through the motor, and hence that winding is applying no torque.
That may be okay when the motor is at rest, and certainly reduces the power use, and heat dissipation.
Further, it is okay for one of the two stepper motor windings while it is turning.
A pattern for turning a stepper motor is to only energise one winding at any instant. This will use less power and run cooler. However, it will produce less torque.
Summary: Yes, switching one winding off at any instant, by setting one of the two enables low, will be fine and will reduce power and heat. However, it will also reduce torque.
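For illustration, one possible one-phase-on ("wave drive") full-step sequence is sketched below. This is only a sketch: the tuple layout and pin mapping are assumptions for clarity, not taken from the L293D datasheet; each step energises a single winding, alternating polarity.

```python
# One-phase-on ("wave drive") full-step sequence for a bipolar stepper.
# Each tuple: (enable A, enable B, winding A polarity, winding B polarity)
# Polarity +1/-1 selects the current direction through that winding;
# 0 means the winding is de-energised (its enable pin held low).
wave_drive = [
    (1, 0, +1,  0),   # winding A, forward
    (0, 1,  0, +1),   # winding B, forward
    (1, 0, -1,  0),   # winding A, reversed
    (0, 1,  0, -1),   # winding B, reversed
]

def steps():
    """Cycle through the sequence forever; iterate the list in reverse to reverse direction."""
    while True:
        for state in wave_drive:
            yield state
```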
|
H: Replacing old capacitors?
I am replacing a 4.7 µF 35V electrolytic axial-lead capacitor near the CPU on an Atari 2600. Can I get away with using a 4.7 µF 50V capacitor?
AI: Yes, you can. Using the higher voltage rating is no problem. Many decades ago, before I was born, there were issues with the forming voltages of electrolytic caps, so you would sometimes see a minimum voltage written on the can along with the maximum voltage.
|
H: PCB trace capacitance
I am designing a capacitive touch keypad using the Cypress MBR series. The design guide has an Excel spreadsheet to help with the maximum trace length and button diameter requirements. The idea is to keep the button + trace capacitance in a working range. For this, they give the capacitance per inch of trace length in pF, as shown in the table below:
There is no mention of trace width in the document (64 mils appears to be the PCB thickness because it's equal to 1.6mm, one of the standard PCB thicknesses).
Is the trace capacitance per unit length independent of the trace width?
Is there any recommended trace width that I should use or can I use anything as per my convenience?
AI: Capacitance per unit length is proportional to trace width (neglecting edge effects).
The basic "Parallel-plate capacitor" capacitor formula for capacitance is
$$Capacitance = \epsilon * Area / DielectricThickness$$
The area of a PCB trace is the width multiplied by the length, so
$$Capacitance = \epsilon * TraceWidth * Length / Thickness$$
Now divide Capacitance by Length:
$$CapacitancePerUnitLength = \epsilon * TraceWidth / PCBThickness$$
This is assuming trace width is much greater than PCB thickness, and neglecting edge effects, as well as the trapezoidal artifact from PCB etch. Good first order approximation, and generally good enough for most common PCB work.
If you're using an aspect ratio where the trace width is very small, and somewhat close to the thickness of the PCB dielectric layer, then you may need to go back to Maxwell's laws for the more complicated solution... but the basic principle is the same.
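A minimal numeric sketch of that first-order formula (example values are assumptions: FR-4 with εr ≈ 4.4, a 2 mm wide trace over a 64 mil / 1.6 mm dielectric; remember this parallel-plate term understates the true value when the trace is not much wider than the dielectric is thick, since fringing is ignored):

```python
eps0 = 8.854e-12    # permittivity of free space, F/m
eps_r = 4.4         # assumed FR-4 relative permittivity
width = 2.0e-3      # assumed trace width, m (2 mm)
thickness = 1.6e-3  # 64 mil ~ 1.6 mm dielectric thickness, m

c_per_m = eps0 * eps_r * width / thickness   # parallel-plate capacitance per metre
c_per_inch = c_per_m * 0.0254
print(f"{c_per_inch * 1e12:.2f} pF per inch")   # ~1.2 pF/inch, fringing excluded
```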
|
H: Connecting a kitchen weighing scale to an AC/DC adapter
I have a kitchen weighing scale that is powered by a 3-volt lithium CR2032 coin cell. I also have an AC/DC power adapter that can be set to a 3-volt output. Would it be safe to connect the weighing scale to the adapter directly? What are the possible implications of this action? I will appreciate your opinions on this.
AI: Is the voltage stable? If it has spikes over 3.5V, it may damage the scale, which is meant to be used with a very stable and spike-free 3V cell. Many AC/DC adaptors have significant spikes when used with very light loads (and the scale is a VERY light load).
Also, why do you want to do that? A wall socket is quite valuable in the kitchen, and a CR2032 cell should last months on a kitchen scale anyway. It's not a shaver, which needs significant power...
Moreover, the AC/DC adapter will draw continuously some power even when the scale is off. You lose a wall socket and you waste energy!
|
H: Why is my amplifier circuit amplifying more than I expect?
My Question
Why is my amplifier circuit amplifying more than I expect, and what can I do to fix it?
What I want to accomplish
I want to amplify an input that, at most, is 1.5[V] to, at most, be 2[V].
What I have tried
I have the below circuit set up. When I measure the voltage of OUT against GND, I get values that are 7 times higher than the value IN.
I used the following formula:
$$V_o = V_i * (1 + \frac{R_2}{R_1})$$
Plugging in \$2\$ for \$V_o\$ and \$1.5\$ for \$V_i\$ evaluates to:
$$R_1 = 3R_2$$
I tried using 300[Ω] and 100[Ω] for \$R_1\$ and \$R_2\$ respectively, which yielded a different, but also undesirable, gain. I recall that it was a bit lower.
What I got
Measuring the voltage at IN and OUT against GND using a multimeter gives me about 0.5[V] for IN and 3.5[V] for OUT.
simulate this circuit – Schematic created using CircuitLab
AI: Change to the following to get a non-inverting amplifier with gain = \$1 + R_2/R_1\$
The difference is that \$R_2\$ is connected to the op amp's inverting input instead of ground.
Please see Scott Seidman's answer for an explanation of what the incorrect circuit was doing.
|
H: neutral conductor in a three phase system
I do not understand the answer to the following question:
A three-phase supply in the lab has symmetrical three-phase voltages of 400/230 V and terminals marked L1, L2, L3 and N. A 60 W bulb is connected between phase L1 and N. Another 60 W lamp is connected in the same way, but between phase L2 and N.
Calculate the current in the neutral conductor N.
The answer is 0.26 A.
Why is it 0.26 A? Shouldn't it be 0.26 A + 0.26 A (from both lamps)?
Thanks
AI: To confirm: from \$ P = VI \$, the current would be \$ I = \frac {P}{V} = \frac {60}{230} = 0.26~A \$. No problems there.
A very simple way to consider this problem is that if we connected a 60 W lamp from each phase to neutral then the neutral current would sum to zero.
Now consider what happens if we remove one of the three lamps: the neutral current must change by that amount, 0.26 A. That's the simple way to calculate for this problem.
The more general way would be to add the vectors. (And this is your problem: you forgot that they are not in phase.)
simulate this circuit – Schematic created using CircuitLab
Figure 1 (a) The phase A and B current vectors. (b) A and B vectors summed to give the resultant current.
Clearly from the vector diagram, since A and B were at 120° then in (b) they must be at 60°. Since they're the same size the triangle is equilateral. Therefore the sum must be 0.26 A.
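The same result falls straight out of summing the two phasors numerically; a quick sketch:

```python
import cmath
from math import radians

I_phase = 60 / 230                         # ~0.26 A through each lamp
I_a = cmath.rect(I_phase, radians(0))      # L1-N lamp current
I_b = cmath.rect(I_phase, radians(-120))   # L2-N lamp current, 120 degrees behind

I_neutral = I_a + I_b                      # both return currents share the neutral
print(f"|I_N| = {abs(I_neutral):.2f} A")   # 0.26 A
```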
|
H: What abbreviation do they use for nominal voltage (Un) in Russian?
I need to translate the "Un" symbol for Russian client.
AI: Personally I'd go for "Uн" / "Vн", or "Uном" / "Vном" if you can afford longer labels. Note that equipment I have seen would cite the nominal voltage itself (i.e. "220/230 В", "380/400 В", "10кВ" etc.) without calling it nominal.
Russian Wikipedia has an article about nominal voltage, but it doesn't mention any official abbreviations.
|
H: Driving BLDC motor directly from Generator
There are affordable BLDC motors up to 200kW. The controllers, however, are double the price, and the AC-DC inverters reach five-figure prices.
Q1: Could one BLDC act as a Generator for spinning another, without any electronic components in between (no pulse correction AC->DC->PWM conversion etc.)?
Q2: Can a BLDC motor be driven with a sine wave rather than a square wave?
I am asking this because I've run a small test on two 800kV/200W motors (connected back-to-back).
By spinning the first one with a drill at ~100Hz, I was able to observe a smooth sine wave at its output (1.2V RMS, 0.3A, on a single phase).
Once connected, the 2nd motor was shaking badly and barely reaching 1Hz.
Unless my test was simply running well below the minimum required voltage, the answer to Q1 seems to be "NO"?
AI: You asked about brushless DC motors, which are typically made with permanent magnets. To approach your question though, it would be useful to first look at 3-phase AC induction motors as commonly used industrially. While those can be operated with an inverter drive to vary the speed, in simple usage connecting one to 3-phase mains will spin it up with substantial torque. This is because the magnets in the rotor are electromagnets powered by induced currents, so if the rotor is not spinning at line synchronous speed, the virtual "magnets" are able to rotationally migrate through the physical rotor - the induced magnets spin at line synchronous speed, and the physical rotor hosting them "slips" behind as it accelerates, until it almost catches up. (It will never quite catch up while doing work, rather a slight slip depending on torque produced will remain. If the motor were instead to lead the line in rotation rate it would be operating as a generator, and conceptually if coasting at exactly synchronous speed no power would flow in either direction).
Your motor in contrast has permanent magnets permanently fixed in position within its physical rotor. They cannot "slip" when not spinning at synchronous speed - essentially, you only produce useful torque when connected to a mulitphase AC source cycling at the same rate as the poles are passing the coils (or possibly a harmonically related one). You could almost think of this type of motor as a stepper motor with relatively few steps per rotation driven in a fine microstep mode, and like a stepper motor if it lacks torque to overcome the load, it will vibrate rather than turn - it cannot meaningfully turn slower than the synchronous speed.
As a result, to drive a BLDC motor, you really need drive electronics which "find" the rotor position, and match the line frequency to the rotation rate, accelerating up to desired speed. For low-speed, highly-loaded motors this is typically done with hall effect sensors to directly determine the rotor position. At higher speeds it is possible and more effective to use back-EMF detection with the drive coil themselves. (For motors that start under minimal load, for example, driving model aircraft propellers, it can be possible for a starting algorithm to accelerate the motor open-loop up to a speed where back-EMF detection starts working, though this is not perfectly reliable).
But "open-loop" drive of a BLDC motor in ignorance of the rotor position and rotation rate tends to work quite poorly for doing actual work. You can, as a breadboard demonstration wire a small CD-ROM BLDC motor to a driver (potentially even some MCU GPIOs) and excite it with a 3 phase square-wave. If the load is light enough, or potentially if you pre-spin the motor, it can end up running - but it will have very low torque, and once forced away from synchronous speed it will merely vibrate, not exert useful torque to re-synchronize the load in the way that a slipping induction motor could.
So in summary, if you want a motor which can produce useful torque to accelerate a load to near-synchronous rotation with a fixed line frequency, you need an AC induction motor; if you want to use a BLDC motor, you need drive electronics which vary the drive frequency to match the instantaneous phase to the actual rotational state of the motor.
In terms of drive waveforms, sine wave would be the most natural. In simple small systems square wave would work crudely. Most real systems use PWM to approximate a sinewave in the local average.
|
H: Simple high current alarm using transistor
simulate this circuit – Schematic created using CircuitLab
I need a suggestion to build a minimalist short circuit alarm.
With this diagram it will light an LED easily, but if I replace the LED with a buzzer, it will not beep. My input voltage will be 12 volts, and the alarm should be raised if the current drawn is about 2 amperes.
R1 cannot be made many more ohms, since I don't want too much lost on the output. I tried making Q1 a Darlington, but the buzzer still won't beep.
Edit1: added the correct schematic. Sorry for the mess, this is my first thread on the EE community.
My main goal:
I want to have a simple high-current alarm using transistor(s), and to replace the LED with a buzzer. When a high current occurs (>= 1A) it should beep constantly.
AI: Actually, I'm surprised that it will light the LED.
You are shorting the power supply to the LED/buzzer when you short V_Out to GND_Out.
This is what you are doing:
There's a sort of glaring problem there. V_out shorted to GND_OUT leaves the buzzer with zero volts to work with.
You need a second power supply for the buzzer/LED.
@jbord39 made a suggestion that might help if all you need is a short "bzzt" when the short occurs.
Try it like this:
simulate this circuit – Schematic created using CircuitLab
The capacitor provides a little power to the buzzer when Q1 connects the buzzer to ground. The diode prevents the capacitor from being discharged by the short from V_Out to GND_Out.
You only get a short buzz, but better than nothing.
Whether you can get a longer buzz or not depends on the power supply.
I've added a simulated battery, and you can simulate the circuit to find out the voltage to the buzzer.
Changing R3 will change how much current the battery can supply.
Changing R2 changes the severity of the short.
If R2 is a dead short (0 Ohm) then all you will ever get is a short buzz.
In short, the wimpier your battery the shorter the buzz. If the battery is really beefy, it can supply the short circuit and the buzzer.
To get a continuous buzz regardless of the severity of the short, you will have to power the buzzer from a separate power supply.
You do realize that it will take a pretty hefty current flow to make this thing trigger, right?
It takes over 1 Ampere to make enough of a voltage difference across the resistor to reach the 0.7V needed for the transistor to turn on.
|
H: Filter only with capacitor
I have some questions about capacitors and the circuit shown below:
The original signal including noise is shown below:
There are three waveforms shown in one picture.
The top, middle, and bottom waveforms are connected to 4.7µF, 0.1µF, and 100µF respectively. Why is the top one smoother than the middle one? Why is the last one distorted and attenuated? If I change GND to 2.5V, will it still be able to filter out the noise? If I can filter out the noise with only a capacitor, why use an RC?
AI: Your distortion is due to the source being unable to drive as much current for the positive half of the waveform.
But I will not guess your source.
All capacitors use electrodes, and the surface area and chemistry determine the effective series resistance (ESR), which can load AC waveforms and change the slope of the voltage and the peak amplitude.
Usually that's a good thing, but it all depends on the ratio of the current-limited driving source resistance to the load's capacitive impedance and its internal ESR.
simulate this circuit – Schematic created using CircuitLab
Although I could guess why your positive wave slope reduces with higher resistance, let's see here how Vpp and Ipp change at some arbitrary low frequency and fixed series resistance.
|
H: Const type array in Vivado HLS
Could someone explain how HLS treats arrays that are declared constants? I declare an array as:
const uint8 myArray [100][100] = {....};
and then access the array by element in a loop. According to UG902 on p.328, such an array is implemented as a ROM, and consequently I need to partition it to get more reads per clock cycle. Why would HLS implement this array as a ROM and use resources instead of converting it to ties of VCCs and GNDs? Do I need to explicitly tell HLS to implement it as ties to VCC and GND?
AI: First, FPGAs can't arbitrarily configure single transistors to do things such as making a tie. An ASIC designer can, but an ASIC is not an FPGA.
"Why would HLS implement this array as a ROM and use resources instead of converting it to ties of VCCs and GNDs?"
In your mind, what constitutes a "ROM" bit in an FPGA? Is it a transistor pair in a push-pull configuration that connects the common node to either Vcc or GND? Let's assume it is. So what is driving the gates of those transistors so the stack holds state? An SRAM? Well, that's a bit pointless, because now you need those two push-pull transistors plus six transistors for the SRAM. You might as well just use an SRAM.
If you don't use an SRAM, then you need to configure a logic block as a lookup table, but those use many more transistors per bit and can be used to do things other than just storing bits, so it's a waste.
An FPGA is not a sea of transistors you can individually manipulate. It's a sea of logic blocks.
|
H: How to find I(t)?
So I'm trying to find the current I(t) passing through the node above the switch.
I have already found the DC response of the RC circuit which is
in the form of:
So, after the switch closes, a short circuit is created, and here is the confusion: is the current going to take the shorter path, ignoring the 4k resistor and the 12V source, so that no current flows through them? With that being said, I don't know whether to consider them in my calculation of I(t).
What should I do?
AI: After the switch closes:
You always have to consider everything that will have an effect. In this case, the perfect short basically makes the left and right current loops independent of each other, which means that the 12V will not affect the 36V. Write out the loop equations and you will see: there is no component that carries the currents of both halves, and therefore no voltage drop depends on both loops. The only thing that carries over from before the switch closes is the cap voltage, which is an initial condition.
|
H: Does ISA bus (or PC/XT bus) have some means of arbitration to resolve bus contention?
My understanding of ISA bus is that the CPU places an address on the bus and any expansion card is free to respond to that address by taking control of the data bus. I presume that tri-state buffers are used. So my question is, are there any arbitration mechanisms in ISA and/or PC/XT-bus to prevent two cards from responding to the same address? My understanding of tri-state logic is that if this were to occur the tri-state buffers would likely get fried. Perhaps there was some sort of lesser mitigation to prevent such damage in the case of contention?
Looking online I see no mention of arbitration or damage mitigation, and yet I also know of no stories of people with fried ISA expansion cards due to address overlaps.
AI: PC and XT
The original IBM PC simply extended the Intel chipset bus to connectors using buffer drivers. The clock rate on the card bus was the exact same as the clock rate used for a CPU cycle. So with approximately \$4.77\:\text{MHz}\$ (derived by dividing by 3 a \$14.31818\:\text{MHz}\pm 5\:\text{ppm}\$ crystal rate) on the PC's CPU, this meant that a typical 6-cycle, 8-bit I/O bus transaction would take about \$1.26\:\mu\text{s}\$. This was consistent with the technology at the time, so boards could decode and latch addresses using middle-of-the-road (in terms of speed) and reasonably-priced devices. IBM would eventually publish a fairly complete set of documentation on the IBM PC, XT, and PC/AT that included detailed schematics that were well laid out and understandable and a complete listing of their BIOS source code (in assembly), as well.
The PC and XT simply used the bus design that reflected Intel's chip design, without extension features (that I'm aware of.) If you tried to increase the clock rate of the CPU, then the clock rate of the bus would also increase and this put pressure on the boards. But I don't recall many attempting to do this, so it wasn't an issue.
AT
With the advent of the PC/AT and the 80286, a new 16-bit I/O transaction and 16-bit memory transaction became available. Intel also changed over to the new 82284 clock gen chip and the 82288 bus control chip. Additional DMA channels and interrupt signal lines were added by IBM and an arbitration transaction was added so that add-on cards could replace the platform CPU as the bus owner. (A little more on that, later.)
The new standard limit for the CPU was now \$6\:\text{MHz}\$. The bus rate was similarly increased and newer boards needed to keep up. IBM also introduced a number of new cards for the system.
The 80286 had four more address lines (going from 20 to 24) and could now enter a new protected mode of operation to gain access to these new lines. While Intel was able to allow the transition from real mode operation to protected mode using appropriate software instructions, they were rushed to get the chip out to the marketplace and did not manage to successfully field the new CPU with the ability to switch back to real mode. As a result, the only way back from protected mode to real mode was through a processor reset. IBM handled this problem through the keyboard interface, using the keyboard (and memory in the calendar IC they used) to force a hardware reset when instructed to do so. The BIOS supported transitions back and forth between modes and was able to hide the fact that the keyboard needed to reset the CPU each time a request was made to get back to real mode operation.
Wider bus transfers on the PC/AT bus now also supported faster bus cycle rates; a "byte swapper" was used to port around low order and high order bytes on the bus; and the new refresh cycle logic used discrete circuits.
The rush for more CPU speed
People quickly discovered that they could increase the clock rate of their expensive IBM PC/AT to about \$8\:\text{MHz}\$ by simply replacing the clock crystal. I did this and found that I could successfully push the system and the boards I used to about \$8.5\:\text{MHz}\$ before things started to get iffy. (I couldn't reach a consistent \$9\:\text{MHz}\$ on my system, so I settled in at \$8\:\text{MHz}\$ and left it there.)
The level of skill needed (and tools required) to design a motherboard was relatively low at the time. Almost anyone could find inexpensive parts and do decent layout that would work fine at these frequencies. And a lot of "mom and pop" motherboard makers soon began to enter the scene. (IBM's price point was very high for most people.)
Perhaps the first truly successful (able to emulate the IBM hardware with 99% compatibility) PC replacement was Kaypro's 286i product. Before this, there were usually too many "issues" to make the products sufficiently acceptable to the business market (though hobbyists were often okay.) Kaypro's entry was about US$2k cheaper than IBM's, so it very quickly rolled out.
As more and more competitors solved the compatibility issues and began to compete, Intel started to roll out faster spec'd 80286 CPUs, too. Board makers would incorporate these newer CPUs, include faster logic chips so the bus could run faster, and we began to see \$8\:\text{MHz}\$, \$10\:\text{MHz}\$, and even \$12\:\text{MHz}\$ offerings. But this almost immediately put pressure on the add-on cards. Older cards simply couldn't be used and newer ones were too few, too far between, and consumers faced buying a faster system that greatly reduced the number of add-on cards they could buy and successfully use.
While a few companies attempted to isolate the add-on card bus rate from the internal Intel bus rate with discrete chips (with some success), the sheer number of "mom and pop" motherboard makers and the need to separate the clock rate of the CPU from the cycle time of the bus opened the door for a new company, Chips and Technologies (aka C&T), to produce an ASIC that got this job done. Very quickly after, new motherboards entered the market allowing the ISA bus cycle time to be kept (relatively) independent of the Intel CPU clock rate. Since Intel was meanwhile continuing to increase the maximum CPU frequency, this was a godsend to the many competitors, who didn't have the internal horsepower or financing to develop ASICs but who could certainly use them in new products.
As a result, the "frequency wars" started in earnest and there was hardly a month going by where there weren't new motherboard offerings with increasing CPU clock rates. The decoupling of CPU frequency from bus frequency was a huge win for C&T, too, who did quite well in the process.
Just as a note, I believe the decoupled ISA bus operated asynchronously to the platform CPU with one exception: the RESET line to the platform CPU.
I/O and Memory and DMA
The I/O and memory bus transactions are distinct, but in most ways quite similar to each other. It's just that different boards would respond. The original 8-bit I/O transaction, for example, was 6 bus cycles long. But with the PC/AT's newer, wider bus, a 3-cycle I/O transaction was also included.
It was the job of each add-on board to latch and decode the address and associated signals they were interested in (IOR or IOW, for example, for cards responding to I/O bus cycles.) They then had a certain number of clocks to respond for a standard transaction. An I/O card could, however, assert IOCHRDY if it wanted added bus cycles to complete its transaction.
With the advent now of both 8-bit and 16-bit transactions with the PC/AT ISA bus, a few issues arose. For example, a 16-bit I/O slave add-on could not force the bus master (which may or may not be the platform CPU) to execute a 16-bit access when the owner only wants an 8-bit. Similarly, a bus owner intending on a 16-bit access cannot order an 8-bit slave add-on to perform a 16-bit access. So there are added signal lines to aid in these circumstances.
The DMA access cycles were a little different from the other two in this sense: DMA would use simultaneous activation of I/O and memory command signal lines to allow data to be placed onto and retrieved from the bus during the same cycle. Here, for example, the address placed onto the bus is for memory and NOT for the I/O card (which should not use it.) (The AEN is activated to indicate to the I/O card not to use the address.)
Add-on boards responding to I/O addresses were set at unique locations to avoid conflict. IBM provided guidance about this for important cards (video display, serial port, parallel port, interrupt controller, and so on), but many add-on board makers would also include means to adjust the I/O address so that if you used two or more of their boards, they would work okay together. In general, the system worked pretty well and there were few problems. (Most issues related to the graphics memory required for various types of display controller boards.)
Arbitration
Technically, there actually IS an arbitration cycle. It's just not what you are asking about. Instead, it is a means by which another bus master (presumably residing on an add-on card) can claim ownership of the bus as the master. This cycle actually starts out looking like a DMA transfer cycle and it is the DMA controller which first responds. The would-be bus master then has a fixed time in which to assert MASTER and obtain ownership. The DMA controller then tri-states its own address, command, and data signals. (I worked with a team on a MIPS R2000 add-on card for the IBM PC/AT, circa 1986.)
|
H: STM32F042K6U6 BOOT0 pin? Can not understand how to program it with DFU mode
I'm planning out a board and trying to make sure I understand how to enter DFU mode and how to run the actual program on an STM32F042K6U6. I want to use the K6U6 for the small size, which also rules out using an ST-LINK to program it (I don't want to use the board space to connect the header).
I've used the K6T6 variant before which has a dedicated BOOT0 pin which I pull low to run my program.
The K6U6 doesn't appear to have a dedicated BOOT0 pin, but, if I'm reading the datasheet correctly, apparently PB8 or PF11 can be configured as BOOT0. But I don't see how.
I'm using ST's CubeIDE btw.
Thanks
AI: All packages have a pin with the BOOT0 role, which must be correctly driven at reset for the desired startup mode, regardless of whether you ever plan to change it.
If the pin is also available as a GPIO, then you need to keep your GPIO usage compatible with an approach such as having a pulling resistor that sets the state at reset, after which you are free to override the pulling resistor with a different input or output state - of course, you must keep in mind the power cost of asserting a signal against a pulling resistor.
That said, it would be a severe mistake to design yourself a board with no means of access to the two key SWD signals, and really nRST and a UART too. Take them only as far as pads you can solder under a microscope or preferably hit with a pogo fixture, but don't skip them. There are just too many development possibilities you give up if you do.
|
H: Which resistors control the gain of this filter?
Update 2: Never mind Update 1; the question went far beyond its scope. The accepted answer was what I expected.
Update 1: I have the filter (bandpass) design below, created using the Texas Instruments filter design tool. I am also adding my specs to the question, in case it helps.
Gain at center freq = 1 V/V
Center freq = 1160 Hz
Passband width = 80 Hz (I figured lowering this made the filter design easier and more feasible, though I would expect the other way around)
Stopband width = 400 Hz (because I want to attenuate ±300 Hz by 20 dB; I chose 400 instead of 600 just in case, thinking about the non-ideal problems that may occur)
Stopband attenuation = -40 dB (I have chosen 40 dB just in case; 20 or so is enough)
Allowable passband ripple = 1 dB (not sure what this affects; this was the default in the filter design software)
I can get E12 resistors (10%) and E12 capacitors (10%)
These are also the specs I used in the design tool.
The filtering action of the circuit below is close to what I want. I have examined the frequency response with an oscilloscope + signal generator combination. I was expecting a gain close to 1 V/V, but it is actually 0.15 V/V or so. Therefore, I want to increase the gain without affecting the center frequency. Is this possible? I thought the gain could be set by the ratio of R5_S3 and R4_S3, and changed R5_S3 to 10k, but it did not affect the gain; it made no difference, I can say.
So, I wonder which resistors can control the gain of this filter, if it can be controlled of course.
AI: Every resistor in that filter impacts both the gain and the frequency response. Ditto the capacitors. I'd just put a gain stage in front of it, or behind it.
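For a rough idea of what that extra stage could look like (the values here are assumptions based on your reported 0.15 V/V, not part of the original design): a non-inverting op-amp stage with E12 values \$R_f = 56\:\text{k}\Omega\$ and \$R_g = 10\:\text{k}\Omega\$ gives
$$A_v = 1 + \frac{R_f}{R_g} \approx 6.6, \qquad 0.15\:\text{V/V} \times 6.6 \approx 1\:\text{V/V}$$
which would bring the overall gain at the center frequency back to roughly unity without touching the filter's tuning.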
|
H: How to use an Arduino to control an ESC, and do I need to flash it?
Basically, I want to use a PWM wave or something similar from an Arduino to directly control the speed of a brushless motor through an ESC.
I saw an example in the Arduino Brushless Motor Control Tutorial | ESC | BLDC by Dejan. However, I saw some other people mention flashing ESCs, such as in Flash ESCs with ANY Arduino!
But I was confused about why they needed to flash the ESC.
Can I just control the speed of the motor by sending a PWM wave to the ESC directly from the Arduino? I want to use a BLHeli ESC.
AI: ESCs are designed to work with radio control receivers that have servo outputs, so they expect to see a servo pulse signal (typically a 1~2ms pulse repeated 50 times per second). This signal can be produced with an Arduino using the Servo library.
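A minimal sketch of that approach (the pin number and timing values are assumptions, and most ESCs also need to see minimum throttle for a short time at power-up before they arm - check your ESC's manual):
#include <Servo.h>

Servo esc;  // the ESC is driven exactly like a servo

void setup() {
  esc.attach(9);                 // ESC signal wire on pin 9 (assumed)
  esc.writeMicroseconds(1000);   // minimum throttle so the ESC can arm
  delay(3000);                   // give the ESC a moment to finish arming
}

void loop() {
  esc.writeMicroseconds(1300);   // roughly 30% throttle
  delay(5000);
  esc.writeMicroseconds(1000);   // back to minimum throttle
  delay(5000);
}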
The standard firmware in most ESCs is optimized for running a model aircraft motor, and usually has a fairly slow throttle response to help the motor run smoother. 'Flashing' an ESC refers to replacing the firmware with one optimized for drones, which need the fastest possible throttle response for best stability. You can now buy ESCs which already have this firmware in them, which saves having to do it yourself.
|
H: Cannot initialize LPUART1 in STM32CubeIDE on b-l072z-lrwan1?
I was having a problem initializing the LPUART1 on the b-l072z-lrwan1 using the built-in CubeMX code generation in STM32CubeIDE.
The problem was that when I started debugging, the code seemed to run fine until execution reached MX_LPUART1_UART_Init(); the debugger kept prompting
Target is not responding, retrying...
Error! Failed to read target status
and then the debugger shut down.
My question is how to properly initialize the LPUART1? Does it require any extra work to do in order to use this peripheral?
--
Anyway here's my LPUART1 peripheral configuration
static void MX_LPUART1_UART_Init(void)
{
/* USER CODE BEGIN LPUART1_Init 0 */
/* USER CODE END LPUART1_Init 0 */
/* USER CODE BEGIN LPUART1_Init 1 */
/* USER CODE END LPUART1_Init 1 */
hlpuart1.Instance = LPUART1;
hlpuart1.Init.BaudRate = 9600;
hlpuart1.Init.WordLength = UART_WORDLENGTH_8B;
hlpuart1.Init.StopBits = UART_STOPBITS_1;
hlpuart1.Init.Parity = UART_PARITY_NONE;
hlpuart1.Init.Mode = UART_MODE_TX_RX;
hlpuart1.Init.HwFlowCtl = UART_HWCONTROL_NONE;
hlpuart1.Init.OneBitSampling = UART_ONE_BIT_SAMPLE_DISABLE;
hlpuart1.AdvancedInit.AdvFeatureInit = UART_ADVFEATURE_NO_INIT;
if (HAL_UART_Init(&hlpuart1) != HAL_OK)
{
Error_Handler();
}
/* USER CODE BEGIN LPUART1_Init 2 */
/* USER CODE END LPUART1_Init 2 */
}
The MspInit Function
void HAL_UART_MspInit(UART_HandleTypeDef* huart)
{
GPIO_InitTypeDef GPIO_InitStruct = {0};
if(huart->Instance==LPUART1)
{
/* USER CODE BEGIN LPUART1_MspInit 0 */
/* USER CODE END LPUART1_MspInit 0 */
/* Peripheral clock enable */
__HAL_RCC_LPUART1_CLK_ENABLE();
__HAL_RCC_GPIOA_CLK_ENABLE();
/**LPUART1 GPIO Configuration
PA14 ------> LPUART1_TX
PA13 ------> LPUART1_RX
*/
GPIO_InitStruct.Pin = GPIO_PIN_14|GPIO_PIN_13;
GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
GPIO_InitStruct.Pull = GPIO_NOPULL;
GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_VERY_HIGH;
GPIO_InitStruct.Alternate = GPIO_AF6_LPUART1;
HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
/* USER CODE BEGIN LPUART1_MspInit 1 */
/* USER CODE END LPUART1_MspInit 1 */
}
else if(huart->Instance==USART2)
{
/* USER CODE BEGIN USART2_MspInit 0 */
/* USER CODE END USART2_MspInit 0 */
/* Peripheral clock enable */
__HAL_RCC_USART2_CLK_ENABLE();
__HAL_RCC_GPIOA_CLK_ENABLE();
/**USART2 GPIO Configuration
PA2 ------> USART2_TX
PA3 ------> USART2_RX
*/
GPIO_InitStruct.Pin = STLINK_RX_Pin|STLINK_TX_Pin;
GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
GPIO_InitStruct.Pull = GPIO_NOPULL;
GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_VERY_HIGH;
GPIO_InitStruct.Alternate = GPIO_AF4_USART2;
HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
/* USART2 interrupt Init */
HAL_NVIC_SetPriority(USART2_IRQn, 0, 0);
HAL_NVIC_EnableIRQ(USART2_IRQn);
/* USER CODE BEGIN USART2_MspInit 1 */
/* USER CODE END USART2_MspInit 1 */
}
}
AI: The pins PA13 and PA14 you are trying to use for LPUART are already in use as the SWD/JTAG pins for debugging, so the debugging stops immediately when you change the pin configuration. Use another set of pins for serial comms or stop using the debugger.
|
H: Circuit analysis with dependent current source and tricky voltage source
I've been wasting a lot of time on this problem. I did get really close to a solution, which took a long time, but I still wasn't able to finish it.
I am wondering whether I might be approaching this problem in the wrong way, which is the main thing that bothers me.
The question was to find the voltage Vb. I tried both mesh and node analysis, and also tried supernode/supermesh.
Below is as far as I got toward a solution.
I would be glad if someone could enlighten me on how to approach this kind of question, as I'm having some trouble with these.
Thanks.
AI: Put the bottom common node at ground; then \$V_b \equiv V_{R_2}\$. The circuit becomes:
simulate this circuit – Schematic created using CircuitLab
KVL says: \$V_b=V_0+R_EI_E\$ where \$I_E=I_B+(1+\beta)I_B=I_B(1+\beta)\$ and \$I_B=I_{R_1}-I_{R_2}=\frac{Vcc-V_b}{R_1}-\frac{V_b}{R_2} \$. Putting it all together with a bit of trivial maths readily gives (I'll skip steps):
$$V_b+V_b(1/R_1+1/R_2)R_E(1+\beta)=V_0+\frac{V_{cc}R_E(1+\beta)}{R_1} \\V_b\left( \frac{R_1R_2+(R_1+R_2)R_E(1+\beta)}{R_1R_2}\right)=V_0+\frac{V_{cc}R_E(1+\beta)}{R_1}$$
solve for \$V_b\$ and you're done.
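Explicitly, that last step gives
$$V_b=\frac{R_1R_2}{R_1R_2+(R_1+R_2)R_E(1+\beta)}\left(V_0+\frac{V_{cc}R_E(1+\beta)}{R_1}\right)$$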
|
H: Thevenin voltage question
I have to find V Thevenin in that circuit.
simulate this circuit – Schematic created using CircuitLab
I did :
\$(\frac{V_1}{R_1}-I_1)*R_2 \$
It appears it's not right.
Also, I'm not sure I can divide the voltage source by the resistor to find the current there.
Thanks for your help
AI: Using KCL and KVL you can find:
$$\frac{\text{V}_1-\text{V}_\text{th}}{\text{R}_1}+\text{I}_1=\frac{\text{V}_\text{th}}{\text{R}_2}\tag1$$
Using the given values, we get:
$$\frac{-30-\text{V}_\text{th}}{5000}+10^{-3}=\frac{\text{V}_\text{th}}{5000}\space\Longleftrightarrow\space\text{V}_\text{th}=-\frac{25}{2}=-12.5\space\text{V}\tag2$$
I also checked the answer using LTspice and -12.5 is the correct answer.
|
H: PCA9456A i2c switch channel 1 not showing up
I am using a PCA9456A I2C switch to help me toggle between I2C devices with the same address, but I am having problems opening channel 1 (the second channel out of 4) and I have run out of ideas about what is causing it.
Here are the code and schematics. I am using the busio and board libraries in Python.
import busio
import board
i2c = busio.I2C(board.SCL, board.SDA, frequency=100000)
i2c.writeto(0x70, bytes([0x0F]), stop=True) # 0x0F = b00001111 which turns all channels on
With this code I am able to toggle the other 3 channels (whether only 1 channel is enabled or any combination of the 3).
I am planning on using it on a Raspberry Pi 4.
What I have tried checking:
I have tried double checking all connections and whether they are
connected to the correct pins. EDIT: This turns out to be false
I have tried changing the pull-up resistors' value (decreasing and increasing it)
I have tried creating another circuit using another PCA chip and a different I2C device than what I am supposed to be using. It is always channel 1 that I cannot enable
Here is a picture of the I2C scan; you may notice that there is a gap in the addresses at 0x49. That is the device connected to channel 1.
AI: If that is the real schematic of the device, channel 1 has SDA and SCL swapped, so it will not work until you swap the pins somehow.
|
H: Equivalent circuit of a simple transformer?
EDIT: My question is: How can you translate the first circuit into one that uses simple (linear) electrical elements and, as a result, can be solved using mostly Kirchhoff's laws?
I am trying to analyse how the frequency of the input voltage affects the amplitude of the output voltage in the following circuit.
simulate this circuit – Schematic created using CircuitLab
Everything we need to know is given: \$V_{in}\$, angular frequency \$ω\$, \$a\$, \$R\$ and \$C\$. I think this circuit is supposed to work like a band-pass filter, but I am not used to working with transformers. That is why I am trying to find the equivalent circuit of this element. Is the next circuit equivalent to the above? Likely, it is not, because transformers are supposed to change the voltage across the secondary coil, at the cost of current. If not please suggest the correct one, or explain a better way of understanding the circuit (What kind of impedances might occur, if the transformer is not ideal?).
simulate this circuit
AI: What kind of impedances might occur, if the transformer is not ideal?
Ignoring high frequency parasitic capacitance, the transformer equivalent circuit is this: -
Picture from here.
So your interpretation is somewhat erroneous: -
Your "L" matches my Lm (magnetization inductance)
Your "R" matches my Rp (primary copper losses)
Your "R1" matches my Rs (secondary copper losses)
Your "aL" is not relevant
My "Rc" represent the cores losses)
Given that your tuning capacitor is on the secondary, it will series resonate with "my" Ls and this is the main componentry that produces resonance. However, the primary leakage inductance (for a transformer) is just as relevant and what folk normally do is lump primary and secondary leakage inductance together and calculate resonance based on that.
My Rp and Rs will dampen the effect of resonance making it less peaky.
|
H: BNC terminator at end of T junction
I'm planning to get an oscilloscope and a signal generator.
I want to connect the signal generator to the oscilloscope, and it seems a 50 ohm terminator is needed.
I wonder if I can use a BNC T junction and put the terminator on one end, with a cable towards the oscilloscope on the other end, or should the terminator be inside the chain?
AI: Usually signal generators have a 50 ohm source impedance, so they can drive BNC coax just fine. Some oscilloscopes have built-in 50 ohm termination too, so external termination is not needed. If the oscilloscope needs external termination, just put the T junction at the scope, with one end to the coax and one end to the terminator.
|
H: What makes an input GPIO pin so sensitive?
I know that an input pin has a high impedance, but how does a high impedance make such a pin so sensitive?
According to https://forum.arduino.cc/index.php?topic=454553.0 -- #4 mentioned
If the input impedance is too high, say 100MΩ, then you'd need only
50nA to get 5V. This would make the input far too sensitive
Yes, I know the calculation, and this indeed follows Ohm's law. However, my question is that in order to make the pin read a HIGH signal, all the pin can control is "how much current it draws", not "how much voltage the environment applies". From my understanding, as long as the environment cannot provide 5 V, the 100 MΩ can never see 5 V. (The logic reading should depend on the input voltage applied, not on how much current is drawn!)
Based on my assumption above, my questions are:
How can a high impedance input GPIO be so sensitive if the environmental factor (voltage) cannot be controlled by the GPIO pin? (Just like you won't get a 5 V circuit from a single 3.7 V 18650 battery.)
Is it true that environmental factors can create a 5 V voltage difference at the high impedance GPIO input pin? From my understanding, as long as the environment cannot provide a 5 V voltage difference (say 1 V only), that GPIO shouldn't read a HIGH signal.
AI: However, my question is that in order to make the pin read a HIGH signal, what can the pin control is that "how much current should it draw" but not "how much voltage can the environment apply".
We don't "make" a pin read a signal by drawing current from the pin at that moment. The input buffer is always reading the signal, and our microcontroller chooses to act upon it in a certain way. Here's an example of what a GPIO's input buffer might look like:
[source]
Notice the devices Q1/Q2/Q3/Q4. They are FETs, short for field-effect transistors, meaning that they conduct based on an electric field related to the gate, or the pin at the left connected to 'A', the input.
Assume 5V logic. Q1 is designed so that it conducts when its gate has a lower voltage than VDD (i.e. +5v). Q2 is designed so that it conducts when its gate has a higher voltage than VSS (i.e. ground). When the signal is driven to either logic high or logic low, either Q1 or Q2 conducts, never both. This creates a signal at the gates of Q3 that is logic low or logic high, respectively, making the same guarantee for Q3/Q4. Thus, the output 'Q' is valid.
Now let's imagine that 'A' is at around 2 V, e.g. because it's floating. Now, Q1 is conducting since 2 volts is less than 5 V, and Q2 is conducting since 2 V is more than 0 V. We've created a short circuit where power is connected to ground through Q1 and Q2, which could cause unusual currents to be drawn, instability of the whole chip, or even physical damage. Moreover, Q3 and Q4 now have an invalid input so 'Q' is also invalid, as well as anything inside the chip that relies on the value of Q.
There's a physical reason for why these charges can persist. Recall that a FET is a field-effect transistor, whose behavior is mediated by a physical electric field. Here is the construction of an N-channel MOSFET (such as Q2 in the diagram):
[source]
Notice how the gate is separated from the source and the drain through a layer of oxide (the grayish rectangle). This oxide is an amazing insulator. When the transistor is "on", there's excess charge build-up on the gate, which causes changes to the energy levels in the channel allowing current to flow. The important thing is that the gate is a dead-end to electrons, meaning that they can't go anywhere. Almost no current is drawn, and even small stray charges can influence the behavior of a floating pin.
In fact, the oxide is such a good insulator that we can use floating signals for technological benefit, in special applications. Flash memory (as seen in SSDs) contains a large number of special MOSFETs with their gates floating. By using special physical phenomena to inject a charge into those gates, we can keep a charge on a floating node for years as a form of data storage.
However, a Raspberry Pi GPIO isn't a flash cell, but rather an exposed pin. By a number of physical phenomena, both internal to the chip and external such as RF and static buildup, charge is easily, yet unpredictably injected onto the gates of input FETs. If the signal isn't driven with either a pullup/pulldown or some sort of input signal, the voltage on the pin will be unpredictable and will lead to the issues described above.
How can a high impedance input GPIO be so sensitive if the the environmental factor(volt) cannot be controlled by the GPIO pin? (just like you won't get a 5V circuit from a single 3.7V 18650 battery)
This sentence doesn't make a lot of sense to me. A GPIO on a CMOS-technology chip is sensitive because it measures voltage, without needing to draw much current from the pin.
is it a truth that environmental factor can make a 5V voltage difference into the high impedance GPIO input pin? From my understanding, as long as the environmental factor cannot provide a 5V voltage difference (say 1V only), that GPIO shouldn't read a HIGH signal.
It is true that an environmental factor can place a 5V voltage onto a high-impedance pin. The little shocks we get when we accidentally drag our shoes on a carpet and then touch a doorknob are easily in the kilovolt range. There are ESD protection networks inside the GPIOs such as the following:
[source]
While they bleed off any excess voltage outside the rails, they don't actually solve the problem of floating signals. I could pick up a stray 400 V charge from walking around, and then touch a floating input pin on a circuit connected to a 5 V power supply. As I transfer charge, they clamp the pin voltage to between 0 and 5V, meaning that depending on the polarity of the charge on my hand, I could force the pin high or low.
|
H: What does the plus (+) mean in a voltage source symbol?
Does the plus in voltage source symbol represent the highest potential end or does it represent the positive end?
Is that answer true for IEEE and IEC and NEC?
It may seem trivial for those who know but I did my research and did not find the answer.
UPDATE:
My question arises due to the fact that there is electron flow (the real one, where electrons are the charge carriers) and conventional flow (the one adopted to avoid thinking "negatively" twice).
If the plus means the positive end, then electrons are moving into that end in electron flow, since electrons will go from the negative end to the positive one.
If the plus means the highest potential end, then electrons are moving out of that end in electron flow, since electrons will go from high potential to low potential.
AI: It is both the positive end, and the end that is at higher potential.
Because the electron has negative charge, it has lower potential energy when it is at a higher potential, and vice versa. So electrons tend to flow towards higher potentials.
|
H: Read serial data from unknown source using Raspberry pi
I have a fuel dispenser, which has one keypad and two displays, each display has three of 7 segment LCDs.
I want to control the fuel dispenser with a PC or a phone. The way to do this, to my knowledge, is by attaching a Raspberry Pi to the fuel dispenser, programming the Raspberry Pi so that it can imitate the keypad, and reading what is written on the display so that I can insert it into a database as a record. Then I can communicate with the Pi in different ways.
Now the imitation of the keypad was very easy and I have no problems with it. The problem I am facing is that I am still not able to read what is written on the display.
Let me first show you what does the display looks like:
Now the keypad has 12 buttons (from 0 to 9, an A button, and an F button).
. If I click on A (which means I want to insert maximum amount): the size LCD shows nothing and the amount LCD shows one "0".
. If I click on F (which means I want to insert maximum capacity): the size LCD shows one "0" and the amount LCD shows nothing.
. If I just start pumping fuel with typing nothing, the display starts showing the calculations on the size LCD and its equivalent amount on the amount LCD.
Now data comes from the motherboard, which is sealed to the display through the 10 Pin IDC cable. After measuring each pin of the cable I found:
. 2 Pins of ground
. 2 Pins of 12V
. 1 Pin of 5V
. 2 Pins of 3V
. 3 Pins of -0.3V
So I started to look for which pin sends data. I used Raspberry pi 3B+, and I attached the ground of the raspberry pi to the ground of the cable, and the serial RX of the raspberry pi to each of the other pins (I used 1k Ohms resistor in between).
I found that:
. 2 Pins of 12V and 1 Pin of 5V have nothing
. 1 Pin of 3V sends data every 1 second (it also sends data when I click on the keypad)
. 1 Pin of 3V sends data every 2 or maybe 3 seconds (when I click on the keypad, it sometimes sends data and sometimes it doesn't)
. The other 3 Pins of -0.3V send only zeros (Even if I click on the keypad)
Now the problem is that I don't understand the data that I receive from the raspberry pi. I just see it as random numbers.
Here is what I see from one Pin of 3V:
This is from the other 3V Pin:
It even changes it value other than 165 sometimes.
This is the python code I am using on Raspberry pi:
import serial
ser = serial.Serial("/dev/ttyS0", baudrate=9600, bytesize=8, timeout=1)
try:
while 1:
response = ser.read()
if response:
print response
print ord(response)
print bin(ord(response))[2:]
print "---------------------"
#line = ser.readline()
#if line:
# print line
except KeyboardInterrupt:
ser.close()
I tried changing baudrate and bytesize, and received the same random numbers.
Here you can find the datasheet of the "PCF8576CT" LCD driver:
https://www.nxp.com/docs/en/data-sheet/PCF8576C.pdf
I apologize for making this long, and I hope you can find solution to this.
Thank you
AI: The protocol used for communication with the LCD controller chips is I2C. It can't be received with a UART as you are now trying. You would need a special I2C logger tool or a logic analyser to analyze it. Fortunately the RPi has an I2C master interface, so you could use it to talk to the LCD controller chips. Even an oscilloscope would help you look at the waveforms. If you can trace the I2C interface pins and work out from the LCD controller datasheet how to initialize and use it, it might work, but many initialization parameters depend on what type of LCD it is and how it is connected to the driver chip.
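If you go the I2C route, a minimal first step (a sketch only - the bus number, the SDA/SCL wiring to the Pi and the PCF8576C address are assumptions you must verify against the datasheet and your hardware) is to scan the bus from the Pi and confirm the driver chips acknowledge:
import smbus2  # pip install smbus2

bus = smbus2.SMBus(1)  # I2C bus 1 = GPIO2 (SDA) / GPIO3 (SCL) on a Pi 3B+

found = []
for addr in range(0x08, 0x78):
    try:
        bus.read_byte(addr)    # a device that ACKs its address will not raise
        found.append(hex(addr))
    except OSError:
        pass                   # no device at this address

print("Devices found: " + ", ".join(found))  # a PCF8576C typically answers at 0x38 or 0x39, set by its SA0 pin
This makes the Pi the bus master talking to the chips directly; it does not passively log the traffic the dispenser's own controller sends.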
|
H: Formula for how much power a motor generates
I'm interested in the formula for how much power a handcrank motor can generate.
I don't know much about electrical engineering, but I'm assuming it's related to a few factors.
The speed at which you are cranking
The amount of copper wrapping
But what I'm confused about is the exact formula. E.g.
What would generate more power: more wrapping of the coils or thicker wire? Is it based on total mass?
AI: What would generate more power more wrapping of the coils or thicker
wire? Is it based on total mass?
In general the more 'copper' (wire) and/or 'iron' (magnetic material) a generator has the more power it can produce. But deriving an exact formula is much more difficult than just weighing it.
The more efficient the generator is the more power it can produce without overheating, plus being more efficient makes it easier to crank. Lower efficiency means more heat, which may require a heatsink or fan which takes up room that could have been used by other parts.
Thicker wire reduces the resistance that causes loss due to current flowing through the coils, but makes the windings larger which increases the size of the generator. The more turns you have the more voltage it can produce without cranking too fast, but this also increases size unless you use thinner wire, which has higher resistance.
You can reduce the number of turns required by winding the wire around an iron core that concentrates the magnetic field. But this takes up room that could be used for copper, and introduces hysteresis and eddy current losses that increase at higher speed. Thin laminations of exotic grain-oriented silicon steel reduce core loss, but are expensive and difficult to manufacture.
You could also use larger, stronger magnets, and try to get them closer to the windings. But there's a limit to how close you can get them without touching, and core losses also increase. Large high strength magnets aren't cheap either.
So you must find a balance that produces the best result at the speed and size you want. The calculations required for accurate estimation of stator core size and shape, copper fill, magnet placement etc. are not easy because they involve 3-dimensional analysis of magnetic fields interacting with the various components. This is commonly done with a program called FEMM (Finite Element Method Magnetics).
If you can't handle that then just try to get the thickest wire and strongest magnets you can into it, with the number of turns needed to get the voltage you want.
|
H: Resistor parallel to optocoupler LED in Zener-stabilized circuit
I was watching a video about phone chargers where the following schematics was presented:
simulate this circuit – Schematic created using CircuitLab
In the video it was stated that this schematics is for the feedback circuit of a phone charger and that the resistor R1 is for stabilizing the optocoupler.
Question 1: How does the resistor stabilize the optocoupler, and why would one need to stabilize the optocoupler?
Question 2: I would argue that the voltage drop over R1 is limited to the forward voltage drop of the diode within the optocoupler. Does that make sense? If so, why would one place a resistor there? Doesn't that limit the current supplied to the LED, resulting in a slower turn-on time for the transistor of the optocoupler?
Thank you for your help :)
For reference video (Timestamp: 08:20): https://www.youtube.com/watch?v=bNoGCdX1IdQ&t=500s
AI: Low-voltage Zener Diodes tend to have a significant amount of leakage current while the voltage is below or near the Zener voltage. This causes the opto-coupler to start turning on too early.
Adding a resistor in parallel with the Zener Diode swamps out the effect of the leakage current. That, in turn, makes the regulation voltage much more accurate.
[Edit]
Placing a fairly-low value resistor in parallel with the LED in the opto-coupler causes the combination of the resistor and LED to require more current before the LED begins to turn ON. This extra current must come through the Zener Diode.
The Zener leakage current is still there but that leakage current results in much less voltage across the LED. As you may recall, the transfer function for a LED is such that the LED will not consume significant current below the turn-on voltage of the LED.
If you choose the appropriate value of resistor, the LED begins to turn ON when the voltage across the Zener Diode is close or very close to its rated voltage.
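As a rough worked example (the numbers here are illustrative assumptions, not taken from the video): if the opto-coupler LED starts to conduct at about 1 V and a 1 kΩ resistor is placed across it, then roughly
$$I_{\text{turn-on}} \approx \frac{V_F}{R_1} = \frac{1\:\text{V}}{1\:\text{k}\Omega} = 1\:\text{mA}$$
must flow through the Zener before the LED lights at all. A Zener leakage of, say, 100 µA then only develops about 0.1 V across the parallel combination, so the opto-coupler stays off until the output really reaches the intended regulation voltage.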
|
H: Solving for inductance L in an AC series+parallel circuit is .. quartic?
This seems like a painfully simple question, but I keep ending up having to solve a quartic equation. So I'm missing something obvious.
Let's say we have an AC source feeding a load consisting of a series resistor, Rs, and then in parallel a resistor and an inductor, Rp and L. I know the frequency of the AC source, and I know the series and parallel resistors Rs and Rp. I also know the voltage of the source, \$V_S\$, and the voltage at the divider between Rs and the parallel elements; call this \$V_L\$.
Here's where things go wonky.
I can find the magnitude of the impedance across the parallel elements because \$V_L \over V_S\$ is simply the divider ratio across \$ R_p || L\$. However, when I try to solve the parallel equation for two complex numbers I end up with a horrific quartic expression for the magnitude of the impedance as a function of L.
Here's what I did:
(1) Start with the voltage divider:
\${V_L \over V_S} = { |Z_L| \over |Z_L|+R_s }\$
(2) Solve for \$|Z_L|\$:
\$ \alpha = {V_L \over V_S}, { |Z_L| } = { \alpha R_s \over { 1 - \alpha }} \$
(3) The right side is constant, so let's set up an expression for \$L\$:
\$ { Z_L } = { { R_p j \omega L } \over { R_p + j \omega L } } \$
(4) Since I want to know the magnitude of \$Z_L\$ I need to separate out \$Re\$ and \$Im\$, which means multiplying by the complex conjugate:
\$ { { R_p j \omega L } \over { R_p + j \omega L } } \cdot { { R_p - j \omega L } \over { R_p - j \omega L } } \rightarrow {{ R_p (\omega L)^2 } \over { R_p^2 + (\omega L)^2}} + j {{ R_p^2 \omega L } \over { R_p^2 + (\omega L)^2}} \$ ... !!!!
I succeed in creating two expressions for \$Re\$ and \$Im\$, and then solving for the \$ \sqrt { R_p^2 + X_L^2 } \$ is this when reducing to a function of \$L\$:
Let \$ |Z_L|=K \$, so that \$0=(\omega L)^4 (R^2 - K^2) - (\omega L)^2(2R^2K^2 + K^4) - K^2R^4 \$ ...
R, K & omega are all constants, but did I miss something obvious because solving a 4th order seems a bit ridiculous?
EDIT: Some more research and this form appears to be a biquadratic equation which is easy to solve. Still wasn't expecting to see this complexity solving for L.
SOLUTION: I forgot mag(a/b) = mag(a)/mag(b). Applying this to 3 yields a simple quadratic for f(L). Thanks @Barry below.
AI: Since you know both the source voltage and the voltage across the divider network, you can ignore phase and just use the magnitudes of the voltages and impedances. The magnitude of the parallel combination of the L and R can be found from the expression you had by dividing the magnitude of the numerator by the magnitude of the denominator. This will result in a much simplified expression than what you did. Apply that expression in the voltage divider equation. Multiply it out and place the term with the square root on one side and all the other terms on the other side. If you then square both sides, you will wind up with a quadratic equation for L
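As a sketch of that algebra (using the \$K = \alpha R_s/(1-\alpha)\$, \$\alpha = V_L/V_S\$ already defined in the question):
$$|Z_L|=\frac{R_p\,\omega L}{\sqrt{R_p^2+(\omega L)^2}}=K
\;\Rightarrow\;
K^2\left(R_p^2+(\omega L)^2\right)=R_p^2(\omega L)^2
\;\Rightarrow\;
L=\frac{K R_p}{\omega\sqrt{R_p^2-K^2}}$$
Only \$L^2\$ terms appear, so it is a simple quadratic (and it requires \$K<R_p\$, as expected, since the parallel combination can never exceed \$R_p\$).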
|
H: Is there an easier way to signal this active low relay module for the pi?
Two parts to this question:
Part 1:
I am planning on setting up this relay module with my pi (which afaik has gpio pins set to LOW at startup).
1.) Does this mean if I hook it up as the picture suggests on that page, that all the relay sub-modules will be on at startup?
2.) Does this mean if I want the relays to be off at startup, I need some sort of transistor outside the relay module to provide the HIGH signal constantly so that I can use a HIGH GPIO to turn on the relay (by making the transistor go from HIGH to LOW)? Is there an easier way to do this than to use an external transistor?
Part 2:
After briefly looking for other relay modules, many seem to have this active low behavior. What is the reasoning behind this? Especially if they are meant for microcontroller usage (being a module and not just a relay), wouldn't it make more sense to have the modules be active high?
AI: A Raspberry Pi has a well-defined GPIO configuration on startup: all GPIOs are inputs, 0-8 have weak pullups, and 9-27 have weak pulldowns.
You can use a suitable pullup resistor to ensure that the pin goes high when not driven (when using 9-27 take care to ensure that the resulting voltage is high enough).
As a result, the pins will have a logic-high voltage, until you a) configure them as output and b) drive them low.
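A minimal software-side sketch of that (the pin number and the RPi.GPIO library choice are assumptions - adjust to your wiring; it is still the external pull-up described above that keeps the relay off before this code runs):
import RPi.GPIO as GPIO

RELAY_PIN = 17  # BCM numbering, assumed wiring - change to suit

GPIO.setmode(GPIO.BCM)
# Configure as output and drive HIGH in one call, so the active-low input
# never sees a LOW glitch while the pin changes from (pulled-up) input to output.
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.HIGH)

GPIO.output(RELAY_PIN, GPIO.LOW)   # energise the relay (active low)
GPIO.output(RELAY_PIN, GPIO.HIGH)  # release the relay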
|
H: How does this transistor (pre-)amplifier work?
The following circuit is part of a larger circuit and processes analog audio. The input comes from the top-right and the output is the rightmost terminal of the pot, which then feeds into a straightforward audio amp chip (LM386N to be precise) before finally going to a headphone jack.
My basic read is that the signal comes in and
gets high-pass filtered by C10/R10
something ampey involving R11, Q1 and its base components, and C12
gets low-pass filtered by R13/C13
goes through the pot
What I really don't understand is what is going on with #2 (though a friend suggested the entire thing may be a single amplifier and the 4 parts can't functionally be separated).
Is this a single circuit or is my breakdown more or less accurate?
In either case what is #2 doing?
How does a transistor work when the signal is at the collector and not the base? In looking at amplifiers I've been unable to find a single example of this.
What is the function of the diode in all this?
What is the function of the pot? Is this related to volume, or is it related to the beta of the transistor as a friend suggested?
Keywords that can readily be googled or links will suffice for answers if you don't want to type it out.
AI: This is not an amplifier. Q1 will clamp the signal at power-on to keep DC from propagating. If the input has +DC on it, the DC will propagate until C10 charges. Q1 will clamp the DC until C11 charges. After C11 charges, there is no base current path and Q1 is effectively out of the circuit. D1 keeps Q1's base from going negative on power-off.
|
H: STM Cortex M7 memory map
I suppose this is a simple one, but for me as a first-time reader of the STM32F769 datasheet it is confusing. The memory map in this datasheet declares ITCM RAM at address zero, while this programming manual (pg. 32) declares that address zero is code space (which in my opinion must not be RAM).
My expectation is that it's all about some kind of aliasing; however, I didn't find confirmation in the datasheets. What am I missing?
AI: The Cortex-M can certainly execute code from RAM, so "code space" can be RAM. In fact, the "ITCM" is a RAM block specifically designed to contain (performance-critical) code which the processor can access efficiently. Since the RAM is of course volatile, the application has to initialize it with code e.g. from flash or some other external source before executing it.
|
H: Turning off and Flickering of LEDs for some reason
I've been following Ben Eater's You Tube series on building an 8 bit computer. As I built and connected more modules to the same power source, the LEDs connected to the registers started to flicker and turn off. I added a few more batteries in parallel and it worked fine. After I added more modules (ALU, RAM), they began to flicker and turn off again. Wrong data was being represented by the LEDs. I have double checked the connections and there seems to be no error in it. Is this a battery problem? How can I go about solving this issue?
Below is a video describing the mentioned problem:
Buggy circuit
AI: This sounds like a power problem. If you are using TTL chips, be aware that they are more power hungry than the equivalent CMOS chips.
It is not necessarily a battery problem. Breadboards are not known for being the best at power delivery, and there are certainly quality differences between breadboards. You're going to have power losses - voltage drops - in each and every connection. If you use thin and/or long jumper wires for GND and VCC, there may also be a significant voltage drop in the wires themselves.
(You may also want to consider the placement and ESR of your bypass capacitors.)
As Andy aka suggests, check your supply voltages. Measure directly between GND and VCC at each chip while the circuit is running.
If the wires or connections are not up to the task, adding more batteries in parallel will only go so far.
|
H: Single chip level shifter from LVCMOS to 12 V
I'm working on a nixie clock based on a STM32L476RG microcontroller and HV5530 chips. The microcontroller uses 3.3 V logic while the HV5530 really needs 12 V logic, which is a pain to implement. There are buffer ICs that raise the level from 3.3 V to 5 V, but I was unable to find any that works with 12 V. I experimented with a circuit based on the LM339 with a reference voltage of 0.6 V generated with a forward biased diode and pullup resistors to 12 V. It works, but I prefer to use a single chip. I've spent hours online trying to find a chip that does this, but none comply with 12 V output. I looked for both open drain outputs or push pull outputs without success.
Is there a single chip that does 4 channel, one way shifting from 3.3 V to 12 V?
AI: You're looking for a 74HCT input (1.5V threshold) with CD4000B outputs which do not exist.
But fortunately CD4000B parts have level-shifters that work from 3 to 15 V, or even 18 to 20 V.
One of them is the CD40109B.
|
H: Why shouldn't two AC sources be connected in parallel?
In data centers, servers usually have redundant power supplies, which makes it possible to plug them to two different UPS units. This way, the server continues to function correctly if one of the PSU goes down, or if one of the UPS units is shut down for replacement.
Some servers don't have redundant power supplies. In order to be able to still connect them to two UPS units (in order to be able to replace one unit without shutting down the server first), special devices called transfer switches are used: they can receive input from multiple sources, and provide power for single-corded equipment. A transfer switch is usually expensive and contains lots of electronics inside. Basically, automated transfer switches provide power from one source, and when they sense that the source is down, they fallback to another source.
What makes it impossible to have just a cable which would plug into two sources on one side, and to one server on another side? In other words, something like that:
but the other way around, i.e. not a cable with one male and two female connectors, but a cable with two male and one female connectors?
In other words, why is it possible to wire two DC power sources in parallel, but one cannot (or shouldn't) wire two AC sources in parallel? Is it because the waves may not be synchronized and could then cancel out each other? Or something else?
simulate this circuit – Schematic created using CircuitLab
AI: When two DC sources are paralleled, blocking diodes prevent back feeding from one supply to the other. The supply with the highest voltage will always supply current without being affected by the other.
There is no comparably simple and effective provision for paralleling two AC supplies. Two AC supplies from the utility can be obtained with matching phases and voltages, but if one goes down, it must be disconnected to prevent back feeding. Even dealing with small voltage or phase differences is not particularly easy.
|
H: Bandgap Reference 1.25V
[Taken from Razavi's Design of Analog CMOS IC]
Hi, I'm just wondering, how does he get the 1.25V in 12.21?
AI: He multiplies the value of the thermal voltage, \$V_T\$, by 17.2 and then adds the nominal base-emitter voltage, \$V_{BE}\$. Both \$V_T\$ and \$V_{BE}\$ will have been discussed at length in previous paragraphs.
|
H: A logic circuit with a NOT condition
I have the following problem to solve and Im a bit confused:
I have a ceramic factory that is based on a circuit with 4 outputs:
timer that outputs "1" when the oven is on (timer)
sensor that outputs "0" when temperature of the oven is too low
sensor that outputs "0" when temperature of the oven is too high
sensor that outputs "1" when ceramic is humid
Now I should find a function that outputs "on" on the following conditions:
If the oven is on AND:
temperature is between the correct interval
temperature is above the limit BUT ceramic is humid
If the oven is not on BUT ceramic is humid.
From my understanding I have 3 different logic gates, them being:
a) oven on(time = 1) and temperature is fine (sensor = 1)
b) oven on(time = 1) , temperature above limit (sensor = 0) and ceramic is humid (Hum = 1)
c) oven NOT on(time = 0) and ceramic is humid (Hum = 1)
I have tried to build the logic circuit to get the function but I'm kinda stuck. In my c) condition, Timer is 0 and Humidity must be 1, but there is no logic gate that supports this. Am I wrong?
Sorry, but I'm still learning!
AI: The first thing to do is draw a truth table. This will clarify your thinking and is much easier to understand than a list of bullet points.
Table 1. The required functionality. 'X' is don't care. The temperature columns hold the raw sensor outputs: 1 means the temperature is not too low / not too high respectively.
Timer   Temp not   Temp not   Humidity
 On     too low    too high   high       Out
-------+----------+----------+----------++-----
  1    |    1     |    1     |    X     ||  1
  X    |    X     |    0     |    1     ||  1
This has neatly reduced the problem to two lines. The two lines will require an OR gate to combine the logic.
Note that you don't need to check the LOW TEMP OK signal on line 2, as if it is over the high temperature setpoint then it must also be over the low temperature setpoint.
simulate this circuit – Schematic created using CircuitLab
Figure 1. A possible solution.
This uses four different types of logic gates. You can use DeMorgan's Theorem and Laws to modify the logic to use fewer gate types.
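Expressed as a Boolean equation (using \$T\$ for the timer, \$A\$ for the 'not too low' sensor, \$B\$ for the 'not too high' sensor and \$M\$ for the humidity signal), the two rows of the table combine as
$$\text{OUT} = (T \cdot A \cdot B) + (\overline{B} \cdot M)$$
which can then be mapped directly onto AND, NOT and OR gates.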
|
H: Flywheel current doesn't change direction
The motors are permanent magnet DC motors.
First I power the motor with a flywheel and a motor with no substantial load. (same kind of motor).
simulate this circuit – Schematic created using CircuitLab
I then disconnect the voltage source and the amperage dips negative for a moment and then returns to being positive. (The voltage stays positive throughout) Since the motor with the flywheel is being used as a generator shouldn't the current be always negative?
simulate this circuit
AI: Since the motor with the flywheel is being used as a generator
shouldn't the current be always negative?
No, it should always be positive (as your second circuit indicates).
So why does the current initially go negative? The unloaded motor has higher rpm due to not being loaded down with a flywheel, so it initially generates a higher voltage which pushes current into the flywheel motor. But the current drawn out of it causes it to slow down until it produces less voltage than the flywheel motor, which then pushes current back into it.
This behavior can be simulated with electronic components by using capacitors to represent the motor/flywheel inertia. Here's a simulation I made in LTspice:-
C1 and C2 represent the inertia of each motor, R1 and R2 are their internal resistances, and R3 and R4 absorb the 'iron' losses (magnetic, friction etc.). The flywheel motor has higher loss due to higher bearing loading and flywheel windage.
Here is the result:-
When power is applied the unloaded motor gets up to speed quicker due to its lower inertia (represented by smaller capacitance). During this time both motors are drawing current from the supply, but the unloaded motor draws less current due to lower torque load, producing less voltage drop across its internal resistance and allowing it to spin faster and generate higher voltage.
When switched off the unloaded motor initially generates higher voltage, which causes the current to go negative as it drives the flywheel motor. But this current creates a torque drag which quickly slows it down until its voltage drops below the flywheel motor. At that point the current turns positive again as the flywheel motor drives the unloaded motor.
|
H: How to modify an SD card so it will be read only?
I checked the SD card pins and their descriptions and saw that there is a "data input" and a "data out". If I covered or removed the data-in pin, would it block all write requests to the card?
AI: No. It would just render the card completely unusable. (OK, that's kind of a write protection too – but so would be smashing it with a rock.)
The SD card and the host talk a protocol, in which the host asks the card things like "could you please turn on", "go into this and that speed mode" and "give me this and that data".
The data in and out pins refer to the direction of the communication of host to card; without the data in path, the card couldn't be asked for data.
SD cards themselves don't offer a proper standardized built-in write protection. There's a small "notch" that some cards have that a reader can check. If it's there, it could consider the card write-protected. That's the same "write-protection" mechanism as in old music cassettes: totally up to the reader to support and fully up to the host operating system to respect (or not).
Some cards come with semi-/nonstandard commands that can disable writing in the card firmware – but that's just relying on software in the card instead of the host to do the write protection, and again, it's not standardized, so if supported, it's something only possible with a specific program.
|
H: Sample and hold circuit giving distorted output
I'm making a sample-and-hold circuit for a 3-bit flash/parallel ADC, and to allow the conversion enough time to happen I want to hold the input voltage steady for the duration of the conversion. I intend on sampling an audio signal from a phone or mp3 player.
I'm simulating the circuit first and I'm having some trouble with the output from the sample-and-hold part.
I'm using the LM358N opamp for the buffers and an IRL520 NMOSFET to switch. The opamps are on dual 9V, -9V supply
To switch the MOSFET, I'm using 0 and 9V signals at 1kHz and the input signal to the buffer is a 6V peak to peak sine wave at 500Hz.
Ideally, the output should be a staircase looking waveform but with my holding capacitor at 1uF the signal is distorted.
This is the scope output at 1uF.
I've tried quite a few different values for the capacitor to little success.
I initially thought it was due to the switch resistance forming a low-pass filter with the capacitor, but its Rds is pretty low (0.18 ohms).
My question is why is this happening and how do I solve it. I know I'm a bit new to asking questions here so if there's anything useful I've left out please let me know.
I'd really appreciate any help in solving this.
AI: Using a transmission gate or Analog Switch
The S&H design specs MUST be done before any serious design; e.g.
Vin range, f range, Zs source impedance, aliasing filter requirements, sampling rate SNR, N bits accuracy and sampling error i.e. a total error budget
Here I will just highlite sampling errors.
For example if Analog Op Amp has a current limit or emitter resistor of 220 Ohms, it will result in a rise time to 64% of target = RC during the sample time
choose the smallest C that does not decay more than dV in dt due to buffer bias current
choose the lowest CMOS input bias current.
choose non-piezo electric caps like plastic MF or NP0/C0G as all others* have a "memory" effect (ceramic*, electrolytic)
the sampling ratio and signal resolution in bits of Fs/Fmax greatly affects the anti-alias (Nyquist filter) steepness so be generous. (proof not shown)
edit
Problems in your design.
Sampling error: a time of 5% of 2 ms (2 ms = 1/500 Hz) means the cap must reach the input voltage within 100 us. So if dt = 100 us then choose C = Ic·dt/dV. Unfortunately the charging current is not limited by RdsON but by the op amp current limit, so this is a poor combination of op amp and sample cap. The output current could be increased by a factor of ~100 with complementary emitter followers inside the feedback loop. The cap could be reduced to 100 pF or more with metal film or C0G ceramic. The buffer op amp must be changed to CMOS, which may also have lower drive current, so choose wisely.
Sampling at 2x the maximum frequency means you can capture the correct peak amplitude ONLY if the sampling rate is in sync with the signal. They don't stress that the basic Nyquist criterion of 2f says nothing about signal quality. So, for your number of quantization bits, consider a much higher sampling rate - 3x fmax or more.
There exists a mathematical relationship for quantization noise relating to Nyquist Theory and SNR and N bits.
Search for it.
Your FET is polarized and relies on Vgs >> 2Vt, where Vs includes the signal, so it should be replaced with a CMOS TG (analog switch, e.g. a 4066, or better still a lower-R part).
If you are concerned about matching input bias offset voltage, consider the sample interval dt, the quantization error dV, and the buffer input bias current Ic, so that \$dV \ge dt \cdot I_c / C\$ (the droop stays within one quantization step).
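As a rough numeric illustration of that droop budget (the values here are assumed for a typical CMOS buffer, not taken from the question): with \$C = 100\:\text{pF}\$, \$I_c = 10\:\text{pA}\$ and a hold time of \$dt = 100\:\mu\text{s}\$,
$$dV = \frac{I_c\,dt}{C} = \frac{10\:\text{pA}\times 100\:\mu\text{s}}{100\:\text{pF}} = 10\:\mu\text{V}$$
which is far below the roughly 0.75 V LSB of a 3-bit converter spanning a 6 V range.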
|
H: Switching MOSFET with isolated gate
I am trying to isolate the battery which powers the gate control circuitry from the battery which powers the high-current load being switched by the MOSFET. Here is the basic circuit configuration:
simulate this circuit – Schematic created using CircuitLab
Do the sources V1 and V2 need to share a ground in order for the MOSFET to switch on and off, or will the circuit function as is?
AI: The MOSFET turns on when a voltage is applied between its Gate and Source. In your circuit there is no path from the negative side of V1 to the MOSFET Source, so it won't turn on. All that will happen is the entire circuit (M1/Rload/V2) going up and down in time with V1.
You don't need an actual ground, but you do need to get M1's Gate and Source to the same voltage as is across V1. If for some reason that cannot be done by connecting the two 'grounds' directly together (eg. one side is connected to AC mains, or the MOSFET is switching the 'high' side of the power supply) then you have a few options:-
A level shifting MOSFET driver such as the LTC7001, which creates a local output 'ground' using a charge pump to raise the voltage. This still requires a connection between the two circuits somewhere, but voltage on the MOSFET can 'float' relative to their shared connection.
An optocoupler which uses its isolated transistor as part of a driver circuit powered from the FET circuit.
A photovoltaic MOSFET driver such as the Panasonic APV1121SZ, which produces the required Gate voltage directly without needing any circuit powered from the FET side.
|
H: Signal fidelity from fiber optic crosstalk
I was watching a video about tapping fiber optic cables when it brought up some points from an early US govt document describing security concerns about fiber optic cables. The relevant part:
Tapping attacks are possible at several points within the network due
to component crosstalk. For example, contemporary demultiplexers
within network nodes separate each individual signal (or wavelength)
received from a single fiber on to separate physical paths. These
demultiplexers may exhibit cross-talk levels between 0.03% and 1.0%.
These cross-talk levels allow a little of each signal to leak onto the
wrong path. Yet these signals may have enough fidelity to permit an
attacker to detect their presence and recover a portion of data.
If I'm understanding this correctly, this means that after some fiber station de-muxes the signals, the signal that gets sent to someone's house has some "noise" that is actually cross-talk from someone else's signal. I'm unfamiliar with signal processing, but it seems to me that it would be possible to glean some information from that "noise." That would essentially be like a low-quality copy of someone else's mail getting delivered to my mailbox.
Does modern fiber in homes (or businesses) send enough physical signal from cross-talk to be usable in some way? That seems...not great.
AI: Does modern fiber in homes (or businesses) send enough physical signal from cross-talk to be usable in some way?
Fiber to the home (FTTH) /to the premises (FTTP) is typically a passive optical network (PON) – i.e. you and all your neighbors get every neighbor's downlink through passive splitters (if you remember the ethernet hubs: like that).
On the uplink, it's a slotted time-division multiplex scheme, so only one neighbor transmits at a time, and all others have to be silent for that slot. I don't actually know how directive the splitters are (i.e. how well they let a user's uplink through to the optical line terminal without giving much of it to the other users), but I'd presume they do this well, though not perfectly.
So, it's fair to assume that at least in the good (O)SNR case, a neighbor doesn't only get every neighbor's downlink, but at least a somewhat OK uplink of a few neighbors that are especially strong in splitter crosstalk / scattering further up.
However, don't underestimate the hardness of decoding that – you're losing a lot of information through the splitter side channel.
That seems...not great.
Why? That's a broadcast channel by physical constraints, yes, but that's why we have cryptography. See, for example, the commonly used GPON Standard, G.984.3's Transmission convergence layer
specification, chapter 12, "Security", and especially "12.1 Basic Threat Model" on page 90 (emphasis mine):
The basic concern in PON is that the downstream data is broadcast to all ONUs attached to the
PON. If a malicious user were to re-programme his ONU, then the malicious user could listen to all
the downstream data of all the users. It is this 'eavesdropping threat' that the PON security system is
intended to counter. Other, more exotic threats are not considered practically important because, in
order to attempt these attacks, the user would have to expend more resources than it would be
worth.
Furthermore, the PON itself has the unique property in that it is highly directional. So any ONU
cannot observe the upstream traffic from the other ONUs on the PON. This allows privileged
information (such as security keys) to be passed upstream in the clear. While there are threats that
could jeopardize this situation, such as an attacker tapping the common fibres of the PON, these
again are not considered realistic, since the attacker would have to do so in public spaces, and
would probably impair the very PON being tapped.
All in all, from your neighbors, your downlink should cryptographically be pretty secure, your uplink pretty safe optically; as usual in security, you really need to come up with a threat model. Will someone with a 100 kilodollar device hook up to the next OLT box and try to log what you're sending? Would that someone not much more likely spend that money on getting onto your premises and bugging your meeting room? Or would that someone most likely be a secret service and simply force your provider to give them direct access to the data?
All in all: this is 2019. Internet traffic that's not end-to-end encrypted should be considered compromised, anyways. Even large providers are not immune to inserting their own ads into unencrypted http traffic, so I think you're worrying at the wrong end.
Use TLS.
|
H: stm32f103 I2C Bare Metal Programming Question
I am trying to send simple data from one STM32F103 to another. But I am having trouble with my code. I have been working on it for 2 weeks and I couldn't find any solution. I am using PROTEUS to simulate and KEIL to compile.
According to the datasheet, after the start event the SB bit is set in the SR1 register. To test this I placed this code:
while( !(I2C1->SR1 & I2C_SR1_SB) );
But it gets stuck inside the loop. Can someone help me please? Thank you for reading.
And Here is my main function.
int main(){
clock_init();
//GPIOB clock enable,AFIO enable
RCC -> APB2ENR |= RCC_APB2ENR_AFIOEN | RCC_APB2ENR_IOPBEN;
//I2C1 clock enable
RCC -> APB1ENR |= RCC_APB1ENR_I2C1EN;
//Pin B6 , B7 alternative function open drain enable, GPIOB other CRL pins output and 50mhz
GPIOB -> CRL |= 0xff333333;
// reset the I2C1 peripheral
RCC->APB1RSTR |= RCC_APB1RSTR_I2C1RST;
RCC->APB1RSTR &=~RCC_APB1RSTR_I2C1RST;
I2C1 -> CR2 |= 0x08<<0; //8mhz freq[5:0]
I2C1 ->CCR &= ~(1<<15); //clear bit 15 to 0 (to have Standard Mode)
I2C1 ->CCR |= 0x0028;
I2C1 ->TRISE |= 0x0009;
I2C1->CR1 |= I2C_CR1_PE; //peripheral enable
I2C1->CR1 |= I2C_CR1_START; //Start bit set
while( !(I2C1->SR1 & I2C_SR1_SB) );
GPIOB -> BSRR |= 0x0000ffff;
AI: I'm seeing a couple of configuration problems.
You configure PB6/PB7 for open drain, but then you remap the I2C1 pins to PB8/PB9 and those have not been configured.
For fPCLK1 (which is your master clock / 2) to be 25 MHz means that you're running the master clock at 50 MHz, which is an oddball frequency. Are you running with an external or internal high-speed clock?
Next, how did you calculate that TRISE? That value (26) seems rather high. I would normally expect to see a range from 9 to 12, with 9 being used about 90% of the time (for standard mode). The reason 9 is common is that with a Tpclk1 of 125 ns the calculation is (1000/125) + 1 = 8 + 1 = 9.
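As a minimal sketch (assuming fPCLK1 = 8 MHz and 100 kHz standard mode, using the same CMSIS register names as your code; verify against RM0008 for your actual clock tree), the timing setup would typically look like this, written as direct assignments rather than |= so no stale bits are left behind:
I2C1->CR2   = 8;   // FREQ[5:0] = PCLK1 in MHz (8 MHz assumed)
I2C1->CCR   = 40;  // standard mode: 8 MHz / (2 x 100 kHz) = 40
I2C1->TRISE = 9;   // (1000 ns / 125 ns) + 1 = 9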
Just one mis-configuration can cause a non-start, but it looks like you have more than one.
Side note: Unless you're committed to dorking around in the guts of the chip, I would just find a good HAL library. NOT the one from ST! My personal favorite is the ChibiOS HAL, which has an Apache license.
|
H: TL431 Vs Error amplifier
When compensating switch-mode power supplies we usually use ICs that incorporate error amplifiers, but sometimes there are ICs that come without internal error amplifiers, so we can use either a TL431 or an external op-amp for compensation.
My question is: when is a TL431 recommended over an op-amp?
As far as I know, op-amps are more flexible than the TL431 for designing a compensation loop, due to the TL431's built-in pole; but despite this drawback the TL431 is widely used, especially in the flyback topology.
AI: The LM431/TL431 is an adjustable zener diode with somewhat better stability over temperature (the TL431 being the better of the two). The suffix identifies the temperature stability and static accuracy. It works very well in SMPS loops because it typically drives an opto-coupler which throttles the PWM to regulate the output voltage, often with better than 1% load regulation. In this case an op-amp would have to emulate what a TL431 already does out of the box. The TL431 is a combination of precision Vref, op-amp and NPN current sink.
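For reference (standard TL431 behaviour, not tied to any particular controller): with the usual feedback divider from the output to the REF pin, the regulated output is set by \$V_{out} = V_{ref}(1 + R_{upper}/R_{lower})\$ with \$V_{ref} \approx 2.495\,V\$, and the loop compensation network is simply placed between the cathode and the REF pin.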
An op-amp is best suited for PID loops as integrators, with correction time in microseconds. This can be a bipolar powered circuit for constant RPM motors under CPU control, so a well tuned PID loop is very important. Professional grade power amplifiers often use an op-amp for excellent DC stability. Other than a master Vref, the TL431 cannot do much else in such circuits. However in linear power supplies you will find the TL431 in great use. My Chinese-made triple power supply uses several of these for a master Vref for each independent supply.
|
H: Cut 3D chunk out of PCB?
I would like to cut a small square out of the middle of a large square PCB.
The tricky thing is, I don't want to cut all the way through the board, just about half way. This would create a 1mm-deep square pocket in a 2mm pcb board.
I could do this with my CNC machine easily, but I want to know if something like this could be achieved during circuit board manufacturing. I would hate to have to load hundreds of panels of boards onto a CNC machine; this would raise the cost of manufacturing significantly.
One idea I had was to use fancy programming of the drill that drills holes in the pcb to act like a CNC drill. It would, of course, require changing the drill bit to a special one for milling.
Can anyone who is familiar with these machines (already set up in most Chinese PCB fabrication facilities) foresee a (cheap) way to use them to cut pockets out of a PCB?
AI: Yes, it can be done. It's (often) called Z-axis milling. For my usual Chinese supplier it is a standard option and adds USD $100-$150 to a small order of 2mm thick PCBs.
You could also ask for a top layer to be routed through on a multilayer board which might give you better control over the surface finish and maybe thickness.
|
H: How does this high amperage LED dimmer work while being so small?
I am trying to make a custom dimmer circuit for an off the shelf LED lamp meant for aquatic plant growth. I found a generic dimmer that many people (in the aquatic plant community) use, and am very curious how they managed to make a dimmer so small.
According to the lights it is supposed to be able to dim, it connects to a 24V@(max)4A power supply (input to power supply is 110-230V). The output is directly used by the LED lamp (which we can assume uses at least more than 2A).
Some ways to dim LEDs that I found are
1.) PWM into a mosfet...but at min 2A, I feel this thing would get much too hot. The PWM would then need another IC (at least a 555) which I don't think can fit in there.
Edit: It also seems somewhat wrong to control LED's at this amperage with PWM...especially given that there is a blackbox driver circuit that comes after this dimmer circuit and turning that on and off at high frequency seems 'wrong'.
2.) Constant current...I'm not entirely sure how to make a constant current source driver. I would imagine that this would be in the actual lamp circuit rather than the generic dimmer circuit or generic power brick. Also this dimmer is adjustable, which I am not sure how you would do in a small space for a constant current driver.
3.) basic buck converter to limit the voltage coming in. The issue again is with how you can dim this dynamically with button presses.
I can imagine building any of the three options above to control high power LED's, but I can't wrap my head around how I would go about making the result so small...
Short of me buying the thing and taking it apart, it would be great if anyone had ideas on this!
AI: For sure it's using PWM to control the lights via a MOSFET.
A tiny SMT MOSFET can easily control a few amperes at 24V and remain very cool. The controller chip is probably an ASIC or a microcontroller, given the up-down button control scheme. It might take 10 or 15 components total including the power supply for the controller.
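To put a number on it (illustrative values, not taken from the actual product): a small SMT MOSFET with, say, \$R_{DS(on)} = 20\,m\Omega\$ carrying the full 4 A dissipates only about \$P = I^2R = 4^2 \times 0.02 = 0.32\,W\$ while on, and less on average at reduced PWM duty cycle, so it stays cool without a heatsink.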
|
H: Do "multiplicative" current mirrors exist?
What I am talking about is something like a mismatched current mirror, like the following, where the left transistor and the right transistor are unmatched and therefore the drain-source currents differ by a factor K. Is this possible?
AI: If you're designing the actual silicon, you do this by making the two MOSFETs with different W/L ratios.
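In the saturation region, and ignoring channel-length modulation and mismatch, the standard square-law result gives \$I_{out}/I_{in} = (W/L)_{out}\,/\,(W/L)_{in}\$, so the factor K is set directly by the geometry ratio.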
If you're designing with discretes, you can do it by including resistors between the FET source pins and ground, and adjusting the resistor values to give the desired current ratio.
|
H: In Verilog, does an event control always execute once at the beginning?
As illustrated in the image below, there is an event control with the variable r (for reset). I have not initialized c in the top module, but simulation shows that it starts at a low state. The only reason I can think of is that the c output is being set to 0 by the always @(r) statement. Why does this execute if r does not change? Or is it technically 'changing' when I initialize it in my simulation?
AI: Variables of the type 'reg' start simulation with the value of 'x'.
Any assignment after that, also an initial assignment, will be seen as a change and will trigger the always @(r) statement. Thus your c can change at time 0.
Having said all that: your code is behavioral and cannot be synthesized, as you have multiple drivers for `c`.
Additionally the behavioral code is open to race conditions: if clk rises and r changes at the same time it is undefined in which order the two statements will be executed.
Besides all that, your c is reset whenever r changes: not only from 0 to 1 or from 1 to 0, but also on any x or z change. There is no actual logic which can implement that in reality.
There are standard code templates in Verilog how to make a register with an asynchronous reset:
always @(posedge clk or posedge reset)
if (reset)
c <= 1'b0;
else
c <= ....
Last but not least:
Do not post pictures of code. Paste the actual code. A prime example is this one, where I would have had one hell of a time spotting the back-quote if the user had not, correctly, pasted the original code in the question.
|
H: How does efficiency of transformer change with respect to primary voltage?
Suppose we have a non-ideal transformer with a fixed secondary load and we change the voltage applied to the primary side. How does the efficiency change with respect to the primary voltage? I know there is a relation for the change of efficiency with load, but I don't know the answer for the primary voltage, and when is the efficiency maximum? By efficiency I mean the ratio of the power delivered in the secondary circuit to the power drawn in the primary circuit.
AI: As you increase the supply voltage, you start to reach H (magnetic field strength) field levels that cause the magnetic core to saturate more.
The effect of saturating the core more is to open up or expand the BH curve and, the impact of doing so means that you lose more energy (disproportionately) in what is called hysteresis loss. Here is a picture that might help you understand: -
At moderate H fields (the blue curve above), the area enclosed is quite small but, as you increase the AC supply voltage, saturation effects cause the area enclosed to become disproportionately bigger and this means more significant losses.
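To put this on a quantitative footing (a standard textbook relation, not specific to any particular core): the hysteresis energy lost per cycle is the enclosed B-H loop area times the core volume, so \$P_h = f\,V_{core}\oint H\,dB\$, often approximated by the Steinmetz form \$P_h \approx k_h\,f\,B_{max}^{n}\$ with \$n\$ typically between about 1.6 and 2.5; since \$B_{max}\$ rises with the primary voltage, the loss grows faster than the voltage itself.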
Hysteresis loss is worse at higher supply voltages because you expend more energy in reversing the magnetic field each AC cycle. It's all down to what is called the remanence magnetic field - that is, the field remaining in the ferromagnetic core when the H field is backed down to zero. This remanence is the value on the Y axis when H is zero and, as you should be able to see in the picture above, is usually quite small for moderate H values.
See also Magnetic Coercivity; this wiki page also shows the widening and broadening effect of the BH curve as greater fields are demanded (by higher primary voltages): -
|
H: Coupling aluminium electrolytic bipolar capacitors
Can I use two anti-series electrolytic capacitors to create a non-polarized one, and use it for coupling at the input of an audio amplifier?
I know that electrolytics change capacitance as voltage and frequency change, but this doesn't really seem like a problem to me. It's just a DC blocking capacitor.
The voltage across these is also very low and the current is microamps, with a maximum of 2 V RMS.
To make sure they had no problems, I measured their intermodulation distortion with a spectrum analyzer. It seems that even at high test frequencies (15 and 18 kHz) there are no differences in terms of THD or noise compared with a ceramic or polyester capacitor.
Is my test missing something? Or is it totally OK to use electrolytics for input coupling? And what happens if I use only one polarized capacitor?
AI: Bateman did lots of measurements of capacitor distortion...
https://linearaudio.nl/cyril-batemans-capacitor-sound-articles
Basically electrolytic cap distortion is due to non-linearities inside the cap, so as you'd expect it increases with increasing voltage across the cap.
This means don't use electrolytics in filters (that wouldn't be a very good idea anyway considering the tolerance). However, the voltage across a coupling cap remains very small as long as the capacitor value is large enough, so electrolytics are fine for coupling.
For example, if you have a 2V RMS signal, a 10µF coupling cap, and a 47k resistor to ground at the input of your device then there will only be 6mV across the cap at 100Hz. According to Bateman this would result in non measurable distortion.
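Working that example through: at 100 Hz the capacitor's reactance is \$|Z_C| = 1/(2\pi fC) \approx 159\,\Omega\$, so against the 47 k load it leaves roughly \$2\,V \times 159/(159 + 47000) \approx 6.8\,mV\$ across the capacitor, consistent with the few-millivolt figure quoted.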
Two polar caps in anti-series perform just as well as bipolar caps, so you can use that without problem (check the 10µF cap measurements in the article linked above). In fact 2 caps in series distort less than one cap since voltage across each cap is halved.
If there is DC across the cap, leakage current can be an issue, that will introduce DC offset in the circuit after the cap, so don't use fancy polymer caps which are optimized for low ESR but have high leakage! Just use good quality electrolytics.
Do not use Class-2 ceramics (X7R, etc) for coupling or filters. They are piezoelectric microphones... and the capacitance varies with voltage a lot. C0G ceramic on the other hand is excellent, ideal for filters, low tolerance, sub-ppm distortion, cheap, but useless for coupling as values available are too small.
Audiophiles love big film caps. They are microphonic and will resonate, the bigger the better. Put a DC bias across it, connect that to the input of an amplifier and knock on it with your fingernail... Thump thump! Some even sound like a gong. So, sure, these will sound "different"!... Electrolytics are not microphonic at all.
|
H: Equivalent model of my (real) transformer
I have created a transformer using a small cylinder (a tube) made of ferrite. The primary coil has \$N_{1}=5\$ turns of coil wire, while the secondary \$N_{2}=50\$ turns. According to a recent question of mine, a real transformer has an equivalent circuit like the following:
Where:
\$L_{P}\$ is the primary leakage inductance
\$R_{P}\$ is the primary copper loss
\$R_{C}\$ is the core losses due to eddy currents and hysteresis
\$L_{M}\$ is the magnetization inductance
\$L_{S}\$ is the secondary leakage inductance
\$R_{S}\$ is the secondary copper loss
(taken from here)
My question is how can I determine each quantity in the above circuit, using \$N_{1}, N_{2}\$? If any additional information is needed please let me know (plus how I can calculate it, if it is not obvious). I don't seek to create a 100% precise model, just a circuit that works correctly (for example, with an AC voltage source connected to the primary coil and a capacitor connected to the secondary coil, the circuit must behave like a band-pass filter).
AI: It's really tricky to theorize about leakage inductance given only that you have a non-looping ferrite rod. If the rod were in fact a closed magnetic core, you could assume that all the flux produced takes the path through the core. But then the problem would be that there can be no leakage inductance, i.e. the two coils would be perfectly coupled, and when perfectly coupled there is no leakage inductance to form a tuned circuit with the secondary capacitor.
I would suggest that you measure the inductance of the secondary with the primary shorted in order to ascertain the effective leakage inductance that would form the tuned resonant circuit.
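For reference, and as a rough guide rather than something derivable from \$N_{1}\$ and \$N_{2}\$ alone: the inductance measured at the secondary with the primary shorted is approximately \$L_{S}(1-k^{2})\$, where \$k\$ is the coupling coefficient, and the centre frequency of the resulting band-pass with your secondary capacitor \$C\$ is then roughly \$f_{0} = \frac{1}{2\pi\sqrt{L_{leak}C}}\$, where \$L_{leak}\$ is that measured value.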
|
H: SMPS based on LDO
I present the following circuit, found in the datasheet of the MC7824 on page 24/29, but without any explanation (just the schematic). It seems very strange to me that you can get a switching regulator by using a linear regulator.
Here is the circuit:
Taking a general look at the circuit, we can say that it is a non-synchronous buck converter if we remove the MC78XX, with the LDO playing the role of the error amplifier and PWM modulator (which is strange in terms of the internal design of an LDO).
How does the LDO control the output voltage of the buck converter, as well as the switching frequency of the transistor?
AI: How does the LDO control the output voltage of the buck converter as well as the switching frequency of the transistor?
Can't tell you – they're tying together input voltage and ground, so this circuit is operating the 78xx outside its operational boundaries; so, this definitely depends on the properties of one or two specific 78xx implementations, and quite possibly won't work with newer 78xx models – remember, the 78xx is forty years old now, and people didn't have good components, so they hacked together whatever worked with the parasitic properties of what they had.
Even if the Vcc == GND regime was specified in the datasheet, the temporal behaviour specifications of the 78xx are very vague, anyway, so this is really not a case of "designed, tested, worked".
You say in your profile you want to become an SMPS expert – so don't try to recreate really obsolete stuff like this.
Update: Sam pointed out (comment below) that the VCC-to-GND connection is an artifact of TI's scan, and that they are not actually tied together; well, that leaves us still with the underdefined / vendor-specific temporal/stability behaviour that makes this design highly undesirable and unreliable.
|
H: Why turn-on speed of a MOSFET is linked to diode reverse recovery time
Here is a quote from section 3.4 "Speed-Enhancement Circuits" in "Fundamentals of MOSFET and IGBT Gate Drive Circuits" written by Texas Instruments:
When speed enhancement circuits are mentioned, designers exclusively consider circuits that speed-up the turn-off process of the MOSFET. The reason is that the turn-on speed is usually limited by the turn-off, or reverse recovery speed of the rectifier component in the power supply. [...] the fastest switching action is determined by the reverse recovery characteristic of the diode, not by the strength of the gate drive circuit. In an optimum design the gate drive speed at turn-on is matched to the diode switching characteristic.
The model considered in the document is the following:
Image source: Simplified Clamped Inductive Switching Model - Figure 3 from "Fundamentals of MOSFET and IGBT Gate Drive Circuits by Texas Instruments"
At the beginning, the MOSFET is off, so Id is zero and Vds is equal to Vout plus the diode forward voltage. Vdrv is equal to Vdrv_low.
Now Vdrv steps to Vdrv_high. As Ciss (Cgs + Cgd) was previously discharged, the voltage across Cgs begins to rise; Cgd is also charged, because Vgs increases while Vds is clamped. At a certain time Vgs reaches Vthreshold, and Id starts to increase approximately linearly with Vgs (the linear/active region). As Idc is constant, the current through the diode is equal to Idc - Id (Id being the current through the MOSFET). Ciss continues to charge. At a certain point Id equals Idc, the diode stops conducting, and Vds is no longer clamped. At this moment the reverse recovery of the diode occurs. As Vds decreases, Cgd must be charged; this is the Miller plateau. I have described the process only briefly; if you are interested, you can find more information in "Fundamentals of MOSFET and IGBT Gate Drive Circuits" from Texas Instruments.
My question is: Why is the turn-on speed linked to the diode reverse recovery time?
AI: My question is why the turn-on speed is linked to the diode reverse recovery time?
It isn't. What they are saying is: usually the FET can turn on faster than the diode can turn off; however, that results in both the FET and the diode being on simultaneously, i.e. cross-conduction, and increased losses. So it is more efficient to switch the FET a bit slower, to make sure it turns on right as the diode turns off.
So a better way to put it would be "The optimal MOSFET turn-on time for highest efficiency is linked to the diode recovery time". It isn't a characteristic of the FET, it is a design choice.
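As a rough first-order estimate (standard reasoning, not from the TI document): the extra turn-on energy caused by the diode's recovered charge is on the order of \$E_{rr} \approx Q_{rr}V_{DC}\$ per cycle, giving an added loss of roughly \$P_{rr} \approx Q_{rr}V_{DC}f_{sw}\$. Driving the gate harder than the diode can recover does not reduce this term, which is why the turn-on drive strength is matched to the diode rather than maximized.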
|