H: Error in LCD while interfacing with ATmega32
I am trying to interface a plain keypad and an LCD, so that, for example, if I press '1' on the keypad it shows 'One' on the display, and if I press '=' it shows 'Equal'. I developed the code below, but after simulating, the LCD shows random text like the picture below. I can't figure out what happened here.
#include <mega32.h>
#include <alcd.h>
#include <delay.h>
#define keypad_ddr DDRC
#define keypad_port PORTC
#define keypad_input PINC
void main(void)
{
keypad_ddr=0xF0;
lcd_init(16);
while(1)
{
lcd_clear();
keypad_port=0b11100000;
lcd_gotoxy(0,0);
if(keypad_input.0==0)
lcd_putsf("Seven");
if(keypad_input.1==0)
lcd_putsf("Four");
if(keypad_input.2==0)
lcd_putsf("One");
if(keypad_input.3==0)
lcd_putsf("ON/C");
keypad_port=0b11010000;
lcd_gotoxy(0,0);
if(keypad_input.0==0)
lcd_putsf("Eight");
if(keypad_input.1==0)
lcd_putsf("Five");
if(keypad_input.2==0)
lcd_putsf("Two");
if(keypad_input.3==0)
lcd_putsf("Zero");
keypad_port=0b10110000;
lcd_gotoxy(0,0);
if(keypad_input.0==0)
lcd_putsf("Nine");
if(keypad_input.1==0)
lcd_putsf("Six");
if(keypad_input.2==0)
lcd_putsf("Three");
if(keypad_input.3==0)
lcd_putsf("Equal");
keypad_port=0b01110000;
lcd_gotoxy(0,0);
if(keypad_input.0==0)
lcd_putsf("Division");
if(keypad_input.1==0)
lcd_putsf("Multiplication");
if(keypad_input.2==0)
lcd_putsf("Minus");
if(keypad_input.3==0)
lcd_putsf("Plus");
delay_ms(1000);
}
}
Simulation : (Look at the LCD)
AI: You show no pullup resistors in your schematic, so this isn't going to work unless you enable the internal pullups of the MEGA32. To do this, you need to write ones to the input bits. In other words, whenever you change the column selector, the low-order bits should be 1111, not 0000.
For example:
keypad_port = 0b11101111;
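To make the fix concrete, here is a plain-C sketch (not CodeVision-specific) of how the four column-select bytes can be generated: the active row bit in the high nibble is driven low, while the low nibble is kept at 1111 so the internal pull-ups on the input pins stay enabled.

```c
#include <assert.h>
#include <stdint.h>

/* One column-scan value: drive bit (4 + col) low, keep PC0..PC3 high
   so the ATmega's internal pull-ups on the input pins remain active. */
static uint8_t column_select(int col)           /* col = 0..3 */
{
    return (uint8_t)(~(0x10u << col)) | 0x0Fu;
}
```

The four scan values then come out as 0b11101111, 0b11011111, 0b10111111 and 0b01111111, matching the example above.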
H: Continuing power during 50ms power interrupt
I have a system that is powered with 3-5 VDC, but there are 50 ms gaps where power is interrupted, and I would like the system to continue running normally when that happens. At 3.3 V, the current may have very brief peaks of 1 A but is usually about a quarter of that.
How can I prevent this power interruption from crashing my system? Would a capacitor in parallel with the power source work? If so, what size cap? Other ideas? I don't want to bridge much more than a 50 ms gap, because sometimes I want it to shut off purposefully.
AI: You can calculate voltage drop versus time from i = C dv/dt. With i = 1 A, dt = 0.05 s, and say you don't want more than dv = 2 V of droop during the cutout, then C = i·dt/dv = 1 × 0.05 / 2 = 0.025 F (25,000 µF). That's a very large capacitor. Depending on what you are up to, it might be easier to diode-OR in an auxiliary power supply. 50 ms is a long down time if there's an appreciable amount of current draw.
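The sizing arithmetic above can be checked with a one-liner (values taken from the answer):

```c
#include <assert.h>
#include <math.h>

/* Hold-up capacitance from i = C dv/dt  =>  C = i * dt / dv */
static double holdup_cap(double i_amps, double dt_s, double dv_volts)
{
    return i_amps * dt_s / dv_volts;
}
```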
H: For the capacitor specification "Ripple Current at Low Frequency" is a higher value better for a linear power supply?
I'm finding information that seems to be contradictory in various places, and I think people are talking about different ways of looking at the same spec.
The specification that Digikey uses to search/sort by is called Ripple Current @ Low Frequency.
This is measured at 120 Hz on many of the capacitors that I've looked at; 120 Hz is the frequency of the pulsed DC in a linear power supply with full-wave rectification.
If the application is ripple smoothing in a linear power supply, specifically on the input side after the bridge rectifier and before the voltage regulator, assuming all other things are the same (voltage, capacitance, life), which capacitor is the better choice?
A. Ripple Current @ Low Frequency = 860 mA @ 120 Hz
-or-
B. Ripple Current @ Low Frequency = 3.4 A @ 120 Hz
AI: If the application is ripple smoothing in a linear power supply, specifically on the input side after the bridge rectifier and before the voltage regulator, assuming all other things are the same (voltage, capacitance, life), which capacitor is the better choice?
A. Ripple Current @ Low Frequency = 860 mA @ 120 Hz
-or-
B. Ripple Current @ Low Frequency = 3.4 A @ 120 Hz
Either might be, depending on your specifications. B is the better capacitor, but A might be good enough, and the better choice if it's cheaper.
The component chosen for your post-rectifier reservoir capacitor must meet at least two criteria:
- enough capacitance to feed the load with current when the rectifier is not conducting, while keeping the voltage dip and the ripple amplitude below the load's limits
- low enough ESR that it does not overheat from the ripple current flowing through it
Your load defines its input current, so also defines the ripple current that the reservoir capacitors see. If your particular load requires a ripple current of 100 mA, then either capacitor would do. If it requires 1 A, then you would have to use capacitor B. If it requires 500 mA ripple, then your choice would depend on your attitude to quality and longevity. It's within the published capability of A but near the maximum, and B will run cooler and last longer, especially important in a hot ambient. You'd have to make a cost/performance judgement, maybe after measurements of actual temperatures in your operating environment.
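The overheating criterion can be quantified: the heat generated inside the capacitor is roughly P = Irms² × ESR. A sketch (the 50 mΩ ESR here is purely illustrative, not from either datasheet):

```c
#include <assert.h>
#include <math.h>

/* Self-heating in the capacitor's equivalent series resistance:
   P = I_rms^2 * ESR. */
static double esr_dissipation(double i_ripple_rms, double esr_ohms)
{
    return i_ripple_rms * i_ripple_rms * esr_ohms;
}
```

At capacitor B's full 3.4 A rating and an assumed 50 mΩ ESR, that is about 0.58 W dissipated inside the can, which is why ratings are tied to temperature rise and lifetime.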
H: How does an I2C slave prevent data overrun?
By data overrun, I mean the slave's inability to receive or acknowledge the next byte from the master.
Any response would be appreciated.
Thank You!
AI: I'm not really sure what you mean by 'data overrun', but when the slave needs some time to wait and "think", there is a mechanism called clock stretching: the slave holds SCL low until it is ready to continue.
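The master side of clock stretching can be sketched like this: after releasing SCL, the master reads it back and waits until the slave actually lets it float high. The scl_read() below is a simulated stand-in for a real open-drain pin read.

```c
#include <assert.h>

/* Simulated open-drain SCL: the slave holds it low for a few polls. */
static int stretch_polls_left;
static int scl_read(void)
{
    if (stretch_polls_left > 0) { stretch_polls_left--; return 0; }
    return 1;   /* released: the pull-up takes SCL high */
}

/* Master side: after releasing SCL, wait for it to actually go high. */
static int wait_scl_high(int timeout_polls)
{
    while (timeout_polls-- > 0)
        if (scl_read()) return 1;   /* slave ready, clock may proceed */
    return 0;                        /* bus stuck / stretch too long */
}
```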
H: Calculate NTC temperature, but without a series resistor
Hello, I have an NTC connected on a PCB, configured as shown in the following schematic.
The problem is that I cannot apply the Steinhart-Hart equation because there is no resistor connected in series, so I cannot calculate a voltage drop at the MCU pin and evaluate the NTC resistance. Is there any way to figure out the voltage drop across the NTC without a second resistor?
AI: If this circuit is connected to an op-amp buffer or an ADC with large input impedance, then you will always read the 3.3 V supply voltage. So the answer is: you can't connect the NTC that way and still measure the temperature.
H: LTC4060 NiMH charger 1A
I'm designing a charger for 2 series NiMH batteries. After some research I came across the LTC4060. What I don't understand about this IC is how it is able to carry 1 A through the small 0.25 mm trace width of the IC pins. Below you can find two applications from the datasheet. As far as I can tell, figure 2 is never able to draw 2 A of current; maybe they switched the figure numbers. But even if that is the case, 0.25 mm width is not enough for 1 A of current (according to the KiCad PCB calculator).
Wondering what your ideas are about these two applications. Is figure 3 a safer option?
Thank you!
AI: You can easily pass 1–2 A through a 0.25 mm track; you just don't want that track to be too long, or you will get excessive temperature rise. Fatten up the track as close to the IC as you can.
H: Understanding what exactly is the NodeMCU
So the NodeMCU looks like this:
And the ESP12 is this:
So I can see that the little ESP12 is riding on the NodeMCU.
My question is: what is included in the ESP12 and what in the NodeMCU, in terms of processor, RAM, flash, firmware, etc.? I'm just trying to understand its individual components.
AI: The ESP-12 contains the esp8266 MCU, flash memory, antenna and required external components.
The NodeMCU dev board is a breakout board for the ESP-12 module. It adds the required boot-pin strapping, supplies the ESP module with power, allows programming via USB (including the auto-reset circuit), and otherwise brings out the pins and features of the ESP in a form factor that hobbyists can easily use, such as plugging it onto a breadboard.
H: Correct antenna position ESP32 on custom board
Is the antenna position correct in this design? In the ESP32 hardware design document, they say:
The board has no copper or tracks under the antenna. I can't put the antenna on the edge because I need a few millimeters for the enclosure. I cut the other side of the antenna from the board. What can happen from not following the recommended mounting steps for the ESP32? Thanks in advance.
AI: Well, this cutout is better than not having one, but there doesn't seem to be a reason not to cut the whole thing through to the edge, which would be both easier to manufacture and better for RF.
The whole pink area could be trivially cut out:
Also, there seems to be no reason U3 sits that "high". You could easily move it down by 7mm and keep traces further away from the antenna.
What can happen for not following the recommended mounting steps of ESP32?
You're de-tuning the antenna. That leads to worse reception, less transmit power, and in the end that means worse performance. Also, you're changing the radiation pattern in pretty much unforeseeable ways.
Also, I'm almost certain C4-C8 should not all be in one place, but as close as possible to the parts they are decoupling. This is probably much more relevant functionally than your antenna placement. Your ground traces also don't look good – the current return paths seem pretty convoluted.
I'd re-place U3, Q1 and all the capacitors, then make sure I have the current-return ground paths directly under the supply lines (and signal lines, too). Then your layout will be much cleaner and, more importantly, you will have less noise and better supply stability.
H: Delay caused by new MOSFET topology
We are using an N-type enhancement MOSFET to control the power supply to the sensor.
MOSFET: AO3400
Sensor: ICM-20600
Earlier, we have been using following circuit:
With this circuit we didn't have problems in our software, but since both the gate control voltage and Vdd on the drain side are 3.3 V, it caused a high voltage drop across the transistor.
Then, we changed the circuit to following:
In other words, we just moved the load to the drain side to get a higher Vgs.
Now the voltage drop across the transistor is very low, but we have a problem in the software: we have to add a delay of ~200 ms after turning on the transistor before we can use the sensor (by 'use' I mean read its I2C registers). It doesn't work otherwise. I want to understand what causes this delay.
Some interesting observations:
When a multimeter was connected in voltage-measurement mode between the ICM GND pin (drain) and GND (source), it worked great. Once disconnected, the issue started again.
We added a capacitor on the Vdd side; it fixed the problem for some time, but later the issue happened again with this configuration.
We are using following circuit for the ICM sensor:
Pins 5, 6, 7, 8, 11 and situationally pin 1 (the address pin) are connected to GND. I suspect this may also cause problems. You see, earlier, since the load was on the source side, these ground connections were direct; now they go through the transistor. When all these pins (1, 5, 6, 7, 8, 11) are connected to the transistor's drain side, the issue persists.
If anyone has any ideas please let me know. I need to fix this delayed behavior.
AI: Yes, your first circuit is a source follower, so it is not a good choice. Your second circuit will work but, regarding the start-up delay, this may be the problem...
The data sheet has a clue: -
And that 100 ms assumes the chip is powered prior to being enabled, so it may be even longer from a true cold start.
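Rather than hard-coding a worst-case delay, the firmware can poll the sensor until it responds. Here is a self-contained sketch of that pattern, where read_whoami() stands in for the real I2C register read; the expected WHO_AM_I value 0x11 for the ICM-20600 is an assumption taken from its datasheet.

```c
#include <assert.h>

/* Simulated sensor: it only answers after its start-up time elapses. */
static int ready_after_ms;
static int now_ms;
static int read_whoami(void) { return now_ms >= ready_after_ms ? 0x11 : 0xFF; }

/* Poll WHO_AM_I every 10 ms until it matches, instead of a fixed delay.
   Returns the elapsed time in ms, or -1 on timeout. */
static int wait_sensor_ready(int timeout_ms)
{
    for (now_ms = 0; now_ms <= timeout_ms; now_ms += 10)
        if (read_whoami() == 0x11) return now_ms;   /* expected device ID */
    return -1;
}
```

This way the code waits exactly as long as the actual start-up takes (100 ms warm, possibly longer from a cold start) and fails loudly if the sensor never comes up.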
H: 7 segment display not showing digits simultaneously with an ATMega32
I want to show the digits '18221021' simultaneously. The seven-segment display should show all 8 digits at once.
I wrote the following code. It shows all the digits, but not simultaneously: it is showing them one by one, like 1, then 2, then 8 and so on.
I tried lowering the delay from 200ms to 20ms but at 20ms the output looks scrambled.
How can I solve this problem?
#include <mega32.h>
#include <delay.h>
void main(void)
{
DDRA=0xFF;//DATA_BUS
DDRC=0xFF;//DATA_SELECTOR
while (1)
{
PORTA=0b00000110; //Digit1
PORTC=0b11111110; //Data selector
delay_ms(200);
PORTA=0b01111111; //Digit 8
PORTC=0b11111101;
delay_ms(200);
PORTA=0b01011011; //Digit 2
PORTC=0b11111011;
delay_ms(200);
PORTA=0b01011011; //Digit 2
PORTC=0b11110111;
delay_ms(200);
PORTA=0b00000110; //Digit1
PORTC=0b11101111;
delay_ms(200);
PORTA=0b00111111; //Digit0
PORTC=0b11011111;
delay_ms(200);
PORTA=0b01011011; //Digit 2
PORTC=0b10111111;
delay_ms(200);
PORTA=0b00000110; //Digit1
PORTC=0b01111111;
delay_ms(200);
}
}
Simulation: (One digit at a time)
AI: That appears to be an LED seven-segment display. You can only light up one digit at a time; you have to do it one digit after another. If you do that fast enough, the digits will appear to all be on at the same time.
Take out the delays.
Blank each digit before you switch to the next.
Set the PORTC values before you set the PORTA values. PORTC selects which position the character is drawn in while PORTA draws the character. If you do it in the wrong order, your characters won't be where you expect them.
Like this:
PORTC=0b11111110; //Data selector
PORTA=0b00000110; //Digit1
delay_ms(1);
PORTA=0b00000000; //blank
delay_ms(1);
PORTC=0b11111101;
PORTA=0b01111111; //Digit 8
delay_ms(1);
PORTA=0b00000000; //blank
delay_ms(1);
//Following digits as above
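The same scan can also be table-driven, which keeps the digit data separate from the multiplexing logic; a plain-C sketch (the port writes themselves are omitted):

```c
#include <assert.h>
#include <stdint.h>

/* Segment patterns for 0-9 (common-cathode, bit 0 = segment a). */
static const uint8_t SEG[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
};

/* The eight digits to display: "18221021". */
static const uint8_t DIGITS[8] = {1, 8, 2, 2, 1, 0, 2, 1};

/* Active-low column select for position 0..7 (drives PORTC). */
static uint8_t col_select(int pos)  { return (uint8_t)~(1u << pos); }

/* Segment byte for position 0..7 (drives PORTA). */
static uint8_t seg_pattern(int pos) { return SEG[DIGITS[pos]]; }
```

In the main loop you would then write PORTC = col_select(pos); PORTA = seg_pattern(pos); for pos = 0..7 with a ~1 ms dwell and blank per digit, exactly as in the answer above.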
The real question is how well the simulator can simulate the seven segment display and whether it can do it as fast as the simulated processor runs.
This is the kind of thing best done on real hardware. You are trying to learn the basics of how a circuit works, and having to fight simulator problems as well as the things you set out to learn.
H: ESP8266 GPIO pin improvement
Is it possible to make this connection to an ESP8266 pin?
I saw this at Ruggeduino and tried to implement it for the ESP8266.
Below is the schematic:
Here's my reference:
This type of diode:
Here's my whole schematic:
Here's what the PCB looks like:
TOP SIDE
BOT SIDE
AI: That topology works for 5 V and above, but 3.3 V Zener diodes are pretty sloppy: they have a very soft knee and they vary a lot in when they start conducting.
A better approach at this and lower voltages would be to use two Schottky diodes connected to the power rails.
simulate this circuit – Schematic created using CircuitLab
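The effect of the two Schottky clamps can be modeled simply: the pin is held within one forward drop of each rail. A sketch (the 0.3 V forward drop is a typical Schottky value, assumed here):

```c
#include <assert.h>
#include <math.h>

/* Rail-to-rail Schottky clamp: the input is limited to the range
   [-Vf, Vcc + Vf], one diode drop outside either supply rail. */
static double clamp_voltage(double vin, double vcc, double vf)
{
    if (vin > vcc + vf) return vcc + vf;   /* top diode conducts   */
    if (vin < -vf)      return -vf;        /* bottom diode conducts */
    return vin;                            /* both diodes off       */
}
```

So a 5 V signal on a 3.3 V ESP8266 pin is clamped to about 3.6 V, with the series resistor limiting the current through the conducting diode.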
H: Building a band-pass filter using an op-amp
I am quite new to op-amps but have looked at them in college and have done my own research on them. I am trying to use an op-amp to build a simple band-pass filter which has an in-band voltage gain of 5. The main goal is to have a signal which is received from an ultrasonic transducer enter this filter. The signal will be a sine-wave in the order of a couple hundred millivolts around 1.1MHz frequency. I want the band-pass filter to limit the noise that passes through. I also want the in-band gain to be around 5.
Let me go through what I have tried already. I have been looking at Texas Instrument’s TDC1000 which is an “Ultrasonic Sensing Analog Front End”.
I have been looking in particular at the Low Noise Amplifier in the receiver path. I am trying to replicate the circuit seen in Figure 16 on page 14:
According to the datasheet, the in-band gain is set by the capacitor ratio:
Gain_in-band = CIN/CF = 300pF/30pF = 10
The high-pass corner is set by the feedback resistor and capacitor:
FCH = 1/(2πRFCF) = 1/2π(9000)(30x10^-12) = 589.5kHz
The low-pass corner is set by the Gain Bandwidth Product and the gain:
FCL = GBP/Gain = 50MHz/10 = 5MHz
This all seemed to make sense, however, other websites showed different circuit configurations for band-pass filters. Some circuits included another resistor before CIN and there was no mention of the "Gain-Bandwidth Product".
I decided to build my own version of the Low Noise Amplifier circuit that was presented in the TDC1000. Picking out an op-amp was tricky enough but after doing some research, the LT1128 seemed like a good choice. The datasheet for this op-amp can be accessed below:
https://www.analog.com/media/en/technical-documentation/data-sheets/1028fd.pdf
It is a low noise precision high speed op-amp with a Gain-Bandwidth product of 20MHz. I built the following circuit on a breadboard:
Following the previous equations:
Gain_in-band = CIN/CF = 100pF/20pF = 5
High-pass corner:
FCH = 1/(2πRFCF) = 1/2π(10000)(20x10^-12) = 795.8kHz
Low-pass corner:
FCL = GBP/Gain = 20MHz/5 = 4MHz
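These three design equations are easy to tabulate; the helpers below simply restate them (with pi defined locally for portability):

```c
#include <assert.h>
#include <math.h>

static const double PI = 3.141592653589793;

/* Capacitive-feedback LNA stage (TDC1000 Figure 16 style):
   in-band gain, high-pass corner, and GBW-limited low-pass corner. */
static double gain_inband(double cin, double cf) { return cin / cf; }
static double f_highpass(double rf, double cf)   { return 1.0 / (2.0 * PI * rf * cf); }
static double f_lowpass(double gbw, double gain) { return gbw / gain; }
```

Plugging in the breadboard values (Cin = 100 pF, Cf = 20 pF, Rf = 10 kΩ, GBW = 20 MHz) reproduces the gain of 5, the 795.8 kHz high-pass corner, and the 4 MHz low-pass corner quoted above.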
To test out the circuit, I applied a 500mV peak-to-peak sine-wave from my signal generator and varied the frequency. The circuit was not operating as I had expected. Below are a few screenshots from my oscilloscope. The yellow channel is the input signal, and the blue channel is the output of the LT1128.
800kHz:
1MHz:
I am not too sure what is going on; changing the amplitude of the input signal did not change the amplitude of the output. However, varying the frequency of the input signal had an effect on the phase and amplitude of the output signal. What I had expected was that within the frequencies from 795.8kHz to 4MHz the output signal would be inverted and 5 times greater than the input.
I decided to take a step back and just build an inverting amplifier with a gain of 4.7. Below is a schematic of the simple circuit that I built on a breadboard:
I applied a 1V peak-to-peak sine-wave and varied the frequency. At 300kHz the output signal is as expected, it is inverted and has a peak-to-peak voltage of 4.7V. Below is a screenshot from my oscilloscope, yellow channel is the input signal and the blue channel is the output:
As I increase the frequency, the phase and amplitude of the output signal change, and the amplitude of the input signal no longer has an effect on the output.
700kHz:
I am confused as to why this happens; I thought that the op-amp should continue to amplify the input signal at much higher frequencies.
I have done lots of research on the bandwidths of op-amps, but it seems that I am missing something.
Questions:
Is the op-amp I picked not right for this application? Is it one of the parameters of the LT1128 that I may have overlooked?
In regards to the band-pass filter circuit, can anyone spot any mistakes with the schematic or can explain what is happening?
In the inverting amplifier configuration, why does the output signal start to change above an input frequency of 300 kHz?
If it’s the fact that I need to do more reading, does anyone have any recommendations?
AI: For an op-amp to have low gain error at 1 MHz, you need an open-loop gain of more than about 100× the closed-loop gain at that frequency, i.e. GBW ≈ 5 × 100 × 1 MHz = 500 MHz.
A video amp would be better suited using discrete components.
Your source impedance is not known, so it is assumed to be 50 Ω. Buffer with an emitter follower if it is higher; if it is lower, adjust R accordingly, as Rb(eq)/Rin controls the closed-loop gain of 5.
Proof of concept for a PN2222A
The reason why op-amps in general are not well suited to amplifying in 1 MHz band-pass filters is that the GBW product needed to overcome the gain error is also multiplied by Q².
Theory of Operation.
This is my classic common-emitter 50 Ω amp using negative feedback. Gains are traded off to improve linearity and increase bandwidth. The base pull-down adjusts the collector Q point for low gain. In other situations, if Vc is too high (low Ic), then a bigger pull-up resistor from base to Vcc is used to increase Ic and thus reduce Rbe and increase gain.
There are better transistors, but the old PN2222A has a minimum-current GBW of 300 MHz, which beats any "old" op-amp! That is what makes it work here.
AC coupling the input signal is necessary, so the 100 nF just isolates DC from the input to the base bias, presenting ~1 Ω at 1 MHz. You can make this bigger, which affects the bias settling time after the T = (50+50)·C = 10 µs rise time.
Rcb/Rs=500/50 ohms sets the closed loop gain maximum of 10 which is further reduced by other base loads.
The input impedance is further loaded by the 400R to ground which pulls up the collector DC voltage to midpoint and attenuates the gain. Re =2 plays an important role to set the open loop gain Rc/(Re+Rbe*)= 15 to 25 and at the same time raise the input impedance hFE*Re~200 ohms for hFE=100 which further attenuates the input just as the 400 Ohms does.
Rbe* = 25/Ic (mA) [ohms] is controlled by collector current , which makes it in the 1 ohm range thus reducing the open loop gain, so Re was chosen around the same or slightly higher for linearity vs gain reduction tradeoff.
Overall this is far better than an open loop H bias due to the excess gain used for error correction or negative feedback. Albeit only a small amount but sufficient.
If you need more linearity then more power can be used to achieve this with 5V and reduce Rbe from higher Ic and thus increase open loop gain for extra feedback from this excess open loop gain.
Rc =50 sets the open loop impedance and also open loop gain from Re
is reduced by base to emitter load of hFE*Re.
The net result is a closed-loop gain of 5× with an open-loop gain of 20 to 50, so the excess gain lowers both the base and collector impedance by negative feedback. This is similar to an op-amp's effect with NFB, except the excess gain being fed back is much less and there is no integrator (as in the op-amp) to limit the open-loop gain, so it has much greater bandwidth than your op-amp (depending on the GBW of the transistor).
H: What function does this resistor/capacitor arrangement provide in this reset circuitry
I have been trying to decipher what the arrangement of the resistor and capacitor does in the circuit below, but I am at a loss to explain it. Can someone explain to me what it is doing? The circuit itself is designed to hold a reset pin low for approximately 0.5 seconds to feed into a W65C02S CPU and support chips to provide a hardware reset function.
AI: From the comments:
Is the time constant for the capacitor to charge equal to 0.1 seconds (R3 * C4)?
Correct.
So the voltage begins to rise approx. 0.1 seconds after the button is pressed?
Not quite. The voltage begins to rise as soon as the button is released. It will be at 63% after one time constant (τ = RC), and that will be close to the switching threshold of the input.
H: Boolean Logic - Realization with using only 4 NAND gates
I am currently preparing the next semester of my program at university and I am stuck at a question of the "Digital Circuits" class.
Here is the task:
[...] pump P1 runs when the fill-level of the tank falls below 90% (U=0), however only when the opacity sensor gives output (S1=1) at the same time. If the lower fill-level sensor signals with L=0 that the fill-level is below 50%, pump P1 runs (P1=1) independently of the opacity sensor signals (S1), as long as U=0 at the same time. [...]
The task is first to derive the Boolean equation for the combinational logic of the pump controller (output signal).
I ended up with P1 = U'*(S1+L').
We are then to realize the controller using only NAND2 gates with the hint that if 4 NAND2 gates are not sufficient to realize the circuit, we should consider further simplifying the Boolean equation.
This is where my problem lies:
I can only get to realizing the circuit with 5 NAND2 gates:
P1 = U'*(S1+L') = U'*(S1+L')'' = U'*(S1'*L)'
simulate this circuit – Schematic created using CircuitLab
I don't see any way how I could possibly simplify it any further. Maybe somebody could give me a hint.
Thanks in advance :)
AI: Unless I have made an error (very likely) I believe this can be done with 3 NAND gates. The truth table must look like this:
$$\begin{smallmatrix}\begin{array}{rrr|cc}
U & S1 & L & P1 & \text{comments}\\
\hline
0 & 0 & 0 & 1 &\\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 1 \\
0 & 1 & 1 & 1 \\
1 & 0 & 0 & x & \text{can't happen}\\
1 & 0 & 1 & 0\\
1 & 1 & 0 & x & \text{can't happen}\\
1 & 1 & 1 & 0\\
\end{array}\end{smallmatrix}$$
Using a Karnaugh map to find the Boolean expression yields a realization that uses 3 2-input NAND-gates.
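One realization the K-map (with the two don't-care rows) yields is P1 = L' + U'·S1, which maps onto exactly three NAND2 gates: P1 = NAND(L, NAND(NAND(U,U), S1)). A quick exhaustive check against the specified truth-table rows:

```c
#include <assert.h>

static int nand(int a, int b) { return !(a && b); }

/* 3-gate realization: P1 = L' + U'*S1 == NAND(L, NAND(NAND(U,U), S1)).
   NAND(U,U) serves as the inverter for U. */
static int p1(int u, int s1, int l)
{
    return nand(l, nand(nand(u, u), s1));
}
```

The two don't-care rows (U=1, L=0) are exactly what lets the five-gate expression U'·(S1 + L') collapse to L' + U'·S1.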
H: control of linear network
We have a linear network with n nodes, of the form: $$\dot{x} = Ax$$.
I would like to apply some concepts of linear control theory to the network and see how it goes, by using Matlab.
As a first step I understand that this form of systems have only one fixed point at 0.
We therefore need to assess the stability of this point. I can think of 3 possible ways to do that:
A. by plotting the solutions and seeing if any of them explode to infinity, just as a basic way of getting intuition about the behaviour of all solutions in a small time window (this of course cannot be a safe way to assess stability, since one would have to check for infinite time)
B. by assessing the eigenvalues of the system; here we have two that are positive: 0.0102, 0.0100, -0.0177, -0.0172, -0.0092, -0.0090, -0.0000, -0.0000, -0.0003, -0.0003
C. by checking the values of the determinant, trace and discriminant of A. Here the determinant is Det = 3.7870×10^(-28) (positive), the trace is Tr = -0.0335 (negative), and the discriminant is $$\Delta =Tr^2-4Det=0.0011$$
1st question: I would expect the trace to be positive, as it indicates unstable behaviour which is clearly the case for the specific system. What might be the reason for this discrepancy? Or is it that I got it wrong?
Let us assume for a moment that the system is indeed unstable at the origin. By use of control, we would like to make this fixed point stable. Therefore, I need to find a \$u(t)\$ that, if applied to the system appropriately (an appropriate \$B\$ is required), will shift its dynamics from unstable to stable. For this \$u(t)\$ to exist, the system must be controllable under the specific \$B\$. I have calculated potential \$B\$ matrices that render the system controllable, so one can now consider the system $$\dot{x}=Ax+Bu$$ as controllable.
We are still looking for the \$u\$ that has the form \$u=K(r-y)\$, where \$K\$ is a matrix we will compute, \$r\$ is the reference we want to achieve, \$y=Cx\$ are the measurements, and \$C\$ is the identity matrix if we have full-state feedback. So we have: $$\dot{x} = (A-BKC)x + (BK-A)r$$. In other words, given that the system $$\dot{x}=Ax$$ was unstable as we assumed, we add a controller which will transform the \$A\$ matrix so that the eigenvalues of the "new \$A\$" matrix, \$A-BKC\$, are stable. I used the place command to make only the positive eigenvalues negative, while keeping the negative ones as they were. The reference \$r\$ is just zeros. The result is the following, and shows all solutions going to zero:
The problem is that even if I use a \$B\$ which doesn’t grant controllability to the system (so, if I choose different nodes), the result still shows all solutions converging to 0:
I would expect that in the uncontrollable case, things would be a little less "prone to convergence".
2nd question: Why is that happening? Does it have something to do with the system being linear and probably easy to be stabilised if not controllable? Or is there any misconception in the use of the closed-loop method above?
3rd question: I realised that the computation of controllability of the system depends significantly upon the decimals of the values of \$A\$ matrix. For instance, the components of the matrix I used to produce the results above, were of an order of magnitude of 10^(3). To control this \$A\$, one node was never enough. On the other hand, when I scaled this \$A\$ matrix by something like 30, one node was always enough to control the whole network. Does it have something to do with the how the occurring rank of the controllability matrix is being read by Matlab, when after several multiplications of \$A\$ with itself it quickly becomes a very small number?
Really sorry for the long post!
AI: 1st question: I would expect the trace to be positive, as it indicates unstable behaviour which is clearly the case for the specific system. What might be the reason for this discrepancy? Or is it that I got it wrong?
I don't think that this trace condition you mentioned holds in general. From linear algebra we have that
$$ \text{tr}(A)=\sum_{\lambda \in \Lambda(A)} \lambda \\
\text{det}(A)=\prod_{\lambda \in \Lambda(A)} \lambda \\
$$
where \$\Lambda(A)\$ denotes the eigenvalues of matrix \$A\$. To be unstable, all it takes is one positive eigenvalue, so you can easily come up with systems that are unstable and have negative trace (one small positive eigenvalue and all other eigenvalues very negative). Regarding the determinant, "[h]ere the determinant Det = 3.7870 10^(-28) (positive)" might be incorrect, because it is a very small quantity, of the order \$10^{-28}\$, so it might be nonzero only due to numerical error.
The discriminant also does not help much for large matrices, check out discriminants for n>4.
and C is the identity matrix if we have full-state feedback.
No, if you are using the output signal (without an observer/filter) for your control that is called static output feedback. If you had a measurement of the whole state \$x\$ you would call it full-state feedback. It is not always possible to stabilize a system using static output feedback.
The problem is that even if I use a B which doesn’t grant controllability to the system (so, if I choose different nodes), the result still shows all solutions converging to 0:
I would expect that in the uncontrollable case, things would be a little less "prone to convergence"
Even if you don't have controllability you might be able to stabilize a system, say you have
$$ \dot x = A x + Bu, \quad A=\begin{bmatrix} -1 & 0 \\ 0 & 1\end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 1\end{bmatrix} $$
it is not controllable, but you can make it stable.
2nd question: Why is that happening? Does it have something to do with the system being linear and probably easy to be stabilised if not controllable? Or is there any misconception in the use of the closed-loop method above?
The only thing that is wrong is the usage of that output matrix \$C\$ in the feedback, other than that it seems fine. I also don't know what the place function does when the system is not controllable. But you can check if it is stable by looking at the eigenvalues of \$A-BK\$.
3rd question: I realised that the computation of controllability of the system depends significantly upon the decimals of the values of A matrix. For instance, the components of the matrix I used to produce the results above, were of an order of magnitude of 10^(3). To control this A, one node was never enough. On the other hand, when I scaled this A matrix by something like 30, one node was always enough to control the whole network. Does it have something to do with the how the occurring rank of the controllability matrix is being read by Matlab, when after several multiplications of A with itself it quickly becomes a very small number?
I don't quite understand what you are asking here, like, if you have some positive number \$\alpha\$ and
$$ \tilde A = \alpha A$$
then, \$(A,B)\$ is not controllable iff \$(\tilde A,B)\$ is not controllable. Just look at the controllability matrix for \$(\tilde A,B)\$
$$ \mathcal{C}_{(\tilde A,B)} = \begin{bmatrix} B & \tilde AB & \tilde A^2B & \cdots & \tilde A^{n-1}B\end{bmatrix}$$
which differs from the controllability matrix for \$(A,B)\$ only by having its columns multiplied by positive powers of \$\alpha\$, which does not change the rank of the matrix:
$$ \mathcal{C}_{(\tilde A,B)} = \begin{bmatrix} B & \alpha AB & \alpha^2 A^2B & \cdots & \alpha^{n-1} A^{n-1}B\end{bmatrix}$$
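The scaling argument can be checked numerically on a 2×2 example: rank([B, AB]) is full iff det([B, AB]) ≠ 0, and scaling A only scales the AB column. The matrices below are the ones from the uncontrollable example earlier in this answer.

```c
#include <assert.h>
#include <math.h>

/* det([b | A*b]) for a 2x2 single-input system:
   zero exactly when (A, b) is uncontrollable. */
static double ctrb_det(const double a[2][2], const double b[2])
{
    double ab0 = a[0][0] * b[0] + a[0][1] * b[1];
    double ab1 = a[1][0] * b[0] + a[1][1] * b[1];
    return b[0] * ab1 - b[1] * ab0;   /* columns b and A*b */
}
```

Scaling A by α multiplies this determinant by α, so it cannot change whether the rank is full; if MATLAB's rank of the controllability matrix changes when you rescale A, that points to the numerical tolerance inside rank() (the powers A^k·B becoming tiny or huge), not to a genuine change in controllability.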
H: Transformer output voltage not rising with input voltage
I recently stripped a microwave of its transformer and I was looking forward to playing around with it. I didn't want to plug it directly into the wall socket, so I decided to design a circuit that can run from my DC power supply. Here is my circuit:
simulate this circuit – Schematic created using CircuitLab
I created a monostable oscillator circuit for my clock like the one shown here:
and I am using it to create a square wave across the transformer. In the schematic I have a 10 kΩ resistor across the transformer output, but in real life I have a spark gap (I couldn't find one in the builder). As I raise the input voltage (at the peak of the square wave), I don't see anything until the current rises past around 0.03 A, at which point I get sparks! The output voltage is somewhere around 30 V unloaded, which will drop under load, but I don't really care about this since I am using a spark gap instead of a real load. The problem is, when I raise the input voltage to around 30 V, with extra resistors in front of it so that the current stays below the 0.1 A mark (above which my transistor explodes), I still only get a voltage difference of 30 V at the output. Why does this happen? Do I need more current to get a higher output voltage?
AI: A transformer and an inductor are often confused. A transformer for a microwave oven is a combination of an inductor and a transformer.
Your transistor effectively has an inductor in series with its collector (imagine exchanging the positions of the supply and transformer in the circuit -- they are in series and so this has no effect) -- then when the transistor switches off, the inductor acts to keep the current flowing -- which it will -- so the collector voltage will rise to the breakdown of the transistor and ultimately damage it.
Secondly, a 2N3904 is not capable of much power dissipation (or current) -- it may just be overheating.
You don't explain precisely how you drive the base, so it's hard to be more definitive.
You may need a more sophisticated base drive (and a more robust transistor). Try a power BJT such as the 2N3055 or TIP3055 (both NPN), or a MOSFET. Using a MOSFET is easier: put a resistor (1k) in series with the gate, and add a ~50 V zener diode + a 1N4148 in series between the drain and gate. That will protect it.
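To put a rough number on the flyback effect described above: the sketch below (all component values are assumptions for illustration, not measurements of a microwave-oven transformer) estimates the unclamped collector voltage by equating the inductor's stored energy with the energy in the stray capacitance, i.e. (1/2)LI² = (1/2)CV²:

```python
import math

# Illustrative only: every value here is an assumption, not a measurement.
L = 1.0       # H, assumed magnetizing inductance of the primary
I = 0.1       # A, collector current at the moment of switch-off
C = 100e-12   # F, assumed stray capacitance at the collector node

# Energy stored in the inductor at switch-off
E = 0.5 * L * I**2

# With no clamp, that energy rings into the stray capacitance;
# equating (1/2)*L*I^2 = (1/2)*C*V^2 gives the unclamped peak voltage,
# far beyond the breakdown of a small transistor like the 2N3904.
V_peak = I * math.sqrt(L / C)

print(f"stored energy: {E*1e3:.1f} mJ, unclamped peak: {V_peak/1e3:.0f} kV")
```

In practice the transistor breaks down long before the ideal peak is reached, which is exactly how it gets destroyed.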
H: Do automobile DC systems sport a device synonymous to the electrical infrastructure that we call Earth Ground wiring in standard household AC systems?
In automobile DC circuits, the chassis is typically connected to the negative battery terminal and, thus, the chassis provides the infrastructure for closing the DC circuit. In diagrams, this is often depicted with a symbol for the chassis ground. Unlike the Earth Ground wiring in a standard household AC system, the chassis ground is part of the electrical system (i.e., the circuit will not work without it), whereas the Earth Ground only becomes part of the circuit when certain other things malfunction (i.e., it is a safety device). My question is: Do automobile DC systems typically sport a safety device synonymous to the electrical infrastructure that we call Earth Ground wiring in standard household AC systems? Or is such infrastructure not needed in a car, and if it is not needed, why is it not needed? Obviously, there are breakers and fuses but those are also part of standard household AC systems. Therefore, I do not think that these components are the answer to my question. Also, while most of the time, you might be sitting in your vehicle, safely aloft your rubber tires, you might also hold or touch a connected electrical load while standing barefoot on nicely irrigated lawn outside of the vehicle.
AI: Also, while most of the time, you might be sitting in your vehicle, safely aloft your rubber tires, you might also hold or touch a connected electrical load while standing barefoot on nicely irrigated lawn outside of the vehicle.
And then what? No current flows through you, and you don't get hurt, because there's no circuit through the earth back to the battery negative terminal. The car is sitting on a nice set of insulators (tires), so the whole system is isolated from earth, and there is no need of (or benefit to) a protective ground.
It's possible to do the same with AC line power — to make a system where line and neutral are both isolated from ground, and there's no protective earth. In a trivial way, it's safer — a fault to ground won't pass any current (or just a tiny amount due to capacitive effects). But in such a system, a fault to ground also won't trip any breakers, make sparks, or do anything very noticeable, so it can pass undetected, and then a second fault to ground can easily be deadly. For this reason, and because of the logistical pain of keeping everything isolated in the first place, we don't normally do things this way. But for cars, airplanes, etc., it's a natural choice. |
H: Divide Hall effect sensor input
I've made a circuit which is supposed to take the output of a Hall effect sensor and divide by 5, so that I get one pulse per revolution of a motor. The motor is a Nidec FY9S40 variant.
Since the current out of the Hall effect sensor is very low, I have used a HCPL-4701 optocoupler. I then feed the output of the optocoupler into a CD74HC4017 counter. I have read that CMOS chips are very sensitive and can trigger on unwanted pulses if you for instance use a breadboard for testing, so I soldered the following circuit on a experiment board:
This worked quite well. I had what appeared to be some false resets now and then, but I assumed this was due to long wires between input at HALL_1 and output 19(Rx1). I made a proper printed circuit board. The problem is that when I got this and assembled the parts, the dividing does not seem to work at all. It is off every 3-4 rotations of the motor.
Yesterday, I came across some posts about current transfer ratios and calculating the resistor values of optocouplers. Due to the low output current, I found that perhaps R27 should be higher. I went with about 40K. This produced a cleaner square wave out of the optocoupler when I also lowered the value of R22 to 2k and removed R9. I believe these two resistors are in parallel anyway.
Can anyone spot some obvious mistakes in my design?
Are there some special precautions I should have taken when laying out my board?
(C14 and C7 are placed right next to their corresponding integrated circuits.)
AI: The optocoupler is probably doing more damage than good. It takes considerable current to drive the LED in the optocoupler. The 4017 is built in CMOS technology. Its inputs will draw virtually no current as compared to the optocoupler.
What you need is a Schmitt trigger to convert the most definitely not square wave output of the Hall effect sensor to a nice, neat square wave.
You might use something like the 74HC14. It is an inverter with Schmitt trigger inputs.
From the datasheet of the CD74HC4017:
That's 0.1 to 1 µA of current for the input of the CD74HC4017 versus 40 µA for the optocoupler.
The leakage current for the 74HC14 is identical:
Both beat the pants off the optocoupler as far as input current is concerned.
The Schmitt trigger also has the advantage of being designed for what you really need - squaring up a squishy signal to be used as an input for the counter. |
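A small simulation may make the benefit concrete. The thresholds below are illustrative guesses in the rough ballpark of a 74HC14 at 5 V (check the datasheet for real values); the point is that a single threshold chatters on a noisy, slowly rising signal, while two thresholds with hysteresis switch exactly once:

```python
import random

# Assumed values for illustration only:
VT_PLUS, VT_MINUS = 2.7, 1.6   # Schmitt rising/falling thresholds (V)
V_REF = 2.2                    # single threshold of a plain comparator

random.seed(1)
# Slow 0..5 V ramp with +/-0.3 V of noise, like a squishy Hall output.
signal = [0.005 * i + random.uniform(-0.3, 0.3) for i in range(1000)]

def comparator(vs):
    return [1 if v > V_REF else 0 for v in vs]

def schmitt(vs):
    out, state = [], 0
    for v in vs:
        if state == 0 and v > VT_PLUS:
            state = 1
        elif state == 1 and v < VT_MINUS:
            state = 0
        out.append(state)
    return out

def edges(bits):
    return sum(a != b for a, b in zip(bits, bits[1:]))

print("comparator edges:", edges(comparator(signal)))  # many (chatter)
print("schmitt edges:   ", edges(schmitt(signal)))     # exactly one
```

The noise amplitude (0.6 V peak-to-peak) is smaller than the hysteresis window (1.1 V), so the Schmitt output cannot chatter; a counter fed the chattering comparator output would miscount, which matches the symptom in the question.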
H: TMUX1119 SC70 package first pin identification
I have a TMUX1119 with an SC70 package.
The datasheet says there is a dot near the first pin. The actual device that was received is shown below. I can see a line and below it is written 1DF.
Which is the first pin of this IC?
Is it the one which is left of the straight line?
AI: You are correct in that the circled pin is number 1.
This is indicated by the horizontal bar on that end of the IC: |
H: Putting AC across polarised capacitors in headphone amp?
This circuit is from Fig. 20.3 in Douglas Self, Small Signal Audio Design, Third Edition, Routlege, 2020.
This is a headphone amplifier, and as such, both input and output signals are alternating current. Because the TL072 can't drive large loads, the design includes a very simple Class-AB amplifier.
My question concerns the capacitors C1 and C2. I have very carefully copied them from the figure in the book, and they are very clearly drawn as polarised capacitors. Now I read, in books, and also as answers to questions on this very site, that you should not put AC across polarised capacitors. On the other hand, Mr Self is not some random dude with a YouTube channel. He really knows what he's doing and so I doubt that he made a simple mistake. So my question is, what is going on here? Can I use polarised caps in this design safely? And if yes, does the polarity really matter, given that the voltages across them will not be polarised in one particular direction for very long?
And if I can not use polarised caps safely, what should I use instead, given the rather large value especially of C2?
AI: Can I use polarised caps in this design safely?
Yes you can.
...does the polarity really matter
It does but polarized capacitors / electrolytic capacitors can always withstand a small voltage of the "wrong" polarity. A small voltage could be 1 V DC for example.
Also it is the DC voltage that matters. In this circuit both sides of the capacitors are biased at a DC voltage of 0 V. When there is no (audio) signal, the DC voltage across the capacitors is 0 V.
This can easily be done in this circuit as the amplifier is powered from a symmetrical power supply of +/- 17 V. For comparison, have a look at an amplifier which is powered from a single supply and note how the input and output capacitors will have a DC voltage across them, so there the polarity is much more important compared to this design.
Only when a signal is applied will there be some (small) AC signal across the capacitors, remember that the capacitors short the AC signal so there will not be a lot of signal across the capacitors. If there is, then a larger capacitor value needs to be used.
...given that the voltages across them will not be polarised in one particular direction for very long?
Yes, and that is why the AC signal is OK even if that means that the voltage across the capacitor will be negative half of the time. What matters is the average or DC value. A proper AC signal has a DC value of 0 V so it does not really have any polarity.
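One way to quantify "large enough" for such a capacitor: together with the resistance it works into, it forms a high-pass filter with cutoff f_c = 1/(2·pi·R·C). The values below are hypothetical, not read from Self's schematic; the idea is to pick C so the cutoff lands well below the audio band, leaving almost no signal voltage across the capacitor:

```python
import math

# Hypothetical values for illustration (not from the book's figure):
R = 32.0        # ohms, e.g. a headphone load
C = 1000e-6     # farads, e.g. a 1000 uF electrolytic

# High-pass cutoff formed by the coupling cap and the resistance it sees.
f_c = 1.0 / (2 * math.pi * R * C)

# Well below 20 Hz means the cap "shorts" the whole audio band and only
# a small AC voltage ever appears across it.
print(f"cutoff: {f_c:.1f} Hz")
```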
H: What's the difference between the D and DTR parts?
Is DTR just the same as D which is packaged in tape and reel? If so, was it really necessary to categorize them like this?
AI: According to the L6562 datasheet, the table you have copied is for the order codes. That's the part number you put in your order. If you want the L6562D delivered on tape and reel, then you must order the L6562DTR rather than the L6562D which is not delivered on tape and reel. The parts you receive will be marked "L6562D" rather than "L6562DTR," but they'll be delivered in tape and reel.
Yes, it makes sense to separate the order codes that way. |
H: What is actually being measured in this video showing "excess AC voltage" in the human body?
One of my family members recently sent me a link to this Better Earthing website about the alternative therapy of "earthing".
The site includes a video that claims to show "excess AC voltage" and how to reduce it with their product.
(The video on YouTube if you don't want to visit the site itself: https://www.youtube.com/watch?v=pJJNQWKXUpE)
I'm curious about what physical or electrical effect is actually being demonstrated in the video.
The description on the video says:
Peter demonstrates how to use an autorange multimeter to measure the conductivity of [the earthing] products.
You'll see a 100 times reduction in excess AC voltage in Peter's body when he starts Earthing.
The setup:
The setup in the video shows:
a multimeter connected to ground (via the ground pin of a power outlet, no connections to the live wires)
the multimeter set to read "AC voltage"
a person touching the test lead to the multimeter
there is no current source in the circuit - i.e. no connection to AC power source at the wall socket
They then take readings from the multimeter in a couple of situations:
A. just holding the test lead
B. holding the test lead whilst holding a hand over a light switch
C. holding the test lead whilst sitting or lying on an earthing mat, which is also connected to ground
The video:
I've transcribed what he says at the different points of the video:
A. 1:17 "I've got 3.1 volts of AC voltage in my body"
B. 1:20 "If I put my hand [on] this light switch I'm ... up over 10 volts, I'm up to 11. So I've now got 11 volts of voltage in my body that I'm picking up from live wires."
C. 2:08 "I'm sitting directly on [the earthing bed sheet] and I've just gone down 2.1 [volts], to one third of a volt from about 3 volts. So that's about a 25-, almost 30-fold, decrease"
the measurement seems to go down to a reading of 0.13V
I only have a very basic understanding of electronics/electricity, but it doesn't make sense to me what the multimeter could be measuring without a flow of electrons to measure as current.
Questions:
So my question is pretty much: what is actually going on in this video?
In situation A (just holding the lead of the multimeter) what is the multimeter measuring?
is it any sort of meaningful measurement if there is no electricity source?
is it even considered a circuit if a person is just connected to ground?
if there is no source or load in the circuit then what flow of current is the multimeter reading as the AC voltage?
In situation B, how might holding the hand over a light switch cause the reading to increase?
In situation C, what is happening to cause the voltage reading to decrease?
the circuit ground-to-ground connection with a person and multimeter in between - how is that different to the "circuit" in situation A?
This question seems to be asking a similar question. However, there isn't a clear answer that I could understand and the experience of the question-asker seemed to be the opposite of what was being shown in this video (unless I'm mistaken?).
Thanks!
Previous discussion of earthing in the Skeptics community:
https://skeptics.stackexchange.com/questions/1652/does-earthing-reduce-oxidative-stress-and-inflammation?r=SearchResults&s=1|40.8550
https://skeptics.stackexchange.com/questions/35113/does-grounding-earthing-reduce-inflammation?r=SearchResults&s=2|40.1058
https://skeptics.stackexchange.com/questions/42972/does-earthing-therapy-reduce-cortisol-secretion-and-help-sleep?r=SearchResults&s=3|39.5105
https://skeptics.stackexchange.com/questions/23062/are-earth-grounding-mats-designed-for-connecting-humans-to-earth-ground-effect?r=SearchResults&s=4|30.4501
Not sure if it helps, but the multimeter being used is the Digitech QM1529.
AI: TL;DR; Nothing in that video is measuring anything particularly interesting, beyond the normal magnetic fields produced by mains wiring. It's nicely demonstrating how the live wire detector wands work (sticks that glow when near a live wire).
There is no "excess voltage in your body" to be concerned about. The video was made solely to extract money out of people by tricking them into buying a useless product.
So my question is pretty much: what is actually going on in this video?
In situation A (just holding the lead of the multimeter) what is the multimeter measuring?
A mixture of capacitive coupling, radiative pickup (the person acting like a giant antenna), and inductive coupling between the earth cable in the socket and the live cable that runs alongside it for a long distance.
is it any sort of meaningful measurement if there is no electricity source?
I suppose it depends on what you mean by meaningful. You're not measuring "excess voltage in the body", but largely the voltage fluctuations of the earth wire in the socket.
is it even considered a circuit if a person is just connected to ground?
The person is capacitively and magnetically coupled to other things, just very weakly. Because the fields are weak, the currents involved are tiny.
if there is no source or load in the circuit then what flow of current is the multimeter reading as the AC voltage?
The source of the voltage reading is the mains. The load is the mains wiring and to a tiny extent you.
In situation B, how might holding the hand over a light switch cause the reading to increase?
Touching the light switch effectively forms a large ground loop between the socket, whatever cabling is involved, you, and back to the multimeter.
This has increased the amount of inductive coupling because you are now measuring a much longer length of earth wire - you've now got a path which runs from the socket back to the consumer unit (or wherever the earth wire goes), then back from the CU to the light switch. This might now be the entire length of your house for example.
In situation C, what is happening to cause the voltage reading to decrease?
the circuit ground-to-ground connection with a person and multimeter in between - how is that different to the "circuit" in situation A?
By sitting on the earthing mat, you are effectively shorting out the antenna effect back through the mat - effectively like connecting the two ends of the probe together.
The mats act to form an equipotential region, which is only of any use if you are handling sensitive electronics - ESD mats are common in many electronics manufacturing facilities and testing labs. |
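As a rough sanity check on the numbers in the video: the body's few picofarads of capacitive coupling to nearby mains wiring and the meter's roughly 10 MΩ input impedance form a voltage divider. All values below are order-of-magnitude assumptions, and the divider is treated as purely resistive for simplicity, yet it lands in the same few-volt range the video shows:

```python
import math

# Order-of-magnitude assumptions, not measurements:
V_mains = 230.0        # V RMS mains
f = 50.0               # Hz
C_couple = 10e-12      # F, assumed body-to-live-wiring capacitance
Z_meter = 10e6         # ohms, typical DMM input impedance

# Impedance of the tiny coupling capacitance at mains frequency.
Z_couple = 1.0 / (2 * math.pi * f * C_couple)

# Treat it as a simple divider (magnitudes added, good enough here
# because Z_couple >> Z_meter).
V_reading = V_mains * Z_meter / (Z_meter + Z_couple)

print(f"coupling impedance: {Z_couple/1e6:.0f} Mohm, reading: {V_reading:.1f} V")
```

The current involved is under a microamp, which is why the reading is meaningless as a health concern: it mostly characterises the meter and the wiring, not the person.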
H: Possible Contradiction in N87 ferrite material μa vs B graph
Why does this graph description (of N87 material) state ^B ≤ 0.25 mT (which is the peak flux density), even though the horizontal axis goes up to 0.40 T?
Is the horizontal axis in tesla peak-to-peak? Is it a mistake?
The document is here.
AI: The caret ^ over the B indicates this is the Peak magnetic flux density for the small AC value during a large sweep at a lower frequency. This is the preferred AC value to measure the slope of the curve.
This is like small signal AC gain vs DC gain.
Many magnetics are rated for the current at which the inductance drops 10%. These plots show the peak AC flux on top of an otherwise low-frequency sweep or DC flux.
When the curve goes negative, losses and temp increase into possible thermal runaway. The peak B must be avoided with some safety margin. |
H: Connecting load to SSR - Doubts regarding load position
I am using a solid state relay to control supply to a load.
I would like to provide ground to the load all the time. When the SSR is turned ON, 8.4 V will become available to the load. In the datasheet, they have mentioned different types of connections. I am using TYPE C (shorting pins 4 and 6 for a higher current rating).
In this case, the load will always be connected to 8 V. Only when SSR is ON will the ground be connected to the load. I do not want this. I want the ground to be always available. Please refer to the attached image.
Is this type of connection acceptable?
The picture below shows how I would like to wire it, so that the load is always connected to the ground. Is this acceptable?
AI: Figure 1. You need to connect the load this way. Now the SSR output is in the battery positive and the load is connected to battery negative.
With the arrangement of Figure 1 you can now connect the battery negative to your control circuit COM.
simulate this circuit – Schematic created using CircuitLab
Figure 2. Common ground between two supplies and the load.
simulate this circuit
Figure 3. Equivalent circuit. |
H: Help a beginner understand if a design is correct
I'm fairly familiar with Ohm's law and I did some calculations; I'd like experienced engineers to confirm whether they are correct or point out where I made a mistake.
Here's the idea
I want to power a self-made strip of 20 LEDs preferably in series (assuming 2.5Vf @ 20ma)
I got a 4.5W, 5V, 0.9ampere DC power supply
I calculated the required resistor for the circuit like this:
P = V * I
P = (20 * 2.5) * 0.02
P = 1W
---
V = I * R
50 = 0.02 * R
R = 2500ohms
From that I assumed I could power 20 LEDs in series with that power supply and a 2.5Kohm resistor. I also learned that circuits in series have voltage drops but current is the same, and in parallel circuit, voltage is the same but current drops. Is that correct or did I miss something?
Again, I'm an absolute beginner in electrical engineering. I graduated in CS but never got into the physics of circuits.
EDIT
From the answers I gathered that a resistor is used to drop the voltage that isn't dropped by the LEDs, so in this case, my 5V power supply would be able to power at most 1 LED at full brightness or 2 albeit dimmer.
Could I use a step-up to get the voltage higher and power those 20 LEDs in series?
EDIT 2
All the answers helped me understand a lot more about electronics and how they work as well as to always design circuits for margin and not for the nominal values. I realized my mistake by trying to go for LEDs in series instead of parallel, it didn't come to me I'd need just two wires and not 2 for each LED, I know, dumb mistake.
Thank you all for the help, tips and information.
simulate this circuit – Schematic created using CircuitLab
AI: You can use the power supply with your LEDs ... if you connect each LED in series with its own resistor, and 20 of these in parallel.
As you have 0.9A and 20 LEDs at 20mA each, the LEDs will take 400mA total so you have plenty of power to spare.
You don't mention the type of LEDs, but blue/white/violet will take about Vf = 3V each, red/green/yellow about Vf = 2V each (for more exact values, read their datasheets).
So the resistor should be (5V - Vf) / 20mA = (5V - 3V)/0.02 = 100 ohms for white, or 150 ohms for red; increase these values for a safety margin (120 or 180 ohms) and it's perfectly safe to increase further if too bright.
There's more wiring involved, but it may still be simpler than a boost converter. |
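The per-branch arithmetic above, written out as a tiny helper (the Vf figures are the same ballpark values used in the answer; always check the actual LED datasheet):

```python
# Series resistor for one LED branch: R = (Vsupply - Vf) / I_led
def led_resistor(v_supply, v_forward, i_led):
    """Series resistance (ohms) to run one LED at i_led amps."""
    return (v_supply - v_forward) / i_led

r_white = led_resistor(5.0, 3.0, 0.020)   # white/blue LED, Vf ~ 3 V
r_red   = led_resistor(5.0, 2.0, 0.020)   # red LED, Vf ~ 2 V
total_i = 20 * 0.020                      # 20 parallel branches at 20 mA each

print(f"white: {r_white:.0f} ohm, red: {r_red:.0f} ohm, total: {total_i:.2f} A")
```

The 0.4 A total sits comfortably inside the supply's 0.9 A rating, which is the headroom the answer refers to.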
H: AtTiny214 can't change waveform output pins
I am trying to program my AtTiny214 (TinyAVR1 Series chip, SOIC-14 version, relevant datasheet: Datasheet) so that I have 2 PWM outputs with different duty cycles.
My problem is:
Changing the contents of the TCA0.SPLIT.CTRLB register does not change the output location of the generated waveform. Enabling all outputs in the register does not work either. I made sure to set all the involved ports as outputs, but I only get a waveform on WO0 and WO3.
I would appreciate any kind of help or ideas!
Code:
TCA0.SINGLE.CTRLD = TCA_SINGLE_SPLITM_bm; //split mode enable
TCA0.SPLIT.HPER = 100; //period of both waveforms: 100
TCA0.SPLIT.LPER = 100;
TCA0.SPLIT.HCMP0 = 25; //duty cycle of individual waveforms: 25%, 75%
TCA0.SPLIT.LCMP0 = 75;
//TCA0.SPLIT.CTRLB = 0b01110111; //WO0-2 WO3-5
TCA0.SPLIT.CTRLB = 0b00010001; //WO3 WO0
TCA0.SPLIT.CTRLA = TCA_SPLIT_ENABLE_bm; //split enable
PORTB.DIRSET = PIN0_bm | PIN1_bm | PIN2_bm; //WO0-2 outputs
PORTA.DIRSET = PIN3_bm | PIN4_bm | PIN5_bm; //WO3-5
AI: After a lot of digging around, I found out that the WOn output is only affected by the TCA0.SPLIT.LCMPn, and the WO[3+n] outputs by TCA0.SPLIT.HCMPn.
By setting TCA0.SPLIT.LCMP1 to a desired value, and TCA0.SPLIT.LCMP0 to 0, I could freely change the output pin from PB0 to PB1. I hope this helps out someone with this problem in the future :)
Some code:
#include <avr/io.h>

int main() {
TCA0.SINGLE.CTRLD = TCA_SINGLE_SPLITM_bm; //split mode
TCA0.SPLIT.HPER = 100; //period
TCA0.SPLIT.LPER = 100;
TCA0.SPLIT.HCMP0 = 5; //duty cycle of the WO3 output (PA3 by default)
TCA0.SPLIT.LCMP0 = 75; //duty cycle of WO0 (PB0)
TCA0.SPLIT.LCMP2 = 30; //duty cycle of WO2 (PB2)
TCA0.SPLIT.CTRLB = TCA_SPLIT_HCMP0EN_bm | TCA_SPLIT_LCMP0EN_bm | TCA_SPLIT_LCMP2EN_bm; //we are using HCMP0, LCMP0 and LCMP2
TCA0.SPLIT.CTRLA = TCA_SPLIT_ENABLE_bm; //split enable
PORTB.DIRSET = PIN0_bm | PIN2_bm; //WO0 and WO2 outputs
PORTA.DIRSET = PIN3_bm; //WO3 output
while(1);
} |
H: Capacitor value on supply grid
I am using ATmega328p TQFP for application described in schematic.
simulate this circuit – Schematic created using CircuitLab
So there is plenty of load on the output side (up to 500 mA). I have used a capacitor of 10uF/16V on the main battery grid.
Working: The ATmega328P will stay in sleep mode until an interrupt is triggered. After wake-up it will turn on the transistor (FMMT720 PNP), hence the whole circuitry will be powered up, during which it does some work and then goes back to sleep.
Problem: Sometimes (maybe once in 20 times) when the interrupt is triggered nothing happens, as if the microcontroller is stuck somewhere. If I then pull the reset pin to ground it will reset, and things will work fine again for a few cycles until it gets stuck again.
So I tried different things, and it turns out that adding a capacitor of 100uF/25V in parallel with the 10uF/16V never lets this happen again.
So was this really an issue of capacitance? Or of charge? If yes, how do you choose such a capacitor in advance in a case like this?
If so, why do only a few devices show this problem, and not all of them, and not every time?
Does it have anything to do with varying transistor gain?
I have checked all aspects I can from software side and I don't think there is issue in firmware.
Thank You.
AI: The startup impedance of a boost converter is very low driving an uncharged output Cap.
Thus the ratio of source Cap ESR to load determines the glitch size on the input.
For this reason soft-start circuits are often used on boost converters, or, as you have done, the input ESR is lowered by raising C. This is because capacitors come in different ESR grades (low ESR or general purpose) and the product ESR·C = T is roughly constant within the same family of capacitors. Typically it is <10 µs for low-ESR types and 100~200 µs for G.P. types.
To measure glitches on any Vdd to gnd, use a small plastic AC coupling cap (low ESR) to 50 Ohms terminated at the DSO from 50 ohm coax. This can easily be done with a BNC T and a leaded resistor in the T socket if your DSO does not have this option. |
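A back-of-the-envelope illustration of why the 100 µF cap helped (the ESR·C product and the inrush current below are assumptions for illustration, not measured values): with ESR·C roughly constant within a capacitor family, ten times the capacitance means one-tenth the ESR, and one-tenth the instantaneous input dip when the converter's startup surge hits.

```python
# Assumed values, for illustration only:
T_gp = 150e-6      # s, assumed ESR*C product for a G.P. electrolytic
I_inrush = 0.5     # A, assumed startup surge drawn by the boost converter

for C in (10e-6, 100e-6):
    esr = T_gp / C
    dip = I_inrush * esr   # instantaneous ESR drop on the 3.6 V rail
    # A dip comparable to or larger than the rail voltage means the rail
    # collapses entirely, which can brown out the MCU mid-wakeup.
    print(f"C = {C*1e6:.0f} uF  ESR = {esr:.1f} ohm  dip = {dip:.2f} V")
```

With these assumed numbers the 10 µF cap lets the rail collapse on a bad cycle, while the 100 µF cap keeps the dip well under a volt; the part-to-part and cycle-to-cycle variation in ESR and surge current would explain why only some boards fail, and only sometimes.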
H: Where is the azimuth and radial direction on the mode plot
Hello, the photo below presents the TE01 mode.
Where on the plot can I see the azimuthal and radial directions?
Thanks.
AI: That is completely up to you to define. We can't tell you where you put your horizontal plane, and its normal.
Especially in something as cylindrical as your ... cylinder, it makes little sense to work in a 3D spherical coordinate system, so there's no canonical origin. |
H: Short circuited power supply still safe to use?
Last night I short-circuited a Samsung 5v 1A phone charger to see how much current it could deliver. With my multimeter I measured 1.2A, despite the fact it says 1A. Is it possible I have damaged my charger, is it still safe to use? Furthermore, would it be safe to use at as a 24/7 power supply for something like a Raspberry Pi?
Thanks
AI: USB chargers like these are flyback switchmode power supplies, they have inherent current limiting (as you've witnessed yourself). The charger is still fine and you can continue to use it.
A lot of smartphones also test the maximum current of a connected charger by successively increasing the current they draw until the charger goes into current-limiting mode. The chargers have to be built to survive this as it happens every time you plug in your phone.
1A might not be enough for a Raspberry Pi, though. There are official recommendations on chargers for the various Raspberry Pi models. |
H: Prevent spikes during transition of inputs in AND gate
Question
Suppose we have a two-input AND gate. The inputs do not change instantly. So, it's not a perfect rectangular signal rather trapezoidal.
Consider the situation when input 1 is transitioning to the LOW state while input 2 is transitioning to HIGH: there is a short interval of time when both of them are HIGH and hence there is a spike in the output of the AND gate.
1. What electronic circuit or methodology can we use to get rid of these spikes?
2. Is there a definite terminology for this problem?
Plots
What I have:
What I require
Background
I am currently working on a Project. The terms that I used here may not be accurate so do pardon me, and correct me if possible. Thanks in advance.
AI: There's nothing that can be done to eliminate output spikes in this circuit.
The AND gate is suffering from an 'internal race condition'. This would often be referred to as a 'decoding spike', as they are most frequently met when an address word goes into something like a 74HC138 address decoder, and two address bits transition simultaneously.
This isn't a fault, it's perfectly normal. Logic gates are not defined or designed to do anything reasonable when the inputs are between the logic thresholds. They are defined to output the correct logic state some short time after the last input's transition to a valid state.
These sort of spikes are tolerated all the time in clocked systems. On one clock edge, the changing (usually address) bits are sent to the combinatorial logic. For a while, the outputs will slew, and spike, and eventually settle down. On the next clock edge, when the outputs are stable, they will be clocked into the next stage of processing. It's the job of the digital designer to make sure there's enough time between clock edges for the outputs to stabilise.
If you want to use the outputs directly, and can't low-pass filter to suppress the spike, then you must take steps to skew the inputs. This is the reason Gray code was invented, and Johnson counters have this property too: unlike straight binary counters, only one output makes a transition on any clock edge.
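The defining property of Gray code mentioned above is easy to verify: adjacent codes differ in exactly one bit, so no two outputs ever transition at once and race each other through downstream gates.

```python
# Binary-reflected Gray code: g(n) = n XOR (n >> 1).
def gray(n):
    return n ^ (n >> 1)

codes = [gray(n) for n in range(8)]
print([format(c, "03b") for c in codes])

# Every adjacent pair of codes differs in exactly one bit:
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
```

Running it prints the 3-bit sequence 000, 001, 011, 010, 110, 111, 101, 100, and the assertion confirms the single-bit-change property.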
H: In BLE pairing procedures, which channel is used for communication?
I'm working at electronics company.
I was wondering: which channel is used before two BLE devices are paired?
Does it use the advertisement channels to exchange the information required to pair? Or, once advertising ends and the Central requests pairing, does it change to the data channels and also use channel hopping?
Please tell me~~
Thank you.
AI: Yes, setting up the connection is done on the advertizing channels (which are only 3 if I remember correctly).
After everything is set up and negotiated, communication switches to the data channels (37 in total), which follow a pseudo-random channel-hopping sequence.
Pairing happens on the data channels. |
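For reference, the original data-channel hopping rule (channel selection algorithm #1 from Bluetooth 4.x; the newer CSA #2 introduced later is more involved) is just a fixed increment modulo 37. The sketch below omits the remapping step that handles channels excluded by the channel map:

```python
# BLE channel selection algorithm #1 (Bluetooth 4.x), simplified:
# remapping of channels disabled in the channel map is omitted.
def hop_sequence(hop_increment, n_events, start=0):
    """Unmapped data-channel index for each successive connection event."""
    ch, seq = start, []
    for _ in range(n_events):
        ch = (ch + hop_increment) % 37   # 37 data channels: 0..36
        seq.append(ch)
    return seq

seq = hop_sequence(hop_increment=7, n_events=37)
print(seq[:8])

# Any hopIncrement in the allowed 5..16 range is coprime with 37 (prime),
# so every data channel is visited exactly once per 37 events:
assert sorted(seq) == list(range(37))
```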
H: Max232 to DB9 connection confusion
The picture shows the standard connection between an 8051 and a DB-9 via a MAX232. The 14th pin is the translated output of the data transmitted by the 8051, so this should be connected to the receiver pin of the DB-9, which is pin 3, but here it goes to pin 2, which is called 'transmitted data'. So should we read "transmitted data = receiver of DB-9" and "received data = transmitter of DB-9"? Because, to me, the DB-9 labels seem to be the opposite of what I expected.
AI: There's 2 common pinouts for DB9 RS232, Data Terminal equipment (DTE) and Data Circuit-terminating Equipment (DCE).
DTE is the male DB9 pinout typically found on PC's.
DCE is the female DB9 pinout found on various RS232 devices such as modems.
The pinouts for DTE and DCE are such that the transmit pin on the one type matches the receive pin on the other type and vice versa...
Connecting 2 DTE devices together (such as 2 PC's) requires what is known as a "null-modem" cable that crosses RXD and TXD.
Image Source
As you can see, your schematic above matches the pinout for the DB9 Female DCE. This allows the device to be connected to a PC (DTE) with a normal 1 to 1 male-female DB9 cable. |
H: Is metal casing always recommended/expected for audio devices (for EMI protection)?
I am considering using a wooden enclosure for several homemade audio devices such as DI boxes and isolation transformers, but I know that almost all commercial products use metal casing. I have noted, however, when taking some of these commercial devices apart, that they don't even properly ground the chassis to effectively use the metal casing as an EMI shield. I was also considering just shielding certain components, such as the transformers, and leaving the rest bare. I was also going to try to conduct some tests to compare my wooden enclosure device with commercially available metal enclosures and go from there. Any input is appreciated.
AI: My first radio came in a plastic package, but audio circuits are sensitive to pulse noise currents such as the diode cap pulse current, which are often snubbed with 100 pf to reduce the bandwidth of the noise buzz.
Mostly noise is conducted rather than radiated.
from shared common ground currents and supply ripple as the stray capacitance in Class AB amps is not enough to pickup radiated noise.
SMPS are a different case. (Pun intended)
However, there is an opportunity for a poor layout to pick up noise from current loops between the supply and return in high-impedance loops.
My first experience with high-powered vacuum tube power amps with no shields left me with a great impression: very low noise and crisp audio with booming rock music in the late 60's.
My advice is to follow the layout practices for wiring of each section, with small current-loop areas and high-quality caps, and you should expect great results. Allow for proper ventilation with a screened vent on top.
H: What is the relation between symbol duration and subcarrier frequency spacing in OFDM?
What is the exact relation between symbol duration and frequency spacing?
As you may know, in WiFi 6 (802.11ax) there are more sub-carriers compared to WiFi 5 because of a narrower subcarrier spacing, but it also uses a longer symbol duration. I guess there is some relation between them.
AI: Subcarrier spacing is equal to the reciprocal of the symbol time.
Equalizing narrower channels flattens group delay error and improves noise margin at the expense of added DSP processing. |
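As a quick numerical sanity check of the reciprocal relationship (the symbol times below are the standard useful-symbol durations for 802.11ac/ax, not figures taken from the question):

```python
# Subcarrier spacing is the reciprocal of the useful (FFT) symbol duration.
def subcarrier_spacing_hz(symbol_time_s):
    return 1.0 / symbol_time_s

# 802.11ac (Wi-Fi 5) useful symbol: 3.2 us  -> 312.5 kHz spacing
# 802.11ax (Wi-Fi 6) useful symbol: 12.8 us -> 78.125 kHz spacing
wifi5 = subcarrier_spacing_hz(3.2e-6)
wifi6 = subcarrier_spacing_hz(12.8e-6)
print(wifi5 / 1e3, wifi6 / 1e3)  # spacings in kHz
```

A 4× longer symbol gives 4× narrower spacing, which is exactly why 11ax fits more subcarriers into the same channel width.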
H: My precision full wave rectifier is only outputting half wave; not sure why
I am currently using a precision full-wave rectifier as shown below. The issue I have is that the output looks like a half-wave signal. Images of the input and output signals of the circuit are shown below. The diode I am using is the 1N4001G and the op-amp is the LMH6611. I would like to know how to get a full-wave signal at the output. Any help is appreciated.
Input Signal below
Output Signal below
Input vs Output Signal
AI: simulate this circuit – Schematic created using CircuitLab
Figure 1. The first stage of the full-wave rectifier is a half-wave rectifier giving out the inverted positive half-cycles.
Figure 2. Input waveform in blue. Output in orange.
The problem is that you are using a single-ended supply and that means the op-amp can only output positive voltages.
Figure 1. Internals of the ancient 741 opamp. Source: Wikipedia.
From the internal schematic of the 741 op-amp it should be clear that the output can source current from the \$ V_{S+} \$ rail via Q14 or sink current to the \$ V_{S-} \$ rail via Q20. Nearly all (there are probably exceptions) op-amps will have a similar push-pull arrangement on the output.
It should be clear from the above that the lowest output voltage possible is when Q20 is turned fully on. The 741 is particularly bad and, due to the preceding stages, the output can only get to within a few volts of \$ V_{S-} \$.
The LMH6611 you are using may be a lot better (I didn't check) but it can't possibly output a negative voltage if it is powered from a single 9 V cell.
The simplest solution is to add a second 9 V battery to provide the negative rail. |
H: What would be the Open Circuit Voltage and Thevenin Resistance in this Circuit?
I am hoping someone can help me understand Open Circuit voltage and Thevenin Resistance.
To test my understanding I created the attached circuit and ran a transient analysis using LTSpice.
With a large RLoad resistance of 9e9 Ohms, the node voltage at node N001 is 8.36V, the node voltage at node N002 is about 1.407V.
My understanding is that with an Open Circuit, there can be no current, so there can be no voltage drop across the RLoad resistor.
Does that mean that the Open Circuit voltage is 8.36V?
For the Thevenin Resistance, I changed the RLoad resistor to have a tiny resistance, 1e-9, and the reported current was 1.233 milliamps. From that, I thought rth = 8.36/1.233e-3 = 6780.21 Ohms.
My understanding is there is another way to derive rth. Replace the current source with an open circuit, the voltage source with a short circuit and then determine what the resistance should be. When I try to do that, I don't get the same 6780.21 value. (R3+R4) || (R1+R2) = 6446.69.
If someone could help point out my mistakes I would appreciate it.
AI: Open circuit
With an open-circuit, there is no current. That's true. But that doesn't mean there cannot be any voltage difference. That's not true. The mistake here is that you are multiplying a zero current by an infinite resistance. Any finite value can be justified when you multiply infinity by zero. It's not sound reasoning to say the result must be zero.
So, it's best to say that open-circuits permit any finite voltage difference between the points. An open-circuit has no impact on the difference.
Thevenin approach using very high and very low impedance in LTspice
Your approach is one reasoned way to find the Thevenin impedance between two points. In your case, \$N_1\$ and \$N_2\$, using \$1000\:\text{M}\Omega\$ and \$1\:\text{n}\Omega\$ and measuring the current in both cases. You could also just leave it open-loop for one measurement, picking out the node voltages like you did, and then using a \$0\:\text{V}\$ voltage source between the two points and measuring the current through it. Either way, you'll get similar results.
Here, again, you made a mistake. You need to use the voltage difference when measuring the node voltages using the \$1000\:\text{M}\Omega\$ resistance. Here, I get an open loop voltage difference of \$8.36\:\text{V} - 1.4072\:\text{V}\approx 6.95\:\text{V}\$. Note that you don't just take the voltage at one side. Voltages are always measured between two points.
Never, ever, do you consider an absolute voltage value for anything useful. It's just not done. Don't ever do it. It's never useful taken by itself.
I also get your current of about \$1.233\:\text{mA}\$, when shorted with the \$1\:\text{n}\Omega\$ resistance.
I then get a Thevenin resistance of \$\frac{6.95\:\text{V}}{1.233\:\text{mA}}\approx 5.634\:\text{k}\Omega\$.
Verification
Your circuit can be trivially analyzed for its Thevenin resistance without resorting to the above steps.
As seen by node \$N_1\$, \$I_1\$ has an infinite impedance so it can be tossed out and ignored. \$R_1\$ goes to ground and \$R_2\$ goes to a voltage source. So the impedance seen by \$N_1\$ is \$R_1\mid\mid R_2\$.
As seen by node \$N_2\$, \$R_4\$ goes to ground and \$R_3\$ goes to a voltage source. So the impedance seen by \$N_2\$ is \$R_3\mid\mid R_4\$.
Clearly, the impedance seen looking through N1 towards N2 will just be the sum of the above results, or \$\left(R_1\mid\mid R_2\right)+\left(R_3\mid\mid R_4\right)\approx 5.64\:\text{k}\Omega\$.
Which is close enough to the results using your method, when properly applied (taking the voltage difference instead of the absolute value that you picked).
Note, here again, your approach wasn't correct. Compare it to how I just did it and you may see why. |
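The arithmetic in the answer is easy to check in a few lines of Python (the values are the measurements quoted above; `parallel` is just the standard reciprocal-sum rule):

```python
def parallel(*resistances):
    """Equivalent resistance of resistors in parallel (reciprocal-sum rule)."""
    return 1.0 / sum(1.0 / r for r in resistances)

# Thevenin resistance from the two LTspice measurements quoted in the answer:
v_open = 6.95        # V, open-circuit voltage difference V(N1) - V(N2)
i_short = 1.233e-3   # A, current through the 1 nOhm "short"
r_th = v_open / i_short
print(round(r_th))   # 5637, i.e. about 5.64 kOhm

# Sanity check of the helper: two 1 k resistors in parallel.
print(parallel(1e3, 1e3))  # 500.0
```

This matches the \$\left(R_1\mid\mid R_2\right)+\left(R_3\mid\mid R_4\right)\$ verification to within measurement rounding.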
H: Wilkinson Divider with one shorted and open port
I have the following setup:
It is my task to calculate the only value (\$S_{11}\$) of the scattering-matrix of the resulting one-port. Here is my idea on how to do that:
From the left port and through the upper path I get:
$$
1 * (-\frac{j}{\sqrt{2}}) * j * 1 * j * (-\frac{j}{\sqrt{2}}) = \frac{1}{2}
$$
And from the left port through the lower path:
$$
1 * (-\frac{j}{\sqrt{2}}) * (-1) * (-\frac{j}{\sqrt{2}}) = \frac{1}{2}
$$
In total I have thus a value of \$S_{11} = 1\$
My question is if this value is right? It seems too simple for a fairly complex circuit to essentially behave like it's an open circuit?
AI: Without any mathematics, you could reason thus ...
The open circuit line is λ/4 long, so it transforms its open circuit load into a short circuit at the wilky
The wilky is now shorted on both ports, so it behaves symmetrically, you can ignore its cross output ports resistor. It's λ/4 long, so what impedance will each of its lines present at its input? |
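For reference, the transformation being used in both steps is the standard quarter-wave result (stated here, not derived in the original answer): a lossless λ/4 line of characteristic impedance \$Z_0\$ terminated in \$Z_L\$ presents an input impedance of

$$Z_{in} = \frac{Z_0^2}{Z_L}$$

so an open-circuit load (\$Z_L \to \infty\$) is transformed into a short (\$Z_{in} \to 0\$) at the other end, and a short-circuit load (\$Z_L = 0\$) into an open (\$Z_{in} \to \infty\$). That is exactly the reasoning above.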
H: How to interface one mini-computer (such as Raspberry Pi 4) to multiple micro-controllers (such as Raspberry Pi Pico)?
I am developing an aeroponics-type system which has multiple stacks. Each stack has multiple trays which house the growing plants. I am going to use the Pi 4 as a main control station from which I can control, modify and monitor all processes. To control the operation of the water pumps, nutrient pumps, LEDs, ultrasonic foggers, etc., I have decided to add a microcontroller (Pi Pico) to each stack. The Pi 4 will communicate the instructions to the Pico, but I cannot figure out which communication protocol/method is the best (I2C, PCI, CAN, IEEE 1394)?
To make it more clear, the Pi 4 will interface with the Pico(s), and each Pico in turn will control the operation of its stack.
AI: You are spoilt for choices.
I2C - not really suitable for off board use. Glitches cause lockups that require timeouts and retries to recover from.
PCI (I think you mean SPI as most microcontrollers don't implement PCI). SPI is not unreasonable, but requires a number of wires and driver chips to have it run a distance.
CAN - I'd rate that as a candidate. Designed to run a distance and tolerates errors and glitches by design.
IEEE1394?? as in Firewire. Again, most microcontrollers don't implement that standard. Very high speed and not designed for your use case.
My suggestions:
UART (RS485). Along with CAN, this is a suitable candidate.
WiFi. Takes care of the issue with wiring. If you choose a ESP8266/ESP32 microcontroller it comes for free. Has Arduino support and zillions of examples. Pi4 has WiFi as well, so it ticks the boxes.
What you are proposing sounds like a 'Programmable Logic Controller' (PLC), which has been in use by industry for decades. To that end, there are standard protocols like Modbus that are supported on both ends.
Again, you're spoilt for choice - there are many, many different ways of achieving the same goal. I'd suggest you do some research before you start re-inventing the proverbial wheel. What you want to do has been done many times over, and blogs, videos, etc. are out there. I'd even suggest you don't need to write any code or design any hardware - it's all 'off the shelf' and open source. |
H: Phase Difference Between dynamic voltage and input current
I have an RLC circuit, and I know that if we wish to calculate the phase difference between the voltage and the input current then the formula is
$$ \phi = \tan^{-1} \frac{X_L -X_C}{R} $$
But as we already know, the phase difference between the inductor voltage and the input current is 90°. I know how to calculate it from the waveforms: $$\phi = 360 f \,\mathrm dT $$
But I have no idea how do I calculate it through first formula. In short, what will be the question mark in the following expression and why?
$$\phi = \tan^{-1} \frac{X_L}{?}$$.
Also for the capacitor voltage and input current.
I know I am weak in electronics and my basics too; that's why I am seeking help from you guys. Kindly correct me if I was wrong in any of my presumptions.
AI: But I have no idea how do I calculate it through first formula.
For a pure inductor the phase relationship between voltage and current is that current always lags voltage by 90°. This comes from the basic but fundamental inductor, voltage and current equation: -
$$V = L\dfrac{\mathrm di}{\mathrm dt}$$
Or, if you insist on using your original formula, \$\arctan(\infty) = 90°\$. It's infinity because \$X_L/0\$ is infinite, and that is because the resistance is 0 Ω.
Similarly for the pure capacitor, the phase relationship is that current leads voltage by 90°. |
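The limiting argument can be checked numerically; `atan2` handles the \$R = 0\$ case without a division by zero (a small sketch, not from the original answer):

```python
import math

def phase_deg(x_l, x_c, r):
    """Phase of source voltage relative to current in a series RLC branch:
    phi = atan((X_L - X_C) / R), computed with atan2 so r = 0 is well defined."""
    return math.degrees(math.atan2(x_l - x_c, r))

print(phase_deg(100.0, 30.0, 50.0))  # general series RLC case
print(phase_deg(100.0, 0.0, 0.0))    # pure inductor: 90.0 (current lags voltage)
print(phase_deg(0.0, 100.0, 0.0))    # pure capacitor: -90.0 (current leads voltage)
```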
H: Can't make simple oscillator based on NE555
I apologize for a very stupid question; I'm a complete beginner in electronics. I wanted to make a simple oscillator based on the NE555P chips I ordered from a local store, but I'm struggling to make it work. I found the attached schematic on Wikipedia and hooked it up on my breadboard. I checked all connections many times; everything seems to be fine. Here is the circuit:
I use the following values:
R1 68K
R2 10K (a bit different from the table in Wikipedia here: https://en.wikipedia.org/wiki/555_timer_IC#Astable; they recommend 8.2K. Could that be the reason?)
Ceramic capacitor between pin 5 and ground (labelled 103, so it should be 10 nF)
10 µF electrolytic capacitor, negative terminal towards ground.
As the power supply, I use a 12 V wall wart and a breadboard power supply; I tried both the 3.3 V and 5 V jumper positions. The power supply works for sure; I tested it without connecting it to this circuit. Also, when I turn it on, the LED on the breadboard power supply looks dim at 3.3 V and doesn't light at all at 5 V, so it looks like something is shorted internally. I also tried another NE555P, with the same result. I tried to make it as clean as possible so you can see the board. I hope you can help me and point out where I made a stupid mistake.
I also tried changing the polarity of the LED and the electrolytic capacitor, with no results.
Here is my breadboard photo; I hope everything is visible. Of course I tried it with the power supply connected; I just took it outside to take the photo :) Thanks a lot.
AI: You've got the polarities on your power supply swapped. The positive outputs of the PCB connect to the negative (blue) power strips on your breadboard and vice versa.
There's a chance you damaged your 555 timer IC's by doing this.
Rotate the breadboard 180 degrees versus the power PCB to correctly align the polarities.
I also can't see a current limiting resistor in series with the LED. |
H: Do complementary transistors have shoot-through?
Here I have two push-pull circuits, one made from a complementary BJT pair (CPH5524) and the other from a complementary MOSFET pair (BSL316C).
I would like to be able to push and pull at a frequency of about 100 kHz. I'm worried about shoot-through, which might cause things to get too hot for comfort. If I choose "complementary" parts, does this eliminate the problem of shoot-through, thus eliminating the effort needed to look for matching parts?
simulate this circuit – Schematic created using CircuitLab
What are the advantages of using a BJT push-pull rather than a MOSFET push-pull? Both of the parts mentioned satisfy my requirements of a 24 V (maximum), 1 A load; switching times are also well beyond what is required. Which one is inherently better?
AI: Does complimentary transistors have shoot throughs?
The emitter follower and source follower push-pull configurations shown in your question DO NOT have any issues regarding shoot-through.
If I choose "complementary" parts does this eliminate the problems of
shoot-through?
In the circuit configurations shown, there are NO ISSUES regarding shoot-through. Complementary parts are not going to be needed. Of course, both parts ought to be NPN and PNP and rated for equivalent voltage, current and hFE, but there can never be a shoot-through issue with this type of circuit because you CANNOT have both transistors biased "on" at the same time, due to the way the bases (gates) are connected.
You get potential shoot-through problems when operating a push-pull circuit with the transistors connected as common emitter (common source) but, that isn't the case in your schematics.
Both of the parts mentioned satisfy my requirements of a
24 V (maximum), 1 A load
If your supply voltage is 24 volts and you wish to have a switching output that can produce 24 volts and 0 volts then (as shown and implied) neither of these circuits can be expected to perform adequately; in particular, the MOSFET circuit will "lose" several volts from the 24 volt rail, and similarly when switching towards 0 volts. If you need hard switching to both supply rails, then both these circuits are ineffective, with the MOSFET version being significantly worse than the BJT circuit.
Of course, you can improve the hard-switching capabilities of "follower" push-pull stages but, this will involve significant re-design of the driver circuits to the bases/gates. |
H: Issues with bi-directional data bus
So, I'm trying to get familiar with VHDL and FPGAs and thought designing a simple processor would be a good idea...
I've implemented the instruction memory, an instruction register, a couple of general-purpose registers and the opcode decoder with two basic instructions, "load" and "move". The registers and the opcode decoder are connected to a bidirectional 16-bit bus that is controlled by the opcode decoder. Everything works as intended in the simulator, but when I try to run it on the actual hardware nothing really works. From looking at the schematics generated during synthesis, it's obvious that the bus is the main issue.
If I synthesize the register modules by themselves I get the IOBUFs that I would expect are needed for the bidirectional bus to work, but when the modules are used in a bigger design they get replaced with some other complicated logic that obviously doesn't work.
I've tried using dont_touch and a bunch of other attributes to keep the original synthesis of the modules, but this generates "multiple driver" errors during implementation.
I've been stuck on this bus thing for quite some time and I'm not getting anywhere, so perhaps someone who actually knows what they're doing can point me in the right direction...
Oh, and I'm using Vivado with the Basys3 dev board, if that's important. If any other information is needed, please feel free to ask.
Thanks!
AI: The secret sauce (if you're using FPGAs newer than the Xilinx XC4000) is: don't use bidirectional buses inside the FPGA.
There is one Read bus (everything reads from it) and everything has its own Write bus (which thus never needs to be shared, or tristated). All these Write buses go to one big multiplexer, which drives the Read bus.
The multiplexer is controlled by the address decoding logic (and/or bus arbitration logic if you have more than one bus master, e.g. DMA controllers or multi-core CPUs). Can be extended to multiple Read buses and multiple muxes for multiprocessors, but get familiar with the single Read bus first.
One implementation of the multiplexer is to use the convention that each Write bus is '0' unless explicitly driven (e.g. replacing 'Z' with '0', or adding a 'L' pulldown and letting synthesis make the obvious transformation, or having the address decoder drive '0' to an AND gate on the Write bus) - then the mux itself reduces to a large OR structure.
Or, simply write the mux as a mux and let synth take care of all that.
Every Xilinx FPGA newer than the XC4000 (possibly the first-generation Virtex; it was SO long ago I can't remember any more) has transformed VHDL "internal bidirectional bus" code using tristates into exactly this logic, eliminating the tristates, and AFAIK almost all other FPGAs have followed this approach some time in the (approx.) quarter century since.
Your external bus interface can of course use tristates on a bidirectional bus; internally, it will translate to the above.
Most newer design hides all the details in "AXI bus IP cores" ... for portability you may want to consider doing the same, but if you're rolling your own CPU you may want to roll your own bus too; it's not that difficult. |
H: Altium PCB side dimension bar
This dimension box is moving with my mouse. How can I close this thing?
AI: Move your mouse so that the box sits back in the top left of your display (or wherever you want it to sit permanently) and press Shift G to dock it. |
H: Why am I grounding my HV probe to mains to read a secondary (different) voltage in that circuit?
In this scenario I'm measuring the output of a 4-wire 5 kV transformer: line voltage on the primary, two HV wires on the secondary. For any normal multimeter reading, if I want to measure any single component in a circuit, I put my leads on that individual component. For instance, if I'm measuring a 5 V piezoelectric sensor on a car, I'm not putting my meter probes on battery negative and the 5 V positive; I'd get wacky readings. So why does my HV probe require me to ground on the 110 V side and touch the lead to the 5 kV secondary? Shouldn't my probe (ground AND hot points of contact) be connected only to the secondary side?
Edit:
Probe ground to secondary: incorrect reading.
Probe ground to primary: correct reading of 5kV
To clarify my question: regular voltage measurements, as I'm familiar with them, involve reading the voltage AT the individual component in a circuit, because those components often operate at voltages different from the input voltage that's powering the device. A voltage reading at the device means I'm attaching both COM and V to the device.
Yet the proper way to measure the output voltage of this transformer was to take that reading from two completely different parts of a circuit, one a 5kV, and one a 110. If voltage reading comes from the difference in potential, wouldn't this yield 4,000 - 110, which would be 3,890V?
Resistance measurements (for OP to complete).
Terminals Resistance
------------------------------
Black to white Ω
Black to HT1 Ω
Black to HT2* Ω
Black to frame** Ω
HT1 to HT2* Ω
HT1 to frame Ω
* To be found. Not visible in photos.
** Frame or wire attached to frame by ring terminal.
AI: The photos you've added reveal that one of your "secondary" connections is the transformer laminations. You need to check for continuity between the HT wire and the frame. Take a resistance measurement and see if you can find a secondary connection to the frame.
There is a possibility that the transformer is an auto-transformer.
simulate this circuit – Schematic created using CircuitLab
Figure 1. Autotransformer configuration. Note that in this configuration there is no internal connection to the transformer frame.
You should be able to find which wire is common using resistance measurement techniques.
If the configuration is as shown in Figure 1 then it should be clear that measuring between HT and frame will just give you a reading of capacitive coupling. Correct meter common reference should be to the common terminal on the transformer.
I don't see any fuse in your test setup. This is bad practice and may bite you some day. |
H: Can hazards increase switching time and/or number of switchings?
In connection with question
CMOS gate logic switching time based on input vectors
one of the answers mentioned hazards. Although the question was about the switching time, in the end I did not find the answer to my question there. Say we have several one-bit adders, chained, where the lower carry-out bits provide the inputs for the next carry-in bits.
When adding an all-ones number and 1 (and we have no special handling of carry), the first carry is set with some time delay, which causes the next bit to be recalculated with some more delay, and so on.
In a 64-bit adder, at the end of the chain this can be a considerable delay. Depending on the ratio of the transfer-to-the-next-bit time to the switching time, the new carry-in inputs may arrive at different times with respect to the arrival of the two summand bits.
That is, I expect it may affect the operation: it may cause not only glitches but also unwanted switching. I.e., I expect that it may increase the switching time if the changed carry-in appears in an early phase of switching, and may cause additional switching if it arrives in a later phase. What are the timing relations here (I mean the wiring delay to the next adder(s) versus the switching time)?
(if we consider this operation as a "computation", the arrival of any of the input operands starts the computation, and the arrival of the next operand starts the computation again, in parallel with the previous one. In biology, there exists a "refractory" period, which limits parallel operation; in electronics, as far as I know, no similar limitation exists. How then this unwanted parallelization is handled?)
Addendum:
in connection with this question I found at
https://www.sciencedirect.com/topics/engineering/dynamic-power-dissipation
some definition-like terms, and a kind of answer to my question.
"useful data switching activity (UDSA) and redundant spurious switching activity (RSSA).
RSSA or glitching can be an important source of signal activity. Glitching refers to spurious and unwanted transitions that occur before a node settles down to its final steady-state value that occurs due to partially resolved functions.
Glitching can cause a node to make several power-consuming transitions."
AI: Hazards in logic are also called metastable or race conditions, with 2 or more inputs changing to opposite states at almost the same time at an OR/NOR gate. The output might produce a "glitch" during the time when both are low.
Similarly for AND/NAND, a glitch occurs on the transition when both appear to be high.
The solution is to choose the same or inverted clock, when the inputs are stable, to synchronize and latch a valid output in a FF or register.
Diode OR logic is similar, but in single-diode high-speed switching circuits with DC-DC converters, the diode conducts after a single transistor turns off, and the reverse-recovery time is similar to a "refractory period" during which the diode cannot change state fast enough, due to storage capacitance, to begin conduction from reverse to forward bias.
Let me be perfectly clear.
@ElliotAlderson A hazard (glitch) is the outcome of a 0-hazard or 1-hazard arising from input race or metastable conditions. For the outcome to be a hazard, the input terms are similar; "metastable" means in between states, and a race is a timing factor between transitions that produces an unexpected output glitch.
This may occur with chip tolerances, temperature and supply tolerances if the metastable condition exists. |
H: Does the Zener diode in this schematic need a current limiting resistor?
This is a fairly generic P-Channel MOSFET driver. The supply voltage is 12 - 48 V. The Zener used is a BZX84-C12. Would this Zener diode need a current limiting resistor of, say, 1 kiloohm between SUP+ and its cathode?
AI: In the given schematic, any supply voltage above the Zener voltage would allow an unlimited amount of current to flow when the opto-coupler is conducting, possibly damaging the Zener and the opto-coupler.
The purpose of the Zener is to clamp the MOSFET's gate-source voltage. Most MOSFETs have a maximum Vgs of ±20 V, so 48 V is certainly too much. Placing a resistor between SUP+ and the Zener cathode would limit the current but would do nothing to clamp the MOSFET's Vgs.
Instead, the resistor should go between the MOSFET gate/Zener anode junction and the opto-coupler.
simulate this circuit – Schematic created using CircuitLab |
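To put rough numbers on the fix (all values here are illustrative assumptions, not from the original question: a 48 V worst-case supply, the 12 V BZX84-C12 with its roughly 250 mW rating, a 10 mA target clamp current, and the opto-coupler's saturation drop neglected):

```python
# Hypothetical sizing sketch for the added series resistor.
v_sup_max = 48.0   # V, assumed worst-case supply
v_z = 12.0         # V, Zener voltage (BZX84-C12)
i_target = 10e-3   # A, assumed clamp current when the opto conducts

r_min = (v_sup_max - v_z) / i_target   # resistor value limiting the clamp current
p_z = v_z * i_target                   # Zener dissipation at that current
print(round(r_min), round(p_z, 3))     # 3600 ohms, 0.12 W (within the ~250 mW rating)
```

Pick the next standard value at or above `r_min`, then check that the remaining current still switches the MOSFET fast enough for the application.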
H: Xilinx Spartan 3A programming connection : where are MISO and MOSI pins?
I am trying to find the SPI connections to download my program to the Xilinx XC3S50A-4VQG100C but can't figure out which pins are the SPI pins.
Newbie alert: I just started learning FPGAs and am trying to build my own dev board from scratch (I always learn better that way). In the microcontroller world, all I needed to do was connect MOSI, MISO, SCK and SS to connector pins and run the programmer, but things are different here!
I am using a Waveshare programmer.
AI: If you are expecting to be able to just use your programmer to load a configuration bitstream directly into the FPGA, which only stores it in specialized SRAM, you are in for a surprise when it forgets everything every time you power it down.
Unlike microcontrollers, the vast majority of FPGAs do not have onboard non-volatile memory. The SPI is intended to write to an external SPI flash memory. The FPGA will expect a certain protocol for the SPI flash memory and a particular wiring scheme to it.
Every time upon power-up, if the correct jumpers are in place, the FPGA will automatically run through a sequence to read its configuration bitstream from the SPI flash to configure itself. So you should read the manual carefully for the acceptable flash memories that can be used, as well as the wiring scheme, since this will make or break your board, similar to getting the JTAG programming wiring wrong on a microcontroller.
If you missed this basic difference, chances are you missed some other important gotchas that are different from microcontrollers. Read the manual carefully, or at least skim each section multiple times to spot glaring differences. There are other differences too (like clocking and power supplies). There's a reason implementing an FPGA on a PCB is more difficult (sometimes much more difficult) than an MCU. Don't assume things on an FPGA are already taken care of like on an MCU. One is like driving a manual and the other is like driving an automatic.
I suggest you get a development module first, which is like a tiny, stripped-down dev board with almost nothing on it (usually just the FPGA, power supplies, clock, configuration ROM, sometimes some non-volatile ROM, sometimes RAM, maybe an ADC, and sometimes a USB programmer) that you can plug into your own board (or even a breadboard). That way you have a reference for whether your code is working or not on your own dev board; otherwise you are debugging from two ends simultaneously. Not to mention you can look at the schematic of the development module, which is much smaller and simpler to understand. It might be worth examining the schematic even if it's not the same FPGA. I recommend Micronova, Trenz (the modules actually made by Trenz, since they sell others as well), and Alchitry. |
H: How to add these components on LTspice for simulation
I would like to add these SPICE models so I can simulate them in LTspice.
PBSS4350X (.txt)
PBSS5350X (.txt)
AO4407A (.mod)
I have looked at this tutorial on how to add the models, but when I opened the library that contains all the models, it looks different from the text files. The PBSS files have a .SUBCKT and a .MODEL for a diode in them. Do I have to add them to the diode file too?
The .mod file is even more confusing: the file contains an NMOS, whereas the MOSFET it came from is a PMOS.
It may be of note that doing method 1 in the tutorial, where the SPICE text is just placed in the drawing area, results in an error.
AI: The first thing to realize is that all the manufacturer models you are using are of the .subckt form, i.e. subcircuits. A subcircuit is a collection of other primitive devices, such as resistors, capacitors, diodes, transistors, etc. More complex behaviors can be simulated using a subcircuit than with a single primitive device, like the simple .model for a diode shown in your linked video tutorial. Discrete MOSFETs actually require a subcircuit of some kind, because the built-in SPICE primitives for MOSFETs are for monolithic (integrated circuit) applications only. LTspice has an exception to this with its proprietary VDMOS primitive model architecture. However, only the built-in models use this type, and a 3rd-party manufacturer will rarely provide you a discrete MOSFET model in this form. The AO4407A file you linked is a subcircuit, and I wouldn't worry too much about what's inside the subcircuit (like the monolithic NMOS you mentioned) unless you are the subcircuit designer or want to modify it in some way.
The least brain-intensive way to add subcircuit models on your system is to first put the library file into your Documents\LTspiceXVII\lib\sub folder and then use this tutorial:
https://www.analog.com/en/technical-articles/ltspice-simple-steps-to-import-third-party-models.html
If you want to use standard symbols instead of the generic boxes generated by the above method, you need to Ctrl+rightclick each symbol and change the prefix to X as described in this tutorial:
https://www.analog.com/en/technical-articles/ltspice-using-an-intrinsic-symbol-for-a-third-party-model.html |
H: Embedded: Main Stack Pointer and Process Stack Pointer co-existence
I have a follow-up question to the one asked here: Main Stack Pointer(MSP) vs Process Stack Pointer(PSP)
I was wondering how these pointers are managed so that they do not write to the same stack location if stack usage is high. My understanding is that they both point to addresses within the stack that they share, so how do both pointers co-exist?
AI: As the answer to that other question suggests, there is in fact more than one stack in the system. There's one stack for the "operating system" (usually an RTOS, but could be something like Linux), and a separate stack for each process/task/thread running in the application code. The OS always uses the main stack pointer (mainly for quick response to interrupts, etc.), and the process stack pointer points to the stack of whichever process is currently running.
When it comes time to switch to a different process, the process state is saved on its stack. Then the PSP is pointed to a different stack and the new process's state is restored from that stack. The stacks do not overlap in any way. |
H: LEDs combinational circuit series and parallel
I have a project involving lighting up 12 dual LEDs using a 9 V DC power adapter with a maximum current of 500 mA.
According to the datasheet Dual LEDs datasheet the forward voltage is about 2 V at 20 mA.
I want to light up the LEDs with good brightness.
So far this is the only solution if I run them at 20 mA, but if I run them at, say, 24 mA, the resistance changes to something like 47 ohms.
My concern is that if a string happens to fail, I believe the current will be higher in the other strings. Also, the strings are driven by PNP transistors, which are required for my project's needs. Their bases are connected to microcontroller pins and set low to activate the PNP and thus light up a string; the PNP transistor's IC = 600 mA.
So for example, if all the strings are ON and one string happens to fail, will the other two strings be affected drastically to the point that they also stop functioning? Or if only two strings are on and one fails, will the remaining string still function?
I need help finding a good solution given these restrictions: 9 V, 12 LEDs with 4 LEDs per string, and PNP transistors. I know having LEDs in parallel is not really recommended, but I want to find the best possible solution that makes this project work with minimal downsides.
AI: Why you shouldn't use a resistor to limit the current...
I don't like using resistors as current-limiters -- most especially in cases where you are using up most of the voltage headroom with the LEDs, themselves, and leaving almost nothing left over for the resistor. If you are willing to waste a lot of voltage overhead for the resistor, then they work fine as current limiters with LEDs. But otherwise, no. They are very bad to use with LEDs when there is little to no overhead.
This is where I depart completely from the answers already provided, as well as your own thinking. I can't even get myself to help you there because it is just so wrong right from the start.
I blanch at the idea of leaving only a nominal (guessed) \$1\:\text{V}\$ for the resistor after an estimated \$8\:\text{V}\$ for the LEDs. It just won't work well and it slaps me in the face to see it.
You can read the details here and here. The gist, with regard to the current variation in the LED, is:
$$\begin{align*}
\%\,I_{_\text{LED}}&=\%\,V_{_\text{CC}}\cdot \frac{1}{1-\frac{V_{_\text{LED}}}{V_{_\text{CC}}}}\tag{1}\label{1}
\\\\
\%\,I_{_\text{LED}}&=-\%\,V_{_\text{LED}}\cdot \frac{1}{\frac{V_{_\text{CC}}}{V_{_\text{LED}}}-1}\tag{2}\label{2}
\\\\
\%\,I_{_\text{LED}}&=-\%\,R\tag{3}\label{3}
\end{align*}$$
In the above cases, \$V_{_\text{CC}}=9\:\text{V}\$ and \$V_{_\text{LED}}\$ is the total voltage for all the LEDs placed in series. So for purposes of (4) \$2\:\text{V}\$ (nominal) LEDs, \$V_{_\text{LED}}=8\:\text{V}\$.
The above also assumes, throughout, that \$V_{_\text{CC}}\$ is the average value and that \$V_{_\text{LED}}\$ is also the average value and that you always have sufficient voltage to operate the LEDs.
Eq. \$\ref{1}\$ is interesting. In your case with the above assumptions, the variation in LED current will be \$9\times\$ the variation in the supply voltage. If the supply rail varies by 5% then the LED current will vary by 45%! Something to think about!
Eq. \$\ref{2}\$ is similarly interesting. In your case with the above assumptions, the variation in LED current will be \$8\times\$ the variation in the LED voltage. If the LED voltage varies by 10% then the LED current will vary by 80%! Something still more to think about!
Eq. \$\ref{3}\$ isn't terribly important. All it says is that if you use 2% rated resistors then the LED current regulation will be -2% (the negative sign just means "in the opposite direction of.") I'm sure this isn't of any concern, at all.
You could improve things by giving more voltage to the resistor. What about just (3) LEDs per series string?
Eq. \$\ref{1}\$ now says that the variation in LED current will be \$3\times\$ the variation in the supply voltage. If the supply rail varies by 5% then the LED current will vary by only 15%. That's better.
Eq. \$\ref{2}\$ is similarly interesting. In your case with the above assumptions, the variation in LED current will be \$2\times\$ the variation in the LED voltage. If the LED voltage varies by 10% then the LED current will vary by 20%. That's much better.
So, you can see that the resistor gets better when you throw more voltage at it. But I think you can also see that I believe you are cutting things far too thin when you expect to use (4) LEDs in the string.
Another reason why you shouldn't use (4) LED in-series strings...
The above was all theoretical. Now let's look at the LED datasheet:
Already you can see that it is possible that \$V_F=2.4\:\text{V}\$ when operating at \$I_F=20\:\text{mA}\$! That's +20%! Quite a variation. (4) such LEDs, if you happened to grab all the wrong ones, could mean they don't even light up for you!
Now, admittedly, it's unlikely. But are these the kinds of risks you want to take?
Battery voltage...
EDIT: I just noticed that you wrote in a comment that, "We're using a
9V power adaptor as our voltage source." So the following may not be
applicable. That said, it may be useful to others who actually are
using a battery, instead. So it stays, with my understanding that it
doesn't apply directly in your case.
One more thing. The battery voltage of \$9\:\text{V}\$. Let's look at a typical ENERGIZER 522 -- 9 V battery:
Right off the bat you can see that they just barely allow about \$60\:\text{mA}\$ draw. Maybe. (They are rated in the other curves for a lot less draw.) So, sure. You can use one.
But what about the voltage? Note that the above chart specifies \$4.8\:\text{V}\$? Do you expect to use one down to that voltage??? I doubt it.
Now think a moment. You already know that it is possible that you might have a (3) LED in-series string requiring as much as \$7.2\:\text{V}\$. Sure, you expect no more than \$6\:\text{V}\$, typically. But you don't design for "typically." (Or you shouldn't.)
You could drop down to just (2) LEDs, in-series. But I think you need to plan for some battery voltage other than \$9\:\text{V}\$ in your design.
Let's look at another curve from that datasheet:
See how rapidly the voltage drops to \$8\:\text{V}\$?? And it isn't close to being used up, yet. I might pick \$7\:\text{V}\$ as my minimum design voltage. You need to pick your own number. And then stick with it.
How many LEDs, now?
(Of course, you may actually be using a high quality \$9\:\text{V}\$ power supply. But you haven't said so in your question. I've chosen to assume the obvious, in response.)
Temperature
I forgot to mention temperature! The LED voltage will vary significantly over temperature variations, ambient and otherwise. And if their voltage varies, as you know by now, the resistor will supply more (or less) current as the drop across the resistor changes to accommodate the LED changes due to temperature.
Yet another reason not to like resistors.
Use current regulation.
Finally, you probably really want to use current regulation using BJTs. This is really easy to adapt to the circumstances and it will give you a uniform result even as the battery voltage declines and even with varying LED voltages in your strings. The problem with the idea is that it also requires some voltage overhead. But the benefit is that, unlike a resistor, it requires less voltage overhead for good current regulation. So, at least, you are in better circumstances with an active current limiter vs a passive one.
If you want to see all the gory details of designing one, look here.
But I can't help you with that because you need to make some decisions.
If you do decide to go with active current regulation...
Here's the schematic I'm thinking about, right now:
simulate this circuit – Schematic created using CircuitLab
I'm assuming above that your MCU cannot stand-off high voltages. So I've included option 1 for driving the circuit. There may be other options to consider. (An NFET comes to mind.) But again, I'd need more details and your thoughts about NFET vs BJT.
(At these current levels I like BJTs more -- they are cheaper and easier to get and I don't have to worry about whether or not I'm grabbing one with the right \$V_{_\text{GS}}\$. But that doesn't mean I wouldn't use an NFET if a junk-box one was lying around and handy.)
Anyway, there's a thought to consider. It's an active circuit. And it will work quite well. It is stable against temperature variations, as well. (Also against variations in the power supply rail. So long as there is enough overhead and the devices can dissipate wasted heat. But it's better to avoid wasted heat and just keep the supply rail close to what's needed, if possible.)
You can probably work out \$R_3\$ given your MCU voltage and that you only need about \$150\:\mu\text{A}\$ into the base.
Summary
If the above doesn't convince you to never again use resistors as LED current-limiters, nothing will.
There are also some wonderful ICs available these days. They do even better at all this -- lower voltage overhead, better current control over temperature, etc.
But I consider them to be boutique. New ones come out that are better and offer more features, all of which also help sell them. Given time, the older ICs will gradually fade in volume and at some point wind up "hard to get."
On the other hand? BJTs are forever. (And I can get big, fat ones for high currents that the ICs can't readily handle, too.)
EDIT: Added for the case where you want only (2) BJTs per series chain...
If you are willing to sacrifice some BJTs for the common good, then you can use mirror BJTs, instead. This will reduce the number of BJTs per string to just two, instead of three.
The high-side resistors of \$5.6\:\Omega\$ are there mostly to limit source current when the series string isn't ON, and to mitigate the impact of BJT variability within a part number and manufacturer when the string is ON.
The I/O pin resistors can probably be \$27\:\text{k}\Omega\$, or thereabouts.
simulate this circuit
Note that I decided to set up a Darlington. I'd forgotten about what happens when a series chain isn't ON. (Or if none of them are...) So there are a few more BJTs in the above circuit. Make sure \$Q_1\$ can dissipate. A TO-220 wouldn't hurt.
Finally, keep in mind I'm a hobbyist and no expert on any electronics topic. I spend hobbyist time doing hobbyist level thinking and offer that for free. I make mistakes. I miss important details. So do test this out with just one string, at first. And don't hook it up to your MCU until you are feeling comfortable with the results. |
H: Component name - breaking circuit based on ambient temperature
I'm trying to add a (resettable?) fuse to a circuit (24 VAC) which breaks a circuit when ambient temperature reaches a given temperature (say 60° C) in addition to current crossing a threshold.
All I could find were fuses for dryers - probably because I used the wrong keywords.
What is this safety equipment called so I can buy it? How would it be easiest to integrate it with a circuit which uses cables?
AI: There are thermal fuses. These are inexpensive, and can switch large amounts of current. They are, however, not resettable. Once they open, that's it. They usually carry a bevy of safety-agency approval marks since they are used to ensure safety of household appliances such as heaters and hair dryers.
There are also resettable thermal cutoffs that require a button on the device to be pushed to reset the device.
Finally, there are thermostats which automatically reset once the temperature drops more than the hysteresis.
There's a variety of temperature ranges and accuracies available, but generally they're a bit crude. If you need something relatively precise you may need an electronic thermostat or a circuit plus a sensor. The details will depend heavily on the actual requirements. Mechanical thermostats are practical up to perhaps 250°C, and thermal fuses tend to max out a bit below that. |
H: Types of signals being sent through different types of cameras
I'm very much a beginner in electronics, so I apologize if the concepts I'm trying to understand are very basic. I've been looking into making a simple camera setup with my Arduino and an Adafruit product linked here. From looking at this article, it seems that for a simple setup to work, this would require connecting to a Raspberry Pi at least, due to its requirement for a camera module:
The Raspberry Pi camera board transfers data through an extremely fast camera serial interface (CSI-2) bus directly to the system-on-chip (SoC) processor. It does this through a 15-pin ribbon cable, also known as flexible flat cable (FFC), and connects to the surface mount ZIF 15 socket in the Raspberry Pi board. As you may noted, the camera module on this official Raspberry Pi camera board is identical to the camera modules (ccd imagers) found in many mobile phones.
I'm having trouble understanding this as a concept for a couple reasons.
Why does a camera, which presumably has a set number of pins that give video output, require a camera module to "process" its data before sending it into the RPi?
a. How is the raw input of a camera such as in the Adafruit product formatted?
b. How is the "processed" input of the camera board formatted?
(More dumb question) Assuming there are around 1920x1080 pixels per frame that the camera can take, how is this squeezed into the fewer-than-20 pins that come out of the cable?
As I understand it from my limited experience with circuit diagramming, the little markers are inputs and triangles are ground, but what are the rest signifying (do lines like CAM1_DN0 signify inputs)?
How could a circuit be designed to read in pixels of a camera and do high-level processing?
Again, apologies if these questions are really basic, but I couldn't seem to find any way of understanding this in a very detailed format online. Any help would be appreciated. Thanks for reading!
AI: Why does a camera, which presumably has a set amount of pins that give video output, require a camera module to "process" its data before sending it into the rpi?
The camera's output pins may use a different format, but in many cases, the camera module doesn't do any conversion. An example schematic shows the following onboard:
Power supply (voltage regulators and decoupling)
An oscillator to provide the necessary clock signal to the camera chip.
A cryptoprocessor made available on the I2C bus of the board, as well as I2C level shifting.
Common-mode chokes for noise suppression on the differential signal lines carrying video from the camera.
There is no intermediate conversion chip between the camera chip and the cable; the common-mode chokes are purely passive components that suppress certain types of noise on the line.
a. How is the raw input of a camera such as in the Adafruit product formatted?
There's no separate "raw input" in the example I found, the output of the camera IC directly conforms to the Camera Serial Interface. For a different module, you'd need to consult the datasheet of the camera IC to confirm the format.
b. How is the "processed" input of the camera board formatted?
It follows the Camera Serial Interface format.
Assuming there's around 1920x1080 pixels for a video that the camera can take, how is this compressed into the <20 number of pinouts that comes out of the cable?
The camera data is sent in a serial format over two differential lanes, with a clock signal alongside. The serial format means that bits of all the pixels in the frame follow one after the other. Per this reference, "Each lane can run at up to 1Gbit/s (DDR, so the max link frequency is 500MHz)". As far as I can tell based on the abstract of the spec, there are multiple options for the image encoding, including 24-bit RAW.
As I understand it from my limited experience with circuit diagramming, the little markers are inputs and triangles are ground, but what are the rest signifying (do lines like CAM1_DN0 signify inputs?
Those lines are differential signals, they can be considered as outputs of the camera module. The SDA and SCL form an I2C bus, which is used to control and configure the camera, while the CAM1_Dxx pins carry the bulk video data.
How could a circuit be designed to read in pixels of a camera and do high-level processing?
It would need a differential transceiver that can handle the correct IO standard, speed, and encoding. If you're designing a chip you might pay for an IP core to integrate in your chip. If you're using an FPGA you will probably use a transceiver provided onboard the FPGA. An Arduino does not have the appropriate hardware to decode the high-speed video data.
Once you receive and decode the pixel data, you'd place it into a frame buffer and then do interesting stuff to it (e.g. encode it, stream it elsewhere, etc).
Being a member of the MIPI organization (which governs the spec) or otherwise having access to the spec is very helpful. |
H: Would a simple UPS catch power dips?
My question is, can a simple small UPS provide a solution for power dips? Both from before and after the UPS?
I don't need the UPS to run the machine for hours, just for catching the power dips. Having juice for a safe shutdown in a power outage is just a bonus. Would a UPS be fast enough to handle power dips? I could imagine it takes time for the machine to detect a dip and switch to battery. I would imagine a capacitor would be faster.
Background information
I've got a 3D printer in a 70-year-old building. Wiring is, let's say, not optimal, and my printer disconnects quite regularly from the print server (OctoPrint). I've done all I know to keep the serial connection going: I have shielded USB cables with ferrite beads, I have isolated the 5 V on the USB to prevent back-powering, I am using a big enough power supply for the Raspberry Pi, powered USB hubs for the cameras, and so forth.
I notice the LED, connected to 230V, flickers when the stepper motors move.
The machine disconnects on many factors, turning a light on or even off, the coffee machine, etc. I also notice the power dips in other electronics like crackling in the radio speakers.
AI: Would a UPS be fast enough to handle power dips?
Well, the basic AC waveform alternates (dips, if you like) 50 or 60 times per second, so finding a switching mechanism capable of dealing with dips shorter than ten or twenty milliseconds is unnecessary, because that is what the AC waveform does all the time.
I could imagine it takes time for the machine to detect a dip and
switch to battery.
I think you broadly misunderstand how a UPS works; the battery is active nearly all the time and is recharged by the incoming AC (rectified and processed). So, a small dip in the charging process is exactly what it handles and is designed to handle.
The battery then feeds the output stage that reconstitutes the AC voltage for feeding on to whatever appliances are connected to the output. No need to worry about suddenly bringing into play a battery because it is used all the time: -
Image from here. Of course, some types of UPS may have a slightly different topology but, switching between one source and the battery source can be done in microseconds with modern electronic parts. However, if you are looking for product recommendations, then these are off-topic. |
H: Multi-phase buck converter inductor selection
This question is regarding the inductor design (uncoupled) in a multiphase buck converter.
In a multiphase converter, the inductance per phase requirement is higher than the corresponding single phase solution.
The ripple current per phase is lower for the same Vin and Vout, hence the corresponding inductance requirement is higher.
Now there are 'n' such inductors with each inductor value higher than the corresponding single phase case. Agree that the input and output capacitance requirement has come down due to ripple cancellation.
The inductor size depends on both inductance and current through inductor.
Is that the reason why we prefer multi-phase inductors for high current application- even though we need 'n' inductors having 'n' times higher inductance?
AI: If you are comparing (a) a single buck vs (b) a multi-phase buck where both (a) and (b) are delivering the same maximum loading output power at the most efficient duty cycle then sure, the inductors in (b) will be greater in value than the single inductor value in (a).
After all, each of the bucks in (b) receives the same input voltage, produces the same output voltage and, due to the design being synchronized, switches on a common shared cycle AND, therefore, must have the same duty cycle, because each of the bucks in (b) shares the full load current.
Is that the reason why we prefer multi-phase inductors for high
current application- even though we need 'n' inductors having 'n'
times higher inductance?
One of the major reasons for (b) is that the input ripple current (p-p) is reduced compared to one big brute converter (a). The output ripple current is also reduced by the same argument. These are the main reasons.
To make (a) have the same low ripple current as (b) requires (a) to be operated at a much higher switching frequency and then switching losses might become a big problem. |
H: Driving a Peltier element with PWM
I'm trying to drive a Peltier element using the VNH2SP30 motor driver, switching at the Arduino's PWM frequency of ~970 Hz (which is extremely low, but I'm planning on increasing the driving frequency to 15 kHz).
For now, with no experience on LC low pass filters, I'm struggling to filter the power delivered to the Peltier element.
I'm using a combination of 5 mH ferrite core coil with a bulky 4700 µF capacitor, which is showing a relatively smooth output waveform on the scope. Unfortunately, my inductor is getting really hot, so that I can't keep my finger on it.
PWM voltage: 12 V
Duty cycle: 70–90%
Inductor current rating: 2.5 A
Inductor DCR: 73 mΩ
Cap ESR: 0.5 Ω
Max ripple level: 10%
Peltier driven at: 2 A max
Peltier specs: https://www.google.com/url?sa=t&source=web&rct=j&url=https://peltiermodules.com/peltier.datasheet/TEC1-12706.pdf&ved=2ahUKEwjagdz5gZDyAhV1CmMBHYotDasQFjAAegQIBBAC&usg=AOvVaw0pXUw93x3BX9LqFe4th0vf
What is going on in the filter?
Why is the inductor heating up that much?
Is there any way to improve the filter?
AI: Time constant in coils L/R is relative constant for a given design sizing, where R is the coil DC resistance ( DCR in spec ).
But Inductor impedance reduces with lower f. So when you choose a lower L with lower loss ,DCR you have to raise fsw to obtain the same impedance ratio.
Then knowing the minimum R of the Peltier device and Xc cap impedance and XL choke impedance you choose the parts for low ripple and low Q which is a tradeoff.
Losses in Peltier devices increase with ripple voltage, so find good specs for optimal tradeoff on efficiency and ripple voltage would be a good start.
You have the wrong filter impedance causing massive circulating currents in the LC part.
25% of the power at 500Hz is dissipated in DCR loss.
If driven at the resonant frequency near 500 Hz, the circulating current increases roughly 30 times (28 dB) around the LC loop rather than into the load.
Since I guessed at your Peltier device with no specs, this is all I can say for now until you revise your question with all values of V, PWM frequency, duty cycle, L, DCR, C, ESR, and the Peltier specs.
Using 8 V at 2.4 A, the load is ~3 Ω, while the LC filter impedance at 500 Hz is ~0.3 Ω.
When you add DCR to L and ESR to C, all that energy is dissipated in these parts, so a precise filter spec needs all the parameters to choose the best buck converter for a 2 to 3 Ω load.
Solution (Change L to much lower DCR)
Based on new information for the Peltier at 26 °C and 50 °C on the hot side.
Lessons learned: always be conservative on the inductor current limit and choose one rated above the maximum. I chose C because I know low-ESR caps have time constants of roughly 10 µs; from R = T/C with 220 µF, you get ESR = 10 µs / 220 µF ≈ 45 mΩ, i.e. below 50 mΩ. Reducing L by 5× in the same form factor also reduces the winding loss by 5×.
                   Th = 26 °C    Th = 50 °C
Imax (A)              6.4           6.4
Vmax (V)             14.4          16.4
Resistance (Ω)        1.98          2.30
Power (W)            92.2         105.0
Root-cause design fault: the inductor's 73 mΩ DCR at 6.4 A exceeds its 2.4 A maximum rating, and the resulting Pd ≈ 3 W will get it far too hot. You need Pd < 0.5 W, so the DCR must be about 0.5/3 × 73 mΩ ≈ 12 mΩ, which means a lower L and thus a higher switching frequency needed (or more ripple). The cap must also have ESR < 5 mΩ, as capacitors conduct heat poorly.
This, however, has a very high Q at 1 kHz, so at startup the resonance will amplify losses roughly 10× within 50 ms, to 5 watts or so.
Thus C must also be reduced, but then ESR increases, so losses here are critical.
Consider the switching frequency required for acceptable ripple with a 220 µF, 50 mΩ low-ESR cap. I suggest 20 kHz minimum. Let's check the Q next, based on the step response from 70% to 90% duty at 20 kHz.
Conclusion
Looks good: Pd in L is < 0.5 W and ripple is < 3 mVpp.
But you decide on your specs if it looks good. |
H: How to make a desktop fan?
I recently cracked an old PC open and from it I now have a fan and a button. My question is: how can I connect the two and a battery? I thought just a simple circuit, but the fan has 3 wires. Help appreciated!
AI: Yes, you could run the fan off a 12V battery, provided the battery can supply the required current (300mA). 2 of the wires supply power, the 3rd one is a RPM sensor that can be left unconnected.
Refer to the pinout on the left side of the picture:
Image source
The button is a momentary push-button which only makes (or breaks) contact as long as you're pushing it. You need a switch to turn the fan on without having to keep pushing the button...
Note that most computer fans are made for silent operation and have rather low airflow. This might turn out quite disappointing as a table fan. |
H: Would using the output of a 555 timer to charge/discharge its capacitor work?
Background
I'm relatively new to electronics and playing around with a 555 timer.
I found that for astable operation, all sources I've seen recommend the following design (with different resistor and capacitor values to adjust timing):
I wanted to make the output waveform symmetrical, meaning that it should be high and low for the same amount of time.
With that design, however, the time to charge vs discharge the capacitor will differ. I know that I can adjust this by changing the ratio of the two resistors, but if I use a small resistor as the upper one, it will draw more current in the discharge phase.
At that point, another design came to my mind.
My idea
My idea is to simply connect the capacitor to the output of the 555 through a resistor and charge or discharge it that way:
As far as I see from that simulation, it works fine and generates a perfectly symmetrical square wave.
The question
Would this design work?
If so, why isn't it widespread and why is the former one "preferred" by tutorials, data sheets and other sources (I couldn't find my version anywhere)?
If it wouldn't, what issues would it have? Why would it be impossible/impractical/dangerous/problematic to use?
AI: It's only going to be symmetrical if the output can swing symmetrically from rail to rail. For the old NE555 that certainly isn't the case:
Images source
You can see that for a few mA of output current, as would be the case with the output charging/discharging the capacitor through 1 kΩ, the drop on the high-level signal is much greater (~1.4V) than on the low level (~0.03V). So it's not going to be symmetrical at all.
Worse still, since the 555 oscillates between 1/3rd and 2/3rd VCC, the 1.4V drop on the high level makes it come dangerously close to not being high enough to trigger the threshold level. Connect something else on the output that draws some current, lower temperature, a poorer part, etc. and it won't work at all (for 5V VCC).
Some modern CMOS versions of the 555 may be better, but these didn't exist back in the day when all these 555 circuits were "invented", so that's probably why nearly all the schematics one finds use the discharge pin. |
H: Should a pull resistor be on the main MOSFET or at the driver?
I want my MOSFET to be initially "off" by default.
I can’t decide whether to place the pull resistor on M1 or M2 (pull up at M1 and pull down at M2). My typical instinct would tell to place it anywhere (with a very small biased to placing it at M2) but i would to make an informed decision. So I need your help with arguments if should I place a pull resistor on M1 or at M2.
AI: If M2 is deactivated (and that state depends on the voltage source you have in your schematic) then M2's drain is high and Q2's emitter is high and this means that M1 is either "off" or as close to being off as it can be given the limitations imposed by the emitter follower push-pull stage.
If in fact you want it to default to having M1 inactive when there is no input connected (i.e. the input is floating), then you should use a pull-down on M2's gate to 0 volts. |
H: ESP32 GPIO not turning on MOSFET completely
I have designed a circuit that will switch on a 12V siren through a IRF540N MOSFET. The siren positive is connected to ST-3 (12V), while the negative is connected to ST-4 (SIREN-12V). A flyback diode is used for protection.
The IRF540N has a gate threshold voltage of 2V to 4V. I am using an ESP32 to turn the siren on and off via the MOSFET. The output current of IO 13 of the ESP32 is 33mA and the voltage is around 3.2V, so this should be enough to turn the MOSFET on. However, when activated, this 12V siren sounds very quiet.
When I apply the 3V3 voltage from the ESP32 to the gate of the MOSFET, the siren then sounds as expected (loud).
Why is there a difference in the loudness of the siren? The MOSFET should turn on when GPIO 13 is high, but it behaves as if the gate is only turning it on slightly.
AI: The gate threshold voltage is not that at which the device is fully turned on - it means that it is just starting to conduct (typically 250uA).
Usually there is a chart showing the drain current versus gate voltage - in this device you can see that at 3.3V a typical device is only just starting to conduct. There will also be significant differences between individual devices, one might start at 2V and another at 4V. It will also change with temperature.
In order to pass a high current you usually need 10V for many MOSFETs or 5V for those referred to as Logic Level MOSFETs.
You don't say how much current the siren requires. If it requires more than a few hundred milliamps you will need a level shifter to create a logic signal of at least 5V and a suitable MOSFET.
For currents up to a few hundred milliamps you may be able to find a low gate threshold MOSFET that will function with 3.3V drive, or use a bipolar transistor to drive the siren.
IRF540 Datasheet |
H: Programming the Feather 328p/ ATMega 328p
I have an Adafruit Feather 328P, which claims to be 'Arduino Compatible'. I believe I have set up my Arduino IDE correctly (according to the Adafruit link), but whenever I try to upload to the board I get the following:
avrdude: stk500_recv(): programmer is not responding
avrdude: stk500_recv(): programmer is not responding
avrdude: stk500_initialize(): failed
avrdude: initialization failed, rc=-1
Double check connections and try again, or use -F to override
this check.
Problem uploading to board. See https://support.arduino.cc/hc/en-us/sections/360003198300 for suggestions.
The link provided does not seem to cover this exact error.
I have tried using several cables and multiple computers but still the same result. I am also running the ide as sudo as I simply got a 'permission denied' before. Any ideas?
This is a brand new board and a fresh install of the IDE. I have not connected anything to the board. When powered, the board simply sits blinking three times every couple of seconds.
Attempting on Ubuntu Linux 20.04, but the same errors were seen on a mac computer.
Update: according to a comment on this youtube video, I was able to get it to connect and program by spamming the reset button a little during the connection process. Any ideas here?
AI: I was able to get it to connect and program by spamming the reset button a little during the connection process. Any ideas here?
As I expect you know (but a few readers might not) A typical bootloader program, pre-installed on most Arduino boards and compatibles, runs after a controller reset event and spends a while looking for serial activity before giving up and transferring control to any previously downloaded "sketch".
Your description suggests that there is some break in the process by which the Arduino IDE signals a reset that is communicated through USB to the USB-to-serial IC on the board (CP2104) and thence to the reset pin on the ATmega328.
In the past there has been some confusion over which serial control signals (e.g. DTR or RTS) should be used to signal a reset but I am pretty sure current versions of the IDE assert both.
The Feather schematic shows it uses DTR rather than any other signal to effect a reset.
So that suggests maybe one of
Your IDE is really really old.
Your Unix/Linux configuration is blocking the relevant (virtual) serial signal line. (some stty -cdtrdsr weirdness?)
The Unix/Linux driver for the Feather's CP2104 is blocking this signal (maybe you should replace the driver?)
Your Feather's CP2104 isn't providing this signal due to some defect or a broken pin or broken track. Maybe some partial short is holding the line high more strongly than the CP2104 can pull down, but insufficiently to defeat the reset switch.
something else?
While clicking the IDE's "Upload" button, it might be interesting to monitor the CP2104's DTR pin with, for example, a logic probe of some sort. |
H: Unexpected voltage shift in zero-crossing detector circuit
I have designed the following circuit of a zero-crossing detector, which is supposed to amplify a small sine-wave signal and convert it to a (digital) square wave. It uses one single-supply op amp for amplification and one comparator for converting to logic levels.
Initially, I started simulating this circuit with 1mV p-p input signal with no phase shift. In this case it works fine, output of the comparator is a nice square wave with 50% duty cycle.
Then I introduced a phase shift of 90 deg, via the Phi parameter of the V2 supply. Immediately, I noticed some unexpected behavior. The duty cycle of the square wave increased to about 51.8% - that means almost a 4% difference between the lengths of the positive and negative pulses. After some investigation, I found that the amplified sine wave shifted a bit towards zero and is not perfectly centered around Vbias anymore. It shifted about 0.3 mV (6% of amplitude).
Any idea, why introducing a phase shift at the input caused voltage shift of the amplified sine wave?
AI: You have a time constant between C2 and R6 of 100ms. You can expect any transient effects to mostly be gone after 5 or 10 time constants.
Try gathering your data after 1 second for whatever time interval you are using (presumably short since you have a 10kHz signal).
There's an interaction between the waveform you get at the beginning and the DC operating point which is found by the simulator before simulation starts. |
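The settling estimate can be sketched in a few lines. The R and C values below are assumptions chosen only to reproduce the 100 ms time constant stated in the answer; the point is that the start-up offset decays as exp(-t/tau), so after 5-10 time constants it is negligible.

```python
import math

R = 10e3     # assumed R6 = 10 kOhm (hypothetical, gives tau = 100 ms)
C = 10e-6    # assumed C2 = 10 uF (hypothetical)
tau = R * C  # 0.1 s

def residual(t):
    """Fraction of the initial transient remaining after t seconds."""
    return math.exp(-t / tau)

for n in (1, 5, 10):
    print(f"after {n} time constants: {residual(n * tau) * 100:.3f}% remains")
```

After 5 tau less than 1% of the offset remains, which is why measuring the duty cycle after the first second gives the true steady-state value.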
H: Why can't the circuit breaker interrupt this circuit?
I have a 3-phase input in my home. There is 127 V between any of the phases and neutral, and 220 V between any pair of phases. The electrical appliances have 127 V, and the wirings are such that the air conditioners (that uses higher currents) of 3 rooms are connected to different phases.
I replaced one of them, and the new one needs 220V. I used the phase of the old one and another phase used for the lights and sockets of the same room (see picture).
It has been working fine for some years, except that if I turn off the circuit breaker of one of the phases of the room (the one used for lights), I get all kinds of instabilities like blinking lights and beeping sounds from appliances. I need to turn off both S and T to properly turn off the power to the room.
If only phase T is off, the sockets stay live. A voltmeter measures 127 V between live and neutral at the sockets, but an incandescent test lamp shines weakly, as if the voltage were smaller than that.
I suppose that somehow the phase S partially feeds T through the air conditioner wirings, but on the other hand, if the 2 phases were connected inside it, there would have a short circuit. There is a remote control for the air conditioner, and I don't turn it on during the experiment.
AI: simulate this circuit – Schematic created using CircuitLab
Figure 1. (a) The equivalent circuit. What you've got. (b) What you should have.
When SW1 (your circuit breaker) is open you have created a potential divider between the S phase and neutral, N.
The voltage on the "dead" T phase will depend on the ratio of the air-con impedance to the lights-and-sockets impedance (resistance).
The T phase is no longer dead and the only protection is the impedance of the air-con unit.
There is danger of electric shock if anyone is working on T after isolating that phase by SW1.
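A rough numeric sketch of that potential divider; both impedances below are hypothetical illustrations, since in reality they depend on whatever happens to be switched on:

```python
# Current path with SW1 open: S phase -> air-con -> T bus -> room loads -> neutral
V_S = 127.0       # phase-to-neutral voltage, from the question
Z_aircon = 50.0   # assumed air-con impedance, ohms (hypothetical)
Z_room = 200.0    # assumed lights-and-sockets impedance, ohms (hypothetical)

V_T = V_S * Z_room / (Z_aircon + Z_room)
print(f"The 'dead' T bus floats at about {V_T:.0f} V instead of 0 V")
```

A high-impedance voltmeter barely loads this divider (so it reads close to 127 V), while an incandescent lamp's much lower resistance pulls the node down, which matches both observations in the question.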
I suppose that somehow the phase S partially feeds T through the air conditioner wirings, ...
Correct.
... but on the other hand, if the 2 phases were connected inside it, there would have a short circuit.
No. A short circuit usually means a zero or very low resistance. Yours has the impedance / resistance of whatever is switched on in the air-con.
There is a remote control for the air conditioner, and I don't turn it on during the experiment.
Not relevant.
Your setup is not good. The air-con should be on a separate circuit to the others and should have a two-phase circuit breaker so that it can be isolated properly. Don't mess. Don't risk safety. |
H: I need help identifying this 10K 4Pin potentiometer
This blue Potentiometer is from a Numark NV DJ Controller.
It has 10K, 4 pins, and I need to replace multiple of them, since they're getting less accurate. You can get them for €12 apiece on a single website, which is overpriced. I've found similar pots on AliExpress, but they all lack the extended shaft. Any help is appreciated. I'm thankful for any link where I can get them cheap!
~djRimzi
EDIT: Picture of the Pins:
AI: Looks like an obsolete Bourns Pot. 10k. the shafts are plastic now. But you need to define the bottom pins , stereo, linear or log etc 1/4" 26 or 32 tpi
Most are metric now
You can find an equiv on D-K site or Mouser https://www.mouser.com/ProductDetail/Bourns/PTV112-4220A-B103?qs=h2IHEVivlqBiDxqtaFbOxg%3D%3D Compare dimensions on datasheet with yours |
H: What is meaning of the black circled function on this multimeter ? How is it used?
What is the marked (circled in black in multimeter picture) function? It looks like a square wave.
AI: According to the manual I found online it outputs a square or sine wave depending on the model. But this option isn't available on the DT830D.
Manual can be found at https://www.manualslib.com/manual/848426/Histest-Dt830-Series.html#manual |
H: How am I getting 1 Gbps through a CAT3 cable?
My work from home office, in a difficult-to-plumb part of my house, has an existing CAT3 running to it from the main part of the house. I wanted to use a wired connection because the Wifi flakes out occasionally, but it would cost a lot to properly run a CAT6 cable. While we're considering that, I opted to just use the CAT3 cable because I'd rather have a slow but solid connection instead of a fast but sketchy Wifi connection.
Despite CAT3's low transfer rate specification, I find I am able to get the same speeds as using the CAT6 cable directly on the router.
I use this connection all day at work and it is always this fast (I do a lot of data transfer for my job), so I know the speed test is not just a fluke. Other posts specifically say forget Gigabit networking on CAT 3, which is in line with other sources I've found.
Can someone help me understand how a CAT3 cable is able to consistently transfer data this fast? The answer will help me understand whether I should consider the upgrade to CAT6.
AI: 100 feet is not extremely long; so, as Dave says, you've got lucky, and your Gigabit Ethernet hardware is just exceedingly robust.
You have to realize that "Cat3" doesn't put an upper limit on the bandwidth that's usable – it's just guaranteeing that signalling bandwidths of up to 16 MHz work. Now, it seems pretty unlikely that classical Cat3 "telephony" wiring does Gigabit ethernet¹; that starts with Gigabit ethernet requiring four pairs, and Cat3 rarely being installed with all four pairs connected.
So, maybe you also got lucky with respect to your installation, and it's not actually Cat3, but something "labeled down" to Cat3. Was it installed after ca 1998? I think that was an era when Cat5 cables became ubiquitous, but I guess some people still demanded Cat3 (because that was what they were using before), so some manufacturers simply printed Cat3 on Cat5-capable cables (and sold them at a markup).
¹ hard to quantify, but we should be seeing serious attenuation here for the higher frequencies, requiring equalization from both ends of the link that far exceeds what the standard says devices need to be capable of. |
H: I2C (TWI): Hold SDA-line low by Slave when line has resistance between Master and Slave
Greetings to everyone!
The schema in brief:
Located at device 1 Located at device 2
Master SDA --------------- <physical connector> --------------- SDA Slave
According to the I2C specification, the Slave device must provide an ACK signal by holding the SDA line at logical zero on success, or do nothing on error, leaving the I2C line high (NACK, logical one). This works fine when the resistance between Master and Slave is very low, for example 5 milliohms. But when the resistance rises to 100 or even 1000 milliohms, the Slave can't hold the line at a proper logical zero. Such resistance can appear when a physical connector exists between Master and Slave.
Schematically, the SDA-signal at line with 100-1000 milli-Ohm resistance behaves like this:
Master sends some control sequence (7 bits + 1 R/W bit) to the I2C bus and the Slave responses with 1 bit ACK (logical zero).
Response with logical zero means that Slave holds line low, when Master holds line high.
Dev address W ACK
1 0 1 0 1 1 0 0 ?
5V ----- ----- ----- -----
| | | | | | ----- ~2.5V
0V | |_____| |_____| |_____ _____
Here the ACK signal is at a level between 2 and 3 volts, so it is obviously not a logical zero.
How to "help" Slave keep SDA-line on zero (below 0.3V)? Are there are any common practices for solving this?
Thanks in advance.
==========================================================
UPDATED POST WITH DETAILS:
Double checked the I2C manual: https://www.nxp.com/docs/en/user-guide/UM10204.pdf
Page 10, section 3.1.6
The Acknowledge signal is defined as follows: the transmitter releases the SDA line
during the acknowledge clock pulse so the receiver can pull the SDA line LOW and it
remains stable LOW during the HIGH period of this clock pulse
Thus, the Master must release the line by switching to high-impedance mode. But according to the details posted below, the Master stays at a high level, preventing the Slave from holding the SDA line low.
Tested at Proteus 8.10
Master: MPU ATmega328P
Slave: EEPROM 24C01C
======================================
With resistor on the BUS:
======================================
======================================
Without resistor on the BUS:
======================================
======================================
I2C debugger logs compare:
======================================
======================================
C-code:
======================================
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>
int main(void)
{
// Set bit rate to 400 KHz
TWBR = (8000000LU / 400000LU - 16) / 2;
// Send START
TWCR = (1 << TWINT) | (1 << TWSTA) | (1 << TWEN);
// Wait till operation is complete: Interrupt Flag is set
while ( !(TWCR & (1 << TWINT)) );
// Exit if status not: START has been transmitted
if (TWSR != 0x08) {
return 1;
}
// Load data register with: 0b_1010_000_0 (EEPROM addres + write)
TWDR = 0xA0;
// Transmit data
TWCR = (1 << TWINT) | (1 << TWEN);
// Wait till operation is complete: Interrupt Flag is set
while ( !(TWCR & (1 << TWINT)) );
// Exit if status not: SLA+W has been transmitted and ACK received
if (TWSR != 0x18) {
return 1;
}
// ... rest of code skipped
// Send STOP
TWCR = (1 << TWINT) | (1 << TWSTO) | (1 << TWEN);
return 0;
}
Compile log:
avr-gcc.exe -Wall -gdwarf-2 -fsigned-char -MD -MP -DF_CPU=1000000 -O1 -mmcu=atmega328p -o "main.o" -c "../main.c"
avr-gcc.exe -mmcu=atmega328p -o "./Debug.elf" "main.o"
avr-objcopy -O ihex -R .eeprom "./Debug.elf" "./Debug.hex"
avr-objcopy -j .eeprom --set-section-flags=.eeprom="alloc,load" --change-section-lma .eeprom=0 --no-change-warnings -O ihex "./Debug.elf" "./Debug.eep" || exit 0
Compiled successfully.
AI: That will not be a problem with connector of 1 ohms.
You will be in specs even with several tens of ohms of resistance in series, but obviously the actual value depends on supply voltage and pull-up resistance value.
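A quick sketch of that worst-case calculation: the logic-low level the master sees when the slave's open-drain driver pulls SDA down through a series (connector) resistance. The pull-up value, VDD, and driver on-resistance below are assumed values for illustration:

```python
VDD = 5.0
R_pullup = 4700.0  # assumed bus pull-up, ohms
R_on = 20.0        # assumed slave driver on-resistance, ohms (hypothetical)

def v_low(r_series):
    """Low-side voltage at the master: divider of (R_on + R_series) vs pull-up."""
    return VDD * (R_on + r_series) / (R_on + r_series + R_pullup)

for r in (0.005, 0.1, 1.0, 47.0):
    print(f"R_series = {r:6.3f} ohm -> V_low = {v_low(r) * 1000:6.1f} mV")
```

Even tens of ohms in series keeps the low level far below the roughly 0.3·VDD input threshold, which is why a 0.1-1 ohm connector cannot explain a 2-3 V "ACK".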
If you are seeing that a chip can't pull low during ACK, the problem is not the series resistance, it is something else that is supposed to not happen, most likely MCU IO pins are not used in open drain mode or something similar.
Edit:
The problem is that Proteus is wrong and fails to simulate I2C peripheral usage. In real life the AVR would make the pins go into open drain mode as soon as TWEN is set. |
H: How to describe the basic cabling used in industrial electrical panels?
How would the basic cabling in electrical panels be technically described (in order to make a purchase, for example?) Does it have a certain number of cores?
I'm talking about the kind of cabling you see going into the top ports of this terminal block:
For example, would the cable linked here be suitable to replace it?
The circuit overall is rated at 20 amps of three-phase, 230V power.
AI: The cable you have linked to is a 4-core double-insulated cable. That's suitable for outside the panel.
Figure 1. 0.75 mm2 hookup and equipment wire. Image source: RS.
Your panel is showing single-insulated panel hookup wire. This is multi-strand for flexibility. Strand packing is usually done in 1 (single-core), 7, 19 strands, etc. which gives the best packing without gaps.
Stranded cables work with all common terminals including screw clamp and cage clamp (below), and can be pin-crimped.
Figure 2. Wago cage-clamp terminals.
Some of the terminals require a 'just right' screwdriver to release the cage properly. (Some screw terminals require the right tool too.)
Figure 3. Cage clamp and wire showing excellent contact area.
For 20 A you should be considering 2.5 mm2 or 4 mm2 hookup wire.
(Photos used previously in my answer to another question.) |
H: timer mode 1 and edsim giving different result
In the 8051, with Timer 0 in mode 1, when the maximum count is reached the timer should show 0000H and should stay at 0000H until we stop it and start it again. But with this program
MOV TMOD,#01H
MOV TH0,#0FFH
MOV TH1,#0DCH
SETB TR0
The timer counts up to FFFFH and then starts again from 0000. Isn't that unexpected?
Some internet references say that when it finishes counting to FFFFH it "stops". Does it really stop, or is it just running in the background and only pretending to have stopped by staying at 0000?
After the counting is finished and the timer shows 0000, if at that moment we really stop the counter with CLR TR0, does that imply the counter is now truly reset? And if SETB TR0 is used just after that, will it count from 0000?
AI: The 8051 timers definitely keep counting unless you explicitly stop them.1
So:
No, it is not unexpected.
No, it does not actually stop.
No, it does not guarantee that it is reset, because the counter may have advanced by one or more counts before the firmware actually got around to stopping it.
1 In fact, there's a technique called "precision reload" that takes advantage of this to eliminate accumulated error when using the timer repeatedly to create interrupts at regular intervals, despite some jitter in the interrupt latency. |
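The footnoted precision-reload trick can be illustrated with a small behavioral model (pure Python, just simulating the 16-bit counter arithmetic). Because the timer keeps counting after overflow, the ISR can add the reload offset to the current count instead of overwriting it, which cancels interrupt-latency jitter:

```python
RELOAD = 65536 - 1000  # desired period: 1000 timer counts

def run(latencies, precise):
    """Return the overflow-to-overflow periods for a list of ISR latencies."""
    periods = []
    overflow = 1000  # first overflow, 1000 counts after the initial load
    for lat in latencies:
        isr_time = overflow + lat               # ISR entered 'lat' counts late
        timer_now = lat                         # timer has free-run past zero
        new_count = (timer_now + RELOAD) & 0xFFFF if precise else RELOAD
        next_overflow = isr_time + (65536 - new_count)
        periods.append(next_overflow - overflow)
        overflow = next_overflow
    return periods

jitter = [3, 17, 5, 9, 1, 12]
print("precision reload:", run(jitter, True))   # every period exactly 1000
print("naive reload:    ", run(jitter, False))  # each period stretched by 'lat'
```

With the precise reload every period comes out at exactly 1000 counts regardless of latency; with a naive fixed reload each period is stretched by that interrupt's latency, and the error accumulates.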
H: Complex op-amp analysis for audio filter circuit help
This circuit is from the datasheet for a demo board for the CS4334 i2s audio DAC.
I'd like some help analyzing the circuit. Not shown is the audio source, which is immediately before ALA and ARA. There is a blocking cap from the output of the aforementioned CS4334.
From what I can gather, this is a (near) unity gain inverting amp with high-pass and low-pass filtering added. The virtual ground is set to roughly half VCC (VA+5, +5 volts) because the amp is using a single supply polarity. That virtual ground will add DC bias to the input (which has a blocking cap off-screen) allowing "positive" and "negative" excursions of the audio waveform. The output has a DC blocking cap and a minimum load (100 kΩ). From what I can tell, the MC33202 is only special for the purpose of this circuit in that its output is rail-to-rail.
Am I missing anything in that description?
Is there a way to analyze the high- and low-pass bandwidths here? Also, what is the purpose of R2 being terminated on the input side between R24 and R16? Put another way, what is the purpose of the extra 1.2 kΩ past the point of feedback on the input side? Is the entire network of R3, R24, R16, C34 and C36 to be taken as some sort of multi-pole filtering network?
AI: Is there a way to analyze the high- and low-pass bandwidths here?
Yes, use SPICE with an AC analysis, which is much easier/faster than trying to calculate the total circuit impedance by hand. You can think of R2/R3 as a voltage divider, and it makes C3 more effective at getting a higher rolloff slope in the low-pass filter portion of the circuit (as shown below: green is the AC response of the 'original' circuit and red is a modified circuit, which shows a more 'rounded off' rolloff when C3 is connected to the other terminal of R16).
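Hand-checking the individual poles is still useful alongside SPICE. A minimal first-order sketch, f_c = 1/(2πRC); the component values below are placeholders for illustration only and should be read off the actual demo-board schematic:

```python
import math

def fc(R, C):
    """First-order RC corner frequency in Hz."""
    return 1.0 / (2 * math.pi * R * C)

# Hypothetical values -- substitute the real schematic values.
print(f"high-pass corner (10 uF into 100 kOhm):  {fc(100e3, 10e-6):6.3f} Hz")
print(f"low-pass corner  (1.2 kOhm with 2.7 nF): {fc(1.2e3, 2.7e-9) / 1e3:6.1f} kHz")
```

Single-pole estimates like these won't capture the interaction of the full R3/R24/R16/C34/C36 network, which is exactly why the simulator's AC sweep is the faster route.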
H: Are ARM instructions directly sent to the decoder or first stored in a register?
As far as I understand, fetched "data" (not an instruction) is first loaded into one of the CPU registers.
But when a single "instruction" (not data) is fetched from memory, is this instruction sent to one of the CPU registers first, or is it immediately sent to the control unit to be decoded?
(I'm mostly talking about cortex M3 and M4 if it matters)
AI: The instruction is not sent to a register in the assembly-programming sense, i.e. one that has a name and is programmer-accessible (e.g. r2, lr, sp, pc, etc).
However, it is very likely to be stored in a register in the digital logic design sense of the term, i.e. an array of flip-flops or other storage elements with a common clock and enable. This is called the instruction register.
In the Cortex-M3, the IR stores the fetched instruction before it is decoded by the next stage. A more advanced architecture with out-of-order execution may have multiple IRs feeding multiple decoders. For example, Intel Sandy Bridge has an entire instruction queue rather than a single IR, which feeds four decoders that yield uops to the post-decode/allocation queue, architectural register renamer, and scheduler. Instructions are being stored in registers of all different types, in all different formats, in these stages. |
H: When is it important to use keyword "null" in synthesizable VHDL code?
I have usually seen the VHDL keyword null being used in the "when others" part of VHDL case blocks. However, it also sometimes appears in the else part of if statements. Unlike Python, which requires us to use the keyword pass, I don't think VHDL has any such requirement.
On the other hand, I heard someone tell me once that the reason he uses the keyword null in the else part of VHDL if blocks is to prevent warnings from synthesis tools. I am personally not sure what he was talking about.
Is it important to use keyword null in VHDL for synthesis although it seems redundant and does not do anything? When do we actually need it in modern VHDL?
AI: I have been programming in VHDL for 12 years, and I have never found the use of null to be a requirement. There may be some older tools out there that require it, but I have yet to come across one.
null can be used to make your code more readable by explicitly stating that a certain condition doesn't do anything. |
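Since readability is the point, here is a minimal sketch of the usual place null appears (the signal and state names are invented for illustration):

```vhdl
-- 'null' makes the "do nothing" intent explicit for the remaining choices.
case state is
  when IDLE    => busy <= '0';
  when RUNNING => busy <= '1';
  when others  => null;  -- unused states: deliberately no action
end case;
```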
H: How to turn on LED with transistor without exposing a 5V conductor
I'm trying to turn on an LED using an NPN transistor as a switch. The catch is, the LED is at the end of a BNC cable, and in my current design, both the outer and inner BNC conductors are normally at 5V when the LED is off. This is inconvenient, because the outer BNC conductor is exposed at the connector and can get shorted to other stuff lying around, so I've had to cover it with electrical tape.
Here's a simplified diagram of my setup:
I thought about moving the LED part of the circuit to the emitter of the transistor instead of the collector, so the BNC conductors would both be grounded normally, but I believe the logic level required to activate the transistor would increase, and I'd need to step up the digital out voltage (please correct me if I'm wrong!)
I can think of three options:
Figure out a better circuit that keeps the LED- pin grounded at all times
Continue wrapping the BNC connector in electrical tape
Get a different connector that isn't BNC where there are no exposed conductors.
I'd prefer option #1, but I'm not sure how to do it - any ideas?
I know this isn't the usual use case for BNC connectors, but it was the best thing I had lying around for the prototype. It seems like there should be a simple way to do this so the negative LED pin is always grounded.
Edit: My digital out is 0-3.3V, and Vcc is around 5.2V
AI: Use a high side switch, that way the cable will only be energized when the LED is lit, use the outer connector as ground to the LED and the inner conductor for the postive voltage to the LED. P channel mosfets would be best as they are low loss, but you could use either a FET or BJT (PNP) transistor.
simulate this circuit – Schematic created using CircuitLab |
H: Where is the ground wire connection of this DF electric TR28 transformer?
I am trying to repurpose an old transformer I had lying around to use it with a stand mixer from the US (120V) here in Europe (220V). However, when looking at the label I noticed that it has a ground connection for the output side but none for the input side. Input side is the top of the picture, output the bottom.
Am I missing something? Is it okay to simply connect those 2 wires to the mains plug, hence the wall?
Furthermore, is this transformer rated for a big load such as a stand mixer? The model I will be using is a 350 watt model.
AI: Transformers don't naturally use or need a ground wire unless the insulation between the unsafe primary voltage and the supposedly safer secondary is sub-standard or not up to today's standards. On the basis that yours has a ground wire I expect it is quite an old model or has the earth for some form of EMC barrier.
Furthermore, is this transformer rated for a big load such as a stand
mixer? The model I will be using is a 350 watt model.
Your transformer only appears to be rated for a VA of 120 and this means it shouldn't be used for a load of more than 120 watts.
Take note about which winding is primary and which is secondary: -
In other words, it is the upper port in your picture that is the primary (just in case you were not aware). Picture from here. And note that if you apply 220 volts to the "0" and "230" terminals, there's a good chance that there will be 460 volts AC produced on the "460" terminal. Just saying in case you didn't realize. |
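The 120 VA limit versus the 350 W mixer can be sanity-checked in a couple of lines (assuming a roughly resistive load and the nominal 120 V output):

```python
VA_rating = 120.0     # transformer rating from the label, VA
load_power = 350.0    # stand mixer, watts (from the question)
load_voltage = 120.0  # US appliance voltage

I_load = load_power / load_voltage   # current the mixer draws, ~2.9 A
I_rated = VA_rating / load_voltage   # current the transformer can supply, ~1.0 A
print(f"mixer draws ~{I_load:.1f} A, transformer rated for ~{I_rated:.1f} A")
print("verdict:", "overloaded" if load_power > VA_rating else "ok")
```

The mixer would draw nearly three times the transformer's rated current, so a larger transformer (with margin above 350 VA, especially for a motor load) is needed.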
H: Serial communication between different devices
How is serial communication implemented across different devices?
What if device X is transmitting data to device Y, but X runs at 100 MHz?
Is there some kind of communication to set the clock of device Y before it starts to clock the data in?
How would you then transform the serial received data into parallel data that you need for another device? Shift registers?
For example, if I'm sending 100 bits, but every 5 bits I need to shift them out in parallel to another device, how would I do that?
AI: How would you then transform the serial received data into parallel
data that you need for another device? Shift registers?
It depends on whether the data has a clock or not. With UARTs (universal asynchronous receiver-transmitter) the receiver has to know what the timing of the data is (so it can be synced to the incoming data). With clocked data you can use a shift register; most real devices are more complicated than that because they have multiple addressable devices and/or registers to send/receive data.
For example if if im sending 100 bits, but every 5 bits i need to
shift them out in parallel to another device, how would i do that?
Use a state machine or counter to count the bits; after 5 bits have passed, output the 5 bits from the shift register (or pulse an enable line to mark them valid).
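A behavioral sketch of that counter-plus-shift-register idea; the widths and the bit ordering (MSB-first) are assumptions for illustration:

```python
def deserialize(bits, width=5):
    """Serial-in/parallel-out: collect 'width' bits, then strobe the word out."""
    words, shift, count = [], 0, 0
    for b in bits:
        shift = ((shift << 1) | b) & ((1 << width) - 1)  # MSB-first shift-in
        count += 1
        if count == width:          # counter/state machine: word complete
            words.append(shift)     # this is the "pulse the enable line" moment
            shift, count = 0, 0
    return words

stream = [1, 0, 1, 1, 0,  0, 0, 1, 1, 1]  # 10 serial bits -> two 5-bit words
print([f"{w:05b}" for w in deserialize(stream)])
```

In hardware this is exactly a shift register plus a modulo-5 counter whose terminal count latches the parallel word (or asserts a data-valid strobe to the downstream device).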
H: Do you typically need to program-in a USB communication protocol?
I have USB communication set up between a microcontroller and a VB.Net script. I am somewhat familiar with how the USB protocol works.
My question is: Do I need to program in start frames, tokens, and handshaking, or is this all done by the device hardware? I see on Microsoft's VB wiki that the serialport.Readline method reads a buffer up to the first newline character, and to me that looks like it does more than the USB protocol allows for. You shouldn't be able to continuously stream data to the port; it should be sent discretely in frames.
Thanks in advance.
AI: Like any protocol stack, USB incorporates components representing various layers. High-level drivers and libraries provide higher-layer functionality on top of lower layers, effectively abstracting those lower layers away and out of sight.
In the case of USB, there are a number of so-called device classes which provide standardized interfaces on top of USB, with a coordinated protocol on the host side (implemented by a driver or a stack of drivers) and on the peripheral side (implemented by firmware or drivers). Examples of these device classes include HID devices like keyboards/mice, mass storage devices such as flash drives, image storage using PTP/MTP, and so on. These abstract away the USB protocol - for example, the FAT filesystem driver used to mount a flash drive sees a block storage device and doesn't need to worry about USB framing, packetization, etc, thanks to the USB drivers lower in the stack.
In your case, the relevant class is USB CDC (Communications Device Class). The CDC drivers present a serial port to user-land programs on your computer, which can read and write a stream of data. Meanwhile, the CDC driver will do the work of converting a stream of serial bytes and control signals into USB packets, and it will receive help from the underlying USB drivers down the stack for even lower level tasks such as establishing pipes to endpoints.
Because the specification is standardized, you benefit from knowing that your application on top of USB CDC should work regardless of what vendor your CDC-compatible device is from (as long as they follow the spec), because your system's drivers are likely implemented to the spec1.
Meanwhile, your application can use the serial port without needing to worry about what physical hardware underpins it. It could be serial over USB, it could be a nine-pin D-sub connector serial port implemented with a UART on your motherboard's IO controller, it could be a virtual serial port on Linux, or it could perhaps be a serial link on top of TCP/IP.
1 or more loosely, to give leeway for devices that may not follow the spec exactly. |
H: When these prototype devices become commercialised/productionised, how is the signal processing aspect of the oscilloscope built in to the device?
I read a lot of research papers that use large and expensive equipment, such as oscilloscopes, to do their signal processing. Oscilloscopes offer high sampling rate and bandwidth, which makes them good for signal processing applications. Also, nearly all of these research papers involve work in a lab (bench-top work), so the convenience benefit of using an oscilloscope, rather than designing, and having manufactured, a custom PCB, is clear. However, when it comes to commercialised/production devices, we often require serious signal processing to be done (that is, signal processing requiring significant sampling rate and bandwidth), and yet, these devices obviously do not have built-in lab oscilloscopes (and such a thing wouldn't be economical, anyway). So, when these prototype devices become commercialised/productionised, how is this signal processing aspect of the oscilloscope built in to the device? The Digilent Analog Discovery seems like a more modular device, but even it seems too expensive (and too large) to include in a lot of commercial devices that require serious signal processing. All of my study has been on the lab/prototype/research aspect, so I'm trying to understand how things work on the production/commercialisation side of things.
AI: In general, an oscilloscope isn't doing much in the way of signal processing; instead it's probably serving as a signal digitizer, with the digitized signal feeding into a signal processing application running on a computer. What you use to implement this in "production" will depend significantly on that "significant sampling rate and bandwidth" and the signal processing algorithms you need to run. If you need to sample at multi-GHz rates with significant bandwidth, then you'll also have significant computational requirements driven by Gbit/s data rates. Generally, systems like that would require a large, fast FPGA to ingest the data and run the signal processing application needed.
Systems like 5G NR (the mmWave extension of 5G cellular) would start with something like that, and then, as the requirements firm up, the design would be implemented in a handful of ASICs (Application Specific Integrated Circuits) to achieve minimum size/cost/power.
If you need to operate below 6 GHz, then software-defined radios such as the Ettus N310 might work well, in conjunction with GNU Radio for the signal processing bits.
In other words, you're getting the canonical engineering answer: "It depends on the details."
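To make the "digitizer feeding software" point concrete, here is a stdlib-only sketch: once an ADC hands you samples (the role the bench scope plays), production signal processing is just code. This shows a Goertzel single-bin detector, a classic lightweight routine for embedded tone detection, finding a 1 kHz tone in an 8 kHz sample stream. The sample rate, tone frequency, and block length are invented for illustration:

```python
import math

def goertzel(samples, fs, f_target):
    """Squared magnitude of the DFT bin nearest f_target (Goertzel algorithm)."""
    k = round(len(samples) * f_target / fs)
    w = 2 * math.pi * k / len(samples)
    coeff = 2 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

fs, n = 8000, 200
tone = [math.sin(2 * math.pi * 1000 * i / fs) for i in range(n)]
print(goertzel(tone, fs, 1000) > goertzel(tone, fs, 2000))
```

On a microcontroller this same loop runs on samples from the on-chip ADC; at GHz rates the equivalent arithmetic moves into FPGA fabric or an ASIC, but the algorithmic content is unchanged.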
H: Questions about PUT relaxation oscillator's characteristics
PUT Relaxation Oscillator:
I have three questions about this circuit:
Why doesn't the PUT go back to forward bias as soon as the capacitor starts discharging below the trigger voltage? Is it because it discharges before the PN junction goes back to backward bias?
In the CALCULATIONS below, why does the anode current formula present R1+R2 in the divider? How come the resistors at the right side affect the current of the anode?
I have tried to build this circuit in a simulator but couldn't find PUTs. Is the only way to add a PUT by building a PUT equivalent circuit?
AI: Your questions
See this detailed discussion for fuller answers. But in short:
Do note that the trigger voltage set by \$R_1\$ and \$R_2\$ is also a Thevenin voltage source with a Thevenin resistance. When the capacitance is charging, no real current is being drawn from it so the Thevenin voltage appears at the gate, unscathed. But, once the triggering threshold is reached (slightly above the Thevenin voltage of the resistor divider) the PUJT momentarily connects the anode to the cathode (with some voltage drop, of course) and starts the capacitor discharging. So long as the resistor charging the capacitor is large enough that it cannot meet or exceed the minimum valley current (about \$70\:\mu\text{A}\$ for the device shown given the Thevenin of \$R_1\$ and \$R_2\$), it will oscillate. In this case, \$R_4\$'s current cannot possibly exceed \$20\:\mu\text{A}\$. So that's good. Also, the Thevenin divider resistance is now supplying substantial current (not like the capacitor.. but substantial) and therefore there is a significant voltage drop across them and therefore the gate voltage has become much lower, very quickly. For more details, please read the detailed discussion I just mentioned, earlier, and examine the curves there, too.
Aside from the fact that for the device mentioned their equation for \$V_A\$ doesn't use the datasheet value (or name, which is \$V_P\$, I think) but instead just uses a standard diode drop, I've no idea where they got that equation. The peak current is going to be huge because that capacitor is discharging into a \$47\:\Omega\$ resistor. Given that circuit, getting close to a quarter-amp, or so. If you multiply a quarter-amp by the sum of \$R_1\$ and \$R_2\$ you'd get an unimaginably high voltage value for that voltage difference in the numerator and... well... it makes no sense. I had considered the idea that maybe they were talking about the current into the gate when the capacitor is finally discharged enough that the PUJT is going to reset. But that idea doesn't produce their equation. Looks like a current. It takes the current through \$R_1\$ and \$R_2\$ when the PUJT isn't firing and subtracts from it the threshold voltage divided by those resistors (roughly.) Must be something they were thinking about. I just don't know what.
See below and pages referenced therein.
That book
I've been surprised, at times, by errors in "Practical Electronics for Inventors", despite some respected names contributing to it. This page has now become yet another.
The top curve makes perfect sense. It's an RC charging curve. Just what you'd expect from \$R_4\$ and \$C_1\$ without the attached PUJT anode. Should be on the order of \$\tau=R\,C\approx 1\:\text{s}\$.
Aside from errors highlighted by your second question, it's also now the bottom curve that bugs me. No idea where that comes from. It suggests a rising voltage during the pulse. Where would that come from? Not \$R_1\$! Its value is too big. \$C_1\$? No, that's discharging into \$R_3\$. A discharge curve doesn't look like that. So... what's up? Where's the discharging behavior, especially after all that effort to demonstrate the charging curve?
Well, it was probably too short to show, to scale, because the charging period is long while the discharge period is very short. So perhaps they just got an artist to "wing it?" No idea.
Here's what it really looks like:
And zoomed in a lot, you can see what the discharge curve actually looks like (in red):
Well, what do you know? It really is a discharge curve! Who would have thought that from reading that book?
Burn the book, I say.
Without a single doubt
We'll smoke the monster out!
A few lines from a scene in the old Alice In Wonderland movie.
PUJT model
The two BJTs, if you try and use them that way to make a PUJT, need to have certain parameters in relationship to each other or the whole thing doesn't work right. It's better to just use a .SUBCKT card if you can find what to put into one.
In the case of the 2N6027 and 2N6028, I have these 'reasonable' PSPICE models:
.SUBCKT 2N6028 1 2 3
* anode gate cathode
*node: 1 2 3
Q1 2 4 3 NMOD
Q2 4 2 1 PMOD
.MODEL NMOD NPN(IS=2E-15 VAF=100 IKF=0.3 ISE=2.5E-12 NE=1.6 RE=0.15 RC=0.15 CJE=7E-10 TF=0.6E-8 CJC=2.2E-10 TR=4.76E-8 XTB=3)
.MODEL PMOD PNP(IS=22E-15 VAF=100 IKF=0.3 ISE=1E-12 NE=1.7 RE=0.15 RC=0.15 CJE=7E-10 TF=1.6E-8 CJC=2.2E-10 TR=5.1E-8 XTB=3)
.ENDS
.SUBCKT 2N6027 1 2 3
* anode gate cathode
*node: 1 2 3
Q1 2 4 3 NMOD
Q2 4 2 1 PMOD
.MODEL NMOD NPN(IS=5E-15 VAF=100 IKF=0.3 ISE=1.85E-12 NE=1.45 RE=0.15 RC=0.15 CJE=7E-10 TF=0.6E-8 CJC=2.2E-10 TR=4.76E-8 XTB=3)
.MODEL PMOD PNP(IS=2E-15 VAF=100 IKF=0.3 ISE=1.90E-12 NE=1.5 RE=0.15 RC=0.15 CJE=7E-10 TF=1.6E-8 CJC=2.2E-10 TR=5.1E-8 XTB=3)
.ENDS
If you use these, I believe you'll find they work reasonably well when used in a Spice program. (Also see this answer for a symbol that can be used in LTspice.)
Just use regular BJTs
You can fabricate a kind of fake PUJT without actually having to own one. But the design is... slightly different. Here's the resulting output of the one I'll draw, shortly:
Now here's the schematic. As you can see, it's similar. Nice thing is that it works with "regular" junk box parts and it is modestly temperature-stabilized regarding the threshold voltage.
simulate this circuit – Schematic created using CircuitLab
Over on the right you can see how the PUJT is often diagrammed out, in books, using two BJTs. It's just that it is kind of tricky to actually drive that beast well, when the BJT parameters are coming from your junkbox (and wired together using copper) instead of from some FAB design lab (and directly sharing junctions.) If interested, take a look here for a short discussion about that generalized BJT version. You'll see something similar to the above there. But... there are some notable differences, too.
It's a heck of a lot easier to just buy some PUJTs. Do it soon. I don't know how long supplies will last! (Is anyone still building them? I have my lifetime supply. Do you? ;) |
H: Astable multivibrator not producing a square output wave, using two BJTs
I've been trying to build an astable multivibrator circuit using two BC547B NPN transistors.
I know how to build one using a 555 timer, but I'm having some problems using transistors.
I should've gotten a square wave of 0-5 V with a frequency of 500 Hz and duty cycle 0.5, but according to LTspice, I get a flat 5V output.
Can someone suggest where am I going wrong?
Also, I want the current driven to be of the order of some 3-4 mA.
In that case, are the values I've chosen for the resistors and capacitors OK, or should I make some changes?
PS: I'm working on LTspice. I can work only with 2 NPN BJTs of model BC547B, and a single power supply.
AI: There are mistakes: -
With no DC bias on the bases you will not get an oscillator.
This is how it should be done: -
However, it's not a great or stable circuit.
Try also this circuit (courtesy of micro-cap): - |
H: Precharge MOSFET safe operating area with inline resistor
I'm looking to use a MOSFET to turn on a resistive path, precharging a capacitive load. I'm considering the Nexperia BUK9608-55B, but I'm having trouble determining what my operating point is on the SOA plot.
I'll be using this as a high side switch with a 30VDC supply, ~7mOhm RDS(on), a 1 Ohm power resistor downstream, then the capacitive load (an inverter with around 14mF of capacitance). With those conditions, my inrush will be around 30A. So where am I on the SOA plot? Am I at A, because we're switching 30A, or am I around B, because most of the voltage drop is over the power resistor?
If we're operating around point A, would it make sense to use a load switch IC instead of a discrete MOSFET, and use its dV/dt limiting capabilities?
It seems like it really depends on turn on speed and how it is driven. I've been planning to use this NMOS with this PV driver (https://docs.broadcom.com/doc/AV02-0259EN). We'll be up at 7V so that should be enough to get sub 10mOhms.
AI: You're at point B. The FET's 7mOhm Rds_on forms a voltage divider together with the 1 Ohm resistor, which means that almost no voltage will be dropped across the FET.
However, during the switching process, the FET will be at point A for a very short period of time while its on-resistance gradually drops to its final low value. Therefore you have to make sure to switch the FET within 1ms, which is the time that the FET can operate for under these conditions (as you can see in the SOA diagram). 1ms is quite a long time, however. Any semi-decent gate driver can switch that FET in microseconds, not milliseconds. |
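The operating point and the precharge duration can be checked with the numbers given in the question (30 V supply, ~7 mOhm on-resistance, 1 Ohm resistor, 14 mF load):

```python
# Rough operating-point check for the precharge path described above.

V = 30.0          # supply voltage
Rds_on = 7e-3     # FET on-resistance
R_series = 1.0    # precharge power resistor
C_load = 14e-3    # inverter input capacitance

I_inrush = V / (Rds_on + R_series)    # initial inrush current
V_fet = I_inrush * Rds_on             # drop across the FET once fully on
P_fet = I_inrush**2 * Rds_on          # FET dissipation at peak current
tau = (Rds_on + R_series) * C_load    # precharge time constant

print(f"inrush : {I_inrush:.1f} A")
print(f"V(FET) : {V_fet:.3f} V  -> point B on the SOA plot")
print(f"P(FET) : {P_fet:.1f} W")
print(f"tau    : {tau*1e3:.1f} ms (roughly charged after ~5*tau)")
```

The tiny `V(FET)` once the FET is enhanced is what puts the steady operating point at B; the transit through point A only lasts for the gate switching time.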
H: Phasor angle calculation
In this example, for Vo it first says it is lagging by -134.8 degrees, however when then calculating actual Iin, the angle for Vo becomes -138.4.
I'm wondering if -134.8 is a typo, and if it isn't; how did they get that number?
AI: Well, the angle of the output voltage can be found using the voltage divider formula:
$$\varphi=\arg\left(\frac{4\text{||}\left(-2\text{j}\right)}{2+12\text{j}+\left(4\text{||}\left(-2\text{j}\right)\right)}\right)=\arg\left(-\frac{18}{145}-\frac{16}{145}\cdot\text{j}\right)=$$
$$\arctan\left(\frac{8}{9}\right)-\pi\approx-2.41495\space\text{rad}\tag1$$
which is equal to approximately: \$-138.366^\circ\$. |
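The derivation above can be double-checked numerically with complex arithmetic:

```python
import cmath
import math

# Numeric check of the voltage-divider angle derived above.
Z1 = (4 * -2j) / (4 - 2j)      # 4 || (-2j) = 0.8 - 1.6j
H = Z1 / (2 + 12j + Z1)        # voltage-divider transfer function
phi_deg = math.degrees(cmath.phase(H))

print(H)          # -18/145 - 16/145*j, approximately
print(phi_deg)    # approximately -138.37 degrees
```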
H: Dimming energy with load cell
I'm a lawyer with no technical background. I'm just a geek! :D
I wonder: is it possible, without using a microcontroller, for a load cell to dim the amount of energy supplied by a 3V battery?
The heavier the load the more energy is supplied and the lighter the load the less energy is supplied (I'm thinking max load ~35oz, min load ~7oz)
AI: The simple answer is ‘yes’. When you say ‘dim the energy’, my interpretation is that you want a voltage output proportional to the load?
If this is the case, you want a load cell amplifier. These are a common item, but for 3V operation it is not very common. Outputs of 0-5V or 0-10V are more likely. |
H: How to calculate the discharged current of a capacitor from the discharge curve?
I observed the discharge of a capacitor on an energy harvester chip via an oscilloscope. The Output was used to power a BLE module.
I know the capacitor value, discharge time, and the top and bottom voltage. I don't think I know my load resistor because it is a bit complicated to measure.
Is there a way to measure the total discharged current?
AI: If you know the capacitance C (assuming it is constant) then the slope of the curve is proportional to the discharge current.
\$i(t) = C\frac{dv}{dt}\$
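In practice you can estimate the current by taking a numerical derivative of voltage samples read off the scope trace. The trace values below are made up for illustration; only the method (slope times capacitance) comes from the answer.

```python
# Estimating discharge current from sampled scope data: i(t) = C * dv/dt.
# The sample values below are illustrative, not from the actual trace.

C = 100e-6                           # capacitor value (assumed)
t = [0.0, 0.1, 0.2, 0.3, 0.4]        # s
v = [3.0, 2.6, 2.25, 1.95, 1.70]     # V, read from the scope trace

for k in range(len(t) - 1):
    dv_dt = (v[k+1] - v[k]) / (t[k+1] - t[k])
    i = -C * dv_dt                   # discharge current (positive out)
    print(f"t={t[k]:.1f}s  i={i*1e6:.0f} uA")

# Total charge delivered between the top and bottom voltages:
Q = C * (v[0] - v[-1])
print(f"charge delivered: {Q*1e6:.0f} uC")
```

Dividing `Q` by the total discharge time gives the average discharge current over the interval.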
H: Current sinking in a DC-DC module
If I connect two DC-DC Modules (example module in the figure below) in parallel, can I increase its current capacity?
Also, I have another doubt out of curiosity. If I connect a module in the following configuration, instead of supplying current it needs to sink. Will it work? In this case, the module has a 7V input and a 5V, 2A output. I connected the output so that it closes a loop with a 7V battery and a 1 ohm resistance.
Are these modules designed to deliver or can also sink the power to maintain the 5V at their terminals?
Links to the datasheet
https://datasheet.lcsc.com/lcsc/1912111437_HI-LINK-HLK-10D2405A_C465413.pdf
I am unable to find an English datasheet. However, I found a datasheet for AC-DC module.
https://datasheet.lcsc.com/szlcsc/1912111437_HI-LINK-HLK-5M05_C209907.pdf
AI: About the paralleling of DC/DC converters: in general no, you can't, unless they are designed to work that way (look for current sharing among the features, or feedback synchronization, or something similar).
They usually can't do that, essentially for two reasons: first of all, due to tolerances, one of the converters will try to deliver most of the current; and second, many compensation schemes aren't really happy when something else supplies power from the output side. Or, more simply, they can't source into a prebiased output.
For bigger converters these days the usual solution is a polyphase converter: multiple power stages synchronized from the same clock. I've seen brochures for controllers expandable to up to 24 phases, so a lot of power can be handled.
Another huge benefit is that EMC is greatly simplified being all synchronized.
As for the second question, about converters used as 'sinks': I don't really think it would work, in general. Many converters can only source current. Synchronous topologies could in principle sink current, but the controllers don't necessarily handle regulation in that direction (some only use it to handle gross overvoltage situations).
Of course it is feasible to build a two-quadrant converter (even four-quadrant ones; they are used for 42V car supplies); it simply isn't a commonly designed use case.
In your case your best course of action would be to ask HiLink (or whoever makes the converter brick) if your configuration is supported or not. |
H: Playing around with an astable multivibrator
I had a doubt regarding the astable multivibrators using BJTs.
This is a typical astable multivibrator.
I have been told that if I connect a capacitor to the output channel, there would be no difference.
I've also been told that if one were instead to connect the Output 2 channel to just a load resistor, which in turn is connected to ground, the peak voltage of the square waveform would drop, and the duty cycle might also increase (it is 50% initially, as can be seen.)
I'm having a hard time figuring it out why this happens at all, for the resistor.
Can someone work out the formulae and try to come up with an explanation as to why this happens?
The load resistor can be assumed to be of 1k ohms.
I'm new to electronics, so it would be really helpful if someone could work out the formulae and the explanation.
AI: I'll give you a partial explanation about how the circuit works, and let you work out the details from there.
Consider transistor TR1, and let's think about what happens around the time that it switches from the nonconducting state to the conducting state. Just before that switch occurs, C1 has been charging through a path that includes R1 and the B-E junction of TR2. This has a short time constant, mainly because R1 has a much lower value than that of R2 or R3. This means that C1 is pretty much fully charged -- to the supply voltage minus the B-E drop (about 0.65V) of TR2. See the diagram below.
simulate this circuit – Schematic created using CircuitLab
Note that if you have a load resistor attached to the collector of TR1, it forms a voltage divider with R1, which reduces the maximum voltage on C1. The diagram below shows this situation, and the part in the dashed box on the left can be replaced with its Thévenin equivalent on the right, which better illustrates its effect on the capacitor.
simulate this circuit
Next, TR1 switches on and pulls its collector to within a few hundred mV of ground. Since the voltage across C1 can't change (at least, not quickly), this means that its other end — the one connected to the base of TR2 — is pulled to a negative voltage that is nearly equal to the supply voltage. This reverse-biases the B-E junction of TR2, ensuring that it is cut off.
simulate this circuit
Now, C1 is discharging (charging in the other direction) through a path that includes TR1 and R3. Since this path has a much longer time constant, this takes a while, and nothing else happens until the base of TR2 reaches about 0.65 V, at which point TR2 starts to turn on. At this point, everything that I described above now starts to happen in the other half of the circuit.
In other words, right after TR1 switches on, the voltage at point A in the diagram above is about -5.15 V,1 and slowly rises to +0.65 V.2 If there's a load resistor as shown before, then the voltage at point A starts out at -2.15 V instead of -5.15 V. Clearly, it's going to take a lot less time to get to +0.65 V than in the original case.
So, from this, can you see why the presence of a load resistor, which affects the maximum voltage on the corresponding capacitor, would have an effect on the timing? Can you work out the equations and other details from that?
1 As @jonk points out in a comment, if the supply voltage were any higher than 6 V, this negative voltage could easily exceed the maximum reverse B-E voltage of the transistor, which would mess up this analysis by clamping the peak negative voltage.
2 The relevant equation is the one for a capacitor changing its voltage from a start value to an ending value through a resistor, which is:
$$V(t) = (V_0 - V_{end}) e^{-\frac{t}{RC}} + V_{end}$$
The question we want to answer is if \$V_0\$, \$V_{end}\$, R and C are known, how long does it take for the capacitor to reach some intermediate voltage \$V_x\$? That requires solving the above equation for t, which isn't exactly straightforward.
$$V_x = (V_0 - V_{end}) e^{-\frac{t}{RC}} + V_{end}$$
$$V_x - V_{end} = (V_0 - V_{end}) e^{-\frac{t}{RC}}$$
$$\frac{V_x - V_{end}}{V_0 - V_{end}} = e^{-\frac{t}{RC}}$$
$$\ln\left(\frac{V_x - V_{end}}{V_0 - V_{end}}\right) = -\frac{t}{RC}$$
$$RC\ln\left(\frac{V_x - V_{end}}{V_0 - V_{end}}\right) = -t$$
$$-RC\ln\left(\frac{V_x - V_{end}}{V_0 - V_{end}}\right) = t$$
Since negating the logarithm of a fraction is equivalent to taking the logarithm of the fraction inverted, the final equation becomes
$$t = RC\ln\left(\frac{V_0 - V_{end}}{V_x - V_{end}}\right)$$
For the first case, without the load resistor, we have
$$t = 100k\Omega 1\mu F \ln\left(\frac{-5.15 V - 6 V}{+0.65 V - 6 V}\right) = 73.4 ms$$
For the second case, with a load resistor on either side, this becomes
$$t = 100k\Omega 1\mu F \ln\left(\frac{-2.15 V - 6 V}{+0.65 V - 6 V}\right) = 42.1 ms$$ |
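The two timing results can be checked numerically with the same component values and the equation just derived:

```python
import math

def rc_time(V0, Vend, Vx, R, C):
    """Time for an RC node to move from V0 toward Vend, reaching Vx."""
    return R * C * math.log((V0 - Vend) / (Vx - Vend))

R, C = 100e3, 1e-6
# Without the load resistor: the base starts at -5.15 V
t1 = rc_time(-5.15, 6.0, 0.65, R, C)
# With the load resistor: the base starts at -2.15 V
t2 = rc_time(-2.15, 6.0, 0.65, R, C)
print(f"{t1*1e3:.1f} ms, {t2*1e3:.1f} ms")   # 73.4 ms, 42.1 ms
```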
H: How to build a DC motor speed controller with DC source and SCR?
So after asking a question from the book "Practical electronics for inventors" and getting recommendations to burn the book, I'm going to start this thread with the premise that I'm not crazy and that they are wrong.
So basically, I don't think that the left circuit is creating AC (only a pulsing DC which will not turn the SCR off). I would still love to learn more about SCR applications. In your opinion, is there a good way to run this DC motor speed controller with a DC source and SCR?
AI: That book and its schematic and discussion
There really isn't much more to say. Burn the book.
Assuming the SCR is off to start (another thing to worry about) and there is a load hooked to the anode like that, then once the SCR's gate is triggered by a sufficient pulse the anode voltage drops to about one diode-voltage and load current travels through the anode to the cathode. This current will persist, independent of further gate triggering events, so long as the anode current is above the SCR's holding current.
As an SCR's holding current is intentionally designed to be low enough to allow most reasonable load variations (within the design purposes of the SCR) to occur without accidentally causing the SCR to abruptly turn off in unwanted ways, the upshot is that the motor in your circuit will just run at 100% duty cycle once the first trigger arrives.
That circuit uses a 2N4819. Digikey and Mouser don't list it, and even Rochester, which is known for keeping ancient, boutique parts around, doesn't list it. Neither does Solid State Inc., which often accumulates weird things. I may have one -- if so and if I find it again, I'll have to enshrine the thing in a museum somewhere. Why would a "modern" book suggest such a part?
Burn the book.
What about replacing the UJT with a PUJT?
Suppose you can get past the elephant in the room -- that the circuit isn't even designed to work -- and you want to know, "Well, what if I replace the unavailable UJT with a somewhat-available PUJT (see: Central Semiconductor and Solid State Inc.)? Can you show me that the whole circuit idea is still a total flop?"
My apologies for putting words in your mouth, but sure! That's easy to do. I'll borrow their earlier PUJT circuit and adjust a few voltages:
Which produces this output in LTspice:
Note that the second pulse didn't affect the current in \$R_5\$ (the "motor" here.)
The first "tick" turns on the "motor," as expected. But then it just stays on and the next "tick" is ignored. As will be all the remaining ones that follow.
You might ask yourself, "What exactly would cause the motor to turn off in a circuit like this? I can see something to turn it on. But where is the 'thing' that causes the motor to turn off?"
You see, in a PWM circuit (which I gather they are suggesting that circuit is, somehow), you have one thing that engages the switch and turns it on and you have a different thing that disengages the switch and turns it off. There must be two things here. Not one. So the obvious question about the book's writing is this, "How did the author of this piece fail to realize that there is only one timing mechanism here when it must be the case that two are fundamentally needed?"
Suppose the author thought, "Well, the motor will be on for the width of the timing pulse and will then go off until the next timing pulse occurs." But that's fundamentally flawed thinking for these reasons: (1) The timing pulse is quite narrow and the timing period is extremely large, given those component values and reasonable use of the potentiometer, so how is this PWM? and, (2) If one argues that it might be PWM if and only if you tweak that potentiometer down really, really low, then that also is crazy as you can't really take a 100k potentiometer and realistically use it like that, anyway, assuming the circuit would still work, which it wouldn't, with such a low value at the potentiometer; and, (3) The SCR doesn't turn off on its own, given sufficient load.
Getting past the fact that the UJT is unavailable to anyone but wealthy collectors, how do you get past the complete failure to think about the basics? And getting past the fact that there were multiple authors to this book, who were the content editors and what was their job? Did they actually do anything useful?
And finally, there's another problem. A motor has inductance. Sometimes, lots of it. A short trigger pulse might very well not be long enough to allow the motor current to reach a level where it can stay on. In that situation, the motor will just start to develop a tiny current and then, when the pulse is over, fall back to being off. So you will get tiny little current spikes in the motor every so often, maybe.
Even assuming you actually wanted to use an SCR and keep the motor running once the first pulse arrives, you still need to ensure that it is stretched out long enough so that the motor can exceed the required holding current levels of the SCR. And that's yet another consideration missing from this circuit. (You could increase the pulse width by reducing \$R_4\$ and increasing \$C_1\$, for example. And/or add a little capacitance across \$R_3\$. But the point remains -- no discussion of these details was present in the book.)
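The holding-current point can be sketched with a simple L/R rise calculation. All the values below are illustrative assumptions (the answer doesn't give motor parameters); the check is whether the current exceeds the SCR holding current before the gate pulse ends.

```python
import math

# Sketch: does the motor current exceed the SCR holding current before
# the trigger pulse ends?  All values here are illustrative assumptions.

V = 12.0          # supply (assumed)
R_motor = 10.0    # motor winding resistance (assumed)
L_motor = 50e-3   # motor inductance (assumed)
I_hold = 5e-3     # SCR holding current (assumed)
t_pulse = 100e-6  # gate trigger pulse width (assumed)

tau = L_motor / R_motor
i_end = (V / R_motor) * (1 - math.exp(-t_pulse / tau))
print(f"current at end of pulse: {i_end*1e3:.1f} mA")
print("latches on" if i_end > I_hold else "drops back out")
```

With a longer time constant or a narrower pulse, `i_end` falls below `I_hold` and the SCR drops back out, producing exactly the "tiny little current spikes" described above.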
Anyway,
Burn the book.
Last note: I have had occasion to work with one of the contributors to this book: Bonnie Baker. (When she was then at Burr Brown.) She was admittedly difficult to get to laugh (at least for me she was rather "dry" in that sense.) But my experiences were excellent, her help quite useful, her white-papers were excellent, and I have profound respect for her knowledge. I'm sure her contributions are fine, where they appear. I'm just sorry to see her name somewhere in that book. |
H: Is this an ESP32 pin naming error?
Just looking at the datasheet for ESP32 I am finding something very strange.
I am trying to decipher why there is a different pin assignment (for GPIO02) between my ESP32-WROVER and the pin mux table inside the document on page 60 (where it says it is attached to pin 22):
https://www.espressif.com/sites/default/files/documentation/esp32_datasheet_en.pdf
In the Wrover development kit schematic it says the pin is located on pin 24:
https://dl.espressif.com/dl/schematics/ESP-WROVER-KIT_V4_1.pdf
Is this a mistake, am I looking at the wrong datasheet? I don't think so because there are no others.
Could someone help out with this please?
AI: It's not just this pin - if you look, the others are also off in a similar vein.
I think the datasheet you are reading has pin labels which don't match up with the module's pin labels. I.e. the first datasheet is the pinout of the ESP32 chip, while the second one is for the ESP32-WROVER module. So ESP32 chip pin #22, which is named GPIO2, is fanned out as pin #24 on the WROVER module.
H: Can I use this low side rated MOSFET driver to drive a high-side PMOS?
The IX4426 is MOSFET driver rated for "low-side gate drivers", but upon inspecting its schematics it is a push-pull circuit:
I should be able to drive a highside PMOS on its output. If the problem is the input voltage is not enough to turn on the PMOS of the push pull, can it be solved by adding a voltage shifter on the inputs?
AI: If the problem is the input voltage is not enough to turn on the PMOS
of the push pull, can it be solved by adding a voltage shifter on the
inputs?
There's no need: -
In other words, for a supply range of 4.5 volts to 18 volts, the input level for logic 1 is greater than 2.4 volts and the input level for logic 0 is less than 0.8 volts i.e. it is roughly TTL compatible and can be driven by TTL, 3.3 volt CMOS logic or 5 volt CMOS logic outputs.
so i should be able to drive a highside PMOS on its output
Yes. |
H: Is it possible to import CAD model files into LTspice?
There are very beneficial and comprehensive collections of CAD models provided by Mouser or other websites. Is it possible to use them in LTspice?
AI: LTspice cannot use 3D models - it has no 3D view.
LTspice cannot use the footprint - has no PCB layout.
LTspice might conceivably be able to use the symbol, but I don't know of any way to import just the drawing.
LTspice needs the spice model, and a mapping between the pins and the model innards.
The CAD models won't provide the spice model, so they aren't of much use in LTspice. |
H: Connecting a RC circuit to the output channel of an astable multivibrator (follow up question)
This is a question which I had asked earlier.
Tinkering around, I found out that if I connect a capacitor to the load resistor, i.e. if I pass the output through an RC circuit, the voltage retains its original value, and the frequency too, if the capacitance of this capacitor is the same as the previous ones in the circuit. However, the square wave's upper edges are somewhat rounded, and somehow touch V_init. Can someone give a possible explanation for this, explain how to apply Thevenin's theorem here (if it's actually feasible), and describe how this configuration relates to the initial circuit without a load resistor connected to the output?
AI: This is a common design fault. Each stage is common-emitter with an active "pull" only, while the "push" is just the collector resistor.
When an AC-coupled load is connected, that load R MUST be greater than Rc to avoid overloading the collector-resistor pullup; although the series C forms a HPF, the collector still sees it as an additional, partly low-pass, load.
H: Inrush on a reverse fed step down transformer
I found a paper from ABB with some unexpected information on inrush current. First, they feed 480V to the high voltage side of a 480V/208V step down transformer and measure the inrush. Then, they feed 208V to the low voltage side of the same 480V/208V transformer - using it as a step up transformer. Measured inrush current in the second case is much higher than the ratios of the transformer would predict. Can any transformer experts explain the asymmetry in this experiment? Maybe the leakage inductance is substantially different between the windings and plays a big role in inrush?
The primary and secondary full load amps of General Electric 9T23B3874 step-down transformer are 90 amps @480 VAC and 208 amps @ 208 VAC. When connected step-down and energized at 480 VAC, the maximum peak inrush current is approximately 990 amps or 11 times the rated 90 amp primary winding full load current. But when connected step-up and energized at 208 VAC, the maximum peak inrush can reach 7700 amps or 37 times the rated 208 amp secondary winding full load current.
AI: The inrush current (considered in p.u. of rated current) will be different for each winding in which the transformer is energized. The question is not really related to whether the transformer is step-down or step-up, but rather to the diameter of each winding. Usually the low-voltage windings are wound closer to the core, so the inrush current is higher for these inner windings.
This is because during the inrush event the core is driven into saturation, and the current is limited by the energized winding's "air-core reactance". If you look at the inductance formula for a coil, you will see that the larger the area, the higher the inductance/reactance. Therefore, the more "internal" the winding, the lower its air-core reactance and the larger the inrush current.
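The area dependence can be illustrated with the ideal long-solenoid formula, L = mu0 * N^2 * A / l. This is only a crude stand-in for a real winding's air-core reactance, and the turns, diameters, and length below are illustrative, but it shows why the inner (smaller-diameter) winding has the lower air-core inductance:

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def solenoid_L(N, area, length):
    """Ideal long-solenoid inductance: L = mu0 * N^2 * A / l."""
    return MU0 * N**2 * area / length

length = 0.5                   # winding height, m (illustrative)
A_inner = math.pi * 0.05**2    # inner winding, 100 mm diameter
A_outer = math.pi * 0.07**2    # outer winding, 140 mm diameter

L_in = solenoid_L(200, A_inner, length)
L_out = solenoid_L(200, A_outer, length)
print(f"inner winding: {L_in*1e3:.2f} mH, outer: {L_out*1e3:.2f} mH")
# The smaller-diameter (inner) winding has the lower air-core
# inductance, hence the larger inrush current when energized.
```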
It's not exactly related to this question (which winding of a transformer has the bigger inrush current), but in this post I've described a model for inrush current calculations in LTSpice. |
H: Why is CAS latency mostly recognized in DRAM timing?
From what I know, it takes roughly these steps to read DRAM data:
Enable the RAS signal, send the row address
Wait for some time (tRCD), then enable the CAS signal, send the column address and read command
Wait for some more time (CL), then get what we need.
So basically, tRCD and CL should be of equal importance, because we have to wait for both. But many resources, such as this and this, claim that
CAS Latency (CL) is the best known memory parameter
Since CL just means the latency between column selection and sending data, I don't understand why tRCD often gets ignored.
AI: Two reasons:
If you actually look at the DRAM datasheet, you'll see that tCL is the largest component of the overall access time.
In any but the most primitive DRAMs, there are advanced access modes (burst mode, fast page mode, etc.) that eliminate the need to cycle RAS for every access. This means that tCL is the only component left of the overall access time.
And especially with modern SDRAM, the row and column addresses are provided on consecutive clock cycles, while the data isn't returned until some 10-20 clock cycles later. Again, tCL is the primary determiner of memory random-read performance. |
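The effect of row reuse can be put into rough numbers. The cycle counts below are made-up but typical in shape: on a page hit, the row-open overhead disappears and tCL is all that remains.

```python
# Illustrative SDRAM read-latency breakdown.  Cycle counts are
# assumptions, chosen only to show the shape of the trade-off.

tRCD, tCL, tRP = 18, 22, 18     # in clock cycles (illustrative)

row_miss = tRP + tRCD + tCL     # precharge + activate + CAS latency
page_hit = tCL                  # row already open: only CAS latency left

print(f"row miss: {row_miss} cycles, page hit: {page_hit} cycles")
```

Because bursts and open-page accesses avoid the tRP/tRCD terms entirely, tCL ends up being the parameter that dominates typical read latency, which is why it gets the marketing attention.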
H: A transistor and a diode instead of a tri-state buffer
I was wondering if it is possible to replace the tri-state buffer with a transistor and a diode as shown in the image.
In this case a zero will be the same as high impedance. I don't understand why we should make the current flow in the other direction when we output a zero, we could just leave it at zero volts, and with the help of the diode we can prevent the current from going the other way and messing up the data.
This way it will - as I understand it - not be connected to the circuit.
As long as we enable just one register, there shouldn't be any problems. This way we could save a lot of transistors and propagation delay.
AI: There are some problems with your design.
Figure 1. The problem.
The big one is that once the base is pulled high that current will flow through the base and diode and drive the bus line high regardless of the state of Q.
If 1 wasn't enough of a problem your transistors are arranged as voltage followers. When Enable goes high the emitter voltage will be Vb - 0.6 V approximately. Then subtract the diode voltage drop and you've got Vb - 1.2 V. That may be enough to fail the logic high requirements of the devices being driven.
When all the Enable inputs turn off there is no pull-down on the bus line. A beginner's mistake is to assume that a disconnected input will fall to zero volts. There is no reason why they should and are just as likely to float positive or stay where they are. CMOS inputs cannot be left in a floating state.
Even adding a pull-down resistor on the bus may not be good enough as it will be discharging any capacitance on the bus. Driving the line to GND is far more effective and much faster and, therefore, allowing higher bus speeds.
But how about this solution here:
Figure 2. Attempting to prevent Enable feeding through to the bus.
It is most likely that the devices on the bus have CMOS inputs which have extremely high input impedance. That means that even a current limited solution will still pull the bus high.
And I didn't quite understand what you meant by number 4. I have seen someone on YouTube using a pull-down resistor on the bus and he got away with it!
The bus will have some capacitance due to proximity of traces and the capacitance of the CMOS inputs it is feeding. When the bus lines are driven high or low you have to charge or discharge this capacitance. You can discharge with a pull-down resistor but that will take a little time (τ = RC) and that will introduce a small delay. It's much better to drive the line high and low.
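The delay penalty of a passive pull-down can be put into numbers with the RC discharge equation. The bus capacitance, resistor values, and thresholds below are illustrative assumptions:

```python
import math

# How long a passive pull-down takes to drag a released bus line down
# to a valid logic low, versus an active driver.  Values illustrative.

C_bus = 50e-12              # stray + CMOS input capacitance on the bus
R_pulldown = 10e3           # passive pull-down resistor
V_start, V_il = 5.0, 0.8    # released-high level and logic-low threshold

# t = R * C * ln(V_start / V_il) for an RC discharge to the threshold
t_passive = R_pulldown * C_bus * math.log(V_start / V_il)
print(f"passive fall to V_IL: {t_passive*1e9:.0f} ns")

# An active driver with, say, 50 Ohm output impedance:
t_active = 50 * C_bus * math.log(V_start / V_il)
print(f"active  fall to V_IL: {t_active*1e9:.2f} ns")
```

The roughly 200:1 difference is why actively driving the line low allows much higher bus speeds than relying on a pull-down.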
You don't want to design to "get away" with something. If you do you won't be sure how close to failing your circuit is and while it may work for this device it may not for the next due to variations in chips, etc. Watching YouTube is not a substitute for study, reading books, datasheets and technical articles and for proper design with calculations. |
H: Hardware Design Question Related to IMU VDD, VDDIO and IO pins
I am working on a flight controller design using the ICM-20789, an IMU from TDK. As most of the components are powered by a 3V3 source, I am thinking of connecting IMU_VDD to 3V3 and VDDIO to a 1V8 source, the same as the recommended design.
However, is the level translator needed? The absolute maximum ratings table suggests that the IMU I/O can take VDD+0.5V, so can I just connect the SPI and I2C pins to the MCU without level translation?
AI: The MAX3378 is expensive, at about $4.
Since you are not using tri-state (it's uni-directional) you can cost-reduce this with a series R plus a shunt R to divide the voltage by 2.
In other words, to bring the drive voltage within spec, you could use two equal resistors (e.g. R = 220) to halve the output voltage, for high-speed conversion. These come in a variety of arrays that cost < $0.10. E.g. two pull-up resistor networks could be used as pull-up/pull-down to divide by two.
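The resulting level and source impedance of the divider are easy to check. The driver voltage below assumes a 3.3 V output being shifted toward a lower-voltage input, as an illustration:

```python
# Resistive divider in place of a level translator: equal resistors
# halve the driver's output swing.  Driver voltage is an assumption.

V_drive = 3.3        # e.g. a 3.3 V output driving lower-voltage I/O
R_series = 220.0
R_shunt = 220.0

V_out = V_drive * R_shunt / (R_series + R_shunt)
R_thevenin = R_series * R_shunt / (R_series + R_shunt)
print(f"divided level : {V_out:.2f} V")
print(f"source Z      : {R_thevenin:.0f} Ohm")
```

The Thevenin source impedance together with the bus/input capacitance sets the edge speed, which is why small resistor values are chosen for high-speed conversion.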
H: Why does the rotor magnetic field lag the rotor current by 90° in a 3 phase induction motor?
I was trying to understand the load angle in electrical machines, generally denoted by delta, and I found that while delta needs to be less than 90° in a synchronous machine for steady-state stability, this is not so in induction machines: delta is always larger than 90° in an induction motor.
The source of this knowledge is Stephen Chapman's Electrical Machine Fundamentals. Screenshot below:
One thing which boggles my mind is the angle by which Br (rotor magnetic field) is shown to lag behind rotor current Ir (that is 90°.) How is this possible? Isn't the rotor MMF supposed to be in phase with the rotor current? Am I missing something or is the book wrong?
AI: One thing which boggles my mind is the angle by which Br (rotor magnetic field) is shown to lag behind rotor current Ir (that is 90°.)
Couldn't find your version of the textbook on Google books. I found this instead.
The figure in this book is slightly better.
The image caption says
... the rotor current produces a rotor magnetic field BR lagging 90deg behind itself ...
The word lag in your question as well as the image caption and the arrows in the figure may lead one to think that the arrows represented are phasors. If that interpretation were true, then the magnetic field phasor would be expected to be aligned to the current.
(IMO) But the arrows are not phasors. They are physical directions of the magnetic fields. The dotted line is not the direction of the current. In the version of the book I posted above, the arrow representing \$I_R\$ is clearly shown perpendicular to the plane of the page. The magnetic field \$B_R\$ would then be in the plane of the paper and perpendicular to the loop of the current, as given by the right-hand rule Wikipedia Figure. |
H: Op-amp as a current sink
With no application in my mind, I am playing around the circuits and came up with some doubts. I asked a question in Current sinking in a DC-DC module
to which I received the answer. However, I got curious and came up with another doubt.
If I connect the following circuit, will the op-amp do the magic?
In this circuit, I am connecting the modules in series to power an Op-Amp. The op-amp out is connected such that it needs to sink 1A to maintain the 3V at its output. In this configuration, I have the following questions.
Will the op-amp be able to sink the 1A?
If the op-amp sinks 1A, will it force the power source to sink or it will act as a load to the power source?
Is it even valid to use such a circuit electronically? If not, what are the reasons?
AI: If you do that to a typical op-amp (say, an LM358,) then you will "release the magic smoke" - destroy the op-amp, that is.
Op-amps must be able to source and sink current to operate. They aren't normally designed for large currents, though.
Typical op-amps work in the area of a few milliamperes. Forcing 1A of current through them is a good way to destroy them.
Something like the TI LM675 or the Analog Devices LT1210 could sink that 1A, and live to tell the tale - but they are not your typical op-amp.
If you respect the current limits of your op-amp then they can sink current without damage.
In the example you give, the 200 ohm resistor and the 1A current source will "look like" a load to the negative 5V power supply. |
H: How is PAM-3 used with binary digits?
I've seen a post on Anandtech. Apparently Intel is planning to use PAM-3 for USB80.
My question, how efficiently can we map binary digits into a PAM-3 modulation?
For instance, 2 successive PAM-3 "digits" have 9 possible values (00, 01, 02... 22)
My naïve implementation would be to map 3 bits into 2 PAM-3 digits, we use 8 values out of 9, meaning the mapping efficiency is 88.888%. Is there a way to get closer to 100%?
AI: You can code more bits at a time.
3 bits => 2 trits is a ratio of 1.5.
11 bits => 7 trits is a ratio of ~1.571.
19 bits => 12 trits is a ratio of 1.583.
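These ratios can be checked with a short script — efficiency here means bits carried per trit relative to the log2(3) theoretical maximum:

```python
import math

def efficiency(bits, trits):
    # Bits per trit, relative to the log2(3) ~= 1.585 bits/trit maximum.
    # The mapping is only possible if 2**bits <= 3**trits.
    assert 2**bits <= 3**trits, "mapping impossible"
    return (bits / trits) / math.log2(3)

for bits, trits in [(3, 2), (11, 7), (19, 12)]:
    print(f"{bits:2d} bits -> {trits:2d} trits: "
          f"ratio {bits/trits:.3f}, efficiency {efficiency(bits, trits):.1%}")
```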
The best you could possibly get is log(3) / log(2) which is slightly less than 1.585. Which means that even using the 3 => 2 mapping is 94.6% efficient when calculated from a bandwidth perspective instead of a "how much of the code space do we use" perspective; the 19 => 12 mapping is 99.9% efficient. Of course, there's some waste if your frame isn't a multiple of 19 bits long :) |
H: Understanding the resolution of the DS18B20 - mismatch between specs?
I am interested in the DS18B20 temperature sensor from Maxim Integrated. The datasheet is here:
https://datasheets.maximintegrated.com/en/ds/DS18B20.pdf
The datasheet mentions:
"Measures Temperatures from -55°C to +125°C"
"The resolution of the temperature sensor is user-configurable to 9, 10, 11, or 12 bits, corresponding to increments of 0.5°C, 0.25°C, 0.125°C, and 0.0625°C, respectively."
But when I compute this by hand, I get that, at 12 bits resolution, between -55 and +125 degrees, the resolution in degrees celcius "should" be:
(125+55) / (2**12) = 0.044
(using 2 significant digits).
How to understand the mismatch? Does that mean that only "part of" the 12 bits resolution is really used? Then I guess it is not strictly speaking "12 bits resolution", right?
AI: If you look at the DS18B20 datasheet, page 6 you can see that it does not use the entire range of possible values.
With the maximum resolution (12 bits), it could potentially represent values from -128 to just under +128 °C (a sign bit, 7 bits left of the “binary” point, plus the 1 to 4 fractional bits), of which only the values -55 to +125 are used.
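Decoding follows directly: the 16-bit scratchpad value is two's complement with an LSB weight of 2⁻⁴ °C at 12-bit resolution. A minimal sketch, using two example codes from the datasheet's conversion table:

```python
def ds18b20_raw_to_celsius(raw):
    # 16-bit two's-complement value; at 12-bit resolution the LSB
    # weighs 2**-4 = 0.0625 degC.
    if raw & 0x8000:        # sign bit set -> negative temperature
        raw -= 1 << 16
    return raw * 0.0625

print(ds18b20_raw_to_celsius(0x0191))   # +25.0625 degC (datasheet example)
print(ds18b20_raw_to_celsius(0xFF5E))   # -10.125 degC (datasheet example)
```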
The maximum resolution to the right of the “binary” point is 4 bits, which represents 2^-4 (1/16 = 0.0625 °C). |
H: Can't display characters on 16x2 LCD
Currently I'm working on a project where I want to display some data on an LCD. I'm using a MCP2221A to communicate with a PCF8574, which then sends the bits to the HD44780 LCD.
How:
I'm using the LCD in 4-bit mode so I have to send the data-bytes in nibbles, first the higher and then the lower. The bytes I'm sending have the following composition: [RS,RW,E,K,D4,D5,D6,D7].
I've been following a manual to initialize and write to the LCD. For the I2C communication I'm running a python-script using CircuitPython. Here are the two functions I'm using:
def init(lcd_address):
commands = {
"set": "00110011",
"4bit": "00110010",
"clear1": "00110000",
"clear2": "00110001",
"home1": "00110000",
"home2": "00110011",
"ON1": "00110000",
"ON2": "00111111",
}
for i in range(3):
i2c.writeto(lcd_address, bytes([int(commands["set"], 2)]),stop=False) # Function set
sleep(0.01)
i2c.writeto(lcd_address, bytes([int(commands["4bit"], 2)]), stop=False) # Set 4-bit mode
i2c.writeto(lcd_address, bytes([int(commands["clear1"], 2)]), stop=False) # Clear the screen. Higher nibble
i2c.writeto(lcd_address, bytes([int(commands["clear2"], 2)]), stop=False) # Clear the screen. Lower nibble
i2c.writeto(lcd_address, bytes([int(commands["home1"], 2)]), stop=False) # Set cursor to (0,0). Higher nibble
i2c.writeto(lcd_address, bytes([int(commands["home2"], 2)]), stop=False) # Set cursor to (0,0). Lower nibble
i2c.writeto(lcd_address, bytes([int(commands["ON1"], 2)]), stop=False) # Turn on display. Higher nibble
i2c.writeto(lcd_address, bytes([int(commands["ON2"], 2)]), stop=True) # Turn on display. Lower nibble
def write(lcd_address): # Writing the letter 'H' to the lcd
char1 = "10110100"
char2 = "10110100"
i2c.writeto(lcd_address, bytes([int(char1, 2)]), stop=False)
sleep(0.02)
i2c.writeto(lcd_address, bytes([int(char2, 2)]), stop=True)
Problem:
I can't get the character to show up on the LCD. The bytes in the code are the same that get sent to the LCD (checked with scope), so that is not the problem. I'm guessing it has something to do with the enable pin and how that should be triggered, but I'm not sure.
EDIT
This is how my code looks now:
def init(lcd_address):
commands = {
"setD": "00010011",
"setE": "00110011",
"setF": "00010011",
"4bitD": "00010010",
"4bitE": "00110010",
"4bitF": "00010010",
"clear1D": "00010000",
"clear1E": "00110000",
"clear1F": "00010000",
"clear2D": "00010001",
"clear2E": "00110001",
"clear2F": "00010001",
"home1D": "00010000",
"home1E": "00110000",
"home1F": "00010000",
"home2D": "00010011",
"home2E": "00110011",
"home2F": "00010011",
"ON1D": "00010000",
"ON1E": "00110000",
"ON1F": "00010000",
"ON2D": "00011111",
"ON2E": "00111111",
"ON2F": "00011111",
}
for i in range(3):
sleep(0.01)
i2c.writeto(lcd_address, bytes([int(commands["setD"], 2)]), stop=False) # Function set
sleep(0.01)
i2c.writeto(lcd_address, bytes([int(commands["setE"], 2)]), stop=False)
sleep(0.01)
i2c.writeto(lcd_address, bytes([int(commands["setF"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["4bitD"], 2)]), stop=False) # Set 4-bit mode
i2c.writeto(lcd_address, bytes([int(commands["4bitE"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["4bitF"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["clear1D"], 2)]), stop=False) # Clear the screen
i2c.writeto(lcd_address, bytes([int(commands["clear1E"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["clear1F"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["clear2D"], 2)]), stop=False) # Clear the screen
i2c.writeto(lcd_address, bytes([int(commands["clear2E"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["clear2F"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["home1D"], 2)]), stop=False) # Set cursor to (0,0)
i2c.writeto(lcd_address, bytes([int(commands["home1E"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["home1F"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["home2D"], 2)]), stop=False) # Set cursor to (0,0)
i2c.writeto(lcd_address, bytes([int(commands["home2E"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["home2F"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["ON1D"], 2)]), stop=False) # Display on
i2c.writeto(lcd_address, bytes([int(commands["ON1E"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["ON1F"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["ON2D"], 2)]), stop=False) # Display on
i2c.writeto(lcd_address, bytes([int(commands["ON2E"], 2)]), stop=False)
i2c.writeto(lcd_address, bytes([int(commands["ON2F"], 2)]), stop=True)
def write(lcd_address): # Writing the letter 'H' to the lcd
char1D = "10010100"
char1E = "10110100"
char1F = "10010100"
char2D = "10010100"
char2E = "10110100"
char2F = "10010100"
i2c.writeto(lcd_address, bytes([int(char1D, 2)]), stop=False)
sleep(0.02)
i2c.writeto(lcd_address, bytes([int(char1E, 2)]), stop=False)
sleep(0.02)
i2c.writeto(lcd_address, bytes([int(char1F, 2)]), stop=False)
sleep(0.02)
i2c.writeto(lcd_address, bytes([int(char2D, 2)]), stop=False)
sleep(0.02)
i2c.writeto(lcd_address, bytes([int(char2E, 2)]), stop=False)
sleep(0.02)
i2c.writeto(lcd_address, bytes([int(char2F, 2)]), stop=True)
So now I'm sending 3 bytes per command but the display is still empty. It only flashes once when the code runs.
AI: You need to control the E pin correctly.
As per the HD44780 datasheet, the RS, RW, and DATA pins must be stable for some time before E is set high, and they must stay stable also some time after E is set low.
Assuming you want to send something to the display, you basically have to send 3 bytes:
data pattern that sets RS, RW, DATA and backlight states, but with E low
data pattern that sets RS, RW, DATA and backlight states, but with E high
data pattern that sets RS, RW, DATA and backlight states, but with E low
All those three bytes are identical in respect to other bits, except the E bit.
Some implementations skip the third byte, assuming that RS, RW and DATA don't need to be stable before setting E high or after setting E low, but I don't recommend that. Even though it appears to work, it violates the HD44780 bus cycle timing, and some HD44780-compatible clones may not tolerate it.
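A minimal way to generate that three-byte sequence (the E bit position 0x04 below is an assumption matching a common PCF8574 backpack wiring; for the bit order assumed in your code, E would be 0x20 — match it to your actual wiring):

```python
E_BIT = 0x04  # assumed position of E in the expander byte

def e_strobe_bytes(pattern):
    # Given one expander byte with RS, RW, backlight and data bits set
    # and E low, return the three bytes to transmit: data stable with
    # E low, then E high to latch, then E low again.
    low = pattern & ~E_BIT & 0xFF
    return [low, low | E_BIT, low]

# Example: strobe the pattern 0b00110000 (all other bits held constant)
for b in e_strobe_bytes(0b00110000):
    print(f"{b:08b}")
```

Each command or data nibble then becomes one call to this helper, with a small delay after the third byte for the LCD to execute it.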
You also don't have the required delays between commands. The sequence to go into 4-bit mode is especially important to get right, so it can reliably go from any weird state to 8-bit mode and back to 4-bit mode correctly. Also the clear and home commands take nearly 2 milliseconds to execute, so the display will not accept any commands while it is already executing a command.
Finally, if the bit order really is [RS, RW, E, LED, D4, D5, D6, D7], the data bits are in wrong order in your code, as your code assumes order of [RS, RW, E, LED, D7, D6, D5, D4]. You need to be sure which of those is sent first and last, and in which order they are. You can test that with simply trying to toggle the LED, either 0x10 lights it up, or 0x08. Standard modules are in the following order, MSB to LSB : [D7, D6, D5, D4, LED, E, RW, RS], so if you have one, you need to swap all bits left and right. |
H: Is the increase of impedance over a small hatch area on a FlexPCB negligible?
I have a FlexPCB with 4 layers and total thickness of about 200 µm. The second layer is a ground plane that is solid copper over the entire FlexPCB. To increase flexibility of the FlexPCB, I want to hatch it in the bending areas with copper (e.g. hatch width 0.2 mm and gap width of 0.3 mm). The hatch area has a width of about 2 mm and the PCB height is about 10 mm. The signals (green, layer 3) going over the hatched ground plane (blue, layer 2) have typically a trace width of 80 µm, trace spacing of 80 µm and a rise time of 6ns or slower.
I believe that hatching increases impedance and crosstalk in a totally negligible way. Does anybody have a rule of thumb or can explain an easy setup for a simulation (I have no experience for such a simulation)? https://electronics.stackexchange.com/a/568355/276690 speaks about an increase of 1.2 of the impedance?
AI: I believe that hatching increases impedance and crosstalk in a totally
negligible way. Does anybody have a rule of thumb or can explain an
easy setup for a simulation (I have no experience for such a
simulation)?
Hatching decreases capacitance for signals running in a layer adjacent to the hatch. Why? Because there is less copper. It also somewhat increases inductance on the hatched plane, again because there is less copper. Without going into a lot of calculation: if you remove 70% of a reference ground plane from under a trace, you also decrease the capacitance to that plane by roughly 70% (fringing fields modify the exact area-to-capacitance relationship).
If the signals are not high speed and not transmission lines then it probably wouldn't be a big deal to change it to hatched, you may have slightly more electric field lines that go through the hatches, but I'd imagine that the cross capacitance between two traces would be similar, however, a continuous ground plane would be better than a hatched one.
If the hatched layer is a reference plane for a transmission line, the decrease in capacitance and increase in inductance will change the characteristics of the transmission line. (At that point it will be really difficult to find the impedance, because you would need a new set of equations that approximate a hatched reference plane for a microstrip; in addition, the hatched layer has a different area of copper, and hence capacitance, between the trace and the plane.)
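A rough sanity check of the impedance shift uses Z₀ = √(L/C): scaling C by a factor c and L by a factor l scales Z₀ by √(l/c). The ratios below are illustrative guesses, not measured values for any real stackup:

```python
import math

def hatched_z0(z0_solid, c_ratio, l_ratio):
    # Z0 = sqrt(L/C), so scaling C and L scales Z0 by sqrt(l_ratio/c_ratio).
    return z0_solid * math.sqrt(l_ratio / c_ratio)

# If hatching drops the capacitance to ~70% and leaves L unchanged,
# a 50-ohm line drifts to roughly 60 ohm:
print(f"{hatched_z0(50, 0.7, 1.0):.1f} ohm")
```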
If I were designing this flat flex cable, I would not use a hatched plane in a high-speed design. There would also need to be another good reason to use a hatched plane (thermal, or for bending), because a hatched plane is worse for most designs (reduced capacitance to the ground/reference plane and increased inductance). I would recommend not using it if at all possible. |
H: What actually causes the distortion in the output wave due to an Astable Multivibrator?
I've noticed that the output square wave from an astable multivibrator using two BJTs has its edges rounded during the rise time (just as seen in fig. 1). Across resources, I've come to know that this is due to the initial current drawn by the capacitor, or more precisely the way the capacitors get charged up. I'm aware of how an astable multivibrator works, but I'm unable to understand how the charging of the capacitor leads to a change in the rise time of the square wave. Which part of the multivibrator is actually causing the nuisance here? Also, how can we show the dependence of the rise time on that part mathematically? It would be really helpful if someone could give a nice explanation with the mathematical workout of the dependence.
AI: Let's assume that the output is Vout, at the junction of R4, C2 and the collector of Q2.
Start by understanding why the falling edge of Vout is so rapid. There's a relatively slowly rising voltage at Q2's base, as C1 charges, but the transition from off to on occurs only within a very small "window", perhaps between 0.6V and 0.7V. Thus in spite of the sedate pace of the rising base voltage, the transistor switches on comparatively quickly, as its base reaches 0.6V or so.
At that point, Q2 "instantly" becomes a dead short between collector and emitter, pulling Vout immediately to 0V.
However, when Q2 eventually switches off again, similarly rapidly, the transition of Vout is not rapid at all. This is because there's no such "dead short" to Vcc like there was to ground.
In the absence of a transistor to pull Vout hard and quickly towards Vcc, as Q2 did to pull it low, there's only R4 to perform that function. In comparison to the "dead short yank" of Q2 to ground, R4 is merely a "gentle tug" back up!
The voltage across C2 changes only as fast as R4 permits, because together they form the familiar resistor capacitor series pair (don't forget, Q2 is off, and can be treated as if it were not even there). The voltage Vout, at their junction, will rise exactly as you would expect from such a pair, exponentially, with a time constant of R4 x C2, which is what you see in the graph.
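To put a number on it: the 10%–90% rise time of that exponential is τ·ln 9 ≈ 2.2·R4·C2. A quick sketch with illustrative component values (not taken from any particular schematic):

```python
import math

def rise_time_10_90(r, c):
    # V(t) = Vcc*(1 - exp(-t/(r*c))); solving for the 10% and 90%
    # crossings gives t_rise = r*c*ln(9) ~= 2.2*r*c.
    return r * c * math.log(9)

# e.g. R4 = 10 kohm, C2 = 10 nF -> tau = 100 us, rise ~ 220 us
print(f"{rise_time_10_90(10e3, 10e-9) * 1e6:.0f} us")
```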
I'd like to point out that this is somewhat simplified, because there is also Q1's base and R3 to consider, but the dominant players in the rise of potential Vout, at Q2's collector, are C2 and R4.
In summary then, Q2 switches on rapidly, dropping Vout to zero more-or-less instantly. But when Q2 switches off, it is the combination of C2 and R4 that determines the rise of Vout, which is approximately the classic exponential curve with time constant R4 x C2. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.