H: ADC Resolution vs ENOB
We know that ADC 1LSB=Vref/(2^N) where N is the resolution of the ADC.
We know that ADC Effective number of bits (ENOB) is always less than the resolution "N".
When calculating 1 LSB, do I need to take 1 LSB = Vref/(2^ENOB), or 1 LSB = Vref/(2^N)?
AI: The LSB {not to be confused with the least-significant byte or bit} is the average analog step between consecutive binary-encoded voltages. Thus the full analog scale is used, not midscale, and not ENOB.
\$\text{LSB} = V_{ref}/2^N\$ for an N-bit ADC, where Vref is the maximum unipolar analog value.
For a bipolar voltage input, one uses the ADC's full-scale range (FSR): \$\text{LSB} = \text{FSR}/2^N\$
Other info:
Alternatively, SNR = 6.02N + 1.76dB so for N=8 bits, ideal SNR~50dB.
The ENOB indicates the binary number of bits in resolution, limited by noise.
ENOB expresses the dynamic range over asynchronous noise, distortion and ADC error sources, measured in binary (base-2 exponent) bits, and is a best-case figure near full scale. It must be de-rated by the analog (full/actual ratio minus 1) for much smaller signals and is further reduced by ADC errors including non-monotonicity, gain and offset errors, and noise.
For analog data, ENOB is defined as \$\text{ENOB}=\dfrac{\text{SINAD}-1.76\text{ dB}}{6.02\text{ dB}}\$ where the 1.76 dB term (\$=10\log 1.5\$) is the ideal ADC quantization error and \$\text{SINAD}=10\log \left(\dfrac{\text{(signal+noise+distortion) power}}{\text{(noise+distortion) power}}\right)\$
ENOB helps to define the logarithmic dynamic range.
the threshold for marginal speech is 12 dB or 4:1 or ENOB = 2
an 8 bit ADC has a practical max ENOB=7.1 typ.
a 12 bit ADC has a ENOB = 10.5 typ.
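As a quick numeric check of the relations above, here is a minimal Python sketch (the 5 V reference, 8-bit example and the 44.5 dB SINAD are assumed illustration values; the SINAD was picked to reproduce the 7.1-bit figure):
import math

def lsb(vref, n_bits):
    # Ideal LSB size for an N-bit ADC with unipolar reference Vref
    return vref / 2**n_bits

def ideal_snr_db(n_bits):
    # Quantization-limited SNR for a full-scale sine input
    return 6.02 * n_bits + 1.76

def enob(sinad_db):
    # Effective number of bits from a measured SINAD (in dB)
    return (sinad_db - 1.76) / 6.02

print(lsb(5.0, 8))        # 0.0195 V, about 19.5 mV per code
print(ideal_snr_db(8))    # 49.92 dB ideal for 8 bits
print(enob(44.5))         # ~7.1 bits, matching the typical 8-bit figure above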
When using a serial data link with synchronous noise from overshoot, or with frequency-dependent or data-pattern-dependent pre-compensation/equalization, the link term LENOB is used, which uses Vpp levels instead.
thus \$\text{LENOB=log}_2\text{(SNR[Vpp])} \$
The LENOB for data varies from 3 bits (best case) to 4 bits (typical) for a \$10^{-12}\$ BER, but this is the minimum threshold value; it rises with data rate, since a higher SNR is needed to maintain the same error rate in a channel, and it depends on other variables as well.
|
H: Why does the real part of a load change as we look through a transmission line?
As we move from an imaginary (reactive) load towards the generator through a transmission line, we can see that the real and imaginary parts of the impedance change. Yet an ideal transmission line is composed of inductors and capacitors, which can alter only the imaginary part of a load; how does the real part of the load change?
Once the transmission line becomes lossy, the constant-gamma circle on the Smith chart gradually shrinks toward the center until it meets the characteristic impedance. Can anyone explain how this happens?
Thanks in Advance!
AI: If you assume the load is purely reactive, its real part is 0 (or infinite) and the reflection coefficient is 100%. The real part stays 0 when you rotate through the line length on the Smith chart; you will never meet other real-part circles with that reflection coefficient.
If you talk about general loads, not only pure reactances, you have a big false assumption: "an ideal transmission line is made of inductors and capacitors. It can alter only the imaginary part of a load."
Take a resistor, say 100 Ohms. Put it in parallel with an inductor whose impedance in the operating frequency is j100 Ohms. Calculate the total impedance phasor. It's about (50 + j50) Ohms; the real part has decreased substantially.
=> A resistor seen through a reactive network generally doesn't keep its real part.
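You can verify this quickly; a minimal Python sketch of the 100 Ohm / j100 Ohm example above:
# Parallel combination of a 100-ohm resistor and a +j100-ohm inductive reactance
R = 100 + 0j
XL = 0 + 100j

Z = (R * XL) / (R + XL)   # parallel impedance
print(Z)                  # (50+50j) ohms: the real part has dropped from 100 to 50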
I think you didn't see that the LC model of an ideal transmission line places elements in parallel with the load as well as in series with it, although the visible metallic wire is only in series with the load.
The other question: Obviously nothing reflects when everything is dissipated in the line.
|
H: Step Down Switching Regulator irregularity with ripple on output
Using the L5973D- I have followed the Schematic as the datasheet instructs, but I have tried two different inductors. I have found problems with other systems on the board with the one inductor (744 561 15), such as Microchip while running in debug mode. The Schottky Diode (D1) used is the STPS340-SMB instead of the STPS2L25.
The requirements of the inductor are 15uH and 3A. So the first inductor selected was the DO3316P-153 (as recommended by the L5973D datasheet). The frequency out of the L5973D at pin one (circled in red on the schematic) is given in the figure below:
When the inductor is replaced with the more affordable inductor (744 561 15), the frequency at pin one is given in the figure below:
With the 744 561 15, there is this added ripple before the square wave, can anyone explain what is happening here?
Is the older inductor (DO3316P-153) filtering or is the new inductor(744 561 15) adding in a delay?
EDIT:
After running the board for more than an hour, the system keeps resetting itself. At this point I am not sure what is going on; any suggestions?
AI: It seems your load current is small enough that the buck converter enters discontinuous mode. In basic terms this means the inductor current falls to zero before the end of the cycle, hence you get some ringing before the next cycle starts.
Usually it's recommended you'd want to stay near the transition between continuous and discontinuous mode in normal operation. Theoretically this leads into optimal component sizes and costs but practically it's very much a moving target.
Wikipedia gives reasonably good explanation of what's going on there:
https://en.wikipedia.org/wiki/Buck_converter#Discontinuous_mode
To push the SMPS back to CCM, you'd want a larger inductance value. However changing the inductor size may also call for a smaller output capacitor. And then you need to re-adjust the feedback loop compensation as well.
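If you want to estimate where that CCM/DCM boundary sits, here is a rough Python sketch; the input/output voltages and the 250 kHz switching frequency are assumptions (check the L5973D datasheet and your actual rails):
# Rough CCM/DCM boundary estimate for a buck converter
Vin = 12.0      # assumed input voltage, V
Vout = 3.3      # assumed output voltage, V
fsw = 250e3     # assumed switching frequency, Hz
L = 15e-6       # inductance from the question, H

D = Vout / Vin                             # ideal (lossless) duty cycle
ripple = (Vin - Vout) * D / (L * fsw)      # peak-to-peak inductor ripple current, A
I_boundary = ripple / 2                    # below this load current the converter enters DCM
print(D, ripple, I_boundary)               # ~0.275, ~0.64 A, ~0.32 A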
If you're not comfortable with all that, I'd suggest:
Leave it alone, DCM is not a problem as such, or
Use the Texas Instruments WEBENCH tool to design the SMPS circuit for you; it does practically everything. Kids these days.
|
H: How can I use two TP4056 with four Li-ion batteries, but single load?
Need to connect and charge two Li-ion batteries in series.
I'm not sure if it is possible to connect and use two Li-ion batteries and TP4056 like this. Are they going to be balanced? Do they need to?
AI: Thus your problem now looks like this.
"No Way, Jose".
Yes, they need to be balanced, to prevent accelerated aging from charge imbalance caused by cell mismatch, high currents and deep discharges.
It is analogous to thermal runaway in series incandescent light bulbs, where the faster, weaker bulb reaches near full brightness while the other bulb stays cool.
Another example is mismatched-Vf shunt LEDs sharing current. If one LED cannot support the current of two or more in parallel, one can burn out open-circuit, as thermal runaway lowers the Vf of the hotter LED, making it draw ever more current.
There is a much better chance of reaching the maximum rated charge cycles before end of life if the cells are balanced, and a much lower chance if they are not!
|
H: Component searching
I'm not sure this is the right way to ask this, but I'm giving it a try anyways. I'm essentially looking for a component that using a clock signal can open and close 4 other channels.
Clock | 0 1 0 1 0 1 0 1
S0 | 0 1 0 0 0 0 0 0
S1 | 0 0 0 1 0 0 0 0
S2 | 0 0 0 0 0 1 0 0
S3 | 0 0 0 0 0 0 0 1
0 and 1 represents low and high respectively.
On the rising edge, the output specified above is set high; it goes low again on the falling edge.
What is the name of this component, if it even exists? I'm looking for a DIP-8 alternative if that is possible.
Kind Regards
AI: For a particular pattern like you have shown, there may be a solution if you can go a little larger than an 8-pin DIP. Essentially your outputs sequence in turn. This is actually a lot like an "LED chaser" type of circuit, often built around a CD4017. You have the added complication that you need to disable all the outputs when the clock is low, and unfortunately this chip does not have an output blank or enable signal. So in addition to being too large, you would need to follow it with something else. It's possible there might be some legacy function that fits your needs, but most older logic ICs have larger than 8-pin packages.
For the arbitrary form of this problem, realistically, if the clock speed is not too fast, you may want to use a microcontroller. You might be able to do this with say an ATtiny85 (or 25 or 45) at a rate of up to maybe 500 thousand or even a million times a second. A main advantage here is that you fit in the 8-pin DIP requirement (5 usable I/Os without disabling the reset fuse which makes reprogramming difficult).
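To make the required behaviour concrete, whichever way you implement it, here is a small Python simulation of the truth table in the question (the pattern simply repeats every four rising edges):
# Simulate the requested 4-output sequencer: on each rising clock edge the next
# output (cycling S0..S3) is selected, and all outputs are low while the clock is low.
clock = [0, 1, 0, 1, 0, 1, 0, 1]
active = None            # selected output; none before the first rising edge
edge_count = 0
prev = 0
for clk in clock:
    if clk == 1 and prev == 0:        # rising clock edge: advance to the next output
        active = edge_count % 4
        edge_count += 1
    outputs = [1 if (clk == 1 and i == active) else 0 for i in range(4)]
    print(clk, outputs)               # matches the S0..S3 table in the question
    prev = clk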
To go fast you'd likely want some small piece of programmable logic, historically a PAL or GAL, today a CPLD. But these aren't available in your desired 8-pin DIP. You used to be able to get PALs in 16 pin DIPs, but CPLD's are pretty much all surface mount.
|
H: Current source with a grounded load and floating power supply
The following circuit represents a basic op-amp current source and can be found in the Art of Electronics 3rd edition on page 228.
Figure 4.10. Basic op-amp current source (floating load). \$V_{in}\$ might come from a voltage divider, or it could be a signal that varies with time.
\$I_{load}\$ is easy to calculate in this situation.
The problem with this circuit is that the load is floating (neither side is grounded.)
This problem is fixed by connecting the load to GND and with a floating power supply as shown in the following circuit.
Figure 4.11. Current source with grounded load and floating power supply.
I can't understand how \$I_{load}\$ is calculated here.
I know that any node in the circuit can be used as a reference node (ground node), which, I think, implies that the potentials of the rest of the nodes are calculated relative to the new ground node.
When I try to analyse the circuit something goes terribly wrong.
By the expression for \$I_{load}\$ (present in the last figure), \$ V_{in}= (V_+R_1)/(R_1+R_2)\$, but looking at the circuit \$V_{in}=v_+=v_-= V_{GND} \$
Shouldn't GND be seen as 0 V to the other nodes?
What am I missing?
AI: A caveat to this circuit is that the current does not flow to ground, which is hard to wrap your head around. Because the circuit is floating and power is not sourced from ground (like it normally is), the current returns to the source through R3. All of the current through the load goes back through R (R3 in the picture) to return to the source. The \$V_{GND}\$ (or COM) node is negative relative to ground.
The ground in this circuit merely is a point at which to analyze the circuit. If you wanted to you could also put this node at 1000V and analyze it, and the current would be the same through the load.
It's much easier to wrap your head around the circuit if you put the 0V node here, as you can easily spot the voltage divider for the load, and the current sense resistor R (or R3 in the pic below):
|
H: How can I tell if a MOSFET is enhancement-mode or depletion-mode?
Today, from ignorance I have fallen head-first into the world of MOSFET transistors. In my scramble to find some information on the MOSFET I will be using as a switch (HEXFET actually), I learned that MOSFETs in general come in two modes, enhancement mode, or depletion mode.
When I tried to find out which mode the IRF3710 was, from the datasheet, I found that it does not say (or maybe I need glasses). At this point I started searching to find how to tell the difference between the two modes. After some time I gathered that the schematic symbols differ:
Enhancement-mode MOSFET:
Depletion-mode MOSFET:
The difference being the highlighted part below.
Three separate lines means enhancement-mode (left) and one solid line means depletion mode (right).
So, my question: Is this the only way to tell which is which, or is there a quicker way to tell (by markings on the device maybe?). Also, are there symbols out there which use a different method to differentiate between them?
I am asking here for my own learning, but also for other people who might have the same experience as me. I did not find that much helpful info in my searching.
AI: Two things I want to add to the answers already given:
Don't trust the schematic symbol. You'll see the depletion-mode symbol used pretty often for an enhancement-mode part because it's easier to draw. (The symbols suggested on the manufacturer datasheets won't make this error, but some random application circuit schematic from the web is not trustworthy at all)
How to tell from the datasheet whether the part is enhancement mode or depletion mode. For an n-channel FET, if the \$V_{gs({\rm th})}\$ is greater than 0, then it's an enhancement mode device. If \$V_{gs({\rm th})} < 0\$ it's a depletion mode device. For p-channel, it's the opposite: \$V_{gs({\rm th})} < 0\$ means enhancement mode, \$V_{gs({\rm th})} > 0\$ means depletion mode.
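As a small illustration of that rule, here is a Python sketch; the roughly +2 V to +4 V threshold quoted for the IRF3710 is from memory, so treat it as approximate:
def mosfet_mode(channel, vgs_th):
    # channel: 'n' or 'p'; vgs_th: gate-source threshold voltage from the datasheet
    if channel == 'n':
        return 'enhancement' if vgs_th > 0 else 'depletion'
    else:  # p-channel: the signs are reversed
        return 'enhancement' if vgs_th < 0 else 'depletion'

print(mosfet_mode('n', 2.0))    # IRF3710: Vgs(th) roughly +2 V to +4 V -> enhancement
print(mosfet_mode('n', -2.0))   # an n-FET with a negative threshold would be depletion mode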
|
H: Using Diodes instead of a Relay
While working on a switching setup for an automotive cooling fan, I came up with a question. Can diodes be used instead of a relay when dealing with two switching sources? The main source would be the car's accessory line, which is on when the key is turned on. The other source is a dash-mounted switch that would let the fan be used when the key is off. Here are the two wiring diagrams.
AI: Logically, yes, but in your circuit with no relays, note the diodes are in series with the motor, so that 1) the motor will see 12 V minus the diode's voltage drop (say half a volt); 2) that voltage drop multiplied by the motor's current (2 A, is it?) will generate heat (say 1 W) within the diode; and 3) 1N-series diodes are only rated for about one amp (at cooler temperatures).
|
H: Having trouble with a frequency to voltage converter for universal frequency input
I'm having trouble with a frequency converter. The data sheet is given here.
I set the converter for universal frequency input and voltage output. These can be configured by DIP switches. The diagram from the data sheet is given as below:
And here is how I do the wiring:
I use a 12 V power supply, and as an input I couple a 0-10 V pulse train from a function generator to terminals 2 and 3 above. The pulse train is exactly 0..10 V and I followed the advice in section 8.7 on page 20 to set the pulse properly.
I checked the DIP configuration too many times and I'm wondering whether my wiring is wrong. Just to make sure here is my DIP config:
For SW1: ON ON OFF OFF ON ON
For SW2: OFF ON OFF OFF OFF ON OFF ON OFF OFF
According to the data-sheet if I configure the DIP switches as above I should get 0 to 5V voltage output mapped from 1Hz to 300Hz frequency input range from a pulse train input.
But I get a constant 17.6 V output on the voltmeter whatever the input frequency is; it is constant even when I don't hook up any input. The RED indicator flashes at 2.8 Hz, which the datasheet says (on page 21) happens on a "Sensor fault or invalid DIP switch configuration".
I spent several hours but no luck. Is the connection from the function generator to the terminals 2 and 3 correct?
AI: For SW1: ON ON OFF OFF ON ON
You have configured "Universal Input Frequency to 0-5V". That's a relay input.
I would have used the PNP input, but the manual is definitely not clear on this.
(as expected from Phoenix Contact)
For SW2: OFF ON OFF OFF OFF ON OFF ON OFF OFF
1 to 300 Hz, software configuration active.
You've set all the DIP switches, but you've forgotten to enable DIP 10, which makes the DIP switches the active configuration.
New info from the comment: with this sensor (Adolf Thies 4.3351.00.000) the output is push-pull, similar to the function generator. However, you need to change the range significantly for 1082 Hz at 50 m/s (about 22 Hz per m/s).
Try DIP2: [ON ON] [OFF ON ON ON ON OFF] [OFF ON]
10 Hz to 1200 Hz DIP configured, 10 Hz is 1 BFT, 1200 Hz is beyond 12 BFT
|
H: How can guitar pickups hot leads be wired in parallel without causing current problems?
I am looking up wiring diagrams for electric guitar pickups and noticing that the common method for combining two pickup signals is just to have a switch that can optionally have both the hot leads shorted together and routed to the volume potentiometer which then goes to the output
The pickups use a magnet and a vibrating metal string to generate an electrical signal. Sometimes guitars can combine multiple pickups using a switch that changes which pickup(s) get shorted together and routed to the volume potentiometer.
I understand the way to combine them in series, having the ground from one connect to the hot from another, and then the second one is grounded and thus when its hot generates voltage it artificially lifts up the "ground" reference for the first one so the total output that goes to the volume knob is the combined signals. This seems like a way to create a passive mixer signal without the need for an op-amp. Incidentally this is how humbuckers work in normal operation (non-coil-split mode) with their magnets being anti-aligned so it cancels noise.
What I don't understand is the parallel case: if each generates a different voltage and they are shorted together, how does that not cause massive current problems? If the hot of the first is generating 1 volt and the second generates 2 volts and their hot leads are tied together, wouldn't that cause a massive current (given the low resistance of copper wire) between the hot of the first and the hot of the second?
Someone please explain how shorting these together mixes the signals.
AI: The amount of power (electrical energy per unit time) generated by these pickups is so small that there are never any "massive currents" present. The currents are simply too small to cause any issue.
When pickups are connected in parallel, the current generated in one pickup will divide itself across the other pickups, but again the energy transferred that way is quite small. Too small, even, to make other strings vibrate.
You would not want to do the same with for example a couple of generators in a power plant. There the power is such that two generators in parallel can work against each other and cause damage. But for guitar pickups the power involved is simply too small.
To explain how the signals are mixed when the pickups are in parallel I will use this schematic representation:
simulate this circuit – Schematic created using CircuitLab
I model the pickup element as a Voltage source (the EMF induced by playing the strings), some series inductance (you can ignore this) and some series resistance.
The voltage / current produced by V1 will divide itself across all other elements in the circuit. The same is true for V2. Electrical engineers learn how to do the calculations on this in their first year (I did and I hope that's still true). For this calculation you can determine the resulting voltages of V1 and V2 individually and then later add them up.
What the resulting voltage going into the amplifier will be does not matter much, fact is that the signals are simply added up and end up "everywhere" in the circuit.
Maybe the thing that you missed is that each pickup element has some series resistance; this makes them "play nice" when they are connected in parallel. In the power plant example I mentioned, these resistances are extremely low, and that means the voltage sources do not "play nice", as currents can get out of hand.
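Here is a small numeric version of that superposition (a Python sketch; the 1 V and 2 V sources come from the question, while the ~6 kOhm series resistances and the 500 kOhm volume pot are assumed typical values, and pickup inductance is ignored):
# Two pickups in parallel, each modelled as an EMF with a series resistance,
# driving a load (the volume pot). Solve by superposition, one source at a time.
V1, R1 = 1.0, 6e3      # assumed pickup 1: 1 V EMF, 6 kOhm series resistance
V2, R2 = 2.0, 6e3      # assumed pickup 2: 2 V EMF, 6 kOhm series resistance
RL = 500e3             # assumed 500 kOhm volume pot

par = lambda a, b: a * b / (a + b)

def contribution(v, rs, r_rest):
    # Output contribution of one source, with the other source replaced by a short
    return v * r_rest / (rs + r_rest)

vout = contribution(V1, R1, par(R2, RL)) + contribution(V2, R2, par(R1, RL))
print(vout)   # ~1.5 V: the two signals simply average; the circulating current stays tiny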
|
H: Small signal equivalent circuit - MOSFET
Let's consider the following amplifier circuit:
Now, if we would analyze small signal operation, we could represent the circuit with small signal equivalent:
The part that bothers me is the PMOS representation in my workbook. Shouldn't the voltage controlled current source of the PMOS transistor (index 2 in the drawing) be rotated so that the current goes from its source to drain?
AI: For this circuit:
simulate this circuit – Schematic created using CircuitLab
The voltage gain follows from KCL at the output node:
\$ \large \frac{V_{OUT}}{r_x} + g_{m1}*V_{IN} - g_{m2}*(-V_{OUT}) = 0\$
Where \$r_x = r_{o1}||r_{o2}\$
$$\frac{V_{OUT}}{V_{IN}} = - \frac{g_{m1}*r_x}{1 + g_{m2}*r_x} = - g_{m1}*\left(r_{o1}||r_{o2}||\frac{1}{g_{m2}}\right) $$
And now let us analyze this circuit:
simulate this circuit
As you can see I used the N-MOS small-signal equivalent circuit for the P-MOS this time.
\$ \large \frac{V_{OUT}}{r_x} + g_{m1}*V_{IN} + g_{m2}*V_{OUT} = 0\$
And the voltage gain is exactly the same as before.
\$ \frac{V_{OUT}}{V_{IN}} = - \frac{g_{m1}*r_x}{1 + g_{m2}*r_x} = - g_{m1}*\left(r_{o1}||r_{o2}||\frac{1}{g_{m2}}\right)\$
So to conclude it may sound strange at first glance but P-MOS circuit small signal model is identical to N-MOS.
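A quick numeric sanity check of the two gain expressions, a Python sketch with arbitrary assumed values:
# Verify that -gm1*rx/(1 + gm2*rx) equals -gm1*(ro1 || ro2 || 1/gm2)
gm1, gm2 = 1e-3, 2e-3          # assumed transconductances, S
ro1, ro2 = 100e3, 100e3        # assumed output resistances, ohms

par = lambda a, b: a * b / (a + b)
rx = par(ro1, ro2)

gain_a = -gm1 * rx / (1 + gm2 * rx)
gain_b = -gm1 * par(par(ro1, ro2), 1 / gm2)
print(gain_a, gain_b)          # both about -0.495, as expected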
We have the same situation with BJTs:
Why are the current directions in the hybrid-\$\pi\$ model for BJT the same for both NPN and PNP?
Applying hybrid-pi model of an npn-BJT to a pnp BJT in small signal analysis
|
H: Do resistors consume reactive power?
I encountered this problem in one of my books, where the problem asked to find the power loss in the core of an induction motor. I worked through the problem and found the current as in the picture, but it came to my notice that if I find the power across this load, then find the current that passes through the resistor of interest, and finally multiply current by voltage, the resulting power is a complex number. But my intuition tells me that this is a big NO. As is known, resistors cannot store energy, therefore a complex power doesn't make much sense.
Now, I found in similar problems that people multiply the magnitudes of the phasors of each component to find the power across the resistor, and they simply ignore the phase angle as if it were zero. But I am unable to understand why they just assume that a phase does not exist.
Can someone explain it to me in depth?
AI: Do resistors consume reactive power?
No, resistors only consume active power. Also reactive power is not "consumed" - it is stored or returned.
Any "complex current" that would flow through the resistor would generate a voltage across the resistor proportional to this current. The power consumed by the resistor would be \$|I|.R\$ where \$|I|\$ is the magnitude of the current. The current represented as a complex number also provides information about the phase of the current versus the voltage at the frequency that is not mentionned in your example (it is already used to determine the reactance of the inductor).
The example
The complex current in the example must be split in its real part and its reactive part. The real part flows through the resistor and the reactive part through the inductor.
This can be explained by the fact that the current flowing through the resistor does not have a phase shift with regards to the voltage across the resistor terminals - and this voltage is the source voltage itself.
So the power consumed by the resistor is \$63.6405\mathrm{W}\$.
The example, if the resistor and the inductor would be in series
If the resistor and the inductor had been in series, then the current would not be split this way, but it would still have a non-null imaginary part because the current phase is shifted with respect to the source voltage. The resistor would have consumed the power indicated in the first part of this answer, i.e. \$1340.60\mathrm{W}\$. In fact, in that case, we only need the magnitude of the current; its phase would be irrelevant.
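To make the split concrete, here is a small numeric sketch (assumed values, not the numbers behind the book's figure): a 230 V source across a 100 Ohm resistor in parallel with a j50 Ohm inductive reactance.
# Parallel R-L load: the real part of the current flows in the resistor,
# the imaginary part in the inductor. Only the resistor dissipates power.
V = 230 + 0j            # assumed source voltage phasor (reference), V rms
R = 100.0               # assumed resistance, ohms
XL = 50j                # assumed inductive reactance, ohms

I_R = V / R             # in phase with V
I_L = V / XL            # lags V by 90 degrees
I = I_R + I_L           # total (complex) current drawn from the source

P = abs(I_R)**2 * R     # active power, dissipated in the resistor only
S = V * I.conjugate()   # complex power: real part = P, imaginary part = reactive power
print(I, P, S)          # I = (2.3-4.6j) A, P = 529 W, S = (529+1058j) VA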
Side note
It could be argued that the question is incompletely stated, as we do not really know what the reference for the current is. We assume that the phase is referenced to the output voltage of the black box and not to the black box's internal ideal voltage source sitting behind the black box's internal source impedance (which can be complex itself).
If the internal voltage is the reference, then we need more information about the black box output voltage phase (a complex voltage like \$5\mathrm{V}+j0.5\mathrm{V}\$).
|
H: Low-pass Filter with zero drift amplifier
Can an auto-zero or chopper amplifier be used in a low-pass filter to remove a high frequency carrier? I am trying to design an active low-pass filter with a 1kHz cutoff and very low DC drift, offset, 1/f noise. The noise needs to be less than 20μVrms in 1kHz BW and -80dB attenuation is needed at 100kHz. I intend to use a Sallen-key 2nd order Butterworth with a single op-amp.
My concern is whether I need to worry about my 100kHz carrier (that I am trying to extinguish) beating against the chopping amplifier modulator to produce more low-frequency artifacts in my 1kHz pass-band, or that higher frequency intermodulation products could alias when I digitize. Would I be better off using a passive filter and dealing with any residual aliasing of the 100kHz carrier digitally?
My power supply will be ±5V and the output will drive a single ended ƩΔ ADC.
AI: Can an auto-zero or chopper amplifier be used in a low-pass filter to
remove a high frequency carrier?
Yes. If you really are using it as a low-pass filter, the fact that it's a chopper doesn't interfere with the signal, but only if the auto-zero/chopper has a unity-gain bandwidth greater than 100 kHz (or whatever your frequency of interest is that might conflict with the chopping frequency). The reason I say yes is that almost all chopper/auto-zero amplifiers have a unity-gain bandwidth in the MHz range, and the chopping frequency is higher than the unity-gain crossover point.
The worst case scenario is the chopper/auto zero amplifier is chopping at 100kHz (or a harmonic). Because these amplifiers modulate and then demodulate signals, if the demodulation happens at the same frequency as the input bad things happen. Usually the chopping frequency is stated in the datasheet, or can be seen in the noise diagram of the amplifier.
I have had issues with this: I had an auto-zero amplifier that was picking up RF in the range of 70-700 MHz and it was causing a shift (ever so small) in the amplifier's output. I put an RF low-pass filter on the front end and the problem went away; later on Analog Devices released this part with the EMI filter built in:
So if you are having problems with a chopper op amp, it may be useful to use a passive low pass filter followed by a buffer amplifier at unity gain (or thereabouts).
Would I be better off using a passive filter and dealing with any
residual aliasing of the 100kHz carrier digitally?
There are two options:
An active filter, making sure that any high frequencies to be filtered are within the bandwidth of the amplifier (and thus well away from the chopping frequency).
A passive filter followed by an impedance buffer (this would be the only way to go if you had a frequency at the input that was at the chopping frequency).
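For the filter itself, here is a minimal component-selection sketch, assuming the common unity-gain, equal-resistor Sallen-Key topology (any other topology needs its own equations):
import math

# Unity-gain Sallen-Key low-pass with equal resistors R1 = R2 = R (assumed topology).
# Then Q = 0.5*sqrt(C1/C2), so Butterworth (Q = 0.707) needs C1 = 2*C2,
# and f0 = 1/(2*pi*R*sqrt(C1*C2)).
f0 = 1e3                  # desired cutoff, Hz
C2 = 10e-9                # pick a convenient capacitor to ground, F
C1 = 2 * C2               # feedback capacitor for Q = 1/sqrt(2)
R = 1 / (2 * math.pi * f0 * math.sqrt(C1 * C2))

atten_100k = 40 * math.log10(100e3 / f0)   # 2nd order rolls off ~40 dB/decade above f0
print(C1, R, atten_100k)  # 20 nF, ~11.25 kOhm, ~80 dB at 100 kHz (asymptotic estimate)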
|
H: Are there any problems with driving a single 5 mm LED off of a single or parallel 18650 batteries?
I want to drive one or two small (5 or 10 mm) white LEDs in parallel using one or more 18650 3.7 V Li-ion cells. The main reason I want to use the 18650s is that I already have them, they are rechargeable, and they have a pretty good mAh rating.
Is this as simple as putting in a resistor and a switch and I'm done? What about when the battery is drained and can only output 2.x volts? Will the light simply not light at that point, and we charge the batteries?
AI: I Want to drive one or two small ( 5mm or 10mm ) LEDs in parallel
is this as simple as putting a resister and a switch and i'm done?
It is even simpler. Put a resistor and LEDs in series and you are done.
What about when the battery is drained and can only output 2.x volts?
If a Li-Ion battery is drained to 2 V it is as good as dead. You should never let it discharge below 3 V, ideally recharging at 3.3-3.4 V.
This means the cell voltage will most likely stay higher than double the forward voltage of most LEDs, allowing you to connect diodes in series.
If your LED forward voltage is above 1.7 V then you can use LEDs in parallel, but you need an individual resistor connected in series with each LED.
So, that takes care of the LEDs. Now, if there is no very strong reason to use two batteries in parallel, I'd recommend staying with one 18650 cell. There are many potential problems with using Li-Ion cells in parallel. You can search this site and find literally hundreds of questions and answers on this one, if you are curious. Otherwise just stick to a single cell.
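For completeness, a rough sizing sketch for the series resistor (assumed values: a white LED of about 3.1 V forward voltage at 20 mA, one 18650 cell between 4.2 V full and roughly 3.3 V near empty):
# Series resistor for one white LED on one Li-Ion cell (assumed values)
Vf = 3.1          # assumed white LED forward voltage at 20 mA, V
I_led = 0.020     # target LED current, A
V_full = 4.2      # fully charged cell, V

R = (V_full - Vf) / I_led
print(R)                       # ~55 ohm -> use a 56 ohm standard value
print((3.3 - Vf) / 56)         # near-empty cell (3.3 V): only ~3.6 mA, so the LED just dims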
|
H: how to adjust drain bias to keep power dissipation constant (HEMT)
I am reading a chapter in a thesis about using HEMT (high electron mobility transistor) as a cryogenic amplifier. The measurement scheme is to use the gate voltage fluctuation's influence on the drain source current.
In order to use a HEMT as an amplifier, it is necessary to find out what the gain factor is going to be, for a given circuit. To that end, the paper shows plots of \$I_D\$ as a function of \$V_{DS}\$ (image a) and as a function of \$V_{GS}\$ (image b) for fixed power dissipation values of \$100\,\mu\mathrm{W}\$, \$10\,\mu\mathrm{W}\$, and \$1\,\mu\mathrm{W}\$.
My question is about plot (b). The gain factor from the fitting of plot (b) is only valid, to my knowledge, if you can somehow adjust the drain voltage bias such that, throughout the operation of the HEMT, \$I_D V_{DS} = 100\,\mu\mathrm{W}\$.
Based on the plot, it appears that if you can somehow adjust the drain voltage bias to keep the power dissipation of the HEMT constant, you have a nice well defined gain factor over the entire range of the gate source voltage.
It is difficult for me to imagine a feedback circuit where the drain voltage bias does change to keep the power dissipation constant though. Based on the circuit shown below, it doesn't seem to have such a feature.
My question is, given the fact that you are measuring the drain voltage and the drain current has to come from the source of drain voltage bias, is there a scheme to change the drain bias voltage to keep the power dissipation constant?
AI: Presumably the signals to be measured are very small with respect the gate bias voltage. So there is a single operating point for the HEMT transistor (and the input and output signals represent small-signal variations about that point).
So you pick an operating point (in this case you pick Id * Vds) and a load resistor. Say you pick 100uW and 510\$\Omega\$.
For every Vds from 50mV to 450mV there is a corresponding Id. So you can pick, say 200mV Vds. So Id must be 500uA. So the gate bias voltage is approximately -0.575V. You set the drain bias voltage to exactly 200mV + 255mV + 255mV (2x 510 ohms), and then trim the gate bias voltage so that the DC drain voltage (VD) reads 200mV.
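In numbers (a short Python check of the example above):
# Constant-dissipation operating point: P = Id * Vds = 100 uW, two 510-ohm resistors in the drain
P = 100e-6            # target dissipation in the HEMT, W
Vds = 0.200           # chosen drain-source voltage, V
R = 510.0             # each drain bias resistor, ohms

Id = P / Vds                      # required drain current
V_bias = Vds + 2 * Id * R         # drain supply needed behind the two resistors
print(Id, V_bias)                 # 500 uA and 0.71 V (200 mV + 255 mV + 255 mV)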
P.S. If that thesis is available online it would be interesting to read.
|
H: Battery saving self-poweroff circuit
I'm designing a battery powered device, which must be turned on by an external 3.3V impulse from a comparator output (TLV3201), and for the sake of power efficiency I want the MCU to power itself off when it has finished processing the event. For the self-poweroff circuit I've found the following solution:
The P-channel MOSFET is off by default, its gate being pulled to VCC via R1, but when the EN pin is pulled high, either by the comparator (push-pull) output or by the MCU (after it has got power), the P-channel switch turns on to supply power to the MCU.
The problem is that the high signal from the comparator may stay high for many seconds, even after the MCU has done its work, and I want the MCU to be able to shut down the power regardless of the comparator output (after it has initially enabled the switch), until the next time the comparator output rises. How could that be done?
Edit:
Added schematic of the comparator circuit.
The power supply is an unregulated Li-Ion battery (~3.7V). The 3.3V regulator is to be placed after the P-channel switch as part of the Load.
The comparator is used to sense the resistance connected between IN1 and IN2, which may switch between ~1.5 kOhm and ~5.5 kOhm. 1.5 kOhm resistance results in a high level output of the comparator, and 5.5 kOhm is low level (0V) output.
simulate this circuit – Schematic created using CircuitLab
Edit 2: The MCU will be a ESP-07 Wi-Fi module (ESP8266). I am currently researching its deep sleep capabilities and power consumption.
AI: For this application I'd recommend using the TPL5110 timer. It is designed to drive a MOSFET gate, so you simply put it in place of R1 and Q1.
It will wake up your MCU periodically (the longest delay is 2 hrs), but will switch power OFF as soon as the MCU sends back the DONE signal.
The great thing about it, though, is that it has a manual ON input (also doubling as the delay setting). So, you can connect your comparator output to that pin and the TPL5110 will power up the MCU when necessary. Again, when the MCU is done processing the event it signals DONE and shuts down.
Basically the MCU is powered up either at certain intervals or on the comparator signal, whichever comes first.
UPDATE
Following your comment I see that the timer won't work for you. Here are some alternative solutions. Whether they work or not depends on the specific MCU you are using.
Most MCUs nowadays have sleep modes consuming power comparable to that of external nano-power timers. For example, an ATmega328P can run on a button cell for a year. The interrupt requests that wake up the MCU can usually be configured for either edge of a signal. So, by connecting the comparator to an MCU interrupt pin and programming it for the rising edge you can solve the problem and get rid of the power MOSFET circuit altogether.
Another option is a variant of the above, but using an analog comparator input instead of an interrupt. Some MCUs have comparator inputs with interrupts, so you can get rid of the external comparator too and use the internal interrupt for wake-up. Power consumption is usually higher in these sleep modes, though, so check the datasheet to see if it is acceptable to you.
Finally, you can use a combination of power switching and sleep mode. Basically, use your existing circuit, but with a tiny code change. After your MCU is done processing and has switched the MOSFET control line OFF, add commands to enter sleep mode (the deepest possible, with every module shut down; on some MCUs the sleep current is as low as 0.1 μA, or 400 times lower than your comparator!). If by this time the comparator is already below threshold then the MCU will be powered down immediately and your system is back to its original state. However, if the comparator is still high then the MCU will temporarily go into deep sleep and won't consume much power.
UPDATE 2
Here is another option you might want to try. It reduces power consumption to under 3 μA by switching to power-off mode instead of deep sleep. Essentially, this is option 3 above, but instead of using an external MOSFET it uses the internal power switch of the ESP8266.
It works like this:
- Initially R2 pulls CH_PD low, keeping the MCU in power-off mode.
- When the wake-up signal comes in through D1 it powers on the MCU. The MCU immediately drives GPIO0 high to keep itself powered through R1.
- After the MCU is done processing the event it switches GPIO0 low and goes into deep sleep mode. If the comparator input is already low, CH_PD goes down and the MCU powers off. If the comparator is still high, the MCU will sleep until it goes low. (A minimal firmware sketch of this sequence follows below.)
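A firmware-side sketch of that sequence, assuming MicroPython on the ESP-07; the processing routine is a hypothetical placeholder and the GPIO0-to-CH_PD wiring is taken from the schematic above, so adapt it to your actual SDK:
# Runs at every power-up: latch power on, do the work, then release the latch and sleep.
from machine import Pin, deepsleep

def process_event():
    pass                    # hypothetical placeholder for the real event handling

hold = Pin(0, Pin.OUT)      # GPIO0 keeps CH_PD high through R1 (assumed wiring)
hold.value(1)               # latch power on as soon as we boot

process_event()

hold.value(0)               # release the latch; if the comparator is already low, power drops here
deepsleep()                 # otherwise sleep until the comparator falls and CH_PD follows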
|
H: The best way to attach a heatsink to a SMD component?
There's a SoC I want to throw a heatsink on. What's the best way to do this for maximum heat transfer and longevity?
I know of thermal pads, thermal adhesive, and thermal paste -- but when it comes to the standard black plastic packages, I have no clue which of the options would be best.
AI: I don't know about maximum efficiency but there are little stick-on heatsinks that can reduce the junction temperature of BGA and LQFP packages.
I got like 90 of them (2 sizes) from Aliexpress for about $7 US shipped, but you can also find them at distributors. Chances are good the adhesive might be better in terms of longevity and thermal conductivity from the disties, but $$.
There are other options such as using a high tech thermal pad between the chip and a finned case, but there isn't so much advantage in that for the relatively low power levels and home/office environment. Chances are your SoC will be obsolete before it fails if you keep the junction temperature reasonable.
As others have said, most of the heat is typically conducted out through a thermal pad (with copious thermal vias on your board) to planes on the board. Keeping the board cool can be done with a fan, or perhaps something attached to the opposite side of the board (insulation is really important or something could be damaged).
|
H: GPIO state of MCU (AVR) with no supply voltage
In general:
What state do the GPIOs of an MCU (an AVR in my case) have when there is no supply voltage and the MCU is "off"? Am I right that the pins are floating, as there isn't any "reference" for the logic level at all?
In special:
I have a bilateral switch (BS) shifting a variable voltage (provided by a DAC) for an analog device (AD). Both the BS and the DAC are controlled by a MCU. As the BS's output will be floating when its input is not set high by the MCU, I need a pulldown at the output to define the low-state. I've attached a simplified circuitry illustrating the scenario.
If the MCU is running, everything is clear: the BS will supply the AD with an analog signal as long as gpio1 is high.
But if (for a reason) the 3.3V power supply is not there, I still need to have a defined voltage at the AD's Ain pin. So if my supposition, that the gpios of an inactive MCU are floating, is correct, the following circuitry should meet my requirements:
MCU active: gpio2 is configured as low output -> strong pulldown (1k & 10k in parallel equals 0k9) outweighs 100k which leads to a negligible voltage divider. Ain is pulled to low voltage (not ground exactly, but close to it - 0.2V)
MCU inactive: gpio2 is floating -> weaker pulldown (10k only) and a no-longer-negligible voltage divider (100k:10k), leading to a defined voltage of 24 V * 10k/110k ≈ 2.2 V.
I'm quite sure that the circuitry as such makes sense, but if it works or not completely depends on the behaviour of gpios when there is no supply voltage for the MCU.
Can this damage the MCU? (I guess not, as long as R1 is high enough)
AI: The simplest way to imagine the MCU pins with no power is like this:
Basically I would say that the input is floating, but if you apply any voltage to that pin, current will flow through the upper protection diode and begin parasitically powering the circuit.
What's going to happen in your circuit is that once the voltage rises past the diode threshold (0.6 V), current is going to start flowing through that 1k, and the voltage isn't going to get much higher than that. Without doing the math, I'd say around 0.8 V.
I'm not sure what exactly the purpose of the gpio2 pulldown is, but you could hook it up like this instead:
simulate this circuit – Schematic created using CircuitLab
|
H: How does an ATX power supply turn off?
It's well-known that for powering up a standard PC power supply, its PS_ON# line must be momentarily pulled low (i.e. connected to GND).
But how does the motherboard actually turn it off?
AI: The motherboard doesn't turn the power supply on and off. It switches it between a low-power standby state, where only the 5V_STANDBY line supplies power, and full output.
What controls these two states is the (perhaps mis-named) PS_ON# line. Logic on the motherboard, powered by the 5V_STANDBY line, holds PS_ON# low continuously to enable the power supply's main output, and releases it high to disable the main output.
|
H: common mode choke and derating vs current
I have a CMC that is rated at about 1 kΩ (impedance) at 100 MHz and 3 A.
However my circuit uses only 1 A. Do I need to derate the inductance since less current is flowing through it?
AI: If your common mode choke is passing a regular circuit current of 1 amp then that will be a differential current i.e. it flows into one terminal on one side and returns in the opposite direction through the equivalent terminal on the other winding: -
Under these circumstances the currents are in opposite directions and the fluxes cancel out hence there is no need to derate the inductance. In any case, inductance will only be reduced (usually) when the common mode current reaches a significant level: -
Picture source.
|
H: What is the meaning of this line? "Memory-mapped, cached view of external QSPI flash. The cache is specified as 32 KB with 4-way associativity."
Memory-mapped, cached view of external QSPI flash. The cache is specified as 32 KB with
4-way associativity.
Does it mean that my external QSPI flash is only 32 KB, or that it has been memory-mapped onto 32 KB?
Does "cached view" mean that repeated reads will get the data cached within the processor and not actually access the external memory?
AI: The confusion probably comes from the formulation "memory-mapped, cached view". The fact that it is memory-mapped has nothing to do with the fact that it is cached. The size of the memory mapping is independent of the size of the cache.
So, I'll break it down for you:
Memory-mapped
Means you can access the contents of the external memory directly by reading/writing the main memory address space (at some specified address). It also typically implies that, if the external memory contains executable code, you can execute this code simply by branching to it: you don't need to copy the code into internal memory before branching. This is achieved by the MCU, which internally translates any access to this part of the memory into the required QSPI commands to read the external flash on the fly. At this point, it does not imply that there is a cache.
Cached
Means that data read from this part of the memory will be placed in a smaller, intermediate memory area (not accessible directly), which the MCU will look up first when the external memory has to be accessed again. This way, when the same data is accessed twice, the external memory does not need to be accessed again: the data is retrieved from the cache, which is much faster.
Indeed, this is very useful for memory-mapped QSPI. The QSPI interface is much slower than the CPU: any read/write operation has to be translated into commands sent serially on a few signal lines, which adds a lot of overhead. To reduce this overhead, you'll typically try to read multiple bytes for each QSPI access, and store them in a cache so that, if the next read addresses the neighboring byte (which is likely), you have it ready.
32kB
Here, this is the size of the cache, not the size of the memory map. The size of the memory-map will typically be big enough for the whole size of the external memory (check the detailed specs).
4-way associativity
This is how the cache is internally organized. The cache is much smaller than the external memory. The naive way to implement a cache would be to store all recently accessed bytes along with their corresponding addresses and, when subsequent accesses are made, check the whole cache to see whether an existing byte has an address corresponding to the accessed address. This is extremely inefficient. For each byte, you would have to store the address, which multiplies the required cache size by five (assuming 32-bit addresses: for each byte, you need the data byte value plus four bytes for the corresponding address), and, for each access, you would need to compare the address against 32768 possible entries to check whether it is already in the cache or not.
So, here is how it is done:
First, the cache is organized in lines of N bytes (e.g. 16 or 32 bytes - Note that the cache line size is not specified in your case). You store the addresses for the whole cache lines, not for each byte, which saves a lot of space.
Then, not all possible addresses can be stored anywhere in the cache. When accessing the cache, you take a part of the address, and this will give you the index of a "cache set". Each cache set can contain 4 cache lines (in your case). When checking if the data is in the cache, this means that you only have these 4 cache lines addresses to check because you know that, if it is there, it will necessarily be in this set. This will reduce the complexity of the cache structure a great deal, at the expense of less flexibility in storing the data (meaning a possibly lower cache hit rate, depending on the memory access patterns).
This is what the cache associativity is: The number of cache lines per set. This gives an indication of the likeliness you can retrieve data in the cache if it has been read before. The bigger the associativity, the better, but it makes the cache more complex and more expensive to manufacture. And at some point, the benefits are not even worth it. 4 is not bad (which is why they are proud to advertise it).
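To make the numbers concrete, here is a small sketch of how an address maps onto such a cache; the 32-byte line size is an assumption, since it is not specified in your case:
# 32 KB, 4-way set-associative cache with an assumed 32-byte line size
cache_size = 32 * 1024
ways = 4
line_size = 32                               # assumed; not given in the datasheet
num_sets = cache_size // (ways * line_size)  # 256 sets

def decompose(addr):
    offset = addr % line_size                # byte within the cache line
    set_index = (addr // line_size) % num_sets
    tag = addr // (line_size * num_sets)     # what is stored and compared on lookup
    return tag, set_index, offset

print(num_sets)              # 256: only 4 tag comparisons per access instead of thousands
print(decompose(0x12345))    # example QSPI-mapped offset -> (tag, set, offset)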
|
H: Compare two numbers of four bits
I have two numbers both with four bits (a3 to a0 and b3 to b0 reading from left to right) and I would like to find out if a is bigger than b.
I have drawn a solution of mine and a short mathematical explanation of it, it would be amazing if somebody can tell me if this is correct.
So my idea is to use full adders (which are named VA), where I have a, b and the carry from the previous stage as inputs. The carry of the current adder and q_{i} are its outputs.
I negate all bits of b and set C0 to one to add one, so I have the two's complement, and then I add both numbers.
One of my sources says that a > b when sign = 0 and z = 0. Z is 1 when all q_{i} are zero. And sign means (according to my sources) that q3 is zero.
So in the end I came up with combining it the way I did in the photo.
Am I right with this?
Why is there no use of c3?
AI: The fundamental error with this, and the reason you are not using c3, is that you need to represent the two's-complement result, with its range of +15 to -15, as a five-bit quantity. This means that c3 carries the fifth bit (q4) of the output, i.e. the sign information.
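Here is a small Python check of that idea: perform the two's-complement subtraction exactly as the adder chain does (invert b, set c0 = 1), and note that a > b exactly when the final carry c3 is 1 and the 4-bit result q is non-zero, which is the sign = 0, z = 0 condition from your sources:
# 4-bit comparison a > b via two's-complement subtraction with full adders
def greater(a, b):
    s = a + ((~b) & 0xF) + 1      # a + (one's complement of b) + c0, as the adder chain computes
    c3 = (s >> 4) & 1             # carry out of the last adder: the fifth bit of the result
    q = s & 0xF                   # q3..q0
    return c3 == 1 and q != 0     # a >= b (carry set) and a != b (result non-zero)

# exhaustive check against the ordinary comparison
assert all(greater(a, b) == (a > b) for a in range(16) for b in range(16))
print("all 256 cases match")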
|
H: Analysis of BJT transistor - is a linear analysis possible?
Since semiconductors aren't passive components, can I realistically derive analytic solutions for a circuit with BJTs? I know this may seem like a very naive question, but am I on the right track with this sort of analysis, or do I need more powerful numerical modelling even for basic circuits? e.g., the responses for single transistor amplifiers, etc.
I'm trying to teach myself more about circuit analysis - but after the 'RLC' passives, I don't want to end up on the wrong track. I understand this is bordering on opinions, but I really want to start on the right track.
I should mention, I've used 'rule-of-thumb' stuff for constant current, biasing etc. But I'm not entirely satisfied with this approach.
Thanks for the current (ha ha) feedback. Using some of the 'hobbyist' rules of thumb, I've found it easy to achieve switching (saturation) modes and biasing, giving consistent results with different small-signal transistors, which seemed counterintuitive and really surprised me! I really need something like a first-principles primer that doesn't gloss over the complexity or the range of uses. My multimeter isn't teaching me much...
AI: Transistors are fundamentally nonlinear devices, so no, a strictly linear analysis is not possible.
This is why circuit simulators like SPICE were developed. They take two approaches to make the problem tractable:
For small-signal analysis, you linearize the nonlinear equations around the operating point (a numeric sketch of this follows after this list). This works as long as the deviations of the signal from the operating point cause negligible errors relative to the nonlinear equations.
For large-signal transient analysis, you linearize the equations around the current state, and pick a time step that is small enough that the deviations from the nonlinear equations are negligible. If you find that currents or voltages are changing "too fast" (a settable parameter), you reduce the size of the time step.
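As a tiny illustration of the small-signal linearization idea, here is a sketch using the ideal exponential collector-current law with an assumed saturation current and room-temperature thermal voltage:
import math

Is = 1e-15          # assumed saturation current, A
Vt = 0.02585        # thermal voltage at ~300 K, V

ic = lambda vbe: Is * math.exp(vbe / Vt)      # nonlinear large-signal law

Vbe0 = 0.65                                   # chosen operating (bias) point, V
Ic0 = ic(Vbe0)
gm = Ic0 / Vt                                 # small-signal transconductance at that point

for dv in (0.001, 0.010, 0.060):              # 1 mV, 10 mV, 60 mV excursions
    exact = ic(Vbe0 + dv) - Ic0
    linear = gm * dv
    print(dv, exact, linear)                  # the linear estimate holds only for small dv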
|
H: Pipelined vs Low Latency implementation of cube of a number in Verilog
I was studying FPGA design and came across the terms throughput and latency. The author provided an example of a highly pipelined implementation of computing the cube of a number:
which apparently has the following logic diagram:
Then the author has tried to reduce 'latency' by writing the code like this:
which unrolls like this:
My question is: to me both implementations appear almost identical, so how are they different? I understand blocking and non-blocking assignment, but how do they cause a different logic diagram in this case? How does the second case decrease the latency of the circuit?
AI: Logically speaking, you're correct they are identical. However, the first implementation is clocked (note the always @(posedge clk) statements vs the @* statements in the second), so it has a minimum three-cycle latency that is determined from the clock period. The second implementation is calculated completely asynchronously, so its latency is dependent only on the speed of your technology (how fast the multiplications and routing delays resolve).
What this example illustrates is that many digital functions can be implemented in a heavily pipelined manner or in one long logic path, or somewhere in between. Which you choose can depend on many factors. The first implementation is less resource-efficient as it uses many extra registers to store the pipelined values from cycle to cycle. The second is more resource-efficient, but if you plop it in a synchronous system running at a high clock frequency it will be harder to close timing on, because it fits so much logic into one cycle.
Notably, both implementations have equivalent throughput. Both can handle a calculation every clock cycle, it's just that the first implementation will supply the output three clock cycles after receiving the corresponding inputs.
|
H: AVR bitwise operation: making a NOT with a microcontroller
I'm programming an ATtiny10. I have two spare pins, one of which is the external interrupt pin on PB2, and I want to write the inverse of that signal to PB1. I want to spare a component, since I already have the uC and some free pins.
So far, the best option I could come up with is:
PORTB = (PORTB & ~(1<<PB1)) | ((~PINB & (1<<PB2))>>1)
Used in the IRQ routine associated with the change of INT0.
Basically: clearing the PB1 bit, inverting PINB, masking it and shifting it to the right by one place.
The other option is using an if statement and branching the code, which doesn't really satisfy me.
Is there an easier way to do this that I cannot see?
AI: In assembly code it would just be a matter of testing a single bit and then setting another bit to 1 or 0, if you write C code close enough to that assembly code, then that might be what the compiler will turn it into.
Compilers are usually very clever. Perhaps your code is already being converted to that assembly code, or maybe even something that is more efficient.
If I were to personally do it then I wouldn't care because you are over-optimizing a non-existent problem. But for the sake of being pedantic, then I'd look into the compiled assembly code and verify that it is what I believe is the most efficient code. Or I would just write some C code that hopefully makes the compiler turn it into what I think is the most efficient code.
if(PINB&(1<<PB2)){//This is essentially a bit test instruction
PORTB&=~(1<<PB1);//This is essentially a bit clear instruction
}else{
PORTB|=(1<<PB1);//This is essentially a bit set instruction
}
The bottom line is that it probably doesn't matter. The total number of clocks spent on this minuscule part of your code might go from 10 to maybe 6, or maybe it already was 6 because the compiler is smart. All in all, a couple of clocks here, some there, doesn't really matter.
Here's the test that OP performed, might be interesting for future readers:
26:blink-prescaler-register.c **** PORTB = (PORTB & ~(1<<PB1)) | (((PINB ^ (1<<PB2)) & (1<<PB2))>>1);
107 .loc 1 26 0
108 0010 62B1 in r22,0x2
109 0012 50B1 in r21,0
110 0014 5095 com r21
111 0016 5470 andi r21,lo8(4)
112 0018 452F mov r20,r21
113 001a 50E0 ldi r21,0
114 001c 5595 asr r21
115 001e 4795 ror r20
116 0020 562F mov r21,r22
117 0022 5D7F andi r21,lo8(-3)
118 0024 452B or r20,r21
119 0026 42B9 out 0x2,r20
VERSUS:
27:blink-prescaler-register.c **** if(PORTB&(1<<PB2)){//This is essentially a bit test instruction
95 .loc 1 27 0
96 000a 129B sbis 0x2,2
97 000c 00C0 rjmp .L5
28:blink-prescaler-register.c **** PORTB&=~(1<<PB1);//This is essentially a bit clear instruction
98 .loc 1 28 0
99 000e 1198 cbi 0x2,1
100 0010 00C0 rjmp .L4
101 .L5:
29:blink-prescaler-register.c **** }else{
30:blink-prescaler-register.c **** PORTB|=(1<<PB1);//This is essentially a bit set instruction
102 .loc 1 30 0
103 0012 119A sbi 0x2,1
104 .L4:
Instruction-wise, my version is about half the size. I suppose the compiler wasn't smart enough in this particular case.
Also, from OP tests, propagation delay:
The blue trace is the input signal, assuming 5.1 V * 0.6 = 3.06 V for the high threshold voltage (triggered at 3.04 V because of scope settings).
The expression that OP proposed is CH1 (yellow, labeled), while the white trace is the response time with the IF statement.
The difference in time between the two is 1.64 µs, about 13 clock cycles if the uC internal oscillator is calibrated correctly. So with the expression the propagation delay is almost double.
|
H: Safety of Lipo batteries
I help organize a robotics competition in my home town. For the past 16 years, we provided each team with two 12 V battery-powered drills. They were cheap, easy to find and provided us with 4 batteries, two chargers and two motors. The battery chemistry was NiCd and gave us two easy-access ports.
They shorted them, they depleted them, they beat them to death and nothing spectacular ever happened. Only one time did one basically inflate.
However, NiCd 12 V drills aren't available anymore. The industry de facto standard is now 12 V or 18 V Lipo. From my understanding, Lipo cells are much trickier to use in terms of charge, discharge, C rate, etc. Furthermore, the battery connection header gives access to all the intermediate cells.
With that in mind, my questions are:
Am I right to assume that those batteries are more dangerous to use by error-prone teenagers?
Are there any ways to mitigate the risk?
Bonus: what kind of chemistry would you recommend instead? I know I can't ask for a specific product suggestion, but having a general idea of what chemistries are safe would be really helpful.
AI: 1) Yes, defective Lithium batteries may burn or explode, but properly handled and protected, they are safe.
2) Not knowing what exactly you're doing:
Rigid packaging, so people can mishandle the battery without it blowing up.
Prevent short circuits, deep discharging and overcharging with proper protective circuits.
Train your teenagers what happens if you treat a battery wrong. There are tons of videos on youtube. Don't do plain theory, but also add explosions and burning batteries. Teenagers (and adult engineers) love explosions.
3) I would recommend staying with Lithium Ion batteries. Best you can get on the market and as long as someone knows what they're doing, they're pretty safe.
|
H: Why do Flash memories have fewer P/E (Program/Erase) cycles (e.g. 100K) compared to EEPROM (e.g. 1000K)?
As per the datasheet of the EEPROM CAT24C02 (ON Semiconductor), the P/E cycle count is > 1000k, whereas as per the datasheet of the NAND flash MX30LF2G18AC-TI (MXIC), the P/E cycle count is > 100k.
I would like to know why there is such a big difference in P/E cycle count, because from my understanding of internet sources, EEPROM and Flash are similar types of programmable memories.
AI: Thanks for the example part numbers. The difference is essentially a design choice; both operate by tunneling electrons on and off a floating gate, but the two examples you give are of very different capacity and therefore density. The tiny part with the high number of erase cycles will have larger, more durable cells compared to the high-density part.
Note that as density goes up, the amount of wear on any individual cell would normally go down. It takes longer and longer to write all the cells in the device.
|
H: Official GPS protocol documentation?
Searching for "GPS protocol" reveals many sources for processed GPS data, e.g. NMEA or binary outputs of GPS units.
Where is the official documentation for the GPS satellite - receiver protocol? Or any interesting supplemental material that might explain it?
Context: I'm especially interested in learning about how (e.g.) the almanac and ephemeris data are transmitted.
AI: The official documentation for GPS is available online at:
https://www.gps.gov/technical/
The portions you are probably most interested in are the Interface Control Documents, especially:
IS-GPS-200, "NAVSTAR GPS Space Segment / Navigation User Segment Interfaces"
IS-GPS-800, "NAVSTAR GPS Space Segment / User Segment L1C Interface"
These documents jointly define the portions of GPS used for navigation and timing. If you're specifically interested in how the almanac and ephemeris are transmitted, that's covered in the second document.
|
H: Is it appropriate to use a 300V RMS CAT I rated oscilloscope to measure 230V RMS of a power inverter?
Yesterday I decided to look at the waveform of my portable power inverter. The inverter specs are the following:
Input: 12 V DC
Output: 230 V RMS
Rated power: 300 W
I powered my inverter from a bench power supply at 12 V (itself powered from a mains outlet) and set the current limit to 2 A. The inverter was then loaded with two 1 MΩ resistors in series (each rated at 1.5 W).
At first, I measured the output voltage by connecting the oscilloscope probe across the second resistor (therefore measuring half of the inverter output). Then I connected the probe across the whole series combination.
My oscilloscope is a Rigol DS1054Z. From the datasheet, I read
Maximum Input Voltage (1 MΩ) for analog channel: CAT I 300 Vrms, CAT II 100 Vrms.
According to my understanding of CAT ratings, the system I was probing should be CAT I so the measurement setup is appropriate. Is my reasoning correct? Furthermore, I'd like to understand if I may have damaged the oscilloscope somehow, maybe because of startup/poweroff transients from the inverter.
The probes I used were those shipped with the oscilloscope. I think I was using the x10.
EDIT:
I added the waveform of one of the measurements I did
AI: Damage doubtful. Your 10x probe divides the voltage down to 23V at the scope input. The waveform looks typical for a cheap inverter (modified sine wave).
|
H: How to avoid this voltage when I turn on the circuit?
What I want to do is a delay-ON, but when I press the switch a voltage passes through for a brief moment and turns the LED on momentarily.
How can I avoid this?
Should I put the switch somewhere else?
If you can make another suggestion for a circuit I would be happy to read it; what I need is for the LED to come on only if the switch is held on continuously for more than 13 seconds.
Thanks in advance.
AI: Add this circuit to your reset pin so it ignores random turn-on pulses. It delays any turn-on pulse by about 100 milliseconds. The time delay is ~\$1.1{\cdot}R{\cdot}C\$. D1 is used to make sure of a fast reset when power is OFF. To increase time delay increase R1 instead of C1. High values of C1 could damage D1 when power is switched OFF. Multiply R1 by 10 to get 10 times the delay, etc.
NOTE: I noticed in your diagram you had no bypass capacitors for the IC. It is mandatory to have bypass capacitors close to the IC so it is stable. C2 should be right at the power and ground pins of the IC. The 100 µF capacitor should be within 2 inches (50 mm) of the IC.
simulate this circuit – Schematic created using CircuitLab
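As a quick sanity check of the "about 100 milliseconds" figure, here is the \$1.1{\cdot}R{\cdot}C\$ formula evaluated for example values of R1 = 100 kΩ and C1 = 1 µF (assumed for illustration; use the actual values from the schematic):

# Delay estimate for the reset-delay RC network (component values assumed, not from the schematic).
R1 = 100e3   # ohms
C1 = 1e-6    # farads
t_delay = 1.1 * R1 * C1
print(f"delay ≈ {t_delay * 1e3:.0f} ms")   # ≈ 110 ms; multiply R1 by 10 for 10x the delay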
|
H: AD4610 - Is it possible to eliminate this single overshoot
I am trying to compensate the input capacitance of the AD4610. It does not matter whether it is actually over- or undershooting - there is always a spike:
I do not have this problem with the OPA4141 for example:
Is it possible to eliminate this single spike?
AI: The spike in the waveform originates from a peak in the AD4610's response at approximately 2 MHz (from what I gather from the scope trace). The transient response therefore overshoots, a.k.a. "ringing". The other op-amp apparently has a fairly flat frequency response and near-optimal damping of a step input.
I doubt very much that the overshoot can be fixed by simple capacitive compensation. You need to SPICE-model this op-amp with all board parasitics and find a more elaborate compensation to remove the 2 MHz resonance and flatten the frequency response, if you don't like the small overshoot.
It might even be impossible, since the original manufacturer's datasheet shows exactly the same kind of large-amplitude response:
|
H: Why the earth attracts charge and where does the charge go when it goes into ground?
Self-explanatory question but to add what I know, by this I will also know if I am right or wrong.
I read somewhere that Earth is positively charged, but isn't the charge state (positive/negative) of a body relative to a nearby body/object? If Earth is positively charged, why is that so? Is there any negative charge around it which helps Earth retain its positive charge?
Also, what happens to charges that flow to the ground ?
AI: If Earth is positively charged, why is it so, is there any negative
charge around it which helps Earth retain its positive charge ?
It's actually a little more complicated than that.
The earth around you locally is part of a giant circuit!
Thunderstorms generate a negative current which then flows back to the ground wherever there aren't thunderstorms. So it really depends on the weather. It also depends on a variety of other factors like solar storms and ionization of the upper atmosphere, but this gives you an idea of what goes on.
I couldn't find the graph, but the local electric field also changes when the sun shines, due to ionization. The electric field above contributes charge, and thunderstorms and other effects all contribute as well. All in all, the ground's net charge can be considered zero and used as a reference for all other charges.
Source: https://slideplayer.com/slide/6192933/
Also, what happens to charges that flow to the ground ?
The ground also has conductivity (it functions like a resistor), so it distributes the charges (as in a lightning bolt) or the really low currents that come from the air to the ground, keeping its potential the same. There isn't really a good way of determining the earth's total charge, as you would have to account for all of the factors. Just call it 0 V for now.
|
H: How can I use a comparator in a circuit?
I am learning about how to use Operational Amplifiers as comparators. I understand that comparators are used to compare an input signal to a reference voltage. However, I am having a hard time visualizing this.
Say that I wanted a comparator with a reference voltage of 0 V (therefore, the output will be either positive or negative depending on whether the input signal is positive or negative, respectively). How could I draw a circuit like this?
I hope my question is clear.
AI: Electronics Tutorials actually has a very good diagram of this.
As for the reference voltage itself, it depends on how that reference is generated (a resistor divider, a dedicated reference, or simply ground for a 0 V threshold), and its value can be worked out with Kirchhoff's laws, e.g. Kirchhoff's Current Law.
If you have any more questions, I invite you to check out that hyperlink I put at the beginning of my answer.
|
H: Soldering 0.4mm pitch QFN 48 chip
I have this QFN 48 chip which has the following design :
As can be observed, the pitch is 0.4 mm. It seems like the conventional QFN adapters won't fit, because the conventional ones have a pitch of 0.5 mm.
I should have read thoroughly before buying it. Do you have any suggestion to help me use this chip?
The chip is just a little small for the QFN adapter, so I tried soldering it anyway but it doesn't work.
The chip is 6 mm × 6 mm but the QFN adapter footprint is 6.07 mm × 6.07 mm. I have three different kinds of QFN48 adapters but they all have the same dimensions.
For your information, I don't have a hot air gun... would that be a problem?
AI: You need to be sure that you have the correct adapter board for your IC. "Close" just isn't good enough! The problem is not the overall dimensions, but the difference in pin pitch.
It looks like my usual sources don't make a breakout board for 0.4mm QFNs. Here is one for your IC from ProtoAdvantage. I haven't used that company's products, so I can't recommend them.
Regardless, I couldn't solder a 0.4mm QFN (with exposed pad) without either hot air or a reflow oven. And I've done a lot of soldering!
If you still want to try soldering, your simplest solution might be to actually make your own breakout board. Companies like OSHPark will produce PCBs very inexpensively, although you might have to wait a week or two.
|
H: 10 MHz reference distribution (daisy chaining vs. BNC tees vs something else)
10 MHz is the quasi-standard for reference clock in measurement equipment. Most boxes have "REF in" and "REF out" or "10 MHz in"/"10 MHz out".
In my case, I have a measurement setup consisting of an FSW, SMW200A and SMF100A (all top notch boxes from Rohde & Schwarz) as well as a Tektronix signal generator.
What is the best way to distribute the 10 MHz reference clock along >2 devices?
Daisy-chaining
Use BNC Tees (for 3 devices one tee, for 4 devices 2)
A combination: e.g. take the FSW as master, use a BNC tee to connect to the SMW and SMF via Ref In. Then take Ref Out from the SMW and use it for the Tektronix signal generator.
AI: The 'Best Way (TM)' can vary with what you are trying to achieve. Here are some considerations. I'm not going to use the term 'daisychain', as it can mean different things to different people.
Most 10MHz I/O are designed with roughly 50 ohm output impedance and high input impedance. Most have enough sensitivity to work well with a terminated link driven by 0dBm. Most drive at least 0dBm.
This means point to point links between 2 instruments can be connected with impunity. The far end of the cable will receive clean transitions, whether terminated or not, whether driven with square or sine.
A single Tee'd connection is different however. If terminated, all points on the cable get clean transitions. If left unterminated and driven with squarewave, then only the far end sees nice switching. All other points along the cable see the voltage rise to 50% as the outward wave passes, and it dwells there waiting for the reflection from the high impedance far end to continue up to 100%. This midpoint voltage is the worst possible place to wait, in terms of noise and even possible mis-clocking. A Tee'd connection must be terminated at the far end, with all intermediate nodes high impedance. This is less important if you know you have sinewave drive, and will always use sinewave.
Other considerations. You might want to look at the specifications for the internal standards, and choose the best to be the master. Make sure it's got a separate IN and OUT connection, some devices have a single I/O port. The reason? If you decide later to use a higher quality external reference, then plug it in here, and you don't have to re-cable your system.
Having a Tee on the back of the instruments may make cable identification easier when you are crawling around trying to change connections.
You might find that some inputs are just not sensitive enough for good clocking from some outputs, especially if terminated. If you get problems with one configuration, then try another. Better still, measure the sensitivities and output levels, and actually avoid any dodgy links.
Whatever you do, write down how you've connected them, and note which are high Z and which are terminated inputs. It will save grief when you come to add a new box, or exchange it for one with different ref I/O provisions.
If you run point to point links, then it saves Tees, and it may save thinking about terminations (but see below). If any box is switched off, then everything downstream will (may) not work, which is a good failure! If you run a Tee'd connection, then the system may still work with an intermediate box being off or failed. This box may be degrading the standard without you noticing.
If you run point to point links, each box has the option of passing the input straight to the output, or buffering the reference signal. If it passes it straight on, then electrically it's a Tee, and you will need to terminate the far end for it to work properly. If it buffers the signal, it will add noise, which will usually be at a level irrelevant to your measurements. Should you come to investigate an anomalous system close to carrier noise floor, revisit your reference distribution arrangements to make sure that's not it.
|
H: Can a photodiode tolerate a reverse bias beyond its specification
While searching the literature related to my experiment I came across this paper https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4424026/pdf/PP_PP201500119.pdf . On page 27, under the section "SRS microscopy", they describe the setup. The detector used is a Thorlabs Si photodiode https://www.thorlabs.com/drawings/5192555af7db78e1-C0DBB61A-E594-6195-A9756D77D1A106F9/FDS1010-SpecSheet.pdf, whose spec says the max reverse bias is 25 V. However, in the paper they say it is operated under a 70 V bias. Am I missing something? P.S.: The question was initially posted on Engineering beta but deleted.
AI: The manufacturer gives a maximum voltage that he expects all devices to survive.
When he measures the breakdown voltage in development, he will see a spread due to variations in the manufacturing process and the materials used. The breakdown voltage depends on diffusion depths (difficult to control), defect densities (difficult to control) and material purity (difficult to control), so there will be a significant spread of withstand voltages. If he put the maximum at 3 sigma, then he would guarantee that any user of 1000 devices would experience some failures, which is bad for reputation and business. So he will be more conservative than that.
With a conservative 25 V specification, it's easily plausible that some, even most, devices of this type could stand 70 V.
The overloaders' creed is 'turn it up until it catches fire, then back off a bit'. If you know what you're doing, in your own one-off experiment, then it's quite a reasonable way of extracting more performance from a cheap stock part. Just don't sell what you've made to unsuspecting 3rd parties, they deserve all components operated within specification.
So if it's for your use, go for it. Just be aware that if you want to build another one later, or repair that one, diodes to that performance may no longer be available. Manufacturers have been known to change their process to tighten spreads, so can leave the published specification as is, while building a cheaper device that still meets the original specification, but now fails your inflated expectation. If it's not too costly, it would be prudent to do a 'one time buy' of all the diodes you'll ever need from this high performing batch, you may never see them again.
|
H: Finding Transformer Leakage Impedance from S/C, O/C and Winding resistance tests
I've recently done an experiment on a 415/100 V transformer. Performing the S/C test on the HV side and the Open Circuit test on the LV side and finding winding resistance from applied DC voltage and current.
Therefore I now have results from the experiment and have solved Magnetizing reactance, core loss Resistance, Copper/winding resistance and Leakage reactance equivalent values, depending on which side I transfer the equivalent values to according to my turns ratio (a).
The issue I'm having is that I cannot find a way to split the equivalent leakage reactance into its 2 components (LV winding and HV winding).
Equivalent winding resistances are split into their 2 values from the measured applied DC voltage and current Test.
I was beginning to think that if I had measured the voltage across the short-circuited side in the S/C test (although it would be small, it would still exist), I would have had enough equations to solve for the LV-side leakage reactance; but because I did not measure that voltage (I assumed 0 volts), I cannot.
Any ideas? I'm stumped.
AI: You can split the overall leakage inductance into two parts proportional to the turns ratio squared. For instance, you know that if the transformer was 1:1 the leakage inductance would be shared equally between primary and secondary .
You also know that if the turns ratio was (say) 10:1 and you measured a 1 mH total leakage (referred to the primary) you could attribute all of this 1 mH to the secondary by dividing it by \$(10:1)^2\$. In other words 10 uH on the secondary is equivalent to 1 mH on the primary.
So, whatever your combined leakage is referred to the primary (let's assume it to be 1 mH), split it into two halves and keep 0.5 mH at the primary and transfer 5 uH to the secondary.
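A minimal sketch of that bookkeeping, using the 415/100 ratio of the transformer in the question and the 1 mH figure from the example above (the 1 mH is illustrative, not a measurement):

# Equal-split convention for leakage inductance.
a = 415 / 100            # turns ratio
L_total_hv = 1e-3        # total leakage referred to the HV side (example value)
L_leak_hv = L_total_hv / 2              # half assigned to the HV winding
L_leak_lv = (L_total_hv / 2) / a**2     # other half referred back to the LV winding
print(f"HV leakage ≈ {L_leak_hv*1e3:.2f} mH, LV leakage ≈ {L_leak_lv*1e6:.1f} µH")

Keep in mind the 50/50 split is a convention: the terminal measurements alone cannot tell you the true division between the windings.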
|
H: How can a pringles can act as WiFi booster?
How does a WiFi transceiver placed inside a Pringles can act as a WiFi signal booster? I saw this trick on the Mr. Robot TV series.
Do WiFi routers emit less power than GSM or LTE? I know, for example:
GSM - 2W (33dbm)
LTE - 200mW (23dbm)
AI: As PlasmaHH says, it doesn't boost the total power - it can't, it's a passive device. Instead it makes the antenna much more directional. Signal behind or to the side will be much weaker. Usually you'll use them in pairs to make a link between two buildings.
Very similar principles to the directional reflectors in flashlights or car headlights.
|
H: Heating a box (as a thermodynamics dummy)
I expect to build a dehydrator and am knowledgeable enough for most of it (fan, timer, power supply and so on), bar the most important part: the heating element.
If you're not familiar with electric dehydrators: you usually seek a 30-70°C (most of the time ~40°C) temperature control for a 5-12h drying timespan.
Problem is, I have absolutely no clue how to pick a heating element. The box will be made of wood (not sure about the door, will be either wood or glass) and its dimensions should be something like 30x30x50cm. The box is partially open so air can pass through (with the help of a fan).
Is there a fairly simple way to estimate how much heating power I need? To pick a heating resistor which will "roughly" do the job at the expected temperature?
AI: Basically a thermodynamics problem with thermal conductivity, area, thickness, ΔT
q = K·A·(T.hot − T.cold)/d [watts]
Where:
q = Conduction heat transfer (W)
K = Materials thermal conductivity (W/mK)
A = Cross sectional area (m²)
T.Hot = Higher temperature (°C)
T.Cold = Colder temperature (°C)
d = Material thickness (m)
Convert your area: the outside of a 30 × 30 × 50 cm box is 2(30 × 30) + 4(30 × 50) = 7,800 cm², i.e. about 0.78 m².
Look up K for wood and choose d.
Compute q in watts (a worked example follows below).
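Worked example, with assumed numbers for the wood (K ≈ 0.12 W/m·K for softwood/plywood and a 15 mm wall are guesses; substitute your own):

# Ballpark steady-state conduction loss through the box walls.
K = 0.12        # W/(m*K), assumed for softwood/plywood
A = 0.78        # m^2, outside area of the 30x30x50 cm box
d = 0.015       # m, assumed wall thickness
T_hot, T_cold = 40.0, 20.0   # inside / ambient, deg C
q = K * A * (T_hot - T_cold) / d
print(f"conduction loss ≈ {q:.0f} W")   # ≈ 125 W

Treat this as a lower bound on heater power, since the heater must also warm the air you deliberately blow through the box.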
But if this is too slow, adding forced air speeds up evaporation. Then the air flow volume has to be pre-heated and forced through the box at the desired temperature.
This is complicated by the efficiency with which the air temperature rises through the heater per unit volume of air. A radiator couples heat efficiently; a plain circular tube couples less well but allows higher flow rates.
I might suggest one or more 50 W power resistors mounted on a CPU heatsink (tap and screw, or clamp), powered from, say, an ATX PSU at 12 V, plus a CPU fan with variable speed control, the same for heat control, and temperature sensing so you can design a servo loop for the heat and fan speed. LM317s can also be mounted on the heatsink for each, so no heat is lost there.
simulate this circuit – Schematic created using CircuitLab
A cheap and dirty solution for fan control might be to run a 12 V fan at 5 V to reduce the flow rate, since the evaporation runs all day, as long as it is a good fan that starts at 4 V.
For compact units choose an old Pentium heatsink. ( free at most repair stores)
|
H: LM2776 is it normal to have the oscillations in "bursts"
Is it normal for the LM2776 to oscillate in "bursts":
The frequency of the oscillation is OK, but it comes in bursts of 8 oscillations every ~9 µs.
Is this normal? The board layout and the schematic are exactly the same as in the datasheet.
The LM2776 is giving me the correct negative voltage and I draw about 50mA from it
AI: Yes. See section 7.3.2 of the datasheet.
|
H: Are these two ways of doing it equivalent?
I have two signals sampled at, say, 256 Hz. I want to take the difference between these two signals and downsample it, say to 32 Hz.
Is it rigorously the same (in terms of the final result) to first take the difference between the two original signals and then downsample, OR to downsample the two original signals separately and then take the difference?
If they are not equivalent, which is better and why?
Thanks
AI: Basically, it's equivalent, because both the downsampling process and the differencing process are linear. (If you still want peace of mind, try doing it manually both ways with a few made-up sample values, as in the sketch below.) But if there are other oddities in your system not mentioned (especially non-linear effects) then it's not so simple. For example, if the samples aren't synchronized between the two signals then the answer is more complex.
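Here is that manual check, done once in code rather than by hand. The decimator below is a plain 8-sample block average (256 Hz → 32 Hz); it stands in for whatever linear anti-alias filter plus subsampling you actually use:

import numpy as np

rng = np.random.default_rng(0)
fs_in, fs_out = 256, 32
q = fs_in // fs_out                        # decimation factor = 8
n = fs_in * 10                             # 10 s of data
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)

def downsample(x, q):
    return x.reshape(-1, q).mean(axis=1)   # block average, one output per block

a = downsample(x1 - x2, q)                 # difference first, then downsample
b = downsample(x1, q) - downsample(x2, q)  # downsample first, then difference
print(np.max(np.abs(a - b)))               # ~1e-16, i.e. identical up to float rounding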
|
H: Stack overflow results
I have been facing a problem with corruption of the MCU flash memory content while the software is running. This corruption occurs randomly, and I've had an idea: could the flash memory corruption be the result of a stack overflow? Is that even possible? After all, the stack is in RAM. Thanks for any suggestions.
AI: Short answer, yes. If your processor support self-programming instructions, and you execute arbitrary memory as instructions due to stack overflow, anything the processor can do is possible. Usually self programming requires some conditions to be met, but specifics depend on the processor.
|
H: In USB 2.0, what's the maximum delay between SETUP and DATA0 packets?
In the setup stage of a USB control transfer, the host sends both the SETUP and the DATA0 packets. Assuming that the device has received the SETUP packet, how long should it wait for the DATA0 packet? I wasn't successful at finding that information in the USB 2.0 spec.
AI: Assuming that the device has received the SETUP packet, how long should it wait for the DATA0 packet?
There is no timeout here. The device must retain setup information until either
the control transfer has finished
another SETUP packet arrives (e.g. due to bus errors)
USB Reset/Disconnect is detected
|
H: How to stack protoboards solidly?
I have an enclosure and want to add some protoboards inside it. I can place two next to each other, but since I'm a bit out of space I think it's best to stack 3 of them with PCB spacers.
However, I will have some (not many) connections from one board to another (and second to third board).
I can use pin headers without soldering, pin headers with soldering, or terminal blocks. Terminal blocks cost a bit more space, but when soldering pin headers I can never replace anything without desoldering, even when I use DIP8 sockets for the necessary ICs. And I wonder whether unsoldered pin headers would be rigid enough for a device that will be moved around all the time and not always handled gently (except while in use).
What would be best:
Solder pin headers, and desolder them when something breaks?
Not soldering the pin headers at all?
Using terminal blocks?
In my case I will use only a few terminal blocks from one board to another, but it's more a generic question for in future use.
AI: Use male and female headers.
Female header with short pins:
(Image from grobotronics.com)
"Stackable" female header with long pins that go through to the next board (or can be simply cut to length).
(Image from sparkfun.com)
Observe that the stackable pins are sort of "flat", and due to their length, have some bend to them.
Male header:
(Image from solarbotics.com)
These headers are more "square" than the stackable variety, which makes them stay mated better, and they are less bendable.
Note that male headers can be cut or snapped off to the exact pin count desired. Female headers can be cut too, though it is not nearly as convenient (and will probably look bad).
When the headers are mated (the boards stacked), there should be some rigidity, depending on how many pins. Double-row headers will have even more rigidity. Having headers in two places on the same board offers even more rigidity (removing the possibility of the header bending/acting like a hinge).
Typically, they do not fall out on their own, even with vibration, though if put in a vehicle for years, you may want to secure them further (see below).
The end result when using the long-pinned stackable headers:
(Image from adafruit.com)
If that's not enough, you can combine the headers with standoffs between boards:
(Image from robotroom.com)
|
H: Audio amplifier alignment procedure questions
I have a couple of questions related to the alignment procedure for the amplifier of Klipsch SW-12, a powered subwoofer.
Its service manual (link below) says the following:
(page 3)
COMPRESSION
The compression circuit consists of U5, U6, and Q28 and is adjusted via R30. If adjusted correctly, this circuit will limit the amplitude of the signal so the minimum amount of distortion in the form of clipping will occur at the output of the amplifier. R34 is used to set the maximum output level for the amplifier so it doesn’t underamplify or clip. Next, the signal goes through a buffer amplifier U2, which provides a gain of 4. Then the signal goes on to the driver circuits.
The alignment procedure (page 5) is the following:
SW12-I AMPLIFIER ALIGNMENT PROCEDURE
Equipment required:
A signal source capable of supplying a 30Hz sine wave at 300mVrms.
A true RMS Voltmeter such as the Fluke 8060B.
A 16 ohm load rated for at least 200 Watts.
An oscilloscope (optional).
To totally align SW12 series 1 amplifiers, follow this procedure:
1. Disconnect power from the UUT (unit under test).
2. Connect the UUT (unit under test) to a 16 ohm load.
3. Connect a signal generator to the RCA input of the amp.
4. Set all controls on the UUT to their full clockwise position.
5. Set the signal generator for 30Hz and 60mVrms output.
(be sure and measure the output of the generator for 60mVrms.)
6. Connect the voltmeter leads to the output of the UUT.
7. Apply power to the UUT.
8. Adjust R34 for 33Vrms. Range is from 32.1 Vrms to 34 Vrms.
9. Change signal level to 1.5mVrms @ 30Hz .
10. Measure the output voltage. Should be between .94 Vrms and 1.06Vrms.
Adjust R30 if necessary.
* NOTE: Some interaction between adjustments is common. Recheck steps 8 and 10 for proper voltages.
11. Alignment of the UUT is now complete. Disconnect power and other
connections from the UUT.
Questions:
I don't have a 16-ohm load. I want to use an 800-watt space heater that is 17.5 ohm. What should the target voltages for R34 and R30 be? Do I need to change them? Or should they remain the same?
As to adjusting R30, to make the compression lower (to make less compression), should the output voltage be raised or lowered?
=================================
The service manual for SW-12 is here:
http://www.audiolabga.com/pdf/SW12-15%20I.pdf
Update (9/28/2018)
Accepted Tony EE rocketscientist's answer and built a 186.23-ohm resistor, consisting of 4 of the 16 750-ohm power resistors I had had since I started working on the subwoofer last year. Their not being exactly 750 ohms helped.
Their values and the combined (calculated) resistance:
773.5
738.0
734.4
735.2
186.2312444271
The space heater's resistance with/without the above resistor
AI: Rev A
it could be a 220Ohm 10 Watt resistor if it is stable. An audio power Amp is just a voltage source near 0 Ohms ( 10mOhm ish)
Old answer
You need 187 Ohm 20W to make 16 Ohms
Figure out what's easiest for you.
e.g. 100x 1/4W axial resistors on a "Digi-reel" shunted in parallel 18.7k Ohm
or one 20W resistor or whatever...
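A quick check of those numbers against the 33 Vrms level used in step 8 of the alignment procedure:

R_heater = 17.5      # ohms, the measured space heater
R_add = 187.0        # ohms, the suggested parallel resistor
V_test = 33.0        # Vrms target at the amp output

R_total = R_heater * R_add / (R_heater + R_add)
print(f"combined load ≈ {R_total:.1f} ohms")                      # ≈ 16.0 ohms
print(f"heater dissipation ≈ {V_test**2 / R_heater:.0f} W")       # ≈ 62 W, far below its 800 W rating
print(f"added resistor dissipation ≈ {V_test**2 / R_add:.1f} W")  # ≈ 5.8 W, so a 20 W part has margin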
For a 30 Hz, 1.5 mVrms input, the output "should be between .94 Vrms and 1.06 Vrms." Thus CW rotation increases the signal input, and more compression occurs in the log amp.
So turn it CCW slightly, toward the minimum level of this range.
Going outside the range, of course, increases the risk of cone/lead-wire overtravel stress or excess compression.
|
H: Inductive load shorts power supply
I had a hare-brained idea the other day and put together the attached circuit.
The problem: the instant I push the switch to place a voltage across the inductor, the power supply shuts down. Clearly this is some sort of emergency power-down, because I can start it right back up, but I don't understand why such a feature would be triggered. Is it just that the voltage drop is sudden enough to trip the supply? I would have thought that the series resistance would be enough to prevent something like this.
simulate this circuit – Schematic created using CircuitLab
AI: The above circuit in and of itself would not cause the 5 V ATX supply to drop out, as the maximum current this circuit could draw would be ~12 mA. The circuit as built may differ a little from the one listed above and may be shorting out; test it with a benchtop supply before using it on an ATX supply.
The ATX power supply probably has overcurrent and/or undervoltage protection. Undervoltage protection is the most likely: if the supply dips below a given voltage (say 4.7 V) for an extended amount of time, the supply shuts down.
If you're looking to dim the LED slowly and turn it on slowly, a MOSFET with an RC network on its gate might be best.
Like this:
|
H: Are civilian GPS signals cryptographically signed?
As far as I understand, receiving enough GPS signals at the same time enables one to deduce position and time.
I guess it is possible to use an offline receiver as a very precise clock.
If so, is it possible to flood this offline receiver with fake signals to make it believe it's 12:00:01 when it's actually 12:00:00?
More specifically, is it possible to design a receiver that can't be attacked this way?
If GPS signals (or Galileo's) are cryptographically signed, it's easy to reject non-signed signals by saving the public key of the satellites beforehand.
Are the GPS signals cryptographically signed?
Edit: My question is not about the civilian signal being encrypted or not (meaning unreadable for people not having a secret key), but signed or not (meaning the authenticity of the signal is verifiable thanks to public information: the satellite's public key).
AI: GPS can be spoofed without decrypting or creating signals. Therefore, the system cannot be made secure by cryptographic signatures.
The conceptually simplest way to spoof is to erect a number of highly directional antennae and point each of those at a GPS satellite, such that it receives exclusively signals from that satellite. Then feed those signals through a bank of delay lines, mix them back together and use another directional antenna to send the result toward an enemy aircraft.
You can then sit in front of the delay lines and force arbitrary position errors upon the unsuspecting enemy. If you introduce a delay for the satellite that is south of your position, the enemy's receiver will consider itself further north than it actually is, about 30cm per nanosecond of delay.
Cryptography doesn't help you detect or prevent those attacks, as the signals are only delayed, never changed. The only defense a receiver can mount is radio direction finding. If all satellites' signals come from the same direction, it's probably a spoofer. All modern military receivers employ this method; more sophisticated ones also cross-check the directions of arrival against the known positions of the satellites.
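For reference, the 30 cm-per-nanosecond figure above is simply the speed of light multiplied by the inserted delay:

c = 299_792_458                 # m/s
error_m = c * 1e-9              # pseudorange error for 1 ns of added delay
print(f"≈ {error_m*100:.0f} cm per nanosecond of delay")   # ≈ 30 cm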
|
H: Input impedance of a one stage voltage amplifier?
This is not a school/university assignment; I'm a physicist trying to learn some electronics. I'd like to calculate the input impedance of the following circuit.
simulate this circuit – Schematic created using CircuitLab
We know that C1 has a reactance of \$X_{C1}=\frac{1}{2\pi f C1}\$, which must be taken into account. That's roughly all that I know! For example I don't know if R2, C2 and the impedance of the output device are important in the calculation of input impedance as well. Also, I'd like to know the equations which show that the input impedance increases by replacing Q with a Darlington pair.
AI: Because the bipolar transistor is a highly nonlinear device, to simplify the circuit analysis we use the linearized BJT small-signal model. This is valid only for "small" AC signals (on the order of 10 mV peak).
http://www.ittc.ku.edu/~jstiles/412/handouts/5.6%20Small%20Signal%20Operation%20and%20Models/section%205_6%20%20Small%20Signal%20Operation%20and%20Models%20lecture.pdf
If we replace the transistor in your circuit with the hybrid-π model, your circuit will look like this:
simulate this circuit – Schematic created using CircuitLab
Where:
\$r_{\pi} = \frac{\beta}{g_m}\$
\$g_m = \frac{I_C}{V_T} \approx \frac{I_C}{26mV}\$
\$I_C\$ - is a quiescent collector current (DC collector current).
In the hybrid-π model, we treat the BJT as a voltage-controlled (Vbe) current source (Ic).
That means that the collector current Ic is determined and controlled by the Vbe voltage, and not by the base input current Ib.
And if you plot \$I_C\$ vs \$V_{BE}\$
The \$g_m\$ is the slope of this curve
In general, the transconductance \$g_m\$ is, in simple terms, the "gain" of any transconductance amplifier. And because a transconductance amplifier is nothing more than a voltage-controlled current source (VCCS), the gain expression is \$g_m = \frac{I_{out}}{V_{in}} = \frac{dI_C}{dV_{BE}}\$
And, for example, to find the output voltage in your amplifier circuit we can use nodal analysis and write for the output node:
$$ \frac{V_{OUT}}{R_1} + g_m \cdot V_{IN} + \frac{V_{OUT} - V_{IN}}{R_2} = 0$$
And find that the voltage gain is:
$$A_V = \frac{V_{OUT}}{V_{IN}} = - \frac{g_m R_1 R_2 - R_1}{R_1 + R_2} = -\frac{g_m R_1 - \frac{R_1}{R_2}}{1 + \frac{R_1}{R_2}} \approx - g_m R_1||R_2 $$
And to find the input resistance we can use the same approach and solve for
\$R_{IN} = \frac{V_{IN}}{I_{IN}}\$
But we also can use a Miller's Theorem
Miller's Theorem - Input Capacitance
And find \$R_{IN}\$ by inspection
$$R_{IN} = \left(\frac{R_2}{1 + |A_V|} \right)||\: r_{\pi} = \frac{R_2 r_{\pi}}{R_2 + (1 + |A_V|)r_{\pi}}$$
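To see what these expressions give in practice, here are example numbers. The bias point and resistor values below are assumptions for illustration only (they are not taken from your schematic):

Ic = 1e-3        # A, assumed quiescent collector current
beta = 100       # assumed current gain
R1 = 4.7e3       # ohms, assumed collector load
R2 = 100e3       # ohms, assumed collector-to-base feedback resistor
VT = 26e-3       # V, thermal voltage

gm = Ic / VT                                        # ≈ 38.5 mS
r_pi = beta / gm                                    # ≈ 2.6 kΩ
Av = -(gm * R1 * R2 - R1) / (R1 + R2)               # ≈ -173
Rin = (R2 * r_pi) / (R2 + (1 + abs(Av)) * r_pi)     # ≈ 470 Ω
print(f"gm = {gm*1e3:.1f} mS, r_pi = {r_pi:.0f} Ω, Av = {Av:.0f}, Rin = {Rin:.0f} Ω")

This also hints at the Darlington question: the composite current gain is roughly \$\beta_1\beta_2\$, which scales \$r_{\pi}\$ up and therefore raises \$R_{IN}\$, until the Miller-reduced \$\frac{R_2}{1+|A_V|}\$ term dominates the parallel combination.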
|
H: Is it true that if a POS can read an NFC card, that POS is compatible with HCE?
I was wondering if host card emulation would produce the same result as a physical contactless card produces.
AI: That's the general idea. HCE requires being able to emulate the same radio and data standards that the card reader expects. If for example the HCE device can do MiFare V1 and the reader can read MiFare V1 then it will work.
|
H: Do I need to consider RF power splitters for signal <1kHz?
I need to tap a signal on a BNC cord to record it with a Labjack ADC. The signal is very low frequency, as the FFT shows. I was planning on using an RF power splitter to add a line to the Labjack. Would an RF power splitter even be necessary for this application?
Also, I realized that the way I'd connect the labjack might negate whatever effect the splitter might have. I would probably just strip the cable and run the core to the Labjack's leads. That seems like it would throw the impedance of the whole line off making the RF splitter useless. Should I be concerned about this method of capturing my signal? Is there a better way to connect a BNC cable to a Labjack screw terminal block?
AI: 1kHz is audio frequency range, you don't need to worry about controlled impedance or reflections. You can use a regular BNC splitter or whatever you like.
|
H: An Audio Module heating up
I am using an Audio Module. The schematics of the module can be found below. The module uses a audio amplifier (TS4990IST) and STM32F0 MCU.
I found out that when powering up the module, it starts heating up. In my module, capacitor C10 is not soldered. Can someone help me rectify the issue?
EDIT:
I have been reading the datasheet of TS4990 and found out it may require Cin capacitor at the input (Page 4). Find the screenshot of page 4 below:
It looks like the manufacturer of these modules has forgotten to add the Cin capacitor, and maybe that's why the module is amplifying a DC offset and hence heating up. Am I right?
AI: The problem may be that the phase margin is negative at unity gain (see spec Fig. 2) unless the output is loaded with 560 pF. See the table: 65° unity-gain phase margin with 8 Ω and 560 pF.
Check the DC output across the load. If there is any DC, add DC blocking to the interface if your source is DC-coupled with an offset. The 1 µF bypass draws power for 20-35 ms after standby is disabled. This can be cancelled by matching the input cap's RC so it rises with the same time constant.
This is a transient heating problem when coming out of standby in this design with an 8 Ω load. Starting to heat up is OK. Continuing to heat up is NOT OK. If it doesn't keep heating, it may be a non-problem.
If it is OK but you want to reduce the transient heating effect, change the 1 µF to 0.1 µF, but note: "In the high frequency region, if Cs is lower than 1 µF, it increases THD+N and disturbances on the power supply rail are less filtered."
You have 3 choices;
Increase the gain to 20 dB (×10) with the Rf/Rin ratio.
Add 500 pF to each output, which at unity gain does not affect the closed-loop bandwidth significantly.
Some compromise of the above: R5 = 100k for a gain of ~×5 (14 dB), phase margin > 30° (stable), then test the step response for overshoot and add load capacitance up to 560 pF.
It seems like the OEM designer did not figure out these issues.
|
H: Legal to use non FCC-approved radio devices purchased locally?
Does anyone know if it's legal to use non FCC-approved radio devices which are purchased locally? For instance, if the device is purchased from a local eBay seller, is it still legal to use it in any setting (commercial or otherwise)? AFAIK it is the importer's responsibility to ensure the device imported is compliant. So my take is that it is illegal to use unapproved devices bought from foreign sellers only. Is it correct?
EDIT: I'm referring to USA in particular.
AI: If you buy from a foreign seller then YOU are the importer, and so are responsible for any required certifications.
If you buy a non-approved product locally then you may be prosecuted for operating an unlicensed radio transmitter. In reality the usual result will be the man telling you to cease and desist from operating that equipment, together with fines for the company that did the importing (but keep it up and the fines will start getting serious quickly).
And yes, ham radio license holders have some strange and unique permissions, such as being allowed to build or modify transmitting sets and operate them without needing to get them certified. No other group of civilian radio users has that right.
|
H: Is there such a 555 timer circuit?
I was wondering if there is a simple 555 timer circuit, or even a simple transistor circuit, whose output goes HIGH for 1 s, 2 s, or more when I trigger the input switch, and remains HIGH until that time is up, then goes LOW. Triggering the input switch, say, 5 times quickly and repeatedly shouldn't impact or interrupt the output at all: when I hit the switch multiple times, the output should go HIGH for 3 seconds (or whatever the time is), turn off once the cycle is done, and go HIGH again for another 3 seconds if I'm still hitting the switch!
Sample timing diagram for OP to edit:
_ _ _ _ _ _ _ _ _ _ _
Trig _| |_| |_| |_____| |_| |___________| |_| |_| |_| |_| |_| |_
_________ __________ _________ __________
Out _| 3 s |_____| 3 s |______| 3 s |_| 3 s |__
AI: It sounds like you are looking for a non-retriggerable timer (monostable multivibrator).
If so, that's exactly how the basic 555 monostable configuration behaves, with the caveat that the trigger input is active-low.
|
H: Selecting the equipment with the optimal 10 MHz reference
In a larger testbench I have 5 synchronized instruments (signal generators, ARBs, VSAs etc). I am trying to decide which instrument to take as "master". I rule out the older/cheaper parts (Tektronix AFG3253 and HP 8648C) and select between the following high end devices
Rohde & Schwarz SMW200A ARB (with B22 Enhanced Phase Noise)
Rohde & Schwarz SMF100A Signal Generator (with B22 Enhanced Phase Noise)
Rohde & Schwarz FSW (with B4 OCXO Precision Reference Frequency)
Based on 10 MHz reference distribution (daisy chaining vs. BNC tees vs something else) it does not seem to matter too much. But I would still like to decide for the most optimal way.
I have been told I should use the "FSW" because it has "the best" but without further reasoning this sounds like yet another "gut feeling".
I have two general questions ahead:
What does it mean to have a "clean" 10 MHz reference in the first place? Low phase noise/jitter or super stable 10 Mhz wrt to temperature/aging etc?
If it is the latter: why would it matter? Take aging: if my setup is stable over short measurement timespans (say, hours), why would I care whether the reference is 10.00000 MHz or 10.00001 MHz? All instruments are synchronized anyway, hence the exact value should not matter too much.
Now I looked at the datasheets of the three devices and found that they only state "static" accuracy, temperature, aging... but do not state jitter at all. That's counter-intuitive, because I assumed that jitter would be my most important criterion, for the reasons discussed above.
The datasheet specs are shown here.
I assume aging is not an issue since I do not care about the exact value of the 10 MHz. Furthermore, I assume temperature is not an issue because all instruments are running. Looking at the "Achievable initial calibration accuracy" of the FSW versus the frequency error of the SMW and aging/temperature for the SMF (the only given spec), I would go for the FSW (5e-9), then the SMW as second choice and, surprisingly, the SMF as last choice. Jitter is not taken into consideration at all.
AI: What does it mean to have a "clean" 10 MHz reference in the first place? Low phase noise/jitter or super stable 10 Mhz wrt to temperature/aging etc?
Phase noise is usually not the primary concern, because the usual use of the 10 MHz reference input of an instrument is as reference to a PLL generating some other frequency. And this PLL will tend to dramatically attenuate any jitter at jitter frequencies above a few kHz.
Stability against aging and temperature thus does tend to be the critical parameter.
If it is the latter case: Why would it matter? Say aging: If my setup is stable in short timespans of measurements (say, hours), why would I care if the reference is 10.00000 Mhz or 10.00001 Mhz?
In your measurement it might not matter.
If you want to reproduce your result a year later, it might matter.
If you have a requirement for a particular frequency accuracy, and it's been more than a few hours or days since your instrument was calibrated, it might matter.
I assume aging is not an issue since and I do not care the exact value of the 10 MHz.
If your measurement is not sensitive to small errors in the reference frequency, then the accuracy of the frequency reference might not be critical to you.
Looking at the "Achievable initial calibration accuracy" of the FSW versus frequency error of the SMW and Aging/temperature for SMF (the only given spec) I would go for the FSW (5e-9)
If you haven't recently sent your instrument for calibration according to the manufacturer's recommendations for achieving this spec, this spec is irrelevant.
All else being equal, I might pick the instrument that was calibrated most recently to use as the reference.
But since you implied that frequency accuracy is not critical to your measurement, the whole question is probably moot.
|
H: How do I fix this PSpice OrCAD problem? - Model D1N747 used by D_D1 is undefined
From [PSPICE NETLIST] section of C:\SPB_DATA\cdssetup\OrCAD_PSpice\17.0.0\PSpice.ini file:
.lib "nomd.lib"
Analysis directives:
.TRAN 0 500ms 0
.OPTIONS ADVCONV
.PROBE64 V(alias()) I(alias()) W(alias()) D(alias()) NOISE(alias())
.INC "..\SCHEMATIC1.net"
**** INCLUDING SCHEMATIC1.net ****
* source ELECTRICAI_LAB_2_PUNTO_1
D_D1 0 N00392 D1N747
D_D2 N00346 N00392 D1N747
R_R1 N00148 N00346 1k TC=0,0
V_V1 N00148 0 AC 0
+SIN 0 10 60 0 0 0
**** RESUMING ElectronicaI_Lab_2_Punto_1.cir ****
.END
ERROR(ORPSIM-15113): Model D1N747 used by D_D1 is undefined
ERROR(ORPSIM-15113): Model D1N747 used by D_D2 is undefined
AI: First of all the root part number is 1N747, not D1N747. It is a 3.6 volt 500 mW zener in a glass case. I searched several sites and it is embedded in a long list of part numbers starting with 1N746. OrCAD may not have a dedicated path to that part number.
Also, since I have UltraLibrarian, I checked and it was not part of the ULO inventory. It may require tracing it to a manufacturer who offers PSpice data. I have version 17.2.0 (64-bit) and I cannot find it either. You may need to find a similar part that has PSpice data.
|
H: LDO in split supply behaving incorrectly when power supply sequence is "wrong"
I'm powering an op-amp (OPA657) in transimpedance configuration. I externally supply ±15V, with LDOs to drop to ±5V; here I omit the feedback loop for simplicity:
When I turn on the negative rail (-15V) first, my positive-regulating LDO (LDL1117S50R) somehow gets exposed to what I measured at -0.8V at the output, highlighted in the below image:
When I then turn on the positive rail (+15V), the LDO doesn't regulate the incoming +15V; the output just sits at -0.8V. The only way to get it working is power-cycling, and making sure the positive rail comes on first.
What is happening? I had an older TO-220 regulator there before and it was fine.
I realize I'm inadvertently exposing the LDO beyond its absolute maximum negative voltage at its output (according to the datasheet, it's -0.3V). How do I prevent this from happening? I don't want to do any complicated power supply sequencing.
For reference I have snipped the negative rail section:
AI: When Vout is below ground it cannot bias the regulator to start up, as the NPNs to ground are reverse biased.
The LM85 bias network is improved and appears to prevent this issue.
What can you do?
Consider the minimum load current 5mA and the idle current 10 mA max and use a pull-up on Vout to Vin+ that forces Gnd to be < Vout+ during V- startup. ( remember Gnd can be floating and just a 0V ref. at some point.)
How?
A Zener from Vin to Vout greater than the expected drop which overloads V- until V+ starts? maybe brute force method...
a CC regulator SMD 20mA chip from Vin to Vout? Not a good idea if the load is less than 20mA then it pulls the Vout higher since it is an emitter follower output.
Power management cct to ensure both are OK before enabled to IC. ( too much trouble)
A resistor divider that draws > 10mA from Vin to Vout and GND so the external voltage V- its pull-up ( more current ) from the V+ being off? YES
a simple idea may work but depends on dynamic current flow as V- starts up
compare with a LM317 ? maybe
Idle current spec : Vin ≤ 8V 5(typ) 10 mA (max)
Remember LDO's only source current and not sink.
But if the output is reverse biased relative to GND they cant source either because the Gnd is a controlled current sink.
Best bet
Power Schottky diode from Gnd to Vout with a R divider on Vout from Vin to gnd to bias the output for 10mA just below the regulated output worst case, never above.
|
H: Can I connect DC12v on VAC output (2A 100-240V) relay?
Solid State Relay G3MB
This relay's output is rated "2 A at 100 to 240 VAC". If I wanted to connect something like 12 V DC at 2 A to this relay output:
Is it not going to work?
Might it work but damage the relay circuit?
I know this is a very stupid question; I googled but couldn't find a page explaining the differences between VAC- and VDC-rated relay outputs.
Hope somebody can clarify the differences.
Thanks in advance.
AI: NO you cannot. Nada, no way Jose. Never, not today, never.
This is a triac-output SSR, and triacs latch ON with DC.
Read the fine print in the 1st page summary.
"The G3MB-202PEG-4-DC20MA crosses directly to the Motorola M0C2A-60 series power triac."
|
H: I need a potentiometer controlled by only a 0-10v signal
I have a PWM control box made for standard 4-pin PC-type fans (12 V). It has a potentiometer to control the fan speed via PWM, and it is powered by its own 12 V supply. I would like to replace this manual potentiometer with something controlled by a 0-10 V signal. The 0-10 V signal comes from an aquarium controller and is intended to adjust an LED driver. I can program this aquarium controller to vary the voltage based on sensor inputs, ideally adjusting fan speed based on temperature/humidity/pH sensors via the 0-10 V output.
My biggest concern is not damaging the expensive aquarium controller (source of 0-10v). I obviously need to crack open the PWM fan controller and measure the potentiometer.
I'm having difficulty figuring out what I need to use or search for. A digital potentiometer seems close, but needs more inputs than I can provide.
Fan PWM:
https://noctua.at/en/na-fc1
AI: The most interesting way might be to use one of the Atmel ATtiny chips (an ATtiny13A would do it) or an Arduino. Divide the 0-10 V by a convenient amount (2 or 3), read it into the ADC and generate a PWM signal proportional to the input voltage.
If you want to do it the easy way, then this seems to be exactly what you need.
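A quick sanity check of the scaling (the numbers are illustrative assumptions, not from the Noctua or aquarium-controller documentation): divide the 0-10 V signal by 3 so it stays below a 5 V ADC reference, then map the 10-bit ADC reading onto an 8-bit PWM duty:

V_IN_MAX = 10.0
DIVIDER = 3.0        # assumed resistor-divider ratio
V_REF = 5.0          # assumed ADC reference
ADC_MAX = 1023       # 10-bit ADC
PWM_MAX = 255        # 8-bit PWM

def duty_from_adc(adc_reading):
    v_pin = adc_reading / ADC_MAX * V_REF     # voltage seen at the ADC pin
    v_ctrl = v_pin * DIVIDER                  # reconstructed 0-10 V control voltage
    return round(min(v_ctrl, V_IN_MAX) / V_IN_MAX * PWM_MAX)

for v in (0.0, 2.5, 5.0, 10.0):               # simulate a few control voltages
    adc = round(v / DIVIDER / V_REF * ADC_MAX)
    print(f"{v:4.1f} V -> ADC {adc:4d} -> duty {duty_from_adc(adc)}/255")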
|
H: Why does a classic envelope detector produce a negative voltage for some periods, when high frequencies are put in?
I'm playing around in ngspice & trying to understand envelope detectors for ham radio.
I'm pumping 150 MHz into the circuit described below - I'm expecting it to go positive in proportion to amplitude and frequency. After a bit, it goes negative. What's happening here?
Consider this circuit:
150Mhz 2V sine wave input, 50ohms impedance
D1N4148 diode
100pF smoother capacitor
500k bleed/load resistance
Spice input I'm using / working on:
Detector CIRCUIT
.model D1N4148 D (IS=0.1pA, RS=16 CJO=2pF TT=12N BV=100 IBV=0.1pA)
v1 1 0 sin(0 2 150MEG)
rSOURCE 2 1 50
d1 2 3 D1N4148
c1 3 0 100p
rLOAD 3 0 500k
.control
tran 5us 50ms
run
write kek.raw v(3)
quit
.endc
.end
Gives me this:
Note the negative sections.
How does a diode detector go negative like this? how do I stop it?
AI: v1 1 0 sin(0 2 150MEG)
...
tran 5us 50ms
=====
What's wrong with this picture?
==
The simulation timestep violates the sampling theorem: a 5 µs step cannot resolve a 6.7 ns (150 MHz) period, so what you see is aliasing - a measurement artifact, not the real detector output. The numbers below make this concrete.
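f_carrier = 150e6
t_period = 1 / f_carrier      # ≈ 6.67 ns per cycle
t_step = 5e-6                 # max timestep implied by "tran 5us 50ms"
print(f"carrier period ≈ {t_period*1e9:.2f} ns")
print(f"carrier cycles per timestep ≈ {t_step / t_period:.0f}")   # ≈ 750
# To resolve the RF waveform you would need a maximum timestep well below
# t_period/10 (sub-nanosecond) and a correspondingly shorter stop time.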
What is the diode spec?
Rectification efficiency \$V_{HF} = 2 V, f = 100 MHz ~~~ η_r = 45 \text{%}\$
It won't work well because the 4 ns recovery time is specified with a 100 Ω load, and your load is 500 kΩ.
Although Cj=2pF is ok. ( even though max in the datasheet is 4pF )
As @Sparky256 said, try a UHF detector diode.
These recover much faster than the 1N4148's roughly 4-8 ns, which matters when one cycle is only 6.7 ns.
This is why the efficiency is poor at 100 MHz and worse at 150 MHz.
Try the Falstad simulator. It has limited parameters and no custom parts; you build the passive part models yourself, and logic gates and op-amps have 0 Ω outputs, the same as its voltage sources.
Press Reset anytime.
|
H: Icarus verilog syntax error in a generate block
In the top level of a module, I have the following block:
genvar i;
generate
for (i = 0; i < DEPTH; i++) begin
fifo_element #(WIDTH) element (.clk(clk),
.d_in(e_qd[i]),
.d_in_strobe(e_in_strobe[i]),
.q(e_qd[i+1]),
.q_ready(e_qready[i]),
.in_strobe_chain(e_in_strobe[i+1]),
.q_out_strobe(e_out_strobe[i+1]),
.out_strobe_chain(e_out_strobe[i]),
.prev_used(i == 0 ? 1'b0 : e_used [i-1]),
.next_used(i == DEPTH-1 ? 1'b1 : e_used [i+1]),
.used(e_used[i]));
end // for (i = 0; i < DEPTH; i++)
endgenerate
When I attempt to compile this with icarus verilog (v10.1, using the -g2009 command line option) I get the following errors:
fifo.v:84: syntax error
fifo.v:96: error: invalid module item.
fifo.v:97: syntax error
These errors correspond to the line containing for, the last line of the element instantiation, and the line containing the end that corresponds to the begin of the for block. If I delete the apparently-unnecessary begin and end markers the third error goes away but the other two remain.
What's wrong with my code? Or is this a problem with Icarus Verilog? (A little research suggests that while earlier versions didn't support generate blocks at all, they've been supported for some time now, so what I'm trying to do here should work)
AI: Maybe you need i = i + 1 instead of i++; the ++ increment is a SystemVerilog operator, and Icarus's support for it (particularly on genvars in generate loops) has been patchy. Other than that, I don't see anything obviously wrong.
|
H: Connecting Led in Series with Parallel Switches
I created a diagram for a circuit that I am trying to implement. I have attached my circuit diagram and have been trying to implement it but still have not got it to work. I've checked all the wiring; everything is connected correctly and I am getting voltage from the source. Does anybody have any ideas? Am I missing something? When I press the switch, the LED is supposed to light up, but it does not. (Not pictured: I also tried adding a connection to ground at the end of the first switch, but that didn't do anything either.)
AI: Your LED is connected to nowhere.
On these breadboards, there is no connection across the central gap: your orange cable, connected on the other side to your 0 V, is actually not touching the LED pin.
|
H: Safety design on mains detector
I've designed a simple mains detector which I have drawn below. The theory of operation is pretty simple. A capacitor is used to reduce the current so that my optocoupler can operate (represented by the LED below). With LTSpice, I get a nice sine wave @60Hz with ~3.5mA. I have omitted the other side of my opto for simplicity's sake. Now I'm interested in doing some safety analysis.
The first thing that comes to mind would be to add a high resistor in parallel with the capacitor to reduce the risk of someone getting a discharge from the capacitor, but I've purposely omitted that because it would burn a lot of heat and ultimately I can put this whole thing in a place where no fingers should be reaching it. The fuse should protect me against the case where the capacitor fails with a short. An open circuit would simply fail nicely. Any other thoughts I may have missed in my safety design?
simulate this circuit – Schematic created using CircuitLab
AI: The first thing that comes to mind would be to add a high resistor in parallel with the capacitor to reduce the risk of someone getting a discharge from the capacitor, but I've purposely omitted that because it would burn a lot of heat and ultimately I can put this whole thing in a place where no fingers should be reaching it.
Bad idea. If this is a pluggable device (not permanently wired to the mains), then anyone can touch the two prongs of the plug and get a discharge from the capacitor. And in any case, a 1MΩ resistor, which would discharge the capacitor in a fraction of a second, would only dissipate 15 mW @ 120VAC in normal operation. (Pay attention to the voltage rating of this resistor — use multiple resistors in series if necessary.)
Second, as Transistor pointed out in the comments, the capacitor limits the current at 60 Hz, but has little effect on higher frequencies, including those produced during switch-on, as well as fast transients on the line caused by nearby lightning, other equipment switching, etc. Such currents might blow your fuse, but not before destroying your LED.
So, at a minimum, I would suggest the following (a quick numerical check follows the list):
Add the 1MΩ resistor in parallel with the capacitor
Raise the series resistance (R1) to 10kΩ (it will dissipate about 130 mW in normal operation)
Raise the value of the capacitance (C1) to 100 nF in order to compensate for the increased drop across R1.
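A rough check of those numbers, assuming 120 VAC / 60 Hz and ignoring the small drop across the optocoupler LED (with C1 = 100 nF the current lands a little above the original 3.5 mA, so size R1 generously):

import math
V, f = 120.0, 60.0
C1, R1, R_bleed = 100e-9, 10e3, 1e6

Xc = 1 / (2 * math.pi * f * C1)            # ≈ 26.5 kΩ
I = V / math.hypot(R1, Xc)                 # ≈ 4.2 mA through the opto LED
print(f"Xc ≈ {Xc/1e3:.1f} kΩ, I ≈ {I*1e3:.1f} mA")
print(f"P(R1) ≈ {I**2 * R1 * 1e3:.0f} mW")            # ≈ 180 mW, so use at least a 1/2 W part
print(f"P(bleeder) ≈ {V**2 / R_bleed * 1e3:.0f} mW")  # ≈ 14 mW in normal operation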
|
H: Correct wiring of a variable resistor
I built the following schematic and I made a mistake. I wired the 4.7M preset reversed.
This is the original schematic:
And this is the schematic that I drew in Kicad:
I already built the PCB, but I am wondering what can happen if I wired the 4.7M multi-turn preset reversed. Will the soldering station work correctly?
I know that the direction of increasing resistance changes, but I am wondering what else can occur.
AI: I'm not sure what you mean by "preset", a digital potentiometer?
If it's linear, then reversing the direction of adjustment is the only thing you'll see. If it's logarithmic, adjustment may become very non-linear, i.e. you'll see almost all of the change right at the beginning or the end.
|
H: How is the voltage regulator functioning in this vintage charging circuit?
Attached is a simplified diagram of my motorcycle's charging circuit. It has a standard 3-phase bridge rectifier circuit--simple enough.
I'm confused about two things:
How is the voltage regulator regulating the voltage here? Only one phase is connected to the regulator which appears only to have a ground.
How does this regulator function? All the example regulator circuits on the internet are either linear regulators or Zener diode reglators. This one seems to involve a thyristor and a diode.
1977 Suzuki GS400X charging circuit:
AI: This SCR (not a triac) seems to shunt the stator winding through one phase diode and reverse-bias the other phase diode to reduce the generator output in order to regulate. The resistor divider sets the trigger threshold at about 14.2 V.
Not as efficient in terms of mechanical load as 3 SCRs, but it works.
Lambda used triac bridges in the old days (the '70s) to pre-regulate DC ahead of the linear stages in lab power supplies. That improved efficiency "somewatt" before SMPS came on the scene.
|
H: Is "humming" normal in an AC relay?
I have a 240V relay that uses a 120VAC coil. When I switch power to the coil, the relay makes a faint humming sound. It isn't very loud, and sounds like a transformer almost. A normal speaking voice or small fan in the room is enough to drown it out, to give you an idea of the volume.
To be clear, this isn't a situation where the coil isn't getting enough power and contacts open and close rapidly (described as "buzz" in other questions). I have verified that the voltage is correct, and I have observed that the armature is still (not vibrating) when the coil is getting power.
So I am wondering if I have a bad relay, or if this is normal for AC relays, which I haven't used before. I am used to 12VDC relays for the record.
AI: Figure 1. Source: Machine Design.
The relay coil, when energised, pulls in the armature to actuate the contacts.
Since the coil is powered by alternating current, the magnetic field collapses to zero at each mains zero-crossing and the armature tends to start to release. Its inertia is high enough that the contacts remain actuated through each zero-cross of the mains.
The hum is normal. It is caused by the vibration of the armature on the yoke on each half-cycle.
Just a note on relay terminology: "I have a 240V relay that uses a 120VAC coil" is a little confusing. "I have a relay with 240 V contacts and a 120 V AC coil" would be clearer.
Update:
Spehro and Tony's answers both address the use of 'shading' rings on the pole faces to help maintain force through the zero-cross. This in turn will reduce the vibration.
Figure 2. The yoke of an AC 'contactor' (high-powered relay generally used for AC motor circuits, etc.) showing two shaded poles. Image source.
|
H: STM32 Logic level high and low thresholds
I am scratching my head trying to find the minimum voltage for registering a low digital signal and the maximum voltage for registering a high digital signal when I configure the GPIO pins of the STM32 (I am using the STM32L476) in input-capture mode (I want to measure some frequencies).
Nowhere in the datasheet or the reference manual can I see these details.
I have a signal that is supposed to be around 0.3 V when low and 1.8 V when high, so I want to know these thresholds to decide whether to use a Schmitt trigger in my circuit or some divider resistors to shift the levels up or down to match the STM32 inputs.
AI: The logic levels are described in section 6.3.14. When your supply voltage is 3.3V then:
V_IL = 0.39 * Vcc - 0.06 = 1.23 V (table row "I/O input low level voltage except BOOT0"). The maximum voltage that will be read as logical zero is 1.23 V (+ read all footnotes).
V_IH = 0.49 * Vcc + 0.26 = 1.88 V (table row "I/O input high level voltage except BOOT0"). You need to supply at least 1.88 V to make the pin read as logical one.
Your 1.8 V signal is below V_IH and above V_IL, which means that the result is unpredictable.
You have to be concerned with the maximum voltage read as zero (i.e. not exceed it) and the minimum voltage read as one, not the other way around.
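If it helps, here is a tiny Python sketch of the same arithmetic so you can plug in other supply voltages (the 0.39/0.49 coefficients are the ones quoted from the table above; double-check them against your datasheet revision):

def stm32l4_io_thresholds(vdd):
    """Worst-case input thresholds from the formulas quoted above (standard I/O, not BOOT0)."""
    v_il = 0.39 * vdd - 0.06   # highest voltage still guaranteed to read as logic 0
    v_ih = 0.49 * vdd + 0.26   # lowest voltage guaranteed to read as logic 1
    return v_il, v_ih

v_il, v_ih = stm32l4_io_thresholds(3.3)
print(f"V_IL(max) = {v_il:.2f} V, V_IH(min) = {v_ih:.2f} V")   # 1.23 V and 1.88 V
# A 0.3 V low is fine (below V_IL), but a 1.8 V high is below V_IH, so it is not guaranteed.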
|
H: Practices for using a scope, scope probe and termination resistor for a proper measurement setup
In a setup, assume a function generator has a 50 Ohm output impedance and a coaxial cable has a 50 Ohm characteristic impedance. Therefore, to prevent reflection of a signal, a 50 Ohm termination is used. But many people I know don't use a 50 Ohm termination for basic (non-RF) use,
and they also use a 1X probe with the scope's 1X setting.
And if we use a 50 Ohm termination then our signal's amplitude halves. How is this problem compensated? By setting the scope to 2X? How about the probe? 1X or 10X?
What is the proper probe/scope and termination resistor setup depending on the signal frequency? Can we roughly make a category? I want to learn the proper measuring technique with scope and probe depending on the waveform and frequency. Since these things are based on experience, I can't find compact info about it.
AI: Ideally, an oscilloscope would give you an option between 1M ohm input resistance (in parallel with perhaps 15pf capacitance) and a 50-ohm input termination - often cautioned not to apply too much power, lest it over-heat. This 50-ohm internal termination is quite vulnerable to burning up into an open-circuit, so it is wise to do a measurement check to see that it is still there, and still 50-ohms.
Many 'scopes give you no option - the default 1M ohm input applies. In this case find a BNC "T", and add a 50-ohm termination right at the 'scope input. Use the 1X scale, and use 50-ohm coax to connect to your signal source, not the 1X probe. Not as good as the internal termination described above (a short unterminated section often remains), but it's about the best you can do. Don't forget that it's there - when you go to measure that +24V DC supply, you may smell smoke.
Most 1X/10X probes have cable impedance higher than 50 ohms between probe tip and BNC connector, so don't use a probe in a 50-ohm measuring system to make a careful amplitude measurement.
A good function generator takes care to drive its output through a 50-ohm resistance, so its open-circuit output should be twice as high as its 50-ohm terminated output. It is common for function generator outputs to be 20V p-p open-circuit, and 10V p-p when terminated with 50 ohms. Such a generator might say in its manual, "Will deliver 10v p-p to a 50 ohm load". A source calibrated to deliver 0 dBm power will deliver that power into a proper load (usually 50 ohm) - of course it delivers no power to an open-circuit, and very little more to a 1M 'scope input. Your oscilloscope (set to 1X) might make an RMS amplitude measurement - power is simply \$ \frac {{V_{rms}}^2}{50} \$ when you've got that BNC-T with 50 ohm termination attached.
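As a worked example of that last point: 0 dBm is 1 mW by definition, so into a proper 50 Ω termination \$V_{rms}=\sqrt{P\,R}=\sqrt{0.001\times 50}\approx 0.224\,\text{V}\$, i.e. about 0.63 V p-p on the terminated 'scope, and roughly double that if you forget the termination and present the generator with a 1 MΩ input instead.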
|
H: Proper convention for schematic component pin input/output arrows
In a schematic, if a designer chooses to use arrows on pins of a component, should they always point in the direction of current, or in the "perceived direction" of the signal? Or is this a made-up designation with no real integrity?
I'm using Altium Designer and the pins on a component can be denoted as several specific types:
Input
I/O
Output
Open Collector
Passive
HiZ
Open Emitter
Power
I will use a connector as a specific example, however I'm interested in the use of pin arrows across devices.
Here is a connector with 8 pins of the type correlating to the list above. Observe the first three pins have arrows based on their pin type:
I'm wondering if there are specific standards/conventions for arrows on pins.
I have a 4 pin connector in my circuit. From the reference point of my circuit, the 4 signals are:
Power (12V+ from my circuit to connector, supply is from my circuit)
Ground (supply is from my circuit)
PWM "Out" (Device that plugs into connector has a pullup resistor and reads via GPIO, my circuit will periodically pull it to ground)
Tachometer "In" (Device that plugs into connector periodically pulls this to ground, my circuit has a pullup resistor and reads it via GPIO)
Here is the system schematic of my setup:
Going back to the image of the hypothetical 8 pin connector, since the "Input" type shows an arrow going in to the connector, the type is clearly from the reference point of the component.
Using the connector as the reference point:
I assume I will want "Power" to be an input. That's easy, because conventional current flows into the connector.
With Ground, current flows back out of the connector. So that must be an "output".
As for PWM "Out", it's an "input" to the connector, however current flows out. I still assume "input" is the proper type.
The tachometer is just the vice-versa of PWM "Out".
Are my assumptions correct?
AI: No your assumptions are not correct. Inputs and outputs are generally reserved for ICs. An input has the arrow pointing into the component. An output has the arrow pointing out of the component. Power and ground pins of ICs should be set to "Power". Generally arrows should not be used on connectors. Connectors are passive, so select "Passive" as the pin type for all of the pins on the connector.
|
H: Airport radar is interfering with my circuitry
I am using an arduino, but this is not an arduino question. And unfortunately, I cannot share my schematic. I can tell you though that I have a stack of three arduino-sized boards which create two ground loops, each about the size of an arduino. I also have an RFID reader board connected, which has about a 2 inch coil in it. But it seems like that coil should be isolated from the arduino by the RFID board's ICs. Altogether I'm pulling a max of around 250 mA out of 6 AA batteries, though the actual current draw varies on short timescales as I transmit wireless data via Xbee.
Somewhere, I'm picking up noise, and it's only happening when I take my devices closer to the radar station about a half mile away. It's actually crashing and rebooting my arduino program. I'm positive an external source is the culprit, since tinfoil shielding prevents the problem, but I cannot add shielding to my design until a later date.
My coworkers and I believe the noise must be affecting the power to my processor, since it seems like processor voltage drops/spikes would most certainly be the only things to cause such behavior in the microcontroller. If that's the case, then I should be able to add line filters. Does that seem reasonable?
I could apply filters to the main power into the arduino, and also add something to the 5V and 3.3V output buses to the peripherals. My initial instinct is to add a 1000uF low-ESR electrolytic across the main power, and some smaller 470uF caps across the 5V and 3.3V buses. But I know it should depend on the frequency of the noise I'm trying to filter. Airport radar operates in the 2.7 GHz range, so I guess I should try filtering 2.7 GHz, right?
Is this the right way to go about the problem?
AI: For radar: Pi filters and ferrite beads, in addition to shielded pairs for signals, will correct your problem as long as the ground and any slots in a case don't resonate with the signal.
Murata has the right NPO caps in the <50 pF range and ferrite common-mode (CM) filters and chokes for microwave. All I/O cable signals need capacitive feedthroughs to ground and CM chokes. Exposed boards need a good ground plane or an earthed box.
Experience
Burroughs had a similar problem on the top floors of Richardson Security Exchange in Winnipeg. The tallest building still at Portage and Main. The 207 14" Disk drive was getting random errors on the read-write chain serial differential data.
Factoid: since Bill Richardson's family had a long history of wealth and he became a Pierre Trudeau cabinet minister, he had pull, so it is written into the city bylaws that no building in Winnipeg can be taller than his. Little did they know about radar.
But then down the block, where NRC was inventing mobile 7 tesla medical scanners, every CRT in the building was disturbed when the MRI was fired up in the basement. Metal chairs were once reported to have been pulled across the room at lethal velocities, and any credit card anywhere inside the chamber would get demagnetized.
It needed an EMI fix ... fast.
The customer was the biggest investment company around, and data records were getting corrupted infrequently but randomly.
It was Wpg Intl Airport Radar.
The field fix took a few days, to patch and weeks for general production after test verification. It was my 3rd employer and it happened just before I got there in 1982.
|
H: How to calculate Junction to Ambient temperature of LED?
I'm trying to calculate the Junction to Ambient temperature for this Cree LED:
https://www.mouser.ca/datasheet/2/90/ds--XHP502-1093532.pdf
I know how to calculate this:
Junction temp = Power × (junction-to-case + case-to-heatsink + heatsink-to-ambient thermal resistance) + Ambient temp
However, if I want to know the junction-to-ambient temperature (i.e. with no heat sink), how do I calculate this from the datasheet?
AI: This is a high power LED. It is designed to dissipate the vast majority of its heat through the solder joints.
So you need to figure out the solderpad-to-ambient thermal resistance of the circuitboard onto which you intend to solder it.
This can of course be calculated based on the dimensions of the copper features on your board, but the easiest method may actually be to just solder the LED to a board and measure the temperature at the solder joint for a few different power levels.
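A minimal sketch of that measurement-based estimate (the numbers here are placeholders, not figures from the Cree datasheet; look up the real junction-to-solder-point thermal resistance):

def junction_temp(t_solder_point_c, power_w, r_th_j_sp_c_per_w):
    """T_j = T_sp + P * Rth(junction-to-solder-point)."""
    return t_solder_point_c + power_w * r_th_j_sp_c_per_w

# e.g. 60 C measured at the solder joint, 10 W dissipated, Rth(j-sp) assumed to be 1.0 C/W:
print(junction_temp(60.0, 10.0, 1.0))   # -> 70.0 C estimated at the junction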
|
H: Derive terminal voltages for a BJT
Question: A BJT for which \$BV_{CBO}\$ is 30 V is connected as shown in the figure below. What voltages would you measure on the collector, base and emitter?
simulate this circuit – Schematic created using CircuitLab
Answer: \$V_B = 13.3 V\$
\$V_C = 43.3 V\$
\$V_E = 12.6 V\$
I have no idea how to proceed with this question. How is the breakdown voltage related to the terminal voltages?
AI: Something does not make sense in the supplied answer: 43.3 V on the collector implies Ic = (50 V - 43.3 V)/10k = 0.67 mA, but Ie is 1 mA due to the current source, so by Kirchhoff Ib must be 0.33 mA, making the voltage across Rb 6.6 V (with the base negative with respect to ground; not impossible with a magic current sink in the emitter, but still)....
Jonk has the right of it, I think; 33.3 V for Vc makes sense to me as well.
Here is how I would probably tackle this thing (but it has been a few years):
Due to the 30V breakdown, Vb = Vc - 30V.
Then the collector current is the 1mA from the current source plus the current in the base pulldown resistor (Vc - 30V)/20k
So Ic = 1mA + (Vc - 30V)/20k, from Kirchhoff's law concerning currents into and out of a junction.
But Vc can also be written in terms of the voltage drop across the collector load, Vc = 50V - 10k * Ic, so some substitution and trivial algebra should get you an answer.
Further, due to the forward-biased base-emitter diode, we know that Ve = Vb - 0.7 V.
That would be my take on it.
|
H: SMDs on bottom side of board with THT components?
I'm designing a PCB for factory production and I'm not sure if placing SMD components on the bottom side of the board, which also contains THT components (pin headers and buttons, they're all on the top side though), will make the board manufacturable without a special process. Normally I would avoid this, but the board I'm designing is very space-limited, and this could save really a lot of space.
I understand that this is not a problem for hand-soldered DIY-level projects, but this board is factory produced and assembled.
Thanks for any suggestions.
AI: You'll need to talk to your assembler, but this is not necessarily a problem. There are essentially five options:
Use pin-in-paste to install your through-hole parts. This is where solder paste is stenciled onto through-hole pads, then the through-hole parts are installed into the paste-filled holes, and your PTH and SMT parts can all be soldered in one reflow pass. The major downside is that the PTH parts have to be specifically designed to withstand this process which really limits your part options. The plastics used in many standard PTH connectors that are not designed for pin-in-paste will soften or completely melt at reflow temperatures.
Glue down the bottom side SMT components and wave solder them at the same time as the PTH parts. This is usually used on high volume production of low-complexity boards (such as power supplies), and is not advisable for fine-pitch parts, large ceramic capacitors, or other mechanically/thermally sensitive parts.
Wave solder using masks and jigs to protect the SMT components. This is highly dependent on the component placement, board geometry, and other factors.
Selective wave soldering, where a machine moves a small sort of fountain of molten solder around the board, so that through-hole parts can be soldered after all of the SMT parts have been installed. This is an ad for a machine manufacturer, but it shows the process fairly well: https://www.youtube.com/watch?v=p-VImd2yW5s
Hand soldering. Many assemblers have the equipment and personnel that make this a viable option for ~hundreds of boards, depending on complexity. Sometimes this is simply the best option for really fiddly jobs, and a good assembler will often create fixtures or jigs to help ensure accurate assembly even with hand soldering. It avoids the setup costs that the other processes require, so it's a good low-ish volume option.
Of course, additional process steps will increase the cost of assembling the board, but it's pretty common to have double-sided SMT and PTH on boards these days.
|
H: Using LED display
This is my current circuit. It's a 2-person colour-guessing game. The first person will choose the colour code the other player has to try and guess, using the registers on the left, and then set the colour code by transferring it to the registers on the right by clicking the set-code button at the bottom. The second player will then try to guess the colour code. To check whether the guess is correct, they will press the guess-colour button. If both colours have been guessed correctly the hex display will show 2, if 1 colour has been guessed correctly it will show 1, and if no colours have been guessed correctly it will show 0. So for example, in the above picture, if the player pressed the guess-colour button right now, the number 2 would be displayed as all colours are in their correct position.
The latter part is what I need help with. I have no idea how to wire up the circuit so that it can check whether or not the guesses are correct. Any help is greatly appreciated as I'm still learning.
AI: You must start by defining the game's theory of operation and all the permutations of N colours (N = 4) in 4 positions.
These can be defined by colours ABCD and positions 1234.
Results can be displayed as a B/G bicolour LED with up to 4 indicators per guess.
Both guesses and Results could be displayed AND saved for each of 12 guesses.
Blue or Green indicators display the count of the correct colours in the False (F) and True (T) Positions in a discrete binary format
each result indicator has two outputs for F/T Position count, not the actual position
each guess entry can use a BCD finger wheel decimal switch to indicate a colour number for any number of colours such as 4 or even more
A typical Mastermind game allows 12 guesses, so 4 inputs and 4 bicolour result indicators could be displayed per guess.
A blank colour as a 5th option (= 0) could be considered, or even more colours for the advanced player. This leads to N^P permutations for N colours in P positions.
The display algorithm
A count of 0, 1, 2, 3 or 4 gives 5 different results, which requires a 3-bit count and could be displayed using an RGB LED indicator with arbitrary colour assignments.
Each position's guess must be compared in a sequential manner to count the B/G results, using a comparison (XOR) of the 1 to 4 colours or numbers entered.
This requires counting both the correct colours and the correct colour-and-position matches, and subtracting the two to indicate the number of Blue (correct colour, wrong position) guesses.
A 2k ROM uC might be able to perform these operations to interface and drive these displays with up to 4x12 input guess RGB LEDs and 4x12 bicolour LEDs for results giving 96 LEDs to interface using I2C driven LEDs.
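If you do go the µC route, the counting rule described above (count exact colour-and-position matches, count colour matches regardless of position, and subtract) is only a few lines. A Python sketch of just the scoring logic, to pin it down before committing to gates or firmware (the I2C/LED driving is not shown):

from collections import Counter

def score(code, guess):
    """Return (exact, colour_only): right colour in the right position, and right colour in the wrong position."""
    exact = sum(c == g for c, g in zip(code, guess))
    colour_matches = sum((Counter(code) & Counter(guess)).values())   # colour matches ignoring position
    return exact, colour_matches - exact

print(score([1, 2, 3, 4], [1, 4, 3, 2]))   # -> (2, 2)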
If you wish to edit my specs and simplify it, feel free to copy and paste into your question what you prefer.
Remember, the best design always starts with a good, clear specification of "must haves" and "nice to haves" (or "shall have" and the lesser restriction, "may have").
|
H: installing a DIP package with broken legs
I'm attempting to modify my NES (Nintendo entertainment system) for RGB output. This involves removing the PPU from the board and soldering it onto a board supplied in the mod kit. This installation is rather notorious for giving people trouble. In particular, desoldering the PPU seems to cause many people to pull traces and break legs off of the PPU, as the motherboard is rather thick and tends to soak up heat.
I was no exception to this. I spent a few hours slowly trying to work the PPU off the board, but I still managed to break a few of the legs and pull a trace. However, I believe this is still salvageable, with the proper methods. I think I've fixed the trace by installing this wire (picture #1), however I'm a little afraid the wire may be too thick (also I'm sure the wire layout isn't ideal; if you can't tell I'm a novice at this). My last obstacle is to do something about the broken legs on the PPU (pictures #2 and #3). I was thinking of two things:
(1) I can solder a small piece of wire to the shorter legs then solder that as usual, but I may run into a problem where the heat from soldering the PPU onto the board may desolder the piece of wire and leave me with little bits of wire in the through-holes, which I am deathly afraid of.
(2) I can solder the PPU onto the board as usual (as if the PPU did not have a few broken legs), just making sure to add extra solder to the broken legs so that the little stubs can make a proper connection.
I'm leaning towards (2), but it seems a little crude and naive, so I come to you all for advice on how to proceed. Any help is greatly appreciated. Thanks
AI: I would take option 2, the path of least destruction. Solder it back in place and use a conical (needle) tip solder iron.
For pins too short to make contact, push in a skinny wire (like the clipped lead of a small resistor) from the backside and tack-solder it to the short pin, then solder the extension to the pad, then snip the excess off the backside.
You are right in that option 1 is too risky.
|
H: Working of a Doubly-Fed Induction generator (DFIG) in wind turbines
I am unable to understand how a Doubly-Fed Induction generator works (in Wind Energy conversion system)? I do not understand how the reactive power fed into the stator interacts with the rotor so as to produce active power in the stator. It would be helpful if somebody is able to explain the phenomenon with the help of a phasor diagram showing the voltage, current and the field directions.
AI: I am unable to understand how a Doubly-Fed Induction generator works
(in Wind Energy conversion system)?
I think the document you linked is a little confusing and I would recommend this better document instead. Entitled "Principles of Doubly-Fed Induction Generators (DFIG)" and produced by Lab-Volt.
However, in simple terms, think about a regular 3 phase synchronous generator first; DC is applied to the rotor, the shaft is rotated at synchronous speed and the generator produces an AC output of the correct frequency (50 or 60 Hz).
However, if the shaft isn't rotated at synchronous speed but a somewhat slower speed (as what usually happens in a wind generator), you can still produce a synchronous output frequency (50 or 60 Hz) by feeding the rotor with an AC current of "so many hertz" instead of DC. The rotor frequency now "makes up the difference" between shaft speed and synchronous speed: -
Picture source.
The important thing to notice is that the rotor voltage and current are derived from an AC/DC converter (rectifier) followed by a DC/AC converter. The DC/AC converter is an inverter whose frequency can be set to a value that "makes up the difference" between shaft speed and synchronous speed.
Not shown in the diagram is that the control block needs to accurately measure shaft speed in order to calculate the rotor frequency required to produce a synchronous AC output from the stator.
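As a made-up numerical example: a 4-pole machine on a 60 Hz grid has a synchronous speed of \$120f/p = 1800\$ rpm. If the wind only turns the shaft at 1500 rpm, the slip is \$(1800-1500)/1800 \approx 0.167\$, so the converter must feed the rotor at about \$0.167 \times 60 = 10\$ Hz for the stator to keep producing 60 Hz.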
It would be helpful if somebody is able to explain the phenomenon with
the help of a phasor diagram showing the voltage, current and the
field directions.
If you understand the phasor diagrams for a standard synchronous generator and the relationship between slip and driving frequency for a 3 ph inductor motor, then it will be clear. If you don't understand these then you need to ask a more appropriate question.
|
H: Is a 2 Ampere power supply dangerous?
I'm sorry if this question seems extremly dumb or/and pathetic. I'm quite new to electronics.
For one of my electronics projects, I want to buy a power supply like this one, but it says Output: 12V 2A and I know that 2 amps is dangerous for the human body.
So what does it mean? Will I hurt myself if I touch the two cables or is it fine? I don't really understand this.
I'm sure you all know the answer and if so, please explain why.
Thank you!
EDIT: My question has been marked as a duplicate but it's different. The other question is about what voltages / currents are dangerous, but I do know that. However, I'm asking if the 2A power supply always outputs 2 amps or if it is the maximum current it CAN output.
AI: Since you have clarified your question, the answer is as follows:
2A is the maximum current it can deliver.
It will always deliver 12V - as long as you don't put a load on it such that it would have to deliver more than 2A.
Then the voltage will drop, either because it overheats and burns out, or because the engineer who designed it made it safely limit the current so as not to burn out and become a fire hazard.
The current that really flows depends on the resistance of the connected load and the voltage.
Ohm's law says this: E = IR, where E is voltage, I is current, and R is resistance.
It can be rewritten like this: I= E/R.
This is what we want, since it is the current that can kill you.
Your body can be thought of as a resistor. It has a resistance of several thousand ohms. It can vary from 1000 ohms to 100000 ohms.
Let's stick with a middle-of-the-road value: 10000 ohms.
I= E/R
I= 12/10000
I= 0.0012A
I=1.2mA
So, your power supply can deliver 2A, but it can only force 1.2 thousandths of an ampere through your body.
So, your power supply is safe for you to use and to touch with your hands.
Always buy your power supplies from trustworthy sources. Switching power supplies must be properly made in order to isolate the output from the high voltage input.
A cheaply made power supply might skimp on the isolation, and allow the output to be at a high voltage compared to the ground. There will be the rated voltage between the two output terminals, but the full line voltage between one of the outputs and ground. THAT can kill you, but has nothing to do with the rated output current or voltage. That kind of thing is poor design and manufacturing.
|
H: Can you use a 3ph BLDC controller without a potentiometer?
I'm attempting to build a 12 V fan and I'm looking at controllers to drive the 3 phases of the motor. It seems every ESC I find has a potentiometer included. I want my fan to always run at full speed and be turned on by a simple relay.
Can I just short the positive and signal legs on the board in lieu of connecting the pot in order to simulate a pot at 100%?
AI: If you have an ESC that is suitable for the motor you can use it without a potentiometer by connecting the potentiometer input to the terminal that would normally be connected to the clockwise end of the potentiometer. Check the ESC information to be sure that the CW end of the potentiometer is normally connected directly to a terminal.
You also need to check the ESC information to determine how the motor is started and stopped. Your start/stop relay must supply that command. It is unlikely that you can start and stop by connecting and disconnecting the motor to the ESC or the ESC to power.
It might be a good idea to start with the ESC connected exactly as recommended using a potentiometer. Make sure the motor runs well and then change the speed setting method.
|
H: How does flyback converter with two MOSFETs work?
This is the schematic.
Both switches are in phase (On and Off at the same times).
When both SW are On, current flows through primary transformer coil, entering from the top side of it, and leaving through the dotted side, finally going to the ground through the lower switch. Diode on the secondary does not conduct (reverse biased). Energy is being stored in the magnetizing inductance - an inductor parallel to primary coil of transformer (integrated part of transformer, not shown in the schematic drawing). Output of the converter is energized by output capacitor.
When switches are off, magnetizing inductance provides magnetizing current that enters primary windings of the transformer through the dotted side, and is transferred to the secondary side in such a way that current leaves the dotted side of the secondary, and the diode is forward-biased.
If this is correct, the question arises about the two diodes on the primary side, how do they turn on?
When switches are abruptly turned off, leakage inductance (inductance in series with the transformer), will oppose quick current drop, and will induce a voltage of same polarity as input voltage in order to slow down current decrease. This voltage can be quite large and can damage MOSFETs.
Finally, the question: how exactly do those two diodes turn on? The MOSFETs are turned off by the external gate-driving circuit, and they will instantly go off (open circuit). Then how will the lower diode draw current, and from where? And where does this current go; where is it dissipated?
Thanks
AI: When switches are off, magnetizing inductance provides magnetizing
current that enters primary windings of the transformer through the
dotted side
Incorrect. When the switches are off, the current doesn't reverse; it ramps down from the previous "charging" positive value to zero. This means that the voltage at the lower MOSFET drain reverses polarity compared with the upper MOSFET's source.
The two diodes steer any excess back-emf to the DC power rail on the primary side. This steering is necessary to prevent leakage inductance on the primary damaging the MOSFETs because the energy contained in leakage inductance is never transferable to the secondary.
Here is my version of your circuit showing the charging voltages in red followed by the energy transfer voltages in blue: -
It's the primary voltage that reverses because the previously positive ramping-up current when the MOSFETs were on (red phase) becomes a negative ramping-down current during the blue phase.
|
H: Would a parallel EEPROM ever output a value that it does not contain?
I understand that when the address input to a parallel EEPROM changes from A to B, the output may show results that aren't the contents of either A or B, for a few nanoseconds.
Are these spurious outputs constrained to be values stored somewhere in the EEPROM?
Would an unprogrammed EEPROM (all 0xFFs) ever output non-FF values when the address changes?
(I expect the specs say that any value could be output, but I'm interested in what happens with current implementation technology.)
I'm asking because I have two EEPROM bits which will cause a bus conflict if they are ever both 0. There will be no value stored at any address in the eeprom which has both bits as 0.
AI: If the output is latched, and if the timing constraints (address stable before latch) are observed, then that should not happen.
If there’s no latch, then allowing the minimum delay to occur after the address lines are stable before enabling the chip will typically suffice.
Eg. see the timing diagram for the 28C64 (Fig.3)
If you're just wiggling the address lines around, the output may not actually do anything you don't want, but it certainly could.
|
H: Calculating the transfer function of this op-amp circuit
The first op-amp is a differential stage, the second one is a follower and the third one is a second-order low-pass filter. I've been struggling with the low-pass and the final output of the circuit, and I am not sure whether my calculations are correct or not; they are still incomplete though (I've always had trouble with these kinds of circuits).
Refer to the input voltage as Vi, first stage output as Vo1, second stage output as Vo2 and final output as Vo.
My calculations so far:
AI: Here's how I would start it: -
You can calculate the filter TF separately but concentrate on solving the feedback problem first with the unknown TF as "TF": -
Write down what you know: -
V1 = Vin - V2
V2 = V1.TF
From (2) V1 = V2/TF and from (1) V2/TF = Vin - V2
Solve for V2 using (1): -
V2(1 + 1/TF) = Vin therefore
\$\dfrac{V_{OUT}}{V_{IN}} = \dfrac{1}{1+TF}\$
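If you want to double-check that algebra (and keep track of which node of the block diagram is taken as V_OUT), here is a quick sympy sketch of the two relations above; it returns both node ratios:

import sympy as sp

Vin, V1, V2, TF = sp.symbols('Vin V1 V2 TF')

# The two relations written above: V1 = Vin - V2 and V2 = V1*TF
sol = sp.solve([sp.Eq(V1, Vin - V2), sp.Eq(V2, V1 * TF)], [V1, V2], dict=True)[0]

print(sp.simplify(sol[V1] / Vin))   # -> 1/(TF + 1)
print(sp.simplify(sol[V2] / Vin))   # -> TF/(TF + 1)

Use whichever ratio corresponds to the node labelled V_OUT in the drawing.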
Then solve the transfer function TF (it looks fairly easy given that it is a Sallen-Key 2nd order preceded by a simple 1st-order filter, with a buffer between them, hence no impedance interactions).
Sallen-Key stage (help from wiki): -
That's the hard bit and H(s) just multiplies with the transfer function of the low pass filter which I'll leave you to do.
Then you'll have "TF" which you can insert into the equation I derived.
|
H: Charging a device with higher amperage, can somebody please clear up the confusion and contradictory statements?
I have a PS Vita I wish to charge with my Samsung Galaxy charger, its output is 9.0V = 1.57A which is for fast charging (correct me if I'm wrong) and thus won't handshake with the Vita so won't supply this power (again, correct me if I'm wrong) or 5.0V = 2.0A
The regular charger for the Fat PSVITA supplies 5V = 1000mA and the regular charger for the Slim PSVITA supplies 5V = 1500mA.
I posted a question on Amazon asking if I could charge the PSVita with a Samsung Galaxy charger because as far as I know it would only supply the amperage the system asks for and someone replied saying this:
This is only partially accurate. The PSV has the capacity to play while charging which normally just makes the device charge more
slowly. Using a higher capacity would mitigate this a bit, but for
standard charging it would charge the battery too quickly causing
overheating and decreased life span for the battery. It would also
lead to battery expansion (ballooning).
He had me worried a little bit, but as far as I know and according to other sources online as long as the amperage of the charger is higher than the amperage the device needs and not lower it should be perfectly fine, but maybe he is still right and charging it with a Galaxy charger would make the battery expand, overheat and have a decreased lifespan?
AI: From what I can gather, Samsung's Adaptive Fast Charging is a sort of Quick Charge 2.0, and therefore requires a QC-type "negotiation" to output the potentially damaging 9 V charging level. In default mode it will serve as a regular 5 V / 2 A power supply.
Since your PSV is designed to work with a 5 V / 1500 mA power input, it will work just fine with a more powerful (2000 mA) adapter. The concern about charging the battery "too quickly" is grossly unfounded for such a reputable manufacturer as Sony: the internal charger circuit has strict limits on how it charges the internal battery, and this current will never increase regardless of the input conditions.
As a bonus you will get the benefit of faster battery charging while playing, as someone rightfully replied to you on Amazon; just ignore the "ballooning" part.
|
H: How could using an ungrounded appliance with a grounded extension cord be a fire hazard?
I came across a tweet recently:
Don’t even THINK about using a 2-prong plug in a 3-hole slot! Use
only the required number of slots in an outlet or power strip.
Below was a picture of a burned-out grounded extension cord.
I'm hesitant to argue with anyone in the business of keeping our food, shelter, clothing and loved ones from combining with oxygen, but this seemed quite strange; I can't think of any possible way this could be a fire hazard.
The NEMA 5-15 wall receptacles in Canada are grounded by default, for reference.
AI: The Fire Dept is wrong - it is perfectly normal to plug a device with a 2-pin plug into a 3-hole socket.
Breaking the ground pin off a 3-pin plug, then plugging that into a 2-hole or 3-hole socket may produce an electrical hazard - possibility of a shock.
If a high-current load, like an electric heater, was plugged into that burned outlet, and the contacts made poor contact, that would cause the overheating and resulting fire, whether the heater had a 2 or 3 pin plug.
|
H: By replacing capacitor values with different ones in a circuit, will the circuit still work?
I'm fairly new to electronics and following this schematic for a "Truth Meter", which detects sweat from your fingertips and lights up an LED depending on the resistance of the skin, as one of my first projects. Anyway, I do not have the 10nF capacitor that I am supposed to use but I do have 22 pF, 10 uF, 0.1 uF, and 100 uF capacitors. What combination, if any, of the capacitors that I have will make this circuit work? Also, does it matter if I use 1n4007 diodes vs the 1n4001 diodes I am supposed to use? Thanks.
AI: That circuit is crap. Run away. Just from the look of the schematic, it's obvious that whoever designed this didn't know what they were doing. Some problems:
There are no junction dots anywhere. That's the standard, for good reason. Not only is it aggravating to look at for those who are used to seeing proper schematics, but it also results in ambiguity. Is the node between R3 and D1 connected to the vertical line between C2 and IC1A or is it not?
LED1 is shown with current going up.
There is no connection to the cathode of LED1 at all!
The opamp power connections aren't shown (this is giving the designer the benefit of the doubt to assume he knows power is supposed to be connected).
What are the V+ and V- voltage levels?
What is "V"? It only appears at left connected to R4.
Then there's the circuit itself:
This circuit does not "Measure skin R" as the comment states. It measures the change in skin resistance over a narrow frequency range, roughly ½ Hz to 5 Hz. If the subject's skin resistance changes more slowly than that, or if it starts out sweaty, then this circuit won't detect it.
Noise on V+ and V will end up in the signal. Perhaps they are simply hoping that noise won't be in the detectable frequency range. A little filtering would be a good idea.
Like I said, run away.
However, to answer you specific questions:
I do not have the 10nF capacitor that I am supposed to use but I do have 22 pF, 10 uF, 0.1 uF, and 100 uF capacitors.
So get some. 10 nF capacitors are readily and cheaply available from the other end of the internet. Get a range of values if you plan to be tinkering with electronics.
You seem to be asking about C1. That and C3 need to be close to the values shown for this circuit to work as intended. C1 together with R8 set the low pass filter rolloff frequency. Likewise, C3 together with R5 set the high pass filter rolloff frequency. There are ways to use different capacitor values, but then you'd have to change resistors. That changes the impedances and the gain, which also has to be considered. In short, leave R5 and R8 alone, and use close to the capacitor values specified.
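For reference, a first-order RC corner sits at \$f_c = 1/(2\pi RC)\$, so if you really had to substitute a different capacitor you would scale the associated resistor by the inverse ratio (e.g. ten times the capacitance needs one-tenth the resistance to keep \$f_c\$ unchanged); but as noted above, doing that also changes the impedances and gain.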
Also, does it matter if I use 1n4007 diodes vs the 1n4001
No, not in this case. D1, D2, and D3 are used only to make a voltage a little above V1. Just about any silicon diodes would work in that role. 1N4001 are probably what the author had in his junk box, LOL. In a real design, these would be small signal diodes, like 1N4148.
|
H: Instantaneous vector sum of three phases 120 apart is not zero?
Vector sum of equal 3 phases 120 degree apart is zero and serves as a neutral. However, when drawing the instantaneous vectors of the three phases, they sum to give a non-zero resultant vector. I have attached the animation which shows: "An ordinary three phased system in both vector form and in sinusoidal form. The black vector is the resultant space vector; a vector sum obtained by adding the three vectors. As can be seen, the space vector's magnitude is always constant".
Kindly explain the difference between the two approaches.
The source of the animation and text is this page on three-phase power conversion.
AI: Vector sum of equal 3 phases 120 degree apart is zero and serves as a neutral.
True if you are referring to the voltages or currents in a balanced three-phase system.
In this case they are referring to the space vector - a term with which I am not familiar. After a quick scan through the top of the linked article I would think of this as the direction of the sum of the rotating magnetic fields and not the voltages.
Figure 1. The vectors when U, the red phase, is close to max.
The U voltage vector at the instant shown in Figure 1 would be pointing in the same direction as the red arrow. Meanwhile the green and blue arrows would be pointing in the direction of V at 120°, and W at 240°.
The catch here though is that the arrows don't represent the voltages or currents but the resultant magnetic fields. At the instant shown the V voltage is negative so while the voltage vector might be pointing to 120° the magnetic or space vector will be pointing the opposite direction to 300°. The W space vector will also point the opposite direction to the voltage vector. The result is that the three vectors are always adding constructively.
Not only that, but they always sum to exactly the same magnitude. This is one of the beauties of 3-phase systems; the load on the generator is constant and the torque of the motor is constant through the full cycle.
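If you want to convince yourself numerically, here is a tiny sketch that models each winding's field as its instantaneous current times a unit vector along the winding axis (a simplification, but it captures the geometry); the magnitude of the sum comes out the same at every instant:

import numpy as np

theta = np.radians([0, 120, 240])              # physical orientation of the three windings
axes = np.stack([np.cos(theta), np.sin(theta)], axis=1)

for wt in np.radians([0, 30, 77, 200]):        # a few arbitrary instants of the electrical cycle
    currents = np.cos(wt - theta)              # balanced three-phase excitation
    resultant = (currents[:, None] * axes).sum(axis=0)
    print(round(float(np.linalg.norm(resultant)), 3))   # always 1.5x the per-phase peak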
Let me know if that helps or if further clarification is required.
From the comments:
The key here is that the phase of each magnetic field is the sum of the electrical phase (from the supply) and the mechanical phase (the angular spacing of the three different coils in the motor). – ajb
Agreed.
If we assume them to be voltages, since they also vary sinusoidally, each vector changes its length between positive and negative values. Doesn't that vector sum hold for 3-phase voltages too? – Speaker Noir
You can't. They're not. I think part of your problem and reason for asking the question is that you are confusing phasors and magnetic vectors.
Wikipedia's article on Phasor explains it fairly well in the opening paragraph.
In physics and engineering, a phasor (a portmanteau of phase vector), is a complex number representing a sinusoidal function whose amplitude (A), angular frequency (ω), and initial phase (θ) are time-invariant. It is related to a more general concept called analytic representation, which decomposes a sinusoid into the product of a complex constant and a factor that encapsulates the frequency and time dependence. The complex constant, which encapsulates amplitude and phase dependence, is known as phasor, complex amplitude, and (in older texts) sinor or even complexor.
Figure 2. The phasor representation is a mathematical trick to represent the sinusoidal voltage waveform. A voltage measurement is one-dimensional but the 2D representation as a phasor allows us to represent it as having constant magnitude (the peak voltage) but varying phase angle and, in this example, taking the cosine of the phase angle gives us the instantaneous voltage. Image source: Phasor.
So, the phasor is a mathematical tool for representing sinewaves. It is not a 2D vector in the real world.
Figure 3. A three-phase motor has its windings oriented in 3D space. Since there is no change in orientation along the axis of rotation we can represent it in 2D. Image source unattributed.
Now note the difference. In this diagram, and in your space vector diagram, we are discussing true vectors in the real 3D world. There is a real magnetic field rotating in the motor and it is the sum of the three individual phase magnetic fields.
In summary: phasors (PHASe vectORS) are a tool for representing sinusoidal electrical voltages and currents while space vectors are for representing magnetic fields in 3D space.
By the way, +1 and thank you for the question. It made me reappraise my understanding of phasors and the concept of the space vector.
|
H: Is it normal/common to use this way of connecting point to point wire?
I want to make a circuit and am uncertain if the following is feasible/common and if there are better solution.
From the pin headers (the light blue circles), e.g. GndIn (upper yellow circle), I want to use 24AWG stranded wire to three other places on my protoboard (and one other), see the lower yellow circle.
Should I:
Solder it like in the picture (one pin header with three wires soldered directly to the pin header); I can imagine it's a bit hard to solder three wires at the same time, so maybe I should join them first and solder them as one wire.
Make a vertical solder line running three pins below the GndIn pin header and solder each wire to a separate protoboard hole? However, this takes more space.
Use another solution?
And I actually have the same question for IC pins (not clearly shown in my example picture).
The protoboard I use is this type:
AI: With so many unused holes, you can be more creative in routing jumper wires to holes and expand the number common pads. Use the adjacent row if necessary.
Just route then as neat as you can, like a PCB layout without clusters of overlapped wires to avoid signal crosstalk and so solder joints can be clearly inspected.
It doesn't take any longer and if the wires are tight and routed tight or bent in right angles, it will look better. Snake wiring looks a bit suspect to prospective clients or employers. Flush rectangular routing without overlap looks well planned.
Trained assemblers will use instant adhesive dots sparingly to prevent long loose wires. ( as long as you know it is permanent)
Don’t use excess solder.
Get the right solder temp to allow you to solder in 2~3 seconds by preheating and then add solder wire in a smooth sequence then release.
Magnet wire is popular for thinner appearance and I just burn thru the varnish without inhaling or use a fume extractor.
|
H: Calculate how much heat an IC produces
I'm building a device containing a large number of ICs. Now, I'm 99% certain heat won't be an issue. (Considering the volts and amps involved, I doubt you could accurately detect the amount of heat involved.) But, just for argument's sake, how would you go about calculating this stuff?
My first thought was to look at the datasheet. But it doesn't seem to say anywhere "this chip will produce X units of heat in normal operation". (Presumably because there's too many different variables that affect it, so they can't easily come up with a definitive number.)
The only relevant thing I can see is a section on "thermal resistance". If I'm understanding this correctly, this is a measure of how quickly any heat generated would be able to escape the casing. (Presumably depending on how hot inside vs how hot outside; it seems to be expressed in units of °C/W.)
Clearly thermal resistance is part of the equation. But without knowing how much heat per second the IC produces in the first place, I'm not sure where to start with this.
AI: Here is a rather simplified answer: with few exceptions (like light-emitting or RF-radiating devices), electronic circuits convert all incoming power into heat. So, if you measure the power consumed by your assembled device, you get a pretty good approximation of the heat to be dissipated.
Of course, if you want to calculate it in advance or predict temperature in various conditions you need all those things described in @SpehroPefhany's answer.
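Once you know (or have measured) the power, the datasheet's thermal resistance turns it into a temperature rise: \$T_J = T_A + P\cdot\theta_{JA}\$. As a made-up example, a part dissipating 0.5 W with \$\theta_{JA} = 60\,°\text{C/W}\$ in a 25 °C ambient sits at roughly \$25 + 0.5\times 60 = 55\,°\text{C}\$.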
|
H: Protecting voltage sources against pushing or shorting into each other
I am designing a PCB where there is a load with peak power of 15 Watts to be powered from a user selectable voltage source.
There are a 35 V and a 24 V source, and only one of the voltage sources should be active.
The user should be able to choose this, so I decided to go with some jumpers (2 in parallel to increase current capability). Now the problem is, if the user leaves all the jumpers in, the voltage sources will push against each other and for sure something will go KA-BOOM!
I can only think of adding two diodes for each source, in case the user puts the wrong jumpers in.
The problem is, I do not know what rating these diodes should have. Should the diode have a power dissipation rating the same as the load? What characteristics should the diodes have for this scenario? E.g. should they be fast like a Schottky, or will just a rectifier diode or a Zener do?
I think the 1N4007 is not the correct choice, as the datasheet says 1 A is the maximum. My load can draw up to 5 A but usually sits around 2 A.
Here is my draft schematic:
AI: The rated maximum forward current of the diodes should be higher than the maximum current to your load. The maximum reverse voltage of the diodes should be greater than 35V.
If your load can draw up to 5A then it must have a power consumption of at least 120W, or you have misled us somehow.
|
H: Would a bridge rectifier allow to detect µs-scale pulses of either polarity
The incoming signal is ground-referenced and consists of short (a few µs each) pulses at >150V. The signal is at ground potential the rest of the time (the "duty cycle" is low), and is of low impedance (< 100 ohms).
To adapt this signal to my 3.3V MCU, I've used a voltage divider + NPN like this:
simulate this circuit – Schematic created using CircuitLab
This works fine, but assumes positive polarity on the signal. I need to make the circuit more flexible and accept negative signals as well - I want to get a logical 0 whenever the signal's absolute value is above some threshold (e.g. abs(V_signal) > 110±20V).
I'm thinking about using a diode bridge after the voltage divider, but I'm worried that the divider + the diodes' junction capacitance will form a low-pass filter, which in turn would attenuate the short signals too much. I'm not exactly sure how to compute how bad it would be though.
I can also place the bridge rectifier in front of the voltage divider, which isn't ideal (the bridge rectifier will need to be high voltage rated, up to 400V in this case). But this will likely work.
Maybe other approaches (that don't use a rectifier) are also possible.
How to approach this conversion? Are there any caveats?
AI: A bridge won't work for you because your input signal is ground referenced.
Consider mirroring your circuit off the 3.3V rail with a PNP and then combining the two signals with a couple of gates.
|
H: Transmission Line Effects
My question has to do with the rule of thumb we generally use when dealing with transmission lines.
We say that if the length of the line is 10% or less of the wavelength, we could neglect the effects of the transmission lines—and that makes sense if we look at from the perspective of the time delay it takes the wave to travel along a short vs a long line.
But when looking at it from the perspective of the input impedance equation, the 10% rule of thumb doesn't always hold. For example,
$$ Z_{in}=Z_o\dfrac{Z_L+jZ_o\tan{(\frac{2\pi}{\lambda}L})}{Z_o+jZ_L\tan{(\frac{2\pi}{\lambda}L})}$$
For some values of \$Z_o\$, and \$Z_L\$, (with \$L=0.1\lambda\$), you don't necessarily get an input impedance close to the value of the load one (which I think would mean we could ignore that the TL is even there).
Take for example \$Z_o=50\$ and \$Z_L=300\$, with \$L=0.1\lambda\$; then \$Z_{in}\approx 23-j64\$. The line does transform the impedance seen by the source even though it is "short" as per the 10% rule of thumb.
Also, even if the line effects were negligible somehow, the reflection coefficient would still be nonzero since it's defined by:
$$\Gamma =\dfrac{Z_L-Z_o}{Z_L+Z_o}$$
What would be the effect of having a nonzero \$\Gamma\$ even when the line effects are considered negligible? (Hopefully this makes sense!)
What am I missing here?
Thanks
AI: Also, even if the line effects were negligible somehow, the reflection coefficient would still be nonzero since it's defined by:
$$\Gamma =\dfrac{Z_L-Z_o}{Z_L+Z_o}$$
What would be the effect of having a nonzero Γ even when the line effects are considered negligible?
This is exactly what you should expect. The reflection coefficient does not go to zero when the line length goes to 0.
If you have a generator with impedance \$Z_0\$ driving a load with impedance \$Z_L\$, with no line in between, the voltage across the load will not be equal to the generator's nominal voltage (the voltage it would drive on a matched load), indicating the presence of both forward and backward waves.
For some values of \$Z_o\$, and \$Z_L\$, (with \$L=0.1\lambda\$), you don't necessarily get an input impedance close to the value of the load one (which I think would mean we could ignore that the TL is even there).
Take for example \$Z_o=50\$ and \$Z_L=300\$, with \$L=0.1\lambda\$; then \$Z_{in}\approx 23-j64\$.
This equivalent input impedance doesn't look very close to 300 ohms.
But consider, if we drive a 300-ohm load directly from a 1 V 50-ohm generator, the voltage across the load will be 0.86 V.
If we drive the composite load (300 ohms at the end of a 0.1-wavelength line) with a 1 V 50-ohm generator, the voltage at the output of the generator will be about \$0.61-0.34j\$, which has a magnitude of 0.70.
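A quick numerical check of those figures (just a sketch, re-using the input-impedance formula quoted from the question):

import numpy as np

Z0, ZL, frac = 50.0, 300.0, 0.1                # 0.1-wavelength line
bl = 2 * np.pi * frac
Zin = Z0 * (ZL + 1j * Z0 * np.tan(bl)) / (Z0 + 1j * ZL * np.tan(bl))
print(np.round(Zin, 1))                         # ~ (22.9 - 63.6j) ohms

Vg, Zg = 1.0, 50.0                              # 1 V generator with 50-ohm source impedance
print(round(abs(Vg * ZL / (Zg + ZL)), 2))       # load driven directly: ~0.86
print(round(abs(Vg * Zin / (Zg + Zin)), 2))     # same load through the 0.1-wavelength line: ~0.7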
When you're used to working with a computer that does calculations to 10 digits, 0.70 doesn't seem particularly close to 0.86 (it's about a 20% error). But if you were working with a slide rule like early RF engineers did, this error might not be the biggest one in a complex calculation.
Practically, you might only know your \$Z_0\$ to \$\pm\$10% and your load impedance (at a particular frequency in your operating band) to \$\pm\$10%, and you might not know what reactive parasitics are associated with the load, so trying to calculate the load effect more accurately than this would not be sensible anyway.
Of course if your application requires greater accuracy, you are free to adopt your own rule of thumb, such as only ignoring transmission line effects when the line length is less than 1/20 or 1/50 of the wavelength.
|
H: How to charge a 48V battery (12V battery * 4 in series connection) with a 12V charger?
I found a question about charging the battery with a 12 volt charger. However, I'm confused as to how to do this.
First method, unhooking and charging each battery individually?
But wouldn't I just be shorting a battery out if I tried charging one at a time while they are still wired in series?
Would charging the whole series only yield a 12 volt charge on the series? I was thinking of just buying a proper 48 volt charger.
Charge 48V battery bank with 12V.
AI: The easy (and proper?) way to charge a 48 volt battery bank is to use a 48 volt charger.
If you only have a 12 volt charger, you can charge the individual 12 volt batteries one-at-a-time without rewiring anything - your charger's negative terminal should not be connected to "Ground".
You could also use four separate 12 volt chargers, each one charging one of the four 12 volt batteries making up the 48 volt bank, as long as the outputs of the chargers are not connected to each other except at the batteries.
|
H: Can I sum up multiple, alternating series capacitors and resistors like this?
When the components are interleaved with each other like this, can I still sum up the multiple, alternating series capacitors and resistors as shown? Thanks.
AI: Since they share the same current, the end-points cannot tell the difference in an equivalent circuit.
Only the actual voltage drop across each individual part, and hence its power dissipation and heat rise, may be different.
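For the record, the combining rules are the usual series ones: the resistances simply add, \$R_{eq}=R_1+R_2+\dots\$, while the capacitances combine as reciprocals, \$1/C_{eq}=1/C_1+1/C_2+\dots\$ (equivalently, their impedances \$1/j\omega C\$ add), and the order in which the parts are interleaved does not matter.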
|
H: Why we can't copy car remote patterns?
I have a set-top box remote that I can use to control the set-top box as well as the TV by using the set button on my STB remote. That remote can copy any remote's IR pattern, but when I tried it on a car remote it didn't work. Do car remotes use a different type of radiation?
AI: Do car remotes use a different type of radiation?
Yes.
TV remotes use infrared transmission. On many the LED is visible at the end of the remote. You can test the operation of an infrared LED using your camera phone as they are responsive in the infrared region.
Figure 1. IR LED testing using a digital camera. (Image mine.)
Car remotes use one-way radio signalling.
Figure 2. A Peugeot 307 remote. Note there is no LED visible, but there is what appears to be an inductor (top of PCB) and an antenna (the outer loop). Random image source.
How do they differ?
Infrared requires line of sight to the receiver or enough reflective surface to bounce the infrared light. Point the remote into your hand and the television will not respond.
Your car's remote control will work inside your pocket. This is the biggest clue that it is not using light as a transmission medium.
Your programmable remote control is for infrared remote only. It will not be able to receive radio signals from your car key.
|
H: What purpose is to place pins at bottom on STM32 discovery boards
What is the purpose of having the pins (pin headers) on STM32 development boards placed at the bottom? Arduino and similar boards have pins on top, so you see the top of the board and connect external peripherals with connectors that point up. But on the STM32 Discovery (like https://www.st.com/en/evaluation-tools/32f429idiscovery.html) the connectors point down. So you can see the on-board peripherals like the LCD display on top, but the connectors point to the opposite side. Because on some boards the connectors have two rows, you can't place them on a breadboard.
Why do STM32 boards have connectors pointing down; what is the advantage of that?
AI: The advantage is that you can stack boards downwards. If the connectors were on top the first add-on board would cover the display.
Arduino does not have a display, so there is nothing that has to be visible that would be covered.
Having all components on the same side (like Arduino) makes manufacturing easier and cheaper. In the case of the discovery board you could not, for example, use wave soldering to mount the connectors.
|
H: Electrical properties of a stack of strip board connected by through-hole pins/sockets
I'm aware that breadboards have a limit of 1-2MHz. I think this is because of capacitance.
Are there similar limits for a stack of stripboard connected by through-board sockets/pins? Would breaking all the tracks, to make them as short as possible, help?
AI: A 200 MHz 10:1 probe is limited to roughly 20 MHz with a long ground clip due to excessive ringing, but with a short clip it's OK, and better still with no clip at all, using the tip and ground ring directly.
I have run analogue logic with harmonics into the 200 MHz region too, but localized current loops must be kept small, so decoupling is needed along with attention to layout details, since there is usually no ground plane.
For high-speed signals I preferred twisted-pair 24 AWG wire-wrap wire. Twisted at 8~16 turns per inch it gives a controlled impedance of roughly 75 to 150 ohms, which reduces crosstalk and ringing and works well on protoboards. For high-impedance logic like CD4xxx, though, it also adds capacitance, which can slow rise times, add propagation delay and reduce speed. Single jumpers can still look neat with precision tight corners and flush-cut ends; I would keep a jar of them to choose from.
But if you look at modern DVD players, some use single sided boards.
That said, none of the above means you will have success above 1 MHz on a breadboard. Even a professional design on a 6-layer PCB can fail from crosstalk, ringing on long tracks and split grounds.
Final comment
So ultimately it is not the board but rather the user's skill at understanding physical wire inductance, EMI, crosstalk, E-fields and H-fields that limits performance.
A rule of thumb is to keep jumpers well under 2 cm if speed matters, and to use twisted AWG24 pairs made from thin-insulation wire-wrap wire.
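To put a rough number on that rule of thumb (my own estimate, assuming about 1 nH/mm of wire inductance and roughly 10 pF of load plus stray capacitance): a 2 cm jumper contributes about 20 nH, so
\$f_{ring}\approx\dfrac{1}{2\pi\sqrt{LC}}=\dfrac{1}{2\pi\sqrt{20\text{ nH}\times 10\text{ pF}}}\approx 356\text{ MHz}\$
which is only comfortably out of the way if the significant harmonics of your edges stay well below a few hundred MHz. Longer jumpers or more capacitance push that resonance down toward the band you actually care about.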
other info
If you want to run 200 MHz it would have to be in a small zone with good RF decoupling and it will have some unintentional radiation, but OK for proto's.
From my experience, professional designers will design, build and test on an in-house etched board same day rather than use a protoboard and get it working usually 1st time with minor value tweaks. This would be to evaluate a new chip.
Then a complete board design would be turned around in 2 to 5 days on Getek or FR4, or 2 weeks for ceramic hybrids. For the quick in-house boards it was either iron-on (toner-transfer) artwork for op-amp circuits or same-day lithographic film prints for RF: etch it and make it work the same day.
|
H: Line out cannot drive audio isolator
I have a cheap Bluetooth/FM radio module connected to a TDA7942. Unfortunately it passes all kinds of disturbances it picks up from the power supply to its line out.
I thought to connect an audio isolator to it, but it appears that the line out is not strong enough to drive this; I lose all low frequencies through it. The DC resistance of the audio isolator is 130 Ohm. If I connect my phone earphone output directly to the audio isolator, there is no discernable loss of quality.
Is there some kind of line driver (circuit) that I can put in-between?
AI: "[radio] passes all kinds of disturbances it picks up from the power supply to its line out."
Trying to "fix" electrical noise after-the-fact is like putting a band-aid on a knife wound... totally ineffective. It is much better to tackle the source of the noise directly.
"[it] appears that the [radio] line out is not strong enough to drive this [audio isolator]; I lose all low frequencies"...
Forget about the isolator for now. The TDA datasheet states an input impedance of 60kΩ, which is not much load at all; the radio should have no problem driving that directly. I suggest investigating the following:
Check the cleanliness of the power going into the radio. Most good multimeters, on AC volts mode, will show noise up to 100 kHz or so. If you see >0.010 V AC across the radio's DC supply, then that is a likely source of noise. Of course, an oscilloscope would be the best tool to measure this. Solution: you might be able to filter out power supply noise by adding a 0.1 µF and a 10 µF, perhaps even a 100 or 1000 µF, "bypass" capacitor across the power right at the radio.
The TDA datasheet also states a minimum SVRR (supply voltage rejection ratio) of 40dB, which means noise on the TDA power supply can be audible. The solution here is more complicated, as the TDA creates large power transients, and at much higher frequencies, so is very demanding of bypass caps. "Low-ESR" types are pretty much required. The TDA should already have several large bypass caps... if not, add some and measure the AC across them.
If using two different power sources which are not fully isolated, current can flow between their ground leads, resulting in a "ground loop." This current usually results in "hum", but can cause other effects (up to the destruction of one or both devices). Solution: try to power both from the same source. Easy way to test: power each from batteries.
Long, unshielded cabling between the two can result in electromagnetic interference (EMI) induction. This is where the wires essentially act as antennas and pick up radio-frequency (RF) noise from the area. Solution: use shielded wiring for long runs and sensitive inputs. Short runs (a hand-width or so) should not matter; however, you are near an RF transmitter (the Bluetooth device).
In section 6.3 Input Impedance and Capacitance (assuming the TDA7492), did you place a 470nF (preferably ceramic or polyester) cap between the radio output and TDA input?
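On the lost low frequencies specifically: any series coupling capacitance forms a high-pass filter with the load it drives, so the corner frequency depends strongly on that load (a rough illustration using the values mentioned above, plus one assumed value of my own). Into the TDA's 60 kΩ input with a 470 nF cap:
\$f_c=\dfrac{1}{2\pi R C}=\dfrac{1}{2\pi\cdot 60\text{ k}\Omega\cdot 470\text{ nF}}\approx 5.6\text{ Hz}\$
But into the isolator's 130 Ω winding, if the radio's (unknown) output coupling capacitor were, say, 1 µF, the corner would land near
\$f_c=\dfrac{1}{2\pi\cdot 130\ \Omega\cdot 1\ \mu\text{F}}\approx 1.2\text{ kHz}\$
which would explain why the bass disappears when the isolator is driven from the line out but not from a low-impedance headphone output.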
|
H: Confusion with understanding the fixed bias circuit
The figure below shows a bipolar transistor in the fixed-current-bias configuration. A text says that this topology is independent of β:
What I understand is that, since Vcc, Vbe and Rb are constant, the base current becomes fixed as:
Ib = (Vcc-Vbe) / Rb
As we see in the above formula, since all three variables Vcc, Vbe and Rb are constant, Ib is constant and hence fixed.
My confusion is the following:
Imagine we now replace the transistor with one of the same type but with a different β; will Ic change?
Thought 1:
I'm asking because I guess we can say that after the transistor is changed, Vbe will not change(?). And according to the Ebers-Moll equation, Ic will not change since Vbe does not change (Ic is determined by Vbe).
Thought 2:
But if we think again: after replacing the transistor with one of the same type but a different β, Vbe will not change, which means Ib stays fixed at the same value as well. But now the new transistor has a different β, and Ic = β × Ib. So this tells us that Ic will change.
Which thought above is correct and where am I making the logical flaw?
AI: The Ebers-Moll equation does account for changes in \$\beta\$, but it uses the parameter \$\alpha\$, where
$$\alpha = \frac{\beta}{\beta + 1} = 1 - \frac{1}{\beta + 1}$$
We often say that \$V_{BE}\$ is fixed in this circuit because the changes in \$V_{BE}\$ are usually small compared to \$V_{CC}\$. It follows that \$I_B\$ is (essentially) fixed since it is determined by Ohm's Law and the voltage drop across RB.
Your understanding in Thought 2 is therefore correct: if \$\beta\$ (or \$\alpha\$) changes, then \$I_C\$ will change significantly, causing \$V_{CE}\$ to change correspondingly.
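A quick worked example with made-up but typical values (not taken from your schematic): suppose \$V_{CC}=12\text{ V}\$, \$V_{BE}\approx 0.7\text{ V}\$ and \$R_B=113\text{ k}\Omega\$. Then
\$I_B=\dfrac{12-0.7}{113\text{ k}\Omega}\approx 100\ \mu\text{A}\$
With \$\beta=100\$ this gives \$I_C\approx 10\text{ mA}\$; swap in an otherwise identical device with \$\beta=200\$ and \$I_C\approx 20\text{ mA}\$. Fixed bias fixes \$I_B\$, not \$I_C\$, which is exactly the conclusion of Thought 2 and why this topology is poorly stabilized against \$\beta\$ spread.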
|
H: THROUGH-HOLE PLATING problem
I'm making an STM32 board at home.
How can I plate through-holes at low cost at home?
Is there a low-cost through-hole plating kit?
AI: Olin Lathrop has mentioned the only likely solution. The pieces are essentially hollow copper rivets with very thin walls. The holes are drilled oversize, the rivets inserted and then flared by striking a punch (very carefully). As you might expect, this is not exactly practical for a board with more than a dozen or two holes.
There is a chemical approach, such as is discussed here, and there is a YouTube video you can find. Essentially, you dip the pre-drilled board into a solution which coats the board. The board and solution are then baked; the solution pyrolyzes and leaves nanoparticles of copper on the hole walls (and over the entire board). The pyrolysis products get washed away, and electroplating is used to thicken the plating in the holes. Resist is then applied and the board etched.
Two problems: first (and least) is drilling/registering the holes. You can apply a marking system, drill, and then remove the marking for the electroplating step. The problem comes when you attempt to lay down resist on the plated hole pattern, and getting the pads perfectly registered over any sort of board size can be a real challenge. And drilling the holes requires sharp carbide tips - ragged hole walls and edges will not work.
But that's the small problem. If you're in the US, the big problem is that the active ingredient, calcium hypophosphite, is a DEA List I chemical. That doesn't mean you can't buy it - you can, it's even on eBay. The thing is, it requires careful attention to paperwork. And just ordering it on eBay from China may work, or you may get a knock on the door and find yourself in big, big trouble. Can you say "meth lab"? I knew you could. So can the Feds. At the least, unless you are a certified researcher you will have an uphill battle establishing your need for the stuff.
|
H: How to make BJT/MOSFET work in RF?
I am trying to design an amplifier with a transistor model. I followed some basic tutorials online (http://microwave.eecs.utk.edu/ECE545_files/02_Lab_2.pdf) and designed a transistor model for a BJT/MOSFET. I am stuck on making the amplifier work at RF frequencies.
For the MOSFET/BJT I fixed a bias point (for maximum transconductance in the MOSFET; fixed β, VBE, IC and IB in the BJT), then designed the biasing circuitry and ran an S-parameter/AC simulation. I see the S21/gain dropping to 0 dB near 150 MHz. I guess this has to do with the transit frequency.
I suspect one of two mistakes: either I am using a low-frequency model or improper circuitry, and I'm not sure which.
My circuit looks like the one below for BJT:
AI: From the datasheet of the 2N3904, maximum RF performance (current-gain-bandwidth product fT = 300 MHz) is specified at Ic = 10 mA DC and Vce = 20 V DC. This is not the case in your circuit.
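As a rough sketch of how to reach that operating point with the simple fixed-bias topology shown (example numbers of my own, assuming \$V_{CC}=20\text{ V}\$ and a nominal \$\beta\approx 100\$; a practical design would add emitter degeneration or feedback rather than rely on \$\beta\$):
\$I_B=\dfrac{I_C}{\beta}=\dfrac{10\text{ mA}}{100}=100\ \mu\text{A},\qquad R_B=\dfrac{V_{CC}-V_{BE}}{I_B}=\dfrac{20\text{ V}-0.7\text{ V}}{100\ \mu\text{A}}\approx 193\text{ k}\Omega\$
and the DC collector load must be small (or replaced by an RF choke) so that \$V_{CE}\$ actually sits near 20 V at 10 mA.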
|
H: Does the Si4731 / Si47xx provide the necessary info to fetch left/right demodulated audio levels?
I'm working on a project which uses an Si4731 broadcast radio receiver IC (Si47xx family) and which calls for left & right demodulated audio level meters based on a source FM broadcast signal. I expect this to be available via I2C from the IC. However, I've read through the manufacturer's programming guide (linked above) like five times now and I'm still not sure.
The closest thing that I've found to this in the above-linked guide is the TX_ASQ_STATUS (Audio Signal Quality) property, but I don't see exactly how it would apply here. I'm already reading the FM_RSQ_STATUS (Radio Signal Quality) property and using that to get the Radio Signal Strength (for RSSI), Signal-to-Noise Ratio (SNR), and FM "Pilot" indicator via I2C. Perhaps it's in there somewhere and I'm overlooking it?
Can someone with more experience with radio receiver ICs please help me understand this better?
AI: I do not believe it does, but you can just process the digital audio stream to derive whatever style of metering you desire.
Audio metering is a can of worms: there are literally dozens of different standards worldwide - VU, PPM (but which scale?), DPM (but what dynamics?), Loudness (there are at least three) - and nobody is going to bother with all that in a simple radio chip; it is far better to leave it to the customer's processor if they really want it.
Bring the I2S or whatever format makes sense into your micro and do the sums on it.
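A minimal sketch of the "do the sums" step, assuming you already receive interleaved signed 16-bit stereo samples over I2S (the buffer layout, function names and dBFS scaling here are my own choices, not anything defined by the Si47xx documentation):

```c
#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Peak and RMS level of one channel over a block of interleaved L/R samples. */
typedef struct {
    float peak_dbfs; /* peak level in dB relative to full scale */
    float rms_dbfs;  /* RMS level in dB relative to full scale  */
} level_t;

static level_t measure_channel(const int16_t *buf, size_t frames, int channel)
{
    float peak = 0.0f, sum_sq = 0.0f;

    for (size_t i = 0; i < frames; i++) {
        /* Interleaved stereo: channel 0 = left, channel 1 = right. */
        float s = (float)buf[2 * i + channel] / 32768.0f;
        float a = fabsf(s);
        if (a > peak)
            peak = a;
        sum_sq += s * s;
    }

    float rms = sqrtf(sum_sq / (float)frames);
    level_t lv = {
        .peak_dbfs = 20.0f * log10f(peak > 1e-6f ? peak : 1e-6f),
        .rms_dbfs  = 20.0f * log10f(rms  > 1e-6f ? rms  : 1e-6f),
    };
    return lv;
}
```

A real VU- or PPM-style meter would then apply that standard's attack/release ballistics on top of these raw block measurements, typically as a simple one-pole filter on the level.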
|
H: Gate and routing delays as a function of voltage and temperature
As I understand from watching overclocking videos, the maximum operating frequency of a digital ASIC is a function of voltage and temperature. Specifically, it seems that the maximum operating frequency increases with higher voltages and lower temperatures.
My guess is that gate delay and routing delay both vary with operating voltage and temperature. Is that correct? Given a process node (e.g. TSMC's 16nm FinFET+) where can I find documentation (e.g. datasheets) explaining how gate and routing delays vary with voltage and temperature?
AI: For submicron technologies, the way gate and routing delays (and hence the maximum operating frequency of a digital ASIC) vary with voltage and temperature is more complex than in classic CMOS. In particular, there is a newer effect called "ITD"; see this
academic article
One of such factors is the Inversed Temperature Dependence (ITD)
effect 1. When a circuit is operating in low voltage, the
propagation delay of a cell may decrease as the temperature increases.
So cooling of a processor does not necessarily help.
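The usual first-order intuition behind this (a simplified alpha-power-law sketch of my own, not taken from the linked article) is that gate delay scales roughly as
\$t_d \propto \dfrac{C_L\,V_{DD}}{(V_{DD}-V_{th})^{\alpha}},\qquad 1<\alpha<2\$
Higher temperature lowers carrier mobility (slower) but also lowers \$V_{th}\$ (faster). At nominal supply the mobility term dominates, so hotter means slower; at low \$V_{DD}\$ the shrinking \$(V_{DD}-V_{th})\$ overdrive dominates, so hotter can actually mean faster, which is the ITD effect quoted above.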
For exact details you probably need to contact TSMC, look up their white papers, or look for more recent papers from world SEMICON conferences.
Regarding overclocking, those days are essentially over: modern 22-14 nm processors employ local embedded LDO regulators, which may be set to constant but different voltages for different CPU blocks, so the external supply voltage has little to no effect on overall CPU performance.
|