H: Can I parallel multiple resettable fuses to achieve a higher current rating?
I have an application wherein I spec'd a fuse, rated for 80VDC 500A. The customer is concerned about what happens if they have to replace the fuse, it being rather difficult to access. They suggested resettable fuses as an option.
I'm not aware of any resettable fuses rated for those currents. However, since they have positive temperature coefficient, it occurs to me that I may be able to effectively parallel several smaller ones. Is this good practice?
AI: I assume you're talking about paralleling polyfuses- this is likely not a viable idea since the breaking capacity will be insufficient to interrupt any reasonable fault current, even if the current were to share nicely. When the last one in parallel opens it has to interrupt the entire fault current. I can almost smell the arcing..
You could consider a remotely resettable 500A DC circuit breaker, but I suspect once you get a price quote the customer will promptly reconsider how difficult it is to access a fuse that shouldn't be blowing very often anyway.
Those are rather decent fuses- low voltage drop, good interrupting capacity (3000A) and fairly widely available.
|
H: 555 timer accident
I accidentally applied around 5V to the output pin of the 555 timer. The output is zero when I ground the trigger, and an LED connected between 5V and pin 7 will not respond when 5V is applied to the threshold. Pin 8 is also connected to 5V. Is it damaged? Could it be something else?
AI: Sounds dead.
If you have another working chip you can set up a circuit with that chip (e.g. an LED blinker). If the circuit behaves as expected with that chip, then swap in the one you think is damaged and see if the circuit still behaves. If it doesn't then the chip is dead.
|
H: Inductor for Switched regulator
I'm implementing a step-down switching voltage regulator with Vin = 12 V, Vout = 5 V at 300 mA, and I followed the application circuit on page 13.
I have no doubts in choosing all the components around the regulator but how do I choose the right inductor for this application?
I have selected this inductor (SDR0604-331KL), but I don't know whether it will work well for my requirements. What influence does the switching frequency have on the inductor choice for this type of regulator?
Many thanks!
AI: For inductor selection in a buck converter, you usually start by picking how much ripple current you want to allow in the inductor. This is a rule-of-thumb kind of estimation. If you need fast transient response and smallest inductor size you could go as high as 60% ripple current. If you need low output ripple and high efficiency you could go as low as 20%. 40% is usually a good compromise.
(For the absolute smallest inductor size at low currents, where efficiency isn't paramount, you can run the converter in discontinuous mode with 100% ripple current.)
Since you need 300 mA output current, 40% ripple current is 120 mA. Your regulator switches at 52 kHz, which is a period of 19.23 µs. For a buck converter, the duty cycle is Vout/Vin (continuous conduction mode), so your on time is 8 µs. During the on time the voltage across the inductor is Vin − Vout, or 7 V. V = L·dI/dt, so your inductor value should be 7 V × 8 µs / 120 mA = 466 µH.
Next you have to make sure the inductor can handle the current without saturating or getting too hot. Check the datasheet for the inductor to make sure you're below those limits. The 330 µH inductor you selected has an RMS current maximum of 270 mA, so it's undersized for this application. The saturation rating is 360 mA, and your output current plus half the ripple current is 360 mA, so it's right on the edge for that rating as well. The inductance would be OK; you would just have higher ripple current than the 466 µH calculated above.
Finally, the core loss can be important especially at higher frequencies. This converter operates at a relatively low frequency so most magnetic materials intended for use in SMPS are probably fine. For higher frequencies you have to look at the core loss curves (if available) to see if the part is suitable for use in your design. If no core loss curves are available sometimes there's a frequency range given or an online core loss calculator tool. Otherwise you have to contact the manufacturer for core loss information.
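If it helps, here is the same sizing arithmetic as a small C sketch (values from this answer; the 40% ripple figure is the rule-of-thumb assumption):
#include <stdio.h>

int main(void)
{
    double vin = 12.0, vout = 5.0, iout = 0.3;   /* volts, volts, amps                */
    double fsw = 52e3;                           /* regulator switching frequency, Hz */
    double ripple = 0.4 * iout;                  /* 40% ripple current rule of thumb  */

    double duty = vout / vin;                    /* buck in continuous conduction     */
    double t_on = duty / fsw;                    /* switch on-time per cycle          */
    double L = (vin - vout) * t_on / ripple;     /* rearranged from V = L*dI/dt       */

    printf("on-time  = %.2f us\n", t_on * 1e6);  /* about 8 us                        */
    printf("inductor = %.0f uH\n", L * 1e6);     /* about 467 uH (466 uH above)       */
    return 0;
}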
|
H: FSK combined with either ASK or PSK?
Why is it that we are able to combine Amplitude and Phase modulation (Resulting in QAM) but are not able to do so with frequency? Why not combine all three degrees of freedom (Amplitude, Frequency, Phase) for modulation?
AI: Quadrature amplitude modulation could be viewed as a combination of phase and amplitude modulation, but it could also be viewed as overlaying two amplitude-modulated signals that are 90 degrees out of phase and have a limit imposed on the sum of the squares of their amplitudes. If one interprets the signal in that fashion, and has reference waves at "zero degrees" and "ninety degrees" phase, one can multiply the incoming signal by those two waves and filter the result to get the zero- and ninety-degree components.
While it might be possible to combine frequency modulation with amplitude modulation, many techniques of detecting frequency-modulated signals have gain which will vary slightly with the incoming frequency. Thus, even if an FM signal was transmitted at a uniform amplitude, the output of an early stage of the tuner might have an amplitude that varies in response to the modulating signal. If the amplitude of the signal at that point is going to be ignored, that's not a problem, but if one is trying to recover an amplitude-modulated signal on top of the FM signal, one would have to filter out such effects.
|
H: What is the current source for in the Schmitt trigger?
What is the current source for in the schmitt trigger? I can't understand this. Can't see any resistor either.
AI: This appears to be a misuse of the symbol for a current source. Obviously, it should be a voltage source (which sets the comparison threshold), but whoever made the diagram (engineers, marketing, etc.) was too lazy or stupid to use the correct symbol.
|
H: Recommended Mosfets for switching purposes
I'm looking for common MOSFETs to do some experimenting with. They should be able to switch at least 3A at 15V, have a low on resistance for power efficiency and also be quite cheap.
I require all four types, that is P & N channel, both depletion and enhancement mode.
So far I've found the following enhancement mode MOSFETS:
N Channel: IRLML0030 or IRLML6244 (any recommendation here?)
P Channel: IRLML9301
I think these MOSFETs will do the job. However, I'm struggling to find comparable depletion mode MOSFETs. Can anybody make a recommendation?
In general, is there a preference between using a N channel or P channel for switching purposes? I understand with a P channel the source is connected to positive while with an N channel the source is connected to ground. Practically does this make a difference?
AI: There is definitely a preference for switching with N-channel MOSFETs. There are two primary practical reasons:
Electron mobility in silicon is ~2-3x higher than hole mobility. This means that, for a given \$BV_{DSS}\$ and \$R_{DS(ON)}\$, an N-mos will be half to a third of the size of a P-mos, and therefore half to a third of the price (raw silicon cost; this doesn't account for package cost, et al).
N-channel MOSFETs switching high-voltage loads can have their gates driven by a signal that is close to ground, obviating the need for a level shifter. (After all, most control circuits are ground-referenced.) P-mos, on the other hand, require a high-side referenced control signal.
As for discrete depletion devices, those are fairly hard to come by. You'll have much better luck looking for JFETs.
|
H: IC pins too small for my breadboard
I recently bought some IC's with 8 and 16 pins.
I was used to working with bigger ones, so now I can't test them on the breadboard without soldering an individual wire to each pin (which would probably destroy the IC).
I thought about sockets, but they don't make the distance between pins any bigger.
Is there any device I can use to connect the small IC's to the breadboard?
Thank you for your help.
AI: If your IC does not come in a DIP package, you can use a break-out board like this one (Adafruit) and some pin headers and solder your SOIC-16 or TSSOP-16 package onto it to use it in a breadboard. This of course assumes the package you have bought is a 16 pin package - there are other versions of the breakout board available on the internet as well.
(Image courtesy of Adafruit.com)
|
H: Power switchers copper fill / region
When is too much copper too much when it comes to copper fills on DC-DC switchers? I am currently laying out a buck converter with a 3 A maximum output. All the literature I have scanned advises me to pour solid copper regions on the input node, the switch node (where the maximum peak currents occur in a buck) and the output. But when should I start backing off?
AI: There is no glaring downside to overfilling the input and output nodes of a SMPS (if you have room) but the switching node of the supply should not be egregiously* oversized.
This node is switching very fast (500kHz - 2MHz are common in most ICs) and usually from ground up to your input voltage so it generates a lot of high frequency noise. It is essentially a capacitor and the larger you make the plane pour, the more capacitance it has and the more likely it is to cause problems by coupling into your ground plane or high speed sensitive trace.
*On most designs I generally size it about 1.5x - 2x as big as it needs to be (and am very careful not to run any other traces under this node on any layer) for a couple of reasons listed below.
If I accidentally short the output, the trace (hopefully) doesn't pop and the circuit is usable/salvageable.
This can also protect you from throwing out the PCB due to requirement creep, if you need more output current than initially thought.
|
H: Understanding PCB layer stack-up shorthand notation [2:(1(2+345*67)*16):7]
I was given a PCB layer stackup specification like this:
[2:(1(2+345*67)*16):7]
For an 8 layer blind via setup.
Could someone provide a reference to decode this shorthand notation?
What is the proper name of this shorthand notation?
How widely used is this notation?
Update: apparently it's from Eagle CAD's DRC (design rule check) dialog.
AI: I'm reposting this from an old answer. I'm not sure this is a duplicate, because the old question specifically asks about Eagle CAD layers.
The description on the notation is below. However, it appears the notation you got is flawed. The presence of the numbers '345' and '67' can't be correct.
It doesn't seem unreasonable to assume the '345' is '3*4*5' and the same for '67' being '6*7'. The corrected specification is:
[2:(1*(2+3*4*5*6*7)*16):7]
If true, your stack up looks like this:
There are blind, buried, and standard vias.
From the EAGLE help file, (which uses the same layer notation you're dealing with):
Layers
The Layers tab defines which signal layers the board actually uses,
how thick the copper and isolation layers are, and what kinds of vias
can be placed (note that this applies only to actual vias; so even if
no via from layer 1 to 16 has been defined in the layer setup, pads
will always be allowed).
The layer setup is defined by the string in the "Setup" field. This
string consists of a sequence of layer numbers, separated by one of
the characters '*' or '+', where '*' stands for core material (also
known as FR4 or something similar) and '+' stands for prepreg (or any
other kind of isolation material). The actual core and prepreg
sequence has no meaning to EAGLE other than varying the color in the
layer display at the top left corner of this tab (the actual
multilayer setup always needs to be worked out with the board
manufacturer). The vias are defined by enclosing a sequence of layers
with (...). So the setup string
(1*16)
would mean a two layer board, using layers 1 and 16 and vias going
through the entire board (this is also the default value). When
building a multilayer board the setup could be something like
((1*2)+(15*16))
which is a four layer board with layer pairs 1/2 and 15/16 built on
core material and vias drilled through them, and finally the two layer
pairs pressed together with prepreg between them, and vias drilled all
the way through the entire board. Besides vias that go through an
entire layer stack (which are commonly referred to as buried vias in
case they have no connection to the Top and Bottom layer) there can
also be vias that are not drilled all the way through a layer stack,
but rather end at a layer inside that stack. Such vias are known as
blind vias and are defined in the "Setup" string by enclosing a
sequence of layers with [t:...:b], where t and b are the layers up to
which that via will go from the top or bottom side, respectively. A
possible setup with blind vias could be
[2:1+((2*3)+(14*15))+16:15]
which is basically the previous example, with two additional outer
layers that are connected to the next inner layers by blind vias. It
is also possible to have only one of the t or b parameters, so for
instance
[2:1+((2*3)+(15*16))]
would also be a valid setup. Finally, blind vias are not limited to
starting at the Top or Bottom layer, but may also be used in inner
layer stacks, as in
[2:1+[3:2+(3*4)+5:4]+16:5]
A blind via from layer a to layer b also implements all possible blind
vias from layer a to all layers between layers a and b, so
[3:1+2+(3*16)]
would allow blind vias from layer 1 to 2 as well as from 1 to 3.
|
H: Have some queries about Fourier Transform?
I am fond of the Fourier transform, and I have some queries about it.
In most cases, the Fourier transform of a signal is symmetric about the positive and negative frequency axes. I think the computational complexity increases, because only half of the symmetric spectrum (i.e. the spectrum excluding the negative axis) is of use. Also, while calculating in the frequency domain, we could get a wrong value of energy/power because of the spectrum on the negative axis.
In the Fourier transform formula the limits of integration run from minus infinity to plus infinity. But for a signal which is continuously or exponentially increasing with time, one can't compute its Fourier transform.
After computing the Fourier transform of a signal, we get the phase and frequency spectrum of the whole signal, which is localised in the frequency domain only. But from these spectra we don't get any spatial component features, like which frequency component is present at which time (and the same for the phase values).
If we think practically, the concept of negative frequency doesn't exist. But after computing the Fourier transform of a signal, along with the DC and positive frequency components we also get unnecessary negative frequency components.
So can anybody give an explanation for any of the above doubts?
AI: In most cases, the Fourier transform of a signal is symmetric about the positive and negative axis. So I think the computational complexity increases. Also, energy on the negative side unnecessarily gets calculated/wasted.
For real-valued signals, the Fourier transform is conjugate-symmetric about the y-axis.
However, it's entirely possible to use this information when calculating the transform (or estimating it numerically) and so there's no increase in computational complexity.
In signal processing, complex-valued signals are also considered, and when these are used then the transform is no longer conjugate-symmetric.
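For reference, the conjugate symmetry follows in one line from the definition when \$x(t)\$ is real:
$$ X(-\omega) = \int_{-\infty}^{\infty} x(t)\,e^{j\omega t}\,dt = \left( \int_{-\infty}^{\infty} x(t)\,e^{-j\omega t}\,dt \right)^{*} = X^{*}(\omega) $$
so for real signals the negative-frequency half carries no extra information.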
In the Fourier transform formula the limits of integration run from minus infinity to plus infinity. But for a signal which is continuously or exponentially increasing, one can't compute its Fourier transform.
Yes. This is essentially why the Laplace transform exists.
My experience, however, is that the Laplace transform is rarely needed for practical engineering work (at least in my area of expertise).
After computing the Fourier transform of a signal, we get the phase and frequency spectrum of the whole signal, which is localised in the frequency domain only. But from these spectra we don't get any spatial component features.
I'm not sure what you mean by this.
In image processing, people certainly do take Fourier transforms between the spatial domain and the spatial-frequency domain.
If we think practically, the concept of negative frequency doesn't exist.
Negative frequency exists if you consider complex-valued functions and use the complex exponentials \$e^{j\omega{}t}\$ as your basis set. This allows you to keep track of in-phase and quadrature components without doing separate sine and cosine transforms.
As mentioned above, practical Fourier transform calculations take advantage of symmetry and don't do any extra work to determine the negative-frequency components.
|
H: Saving the state of the LED on illuminated momentary button
I have three illuminated momentary buttons with a separate anode and cathode for the led.
(source of image)
I want a circuit that illuminates the LED of the button that was most recently pushed.
My initial idea has been to use three SR Latches created using NOR gates. Connect the S or Q of each latch to the reset of the two other latches. But then this creates the opportunity for HIGH on both S and R which is disallowed. Right?
I imagine there are other ways to accomplish this without latches too. Any help is greatly appreciated.
AI: The circuit you're looking for is called a "flip-flap-flop". It is essentially a 3-state (rather than 2-state) flip-flop. It can be implemented using a single 74xx27. It is the circuit on the right below:
|
H: Can a power MOSFET for switching application be used as a linear amplifier?
Power MOSFETs nowadays are ubiquitous and fairly cheap also at retail. In most datasheet I saw power MOSFETs are rated for switching, without mentioning any kind of linear applications.
I'd like to know whether these kinds of MOSFETs can be used also as linear amplifier (i.e. in their saturation region).
Please note that I know the basic principles on which MOSFETs work and their basic models (AC and DC), so I know that a "generic" MOSFET can be used both as a switch and as an amplifier (with "generic" I mean the sort of semi-ideal device one uses for didactic purposes).
Here I'm interested in actual possible caveats for practical devices which might be skipped over in basic EE university textbooks.
Of course I suspect that using such parts will be suboptimal (noisier? less gain? worse linearity?), since they are optimized for switching, but are there subtle problems that can arise by using them as linear amplifiers that can compromise simple amplifier circuits (at low frequency) from the start?
To give more context: as a teacher in a high school I'm tempted to use such cheap parts to design very simple didactic amplifier circuits (e.g. class A audio amps - a couple of watts max) which can be breadboarded (and possibly built on matrix PCB by the best students). Some parts I have (or I could have) available cheaply, for example, include BUK9535-55A and BS170, but I don't need specific advice for those two, just a general answer about possible problems wrt what I said before.
I just want to avoid some sort of "Hey! Didn't you know that switching power mos could do this and this thing when used as linear amps?!?" situation standing in front of a dead (fried, oscillating, latched,... or whatever) circuit!
AI: I had a similar question. From reading application notes and presentation slides by companies like International Rectifier, Zetex, IXYS :
The trick is in the heat transfer. In the linear region, a MOSFET will be dissipating more heat. The MOSFETs made for linear region are designed to have better heat transfer.
A MOSFET intended for the linear region can live with a higher gate capacitance, since fast switching is not required.
IXYS app note IXAN0068 (magazine article version)
Fairchild app note AN-4161
|
H: Saturation when using differential amplifier with op amp
I built an amplifier following this schematic:
And the gain is approximately 1000. When I use a 10mV sine wave from a function generator, it works perfectly (it also works with other signals: triangle, pulse, square, etc.). However, when I take the input from electrodes to measure ECG (2 on each shoulder, 1 ground on the abdomen), it always saturates at the op amp supply voltage (10V). How do I fix this problem?
AI: The NIOSH states: "Under dry conditions, the resistance offered by the human body may be as high as 100,000 Ohms. Wet or broken skin may drop the body's resistance to 1,000 Ohms," adding that "high-voltage electrical energy quickly breaks down human skin, reducing the human body's resistance to 500 Ohms."
Let's say that body resistance is 100k and the op-amp chosen is a TL082. This op-amp has an input bias current of 50pA and through 100kohm will generate an offset voltage of 5uV. Multiply this by the gain of 1000 and no problems - the output voltage will be 5mV and not end-stopping. However, the TL082 has an input offset voltage of 5mV so this would offset the output by 5V when gain is 1000. But remember that two op-amps are present and their offset voltages could be additive.
What about the LM324? Bias current is 40nA and with 100kohm this generates an input offset of 4mV and not surprisingly produces a whopping 4V offset after amplification of 1000. It looks like the beginning of the problem if the LM324 is used but there's another issue - the LM324 has an input offset voltage of typically 2mV so, after a gain of 1000 this might look more like 2V added to the 4V due to input bias currents. Given also that there are two amps the offset voltage could be additive making a total of 8V offset.
An LM741 has a bias current of 80nA and this, given the same rationale as above produces an 8V offset on the output. Plus, the input offset voltage is maybe 2mV thus adding another 2x 2V. Grand total is 12V offset.
My favourite quad op-amp (OP4177) has a typical offset voltage of 25uV and a bias current of 0.5nA - this would, in the configuration above produce 50mV at the output due to bias currents and 2x 25mV due to offset voltages. 100mV total offset, ahhh that's better....
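To make the comparison easy to reproduce, here is a rough C sketch using the typical figures quoted above (single-amplifier contribution only; as noted, a second stage can roughly double the offset-voltage term):
#include <stdio.h>

int main(void)
{
    double gain = 1000.0;       /* overall amplifier gain         */
    double r_source = 100e3;    /* assumed body/source resistance */

    /* typical datasheet values quoted in this answer: bias current, input offset voltage */
    struct { const char *name; double ib; double vos; } parts[] = {
        { "TL082 ", 50e-12, 5e-3  },
        { "LM324 ", 40e-9,  2e-3  },
        { "LM741 ", 80e-9,  2e-3  },
        { "OP4177", 0.5e-9, 25e-6 },
    };

    for (int i = 0; i < 4; i++) {
        /* output offset is roughly gain * (Ib * Rsource + Vos) */
        double v_out = gain * (parts[i].ib * r_source + parts[i].vos);
        printf("%s: about %.2f V of output offset\n", parts[i].name, v_out);
    }
    return 0;
}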
So, what op-amp are you using?
|
H: How does the RC time constant affect behavior of a passive integrator/differentiator?
I'm reading the first chapter of the AoE. I've come across this section on differentiator/integrator circuits and couldn't understand the math behind it.
For the first picture, it says small RC means dV/dt being much lower than dVin/dt, but I don't understand how it does this. Similarly, I'm not sure how large RC means Vin >> V. I know this may be a petty question, so please be patient with me.
AI: What they mean is that a passive R/C filter can only approximate a differentiator/integrator so long as the time constant is much slower than the signal. The reason for this is that the true behavior of an R/C and R/L circuit is exponential in time, e.g. from basic circuit theory, the general response of an RC circuit is $$V=V_0 (1-e^{-t/RC})$$ If RC is large, the exponential is close to linear for small values of \$t\$, yielding behavior close to an ideal integrator/differentiator.
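Expanding the exponential for \$t \ll RC\$ makes the approximation explicit:
$$ V = V_0\left(1 - e^{-t/RC}\right) \approx V_0\left(\frac{t}{RC} - \frac{1}{2}\left(\frac{t}{RC}\right)^2 + \cdots\right) \approx \frac{V_0}{RC}\,t $$
which is the linear ramp an ideal integrator would produce for a constant input.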
Another way to think about it is to consider the integrator case in 1.15 with a constant voltage input (e.g. Vin = 10 V). We expect the output to be a linear ramp of constant slope (integration of a constant = straight line). However, if RC is too small, what happens is that after a time of integration, V will increase due to the capacitor C charging up. This will decrease the current through R, which in turn decreases the "slope" of the output voltage. At some point, when V = Vin, the integrator stops working completely. This is how the behavior of the passive integrator deviates from the ideal integrator. Conversely, if RC is large enough, the voltage across capacitor C will never get large enough to reduce the current through resistor R, and so the current through resistor R will be approximately constant, behaving as an integrator.
Note that you can use op-amps and other active components to force the current through the resistor (in 1.15) to be constant, this is how the "ideal integrator" circuit below works:
|
H: Calculating charging current
I have following situation:
Solar panel with 20 W, two 12 V lead-acid batteries; the solar panel is charging the batteries. I measure the current coming from the panel with an ACS712 at a 3 minute interval.
The output then looks as following:
Amp;Date
1.08699;"2015-05-07 12:42:33"
1.09222;"2015-05-07 12:45:33"
1.00338;"2015-05-07 12:48:32"
0.984217;"2015-05-07 12:51:32"
1.06783;"2015-05-07 12:54:33"
1.01557;"2015-05-07 12:57:33"
0.386719;"2015-05-07 13:00:32"
I want to calculate the total ampere hours of a day.
I had the following in mind:
As the ACS712 is very noisy, I set all values below 0.10 amps to 0.
I square every value, then sum it, then divide by the amount of values I have for each day.
Then I multiply by the amount of seconds that have passed between start and end.
The result is the charging current for that day, I think. Is this a viable approach to this problem, or should I skip the squaring part?
AI: Skip the squaring. You are trying to integrate current, not power. And don't bother limiting the low values, either. Just sum the samples and multiply by the time between samples; that integral is the charge in amp-seconds, and dividing by 3600 converts it to ampere-hours.
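As a minimal sketch of that integration, using the sample data from the question and assuming a constant 3-minute interval:
#include <stdio.h>

int main(void)
{
    /* current samples in amps, one every 3 minutes, from the question */
    double samples[] = { 1.08699, 1.09222, 1.00338, 0.984217,
                         1.06783, 1.01557, 0.386719 };
    int n = sizeof samples / sizeof samples[0];
    double dt = 180.0;                      /* seconds between samples */

    double charge_as = 0.0;                 /* integral of current, in amp-seconds */
    for (int i = 0; i < n; i++)
        charge_as += samples[i] * dt;       /* no squaring, no low-value clipping  */

    printf("charge = %.3f Ah\n", charge_as / 3600.0);
    return 0;
}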
|
H: Estimate minimum discharge voltage in a laptop battery
The objective is to determine the best 18650 cells to replace the ones in my laptop battery.
To do that, I need to know the minimum voltage the battery controller accepts before ordering the OS to shutdown the laptop. So, I have discharged my battery until the laptop refuses to turn on. I opened the battery case and measured the cell voltages:
Cell#1: 3.427V | Cell#2: 3.331V | Cell#3: 3.421V
These voltages were measured under no load (battery disconnected). I think the minimum voltage the controller accepts under load must be slightly lower than the ones measured. Do you have any idea how to estimate them?
Just to be sure I am not making any other mistakes: I have estimated the discharge current around 2-3A, which means 22-33W under ~11V (typical laptop consumption). Does that seem correct?
Finally, I checked the voltage when the battery is 100% according to the controller, and I got voltages around 3.85V per cell. I found this a little odd, as I expected something closer to 4.2V. Maybe this is due to the deteriorated state of the cells?
EDIT: Apparently, the battery controller is still calibrating itself after a full discharge. Right after I measured the 100% charge voltages, on the next boot the battery percentage went from 100% to 75%. I have remeasured and the voltages are around 4.05V, which are closer to 4.2V.
EDIT2: I tested the battery under load until the controller turns off the laptop. The voltage in each cell was around 3.2V, oscillating greatly due to the "knee" region Bruce Abbott mentioned.
EDIT3: I plotted Cell#2 voltage vs. time while charging:
What Filek said is indeed correct, the controller does not go to 4.2V, it stops at about 4.06V.
AI: At 3.3~3.4V a 3.7V Lithium ion cell has practically no capacity left, and is past the 'knee' where increased current causes a large voltage drop. At this low charge state, small differences in capacity between cells also become significant, and some cells may drop their voltage before others.
In the graph below showing discharge of a typical modern 18650 cell at various currents, I have added a red dot showing where it reached 3.4V at 0.2C. At this low current the curve should be similar to open circuit - a bit lower but with a similar rapid drop. If you extrapolate and line up the discharge curves to meet at 0V then you can estimate what the voltage would be at higher current.
But what does this mean for determining "best" 18650 cells to replace the ones in your laptop? The laptop probably refused to start because the voltage under load went below the minimum it accepts. You don't know what that cut off voltage is, but you do know that the battery had practically no charge left, so it was past the 'knee' where voltage drops rapidly. So long as your replacement cells have a similar discharge characteristic they should be OK, and the exact cutoff voltage probably isn't that important.
|
H: is it possible to mix chipsets
I'm designing a home made game console motherboard, and want to integrate the ATI Mobility Radeon 9000 GPU in it. The problem is, the Northbridge I picked only supports PCI Express, and the GPU uses AGP x4. So I found a second Northbridge chip designed for AGP, but as far as I know, is not designed for my processor (Intel Dual Core). I'm thinking about attaching both Northbridges to the CPU, having the AGP one solely control the GPU, and the other control everything else. So here are my questions: 1) Is this method practical, as in, is it possible to wire everything correctly so they work, 2) will the AGP chip work without having RAM attached to it, 3) If my method won't work, is there a way of bridging the PCI Express interface to the AGP x4 interface?
My two Northbridge chips are the GL960 for the main chip, and the 855GM for the AGP. I don't want to get different chips, because I have these two on hand from scrapped laptops.
AI: That is a really bad idea. It's highly unlikely to work, as the front side bus is not designed to be split that way. Also, those two chips are designed to run at completely different FSB frequencies, so there may be little hope of putting them on the same bus even if the bus architecture supported it. The chipset is the bridge between the processor, the peripherals, and RAM. One of the main issues you would run into with that configuration is that DMA will not be possible for the graphics card, as the chipsets are designed to do DMA to the locally attached memory, not across the FSB.
A much better idea would be to get a PCIe to PCI bridge chip and then hang the AGP card off of this as a PCI card. You will take a performance hit with this configuration. Or you can try to find a different chipset that has an AGP port.
Don't bother using salvaged chips; it's going to cost you a very significant chunk of change to get the 8 to 12 layer board made up, which will completely override the cost savings of using salvaged chips. Don't even attempt this if you're not going to get a board made - the number of pins and the signal integrity requirements will come back to bite you very, very hard.
|
H: Add exposed copper to eagle board?
I am new to using eagle. I want to put some exposed copper pads that I can solder to on my board. The catch is that they cannot have through holes. I tried just making polygons in the top layer, but I am unsure if I will be able to solder to that. Any advice?
AI: You'll want to make polygons on the top and bottom stop masks. Use the "tStop" and "bStop" layers to do this, it will basically prevent the soldermask from being put on the area you specify with the polygon.
|
H: Measuring the damped sinusoidal wave of a simple LC tank circuit?
I've been trying to do the simplest of things...measure the decaying sinusoidal wave of a LC Tank circuit on a breadboard.
I've created an "inductor" out of a coil of copper wire I wrapped around a pen. I have a 1uF capacitor connected in parallel with this "inductor."
I charge the capacitor with a 9V battery, then remove the positive lead, which should produce a sine wave.
I have the ground of my o-scope connected to one lead of the capacitor, and the positive lead attached to the other lead of the capacitor.
On the o-scope (analog, mind you) I just see what looks like the capacitor discharging its current over time. (I.E. I see 9v steadily decreasing to 0v.)
What could I possibly be doing wrong with such a simple circuit?
AI: Try this:
1) Set the scope to single-sweep at a trigger level of about half of your supply voltage.
2) Connect one end of the capacitor, one end of the inductor, the scope ground, and the power supply ground together.
3) Connect the scope probe hot to the other end of the inductor, but let the cap float.
4) Arm the scope.
5) Momentarily connect supply hot to the floating end of the cap in order to charge it.
6) Disconnect the supply from the cap and, while watching the scope screen, touch the floating end of the cap to the end of the coil connected to the scope probe.
7) You should see the damped oscillation.
If you don't, make sure that your scope is triggering properly, that you've got the sweep speed and amplitude settings right, and that you've got the brightness cranked high enough to let you see a single sweep.
Just for grins, here's a link to an LTspice simulation.
|
H: Switching bjt transistor with photodiode
Initially I thought this would be a pretty simple problem, but I'm just getting started learning electronics so I would love some assistance.
This is my circuit:
I calculated the values based off this calculator: http://www.daycounter.com/Calculators/Transistor-Switch-Saturation-Calculator.phtml
This is the photodiode I am using
Basically I want to detect when the light is on, I have a visible light photodiode, and when it is on, I want it to send a HIGH signal.
Maybe I am naive, but I thought the transistor would be "ON" or "OFF", but I'm reading 1.4V at the output. I am not sure why this is, maybe to do with such a low Rc resistor value, but the reason why I chose that is because of the calculator, I didn't want a too high base resistor because otherwise it would be a too high threshold to switch. But in any case, it doesn't detect the light very well so I'd really like to understand what's going on before I just try some random values.
EDIT: Thinking about this some more, doesn't the photodiode generate current, so if I change the circuit to be like this.... random values chosen because I haven't calculated it. Would this work? It didn't when I just tried, though I'm not sure what values I need. Am I totally wrong here?
Also to clarify, the long leg of the photodiode is the end with the higher voltage?
AI: I believe the polarity of your photodiode is incorrect. Typically, you want to use a photodiode in photoconductive mode with a BJT, which means that the photodiode should have an external reverse bias. Increased optical power causes a linear increase in reverse current through the device. This reverse current can then be amplified by the BJT transistor. Also note that increased photocurrent will cause the output voltage to decrease in this case, since the current and thus the voltage drop across the 100 Ohm resistor will increase. This is an easy fix if undesirable, another transistor or inverter can be used to reverse the polarity, or it can be reversed in software if you are going into a microcontroller.
EDIT: Your circuit is still not correct, try this instead. Note that R2 is not strictly required, but should be inserted to limit base current in case you have D1 backwards.
simulate this circuit – Schematic created using CircuitLab
|
H: Is it possible to decrease the speed of a standard servo motor?
I have TowerPro SG-5010 and its speed is 0.17 sec/60° at 4.8V and 0.14 sec/60° at 6V.
Just to be sure, is it possible to further decrease the rotation speed?
AI: Yes, the given speed would be "maximum" speed.
To decrease the speed of a regular motor you would use PWM to lower the voltage.
But servo motors (usually) are driven with a pulse repeated every 20 ms, whose width indicates the target position (roughly 1 ms at one end of travel and 2 ms at the other).
The illustration below shows the difference between PWM and servo control.
The image on the right is the servo signal; sweep the pulse width gradually from one end of the range to the other, and the rate at which you step it sets the effective rotation speed.
You should check the datasheet of your servo-motor to be sure.
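A minimal C sketch of that idea; set_servo_pulse_us() and delay_ms() are hypothetical placeholders for whatever pulse-generation and delay routines your platform provides:
/* hypothetical platform routines - replace with your own */
extern void set_servo_pulse_us(int width_us);
extern void delay_ms(int ms);

/* step the commanded position in small increments so the servo
   tracks slowly instead of slewing at its full rated speed */
void slow_sweep(void)
{
    for (int width_us = 1000; width_us <= 2000; width_us += 5) {
        set_servo_pulse_us(width_us);   /* command the next small step      */
        delay_ms(20);                   /* roughly one servo frame per step */
    }
}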
|
H: Calculating different rated power from the rated parameters for a delta AC motor
A delta connected motor in the following link:
http://www.vanbodegraven.nl/en/products/ac-motors/asea-mbg-200-m-60-6/
has rated current 47A, voltage 380V
Here are the rated values. Cos(fi) = 0.8 on the label.
So the power I calculate is sqrt(3) × 380 × 47 × cos(fi) = 24747 W, which is around 25 kW.
But as you can see, the label states a rated power of 22 kW.
Why doesn't my calculation match the rated power?
AI: The rated power of a motor is the mechanical output power. That is, speed × torque.
The input power to a motor is the electrical input power. That is, line-line voltage × line current × √3 × power factor.
The motor is not 100% efficient. Some energy is lost in the conversion between electrical power and mechanical power.
Your 'missing' 3kW represents the motor's losses, including copper (resistive) loss, iron loss, friction, drag, spinning the integrated cooling fan (if any), etc.
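A quick check of the numbers with the nameplate values from the question, as a C sketch:
#include <stdio.h>
#include <math.h>

int main(void)
{
    double v_ll = 380.0, i_line = 47.0, pf = 0.8;             /* nameplate values        */
    double p_in  = sqrt(3.0) * v_ll * i_line * pf;            /* electrical input power  */
    double p_out = 22000.0;                                   /* rated mechanical output */

    printf("input power = %.0f W\n", p_in);                   /* about 24.7 kW */
    printf("losses      = %.0f W\n", p_in - p_out);           /* about 2.7 kW  */
    printf("efficiency  = %.1f %%\n", 100.0 * p_out / p_in);  /* about 89 %    */
    return 0;
}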
|
H: PORTB and PINB in avr studio while programming for Atmega8?
What is the difference between PORTB and PINB when programming an ATmega8 in Atmel Studio 6?
AI: There is a massive amount of difference. Those differences are most noticeable when the pin is in input mode (DDRxn = 0).
When in input mode:
PINxn reads the value of the IO pin as it is at the actual pin itself.
PORTxn controls the pull-up resistor.
When in output mode:
PORTxn sets the pin output level. PINxn reads that pin output level.
It's all explained in the datasheet. The simplest thing is to look at the diagram of how an IO port works:
From that you can see that PINxn gets its value direct from the IO pin itself, and PORTxn sends its output to the IO pin and also to the gate that controls the pullup resistor.
Reading from PINxn will always read the raw pin, though, and reading from PORTxn will always read the value before the DDRxn buffer. Thus, in input mode, reading PINxn will read the pin value, and reading PORTxn will read the pullup resistor state. In output mode reading either PINxn or PORTxn will return the value being driven by the PORTxn register.
So in general - if you are outputting, write to PORTxn. If you are inputting, read from PINxn.
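A minimal ATmega8 example (avr-gcc / avr-libc register names) showing the typical pattern:
#include <avr/io.h>

int main(void)
{
    DDRB  &= ~(1 << PB0);            /* PB0 as input (DDRxn = 0)           */
    PORTB |=  (1 << PB0);            /* writing PORTxn enables its pull-up */

    DDRB  |=  (1 << PB1);            /* PB1 as output (DDRxn = 1)          */

    for (;;) {
        if (PINB & (1 << PB0))       /* PINxn reads the actual pin level */
            PORTB |=  (1 << PB1);    /* PORTxn sets the output level     */
        else
            PORTB &= ~(1 << PB1);
    }
}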
|
H: Why does I²C only have pull-up resistors (interview question)?
The interviewer wanted to know why pull-up resistors are used on SDA and SCL when the opposite logic can also be implemented. Is there an explanation for why pull-up resistor usage is the chosen design?
AI: To expand on Jon's answer a little:
Yes, it is all to do with which MOSFETs you want to use.
N-channel MOSFETs are much better for switching logic than P-channel because:
They generally have a much lower on resistance (\$R_{DSON}\$)
They switch on and off faster
So for an open-drain configuration (which is what I2C is) it's much cheaper and easier to create it using an "idle high" arrangement with N-channel MOSFETs rather than "idle low" with P-channel MOSFETs.
A third option would be "idle low" using N-channel MOSFETs, but for that you require a high voltage gate driver to raise the gate voltage far enough above the source voltage for the MOSFET to switch on. Not practical for small communications buses, but this arrangement is actually used quite commonly in H-bridges for motor driving where you want to have the same (or similar) response between the high side and low side of the H-bridge. Using P-channel and N-channel pairs in an H-bridge typically means you have to incorporate a dead zone between switching the P-channel off and switching the N-channel on since it takes so much longer, and that reduces your power efficiency.
But for small communications buses like I2C where you need high speed, low cost, and simplicity of use, the "idle high" with N-channel MOSFETs and pull-up resistors is by far the most cost effective.
|
H: what to do with JTAG pins when idle?
So I have this design here where I talk via JTAG from a microprocessor to a CPLD.
The JTAG protocol is done via bit-banging of four GPIO pins.
This connection is used only very infrequently. It's just for the initial programming of the CPLD and may be used if a bitstream update is needed.
Now the question: What should I do with the JTAG interface during these idle times? Right now I have the interface pins at high-z state, but I could also drive them low or high.
What's the best practice?
AI: I would suggest referencing the datasheet for the specific device - they usually have pretty extensive documentation and try to cover most such questions. Here's what I found for my family of devices:
XC9500, XC9500XL, XC9500XV, and CoolRunner XPLA3 have internal pull-ups on TDI and TMS. CoolRunner-II devices have internal pull-ups on TDI, TMS, and TCK.
It is recommended to place external pull-up resistors on the JTAG input pins TDI, TMS, and TCK. For a single device, the resistor value of 4.7 kΩ is suggested. This value may be adjusted depending on the number of devices in the JTAG chain or the application. External pull-down termination is not recommended as it would conflict with the internal pull-ups.
|
H: VHDL - Can't see why these two architectures produce a different RTL
I have the following examples:
architecture two of andex is
begin
process(abus)
variable y: bit :='1';
begin
for i in 7 downto 0 loop
y:=abus(i) and y;
end loop;
x<=y;
end process;
end;
architecture three of andex is
begin
process(abus)
variable y: bit;
begin
y:='1';
for i in 7 downto 0 loop
y:=abus(i) and y;
end loop;
x<=y;
end process;
end;
The only difference between the two architectures is the location of the variable initialization.
The first architecture produces this RTL:
Why is there a feedback path ? y is known for the first loop iteration and every iteration there after.
I had expected it to produce an 8 bit AND gate like the 2nd architecture does.
AI: In the first, y is set to a default value of 1 at start-up time. In the second, y is set to an initial value of 1 each time the process runs.
The first carries the resulting value of y over to the next time it runs, the second doesn't but starts with y equal to 1. Hence the feedback.
In C it's the equivalent of something like this:
/* equivalent of architecture "two": y keeps its value between calls */
int y = 1;
void doSomething() {
    y = y + 1;
}

/* equivalent of architecture "three": y is re-initialised on every call */
int y;
void doSomething() {
    y = 1;
    y = y + 1;
}
In the first, each time doSomething() is called y is incremented. In the second, each time doSomething() is called y is set to 1 and then incremented, so always ends up as 2 regardless of how many times you call doSomething().
|
H: Periodic Steady State voltage Buck Converter
What does periodic steady state mean with regard to a buck converter? Does it mean normal operation?
Also, from what I read, the net change in the capacitor voltage in the buck converter is 0. Why is that? Is it because it becomes constant?
AI: Periodic steady state does mean normal operation (which obviously isn't a constant output voltage). The idea is that the dynamics of the converter have settled and all startup transients have settled.
The net change in the capacitor voltage in a buck converter must be zero or else the output voltage would be moving around, which means the circuit wouldn't be working as a DC-DC converter. So yes, I would say it is because the output DC voltage settles to be a constant (with some ripple).
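In symbols, periodic steady state means every state variable returns to the same value after one switching period \$T\$; for the output capacitor this is equivalent to zero average capacitor current over one period:
$$ \frac{1}{T}\int_0^T i_C(t)\,dt = \frac{C\left[v_C(T) - v_C(0)\right]}{T} = 0 \quad\Rightarrow\quad v_C(T) = v_C(0) $$
(The same argument applied to the inductor gives the volt-second balance used to derive the duty-cycle relation.)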
|
H: Where can I select and buy ferrite toroid cores for an SMPS?
I made a push-pull SMPS working at 50 kHz. It has up to 90-120 W of power (input voltage 12 V and output 24 V). For the transformer I am using two ferrite toroids stacked on top of each other; they are old USSR toroids with an initial permeability of 2000
(there is a link, for more information)
/ outer diameter 45 mm / inner diameter 28 mm / height 8 mm / and together 45 mm / 28 mm / 16 mm. I know that modern toroids have much higher permeability and saturation, therefore I can use a relatively small toroid. The question is, where can I buy toroids that I can use for my project? Can anyone suggest a good selector program or calculator? Thanks in advance.
AI: Modern materials are probably not vastly better than old ones for power use.
Iron powder power toroids may still be the best solution.
"Micrometals" make a good range and have design guides.
ARNOLD are also a well known brand - and I see that they are now also part of Micrometals.
You can download Micrometals inductor design software here
ARNOLD design software here
I have no vested interest in Micrometals - I used their products in production volumes long ago and found the company helpful and the product of good quality.
WARNING - Micrometals products are well enough known that other manufacturers produce lookalike products. They may not be work-alike. An important aspect of powdered iron cores is having a binder that tolerates induced thermal energy across the design lifetime. Inferior binders break down earlier (or much earlier at higher temperatures), causing core material changes, leading to increased thermal losses, leading to ... thermal runaway and destruction.
|
H: Measuring change in inductance
I have a coil whose inductance will be changing continuously. I would like to build a circuit to monitor the change in inductance continuously. Can I just use an RLC circuit, drive it with a sine wave of f Hz, and measure the voltage across the coil?
AI: It might be easier to build the inductor into a Colpitts oscillator and measure the frequency variation with inductance change.
The circuit above is very easy to get working and I have used it plenty of times in oscillators and VCOs.
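For a standard Colpitts tank, the two feedback capacitors \$C_1\$ and \$C_2\$ appear in series across the inductor, so the oscillation frequency is approximately
$$ f \approx \frac{1}{2\pi\sqrt{L\,\dfrac{C_1 C_2}{C_1 + C_2}}} $$
and a small change in \$L\$ shows up directly as a frequency shift that a microcontroller timer can count.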
Of course you can buy TI's inductance-to-digital converter: -
(source: electronicproducts.com)
But where's the fun in that?
|
H: Impedance matching for a guitar pedal
So it's me working on a pedal again. I would like to add some equalizer effects on the pedal and found a fascinating circuit "Smash Drive":
As far as I know, maximum signal with the least loading of the source is attained with a high impedance input and a low impedance output. However, I see the circuit using 500 kOhm pots up to 1 MOhm pots. Do they result in a high input impedance? I have numerous 10 kOhm pots. Are they better alternatives, with suitable capacitance values calculated to keep the same RC?
AI: In general when building stomp-boxes you want high input impedance and low output impedance.
A passive guitar pickup usually has 6 to 15 kOhm impedance.
Typical values for inputs are 1 MOhm for the input. That's what has been traditionally used in the input stage of tube amplifiers. The value is so much higher than the output of the pickup that you will see almost no change in tone by plugging your guitar into this input.
You can go lower, but the lower you go, the more the input affects the tone. I wouldn't go below 10 times the pickup impedance (so 150kOhm would be my limit).
The LM386 has an input impedance of 50 kOhm. If you directly connect your guitar to it, you'll load your pickups quite a bit. This results in a shift of the resonant frequencies of the pickup and some loss of treble. If, on the other hand, you connect this to the output of another stomp-box, you may just lose some gain.
For comparison the good old Tubescreamer effect has an input impedance of 510 kOhm.
Regarding the output impedance of a stomp-box: there is no real standard here. Some effects drive the output with an impedance of 10 kOhm. That's fine and in the ballpark of what the pickup itself has. On the other hand, some effects have an output impedance as high as 100 kOhm.
The schematic you've posted has an output impedance of roughly 1 MOhm. That is a lot. The reason for this is that there is a passive tone control in front of it. If the volume pot were of a smaller resistance, the pot would not only control the voltage but also influence the tone stack.
Overall I don't like the design. I would expect a simple buffer amplifier between the tone-stack and the volume pot. That would allow you to drop the resistance of the volume pot to something reasonable.
On the other hand I have never heard the effect, and the impedances might be just right for the tone. A good distorted guitar tone is the sum of all imperfections down the signal path after all :-)
|
H: Low Output from MIC5205 LDO Reg
I have a PCB that's designed to output 3.3V from a 5V source. The 5V source I'm currently using can supply several amps (3A), though I only need about 100mA. I based the schematic on the "Ultra-Low-Noise Fixed Voltage Application" circuit in the datasheet for the MIC5205. For some reason, when I plug in the power, I'm showing only 1.5V on the multimeter. Any ideas why this might be?
Regulator: MIC5205-3.3BM5
C1: 1uF / 16V ± 10% X7R Capacitor SMD-0603
C2: 2.2uF / 16V SMD Tantalum Capacitor ± 10%
C3: 470uF / 16V SMD Aluminum Capacitor
AI: You have a 470uF electrolytic capacitor connected to pin 4. Pin 4 is the bypass (noise reduction) capacitor pin and should be typically 470pF i.e. a million times smaller in value.
Try removing this capacitor and see what happens. I suspect that it's the hugeness of the 470uF causing instability.
Here's what the MIC5205 looks like internally: -
As you can see, the bypass capacitor is in parallel with the lower feedback resistor. Both feedback resistors are used to regulate to the correct voltage, and with such a big cap placed here the output is liable to be unstable; what you read on a multimeter might appear to be a constant DC value but is probably the output switching up and down at a few kHz.
|
H: Why does the TX pin of a USB to TTL adapter provide current?
I'm currently making a little electronic board on my own. On it, a microcontroller: an ATmega328p running with 5V. To communicate with a computer, I've planned to use an XBee, but for now I'm using simply an adapter TTL <-> USB based on an FTDI FT232RL chip (here).
The problem is, no matter whether power is connected to my board or not, as soon as I plug this serial adapter into it, a little LED used to indicate whether my board is powered lights up. After checking, the TX pin of the adapter is the culprit: almost 11 mA flows from it.
The problem doesn't seem to come from my home-made board, since the same issue occurs with an Arduino. Moreover, the programmer (a USBasp clone) heats up a bit, and I have the feeling my serial adapter is responsible...
How can I fix this? Is this normal? Couldn't it damage my microcontroller?
AI: The USB<->TTL_serial adapter is being powered from USB.
One of its main jobs is to drive the TX signal line to whatever state is appropriate for whatever communication is taking place.
If there's no comms happening, the default state for a TTL UART TX line is high (so 3.3V or 5V or whatever the device's supply is).
As a result, you're feeding this 'high' into your other circuit (ATmega, adruino, whatever).
So now what does this other circuit do with this 'high'?
Well most microcontrollery things (and many others too) will have some sort of diode arrangement between their I/O pins and their supply lines - either deliberately as ESD protection, or simply as an unavoidable result of the manufacturing process - something like this:
simulate this circuit – Schematic created using CircuitLab
So as you can see, if your USB<->TTL_Serial adapter is trying to do its job and apply a 'high' to the micro's I/O pin, due to the micro's internal diodes it will end up trying to supply power to the whole circuit. Some adapters may be more or less tolerant of this state of affairs than others...
|
H: AC-DC Adapter - Output lower than rating and what's this 'cont' output for?
I have a Canon AC adapter, Model K30321, that I'd like to use in a project. I'm attempting to test the adapter for correct voltage output.
Adapter specs:
Input Voltage: 100 - 240VAC ~ 50/60Hz 60VA 0.65A
Output: 24V === 1A
Connector: 3 pin rectangle white flat 1 [ ■ ■ ■ ] 3
(from Left to right)
Pin1 = + 24v
Pin2 = Cont
Pin3 = GND
When I connect my multimeter to pin 1 and pin3, I see only about 8V DC.
I also see the same voltage between pin 2 and pin 3.
After some experimentation, I've learned that applying a constant 3.3V DC to the control pin causes the pin 1 voltage to rise and fall periodically between ~19V DC and ~21V DC. I can also hear a clicking sound coming from the adapter when doing this.
Questions:
I'm wondering if the control pin needs a PWM applied to it to function, or maybe it's expecting something else?
Thanks for all the advice!
AI: Probably a control input to put the adapter into standby mode so the printer uses less total power when in standby.
You might be able to coerce the adapter into giving you 24V by grounding the CONT line (use a few K resistor in series, just in case).
|
H: Operating system on an ARM9 MCU
I have an ARM926EJ MCU (datasheet) that I am learning how to program for a research project. I have been researching, as well as reading a textbook on Professional Embedded ARM Development in order to learn how to program, and I have learned much about bare metal programming for my specific project.
However, I want to learn how to program with an operating system on my MCU. I intend to run a simple program, and I would like an operating system to do much of the low-level handling for me. From my research, programming in a Linux environment should considerably ease the learning curve.
My question is two part:
How do I download an operating system onto my MCU?
How do I run a program on my MCU after installing an operating system on it?
If you are interested in the details of my project it is very simple: I will use the internal clock to detect the timing of 30 ns pulses received by a GPIO pin with a rough resolution; and I will upload these recorded clock values to another MCU via SPI connection. So, I will have to simultaneously handle the GPIO pulse stream and the SPI upload connection.
AI: Your two questions depend on the operating system.
Choosing the operating system is a whole new question in itself, and that is where you might start.
As your application seems very simple, and might have real-time constraints, I suggest you analyse the possibility of implementing it bare metal.
But, if an operating system is required, take a look at FreeRTOS; it might interest you!
|
H: Solar battery charger with load sharing
I'm looking to design a circuit around the CN3065 that will be solar powered with a li-ion backup battery. I've seen circuits that can achieve this by using a charge controller with a dedicated load output, but CN3065 doesn't have one (and unfortunately I'm restricted to this chip since SeedStudio's OPL has no other charge controllers.)
The basic recommended circuit is as follows:
I'm reluctant to connect the load just over the battery terminals since this would presumably upset the charger, but most load sharing circuitry I've seen is designed for a constant input voltage rather than a solar panel:
My worry is that enough current will flow through Q1 to block current through the battery, but this will not be sufficient to drive the microcontroller attached to the load. I could place a (for example) 5V zener at the gate of Q1 to get around this, but then I'm back at the problem where I'm potentially charging the battery partially whilst drawing a load across it.
What supporting circuitry should I add to safely and sensibly connect a load to this circuit when the input power supply is given by a solar cell?
AI: I'm reluctant to connect the load just over the battery terminals since this would presumably upset the charger,
Direct connection of load to battery + charger out is not a totally terrible solution. Whether it is acceptable depends on application and circumstances.
Adding a load will drop even a fully charged battery to < 4.2V, and the charger will attempt to charge in CC (constant current) mode at whatever current it is set to (as controlled by Riset). If Icc is > Iload, the charger will raise the battery + load voltage to 4.2V, then change to CV (constant voltage) mode and maintain the voltage at 4.2V.
The CN3065 terminates charging when Icharge = C/10, where C is the programmed charging rate = 1800/Riset.
If Iload > C/10 then the charger will remain in the CV charge mode at 4.2V and the battery will be subject to a constant 4.2V. This will shorten battery life if used in this mode for long periods but may be acceptable in prototype or one-off applications.
If Iload < C/10 then the charge cycle will terminate when Ibattery = (C/10-Iload). This will also shorten battery life but less than in the previous case.
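A tiny sketch of the numbers using the 1800/Riset relation quoted above (the 3.6 k resistor is only an assumed example, not a value from the question):
#include <stdio.h>

int main(void)
{
    double r_iset = 3600.0;              /* assumed programming resistor, ohms */
    double i_cc   = 1800.0 / r_iset;     /* programmed CC charge current, amps */
    double i_term = i_cc / 10.0;         /* charging terminates at C/10        */

    printf("CC charge current   = %.0f mA\n", i_cc * 1e3);    /* 500 mA */
    printf("termination current = %.0f mA\n", i_term * 1e3);  /*  50 mA */
    printf("a load above %.0f mA keeps the charger in CV at 4.2 V\n", i_term * 1e3);
    return 0;
}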
Improved load switch control:
You could drive Q1 with a comparator that compares Vbat with Vin.
When Vin > Vbat + Vextra, Q1 is turned off, where Vextra is enough extra voltage to make up for the drop in D1.
A comparator across D1 will implement this "well enough" - when D1 conducts, current is flowing to the load from Vin and the battery can be turned off. When you turn off Q1, if the PV panel cannot support the load its voltage will drop and again enable the battery.
With this scheme (and many other load sharing schemes) there is some risk of oscillation between modes. This can be addressed with hysteresis and the addition of a degree of delay in the switching. Operating Q1 in a linear mode rather than on/off, so you get a smooth changeover, probably helps. Dissipation in Q1 will be small as the voltage differential need not be large. As Vin exceeds Vbat + V_diode_drop the MOSFET can start to turn off, and once Vin is more than about (Vbat + V_diode_drop + 0.2V) it can be fully off.
The "ideal" solution is for Q1 and D1 to be "ideal diodes" with minimal voltage drop when conducting. Almost as good is to have D1 as a diode as at present and Q1 replaced with an ideal diode. An ideal diode can be implemented with Q1 and an added opamp or purpose built "ideal diode" controller ICs are available.
These devices implement ideal diodes when used with an external MOSFET such as Q1. I'm not recommending this specific device, but it shows the principle.
|
H: Sonar single transducer element SNR vs array SNR
Background:
I'm taking a flexible learning unit on radar and sonar systems in a vocational college for an assoc. degree level course. The teacher has given us a reading assignment: "Chapter 15 Underwater Acoustics", source unknown, and we are to answer the end-of-chapter questions. I have searched online and off, unsuccessfully, for the source reference, and there are no texts on sonar systems in the college library system in my state.
My question is a general systems engineering design question, I believe:
Given a single sonar transducer element SNR (signal to noise ratio in dB) of X dB, what would be the SNR of an array of M elements?
My question relates to calculation of the array gain, AG,
where, stated in the reading material,
$$
AG = 10~ log ~{ \left \lbrace {{(S/N)_{array}} \over {(S/N)_{element}}} \right \rbrace }.
$$
I've been given the element SNR, but I have no information on the array SNR.
In the absence of any information on multiple SNR elements, I've gone ahead and guessed a possible answer.
In the problem I'm trying to solve, \$ ~ SNR_{element} = 40 ~dB \$ $$ \therefore decimal ~ratio ~ (S/N)_{element} = 10^{40 dB ~/~ 20} = 100 $$
Question now is, to get the \$ ~SNR_{array} \$, do I add or do I multiply? Taking the RMS doesn't make sense, since it is implicit.
Given M = 25 elements in the array. Raising \$ (S/N)_{element} \$ to the 25th power, i.e. \$ ~100^{25} \$ doesn't make sense.
But adding together: \$ {~(S/N)_{array} = (S/N)_{element} \times M = 100 \times 25 ~ elements = 2500 ~} \$ puts the answer in the realms of possibility:
$$ So, ~ AG = 10 ~log {2500 \over 100} = 10 ~log~ 25 = 14 dB $$
Am I close? A reference would be appreciated.
@drfried, the problem is assuming an ideal solution. The book problem is originally stated as:
If an element of an array has a signal to noise ratio of 40dB, what would be the array gain of 25 similar elements in such an array?
No further information or diagram is given.
@Andy, From the response you've provided, and a response I received on the linkedin.com "Antenna Solutions" group, https://www.linkedin.com/grp/post/2232865-6001739735423270914 , without source reference, I'll see if I understand you correctly. The answer I've been given there is \$ SNR_{array} ~=~ 10~log~M ~+~ X ~ [dB] \$. Let's see if they coincide. Apologies for the mathematical massacre that follows.
Firstly, let's replace the subscript "array" with \$ a \$ , and the subscript "element" with \$ e \$.
If I understand @Andy correctly, we can write his statements as,
$$
(S/N)_a ~=~ { {\sum\limits_{i = 1}^M S_{ei} } \over {\left( \sum\limits_{i=1}^M N_{ei} ^2 \right )^{ 1 \over 2 } } }
$$
where, for my problem, \$ S_{ei} ~=~ S_{ej} ~=~ 100~ units ~(ie~ \mu V,~ mV, ~etc.) \$ and \$ N_{ei} ~=~ N_{ej} ~=~ 1~ unit ~(ie~ \mu V, ~mV,~ etc.) \$.
$$
\therefore ~~~ SNR_a ~=~ 20~ log ~ \left \lbrace {M S_e} \over { ( M ~ N_e^2 )^{1 \over 2} } \right \rbrace
$$
$$
=~ 20~ log ~ \left \lbrace {M S_e} \over { M^{1 \over 2}~ N_e } \right \rbrace
$$
$$
=~ 20~ log ~ M^{1 \over 2} (S / N)_e
$$
$$
=~ 10~ log ~ M ~+~ 20 ~log ~(S / N)_e
$$
$$
=~ 10~ log ~ M ~+~ X ~[dB]
$$
Looks like both responses coincide. I'm inclined to accept @Andy's reasoning and experience, so I think my question may have been answered.
To finish the whole problem for posterity, as stated above, M = 25 elements and X = 40 dB.
So, \$ (S/N)_a \$ = 14 + 40 = 54 dB.
$$
\therefore ~~~~AG ~=~ 10~ log \left \lbrace {(S/N)_a} \over { (S/N)_e } \right \rbrace ~=~ 10~ log \left \lbrace {10^{54~dB ~/~ 20}} \over { 10^{40~dB ~/~ 20} } \right \rbrace ~=~ 10~log~(10^{(2.7 ~-~ 2) }) ~=~ 7dB
$$
Don't even need a calculator!!!
Thanks to all who helped.
AI: I'm going to take a stab at this though I don't know if it applies to sonar. If you have a signal received on a transducer there will be both signal and noise. If you have two transducers (in an array) it can be presumed that the same signal is received on both but the noise on each is likely gaussian and not coherent.
The upshot of this is that when you add the two signals from the two transducers the signal will double (+6dB) but the noise will add as per: -
Noise = \$\sqrt{A^2+B^2}\$ i.e. it will only increase by 3dB
This means the SNR has risen by 3dB
Each time you double the number of transducers, the SNR increases by 3dB
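Generalising the doubling argument (my addition, not part of the original answer): with M elements the coherent signal amplitude grows by a factor of M while the uncorrelated noise only grows by \$\sqrt{M}\$, so
$$ AG = 10 \log_{10} M \qquad \Rightarrow \qquad M = 25:\; AG = 10 \log_{10} 25 \approx 14\,dB $$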
Hope this helps.
|
H: How to measure input/output impedance of amplifiers in a simulation?
What is the most effective way of measuring the input/output impedance of an amplifier in a simulation?
When I know the impedance I want to measure is purely resistive, I usually set up an input signal Vin and a test resistor as a resistive divider with the desired impedance. Then I compare the voltage values of the input/output and work my math to get a number for the impedance.
Is there a better way?
I am using Orcad Capture with PSpice.
AI: Create a current source whose AC magnitude is 1A. Connect one terminal to GND, and connect the other terminal to the appropriate pin of the device under test. Do an AC simulation, and plot the voltage on that pin. Remembering that \$Z(j\omega)=V(j\omega)/I(j\omega)\$, and since \$I(j\omega)=1\$ (because you've set it to 1), then the impedance is simply equal to the voltage: \$Z(j\omega)=V(j\omega)\$.
You might be concerned that 1A is a lot of current for your circuit to handle. But it doesn't matter: AC simulations are, by nature, small-signal analyses. Namely, when doing an AC simulation, the DC operating point is computed first and the AC relationships are found without consideration to AC amplitudes -- i.e., assuming small-signal conditions. So even with a 1A AC current source, the AC analysis is still a small-signal analysis, even if your circuit couldn't handle that level of current in real life (or even in a transient simulation).
|
H: How does an AM radio filter out only the desired frequency?
I understand that electromagnetic waves in the air induce an alternating current in the antenna. I also understand how, once you filter the signal to obtain the desired frequency, you can get the envelope of the signal and drive a speaker.
What I don't understand is the bit in the middle, where the radio takes the signal from the antenna and filters out just the desired frequency. Say it's a very simple radio that only cares about a single frequency. Can you explain how this works in electronics, and how this would work if you were trying to write a radio in software based on discretely sampled data?
AI: It uses something called a filter. You can build filters out of all sorts of different things.
RC filters made out of resistors and capacitors are probably the simplest to understand. Basically, the capacitor acts as a resistor, but with a different resistance at different frequencies. When you add a resistor, you can build a voltage divider that is frequency dependent. This is called an RC filter. You can make high pass and low pass filters with one resistor and one capacitor. A low pass filter is designed to pass low frequencies and block high frequencies, while a high pass filter does the opposite. A low pass in series with a high pass forms a bandpass, which passes frequencies within some range and blocks other frequencies. Note that the operation of an RC filter (and most filters, for that matter) will depend on the source and load impedance. This is especially important when cascading simple filter stages to construct larger filters as the operation of each stage will be affected by the impedance of the adjacent stages.
simulate this circuit – Schematic created using CircuitLab
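For a sense of scale (my numbers, not part of the original answer), the -3dB corner of the simple RC low-pass above is
$$ f_c = \frac{1}{2 \pi R C}, \qquad \text{e.g. } R = 1.6\,k\Omega,\; C = 1\,nF \;\Rightarrow\; f_c \approx 100\,kHz $$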
Filters can also be made with other components, such as inductors. Inductors also act like resistors, but they change in the opposite direction as capacitors. At low frequencies, an inductor looks like a short while a capacitor looks like an open. At high frequencies, an inductor looks like an open while a capacitor looks like a short. LC filters are a type of filter built with inductors and capacitors. It is possible to make a rather sharp LC filter that cuts off quickly and is easy to tune with a variable capacitor. This is what is normally done for simple radios like crystal radios.
simulate this circuit
You can make bandpass filters out of anything that has a resonant frequency. A capacitor and an inductor in series or in parallel form a resonant tank circuit that can be used as a bandpass or bandstop filter, depending on precisely how you hook it up. An antenna is also a bandpass filter - it will only receive frequencies well that have wavelengths around the size of the antenna. Too large or too small and it won't work. Cavities can also be used as filters - a sealed metal box has various standing wave modes, and these can be exploited to use as filters. Electronic waves can also be converted to other waves, such as acoustic waves, and filtered. SAW (surface acoustic wave) filters and crystal filters both work by mechanical resonance and use the piezoelectric effect to interface with the circuit. It is also possible to build filters out of transmission lines by exploiting their inherent inductance and capacitance as well as by exploiting constructive and destructive interference that results from reflections. I have seen a number of microwave band filters that are made out of a crazily shaped piece of copper printed on a PCB. These are called distributed element filters. Incidentally, most of these other filters can all be modeled as LC or RLC circuits.
Now, a software defined radio is a different animal altogether. Since you are working with digital data, you can't just throw some resistors and capacitors at the problem. Instead, you can use some standard filter topologies like FIR or IIR. These are built out of a cascade of multipliers and adders. The basic idea is to create a time-domain representation of the filter you need, and then convolve this filter with the data. The result is filtered data. It is possible to build low pass and bandpass FIR filters.
Filtering goes hand in hand with frequency conversion. There is a parameter that you will see all over the place called Q. This is the quality factor. For bandpass filters, it is related to the bandwidth and center frequency. If you want to make a 100 Hz wide filter at 1 GHz, you would need a filter with an astronomically high Q. Which is infeasible to build. So instead, what you do is filter with a low Q (wide) filter, downconvert to a lower frequency, and then filter with another low Q filter. However, if you convert 1 GHz to, say, 10 MHz, a 100 Hz filter has a much more reasonable Q. This is often done in radios, and possibly with more than one frequency conversion. Additionally, this method makes it very easy to tune the receiver as you can just change the frequency of the oscillator used for the frequency translation to tune the radio instead of changing the filters.
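Putting numbers on that, with Q taken as the ratio of centre frequency to bandwidth:
$$ Q = \frac{f_0}{BW}: \qquad \frac{1\,GHz}{100\,Hz} = 10^7 \quad \text{vs.} \quad \frac{10\,MHz}{100\,Hz} = 10^5 $$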
In the case of digital filters, the longer the filter, the higher the Q and the more selective the filter becomes. Here is an example of an FIR bandpass filter:
The top curve is the frequency response of the filter and the bottom curve is a plot of the filter coefficients. You can think of this type of filter as a way of searching for matching shapes. The filter coefficients contain specific frequency components. As you can see, the response oscillates a bit. The idea is that this oscillation will match up with the input waveform. Frequency components that match closely will appear in the output and frequency components that do not will get cancelled out. A signal is filtered by sliding the filter coefficients along the input signal one sample at a time, and at each offset the corresponding signal samples and filter coefficients are multiplied and summed. This ends up basically averaging out signal components that do not match the filter.
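As a rough illustration of that slide-multiply-and-sum operation (a minimal sketch in C, not production SDR code - the function and variable names are my own):
#include <stddef.h>
/* Apply an FIR filter: for each output sample, slide the coefficient
   array along the input and sum the element-wise products.           */
void fir_filter(const double *coeffs, size_t n_taps,
                const double *input, size_t n_in,
                double *output) /* output length is n_in - n_taps + 1 */
{
    for (size_t i = 0; i + n_taps <= n_in; i++) {
        double acc = 0.0;
        for (size_t k = 0; k < n_taps; k++) {
            acc += coeffs[k] * input[i + k]; /* multiply and accumulate */
        }
        output[i] = acc;
    }
}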
Frequency conversion is also performed both in software and in hardware. In hardware, this is required to get the band you're interested in inside of the ADC IF bandwidth. Say, if you want to look at a signal at 100 MHz but your ADC can only receive 5 MHz of bandwidth, you will have to downconvert it by around 95 MHz. Frequency conversion is performed with a mixer and a reference frequency, generally called the local oscillator (LO). Mixing exploits a trig identity, $$\cos(A)\cos(B) = \frac{1}{2}(\cos(A+B)+\cos(A-B))$$. Mixing requires a component that multiplies the amplitudes of the two input signals together, and the result are frequency components at the sum and difference of the input frequencies. After mixing, you'll need to use a filter to select the mixer output that you want.
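Using the numbers from the paragraph above as a worked example:
$$ \cos(2\pi \cdot 100\,MHz\, t)\cos(2\pi \cdot 95\,MHz\, t) = \frac{1}{2}\left[\cos(2\pi \cdot 195\,MHz\, t) + \cos(2\pi \cdot 5\,MHz\, t)\right] $$
so a filter after the mixer keeps the 5 MHz difference product and rejects the 195 MHz sum product.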
|
H: Why do fiber-optic interfaces have much higher bandwidth than optocouplers?
I've noticed that optocouplers typically have a bandwidth in the kHz range (e.g. 200 kHz). Fiber-optic interfaces can reach speeds of 100 gigabits/sec. What's so different about these two technologies that gives fiber optics such a high bandwidth?
AI: There is really nothing preventing someone from designing an optoisolator to go that fast. Fiber interfaces are generally designed for long range communications and so they have a very different set of design constraints. Fiber interfaces can use lasers or VCSELs and possibly separate high bandwidth modulators and wavelength division multiplexing to cram huge amounts of data into a fiber and transmit it several km. Optoisolators, on the other hand, generally have to transmit low frequency signals very short distances while being very small, cheap, and robust. It's not worth the extra expense to design very sensitive devices that can transmit multiple Gbit/sec if they are only going to be running at a few kHz. There really aren't a great deal of applications for that kind of isolation and that kind of bandwidth that don't also involve sending signals at least a few meters where common telecom or datacom transceivers can be used instead.
As far as optical transceivers are concerned, just getting the data to the transceiver at those speeds is not easy. Right now the highest rate for a single wavelength that's commonly available is 25 Gbit/sec. That's about 13 GHz of bandwidth. You can't send that very far on a PCB as FR4 (fiberglass substrate) is quite lossy at those frequencies. The SERDES modules on the chips to send data at that rate take up quite a bit of space and consume a not-insignificant amount of power. The links are all current mode logic that have very small voltage swings of less than 100 mV. The VCSEL or laser at the other end has to be designed to have enough bandwidth so that it can be modulated at that frequency, which is not so easy to do. The lasers also consume significant power and dissipate most of it as heat, requiring a lot of cooling. If you use wavelength division multiplexing, then you may need to use active temperature control as well, which uses even more power and generates even more heat. All of this is expensive and requires a lot of engineering and process controls. It also takes up quite a bit of space.
Optoisolators, on the other hand, are a phototransistor and LED inside a glob of transparent epoxy.
|
H: What use is a pull-up/pull-down with a push-pull output?
For at least some STM32F4 MCUs, push-pull + pull-up/pull-down is a valid GPIO configuration, but what would you ever use it for, and why? I assume there's some saner reason than "I just really felt I needed that juicy extra 100µA of wasted current per pin"...
The configuration is listed in section 8.3 of the reference manual, on page 269:
AI: Basically the chart shows that the PUPDR bits control the pull-up and pull-down connections independent of the OTYPER bit that switches between push-pull and open drain.
Probably you would not typically set PUPDR to anything but 00 when using the push-pull configuration.
One situation where you might do so is if you were going to switch the pin between output and input functions. You might want the pull-up or pull-down to be configured before you switch to input mode to avoid the input ever being in a truly floating state.
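A minimal sketch of that output-then-input sequence on an STM32F4, using the CMSIS register names; PA5 is purely an illustrative pin choice, not something from the question:
#include "stm32f4xx.h" /* CMSIS device header */
void gpio_pullup_example(void)
{
    RCC->AHB1ENR |= RCC_AHB1ENR_GPIOAEN;                      /* clock for port A           */
    GPIOA->OTYPER &= ~(1u << 5);                              /* 0 = push-pull              */
    GPIOA->PUPDR = (GPIOA->PUPDR & ~(3u << 10)) | (1u << 10); /* 01 = pull-up on PA5        */
    GPIOA->MODER = (GPIOA->MODER & ~(3u << 10)) | (1u << 10); /* 01 = general purpose output */
    /* ... later, switch the pin to input; the pull-up is already
       configured, so the pin never floats during the changeover: */
    GPIOA->MODER &= ~(3u << 10);                              /* 00 = input                 */
}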
|
H: How does a diode valve work based on this sentence?
In a diode valve, the anode would not emit electrons when heated so when the polarity of the voltage across the tube was reversed, no current flowed. Notice how the solid state transistor works in a similar way.
Why does the anode not emit electrons?
Why does no current flow when the polarity of reversed? Why reverse the polarity?
What is a solid state transistor? Just a normal transistor?
What do they mean by solid state?
AI: The statement is a bit misleading. In vacuum tube diode the cathode is heated but the anode is not. That means the cathode emits electrons but the anode does not.
So if you apply a positive voltage to the anode and a negative voltage to the cathode the electrons emitted by the cathode move towards the anode and a current flows.
However if you apply a negative voltage to the anode and a positive voltage to the cathode no current flows because the (unheated) anode doesn't emit any electrons.
The comparison with a transistor seems odd, because a transistor is not like a diode. It's more like a triode. The obvious comparison is with a semiconductor diode, though this works by a completely different mechanism.
|
H: How do you solve Thevenin equivalent for circuits with variable resistors?
I have this circuit and I have to solve the Thevenin equivalent across the terminals A and B. How do I solve for RTh in terms of the variables present in the circuit?
AI: Having one (or two) variable resistors is not the problem, because you always look at one instant in time and treat them as fixed during your analysis.
I guess your problem is to see that the variable resistor (potentiometer) actually consists of two resistors (R2 and R - R2). In general they form a voltage divider; in this particular case one of the two "sub"-resistors (R - R2) is shorted so you can forget the (R - R2) part (i.e. treat it as being 0).
|
H: Which way are heat sinks supposed to be mounted?
Which way are heat sinks supposed to be mounted? With the fins around the component, or facing outside?
AI: For a non-forced-air cooling regime, the fins are best positioned so that they are vertical. The air can then naturally rise and these convection currents flow along the fins and provide better cooling.
As for inside the heatsink or outside the heatsink, personally I don't think it matters.
|
H: How can I relay 120VAC with a raspberry pi?
I have five 75W 120V AC LED lamps on a dimmer switch. They draw 13 watts each (375W total). I also have a 600W Lutron dimmer switch hooked up but the thing buzzes when the lights are dimmed. I also tried a Leviton dimmer with the same results. I've heard about debuzzing coils but seems to be that many folks have mixed results with them.
I'm wondering if I can just wire each of the 5 bulbs to a relay of some sort and then use a raspberry PI to control the relay (the PI is already wired into my home LAN so I can put an app on my smartphone to control the relay). This way, I can turn only one lamp on for dim lighting and all 5 for full brightness. No, it's not true dimming but at least it won't be buzzing.. I hope.
What kind of relay should I use? Looking on Ebay for a 120V AC 8 channel relay I found this but not sure if this beast will work. I just want to use the GPIO pins on the PI to trip the relay so it seems pretty straightforward. The listing says the relay is good for 2A per channel and if I did the math right, that should be well within the 13W per bulb (13W / 120V = .1A)
Anyone have some thoughts as to what the best method would be to implement here? Should I go with a Triac relay instead (not sure what that is but will look into it if that's a better way to go).
Thanks!
AI: To pick a relay, check:
Relay operation modes: (monostable, bistable, normally open/closed...)
Relay type (solid state, mechanical)
Output capacity: voltage and current. Make sure your load doesn't exceed these. Keep transients in mind: I recently had a mechanical relay fail because of the instantaneous current draw when charging a big filtering capacitor down the line.
Input requirement: voltage and current. Make sure whatever is controlling the relay can comfortably provide the required power to activate the relay.
Sounds like you've checked #1-3. The ebay page says that the relays will activate in response to an input above 2.5V (OK for a PI's 3.3V GPIO), but not how much current they will draw. So check the relay input current draw, and the maximum current the PI can source on GPIO to see if this relay board will work with a PI.
|
H: LED organ input problem
I'd made this LED organ (schematic below and here):
And one of the filters is not working. On the opamp's inverting input, there's a small sine signal, and I don't know where it comes from.
I measured these with my scope on the filter. This is the (audio) input signal from the function generator:
This is the output one of the filters:
This is on the inverting input of the opamp without an audio input signal:
I don't know which component is causing this. I replaced the opamp IC and the transistors are working too, but I get the same problem.
AI: Check that the virtual ground is stable, then double check all the capacitor and resistor values (and presence) in the affected filter.
I suspect that one or more is off by 10:1 or something like that. If the 680 ohm resistors have 3-digit markings, they should be 681 not 680.
|
H: Understanding this LPF in opamp feedback loop
I'm using a basic opamp-driven MOSFET to implement a precision low-current sink for an LED:
This seems to work just fine within the range that I want (10mA or so). I've seen other circuits that put an LPF in the feedback loop (R3 was 1K in the one I saw):
I'm trying to understand how that works. With the values of 100n and 10k, the -3dB point is 160Hz. What I don't understand is why the capacitor is connected to the opamp output instead of to ground. Wouldn't that divert noise above 160Hz back into the output? Of course, there'd be a phase shift as well but I cannot see why you wouldn't connect the capacitor to ground.
I'm guessing this is a really basic opamp circuit in a slightly different form, but I'm not understanding what that form takes. I may have it backwards - high frequencies at the output of the opamp will "short" back to the inverting input and reduce the opamp's output.
AI: The purpose of this circuit is to make sure the circuit has sufficient phase margin (does not oscillate). It's a particular problem with MOSFETs and not BJTs. 100nF is a very large capacitor- 1nF would work as well here (10nF is a good value), but maybe they wanted a bit of a LP filter or just wanted to be sure.
The problem is that the MOSFET (with such a low sense resistor) represents almost a purely capacitive load on the (exceptionally wimpy in this case) op-amp output. That produces a phase shift with the (relatively large) open-loop output impedance of the op-amp. In the case of the MCP6002 the maximum capacitance you can safely put on the output is less than 100pF with G=1. The Cgs is relatively low on that MOSFET (31pF typically, 46 max) but Miller capacitance comes into it too. Fortunately, with an LED load it's almost looking like a cascode arrangement, so you may be out of the woods.
You'd have to do a bit more calculation or simulation to be 100% sure- maybe try feeding a square wave to the non-inverting input and look to see how much overshoot you get in the current waveform. Varying when someone touches it sounds like it might be oscillating!
It's poor form IMO to do this in general- the second circuit above is the right way to do it. Even if you conclude it's working well enough, be careful that in production the MOSFET does not get replaced with something with substantially more capacitance. For example, the inexpensive AO3418 has a Cgs of 235pF (typical).
|
H: LED can't stop blinking (AVR-C)
So here is the basis of my code: I have an array of different instructions for the prescribed LEDs to follow.
The LED will blink the prescribed number of times in an interrupt before it stops and the array index is incremented, which leads to the second set of instructions in the arrays for that LED.
Here is my problem: as the program executes the first set of instructions from the array (when ab = 0) it works just fine; however, when it executes the second set of instructions (when ab = 1), the LED blinks infinitely without stopping.
I have tried everything to stop the LED, from boolean if statements to disabling interrupts, but nothing is working.
In the program there is a variable which I have to reset to zero, the x_counter (working with just the x-axis to keep it simple), so that the second set of instructions can happen.
However, I suspect that it is the reason why it continues to loop forever.
Any ideas on how to fix this problem? Here is my code:
/////EDITED///Again////////////////
/*
* avr_cnc.c
*
* Created: 2015-04-01 6:13:58 AM
* Author: Alvi
*/
#include <avr/io.h>
#include <stdio.h>
#include <util/delay.h>
#include <avr/interrupt.h>
//axis bools
//////////////////
// x-axis
#define stp_x_led PB0
#define dir_x_led PB1
//////////////
// y-axis
#define stp_y_led PD6
#define dir_y_led PD7
////////////
// z-zxis
#define stp_z_led PD4
#define dir_z_led PD5
// PORTB
#define led_portx PORTB
////////////
/// PORTD
#define led_portyz PORTD
///////////
// PORTC
//#define led_portz PORTC
// DDR////////
#define led_x_ddr DDRB
#define led_y_ddr DDRD
#define led_z_ddr DDRD
#define F_CPU 16000000UL
int axis[] = {1,1};
int direction[] = {1,0};
int steps[] = {20,20};
int entries = sizeof(axis)/sizeof(int) - 1;
int ab = 0;
int vay = 1;
static volatile int x_counter = 0;
static volatile int x_boolean = 1;
static volatile int y_counter = 0;
static volatile int y_boolean = 1;
static volatile int z_counter = 0;
static volatile int z_boolean = 1;
//////////////////
int direction_ab;
int steps_ab;
int main(void)
{
////// DDR setups /////////////////////////////////////////////////
led_x_ddr |= ((1 << stp_x_led) | (1 << dir_x_led));
led_y_ddr |= ((1 << stp_y_led) | (1 << dir_y_led));
led_z_ddr |= ((1 << stp_z_led) | (1 << dir_z_led));
///////////////////////////////////////////////////////////
TCCR1B |= (1 << WGM12); // configuring timer 1 for ctc mode
OCR1A = 4678;
TIMSK1 |= (1 << OCIE1A); // enable ctc interrupt
TCCR1B |= ((1 << CS12) | (1<< CS10)); //Start at 1024 prescaler
sei(); //Enable global interrupts
//Sets ctc compare value to 1hz at 1mhz AVR clock, with prescaler of 64
while(1)
{
}
} //end of main loop
ISR(TIMER1_COMPA_vect)
{
///////Define which stepper, direction and how many steps in this code
stepper_protocol(axis[ab],steps[ab],direction[ab]);
////////////////////////////////////////////////////////////////////
}
stepper_protocol(enable,steps_ab,direction_ab){
// checks if user enabled x-axis
if (ab < entries){
if (enable == 1){
//direction
if (direction_ab == 1){
led_portx |= (1 << dir_x_led);
}
else if (direction_ab == 0){
led_portx &= ~ (1 << dir_x_led);
}
///////////////////////////////
//stepper logic
led_portx ^= (1 << stp_x_led);
if (x_counter >= steps_ab){
led_portx = (0 << stp_x_led);
led_portx = (0 << dir_x_led);
x_boolean = 0;
ab++;
x_counter = 0;
}
if (x_boolean == 1){
x_counter ++;
}
//////////////////////////////
}
// checks if user enabled y-axis
else if (enable == 2){
//directjion
if (direction_ab == 1){
led_portyz |= (1 << dir_y_led);
}
else if (direction_ab == 0){
led_portyz &= ~ (1 << dir_y_led);
}
///////////////////////////////
//stepper logic
led_portyz ^= (1 << stp_y_led);
if (y_counter >= steps_ab){
led_portyz &= ~ (1 << stp_y_led);
y_boolean = 0;
//vay = 1;
ab ++;
}
if (y_boolean == 1){
y_counter ++;
}
//////////////////////////////
}
// checks if user enabled z-axis
else if (enable == 3){
//direction
if (direction_ab == 1){
led_portyz |= (1 << dir_z_led);
}
else if (direction_ab == 0){
led_portyz &= ~ (1 << dir_z_led);
}
///////////////////////////////
//stepper logic
led_portyz ^= (1 << stp_z_led);
if (z_counter >= steps_ab){
led_portyz &= ~ (1 << stp_z_led);
z_boolean = 0;
//vay = 1;
ab ++;
}
if (z_boolean == 1){
z_counter ++;
}
//////////////////////////////
}
}
}
AI: Some of the more obvious problems with the code:
'ab' is not being incremented during the ISR, so the interrupt continues forever.
You should turn OFF the timer and its associated interrupt when 'ab' equals or exceeds 'entries' (not every interrupt, just the timer interrupt).
The code fails to clear the timer interrupt pending flag at the end of the ISR, with the result that the ISR will immediately re-execute rather than waiting for the timer to produce another interrupt event.
Also, is the interrupt rate slow enough to allow time for the CNC (or whatever) device to complete the step before the next interrupt occurs?
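A minimal sketch of the "turn the timer off when done" point above, reusing the register names already present in the question's code:
ISR(TIMER1_COMPA_vect)
{
    if (ab >= entries) {
        TIMSK1 &= ~(1 << OCIE1A);                   /* disable the compare-match interrupt */
        /* or stop the timer clock entirely:
           TCCR1B &= ~((1 << CS12) | (1 << CS10)); */
        return;
    }
    stepper_protocol(axis[ab], steps[ab], direction[ab]);
}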
|
H: Are op-amps always assumed to be ideal?
Given a circuit with resistors, op-amps, independent voltage and current sources, how do I solve the node voltages? Do I always assume that op-amps are ideal?
AI: No, depending on what you are trying to do you might have to use a more complex op-amp model with finite gain, input offset voltage, input bias current, etc.
If it is a real problem you can apply experience and judgment to decide what can be left out. For set problems the rule usually is that you should use all the information given. If nobody mentions offset voltage, assume it is zero. There are exceptions- sometimes they'll throw in an irrelevant number (find DC voltages and they tell you the Gain-bandwidth product, for example), and sometimes you might be expected to parametrize the unknown variable.
|
H: Perpendicular tracks in 2 different layers (PCB)
Many times ago I heard it's not good to have 2 perpendicular track in 2 different layers of 2 layers PCBs(I mean one track in one side become perpendicular to another track in other side of the pcb) . I can't remember what was the type of tracks (digital signals or voltage or ...)
but now in my recent pcb(it is 2 layers) because of lack of space there is a lot of these perpendicular tracks(some of them are high current- 2 to 5 amp).
Is there any absolute rule for these kind of tracks ?
Is it forbidden ?
How about multilayer PCBs?
Thanks in advance
MA
AI: It's not "forbidden" but there is a reason behind the advice. It's meant to help you avoid coupling from one trace to another, and therefor limit noise or crosstalk between the traces. This could apply to any time varying signal, be it digital or analog. It could apply to your power traces if the flow of current is changing, or they could be the victim if a nearby signal is coupling noise onto them.
One way to avoid this is to route traces perpendicular to each other on opposite layers, and to minimize parallel run length. The only thing that stops or reduces coupling is distance or isolation.
2 layer boards come with their own set of complications when you start thinking about current return paths. Things get messy pretty quickly. There's no rule that you follow and you'll be OK, you have to look at your individual design and decide if the amount of coupling is too much or not. I suppose you could simulate it but surprisingly simulating 2 layer boards is more complex than 4 :)
Multi layer boards with reference planes between them will isolate your outer signal layers from each other (as well as provide a nice clean return path). Just another reason to consider a 4 layer board instead of a 2.
|
H: DALI: Handling multiple responses from DALI system
I designed my own DALI devices and now trying to build-up a DALI system. Since my DALI gears are based on MSP430G2553, I took advantage of code being already done for TI's DALI demonstration board.
Source code
Application note
Right now I'm working only with one DALI master and one DALI slave. Question is, how will (or how should) the system behave with more DALI slaves.
My point is... How to handle multiple DALI responses, e.g. after broadcasting the QUERY STATUS command? I suppose that all devices in the system would try to send their response, not knowing that there are more slaves trying to communicate. What is the work-around for this case?
AI: There is no "work around" for the case of multiple responses to query, you just have to handle all possible outcomes. DALI is designed to allow for collisions which can occur when multiple gear respond to the same query, so you have to handle this in your master software. The slaves must not attempt to avoid collisions with each other - just wait the required time after the query and respond (unless your response is a NO, which is No Response).
There are various cases you have to handle in master software when you expect a response:
No response within the timeout period. All gear that were addressed
have answered NO or were not powered up.
A collision occurs when multiple responses overlap, usually resulting
in bit timing errors. When you only have 2 or 3 gear responding, the
usual case is that the low bits overlap and you end with shortened
high bits, significantly less than the minimum pulse time.
It is possible but rare to have multiple gear respond with the same
data at exactly the same time and not have any bit timing errors. In
this case you would not know the difference between the response
being from one gear or several. Maybe you can design the system so that you don't care.
It is also possible but rare to have multiple responses align so that
no bit timing errors occur but the response frame is longer than 8 bits. Some gear actually do this by design if they are one hardware unit with several logical short addresses, but the Edition 2 requirement is for this type of design to respond in this case with an extended length low pulse. You should treat either of these as a collision since it is also an invalid frame in terms of what is allowable as a valid response.
As a gear designer, you are encouraged to use the some randomness inside the response timing window so that the rare cases occur as rarely as possible. It has been clarified in Edition 2 that you must not use collision avoidance for the response.
As a control device designer, you need to choose your queries and addressing modes carefully so that you can cope with what the responses might be. For example, if you require the QUERY STATUS bits from several gear, you would query each using its (previously uniquely allocated) short address in turn, rather than using broadcast or group. Alternatively, if you only want to know lamp failure status, a broadcast QUERY LAMP FAIL has the advantage that only gear whose lamp has failed will reply at all, so you might be able to design your logic to deal with a single YES or a collision since that tells you that at least one lamp has failed.
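A rough sketch of how a master might classify the outcome of the response window (this is not a real DALI stack - the flag names are hypothetical placeholders for whatever your receive layer actually reports):
typedef enum { RESP_NONE, RESP_VALID, RESP_COLLISION } resp_t;
resp_t classify_response(int frame_received, int timing_error,
                         int bit_count, unsigned char data, unsigned char *out)
{
    if (!frame_received)                /* nothing within the timeout: all addressed gear said NO */
        return RESP_NONE;
    if (timing_error || bit_count != 8) /* overlapping answers, bad pulse widths or an
                                           over-length frame: treat as a collision               */
        return RESP_COLLISION;
    *out = data;                        /* a single clean 8-bit backward frame                   */
    return RESP_VALID;
}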
|
H: How to drive 3 1/2 digit static TN LCD with ATmega328 in low power mode?
I've built this clock with a 3 1/2 digit static TN LCD (GYTN-0587). I drive the 28 LCD segments with an ATmega328 and four 74HC595N shift registers chained together.
Here are the (simplified) schematics (link to larger image):
And here's a picture of the board:
(The board also have an LT1111 regulator in boost configuration and a DS1307 RTC that are not shown in the schematics for simplicity.)
To save batteries and run the clock for several months on the same cells, I planned to run the ATmega in low power mode, waking it up only once per minute or so just to change the display and then putting it back to sleep. However, I later learned that TN static LCDs must be refreshed at 10 to 30kHz (sorry, I couldn't find the document where I read that - I couldn't find much information about driving TN LCDs out there).
Well, if I drive the shift registers at 30kHz it will leave me no time for the '328 to sleep. In the end that's what I'm doing, refreshing the LCD at 30kHz and the clock works ok. But the batteries run for only about 40h.
If I ignore the LCD refresh rate and leave the segments on for several seconds, the segments kind of vanish away (as in the picture below).
I couldn't find anything about the LCD power consumption and I haven't measured it, but I'm pretty sure the LCD consumes very little current (in the uA range). The '595s won't use much current either, since they are not sourcing much current and consume only 160 uA each. What's wasting all my batteries is the ATmega refreshing the LCD. I just need a different way to drive this LCD that doesn't take as many cycles from the MCU.
My questions are:
Can I drive this LCD in low power with this circuit I've built? If so, how?
If not, what changes should I make to my circuit so I can drive the LCD with minimum power consumption?
What is the Right Way (TM) of driving these LCDs?
AI: No, you don't need to refresh static LCDs anywhere near that fast. 60Hz should be fine- which will give you a couple years of battery life if it scales (it's normally in the range 30-100Hz).
The main thing is that you need to make sure that the average DC voltage from each segment to backplane is very low (like < 50mV). Easy when you're driving it with CMOS push-pull outputs. You must drive the COM high/low with a 50% duty cycle (very close to exact) and drive the 74HC595 outputs high/low in sync either inverted (on) or not (off). If you want to blank the display make sure no DC voltage appears across the display (the reason for this is that DC voltages cause electrochemical action which can permanently damage the display). It doesn't look like you are doing that.
Here is an application note from Atmel that shows how to do it with a microcontroller, showing the proper waveforms:
So probably the best way would be to wake up, flip the COM and RCK the new data in from the shift registers (perhaps alternate doing COM or RCK first), then calculate the next frame segment data, shift it out to the HC595s via SCK without transferring to the output register and go sleepy-bye. Repeat forever.
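A minimal AVR-C sketch of one such refresh tick; the pin assignments and the bit-banged shifting are illustrative only (not the question's actual wiring), and for simplicity it shifts and latches in the same tick rather than pre-loading the shift registers as suggested above:
#include <avr/io.h>
#include <stdint.h>
#define COM_PIN PD2   /* LCD backplane (illustrative)       */
#define SER_PIN PD3   /* 74HC595 serial data (illustrative) */
#define SCK_PIN PD4   /* 74HC595 shift clock (illustrative) */
#define RCK_PIN PD5   /* 74HC595 latch clock (illustrative) */
static volatile uint8_t com_phase;
/* Call at the refresh rate, e.g. ~60Hz from a timer interrupt; the DDRD bits
   for these pins are assumed to be set as outputs elsewhere.
   segs holds the desired segment bits, one bit per segment.                  */
void lcd_tick(uint32_t segs)
{
    com_phase ^= 1;
    /* Segment data is inverted on alternate phases so "on" segments always see AC
       relative to COM and "off" segments average out to ~0V DC.                    */
    uint32_t frame = com_phase ? ~segs : segs;
    for (int8_t i = 31; i >= 0; i--) {                 /* shift 32 bits, MSB first  */
        if (frame & ((uint32_t)1 << i)) PORTD |= (1 << SER_PIN);
        else PORTD &= ~(1 << SER_PIN);
        PORTD |= (1 << SCK_PIN);
        PORTD &= ~(1 << SCK_PIN);
    }
    PORTD |= (1 << RCK_PIN);                           /* latch all outputs at once */
    PORTD &= ~(1 << RCK_PIN);
    /* Toggle the backplane in the same step - an exact 50% duty cycle. */
    if (com_phase) PORTD |= (1 << COM_PIN);
    else PORTD &= ~(1 << COM_PIN);
}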
|
H: How does this timed fan controller work?
My bathroom fan controller recently started misbehaving, so I pulled its PCB out. This is what I found.
simulate this circuit – Schematic created using CircuitLab
LIVE/NEUTRAL are the 230 V AC mains. I think I have a fairly good grip on what's happening here: the upper part of the system is there to generate 5V for the lower part. The lower part is a 14-bit counter whose most-significant output controls Q1 and therefore TRI1. R10, R11, R12 and C6, together with the inverters integrated in the IC, provide a clock whose frequency can be set by R11. D5 stops the clock once Q14 goes high. In other words, the circuit cuts off the power to the fan after a delay selected by R11.
Now, the user can restart the fan by pressing SW1. I guess the idea is that the switch pulls RESET on the IC high, thus clearing the counter. However, the divider formed between R7+R8 and R9 will make the voltage on RESET only 0.21V! (And yes, I've rechecked the resistor values like 100 times. The two resistors say 624, the R9 says 653; they're all 1206 SMDs.) Yet, somehow, it works (at least it used to).
How?
Furthermore, I've not been able to figure out the intended function of some of the components. I'm guessing that R1 and R2 are there to provide a discharge path for C1 if the mains are disconnected. But what are R3 and C4 for?
AI: I take it you meant to write 563 rather than 653.
Well, it would make more sense if SW1 went to live rather than neutral.
R3 is to limit the current (if the power is suddenly applied at the AC line peak) to less than 1A, thus protecting D3 and C1 from overcurrent.
C4 may be to kill glitches on the counter output or for some other reason to do with the triac.
|
H: My For loop isn't exiting and I don't know why
I'm programming an ATMEGA328p on a breadboard and using an arduino board to do the USB to Serial conversion. Part of some code I'm writing involves a for loop that is being used to take the 8 bit output from the SPI MOSI line (connected to an SD card) and stick it into a 64 bit binary number so I can output it to the serial monitor. I've been testing it, and I'm getting an infinite loop and I don't know what is wrong. The code is posted below. Don't worry about the USART code, it works. The test case (the array purple[5]) is being used to find out why the for loop is infinite.
Also, if you know an easier way to output the 40 bit code that an SD card outputs after using the commands SEND_IF_COND and SD_SEND_OP_COND that would be helpful as well.
#define USART_BAUDRATE 9600
#define F_CPU 16000000
#define BAUD_PRESCALE (((F_CPU / (USART_BAUDRATE * 16UL))) - 1)
#include <avr/io.h>
#include <util/delay.h>
#include <stdio.h>
#include <avr/power.h>
void Init_USART(void);
void newLine(void);
void transmitByte(uint8_t my_byte);
void printNumber(uint8_t n);
void print64BitNumber(uint64_t bits);
void printBinaryByte(uint8_t byte);
void printString(const char myString[]);
int main(void){
Init_USART();
uint8_t purple[5];
purple[0] = 0b11110111;
purple[1] = 0b00010000;
purple[2] = 0b11111111;
purple[3] = 0b00110000;
purple[4] = 0b11111111;
uint8_t counter = 0;
uint64_t push_bit = 1;
uint64_t error_codes = 0;
for (int i=0; i<5; i++){
for (int loop_bit=7; ((loop_bit < 8)||(loop_bit < 254)); loop_bit--){
push_bit = 1;
printNumber(loop_bit);
print64BitNumber(error_codes);
newLine();
printNumber(counter);
newLine();
if(bit_is_set(purple[i],loop_bit)){
error_codes |= (push_bit << (loop_bit+(i*8)));
}
else{
error_codes &= ~(push_bit <<(loop_bit+(i*8)));
}
}
counter += 1;
}
return 0;
}
////////////////////////////////////////////////////////////////////////////////
//USART Functions///////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
void Init_USART(void)
{
clock_prescale_set(clock_div_1);
UCSR0B = (1<<RXEN0)|(1<<TXEN0); //Enables the USART transmitter and receiver
UCSR0C = (1<<UCSZ01)|(1<<UCSZ00)|(1<<USBS0); //tells it to send 8bit characters (setting both USCZ01 and UCSZ00 to one)
//now it has 2 stop bits.
UBRR0H = (BAUD_PRESCALE >> 8); //loads the upper 8 bits into the high byte of the UBRR register
UBRR0L = BAUD_PRESCALE; //loads the lower 8 bits
}
void printBinaryByte(uint8_t byte){
uint8_t bit = 0;
//This code is really efficient. Instead of
//using large ints to loop, it uses small uint8_t's.
for (bit=7; bit<255; bit--){
if(bit_is_set(byte,bit)){
transmitByte('1');
}
else{
transmitByte('0');
}
}
}
//uint8_t is used for 8bit chars
void transmitByte(uint8_t my_byte){
do{}while(bit_is_clear(UCSR0A, UDRE0));
//UDR0 is the transmit register.
UDR0 = my_byte;
}
void newLine(void){
printString("\r\n");
}
void printString(const char myString[]){
uint8_t i = 0;
while(myString[i]){
while ((UCSR0A &(1<<UDRE0)) == 0){}//do nothing until transmission flag is set
UDR0 = myString[i]; // stick Chars in the register. They gets sent.
i++;
}
}
//Prints 64 bit number.
void print64BitNumber(uint64_t bits){
printBinaryByte(bits >> 56);
printBinaryByte(bits >> 48);
printBinaryByte(bits >> 40);
printBinaryByte(bits >> 32);
printBinaryByte(bits >> 24);
printBinaryByte(bits >> 16);
printBinaryByte(bits >> 8);
printBinaryByte(bits);
}
void printNumber(uint8_t n){//This function Prints a number to the serial monitor
//Algorithm to convert 8 bit binary to 3 digit decimal.
//N=16(n1)+1(n0)
//N=n1(1*10+6*1)+n0(1*1)
//N=10(n1)+1(6(n1)+1(n0))
//Also: N = 100(d2)+10(d1)+1(d0)
//Then make: a1 = n1 and a0 = (6(n1)+1(n0))
//Then get rid of the carries since n0 can be from 0-15, etc.
//c1 = a0/10 This is the number of 10's to carry over
//d0 = a0%10 This is the decimal that's outputed.
//c2 = (a1+c1)/10
//d1 = (a1+c1)%10
//d2 = c2
uint8_t d2, d1, q;
uint16_t d0;
//0xF is 15
//00010000 this is 16 (n)
//d0 = 00010000 & 00001111, d0 = 00000000
d0 = n & 0xF;
//If you AND n then the bits in the original number show in d0,d1 and d2
//d1 = (00010000 >> 4) same as 00000001, & 00001111, d1 = 00000001
//this sees if there's anything in the second 4 bits
d1 = (n>>4) & 0xF;
d2 = (n>>8);
//this sets d2 to 0.
d0 = 6*(d2+d1)+d0;
q = d0/10;
d0 = d0%10;
d1 = q + 5*d2 + d1;
if (d1!=0){
q = d1/10;
d1 = d1%10;
d2 = q+2*d2;
if (d2!=0){
transmitByte(d2 + '0');
//remember to add '0' because its an ASCII char
}
transmitByte(d1 + '0');
}
transmitByte(d0 + '0');
}
AI: In your code you have the following suspect line:
for (int loop_bit=7; ((loop_bit < 8)||(loop_bit < 254)); loop_bit--){
...
}
The key problem with this is that as loop_bit is an 'int'. In avr-gcc this is a 16bit signed data type. You have a loop condition which is loop_bit < 254 (and the redundant loop_bit < 8). As such if you keep subtracting 1, you have to count all the way down to -32768 and then one iteration further to have it wrap around to 32767 before the for loop will exit.
If you want to count down from 7 to 0 inclusive, you can do one of the following two things:
This is closest to what you currently have
for (int loop_bit=7; loop_bit>=0; loop_bit--){
...
}
This is more efficient on an 8-bit AVR
for (int8_t loop_bit=7; loop_bit>=0; loop_bit--){ //assuming avr-gcc understands 'int8_t', if not you can do 'char'
...
}
As a side note, and just my personal opinion, in another part of the code, you also have this for loop:
for(bit=7; bit<255; bit--){
...
}
This will work fine as bit is declared as a uint8_t, but it would be more readable to declare 'bit' as a signed int8_t type and use the following loop:
for(bit=7; bit>=0; bit--){
...
}
Using the signed type allows the number to go negative and cause the loop to exit. Functionally (and probably in assembly) the two loops are identical as I say, but personally I find the latter easier to follow.
|
H: Multimeter battery test
I have a multimeter with a battery test setting in addition to voltage measurement. This is marked 1.5-6V 50 ohm. I have some questions:
Is the only difference between the voltage measurement and the battery tester the decreased input resistance (the added 50 ohm load)?
Why is the top voltage rating only 6V? Should larger batteries be tested with the voltmeter? What would the extra couple of hundred mA damage that would not be damaged by higher-voltage testing in general?
Having decided to try a 9V battery anyway, I find that weaker (8-8.5V) ones have a continually decreasing voltage as long as I hold the multimeter on the terminals. Stronger (~8.9V) batteries do not seem to exhibit this behaviour. I have read that weak batteries sometimes do this and should be disposed of.
Does a continually decreasing voltage indicate a weak battery or a damaged meter (or something else)?
AI: Some battery chemistries will show a normal voltage under no load (and a multimeter is next to no load - 1 to 10 Mohm) even when they're flat. The 50 ohm load in battery-test mode is to negate this tendency and give you some indication of whether the battery is holding up to a light load, or is actually completely flat. If you connect in battery-test mode and its voltage drops as you watch, then yes, that's a flat battery. Most batteries other than little coin-cells should be able to supply a 50ohm load without sagging.
The voltage limit is probably about the power (Watts) rating of the 50ohm resistor - it'll comfortably handle up to 6V batteries. Above that, it'll cope for a few seconds, maybe, beyond that all bets are off :).
|
H: Transistor help
I've read some info about transistors and how they work. So let's say I salvaged a receiver from a remote control car which generates a 1.2V output when a signal is transmitted from the controller - would my circuit work? I'm still a beginner in electronics, so forgive me for my bad circuit drawing.
EDIT - I think I may understand a bit now. If it's positioned the way shown on the left, would it work? Also, what is the difference between the left and the right?
AI: What you have drawn is an emitter follower circuit to power your load and, if your load was normally activated by about half a volt then your circuit will work.
However, it seems you want the full 12 volts to be applied to your load and this means voltage amplification. Try researching common emitter load control circuits.
I'll draw a picture in a while but I'm on android at the moment and that is impossible for me!
EDIT (simple analogy alert)
This is overly simple but hopefully demonstrates what happens (or what you need to do) when using a BJT to switch a load: -
In this circuit the BJT's emitter is grounded to 0V and any voltage applied to the base (provided it is above a certain threshold ~0.7V) will start to conduct a reasonable amount of current from collector to emitter. You can fully control small to medium loads when they are placed in the collector circuit.
Can you see that if you placed the load in the emitter AND your control voltage is only 1.2 volts, the voltage across the load can never become 12V?
|
H: Output clock of the LPC1768?
I am using NXP's LPC1768 development board and I came across the User Manual for this part; page 67/849, section 4.10, discusses the External Clock Output. I couldn't figure out which of the 40 pins on the board this clock output is sitting on. By Googling around, I found this code that is supposed to output 10MHz:
/* clkout of 10mhz on 1.27 */
LPC_PINCON->PINSEL3 &=~(3<<22);
LPC_PINCON->PINSEL3 |= (1<<22);
LPC_SC->CLKOUTCFG = (1<<8)|(14<<4); //enable and divide by 12
But again, I am not sure which of the board pins this clock is coming out from? Or do I have to solder a connection to somewhere on the board?
UPDATE
I have tested the code that Nils Pipenbrinck provided. It works. But I am not sure if there are limitations on that clock output and its accuracy. Here are screenshots for frequencies of 10MHz, 5MHz and 1MHz - the signal seems to degrade as we go higher. Any input on how to improve the signal's shape and accuracy?
AI: Inspired by Zuofus answer I'd just thought I post the code to setup timer2 for this duty. I use it to clock a CPLD at various frequencies, and it works like a charm.
// flexible frequency synthesizer for the LPC1768 mbed board.
// this uses the TIM2 timer in countdown mode, toggling
// the MAT2.0 pin on each event (DIP-8 in the prototype board)
// the frequency will always be a bit off for high clock-rates because
// only integer divisions of the main clock are possible.
void StartFreqSynth (uint32_t freqHz)
/////////////////////////////////////
{
// ------------------------
// Enable Power for Timer2:
// ------------------------
LPC_SC->PCONP |= (1<<22);
// -----------------------------------
// Disable counter and hold in reset:
// -----------------------------------
LPC_TIM2->TCR = 2; // hold counter in reset
// -------------------------------------------
// Set Clock source for Timer2 (bit 12 and 13)
// we pick full system clock, divider 2,4,8
// are also available
// -------------------------------------------
LPC_SC->PCLKSEL1 = (LPC_SC->PCLKSEL1 & ~(3<<12)) | (1<<12);
// ----------------------------------
// Use normal Timer mode, no capture
// ----------------------------------
LPC_TIM2->CTCR = 0;
// -----------------------------------------------
// Match on MR0 = TC. Reset counter, no interrupts
// -----------------------------------------------
LPC_TIM2->MCR = 2;
// ------------------------------------------------
// set pin function for pin DIP_8 (P0.6) to MAT2.0
// ------------------------------------------------
LPC_PINCON->PINSEL0 |= 3<<12;
// ----------------------------------
// toggle pin MAT2.0 (DIP_8) on match
// ----------------------------------
LPC_TIM2->EMR = 1 | (3<<4);
// -----------------------------------
// Set clock divider and match value
// this determines the final frequency
// -----------------------------------
LPC_TIM2->PR = 0; // set prescaler to full speed.
// since we toggle the pin, the generated frequency is half
// as fast as a cycle, so we have to run the timer twice as
// fast to compensate:
LPC_TIM2->MR0 = CORE_FREQ / (freqHz*2); // match value
// start counter
LPC_TIM2->TCR = 1;
}
void StopFreqSynth()
////////////////////
{
// check if timer2 is powered:
if (LPC_SC->PCONP & (1<<22))
{
// never generate interrupts:
LPC_TIM2->MCR = 0;
// put timer2 in reset, stop timer.
LPC_TIM2->TCR = 2;
// disable peripheral power
LPC_SC->PCONP &= ~(1<<22);
}
}
|
H: Fuse rating and safety
Under what conditions would a faster, higher rupture capacity, higher voltage-rated fuse be less safe than the equivalently current-rated fuse which was slower, lower rupture capacity or lower voltage-rated? My intuition is that a faster fuse is not always good because it may trigger too easily but that higher rupture capacity and voltage rating is always safer.
As an example, I have a multimeter which has "fast" 250V fuses (one glass 200mA, one ceramic 20A). I presume at least the glass one is not HRC. It is rated for measuring up to 15A/600V DC or AC (two separate probe sockets for low/high current, corresponding to the two fuses). One of the fuses has blown and I am considering replacing both with >600V HRC fuses of the same speed and current rating. It strikes me that in the case of user-error on a 600V circuit, you would want the fuses to be able to cope safely.
It also seems that fuses tend to be rated for AC volts (LBC fuses state "breaking capacity at maximum voltage rating" and then state the maximum voltage in VAC; HBC fuses often have gaps in the charts for DC, like this). Littelfuse implies that you should halve the AC rating, but also suggests contacting them to be sure.
AI: The "speed", rupture capability, and voltage spec of a fuse are each separate traits that aren't necessarily related. They will be specified according to the nature of the application.
Many cheap multimeters (i.e. under about US$50) will only have glass fuses, because they're cheaper. But 'high rupture' capability (the ability to break a circuit up to a rated voltage when a high fault current is flowing) is important in a multimeter (and in a circuit breaker), especially in a DC scenario, because DC >40V can easily form an arc and continue the current flow even though the fuse has blown.
For the "20A" fuse in your meter, an HRC type would be wise, rated for the maximum voltage rating of your multimeter and the same speed as the original.
|
H: Why does my mainboard power the cathode to turn the Power-LED off?
Close to all desktop motherboards have a pin header that is supposed to power an LED on the front of the enclosure to indicate that the computer is on.
In the documentation for my board, these pins are described as "Power LED Signal anode (+)" and "Power LED Signal cathode (-)".
This made me curious, and by measuring both pins with a multimeter separately, I found out that the anode on the board is always supplied with 5V (relative to GND) as long as the PSU is connected to AC power, while the cathode is supplied with 5V normally and gets pulled down to ground potential when the PC is turned on.
What I would've expected was the cathode being connected to GND directly and the anode being pulled down normally and pulled up to 5V when the PC turned on.
Is there a good reason why the manufacturer would choose to do this? Is there any advantage in powering an LED this way over the way I described? Is this perhaps even common practice with desktop mainboards?
AI: This would be standard in all motherboards compliant with the Intel Front Panel I/O Design Guide.
Note that what you see as +5V is actually a resistor of a few hundred ohms to the +5V supply.
The likely reasons include the practical (a short of either side to ground damages nothing) and possibly they were thinking a MOSFET drive could be used, so the higher carrier mobility of electrons over holes means an N-channel MOSFET would be preferred. I believe bipolar drivers are typically used at least on motherboards sold at retail- probably for ESD immunity reasons- no motherboard maker wants to have to take returns because an LED driver got fried by a sloppy assembler.
|
H: Average alternating power
"For each of the values calculate P and Q" (average)
V = 250 \$\cos (\omega t +45)\$
I = 4 \$\cos (\omega t -30)\$
The way I would try solving this would be by converting to phasors, taking the product, and multiplying by a half. I will then be left with a phasor with a phase shift of \$15\$ deg. But it turns out the final phase shift is \$75\$ deg. I assumed this may be a result of taking the current maximum at \$t = 0\$, so that we just shift both the current and voltage by \$30\$ deg. But then the next question...
\$V = 18 \cos(\omega t - 30°)\$
\$I = 5 \cos(\omega t - 75°)\$
If I now shift by \$75\$ deg, then \$-30 + 75 = 45\$ deg and not \$105\$ deg as stated in the memo. What is happening?
AI: Using phasor (Steinmetz') notation with peak values (not RMS):
\$ V = 250 \angle{45°}\$
\$ I = 4 \angle{-30°}\$
Complex power is \$ S=\dfrac{1}{2}\; V\cdot I^{*}\$ where the asterisk means "complex conjugate", an operation which inverts the sign of the phase. Therefore:
\$ S = \dfrac{1}{2}\; (250 \angle{45°}) \cdot (4 \angle{-30°})^{*} = \dfrac{1}{2}\; (250 \angle{45°}) \cdot (4 \angle{30°}) = 500\angle{75°}\$
Analogously for the second example:
\$ S = \dfrac{1}{2}\; (18 \angle{-30°}) \cdot (5 \angle{-75°})^{*} = \dfrac{1}{2}\; (18 \angle{-30°}) \cdot (5\angle{75°}) = 45\angle{45°}\$
So effectively it seems there is an inconsistency somewhere. Maybe a typo? In the second case, if V had a phase of +30° the results would match.
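As a quick numeric sanity check, here is a minimal sketch in C (using only the standard complex-math library; the phasor values are the ones from the first example above):

#include <stdio.h>
#include <complex.h>
#include <math.h>

int main(void) {
    const double PI = acos(-1.0);
    /* V = 250 at 45 deg, I = 4 at -30 deg, peak values */
    double complex V  = 250.0 * cexp(I * 45.0  * PI / 180.0);
    double complex Ip =   4.0 * cexp(I * -30.0 * PI / 180.0);
    double complex S  = 0.5 * V * conj(Ip);              /* S = 1/2 * V * conj(I)   */
    printf("|S| = %.0f, angle = %.0f deg\n", cabs(S), carg(S) * 180.0 / PI);
    printf("P = %.1f, Q = %.1f\n", creal(S), cimag(S));  /* expect 500 at 75 deg    */
    return 0;
}

Changing the magnitudes and angles to 18 at -30° and 5 at -75° reproduces the second result, 45 at 45°.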
EDIT
Just to incorporate and expand a bit of theory I explained in a comment to the OP.
Let's consider a linear load driven by a sinusoidal current \$i(t)\$ and having the voltage \$v(t)\$ across its terminals; because of the linearity of the load the voltage is sinusoidal too:
\begin{align*}
v(t) &= V_m \; \cos(\omega t + \phi_V)
&& \Leftrightarrow
& V &= V_m \, \angle \phi_V = V_m \, e^{j\phi_V} \\
i(t) &= I_m \; \cos(\omega t + \phi_I)
&& \Leftrightarrow
& I &= I_m \, \angle \phi_I = I_m \, e^{j \phi_I} \\
\end{align*}
Assuming the directions of i and v are associated, the instantaneous power absorbed by the load is:
\begin{align*}
p(t) &= v(t) \cdot i(t)
= V_m \; \cos(\omega t + \phi_V) \cdot I_m \; \cos(\omega t + \phi_I) = \\[1em]
&= \dfrac{1}{2} \, V_m \, I_m\;
\left[{
\cos(2\omega t + \phi_V + \phi_I) + \cos(\phi_V - \phi_I)
}\right]
\end{align*}
where the trig formula
\$ \quad \cos(a)\cos(b) = \dfrac{1}{2}[\cos(a+b) + \cos(a-b)] \quad\$
was used.
The average power \$P\$, since \$p(t)\$ is periodic, can be computed averaging on a single period, thus:
\begin{align*}
P &= \dfrac{1}{T} \int_0^T {p(t)}\, dt
= \dfrac{1}{T} \int_0^T \dfrac{1}{2} \, V_m \, I_m\;
\left[{
\cos(2\omega t + \phi_V + \phi_I) + \cos(\phi_V - \phi_I)
}\right]\, dt = \\[1em]
&= \dfrac{1}{T} \int_0^T \dfrac{V_m \, I_m}{2} \cos(2\omega t + \phi_V + \phi_I) \, dt
+ \dfrac{1}{T} \int_0^T \dfrac{V_m \, I_m}{2} \cos(\phi_V - \phi_I) \, dt
\end{align*}
The first integral is zero, because it is the integral of a sinusoidal function over its period, while the second integral is the average of a constant, so it is that constant. Therefore the average power becomes:
\begin{align*}
P = \dfrac{V_m \, I_m}{2} \cos(\phi_V - \phi_I) = \dfrac{V_m \, I_m}{2} \cos \phi
\end{align*}
where \$ \phi = \phi_V - \phi_I \$ is the phase difference between the voltage and the current.
Without entering in further details, the average power is also called active power, whereas the reactive power is defined as:
\begin{align*}
Q = \dfrac{V_m \, I_m}{2} \sin \phi
\end{align*}
On the other hand, if we define the complex power S as :
\begin{align*}
S = \dfrac{1}{2} \, V \cdot I^{*}
\end{align*}
we obtain:
\begin{align*}
S &= \dfrac{1}{2} \, V_m \, e^{j \phi_V} \cdot (I_m \, e^{j \phi_I})^{*}
= \dfrac{1}{2} \, V_m \, e^{j \phi_V} \cdot I_m \, e^{-j \phi_I}
= \dfrac{V_m \, I_m}{2} \, e^{j (\phi_V - \phi_I)}
= \dfrac{V_m \, I_m}{2} \, e^{j \phi} = \\[1em]
&= \dfrac{V_m \, I_m}{2} \cos \phi + j \, \dfrac{V_m \, I_m}{2} \sin \phi
= P + j \, Q
\end{align*}
Hence it's easy to see why P and Q are respectively the real and the imaginary part of S.
|
H: How to Find value of Early voltage of output characteristic curve of BJT?
I have this graph from data that I have. It is the output characteristic curve of a BJT.
How do I find the value of the Early voltage \$V_A\$?
AI: Early voltage: -
It should be fairly easy to extrapolate the curves back and see whether they intercept as Mr. Early predicted.
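As a minimal sketch (plain C, with made-up readings), the extrapolation amounts to taking two points on the flat, active-region part of one curve, extending the straight line to where it crosses the VCE axis, and reading the Early voltage as the magnitude of that intercept:

#include <stdio.h>

/* vce1/ic1 and vce2/ic2 are two points read off the active region of one curve */
double early_voltage(double vce1, double ic1, double vce2, double ic2)
{
    double slope = (ic2 - ic1) / (vce2 - vce1);   /* output conductance            */
    double vce_at_zero = vce1 - ic1 / slope;      /* where the line crosses IC = 0 */
    return -vce_at_zero;                          /* VA is the magnitude of that   */
}

int main(void) {
    /* hypothetical readings: IC = 1.00 mA at VCE = 2 V, IC = 1.08 mA at VCE = 10 V */
    printf("VA ~ %.0f V\n", early_voltage(2.0, 1.00e-3, 10.0, 1.08e-3));   /* ~98 V */
    return 0;
}

All the curves of the family should point back to roughly the same intercept; if they don't, the device is not following the simple Early model.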
|
H: Does DC DC switching converter needs pot readjusting on input voltage change?
I have adjusted my LM2596 regulator to a 5V output with a 16V input. I'm about to change the input to 25V. Will my 5V output stay the same or do I need to adjust the pot again?
I can't easily test this myself because my circuit would need unsoldering and resoldering to check and tweak.
Thanks
AI: The potentiometer sets the output voltage, and that setting has little or no dependence on the input voltage. The whole point of a regulator is to ensure the output voltage is a fixed value regardless of the input voltage, as long as the input is within the limits of the regulator.
|
H: Binary calculation (11 +01010101 )
Kindly help me with this Binary calculation
Add the following 2's complement numbers
11 +01010101 = ?
How i solve it :
0000 0011 + 0101 0101 = 0101 1000, but the book says it's 84???
Please help!
AI: The two values have differing bit widths which you aren't taking into account.
The value 11 isn't the same as the value 00000011 when 11 comes from a 2-bit system. Instead, to take into account the two's complement sign you have to sign extend the smaller value into the same space as the larger value. That means copying the most significant bit into all the "new" bits you add to make it 8-bit.
So, 11 becomes 11111111. In two's complement, that equates to -1.
01010101 is decimal 85. 85 + -1 is 84.
Alternatively, in binary, 01010101 + 11111111 = 101010100 but since it's 8 bits the top-most bit gets lost, and the result is 01010100 which is 84.
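A small C sketch showing the same sign extension (the mask simply copies bit 1 of the 2-bit value into the upper bits):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t a = 0x3;                              /* 11 from a 2-bit system          */
    uint8_t a_ext = (a & 0x2) ? (a | 0xFC) : a;   /* sign-extend to 8 bits: 11111111 */
    uint8_t b = 0x55;                             /* 01010101 = 85                   */
    uint8_t sum = (uint8_t)(a_ext + b);           /* carry out of bit 7 is discarded */
    printf("%d + %d = %d (0x%02X)\n",
           (int8_t)a_ext, (int8_t)b, (int8_t)sum, sum);   /* -1 + 85 = 84 (0x54)     */
    return 0;
}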
|
H: How to request “GET” from an ASP.NET website using ESP8266?
AT+CIPSTART="TCP","zgrdataserver.com.azhar.arvixe.com",80
OK, works.
AT+CIPSEND=92
The response is ">". Works fine, but
GET /androidserver.aspx?param=esp8266 Http/1.1\r\nHost: zgrdataserver.com.azhar.arvixe.com\r\n\r\n
This code, no matter what, always returns either an HTTP 400 bad request error or a wrong syntax error. Ironically, all the examples I found on the internet use the thingspeak.com API. However, I want to use my own server to log data. Thank you in advance for any effort.
For those who wonder http://zgrdataserver.com.azhar.arvixe.com/androidserver.aspx?param=asdf request works perfectly fine.
AI: You have an error in your HTTP request, the HTTP version should be uppercase (i.e. HTTP/1.1). Try your requests on a computer with telnet first. I get the following output:
$ telnet zgrdataserver.com.azhar.arvixe.com 80
Trying 23.91.112.247...
Connected to zgrdataserver.com.azhar.arvixe.com.
Escape character is '^]'.
GET /androidserver.aspx?param=esp8266 Http/1.1
HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Sun, 10 May 2015 18:54:39 GMT
Connection: close
Content-Length: 311
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request</h2>
<hr><p>HTTP Error 400. The request is badly formed.</p>
</BODY></HTML>
Connection closed by foreign host.
Notice that the server responds immediately after sending the first line. With a correct request I get:
$ telnet zgrdataserver.com.azhar.arvixe.com 80
Trying 23.91.112.247...
Connected to zgrdataserver.com.azhar.arvixe.com.
Escape character is '^]'.
GET /androidserver.aspx?param=esp8266 HTTP/1.1
Host: zgrdataserver.com.azhar.arvixe.com
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/8.0
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 10 May 2015 18:55:53 GMT
Content-Length: 846
<br>
78.167.30.214<br>
asdf<br>
[...]
This is at least part of your problem.
As a suggestion, use HTTP/1.0 instead of HTTP/1.1. This prevents the server from sending chunked data and it will also automatically close the connection (without a Connection header in the request).
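If it helps, the corrected exchange from the ESP8266 side might look like this (the payload length should still be 92 bytes, counting each CR LF pair, since only the letter case and version number changed; note the empty line that terminates the headers):

AT+CIPSTART="TCP","zgrdataserver.com.azhar.arvixe.com",80
AT+CIPSEND=92
GET /androidserver.aspx?param=esp8266 HTTP/1.0
Host: zgrdataserver.com.azhar.arvixe.com
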
|
H: How do residential analog and smart meters measure power?
I'm building my own digital meter in order to upload the data to a SQL database; up until now I know that there are several parameters being measured into a digital watt-meter:
volts, current, apparent power, instantaneous power, actual power, power factor
However, I still have to understand which one of these is the real value used to increase the counter on both devices. To be more precise: old analog meters couldn't do all these calculations, and still they worked as intended. However, with non-linear loads becoming more and more common, I'm guessing the power factor in a common house would hang around .7 or .8, so how does this change in the type of load affect the measurement in a digital vs. analog watt-meter?
My first guess would be that analog meters measure REAL POWER, but I can't be so sure about digital smart meters.
How do they work?
AI: The analog meter is built around a motor. The magnetic fields that produce the torque to drive the motor are proportional to the voltage and current at any instant. So, in that sense, it is measuring instantaneous power. Because it spins and turns a counter, it also measures the total energy consumed.
The digital meter measures voltage and current directly, and over the course of many samples can measure and accumulate voltage and current readings, as well as calculate the apparent power used. Instantaneous power is the product of one pair of voltage and current samples; averaging those products over whole cycles gives the real power. Apparent power can be calculated by measuring the RMS amplitudes of the voltage and current over the course of one or more cycles and multiplying the two results together.
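As an illustration of that sample-and-accumulate idea, here is a minimal C sketch (synthetic samples with the current lagging by 30 degrees; a real meter would fill the sums from its ADC readings):

#include <stdio.h>
#include <math.h>

#define N 1000                       /* samples spanning one whole cycle */

int main(void) {
    const double PI = acos(-1.0);
    double p = 0, vsq = 0, isq = 0;
    for (int n = 0; n < N; n++) {
        double wt = 2 * PI * n / N;
        double v = 120.0 * sqrt(2.0) * cos(wt);             /* 120 Vrms              */
        double i = 10.0  * sqrt(2.0) * cos(wt - PI / 6.0);  /* 10 Arms, 30 deg lag   */
        p += v * i;  vsq += v * v;  isq += i * i;           /* running sums          */
    }
    double P = p / N;                            /* real (average) power             */
    double S = sqrt(vsq / N) * sqrt(isq / N);    /* Vrms * Irms = apparent power     */
    printf("P = %.0f W, S = %.0f VA, pf = %.2f\n", P, S, P / S);  /* ~1039, 1200, 0.87 */
    return 0;
}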
An additional consideration is what you are charged for. Residential customers (at least in the U.S.) traditionally paid only for real power consumed. This was probably due to the fact that originally most loads were predominantly resistive, and also the fact that, while analog meters that can measure power factor do exist, it wasn't cost effective to use them in residential neighborhoods.
Nowadays, our uses are quite different, and switching power supplies can really mess with the power drawn from the grid. This has resulted in regulations forcing power supply builders to get the power factor back closer to 1. I'm still not aware of anyone other than industry paying for power factor, but if the smart meters can measure it, that situation could change.
|
H: Is it OK to use switching regulators to charge mobile phones (and such)?
I had several issues with car usb adapters (dc-dc step-down regulators), and I'd like to make one myself.
Issues:
Nokia cell phone refusing to charge
Motorola phone power supply circuitry fried
Insufficient current
The first thing I'd like to know is whether it's fine to use a switching regulator.
My understanding is that the output will be a square wave signal, am I right?
Will the cell phones like that?
Will ripple and resonance in the signal be a further problem?
If I employ a regulator such as the TSR 1-2450, should I add other components to filter the input and clean the output, or is it fine to use it just by itself?
Am I better off with a linear regulator?
UPDATE:
I tried using a LM2576T-5 with the standard configuration:
It worked great! Then I added the suppression circuitry suggested by @tcrosley, but with a couple changes:
I did NOT double the 100 uF capacitor at the beginning of the first circuit and that at the end of the second.
Since I couldn't find a 1.5KE18A, I used a 1.5KE15A, which works at 12.8V, so I suppose it should be fine.
The 1N4001 wasn't good for me, since it limits current to just 1A. I used one of the 1N5822 I already had, comforted by what I found here.
This resulted in inverted polarity (!!) on the output of the LM2576, which ceased to work shortly afterwards (I didn't cut the power, because it seemed to work somehow, so I was measuring voltages when it died).
I triple checked my soldering work, but all seems in order (well, quite ugly actually!, but correct).
I really can't figure out what's the problem. Maybe I'll just get another LM2576 and remove the suppression circuitry.
The power supply for the testing was a 12V, switching (common PC PSU).
UPDATE 2:
The IC is still good and the circuit is finally working. Turns out that the inverted polarity was my fault. I connected 2 USB sockets to the leads, but I had the pin-out upside down! Since the USB socket casing is shorted to pin 4 (ground), and I thought that was pin 1 (Vcc), I ended up with this circuit:
Now I fixed it, and I will try again with the suppression circuit. I'll use at least the TVS diode.
UPDATE 3:
It seems like I had this all wrong from the start.
I added the suppression circuitry, checked that all was working and mounted the thing on the car. And surprise surprise, it started smelling after a few minutes! While the LM2576 datasheet says there's no need for a heat sink, the reality is that it gets really, really hot after just a few minutes of phone charging :(
Note how it melted the inductor nearby!
Now, in order to fit a heat sink, I'll have to do a new board...
UPDATE 4:
Take 3, finally working properly. Thank you all!
AI: Yes, it's okay to use a switcher, that's what the majority (if not almost all) of cell phone chargers use. For example one of the Apple chargers is a 1" cube:
To get that small, it has to be a switcher. Of course it goes from 110 VAC to 5V DC. To do that, it first rectifies the AC to DC, chops it to a very high frequency (100's of kHz) using the switching regulator, and then it is rectified again. It is much easier to rectify a high frequency than a much lower one.
In your case, you will be doing the same except you don't need the first rectifying step.
The output of a DC-DC converter will be filtered DC, with minimal ripple (about 150 mA or 3% for the TSR 1-2450).
You can also make your own regulator using a "buck" switching regulator IC, like the NR887D, which accepts an input up to 18V and an output up to 14V at 2A (you will set the output voltage of 5V using two resistors). It is available from Digi-Key for $1.58 in an 8-pin DIP package.
Here the 22 µF capacitors C4 and C5 remove the ripple.
You need the extra margin on the input since 12v car batteries often are as high as 14v. You will also want to add some suppression circuitry on the input since there are often spikes on the 12v line too. You can use a circuit like this:
This circuit can be used either in front of the DC-DC converter or the discrete switching regulator above.
|
H: can i use a 20V, 2.0A for my laptop that needs 19V, 4.74 A?
I have a laptop that needs 19V and 4.74A.
My laptop's charger broke and I replaced it with a 20V, 2.0A one.
The laptop seems fine, but I noticed that the charging goes on and off at an interval of 1-2 minutes if I use it heavily. If I just leave it on standby or don't do too much work (still charging), the charging proceeds normally and doesn't cycle on and off; the same applies when I shut down my laptop and charge it. I know that you must follow the right voltage and current rating for your device, but is it alright if I continue to use it? Any advice? Your answers will be very much appreciated.
AI: The power supply can't provide enough current to both charge the battery and run the computer. Fortunately your computer seems clever enough to realise this and make a choice - charge the battery, or run the computer. When you're using the computer and consequently consuming more current the charge circuit gets shut down since it doesn't have enough current available to it.
So what would the consequences be of long term use? Hard to say really, but two that spring to mind might be:
Reduced battery life due to "stop-start" charging (some say it affects the batteries, some say it doesn't).
Overheating power supply (you're over-stressing it)
So should you continue to use it? Well, in the short term, until the power supply melts, there shouldn't be too much to worry about, but you should really look at getting a new properly rated supply before your current one breaks. It may break in 3 years time, or it may break tomorrow.
|
H: How is speed torque curve of an induction motor plotted experimentally?
I'm wondering how a 3-phase induction motor's speed-torque curve is obtained. I couldn't find any procedure.
Do they first apply a fixed voltage at a fixed frequency to the stator and then gradually increase the load torque, or do they run the motor against a variable torque load such as a fan?
Any info about the procedure?
AI: Usually in a college laboratory they connect a brake drum and spring balances to the motor's pulley to measure the force (you may refer to the image below).
While running the test you need to measure the speed with a tachometer and read the forces from the spring balances.
You also need to find the radius of the brake drum.
Torque = force * distance
Torque = (difference between the spring balance readings) * (radius of the brake drum)
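As a worked example with made-up readings (note that spring balances read in kgf, so the difference is multiplied by g to get newtons):

#include <stdio.h>

int main(void) {
    double s1 = 5.0, s2 = 1.5;        /* spring balance readings, kgf (hypothetical) */
    double r  = 0.10;                 /* brake drum radius, m                        */
    double rpm = 1420.0;              /* tachometer reading                          */
    double torque = (s1 - s2) * 9.81 * r;                     /* N.m                 */
    double power  = torque * rpm * 2.0 * 3.14159 / 60.0;      /* shaft power, W      */
    printf("T = %.2f N.m, P = %.0f W\n", torque, power);      /* ~3.43 N.m, ~510 W   */
    return 0;
}

Repeating this at several load settings gives the points of the speed-torque curve.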
In industry they may use test benches that can do this easily.
|
H: Two power sources connected with one device, from where will get its power
Hello and thanks for your time reading my post.
I've my router connected with its adapter and a battery for backup.
Details:
The adapter outputs 15V at 0.4A.
The battery outputs 12V and 12A.
The router needs 9V and 0.6A (I have a regulator to step the 12V down to 9V to fit the router's needs).
Now, if I connect both the router's adapter and the battery to the router, which will the router take its power from? The adapter or the battery?
Thanks.
AI: The adapter will power the router, but it will also try to charge the battery, since the battery's voltage is lower. This could be bad depending on what kind of battery this is.
You can use diodes to prevent the power from one power source from going to another power source. In this case, the router will get power from which ever source is supplying a higher voltage.
simulate this circuit – Schematic created using CircuitLab
In this configuration, the adapter will power the router for as long as it is plugged in. If the adapter loses power, the battery will take over and the router will keep running (at least until the battery goes dead).
You can use pretty much any diode that can handle the current needed - which is the current needed by the router plus the current wasted by the regulator.
|
H: what happens to SPI clock in loop-back
I want to use the SPI module in loop-back mode, so I wire MOSI to MISO.
But what about the clock? Is it wired somewhere? And how does the clock work in loop-back mode?
AI: Usually you don't have to wire the clock anywhere: the sending device, which also generates the clock, samples the incoming data on the same clock with which it shifts its own data out.
|
H: Xilinx bitgen warning
I am getting a warning in bitgen like:
This design is using one or more 9k block RAMs(RAMB8BWER). 9k block RAM data, both user defined and default requires a special bit format.
Is it a critical warning or can I proceed with dumping the bitfile onto the FPGA?
AI: The warning you are seeing is most likely
WARNING:PhysDesignRules:2410 - This design is using one or more 9K Block RAMs
(RAMB8BWER). 9K Block RAM initialization data, both user defined and default,
may be incorrect and should not be used. For more information, please
reference Xilinx Answer 39999.
I just got this warning myself a couple of days ago and looked in to it. See http://www.xilinx.com/support/answers/39999.html . Basically, if you are using a newer version of ISE (13.2 or newer, 14.7 is the latest) and you are not using bitstream encryption, you can ignore the warning.
You really gotta love this sort of bug: they screw up the design, find the issue, and then just change the spec. "That's not a bug, that's a feature!"
|
H: Flash memory read and write
I always disable interrupts while writing to some sectors of Flash, but does Flash memory require disabled interrupts even when you just want to read from it? Why or why not? Thanks for the answer.
AI: No, you don't need to disable interrupts to read from Flash. Reading Flash is just like reading RAM. It's only writing which is different: inside the device a "high voltage" (just a couple of volts extra, really) is generated for the gate, and that gate voltage needs to be present for longer than just a few nanoseconds to charge the floating gate. That is self-timed by the device.
|
H: Why does an induction motor not have the maximum torque when it is locked?
In theory the maximum current induced in the rotor occurs when the rotor is locked, and since the force is B·I·L it should produce the maximum torque.
So why is the torque not maximum when the rotor is locked, but instead at a point called the breakdown torque?
AI: Consider the equivalent circuit of an induction motor:
Torque is proportional to the amount of power dissipated by the 'rotor running resistance' element, which varies as a function of slip.
To maximise torque, we must maximise the power dissipated in the rotor resistance. As per the maximum power transfer theorem, this occurs when the impedance of the 'load' (the rotor resistance) is equal to the resistance of the 'source' - meaning the equivalent impedance of everything else, seen looking back from the rotor resistance.
If the slip is 1 (locked rotor) then the rotor running resistance will be too low for maximum power transfer.
The book 1 by Sarma, section 7.4 Polyphase Induction Machine Performance, explains this in complete detail. Expressions for maximum torque, and speed at maximum torque, are given. I highly recommend this book as a comprehensive treatment of induction motor theory.
1 Sarma, Mulutkula S, Electric Machines - Steady State Theory and Dynamic Performance (1985)
|
H: Rotor current paths and lamination in an induction motor
As far as I understand, the rotor currents circulate through the aluminium bars embedded inside steel laminations. But why do they use steel for the laminations instead of an insulator, if the idea is to loop the current only through the bars?
AI: Steel/iron/ferrous material is used for the rotor laminations to create a magnetic circuit, a key requirement of an electrical machine.
The magnetic circuit is not meant to carry any electrical current. This is why it is laminated: to reduce the loop size for the induced eddy currents. The thinner the laminations, the smaller the eddy currents and the greater the efficiency.
The slots are filled with electrically conductive material to facilitate creating an electromagnet. Aluminium can be used (for weight) but usually copper is used (conductivity). Sometimes silver (when weight and conductivity are paramount).
These windings are electrically insulated from the ferrous stator.
|
H: Can rated stator voltage frequency be exceeded safely in an induction motor?
The following induction motor (at the link) has a rated stator frequency of 50Hz on its nameplate:
http://www.vanbodegraven.nl/en/products/ac-motors/asea-mbg-200-m-60-6/
It seems it is 6 pole since it has 970 rated rpm and therefore sync. freq. is 1000 rpm.
Which means this motor cannot exceed 1000 rpm unless its stator frequency is more than 50Hz.
A VFD speed control reads 1070 rpm mechanical speed when at full speed.
That means the VFD is definitely applying more than 50Hz to the stator windings.
My question is: is it safe to apply a voltage at more than 50Hz if the rated frequency is 50Hz?
AI: Induction motors run on a design Volts per Hertz ratio. You didn't specify the rated voltage of the motor, but if it is 50Hz, I'll make the assumption that it is a 380VAC three phase motor. 380 VAC at 50Hz is a ratio of 7.6. Almost all 380VAC rated motors have an insulation system that can safely handle 460 VAC, which is another motor standard voltage. 460 at 60 Hz is a ratio of 7.666, so yes, the speed can be safely increased.
Now, you need to consider some other factors about induction motors. Almost all induction motors under 75KW or 100HP can be safely run up to twice rated speed, but in order to do that, you lose torque. When running over base speed, the stator voltage is kept at the maximum (380 or 460 for example), but the frequency increases. If you were to compare the operation of an induction motor from 0 RPM up to base speed, and then to the extended speed, it would be almost identical to running a DC motor from 0 to rated speed with full field, and then decreasing the field current to increase the motor speed. From 0 to base speed, both operate in the constant torque region. Past base speed, they operate in the constant power (KW or HP) region.
To answer what I think is your actual question, no, do not run a 50Hz motor at an increased voltage unless you increase the frequency to 60Hz to keep the same V/Hz ratio. Increasing the voltage without increasing the frequency leads to excessive winding heating and also reduced cooling capacity from the internal fan (or just the rotor turning).
Other considerations can come into play depending on the mode your VFD is running in, but the above are the basics.
|
H: variable or field ' ...' declared void
I'm working with the ADC on an ATSAM3X8E (Arduino Due) board. I'm trying to write new .h and .cpp files.
How do I create a datatype with the same name as the peripheral instance of the processor?
Example: for the ADC of the processor the datatype is used as ADC* p_Adc; for the EMAC the datatype is used as Emac* p_emac.
I've already written one program; the error is showing like:
error 1. Variable or field ' function name' declared void
error 2. 'ADC' was not declared in the scope (ADC is datatype)
error 3. 'p_adc' was not declared in the scope
My Header file contains:
void ADC_Mode_register(ADC* p_adc,uint32_t value);
void ADC_control_register(ADC* p_adc,uint32_t value);
AI: Your project is missing the definition and declaration of ADC, which very likely should be a struct (or at least a union), such that ADC* is a pointer to an ADC.
For Example:
Your project is supposed to have something like this:
typedef struct stADC_config{
unsigned int uiChannels;
bool bEnDisCtrl;
uint8_t uCdataDescriptor[8];
long lAdc_count;
}ADC;
|
H: Calculating rechargeable battery charging rate curve
I am trying to understand the rate of charge of a rechargeable battery - let's say a Li-ion one. What I'm interested in is whether the charge (the "percentage" of the ampere-hours the battery can hold) is accumulated homogeneously over time. For example, if a 2000 mAh battery takes 2 hours to get charged in a certain charging regime, will it act like a 1000 mAh battery after the first hour?
It seems logical to me that if you keep the battery under constant current during charging, then the rate of charging will be constant as well, even as the voltage of the battery changes over time - but what about other charging regimes? Is there any way for me to deduce charging rates from curves such as the one in Fig. 5 below (from A Designer's Guide to Lithium Battery Charging - DigiKey)?
Figure 5: Li-ion charging profile using constant current method until battery voltage reaches 4.1 V, followed by “top-up” using constant voltage technique. (Courtesy of National Semiconductor.)
AI: Different chemistries charge in different ways. In your example, lithium charges first with a constant current until the cell voltage reaches ~4.2V. Most lithium cells are about 75% or more charged by this stage. The remainder of the charging time is spent at constant voltage until the current tapers off to the 0.1C rate (in your example, 200mA), at which point the cell is considered full.
So, let's say it takes 2 hours to reach 100% charge. It will be over 3/4 full within 1 hour. The 2nd hour is required to take it up to 100%.
But if you were to integrate current over time and find at what point an equal amount of charge has been delivered into the battery (for 0-50% as for 50-100%), then you could interrupt the constant-current phase at, say, 40 minutes, and the cell would be approximately 50% charged (I'm just plucking some approximate figures out of the air here).
This also doesn't account for inefficiencies, namely \$I^2R\$ losses in the wires.
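A crude illustration of the linear constant-current part, as a C sketch with illustrative numbers only (a 2000 mAh cell charged at 1 C, with the CC phase assumed to end near 75%):

#include <stdio.h>

int main(void) {
    double capacity = 2000.0;                 /* mAh            */
    double i_cc     = 2000.0;                 /* mA, i.e. 1 C   */
    for (double t_min = 0; t_min <= 120; t_min += 15) {
        double soc = 100.0 * i_cc * (t_min / 60.0) / capacity;
        if (soc <= 75.0)
            printf("t = %3.0f min: SoC ~ %.0f%% (constant-current phase)\n", t_min, soc);
        else
            printf("t = %3.0f min: constant-voltage phase, SoC creeping toward 100%%\n", t_min);
    }
    return 0;
}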
|
H: Poor clock output from Spartan6 FPGA
I am using the LX9 MicroBoard from Avnet with the Spartan-6 FPGA. I implemented SPI to read from an ADC (ADS7822). I was getting wrong sampled values. When I checked the signals with an oscilloscope, they were not as I expected.
FPGA Clock (system): 100 MHz
Divider: 32
SPI Clock (output): 3.125 MHz
So I expect a 320 ns cycle with 160 ns HIGH time and 160 ns LOW time. From the simulation I get exactly what I expect. But using an oscilloscope at the PMOD output, I get a clock signal that's not too satisfactory. Also, from the ADC datasheet (pg. 10), the minimum high or low time should be 125 ns when working above 4.74V Vcc. Vcc in my case is 5V.
The high time is 132 ns at the 3V level (HIGH) and 144 ns at the 0.8V level (LOW). The clock duration converges at the 3.2V level. The expected time is 160 ns and I can't find any reason for the poor clock output.
Below diagram is a screen print of the oscilloscope.
Blue line: CLK
Redline: CS
The reason I mark the values at 3V and 0.8V is that, as per the ADC datasheet (pg. 4),
the Vih is from 3 to 5.5V and Vil is from -0.3 to 0.8V.
Please suggest a possible reason for the poor output from the FPGA and how do I improve the clock output?
AI: Your measurement is not correct.
The duty cycle is measured at 50% (\$\frac{1}{2}V_{dd}\$). So measure again at 1.65 Volt, if \$V_{dd}\$ is 3.3V.
The 'real' high and low times:
- above 90% of \$V_{dd}\$ is high
- below 10% of \$V_{dd}\$ is low
do not matter here. If a circuit has special requirements for the clock or data signal, it defines rise and fall times.
Example: \$t_{LH} \le 5\,ns\$
Rise time from low to high (10% to 90%) should be less than 5 ns.
So the circuit can be sure that the signal is risen or fallen after a predefined time and the signal can be sampled by the input logic.
|
H: What does it mean for reactive power to be delivered / consumed?
Real power makes sense since there is actual consumption, but regarding reactive power: what is consumed/delivered? And how does the circuit change once this happens?
AI: To answer the question: Real power is consumed by a circuit. Reactive power is transferred between the circuit and the source.
Real power in W (P) is useful power. Something we can get out of circuit. Heat, light, mechanical power. Power that is consumed in resistors or motors.
Apparent power in VA (S) is what the source puts into a circuit. The full impact the circuit has on the source.
So the power factor is a kind of efficiency pf = P / S for a circuit. The closer it is to 1, the better.
Reactive power in VAR (volt-amps reactive) (Q) is power that circulates between the source and the load - power that is stored in capacitors or inductors. But it is needed. For example, inductive reactive power in electric motors forms the magnetic fields that spin the motor. Without it the motor would not work, so it's dangerous to consider it wasted, but it sort of is.
Capacitors and Inductors are reactive. They store power in their fields (electric and magnetic). For 1/4 of the ac waveform, power is consumed by the reactive device as the field is formed. But the next quarter waveform, the electric or magnetic field collapses and energy is returned to the source. Same for last two quarters, but opposite polarity.
To see it animated, see Waveforms for Series AC Circuits. It shows all 6 series circuits (R, L, C, RL, RC & RLC). Turn on the instantaneous power. When p is positive, source is providing power. When p is negative, power is being sent to source.
For a R, power is consumed. For a L or C, power flows between source and device. For a RL or RC, these two relationships are combined. Resistor consumes and reactive device stores/sends power to source.
The true benefit is when an inductor AND a capacitor are in the circuit. Leading capacitive reactive power is opposite in polarity to lagging inductive reactive power. The capacitor supplies power to the inductor decreasing the reactive power the source has to provide. The basis for power factor correction.
Select RLC in the reference. Notice that the source voltage \$V_S\$ (the hypotenuse) is formed from \$V_R\$ and \$V_L - V_C\$. It is less than if it were formed from \$V_R\$ and \$V_L\$.
If the capacitor supplies all the power of the inductor, the load becomes resistive and P = S and pf = 1. The power triangle disappears. The source current required is less, which means the cabling, circuit protection can be less. Inside the motor, the uncorrected power triangle exists, with additional current coming from the capacitor.
The reference shows series circuits, but any C will supply power to any L in the ac circuit decreasing the apparent power the source must provide.
Edit...
(Figure: Power Factor Correction)
Let's take an example. P = 1kW motor at 0.707 pf lagging with 120V source.
Before power factor correction: \$Q_L = 1kVAR\$ and \$S_1 = 1.42kVA \$ (dashed line) \$Θ_1 = 45° lagging \$ as in I lags \$V_S\$ by 45°. \$I_1 = 11.8A \$
Increase power factor to 0.95 lagging by adding capacitor in parallel with load.
After factor correction: P and \$Q_L\$ still exist. Capacitor adds \$Q_C = 671VAR\$. This decreases reactive power source has to provide, so net reactive power is \$Q_T = 329VAR\$. \$S_2 = 1.053kVA \$ and \$I_2 = 8.8A \$ A 25.8% saving in current. Everything on power triangle exists except \$S_1 \$.
The capacitor supplies 671VAR of leading reactive power against the lagging reactive power of the motor, decreasing the net reactive power to 329VAR. The capacitor acts as a source for the inductor (motor coils).
Electric field of capacitor charges up. As the electric field discharges, the magnetic field of coils form. As the magnetic fields collapse, capacitor charges up. Repeat. Power is going back and forth between capacitor and inductor.
Ideal is when \$Q_L = Q_C \$. Power triangle disappears. \$S_2 = P = 1kVA \$ and \$I_2 = 8.33A \$
|
H: Switching a resistor by software
I want to communicate with a microcontroller over a CAN bus, which (ideally) needs 120 ohm termination resistors at each end of the bus. I want to integrate this resistor into the circuit and activate it only if it is needed (depending on which software is flashed onto the controller). Is it possible to switch a resistor in and out using software?
Basically what I have in mind is something like this:
What's the automatic equivalent of a variable resistor? - but with only two states (120 ohm and open circuit), switched by a digital signal from the microcontroller.
AI: There are a couple ways of doing the termination with CAN (from AD application note AN1123):
Here is one scheme that uses switched termination to a common mode level, using two smallish p-channel MOSFETs. Raising the gates to +5V turns off the termination.
As an alternative, there are some pretty low-resistance analog switches available (a couple ohms or less) which might simplify things but you'd have to analyze how well they'd do with ESD etc., and many won't handle even 5V. For example the TS5A3167.
|
H: How can I power a cellular phone for a week with a car battery or other type of battery?
I am about to start construction of a new house, and unfortunately the property is not near where I live. I wanted to keep an eye on the construction site (No electricity) to make sure workers are showing up. To do so I want to put a cellular phone in a bird house, which will transmit a video from the camera over the cellular signal. (The Android/iPhone app is called "Alfred") My only problem is, how do I keep my cellular phone powered for 7 to 14 days without charging?
I would GREATLY appreciate any help anyone can give. I think the best option would be to connect a 6 or 12 volt motorcycle or car battery to the phone, but what do I need to consider other than converting the voltage from 6V or 12V to 3.7V? How do I convert from 12V to 3.7V? The screen on the phone will be off the whole time. All I will be running is the camera and data transmission.
Does it make a difference if I connect the external battery directly to the phone so the phone thinks it is the internal battery versus connecting the external (6 or 12v) battery to the phone via its USB connector which the phone would think is a charger?
Solar won't work because of tree coverage. My other thought was breaking open laptop batteries and using the cells. I would like the least expensive and a simple way of doing it. Can I just connect a cigarette lighter USB plug to a car battery directly and plug the phone into the USB?
Thank you in advance for any help anyone can give.
AI: Can I just connect a cigarette lighter USB plug to a car battery directly and plug the phone into the USB?
This is exactly what I thought of when I was reading your... text wall. Using an USB plug would be the cheapest way to power your phone, concerning both, costs and time.
The plug reduces the 12V from the battery to 5V for USB. There are still old plugs with a linear regulator out there, which waste more energy than the phone consumes. As they have to dissipate this as heat, they usually can not provide much power. So, pick one which can deliver 2-3 or more Amperes, as those plugs contain a more efficient switching regulator.
About calculation: My phone has a 3100mAh battery (3.7V) and lasts 5 days when idle. So, the battery contains an energy of 3.1Ah * 3.7V = 11.5Wh
A small 12V 4Ah scooter battery contains 48Wh, which is about four times the energy of my phone's battery. So, in principle, it will last 20 days.
Note that this is a rough estimate. Switching power supplies are also not 100% efficient, so maybe it's only 70-80%.
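Putting that estimate into a few lines of C (the 80% efficiency is an assumption; the phone figures are the ones from above):

#include <stdio.h>

int main(void) {
    double phone_wh   = 3.1 * 3.7;      /* 3100 mAh at 3.7 V ~ 11.5 Wh          */
    double phone_days = 5.0;            /* observed idle runtime                 */
    double ext_wh     = 12.0 * 4.0;     /* 12 V 4 Ah scooter battery = 48 Wh     */
    double efficiency = 0.8;            /* assumed converter efficiency          */

    double days = phone_days * (ext_wh / phone_wh) * efficiency;
    printf("Estimated runtime: %.0f days\n", days);   /* roughly 16-17 days      */
    return 0;
}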
In your case, check out how long the battery lasts when doing the desired job. Keep in mind that signal strength also plays a role.
Finally, it may not be legal to keep your workers under surveillance.
From the comments:
Anyone know if it is more efficient to connect an external battery to the phone via USB and charge the internal battery or connect the external battery directly to the battery connector inside the phone? Lastly there are 4V 20AH lead acid batt. Can I connect direct to phone?
It would be less efficient to charge your phone battery from the external battery, as charging always goes with some losses. However, if the phone battery is full at the beginning, you have the charge of the external battery plus the charge of the phone battery, and in total, the phone will last longer.
About the 4V: I doubt the phone will accept 4V from the external battery supplied via the USB port, as this is clearly below the USB standard. On the other hand, a fully charged 3.7V Li-ion battery is at about 4.2V.
This means it should be OK to remove the internal battery and connect an external voltage of 4V instead. (It is not recommended to connect batteries, especially of different types, in parallel.)
On the other hand, a lead-acid battery behaves similarly: when it's full, it has up to 2.4V per cell, so 4.8V for your battery. This is only 0.6V above the maximum of the Li-ion battery. I would guess that it is OK, but I can't guarantee it. And if you measure the voltage of your battery, the actual voltage may not be that high, in which case it's OK.
|
H: Does the STOP condition have any meaning on a single master I2C system?
I'm currently working on rewriting some of the code on my I2C driver, and I'm trying to figure out the exact purpose of the STOP condition.
I believe that on a multi-master system, once a START condition has happened, the other masters can't/shouldn't drive the bus until a STOP has happened.
But what about on a single master system?
Does the STOP condition have any function? What exactly does it do?
Because the I2C bus works from DC up to some maximum frequency, couldn't you just keep issuing START and repeated-START conditions any time you need to send data?
What does doing the STOP at the end give you?
AI: The STOP is an indication to the slave device that communications are complete. It's ingrained in the protocol and spec-compliant slave devices expect to see it, so you simply can't wish it away.
For instance, if you're doing a page-write to a 24AA04 serial EEPROM (up to 16 bytes of data), the EEPROM will stop buffering data and start writing to flash upon receipt of the STOP from the master. You're not obligated to send a full page, so you need the STOP as a clear indication that there are no more bytes coming. (Some EEPROMs will throw away pending transactions if a STOP is not received at an expected/appropriate time, which can lead to some head-scratching debugging problems.)
Some devices also use the STOP to enter a reduced-power-consumption mode, drawing less power until the next START wakes them up.
|
H: How to find the direction of current of the circuit?
Okay this is the circuit
The magnet given is moved up and down continuously at a high rate through the coil and generates a peak voltage of 4V.
I eagerly use the right-hand rule to find the direction of the current, but what I found is very confusing.
To use the right-hand rule I assume that the magnetic field is pointing downwards, so I point my fingers in that direction, i.e. I get the direction of the current as rightward. But I am confused by it.
Yes, of course I got the direction of the current pointing toward the right side, but I am muddled by this spiral thing in the coil, because the coil is made of turns that form circles in 3-D space, so I couldn't get a clear idea of where the current comes out - whether from the upper end or the lower end of the coil.
So how can I make this puzzle clear?
AI: The diagram you posted is a schematic diagram. It is not meant to represent the physical arrangement of the parts, or the direction that the coils turn.
Since the magnet is oscillating in and out of the coil, sometimes the current will flow one way and sometimes it will flow the other way. It doesn't really matter which current direction is associated with which direction of movement of the magnet.
The connection of LED B is problematic. Your statement that the magnet moving through the coil "generates a peak voltage of 4V" is nonsense given the placement of LED B. I'll leave it as an exercise to understand why.
The connection of the resistor is problematic. One end seems to be only connected to itself. This means the resistor has no effect on the operation of the circuit and only makes the drawing more confusing.
The connection of LED C is problematic (or maybe part of what you're being asked about). Do you see why LED C will never be forward biased?
The schematic drawing conventions are inconsistent. At one place, a connection between two wires is shown with a blob connection. At another, a crossing without a connection is shown with a "jump" to emphasize the lack of connection. At other places, wires cross without either a blob or a jump, so it's not clear whether a connection is intended or not.
|
H: TLV1117 Regulator Thermal Issue
I'm using this regulator, namely the TLV1117-50IDCYR, for a project that I'm working on. Apparently everything is working fine with this chip, but sometimes I feel it's heating up too much, and I'm not even putting the full load on it.
I'm regulating from 12V to 5V at 300mA output current, which gives 2.1W of power dissipation.
I've checked the datasheet on page 5, and it says the junction-to-ambient thermal resistance is 104.3 ºC/W and the junction-to-case (top) thermal resistance is 53.7 ºC/W.
My doubt is: which parameter should I consider for my case? If it is the second one I am not seeing any problem.
Can you help me on this?
Many thanks!
AI: The thermal resistance of SMT packages is highly dependent on layout and other factors. I suggest you read relevant sections of this application note (AN-1028 Maximum Power Enhancement Techniques for Power Packages) from TI on their SOT-223 package's thermal characteristics. Pay attention to the thickness of copper (2 oz, for example, is much thicker than typical prototype board thickness - might be 1 oz or less).
2.1W is not likely practical in that tiny package unless you can heat sink the tab with a big copper area.
Even in a TO-220 case, you should have a heatsink to be able to safely dissipate that amount of power.
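To see why, here is a quick junction-temperature estimate as a C sketch (the numbers are from the question; the junction-to-ambient figure is the one that applies when there is no heatsinking copper, and 25 °C ambient is an assumption):

#include <stdio.h>

int main(void) {
    double P        = 2.1;      /* W: (12 V - 5 V) * 0.3 A          */
    double theta_ja = 104.3;    /* degC/W, from the datasheet        */
    double Ta       = 25.0;     /* assumed ambient, degC             */
    double Tj = Ta + P * theta_ja;
    printf("Tj ~ %.0f degC\n", Tj);   /* ~244 degC - far beyond any safe junction temperature */
    return 0;
}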
|
H: PIC18 GPIO switch from input to output(dual mode)
I have always wondered whether it is possible to switch the direction of a PIC GPIO pin during program execution. So, for instance, I start with a particular pin set as a (digital-level) input. I monitor that pin, and if the level changes, I change the direction of that pin and drive a signal to turn on an LED. Is this too far-fetched or is it doable? If so, some pseudo-code would be very helpful.
AI: Yes, it's completely possible you simply change the relevant bit in the associated TRIS register from 1 to 0 in order to change the pin from input to output.
In general on the PIC18 series you should read pins using the PORT register and write using the LAT register.
So suppose you had a pin like this:
simulate this circuit – Schematic created using CircuitLab
You could periodically read RA0 as an input and drive the LED the rest of the time. To read the switch state you would set bit 0 in the TRISA register high, wait a bit, then read the PORT pin (bit 0 of PORTA), and then clear bit 0 in the TRISA register.
To avoid contention, only set the pin to output if the LED is to be driven low. The LED will always come on as long as the switch is pressed.
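Here is a minimal sketch of that sequence (XC8-style C; pin RA0, an active-low LED and a switch that pulls the pin low are assumptions based on the circuit above; on parts where RA0 has an analog function, the corresponding ANSEL bit must also be cleared):

#define _XTAL_FREQ 8000000UL          /* adjust to your actual clock             */
#include <xc.h>
#include <stdint.h>

uint8_t switch_pressed(void)
{
    TRISAbits.TRISA0 = 1;             /* make RA0 an input                       */
    __delay_us(10);                   /* let the node settle                     */
    uint8_t pressed = (PORTAbits.RA0 == 0);   /* low means the switch is closed  */
    TRISAbits.TRISA0 = 0;             /* back to being an output                 */
    return pressed;
}

void led(uint8_t on)
{
    LATAbits.LATA0 = on ? 0 : 1;      /* drive low to light the (active-low) LED */
}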
|
H: Changing potentiometer from a gaming steering wheel
I have Logitech's vibration feedback wheel which has a rotation of 270 degrees. I am considering changing its potentiometer so it can rotate more (900 degrees or 1000 degrees maybe).
What potentiometer should I choose for this task? Apart from its Ohm resistance what other parameters should I keep in mind?
Thank you
AI: It's easy enough to get multi-turn pots, and they aren't all that expensive.
As has been pointed out, if you go this route, you'll lose resolution - that is, you'll need to turn the wheel more in order to get the same effect on the game vehicle.
You may well have mechanical problems. The existing pot probably has an intrinsic range of ~330 degrees, but the wheel mount has built-in stops to keep the 270 degree range, and do it so that the pot doesn't provide the limiting (that would be a great way to destroy the pot). You would need to modify the wheel assembly to get more rotation. Once you do this, there's no easy way to build in stops for multi-turn ranges, so limiting would be done by the pot. That is, if you were trying to make a very tight turn, you'd turn the wheel until the pot ran out of range - and I can pretty well guarantee that you'd kill the pot. These things just aren't that strong.
|
H: Detect NeoPixel status
I am powering a 1m strip of these with a Spark Core (which is like a small web-enabled Arduino if you're not familiar with it). Right now everything is working swimmingly on my workbench, but I'm looking forward to when it is in the field, and I want to know if there's any way for the controller to detect the status of the strip? In particular, I would love to know if the strip is connected and working and ideally how many pixels are present. I'm expecting the answer to be no, but I'm no electrical engineer and maybe there's some magic I'm not aware of.
Also, what's a good primer on things I should be protecting against (the EE version of defensive programming, I guess)? The circuit is just the Spark Core, the NeoPixels, a protective resistor and a capacitor as described here (https://learn.adafruit.com/adafruit-neopixel-uberguide/best-practices). Currently it's powered by USB, but will eventually be powered by an adapter connected to mains.
AI: An idea: The pixels pass the data through, removing one message, each time.
You could write to the string as though it has one more pixel.
Connect the data line from the far end back to the Arduino, and check that you get one message out the other end. This would confirm that the string is working.
Note that the first line on the Adafruit site says "There are some things to watch for. These LED's use about 9.5 Watts max (~2 Amps @ 5V) per meter" So you will quickly run into trouble powering them from USB. You will need to protect your circuit against the neopixel strip!
|
H: Difference between dichroic, LED and halogenated lamps
I think that this isn't the best place to ask this, but I couldn't find my answer to this and I couldn't find any other stack exchange site related to this.
I wanted to know the differences between dichroic lamps, LED lamps and halogenated lamps. I don't want too technical answers, just the most relevant.
Thanks in advance and sorry for my bad english and sorry if I'm asking this in the wrong forum.
AI: In short: LED is the most efficient; halogen is only a little better than your old-school light bulb - in other words, horribly inefficient. I have never heard of dichroic lamps, but I am not a native speaker. A short Google search turned up the information that "most dichroic lamps are in fact halogen lamps".
Nowadays you can get every light technology in almost every base. LEDs sometimes come built into new lamps, because they have a very long lifetime (over 10000 hours if I remember right).
I don't know for which purpose you want that information. If you want to modernize your home I would recommend using only leds. They are more expensive, but you never have to change them. And they are the most efficient solution on the market.
However, LEDs have drawbacks. You should get familiar with terms like color temperature, lumens and color quality. It is not as easy as buying an incandescent lamp. The above link claims not to be exhaustive. Just do a Google search for terms that you don't fully understand.
|
H: What determines BLDC rotation direction when using trapezoidal commutation with BiPolar Switching?
Block commutation seems simple enough, the rotation is determined by the order in which the 6 step sequence is executed (ex. cw 1->6 or ccw 6->1)
What's confusing is how bipolar switching changes this. If you drive the low side phase with the same modulation that is applied on the high side, then this suddenly becomes bipolar, and it supports braking/regeneration all thru the pwm duty cycle (i.e. >50% is cw, <50% is ccw).
https://e2e.ti.com/blogs_/b/motordrivecontrol/archive/2012/04/04/so-which-pwm-technique-is-best-part-4
So how do these 2 non-mutually exclusive techniques (6 step trap & bipolar) reconcile with each other to determine the motor spin direction ?
AI: The link you provided (a nice read!) is focused on brushed DC motors, not brushless (BLDC) motors. Brushed DC motors are mechanically commutated (via the brushes) so applying a constant voltage will make the motor spin. Brushless DC motors are electrically commutated (the block commutation is one of the strategies) so things get a little bit trickier. But for either type, the key is this: relative voltage across motor windings generates action
This article discusses the fundamentals of a BLDC driver. The short story is that (for a N-phase BLDC) you need N 'half' H-bridges; you want to be able to connect any of the N winding wires to either V1 (often +V) or V2 (often GND). Bipolar switching is compatible with BLDC drivers. See Figs 10 and 12 in the eFlexPWM application note.
Consider that since PWM frequencies are higher than the motor's response time, the PWM'ing is effectively creating an average voltage. For the sake of getting to the heart of the matter, mentally simplify the PWM'd transistors to a variable voltage source.
In the unipolar case, this winding voltage is generated between one winding held to a specific reference (often ground), while the other is raised a variable amount (again, 'raised' is typical but 'lowering' would work the same).
In the bipolar case, the winding voltage is generated between two windings 'raised' to a variable amount (as you reminded me, the mean need not be 'ground'). For example, you could use power inputs at -1V and +3V, set (via PWM) winding A to +2V, winding B to +1V, and generate voltage of 1V across the motor, with current flowing A->B. Whatever madness your motor controller is doing, the motor just sees and responds to the relative voltage across its windings.
So finally returning to your question – the 6-step sequence of relative voltages across the motor windings remains unchanged. What's different is how your motor controller generates that internally.
I'd hesitate to use bipolar switching - if your PWM source goes down, the motor will remain energized (typically not good for safety).
|
H: Self Contained Temperature Display in Very High Temp Environment
I want to build a temperature sensor + display that will operate in a high temperature environment (routinely 120C, but periodically spiking to 175C, and might be exposed to 350C, where it would not be required to survive for very long - i.e. it could fail after a short time) that would also function at lower temperatures (e.g. 15C). Can it be done? How?
Challenges/Questions:
1) Is there a way to generate power from the heat that would drive the electronics?
2) Is there a battery that would survive the heat?
3) Is there a way to keep the electronics cool enough to function?
OR:
Is there another (non-electronic) way to display the temperature where the sensor and display are while in the high-temp environment (obviously not an EE question here... but open to ideas)?
AI: Infrared non-contact thermometer inside the suit; sensor behind the faceplate focused on a fixed object mounted just outside the faceplate. The IR transmittance of the faceplate material will be a factor. This would address questions 2, 3, & (the un-numbered) 4: keeping the battery and electronics comfortable and not penetrating the suit.
Any visual display inside the helmet will have to contend with the limited space vs. the eye's normal focusing range, the near end of which gets worse as we age. You'd need some optics and create a "heads-up" display if the firefighter needs precision visual reporting; if not, LEDs coded with color and/or blink codes might work (does it have to overcome high ambient light?). Audible display might work (noisy environment?). An audible alarm would be well served by the in-helmet location.
|
H: Physical significance of singular matrix in two port network
In class we've been introduced to two port networks, and I was wondering: Is there a physical significance to a singular matrix in a two port network, or is it simply where the mathematical model breaks down?
AI: It doesn't mean that the model breaks down, but it is usually a symptom that the model is idealized.
Consider the simplified common-emitter AC model for a BJT:
simulate this circuit – Schematic created using CircuitLab
It is clear that:
\begin{align*}
I_2 &= h_{fe} \cdot I_1 \\
V_1 &= h_{ie} \cdot I_1
\end{align*}
which can be rewritten in matrix form as:
\$
\begin{bmatrix} V_1 \\ I_2 \end{bmatrix}
=
\begin{bmatrix}
h_{11} & h_{12} \\
h_{21} & h_{22}
\end{bmatrix}
\cdot
\begin{bmatrix} I_1 \\ V_2 \end{bmatrix}
=
\begin{bmatrix}
h_{ie} & 0 \\
h_{fe} & 0
\end{bmatrix}
\cdot
\begin{bmatrix} I_1 \\ V_2 \end{bmatrix}
\$
from which you can see the h-parameter matrix is singular, having the second column made up of zeroes, but still the model is valid, although really idealized, since it neglects the output conductance \$h_{oe}\$ and the \$h_{re}\$ coefficient.
|
H: SMD package sizes in Altium
I'm a little confused about the standard SMD package sizes. Say I have picked some capacitors, and their package is 1005 (0402). Would this have the exact same footprint as a resistor or inductor with the same listed package size? I ask because I'm using Altium, and the generic PCB footprint libraries seem to be slightly different between the inductor and resistor footprints for a given package number.
AI: I suspect that the reason for the different footprints is to allow for the different heights of different parts.
For example, an SMT ceramic capacitor is typically about as tall as it is wide, while a "typical" 0402 resistor I just looked up (here) is 0.5 mm wide, but only 0.35 mm tall.
Because the capacitor is taller, it needs a longer "toe" in its footprint to allow a good solder fillet to form (I haven't checked if this is actually what's different between Altium's resistor and capacitor footprints).
Altium's footprints are just a reasonable guess at good footprints for typical parts in each category (resistor, capacitor, inductor). To really optimize your design for high manufacturing yield, you should consider the actual height of each part number in your design, and account for your manufacturing process: for example, reflow and wave soldering have slightly different requirements for optimum pad sizes.
|
H: 1N4148 Diode for 3V-5V PWM
I have 2 PWM ports; one generates 5V PWM and the other 3V. Only one will be generating at a time. However, their outputs go to the same wire or pin. So, to prevent current from passing from one into the other, I want to put diodes on each port as protection. For this I would need fast switching diodes, as the PWM frequency is in the kHz or low MHz range. I searched for fast switching diodes and found the 1N4148. It does what I need, but its forward voltage is 1V. So, for 3V and 5V, will it blow up? Or am I understanding the datasheet wrong?
I read in this answer that the forward voltage is a voltage drop? So if it is a voltage drop, what's the maximum voltage that it can support? Also, 3V will come out as 2V, so I will be required to level it up to 3V again, right?
simulate this circuit – Schematic created using CircuitLab
AI: The forward voltage itself is not really the value that you need to worry about - using it in this circuit won't break it as you aren't going to be intentionally dropping large voltages across the diode. In fact it is the power rating of the diode you need to look at.
Why? Well, you will need something at the output to pull it down when both are low - e.g. a resistor.
With the diodes, your signal output levels will be reduced by the forward voltage drop of the diodes. So if you have, say, a 10k pull-down resistor, then when the 3V logic signal is pulsing you will get a 0V low level (pulled down by the resistor) and a 3-Vf=2V high level (sourced through the diode). In this case you would have Id=2/10k=0.2mA flowing through the diode, which should be fine for most diodes (it is only ~0.2mW).
For the 5V logic signal you would have a 4V high level output, and in this case there would be 0.4mA flowing through the diode, which again should be fine.
The problem with this is that if the load on the PWM signal is quite highly capacitive then it will switch from high to low quite slowly as you only have the resistor doing the switching. If the PWM signal was driving a large power transistor which would have a fairly high gate capacitance, this slow switching could cause excessive heating in the transistor (but that is a whole other story).
If you want to switch faster you may find you need to reduce the resistor value to increase the speed in which the output is pulled low. When doing this you have to be mindful of the current that flows through the diode and on through the resistor during a logic 1 and hence the power dissipation in the diode (Vf * Id).
Given you have a 3V signal and a 5V signal, what you can do (and I have done in the past) is use a TTL-level 2-input OR gate (e.g. 74HCT32). These have input-high voltage thresholds (Vih) which are quite low, so the gate can run at 5V while still accepting a 3V input (e.g. this 74HCT32 has a Vih of 2V on a 5V supply). By using an OR gate like this you eliminate the issue of voltage dropped across diodes and get a push-pull output - the output of the logic gate sources and sinks current, so you don't need the pull-down resistor.
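To put numbers on the diode option, a small sketch like the one below reproduces the figures above. It assumes Vf = 1 V (roughly the 1N4148 worst case at higher currents) and a 10k pull-down; substitute your actual values:

```python
def diode_or_output(v_high, v_f=1.0, r_pulldown=10e3):
    """Output level, diode current and diode dissipation when one input is high."""
    v_out = v_high - v_f       # high level seen at the shared output node
    i_d = v_out / r_pulldown   # current through the diode and pull-down resistor
    p_d = v_f * i_d            # power dissipated in the conducting diode
    return v_out, i_d, p_d

for v in (3.0, 5.0):
    v_out, i_d, p_d = diode_or_output(v)
    print(f"{v:.0f} V input -> {v_out:.1f} V out, {i_d*1e3:.2f} mA, {p_d*1e3:.2f} mW")
```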
|
H: How to select power (between USB and battery)?
I'm hoping to power my device with a 3.7V (2x 18650) rechargeable battery which can be recharged within the device using USB (like in a phone, laptop, etc.). I believe some kind of power selector is needed if the device is to be operational during charging. Is there a module out there that enables this?
Thanks
UPDATE --
Changed question to consider 3.7V (2x 18650).
AI: You can use a load sharing circuit like this one:
When no USB is connected, the PMOS transistor Q1 will be on (through the pull-down resistor R2) and the battery will power your circuit. When USB is connected, the input voltage will pull the gate of Q1 up, disconnecting it, and the MCP73831 will charge the battery while the USB supply provides power to your load through the D1 diode.
You can use any battery charger chip (this one is very simple), but the principle and load sharing circuit can be the same.
|
H: Solar panel strings: when does current flow via bypass diodes where each panel has a different maximum power point?
I have strings of 5 to 6 a-Si/µ-Si thin-film PV panels, and each panel's junction box contains a BY255 bypass diode, to make sure that current can still flow in less than optimal situations, such as partially shaded or broken panels.
I am now wondering how current flows in such situations, for example when one PV panel receives 200 W/m2 and another 1000 W/m2. In other words: one panel in the string could generate a current of 1.5 amps at its maximum power point while another would only generate 0.3 amps.
Will the string now produce 1.5 amps of current, with 1.2 amps flowing through the shaded panel's bypass diode and the other 0.3 amps through its PV cells? I guess not, because of the potential difference: some 80-90 volts over the cells, versus maybe -1 volt (the forward drop) over the diode. So my guess is that only 0.3 amps will flow, but when will the bypass diode in the PV junction box carry current? Where are the tipping points?
The answer in question regarding How does physical damage affect the performance of photovoltaic cells? does explain something related, though I still don't understand the role of the bypass diode.
AI: So my guess is that only 0.3 amps will flow, but when will the bypass diode in the PV junction box carry current? Where are the tipping points?
The situation is far worse than you imagine.
The maximum available current from a series string of cells is the current available from the lowest-current cell. A lightly shaded cell will reduce panel output. A heavily shaded cell will almost stop panel output - and as this happens, reverse potential from all the unshaded cells develops across the shaded cell so that it contributes nothing. If bypass diodes are fitted, current then flows in the appropriate diode around the shaded cell, and also around nearby cells if the diode spans several cells.
Accordingly, panels - and cells within a panel - which are connected in series need to be matched as closely as possible, both in performance and in received insolation (light) levels. A 1000 W/m2 : 200 W/m2 imbalance such as you suggest can be fatally bad and should be avoided by design to the maximum extent possible. Bypass diodes should be fitted to prevent large reverse voltages occurring across shaded cells, as these can destroy whole installations, or worse.
A bypass diode is connected in reverse polarity across a cell's output when the cell is acting as a current source. If the cell current is lower than the string current, the voltage acting to drive current through the cell causes it to produce less output and then reverse biases it completely. The bypass diode across the cell conducts when the cell's reverse voltage exceeds the forward voltage of the diode. So none of a partially shaded cell's output contributes to the string current when the bypass diode is conducting; instead the diode reduces output by a diode drop, plus whatever voltage the shaded cell(s) no longer produce.
While this description may seem contrary to what you expect, you will find it well described in many online articles.
Many examples can be found here and here
The effect can be easily visualised using the diagram below. For the diode to conduct, the usually negative terminal of the PV cell must become positive relative to its usual positive output, and the cell then produces no current.
Diagram from Dale Marshall here
Note that in the absence of bypass diodes the full remaining string voltage will build up across the shaded PV cell. In high-voltage systems this may be in the 400-1000 V range. Cells with defects may have low isolation voltages and break down. In such cases fires can (and do) occur which destroy the cell concerned, often damage others, usually write that panel off, and may destroy the whole installation or even the building it is mounted on.
|
H: A doubt on potential difference
This is a schematic diagram of the circuit.
Is it possible to take the voltage difference across AB as E? I have learned that the potential difference across parallel elements is the same, therefore I assumed the potential difference across AB to be E. But I want to know whether I am correct or wrong.
I believe this concept is a crucial factor when solving circuits, and since I am a beginner I would like to understand it.
AI: As @Curd mentioned, A-B is not in parallel with E. An easy way to tell if any two elements are in parallel is to check whether they share the same nodes. This is how you solve the circuit.
Note how I assume the two resistors to be in series; this can only be done because there is no load connected across A and B. After that, all you have to do is apply two voltage dividers.
Edit 1: the values in the diagram below have nothing to do with the solved example above.
The nodes in the circuit above are represented as follows (ignore the values):
simulate this circuit – Schematic created using CircuitLab
In simple terms, why is AB not in parallel with E? It is because E spans the red and black nodes, while AB spans the yellow and black nodes. Two elements may be considered to be in parallel if they share the same "colors" (nodes). In the diagram, each node has been colored in a distinct color.
So how does parallel look like? It looks like the following:
simulate this circuit
As you can see, R2 and R3 share the same colors (nodes), hence they are in parallel.
|
H: Is it possible to transform a transfer function into voltage?
So I have a control system which I'd like to control externally. I have the error signal from the system and I use an ADC to convert the signal to a number. I have the transfer function of the system (calculated with the CHR method). I want to use a PI (proportional-integral) controller. And I have a DAC to convert the signal back to an analog signal.
But now, I only have a transfer function: $$11.3 s^2 + 5.82 s$$
So how can I feed back this signal to the system?
AI: As your controller must be implemented digitally in discrete time, you have two options :
1. Design your PI controller in the continuous-time (s) domain and convert it to discrete time.
2. Convert your plant to discrete time and design the PI controller accordingly.
Method (2) is generally preferred because it handles delays more easily and the controller is directly implementable without additional transforms. Look at this answer for more info.
Once you have the controller in discrete time make sure it's in polynomial form:
$$\dfrac{Y(z^{-1})}{X(z^{-1})}=\dfrac{b_0+b_1z^{-1}+..+b_kz^{-k}}{1+a_1z^{-1}+..+a_kz^{-k}}$$
and convert to a difference equation that can be directly implemented in hardware:
$$b_0x(n)+b_1x(n-1)+..+b_kx(n-k)=y(n)+a_1y(n-1)+a_2y(n-2)+..a_ky(n-k)$$
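As an illustration, a direct implementation of that difference equation might look like the sketch below. Python is used here only to show the structure; on the actual MCU this would typically be C with fixed-point or float arithmetic, and the coefficient values shown are placeholders, not ones derived from your plant:

```python
class DiscreteController:
    """y(n) = b0*x(n) + ... + bk*x(n-k) - a1*y(n-1) - ... - ak*y(n-k).

    b and a are the numerator/denominator coefficients of the z-domain
    transfer function, with a[0] normalised to 1.
    """
    def __init__(self, b, a):
        assert a[0] == 1.0
        self.b, self.a = list(b), list(a)
        self.x_hist = [0.0] * len(b)        # x(n), x(n-1), ..., x(n-k)
        self.y_hist = [0.0] * (len(a) - 1)  # y(n-1), y(n-2), ..., y(n-k)

    def update(self, x_n):
        self.x_hist = [x_n] + self.x_hist[:-1]
        y_n = sum(bi * xi for bi, xi in zip(self.b, self.x_hist))
        y_n -= sum(ai * yi for ai, yi in zip(self.a[1:], self.y_hist))
        self.y_hist = ([y_n] + self.y_hist)[:len(self.a) - 1]
        return y_n

# Placeholder PI example: u(n) = u(n-1) + (Kp + Ki*Ts)*e(n) - Kp*e(n-1)
pi = DiscreteController(b=[1.2, -1.0], a=[1.0, -1.0])
u = pi.update(0.5)   # feed each ADC error sample in, write the result to the DAC
```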
|
H: Voltage drop through a resistor using Ohm's Law?
I'm having a very hard time comprehending basic Ohm's Law.
I was recently watching a video on the operation and use of Operational Amplifiers, and was completely lost because I couldn't understand what the resistors in the circuit were actually doing.
For example, part of his circuit was limiting 10V to 5V using a 2.2k resistor.
Where do the values go in Ohm's equation? How can we know the output voltage without the current in amperes?
The only two configurations I can think of:
$$10V = \frac{2.2k\Omega}{I}$$
$$5V = \frac{2.2k\Omega}{I}$$
But he got the value without knowing the amperage; how is this accomplished?
AI: I was mistaken. He was using a voltage divider.
This is the equivalent piece of the circuit I was concerned about...
simulate this circuit – Schematic created using CircuitLab
Because I didn't understand exactly how current flows, in my mind ONLY THE FIRST RESISTOR had any effect on the voltage of the output wire because logically current had only gotten to that point. The following terms may not be correct, but they suffice to explain the concept.
What I realize now is that the TOTAL RESISTANCE of everything leading back to ground sets the current, and that the resistance up to the point of measurement (what I call CURRENT RESISTANCE below) sets the voltage drop at that point.
The following is how you would get the output voltage, if it were unknown:
I = Amperage
V = Voltage
R = Resistance
Ohm's Law: \$I = V/R\$
Total Resistance (TR) (The Total Resistance in Your Circuit) = R1 + R2
Current Resistance (CR) (The Resistance in Your Circuit Up To The Point of Measurement) = R1
TR = 4.4k
CR = 2.2k
This will be the current in mA that flows through our circuit:
\$I=V/R\$
\$I = 10v/TR\$
\$I ≈ 2.27mA\$
This will be the voltage drop across the resistance up to our measurement point. (I.E. Our Output):
The drop across R1 happens to equal the desired output voltage here only because R1 = R2. In general, at this stage we have our voltage drop, not the actual output voltage.
\$V=IR\$
\$V = .00227A * CR\$
\$Voltage Drop (VD) ≈ 4.994v ≈ 5v\$
This will be the voltage on our measurement point. (I.E. Our Output):
\$Output Voltage (OV) = V - VD\$
\$OV = 10v - 5v\$
\$OV = 5v\$
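The same arithmetic can be written as a small sketch (the values match the example above; the function and names are just for illustration):

```python
def divider_output(v_in, r1, r2):
    """Output voltage at the junction of an unloaded divider: R1 on top, R2 to ground."""
    i = v_in / (r1 + r2)       # total current through the series pair (TR)
    v_drop_r1 = i * r1         # voltage drop across R1 (the "current resistance")
    return v_in - v_drop_r1    # output voltage; equivalently i * r2

print(divider_output(10.0, 2.2e3, 2.2e3))   # 5.0
```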
|
H: How come product makers tend to make their own boards rather than putting together modules?
It seems that most, if not all, companies that produce mass market goods use custom boards rather than putting together modules that are already available in the general market.
For instance, battery chargers use custom circuits rather than some module from aliexpress. Bluetooth speakers also tend to use custom circuits instead of a HC06.
Wouldn't it be more economical to put together existing modules than to build custom boards? Wouldn't that require lower development and testing costs?
Thanks
AI: Another consideration, in addition to davidrojas' and the other answers, is that if I as a manufacturer decide to use another commercial product (note: this isn't outsourcing manufacture of my own design to another company, but actually purchasing a commercial 'module' as you describe), then I am stuck living with it. There are two serious problems there. One: what if the module has a defect, especially a safety-related one? My company has to bear the brunt of the publicity, and there is no guarantee that I can get the module maker to fix the problem, leaving me back at square one, which is designing my own. Two: the company selling the module to me has no obligation to keep it the same. They can change the footprint, connections, or features, or even discontinue it at any time.
Major companies that do outsource parts generally still design the parts in-house and then send them to one or more manufacturers to actually make them. This is true for just about every commercial product, from electronics to lawnmowers.
|
H: ADC calibration in ATTiny88
I'm writing an embedded app for ATtiny88 - one of its task is to measure voltages of serial connected two 12 V car batteries.
The microcontroller is powered from around 4.5-5 V (LM317); the measured voltages (~24 V and ~12 V) are connected to two ADC channels via suitable voltage dividers. In my application I have to use the internal bandgap reference, theoretically equal to 1.1 V.
I should probably do some kind of calibration procedure, so right now I figured something like this:
Apply exact 24 V (12 V) from an external, stabilized power supply to the voltage divider.
Calculate gain coefficient for each used channel (24 (12) / mean value from ADC channel)
Store these coefficients in EEPROM and use it as a scaling factor for calculating measured voltages.
I tried to measure the internal bandgap reference - one of the ADC channels is internally connected to it - but it always returns 1023 - maximum value for a 10-bit ADC.
My question is: how do I measure the voltages most accurately? The bandgap reference voltage probably varies between different chips (the datasheet says 1.0-1.2 V). Is there any better way to do the calibration than the one I'm using right now? Please help.
AI: The bandgap reference defines the upper limit of the ADC range, so you will always get a reading of 1023. Even dividing it down does not help, as the ADC will always measure a fixed fraction of its reference.
You have several options for the calibration:
As it's already available on a pin, just measure the internal reference voltage precisely.
Apply a known voltage to a free ADC input. While this decouples your calibration voltage from the rest of the circuit, you assume that the inputs have identical characteristics, and you need some extra code to read out that ADC channel.
Apply a known voltage to the used ADC input.
Apply a known voltage to the 24 V input of your voltage divider. This is the best calibration solution, as it also corrects errors due to the precision of your voltage divider resistors. (The worst-case error can be estimated as twice the precision, so 2% if you use 1% resistors; better use 0.1%.)
While the last option allows you to calibrate the ADC for a certain voltage with the highest precision, you may also do the measurement at several voltages. This way, you will find out whether there is an offset (0 V is not 0x0000), a non-linearity, or some other effect. Here is a result from my work:
This is a calibration of a 16bit bipolar ADC with some electronics upstream, which also have an impact on the measurement performance. In general, the linearity is fine, and you can use a linear function to convert ADC reading to voltage vice versa. However, the residuum (difference between read out and expected ADC value) shows this wave-shaped curve. The effect is not large, but it's there.
Note also that I did not use the highest and lowest values to calculate the function, as this would bend down the left side and bend up the right side of the residuum and so give less precision. Instead, the function is determined so that it fits the readings well over the full range. (I could have chosen something non-linear for even better results.)
OK, I guess that's more than what you need to know. One last point:
Always think about what precision is achievable and feasible. Your ADC has a resolution of about 0.1% (1/1023), so 1% resistors in the divider will dominate the error budget, while 0.1% resistors would be fine. And if your multimeter has an accuracy of 3%, that's the best you can get from your calibration. Finally, 0.1% of 24 V is 24 mV, so decide how much precision you actually need.
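To illustrate the multi-point calibration idea, a sketch like the following fits a straight line through several calibration points (the numbers below are entirely made up) and converts raw readings back to volts; the resulting gain and offset are what you would store in EEPROM:

```python
import numpy as np

# Hypothetical calibration data: voltages applied to the 24 V divider input
# (measured with a good multimeter) and the averaged ADC readings at each point.
applied_v   = np.array([  4.0,   8.0,  12.0,  16.0,  20.0,   24.0])
adc_reading = np.array([168.0, 338.0, 508.0, 676.0, 846.0, 1015.0])

# Least-squares fit of reading = gain * voltage + offset over the whole range,
# rather than using only the two end points.
gain, offset = np.polyfit(applied_v, adc_reading, 1)

residuum = adc_reading - (gain * applied_v + offset)
print(gain, offset, residuum)    # inspect the residuum for offset / non-linearity

def adc_to_volts(reading):
    return (reading - offset) / gain
```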
|
H: Detect change on D- D+ lines
How can I detect a change on the D- and D+ USB lines (pulling one or both of them high) from a Linux environment?
AI: After attaching a 1k5 pull-up resistor from 3.3 V to D+ or D-, an unknown/unrecognised full-speed or low-speed USB device will show up. You can check for this device with e.g. lsusb.
A much riskier method: most USB hosts should detect excessive current drawn from the USB power line. The problem is that some motherboards may not have overcurrent protection at all, or it may not work correctly.
Other than that, an STM32F042F4/F6 in a TSSOP20 package costs less than $3 ($1 in volume), does not require a crystal, and would be much more flexible.
|