H: Rule for two components' top overlays overlapping As you can see in the attached picture, two components' top overlays overlap each other in my PCB design. I can see the error, but the Altium rule check does not report it. I need a rule for this error. Can you help me? AI: You can't create a rule using the Top Overlay as an outline. If you want to have the clearance between two components checked, create 3D models and a component clearance rule.
H: How is the equation for compliance voltage derived? Here is the link for a current mirror discussion. I am trying to understand the meaning of the compliance voltage equation and how it can be derived: https://wiki.analog.com/university/courses/electronics/text/chapter-11 I have attached a screenshot of the relevant section: Please advise. Thanks, VT AI: Please commit this to memory forever. You will need to thoroughly understand it, literally over and over and over again. $$\begin{align*} \textrm{Shockley equation applied} \\ \textrm{to simplified BJT model}\\ \textrm{ignoring the Early Effect:} \\ I_C&=I_S\cdot\left(e^{\cfrac{V_{BE}}{V_T}}-1\right) \\ \textrm{move }I_S\textrm{ over to the other side,}\\ \cfrac{I_C}{I_S}&=\left(e^{\cfrac{V_{BE}}{V_T}}-1\right) \\ \textrm{Drop the insignificant -1 term,} \\ \cfrac{I_C}{I_S}&\approx e^{\cfrac{V_{BE}}{V_T}} \\\\ \textrm{Take the logarithm of both sides,} \\ \\ \operatorname{ln}\left(\cfrac{I_C}{I_S}\right)&\approx\cfrac{V_{BE}}{V_T} \\ \\ \textrm{Solve for }V_{BE}, \\ \\ V_{BE}&\approx V_T\cdot \operatorname{ln}\left(\cfrac{I_C}{I_S}\right) \end{align*}$$ You should be able to do the above in your sleep. You should understand the meaning of any of the above in your sleep. And you should also know that the thermal voltage \$V_T=\frac{n k T}{q}\$ and is approximately \$26\:\textrm{mV}\$ at room temps, that for small signal BJTs \$n\approx 1\$ (but for diodes it is usually larger), and that \$I_S\$ is, itself, highly temperature dependent (roughly proportional to the 3rd power of T) and that it dominates the temperature behavior of \$I_C\$ because its effect is not only larger but of opposite sign to the temperature effects caused by \$V_T\$. (The temperature-dependent \$I_S\$ equation is more complicated and not provided here.) Getting back to the point of the current mirror discussion, this just means that you don't want to saturate the BJTs in a mirror -- that's a bad thing as they stop being much of a mirror then. So \$\vert V_{CE}\vert\ge\vert V_{BE}\vert\$, as computed above. Just make sure that the load you attach, together with the mirrored current, doesn't violate that.
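As a quick numeric check of the \$V_{BE}\approx V_T\ln(I_C/I_S)\$ result above, here is a small C++ sketch; the saturation current used is an assumed, typical small-signal value, so the printed voltages are only illustrative.

```cpp
// Sketch: evaluate V_BE = V_T * ln(I_C / I_S) at room temperature.
// The saturation current below is an assumed, typical small-signal value;
// real parts vary over orders of magnitude and with temperature.
#include <cmath>
#include <cstdio>

int main() {
    const double VT = 0.026;   // thermal voltage, ~26 mV at room temperature
    const double IS = 1e-14;   // assumed saturation current in amps (illustrative only)
    for (double IC : {1e-6, 1e-4, 1e-3, 1e-2}) {
        double VBE = VT * std::log(IC / IS);   // V_BE ~ V_T * ln(I_C / I_S)
        std::printf("I_C = %8.0e A  ->  V_BE ~ %.3f V\n", IC, VBE);
    }
    return 0;
}
```

Note how a factor-of-ten change in collector current moves V_BE by only about 60 mV, which is why the logarithmic form is so convenient to carry around in your head.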
H: Why is noise power not multiplied by the spreading gain? Intuitively, I would expect the noise power to be enlarged by the spreading gain. The reason I think this is: let us consider an information signal of bandwidth W Hz. Then the receiver suffers from noise of bandwidth W Hz as well. If we spread the information signal at the chip rate, which is commonly much larger than W Hz, then (information signal X spreading signal) will also have a much larger bandwidth than W Hz. Thus, the receiver suffers from noise of bandwidth much larger than W Hz. Since noise power is BW*N0/2, where BW=bandwidth and N0=noise density, I think much larger noise results. Why do people say DSSS gives a much better SNR? (Roughly, people say the SNR will be increased by around the spreading gain.) $$C = BW \log_2 ({1+SNR}) \quad\longrightarrow\quad C = BW \log_2 ({1+G\times SNR})$$ AI: If you measure the total background noise that comes in through the receiver bandwidth, then that's true. We can conceptually measure total noise by connecting a resistor to the IF (the Intermediate Frequency of the spectrum analyser we're using) and seeing how hot it gets, the total thermal power, and this increases with the bandwidth (assuming spectrally flat noise). As we increase the bandwidth 10 times, we get 10x more noise power in. However, even though the spread signal looks noise-like on a spectrum analyser, as a result of being spread with a noise-like signal, we know something very important about it. We know what the spreading signal is. That means that once we have locked on to the spreading signal, we can average the signal coherently, which means the signal adds as voltage, not as power. What we do is correlate the incoming wideband signal with a locally-generated replica of the spreading signal, to extract the original narrow-band signal. Now something quite important happens when we correlate and filter the wideband spread signal, because we are also correlating and filtering the wideband background noise as well. If the spreading signal is uncorrelated to the background noise (an assumption we can usually make safely) then this process reduces the power of the background noise by the ratio of the bandwidths. Because our local spreading signal is the same as the original, we retain all of the original signal power, while we reduce the total background noise. And there is your SNR improvement. A corollary of this is that we need to generate a local replica of the signal to do the demodulation; if it's wrong, then we don't get the SNR improvement. So how do we lock on in the first place? It doesn't happen by magic. But perhaps that's the answer to another question.
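To make the "signal adds as voltage, noise adds as power" point concrete, here is a minimal C++ sketch of despreading one bit; the spreading factor, amplitude and noise level are arbitrary assumptions, not values from the question.

```cpp
// Sketch: why despreading recovers SNR. One data bit is spread over N chips,
// white Gaussian noise is added, and the receiver correlates with a local
// replica of the chip sequence. Signal chips add coherently (N * amplitude)
// while noise adds as power (sqrt(N) * sigma), so SNR improves by ~N.
// All numbers (N, amplitude, sigma) are illustrative assumptions.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int N = 128;            // spreading factor (chips per bit)
    const double A = 1.0;         // chip amplitude
    const double sigma = 2.0;     // noise std dev per chip (per-chip SNR < 1)
    std::mt19937 rng(1);
    std::bernoulli_distribution chip(0.5);
    std::normal_distribution<double> noise(0.0, sigma);

    std::vector<int> code(N);
    for (int &c : code) c = chip(rng) ? 1 : -1;   // known spreading code

    int data_bit = +1;                            // transmitted bit
    double correlator = 0.0;
    for (int i = 0; i < N; ++i) {
        double rx = data_bit * A * code[i] + noise(rng);  // received chip
        correlator += rx * code[i];               // multiply by local replica
    }
    // Expected signal part: N*A = 128; noise part has std dev sigma*sqrt(N) ~ 22.6
    std::printf("correlator output = %.1f (signal term %d, noise rms ~ %.1f)\n",
                correlator, N, sigma * std::sqrt((double)N));
    return 0;
}
```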
H: KCL and KVL with Laplace give an incorrect result For t > 0, find v0(t). Using nodal analysis I found v1 and v2, took the difference, and got the correct result. But I can't get the same result with a direct KCL/KVL approach. Can anyone help with that? AI: Your equation for \$V_o(s)\$ may be factorised to: $$\frac{40}{(s+1)(s+6)}=\frac{8}{s+1}-\frac{8}{s+6}$$ Giving: $$v_o(t)=8(e^{-t}-e^{-6t})$$ The MathCAD answer is also correct (expand it by hand to see), but not in as compact/useful a form.
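A small C++ sketch can confirm the partial-fraction split numerically and evaluate the resulting time response; the sample values of s and t are arbitrary.

```cpp
// Sketch: numerically check the partial-fraction split and the resulting
// time-domain expression v_o(t) = 8(e^-t - e^-6t). Sample s and t values are arbitrary.
#include <cmath>
#include <cstdio>

int main() {
    for (double s : {0.5, 2.0, 10.0}) {
        double lhs = 40.0 / ((s + 1.0) * (s + 6.0));
        double rhs = 8.0 / (s + 1.0) - 8.0 / (s + 6.0);
        std::printf("s=%5.1f  lhs=%.6f  rhs=%.6f\n", s, lhs, rhs);
    }
    for (double t : {0.0, 0.2, 1.0, 3.0}) {
        double vo = 8.0 * (std::exp(-t) - std::exp(-6.0 * t));
        std::printf("t=%4.1f s  v_o(t)=%.4f V\n", t, vo);
    }
    return 0;
}
```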
H: Why am I measuring earth to line voltage as zero? Is that what it is supposed to be? I am confused about the topology below: I was wondering about the potential difference between the line and earth. So I decided to check this through a 3-prong plug. I thought I would read the same voltage as line to neutral, since neutral and earth are supposed to be connected. But when I measure the voltage between the line and the earth, the voltmeter shows zero volts. What does that mean? AI: If you are measuring 0V between the hot/line pin and ground, you either have a big problem or are doing something wrong. The most likely option is that you have your multimeter set to DC voltage rather than AC voltage, which would result in the average DC level being measured, which for an AC waveform is theoretically 0. You could also be measuring the wrong pins on the plug, but I imagine that is not the case. The alternative is that your plug or house is incorrectly wired such that the neutral and hot/line pins are swapped over. In theory (from an electrical standpoint) this shouldn't cause a problem, but in practice and from a safety standpoint it is a big issue in devices which don't have an earth pin and assume that the neutral will be at earth potential.
H: I want to know the usage of a low-pass filter and the usage of the phase Good morning. I would like to know a practical example of a low-pass filter and a high-pass filter, and what is the use of calculating the phase between the input and output? AI: Low-pass filter: your ADSL line filter, so the higher-frequency "broadband data" does not impact voice. Likewise, from a power electronics point of view with a PWM inverter, there may be a need to "smooth out" the higher-frequency PWM to produce a sinusoidal voltage. High-pass filter: these are used in loudspeakers to reduce the low-level noise. Likewise, as Andy aka stated: hifi crossover. Use of calculating phase... depends on the field of interest. Control stability is one, group delay is another. You tagged this "power electronics"; do you have something specific in mind?
H: Is the 74LS139 a DEMUX? If yes, then how can I give it input and select lines? I am trying to learn about DEMUXes. So, I have a 4-channel DEMUX diagram: From this diagram we can clearly see that a DEMUX needs 2 select lines and 1 input to give 4 output lines. After that I took a look at the datasheet of the 1:4 DEMUX IC 74LS139, which shows a diagram as below. Here is the link to take a look at the datasheet. In the second image we can clearly see that it has two input lines, A0a and A1a, on the left DEMUX and A0b and A1b on the right DEMUX. There are no select lines. Also, we cannot see a logical connection between any input lines of the left and right DEMUX. Can someone explain to me how to use this IC as a DEMUX? AI: This chip is a dual 2->4 demux, not a 3->8 demux; that's why there are no connections between the two halves of the chip's internals. The A lines are the address (=select) lines. The E (enable) line can be used as the (active low) input. The outputs are active low, hence the outputs that are not addressed are high.
H: Analysis of the current mirror PNP transistor section Attached is the circuit I am trying to learn. The lower part with NPN transistors (outside the boxes) is familiar circuitry and is self-explanatory. I am not sure how the PNP transistor section (in the black box) works. Also, what is the significance of R3 and R4 (red box)? How would I calculate the value of these resistors? AI: This circuit looks a bit like a reference current generator I sometimes use: It also works with bipolar transistors so ignore the fact that this circuit uses MOSFETs. Also imagine that R is your sense resistor and the load resistor. This circuit works by having a 1 : 1 current mirror (M3 and M4) which makes the left and right currents equal. But M1, M2 and R also make a mirror but it is not 1 : 1, it is actually non-linear. Having these two work together means a solution must satisfy both mirrors. The wanted solution is where left and right currents are equal but there is also the solution where all currents are zero. So this circuit needs a start-up circuit to detect that and pull it to the wanted (1:1) solution. I think that is where your R3 and R4 come in. These force a current through Rsense, R2 and T2 so that the circuit can start up properly. Once started the current through R3 and R4 will be small, much smaller than the nominal current the circuit is designed to run at. That explains the relatively high value of R3 and R4. I would have drawn R3 and R4 not up but down from the base of T2 so that it is clear they conduct a current to ground.
H: Circuit for generating a PWM signal of varying high and low voltage values I have a microcontroller that is generating a PWM signal between 0 and 12V of varying frequency. I'm looking to build some sort of circuit that will take this signal and then attenuate the high and low values of that square wave to be anywhere between 0 and Vmax (12V in this case). The trick here is that the low signal may need to be higher than 0V, so it can't just be a voltage divider on the square wave. I would guess that this would use some sort of potentiometer or maybe even a variable op amp, but I'm a software guy and don't know what I'm doing. :) Basically I want to be able to generate high and low voltage values of my choice while still retaining the frequency and duty cycle. Ex. 250mV-500mV, 3V-5V, etc. Any ideas where to start, or what a circuit for this would look like? Please comment if more information is required. AI: Using a precision clamp might be OK: - U1 clamps the maximum level at the value V3 and U2 keeps the minimum level no lower than V5. Probably best to do a simulation to see how those particular op-amps handle the maximum PWM frequency you are wanting to use - faster devices can of course be chosen.
H: Noise Shaping of \$ \Sigma \Delta \$ Converters I am currently reading the book Continuous-Time Sigma-Delta A/D Conversion to get myself a better understanding of \$ \Sigma \Delta\$ converters. Unfortunately I got stuck at the point where noise-shaping is explained. The chapter started clearly, saying that for studying the effects of noise, the quantizer can be approximated by a linear model, which then leaves us with a linearized \$ \Sigma \Delta\$ modulator as shown at the bottom. With this model we want to achieve two different transfer functions, one for \$x(n)\$ and another for \$e(n)\$. $$ Y(z)=STF(z)X(z)+NTF(z)E(z) $$ STF ... Signal Transfer Function, NTF ... Noise Transfer Function Since the noise shall occur at high frequencies and the signal at low frequencies, it seems clear that the STF should be a low-pass and the NTF a high-pass filter. Now the author states that this leads to the following concrete transfer functions: $$ STF(z)=\frac{1}{\frac{1}{H(z)k} + 1} \\ NTF(z)=\frac{1}{H(z)k + 1} $$ No further explanation. How is it possible to conclude that the STF and NTF need to look like that? Furthermore, how can we conclude that the simplest \$H(z)\$ is an integrator? AI: Noise shaping refers to the spectral shaping of the quantization noise. It is assumed that the input signal of the quantizer is sufficiently "busy" so that it can be modeled as a linear gain element with some additive noise. The ultimate goal is to get an output signal that represents the input signal as closely as possible. The modulator shown in your post is the most common type, a lowpass delta-sigma modulator, it works best for low-frequency input signals. Looking at $$ Y(z)=STF(z)X(z)+NTF(z)E(z) $$ we see that for an accurate representation of the input signal X(z) the STF should be about 1 and the NTF should be as small as possible. To fulfill the first requirement STF ~ 1 we can look at $$ STF(z)=\frac{1}{\frac{1}{H(z)k} + 1} $$ and quickly see that we only need to have an \$H(z)\$ that is much larger than 1 for small frequencies. An integrator will be a perfect fit, however other filter types are possible as well. For the second condition, quantization noise suppression at low frequencies, the expression $$ NTF(z)=\frac{1}{H(z)k + 1} $$ shows, that again \$H(z) \gg 1\$ for low frequencies is required. Again this can be achieved with an integrator.
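Here is a minimal C++ sketch, assuming the common first-order choice \$H(z)=z^{-1}/(1-z^{-1})\$ (a delaying integrator) and \$k=1\$; it evaluates \$|STF|\$ and \$|NTF|\$ over frequency to show the flat signal path and high-pass noise shaping described above.

```cpp
// Sketch: magnitude of STF and NTF for a first-order modulator with the
// common choice H(z) = z^-1 / (1 - z^-1) (a delaying integrator) and k = 1.
// Then STF(z) = z^-1 (flat, |STF| = 1) and NTF(z) = 1 - z^-1 (high-pass),
// i.e. quantization noise is pushed toward high frequencies.
#include <cmath>
#include <complex>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    for (double f : {0.001, 0.01, 0.05, 0.1, 0.25, 0.5}) {   // f relative to fs
        std::complex<double> zinv = std::exp(std::complex<double>(0.0, -2.0 * pi * f));
        std::complex<double> H = zinv / (1.0 - zinv);
        std::complex<double> STF = H / (1.0 + H);   // = 1 / (1/(Hk) + 1), k = 1
        std::complex<double> NTF = 1.0 / (1.0 + H); // = 1 / (Hk + 1), k = 1
        std::printf("f/fs=%5.3f  |STF|=%.3f  |NTF|=%.4f\n",
                    f, std::abs(STF), std::abs(NTF));
    }
    return 0;
}
```

The printout shows |NTF| shrinking toward zero at low frequencies while |STF| stays at 1, which is exactly the "H(z) much larger than 1 at low frequency" argument made above.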
H: P-channel MOSFET for 100A? I'm using an alternator to charge a 300Ah lead-acid battery bank. The cable has a 60A fuse, yet measuring a shunt suggests that the current is about 70-90A for the first 10-20 seconds, and after a few minutes it is down to about 30-35A. The alternator provides 14.0V at the car battery, and cable losses further reduce the voltage to about 13.8V at the battery bank. I'm trying to introduce an automatic switch using a P-channel MOSFET. The problem I'm facing is that the voltage drop over the MOSFET should be minimal, since a lower voltage will significantly reduce the charge current. A better alternative would be a boost circuit to 14.4V, but at 30-50A, they get very expensive. The P-channel MOSFETs tend to have higher on-resistance than N-channel. But I've found this: AP6681GMT The package is undesirable, and heatsinking is only possible through the PCB. I built a circuit, but when I tried it, black smoke came out of it, and it failed open. I'm wondering what I did wrong, and what my alternatives are. I understand a relay has a high on-resistance? Would it make sense to buy a starter relay for a car? The specification is typically not provided and perhaps the on-resistance will be too much. simulate this circuit – Schematic created using CircuitLab For testing the circuit, I connected the drain-side to a blower motor, and switched the gate-signal on/off every 1000ms. The circuit worked correctly, even with peak currents of 20A and sustained current of 7A. I then connected the drain-side to the battery bank. When I turned on the alternator, it jumped to 14.0V on the source side. Is it possible that the 1k resistor didn't raise the Vgs fast enough, such that it continued to conduct without pulling-up? e.g. a runaway scenario. UPDATE: Using an N-channel MOSFET instead? N-channel MOSFET: IRFB4410Z (100V, 97A, 9mOhm/10V) or AP99T03GP (30V, 200A, 2.5mOhm/10V 4mOhm/4V) simulate this circuit This would allow me to use a heat sink, to have lower on-resistance, bigger package. But with the inconvenience of a boost-circuit. AI: Whether a part smokes comes down to the thermal resistance (temperature rise per watt of power lost, in ['C/W]) and the rating of the heatsink. Pout = 50A*14.4V = 720 Watts. However, when supplying a sine wave into a rectifier with a battery acting as almost a short circuit, the peak currents can be >>10x as much, thus increasing the requirement to >>500A pk. With 2.2mΩ and short-term currents of >>100A for a drained battery, I^2R = 22 Watts, which needs a good heatsink to the chassis with an insulator and paste. To minimize the Vds drop AND have sufficient Vgs, you need a Pch switch with gate-to-ground control AND accurate gate control. Use a Pch device and decide at what threshold to disable the switch. I roughly chose a 13.2V R divider with a poor-man's 1.1V Darlington comparator. (It works.) The threshold may need to be corrected slightly, to about 13.4V. Since the drain current is low, the input threshold to the Darlington is also low at 1.1V @ 1mA. The Pch part is a 2.2mΩ, 170A type, about $2.20 (1 pc). simulate this circuit – Schematic created using CircuitLab The Vbat-threshold-to-Vgs voltage gain is > 1k, so the threshold amplifies Vgs to quickly move past the turn-ON threshold from 1.2 to 2.2V with just a few mV change in V+. The only drawback is that the threshold is not very precise (5%) and has an NTC of -2.1mV/'C, which translates to 21mV/'C of battery threshold shift, or a drift of 13.2 +/-0.4V for +/-20'C, so 13.5V is advised.
H: Arduino - Use Arduino as a switch to power up a 12 V pump I want to switch a 12 V pump from the digital output of an Arduino. I have tried using 5 V relays by using this schematic: and put two of these in parallel. The power they are getting is from the Arduino itself, which is connected to 12 V on V-in. The relays behave abnormally. Can I get an Arduino to switch on a 12 V pump and a 24 V pump? AI: What you show is a good topology. However, there are details in parts values that matter. One problem may be that the relays require too much current from the 5 V supply. You say you have 12 V available, from which the 5 V is derived anyway. A better solution is to power the relays from 12 V. Very likely the same relay series you are using has a version with a 12 V coil. That is a very common coil voltage. Relays in a series have about the same coil power. The 12 V version will need about 5/12 of the current the 5 V version does. For example, if the 5 V version requires 60 mA, then the 12 V version will require about 25 mA. Running the relays from 12 V does some good things: It doesn't load the 5 V supply with the relay coil current. That can be 10s of mA per relay, and could possibly exceed the 5 V current budget. It's more efficient. The 5 V supply is being made from 12 V somehow, and that process is not 100% efficient. Drawing the same power from the 12 V supply is therefore more efficient than drawing it from the 5 V supply. This can be a significant issue if the 5 V is being linearly regulated from the 12 V. An extra 100 mA, for example, from the 5 V supply causes an additional 700 mW of dissipation in the regulator. That might be the difference between OK and too hot. Since the 12 V version of the relay uses less current, there is more margin for the gain of the transistor, or the amount of base current needed. The other issue is that you have to make sure the transistor is driven properly. That means it needs enough base current to be saturated when on. Let's say the 12 V relay takes 25 mA, just to pick something as an example. Let's say the transistor can be counted on to have a gain of at least 50. That means the base current needs to be (25 mA)/50 = 500 µA minimum. Figure the B-E drop is 700 mV, so that leaves 4.3 V across the base resistor. (4.3 V)/(500 µA) = 8.6 kΩ. That allows for the absolute minimum base current. To get solidly into saturation, I'd double the base current, which means half the resistance, or 4.3 kΩ. Any handy value from about 2 kΩ to 4.3 kΩ would be fine.
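For reference, here is a short C++ sketch of the base-resistor arithmetic worked through above; the coil current and minimum gain are the example values from the answer, not measured figures.

```cpp
// Sketch of the base-resistor arithmetic from the answer: pick the resistor so
// the transistor is well saturated. Coil current and beta are the example
// values used above; your relay's datasheet values will differ.
#include <cstdio>

int main() {
    const double Vdrive = 5.0;    // logic-high drive from the Arduino pin, volts
    const double Vbe    = 0.7;    // approximate B-E drop, volts
    const double Icoil  = 0.025;  // assumed 12 V relay coil current, amps
    const double beta   = 50.0;   // conservative minimum transistor gain

    double Ib_min = Icoil / beta;            // minimum base current
    double Ib     = 2.0 * Ib_min;            // x2 margin for hard saturation
    double Rb     = (Vdrive - Vbe) / Ib;     // base resistor
    std::printf("Ib(min)=%.0f uA, Ib(target)=%.0f uA, Rb=%.1f kOhm\n",
                Ib_min * 1e6, Ib * 1e6, Rb / 1e3);
    return 0;
}
```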
H: Counting signal frequency with binary counter I'm trying to count the frequency of an audio signal using a 393 binary counter which is read by an Arduino without using its ADC. If the reading is done once per second, should the counter be somehow stopped when the counter is READ in order to avoid additional counts before the output pins are read? Perhaps keep the input signal high with a MOSFET? What is the canonical approach to reading binary counter outputs? AI: I am only going to address your question about reading the count output from a '393 counter. You have two problems to overcome. This counter is asynchronous, which means that a clock input ripples through the individual flip-flops to its next count state. If you happen to read the count value on-the-fly very soon after a clock transition, you might get a really strange number, neither one less, nor one more than what you'd expect. This rippling occurs very quickly but nevertheless can cause errors. The other problem occurs when you concatenate counter chips for extended count values. A microcontroller can read in eight bits at a time. To read in a count from two or more '393's, the microcontroller must read in eight bits from one chip and save that result into a register, then return and read in eight bits from the next chip. In the interval, the count value can change, leading to the possibility of a wrong result. simulate this circuit – Schematic created using CircuitLab One solution to both problems adds a "gate" to the clock input. When the microcontroller wants to read the count value, the clock input is disabled, freezing the current count value. Then the microcontroller can read the count value, which is now frozen at some value. A 74HC590 counter chip is more appropriate. Its counter flip-flops are arranged to make a synchronous counter, having no ripple-count problem. It also incorporates an internal gate. It also provides an internal counter latch that stores the current count value, while letting the counter flip-flops continue counting up. And it provides a tri-state output, so that a microcontroller can read in its eight-bit value from the same port. This makes it very easy to concatenate these chips if your microcontroller has limited I/O pins. However, most microcontrollers have versatile internal counters that can give similar results with no external chips.
H: Art Of Electronics - 2nd Edition - Absolute Value of a Complex Number - Math In this section, I am struggling with the line Abs(V) * Abs(I) = V0^2 / [blah blah] I am not seeing how the math simplifies to that term. After looking up the absolute value of a complex number, I see that it is basically Pythagoras' theorem on the real and imaginary axes, but I still don't see the simplification for the line above "power factor =". Can someone help? I assume it is correct? Thanks AI: It's not quite "absolute value" -- it's "magnitude", and is defined to be the square root of the number times its complex conjugate -- \$ \sqrt{ X \cdot X^*}\$. This is what Matlab does when it takes the absolute value of a complex number. To take the complex conjugate, just replace the \$j\$ with \$-j\$. When you do the multiplication, you'll see that all of the complex terms (i.e., odd powers of j) will end up cancelled out through subtraction, and of course even powers of j are real. If you do this, and develop a little more facility with complex numbers, along with remembering that \$j^2= -1\$ you won't have to remember any tricks or formulas or theorems. It will work all the time. So, let's look at the magnitude of Z $$ Z= R- \frac{j}{\omega C}$$ $$ Z^*= R + \frac{j}{\omega C}$$ $$|Z|=\sqrt{ZZ^*}= \sqrt{R^2 - \frac{jR}{\omega C}+\frac{jR}{\omega C}+ \frac{1}{\omega^2C^2}}= \sqrt{R^2+ \frac{1}{\omega^2C^2}}$$ Now, let's look at your example $$ I = \frac{V_o}{Z}= \frac{V_o}{R-\frac{j}{\omega C}}$$ Multiply the numerator and denominator by the complex conjugate of the denominator (i.e., multiply by 1, which is always allowed): $$\frac{V_o}{Z}= \frac{V_o}{R-\frac{j}{\omega C}}\cdot \frac{R+\frac{j}{\omega C}}{R+\frac{j}{\omega C}} =\frac{ V_o \left[ R+\frac{j}{\omega C} \right]}{R^2 + \frac{1}{\omega^2 C^2}}\text{,}$$ which is your middle equation. Note that the denominator is a real number, so you can just factor that out if you need to go on and figure out the magnitude of the numerator. So all of this, combined with the understanding that all of this complex math will yield a vector in the complex plane, and that the magnitude is the length of the vector, and the angle is the angle of clockwise rotation (i.e., \$\tan^{-1}\left( \frac{\mathbb{I}\text{m}}{\mathbb{R}\text{e}}\right) \$), will answer your series of recent questions on manipulations of this sort.
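Here is a small C++ sketch that checks the conjugate trick numerically with std::complex; the component values are arbitrary examples, not values from the book.

```cpp
// Sketch: check the conjugate trick numerically with std::complex.
// Component values are arbitrary examples.
#include <cmath>
#include <complex>
#include <cstdio>

int main() {
    const double R = 1000.0, C = 1e-6, w = 2.0 * 3.14159265358979 * 500.0, Vo = 1.0;
    std::complex<double> j(0.0, 1.0);
    std::complex<double> Z = R - j / (w * C);

    double magZ_conj = std::sqrt((Z * std::conj(Z)).real());       // sqrt(Z Z*)
    double magZ_pyth = std::sqrt(R * R + 1.0 / (w * w * C * C));   // Pythagoras form
    std::printf("|Z| via Z*Z~ = %.4f, via formula = %.4f\n", magZ_conj, magZ_pyth);

    std::complex<double> I = Vo / Z;                                // I = Vo / Z
    std::complex<double> I_rationalized = Vo * (R + j / (w * C))
                                          / (R * R + 1.0 / (w * w * C * C));
    std::printf("|I| = %.6e A, rationalized form = %.6e A\n",
                std::abs(I), std::abs(I_rationalized));
    return 0;
}
```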
H: Electron flow through the semiconductor crystal Talking about doped semiconductors: An atom's outer shell extends and overlaps with another one's outer shell (the conduction band), right? So when an electric field is applied to a semiconductor bar (let's say n-type) electrons start to flow in the bar, and then what happens? I know that the electrons in the outer shell/conduction band contribute to electrical current density (usually marked as "J"). But my question is: How is it done? How do the outer electrons contribute to electron flow when other electrons flow by, or however it is done? AI: You only have overlap of valence and conduction bands in metals (e.g. copper) and marginally so in semi-metals (e.g. graphene, tin). That's why they are "conducting" - there are lots of free electrons floating around from the valence band that happens to already overlap as an "electron gas" around the lattice ions. This is NOT how semiconductors or insulators conduct carriers however. Stop thinking in terms of shells; they are a thing in chemistry but they will only be confusing because "shells" sound like physical orbits or distances in classical mechanics, and they aren't that. A carrier can change in energy without even changing location. Instead the key is carrier energy and what's allowed by the lattice they occupy. This is where stuff like Brillouin zones and crystal structures enters the picture. The regularity of crystals constrains carrier movement something like how your line of sight through an orchard depends on the particular angle at which you look through it: some angles are obscured and others you can see clear to the other side. The regular lattice of a crystal conspires with the wave-nature of electrons to cancel out certain directions which is the same as forbidding any motion which results in band gaps at certain energy levels (which by quantum mechanics relates to electron "wavelengths" by hc/lambda). Another way to think of why band gaps exist is that quantum mechanics forbids certain energy levels from being occupied. If you look at electron spectral lines of a single atom (e.g. the classic hydrogen lines) you get characteristic lines corresponding to particular energies. You can never get energy levels between the lines. However, bands exist in solids because you can't have the same energy/location occupied at the same time (because electrons are Fermions and follow Pauli Exclusion) so two electrons in an otherwise identical circumstance (i.e. in a crystal lattice) cause the one "spectral line energy state" to split in two, with one energy level above and one below the original line energy. Add two more and you get a 4-way split. Continue this to an Avogadro Constant number of atoms and electrons and you get energy bands. So these energy bands simply "are" - don't try to think of them beyond the "quasi-quantum" explanation. Quantum mechanics pervades semiconductors utterly so classical mechanics pretty much always fails as a model or analogy. In semiconductors, there's a band gap between the valence and conduction bands so you ONLY get conduction if the carriers have enough energy to "cross" the gap in a single instant (no partial crossings allowed). Note this "gap" is not a physical gap but an energy gap so the carrier can be "unmoving" and magically get some energy from somewhere and then suddenly it is in the conduction band and on its way, susceptible to forces that had no effect on it before. The magic energy source, for instance, can be a photon interaction (e.g.
a photovoltaic cell), statistical thermal agitation by not being at 0K temperature or merely a sufficiently strong electric field. The difference between valence or conductance occupancy and reactivity is almost like the carrier didn't exist in the valence band but was "summoned" into existence in the conduction band! Doping actually drops little archipelagoes of energy levels into the gap which facilitates conduction and provides "cheater" carriers already near the conduction band that only have to hop a small energy distance to become conducting. Typically thermalization (being at 300K) does the trick of completely ionizing the dopant atoms and freeing the carriers into the conduction band. The term "freeze out" is when this thermalization reverses at (usually) cryogenic temperatures and the semiconductor becomes a "stone cold insulator" because none of these carriers can thermalize. BTW if you make the gap bigger (because the atoms involved dictate it by "dumb luck of the periodic table"), there is nothing that actually differentiates a semiconductor from an insulator other than the band gap size and what happens to be relevant electrically. So for example, SiC (silicon carbide) can be deemed an insulator or a semiconductor depending. In the former it makes a decent insulator. In the latter you can make blue LEDs and kick-ass power MOSFETs. Similarly diamond can be used as either an insulator (used in some silicon processes) or a semiconductor (at high temperature once it has thermalization of carriers to allow traversal of the band gap). Just a matter of degree. One final thing: once you have a carrier (electron or hole) in the conduction band, the primary mode of movement is a combination of classic diffusion with a little bit of field-induced drift. So think of a drop of magnetic ink in a glass of water where you have a magnet on the side of the glass - that's how carriers move in solids: metals, semiconductors and even insulators. Yes, all insulators conduct carriers, just not well and not without some destructive effects in doing so - this insulator "conduction damage" is related to what causes semiconductor devices to eventually fail - and because of quantum mechanics you can't prevent it! So thinking in terms of a mechanical particle movement analogy will only get you into trouble - eventually your analogy will utterly fail and then you stop learning/understanding. Better to grok the deeper models a bit.
H: Small (SPST?) Switch to connect two independent circuits I am looking for a small push button switch (similar to: https://www.radioshack.com/collections/switches/products/rect-pushbutton) but with the capability to connect two independent circuits (without a common connection). I am using this for a Lionel model train setup. For example, when the button is pressed, wire A is connected to wire B, and wire C is connected to wire D. However, there is no connection between circuit AB and circuit CD. Then when the button is released, the circuits are no longer connected. Do switches like this exist? If so, where would I be able to buy one (either in store or online)? I am new to electrical engineering so please let me know if more information is needed. Thank you in advance! AI: Look up DPST (double-pole single-throw) switch. EDIT: here's an example.
H: Calculating resistance for variable voltage I have built a very primitive voltage booster with a push button. Now I would like to add my ATMEGA328 to automate the switch. The problem I'm facing is that I want to measure the output voltage as well, but my ATMEGA's VREF is set to internal 1.1v. I could make a simple voltage divider in parallel and take the reading from there, but I'm a bit confused about the following: If my max output voltage is 20v, then my R1 value should be 190kΩ and R2 value should be 10kΩ, BUT what happens if my output voltage drops from 20v to 3v, wouldn't that cause R1 and R2 to try to 'draw out' more current from the capacitor? or is this the proper way of doing this? The boost converter voltage will change from 0v to 20v in random phases, so basically what I'm trying to do is find a proper way to do this in circuit: voltage_reading = voltage_from_capacitor / 20v; AI: It's certainly the simplest way. Don't worry about the current draw -- even at 20V you'll only draw 0.1mA: \$ I_{FB,max} = V_{out,max}/R_{FB} = 20V/(10k\Omega + 190k\Omega) = 0.0001A \$ When V_out is 3V you'll draw even less current -- about 15uA. This isn't much at all -- just the price we pay for feedback.
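Here is a small C++ sketch of the divider arithmetic implied above (divider output, ADC reading against the 1.1 V internal reference, and the current the divider draws); it assumes the ATmega328's 10-bit ADC, and the R1/R2 values are the ones from the question.

```cpp
// Sketch of the divider arithmetic: divider output, ADC reading against the
// 1.1 V internal reference, and the (small) current the divider draws.
// R1/R2 are from the question; the 10-bit ADC is the ATmega328's.
#include <cmath>
#include <cstdio>

int main() {
    const double R1 = 190e3, R2 = 10e3;   // divider from the question
    const double Vref = 1.1;              // ATmega328 internal reference
    const int adc_max = 1023;             // 10-bit ADC

    for (double Vout : {3.0, 9.0, 15.0, 20.0}) {
        double Vadc  = Vout * R2 / (R1 + R2);            // divider output
        int    count = (int)std::round(Vadc / Vref * adc_max);
        double Idraw = Vout / (R1 + R2);                 // load on the capacitor
        std::printf("Vout=%5.1f V  Vadc=%.3f V  ADC~%4d  draw=%.1f uA\n",
                    Vout, Vadc, count, Idraw * 1e6);
    }
    return 0;
}
```

Note that the current drawn falls as the output voltage falls, so the divider loads the capacitor less, not more, at 3 V than at 20 V.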
H: How many bins can I code using a resistor on an ADC I have a device that can have different things plugged into it. I'm trying to use a resistor on these things to uniquely identify the thing. That is, if I plug in Thing A, I can read the resistance as 100Ohms and know that it's a Thing A. If I plug in Thing B, I can read the resistance as 200Ohms and know that it's a Thing B. I know resistance varies with temperature and tolerance, so I want to have some wiggle room on each unique code. I'm using a voltage divider and an ADC to measure the resistance. Assuming I have a constant 3V input, a 12 bit ADC reference of 2.5V, I will use resistors with a 1% tolerance, and I want as many unique codes as possible, with a small gap in between each, how do I calculate how many unique codes I can get? I was first thinking that I could use a 2k resistor, and then the second resistor could be anything from 0 to 10k. With 4095 steps in 12 bits that means .6mV per step. At 1% tolerance, that means the top bin could be anything from 9900 to 10100, so 200 ohms wide. That's only 80 bins, though. However, a 100 ohm resistor would only vary from 99-101, or 2 ohms wide, so a linear formula is clearly not appropriate. How can I figure out the correct number of unique bins that I can reliably detect? AI: If you use E192 0.5% resistors then there are 192 values per decade. If you set each decade as A/D full scale (i.e. 2.5 V) then the values are 4096/192 counts apart --> 21 * 0.6 mV --> 12.6 mV apart. So they are easily discernible by your A/D. Test which decade a resistor is in by setting a current (or simply a series resistor) so the top value of the decade is at 50% of scale (1.25 V). The A/D reading should then be between 10 and 2047. If the A/D output value is < 10 or > 2047, choose a higher or lower decade to test. Once you know the decade, set the measurement current to get full scale with the largest resistor in the decade. Measure the A/D value to identify the bin. You should be able to measure at least 3-4 decades with little complexity and using a single purchased resistor, giving a practical number of bins around 768. You could do the same with E96 1% resistor values and get 384 bins in 4 decades.
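A short C++ sketch of the spacing argument: adjacent E192 values are about 1.2% apart, so with 0.5% parts their tolerance bands stay separated, and the 12-bit ADC step is much finer than the spacing between adjacent codes.

```cpp
// Sketch: adjacent E192 values are spaced by 10^(1/192) ~ 1.2 %, so with 0.5 %
// tolerance parts the bands of neighbouring values never touch, and the 12-bit
// ADC step (2.5 V / 4096) is far finer than the spacing between codes.
#include <cmath>
#include <cstdio>

int main() {
    const double ratio = std::pow(10.0, 1.0 / 192.0);  // adjacent E192 step, ~1.2 %
    const double tol   = 0.005;                         // 0.5 % tolerance parts

    // Lower edge of the next value's band minus upper edge of this value's band,
    // normalised to the lower nominal value: positive means the bands never overlap.
    double margin = ratio * (1.0 - tol) - (1.0 + tol);
    std::printf("adjacent values %.2f%% apart, band separation margin = %.2f%%\n",
                (ratio - 1.0) * 100.0, margin * 100.0);

    const double lsb = 2.5 / 4096.0;                    // 12-bit ADC step over 2.5 V
    std::printf("ADC LSB = %.2f mV; 192 codes across 2.5 V are ~%.1f mV apart\n",
                lsb * 1e3, 2500.0 / 192.0);
    return 0;
}
```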
H: Relying on datasheet graphs The accepted answer to the question What's this importance of datasheets says that one should ignore "typical" performance and instead design around the min/max performance parameters. All well and good. The answer then goes on to highlight the utility of the "picture's worth a thousand words" graphs. But aren't these graphs necessarily "typical" performance, contradicting the first piece of advice? Specifically, the referenced BSS138 datasheet specifies the minimum \$I_{D(on)}\$ to be 0.2 A for the conditions \$V_{GS}\$ = 10 V, \$V_{DS}\$ = 5 V. However, the answer's referenced graph shows that for the \$V_{GS}\$ = 10 V curve, \$I_D\$ is between one and two orders of magnitude greater than the specified minimum of 0.2 A. How would a designer make use of this "typical" information when it is so far removed from the guaranteed performance? Or am I misinterpreting something? AI: Speaking as an Applications Engineer at Maxim Integrated: Yes, the Typical Operating Characteristics graphs are "typical" and therefore not "guaranteed" operation. But designers don't actually "ignore" the typical operating characteristics graphs as you seem to imply. There would be no point in collecting and publishing this information if it was not deemed useful. Each of these graphs can represent days or weeks of data collection, so it is very expensive to generate and publish these typical operations charts. The Typical Characteristics Graphs are real data, collected by an Applications Engineer measuring a small number of actual devices in the lab. These graphs show the relationships actually measured on a representative sample of early production or pre-production units. This is more in-depth measurement than can be done on each unit produced. This is expensive and time-consuming. The value is that these graphs help a designer estimate performance not explicitly specified in the EC table limits. The Electrical Characteristics table of Min/Max values represents pass/fail test conditions that the manufacturer is responsible for, on each unit produced. (The actual pass/fail threshold is guard-banded.) If the manufacturer sells or ships a unit that does not meet the min/max EC table ratings, the manufacturer can be held liable for problems resulting from such a defective component. The EC table defines the outer limits of how a shipable part can perform, while the Typical Characteristics show more realistic performance within those limits. Because production units are tested against the EC table min/max limits, the Typical Characteristics cannot be guaranteed, except insofar as they lie within the min/max test limits of the EC table. To be robust, a design must tolerate a component being at either min/max extreme limit -- but not every design needs to be that robust. A toy or a hobby circuit does not need to be as robust as a space shuttle. Finally, the Absolute Maximum Ratings represent damage ratings, which will cause irreversible damage if exceeded for even an instant. These ratings are heavily guard-banded and are not tested on every unit (because doing so would destroy an otherwise shippable unit). Testing is done during process technology development, more or less regardless of the actual IC design. So when going to a smaller geometry process technology, the smaller feature size will result in reduced breakdown voltage, and the abs max limits will be reduced accordingly. Sometimes the EC table may list only a min or a max, rather than giving both boundaries. 
For example a diode reverse breakdown voltage might be given as a maximum limit, and the implied minimum is "zero" because that would be ideal. A diode with Vf less than Vf(MAX) passes, but the manufacturer isn't going to fail that diode because its Vf was greater than some Vf(MIN). And this is where those typical operations charts can be really helpful, in giving a better idea what kind of behavior to expect. Your specific question references a Fairchild BSS138 N-channel logic level FET... Test conditions \$V_{GS}\$ = 10 V, \$V_{DS}\$ = 5 V; \$I_{D(on)}\$ min 0.2A The referenced Typical Characteristic graph (On-Region Characteristics) looks like it was generated using a curve tracer, so this is "pulsed" rather than "continuous" current (Note 2 Pulse Width < 300us, Duty Cycle < 2%). Note that 220 mA is the Absolute Maximum continuous drain current - the pulsed drain current is higher, 880 mA, so this limitation is due to the small SOT-23 package being unable to dissipate the heat without melting its junctions. Note the maximum power dissipation is 360 mW. There's a clue in Note 1 at the bottom of the EC table, they mention the thermal dissipation 350degC/Watt when specified copper traces are connected. In practice this can be improved by using large copper pour areas to help pull excess heat away from the junction. This technique is commonly used in SMPS designs. But since this part is being marketed as a logic-level FET rather than a power FET, that point is not really talked about or emphasized for this part.
H: What exactly does it mean when someone says "memory mapping", "IO mapping", "memory-mapped IO" and "port-mapped IO"? All these words become a little bit confusing to me as I am a newbie. I understand that basically a microcontroller is a processor + memory to store data (RAM) + memory from which the instructions get executed (ROM/Flash) + peripherals. How am I supposed to visualize the mapping? Is the mapping done in the RAM? If yes, isn't it memory itself? How does the processor know about it? Is this mapping done when I switch on the controller, or is it done by the person/company that manufactured the controller? If it is really a stupid question or too broad, let me know and I will try to narrow it down. AI: I think these terms are primarily used with microprocessors, rather than microcontrollers. "Memory-mapped" I/O devices just appear as normal memory locations, and can be read or written by any instructions that can read or write normal data memory. Memory-mapped I/O can be used by any microprocessor. Some microprocessors (Intel 8085 and relatives) have a separate address space for I/O device use, not part of the normal memory space, and a limited number of instructions to read from, or write to, that address space. I/O devices using this space would be "IO mapped".
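To make "memory-mapped" concrete, here is a small C++ sketch; the register address in the comment is purely illustrative (not any real part's memory map), and an ordinary variable stands in for the hardware register so the example actually runs on a PC.

```cpp
// Sketch: what "memory-mapped I/O" looks like in code. A peripheral register
// is just an address, accessed through a volatile pointer like ordinary
// memory. Here a plain variable stands in for the hardware register so the
// example runs on a PC; on a real part the address comes from the datasheet
// (e.g. something like 0x4000C000 - purely illustrative, not a real map).
#include <cstdint>
#include <cstdio>

static std::uint8_t fake_uart_data_register = 0;   // stand-in for hardware

// On real hardware this would be:
//   #define UART_DATA (*(volatile std::uint8_t *)0x4000C000u)
#define UART_DATA (*(volatile std::uint8_t *)&fake_uart_data_register)

int main() {
    UART_DATA = 'A';                 // "write to the peripheral" with a normal store
    std::uint8_t v = UART_DATA;      // and read it back with a normal load
    std::printf("register now holds 0x%02X\n", v);

    // Port-mapped I/O (e.g. the x86 in/out instructions used by the 8085/8086
    // family) uses a separate address space and dedicated instructions instead
    // of ordinary loads and stores; there is no portable C++ expression for it,
    // which is one reason microcontrollers almost always memory-map their peripherals.
    return 0;
}
```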
H: Problem in finding equivalent resistance The question was to find the equivalent resistance of the circuit between A and B. I simplified the circuit as: The triangular circuit is electrically symmetrical along XX', YY' and ZZ'. Therefore A, B and C are equipotential points. I thus reduced them to a single point. I am stuck here. How can current flow from A to B if they are equipotential? AI: The very popular star-delta transformation is useful here. It can be helpful to memorize it: $$R_a=\frac{R_1R_2+R_2R_3+R_3R_1}{R_1}$$ Essentially, \$R_a\$ is inversely proportional to \$R_1\$, with a factor which depends on the whole setup. Through this, and replacing every value with \$r\$, the resistance on the delta branch after the transformation is: $$ r_{aux}=\frac{3r^2}{r}=3r$$ Hence the resistance of every delta branch is the original \$r\$ in parallel with the \$3r\$: $$ r_{branch}=3r||r=\frac{3r^2}{3r+r}=\frac{3}{4}r$$ where the operator \$||\$ stands for the calculation of the parallel resistance of \$r_1\$ and \$r_2\$: $$r_1||r_2=\frac{r_1r_2}{r_1+r_2}$$ So, the final result is two branches in series, in parallel with the third branch: $$R_{AB}= (\frac{3}{4}r+\frac{3}{4}r)||\frac{3}{4}r=\frac{3}{2}r||\frac{3}{4}r=\frac{3}{2}r (1||1/2)=\frac{3}{2}r \frac{1/2}{3/2}$$ Finally: $$R_{AB}= \frac{1}{2}r$$ Note that this requires leaving the C point open-circuited. This calculation can also be done by converting everything to a star, which gives a resistance of \$r_{aux}=\frac{1}{3}r\$ in the converted network, a parallel combination of \$r_{branch}=r||\frac{1}{3}r=\frac{1}{4}r\$ in the new complete branch, and then a final series of only two branches: \$\frac{1}{4}r+\frac{1}{4}r=\frac{1}{2}r\$, recovering the same result. EDIT: The suggested method can be applied by supposing a test current flowing from A to B or, alternatively, by applying a 10 V source across A and B, as is done here. The mesh variables are \$i_0\$, \$i_1\$, \$i_2\$ and \$i_3\$. Hence, the four loop voltage equations are: $$ r(i_0-i_1)=10\\ r(i_1-i_0)+r(i_1-i_3)+r(i_1-i_2)=0\\ r(i_2-i_1)+r(i_2-i_3)+ri_2=0\\ r(i_3-i_1)+r(i_3)+r(i_3-i_2)=0 $$ with the matrix form: $$ r\begin{bmatrix}1 & -1 & 0 & 0\\ -1 & 3 & -1 & -1\\ 0 & -1 & 3 & -1\\ 0 & -1 & -1 & 3\end{bmatrix}\begin{bmatrix}i_0\\ i_1\\ i_2\\ i_3\end{bmatrix}=\begin{bmatrix}10\\ 0\\ 0\\ 0\end{bmatrix} $$ which leads to the solution: $$ i_0=\frac{20}{r},\quad i_1=\frac{10}{r},\quad i_2=\frac{5}{r},\quad i_3=\frac{5}{r} $$ Hence we recover our previous result: $$R_{AB}=v_0/i_0=\frac{10}{\frac{20}{r}}=\frac{r}{2}$$ Finally, this result is almost trivial. Once you realize that \$i_2=i_3\$ by symmetry, the node D is equipotential with C, the resistance through the ACB branch is the same as through the ADB branch and twice that of the AB branch, and the total resistance is: $$r(2||2||1)=r(\frac{4}{4}||1)=r(1||1)=\frac{r}{2}$$
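As a numerical cross-check of the mesh solution above, here is a small C++ sketch that solves the four loop equations for an arbitrary r (2 ohms here) and confirms R_AB = r/2.

```cpp
// Sketch: solve the four mesh equations numerically for an arbitrary r and
// confirm R_AB = V/i0 = r/2. r = 2 ohm and the 10 V excitation are arbitrary.
#include <cstdio>

int main() {
    const double r = 2.0;
    // r * [coefficient matrix] * [i0 i1 i2 i3]^T = [10 0 0 0]^T
    double A[4][5] = {
        { r,   -r,    0,    0,   10 },
        {-r,  3*r,   -r,   -r,    0 },
        { 0,   -r,  3*r,   -r,    0 },
        { 0,   -r,   -r,  3*r,    0 },
    };
    // Gaussian elimination (no pivoting needed for this well-behaved system)
    for (int k = 0; k < 4; ++k) {
        for (int i = k + 1; i < 4; ++i) {
            double f = A[i][k] / A[k][k];
            for (int j = k; j < 5; ++j) A[i][j] -= f * A[k][j];
        }
    }
    double x[4];
    for (int i = 3; i >= 0; --i) {             // back-substitution
        double s = A[i][4];
        for (int j = i + 1; j < 4; ++j) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }
    std::printf("i0=%.3f  i1=%.3f  i2=%.3f  i3=%.3f (A)\n", x[0], x[1], x[2], x[3]);
    std::printf("R_AB = 10 V / i0 = %.3f ohm  (r/2 = %.3f ohm)\n", 10.0 / x[0], r / 2.0);
    return 0;
}
```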
H: Arduino I2C communication between 2 master networks I have 2 Arduino microcontrollers, each with a network of I2C devices connected to it (one has 2 ADCs and the other an LCD display and an RTC). How can I use the I2C connection to transfer the values obtained by the first uC from the ADCs to the second? Both uCs are masters on their I2C busses. I was thinking of making a software I2C on the second uC and connecting it as a slave to the first one (so the second controller would have 2 I2C ports: one hardware and one software). Problem is, I can't find any software I2C library that works as a slave. All are masters. Waiting for your ideas. Question is, can I make 2 I2C networks using one Arduino Mega2560? One as master on the hardware port to communicate with the LCD and the RTC, and one as a slave on a software port on another pair of pins (for SCL and SDA) for receiving data from another master Arduino... After analyzing all the data, I reached the conclusion that the 2 I2C busses cannot be linked together. On the external I2C port I have available on the data acquisition uC, I will connect another Arduino as a slave that will receive the information and pass it on by using a wireless adapter (probably a NRF24N01). That way, I don't need to have wires from my solar controller to the Arduino that reports the production to the pvoutput website. AI: From the comments, you want to be able to connect only to the I2C bus on each of your separate projects and transfer data. Wiring lets you run an Arduino as either an I2C master or a slave. You can't run both the master and slave Wiring software on a single micro since they both want to use the USI hardware, but since you only have one USI, you can connect to only one bus anyway. I'd suggest an effective way would be to use an ATTiny85 as an I2C slave interface on each bus and then connect them together via a software UART. There is a very nice TinyWire library available for the ATTiny85 from Adafruit... they also have a very small board called the Trinket that you could use. There are a bunch of ATTiny85 boards (like the Digispark) available that can hook up to the Arduino programming environment, so this should be a simple and cheap way to create an I2C slave. While some may say this is overkill, it would be extremely simple to implement and would not require any hardware mods to your project.
H: How to tell logic low and logic high from a datasheet? I want to control the SHDN (shutdown) pin of a FAN5333B LED driver (datasheet) with a 3.3V device using PWM. I was planning to drive it with 5V on Vin. I see from the datasheet that logic high turns it on and that logic low turns it off, but I can't find anything about what is considered logic high and low. I'm very new to reading datasheets; what should I be looking for? Is it just assumed knowledge? AI: I don't blame you -- it's a tricky datasheet to read. I believe this is the answer you're looking for: Normally, you'd look for a "SHDN_B VIH/VIL" specification or similar. In this case, since that table specifies all conditions are at "VIN=3.6V", I feel comfortable taking those numbers as the threshold levels. I see that you can PWM the SHDN# pin (common on these types of parts) to get variable brightness -- this is where IMO, it would have been nice to get a labeled 'Logic Low/Logic High' threshold in the table, with the VIH and VIL numbers. My read is that a voltage below 0.5V, at the tested conditions is guaranteed to be read as a logic '0', and a voltage at and above 1.5V, at the tested conditions is guaranteed to be logic '1'. So, your 3.3V drive will work. If this was a device where VIH was 0.7*VDD, then you might have needed to consider a level-shifter. (This is often the difference between CMOS/TTL inputs, whether VIH is a fixed number independent of VDD, or if it's a dependent value). Additionally, I don't see a direct path between VIN and SHDN# from the block diagram, so I don't think you have too much to worry about with 5V making it to your 3.3V device -- it can't hurt to put a 0R resistor in line so you can change to say a 100R resistor in the future if you have issues. This application note from TI is a really good resource in general for understanding datasheets. It's focused at the conventional 74xxx logic families, but I think there's a lot to be learned from it.
H: LM338, which capacitors to use Do you have to choose the input and output capacitors based on the LOAD, or use the capacitors specified in the datasheet? I have a linear power supply that supplies 16 Volts with no load. The maximum load I want to apply to this regulator would be less than 2 A, and the maximum output voltage 9 Volts. I just need to know what values of capacitors to use. My supply does have a 4700uF filter capacitor but it is around 2 feet away. I want to build this regulator in a separate box. AI: In case there are instances where there will be no load, or the load is disconnected, consider the following. A 250-Ω feedback resistor between OUTPUT and ADJUST consumes the worst-case minimum load current of 5 mA. Regarding the capacitors, go with the datasheet spec, at least for the input: CIN: 0.1 μF of input capacitance helps filter out unwanted noise, especially if the regulator is located far from the power supply filter capacitors. The output capacitor is optional: COUT: The regulator is stable without any output capacitance, but adding a 1-μF capacitor improves the transient response. Certain values of external capacitance can cause excessive ringing. This occurs with values between 500 pF and 5000 pF. A 1-μF solid tantalum (or 25-μF aluminum electrolytic) on the output swamps this effect and ensures stability. Finally, 14 W of heat is dissipated, which is okay for the device. But consider it wisely.
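Two of the numbers above follow from simple arithmetic (the LM338's 1.25 V OUTPUT-to-ADJUST reference across the 250-Ω resistor, and the worst-case pass-element dissipation for this supply); here is a short C++ sketch of both.

```cpp
// Sketch of the two numbers behind the advice above: the 250-ohm resistor from
// OUTPUT to ADJUST as a built-in minimum load (the LM338 reference is 1.25 V),
// and the worst-case heat in the regulator for this supply.
#include <cstdio>

int main() {
    const double Vref = 1.25;    // LM338 OUTPUT-to-ADJUST reference voltage
    const double Rmin = 250.0;   // suggested feedback resistor
    std::printf("built-in minimum load = %.1f mA\n", Vref / Rmin * 1e3);

    const double Vin = 16.0, Vout = 9.0, Iload = 2.0;   // figures from the question
    std::printf("regulator dissipation = (%.0f - %.0f) V x %.0f A = %.0f W\n",
                Vin, Vout, Iload, (Vin - Vout) * Iload);
    return 0;
}
```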
H: Is it possible to induce an impulse of a few volts over a 10 m distance? I'm currently learning about electromagnetism at college, so this question has been on my mind for a few weeks. On one side we have: a 9 V battery, a DC-DC converter to a few kilovolts, a capacitor, a button, and an inductor L1 of small dimensions (no more than 5 cm diameter). And on the other side, at a distance of 10 meters, a secondary inductor L2 of similar dimensions to the primary, but with thinner wire and more turns. The DC-DC converter is missing from the schematic. MAIN QUESTION: Is it possible to induce, for example, a short 3 V impulse on the secondary inductor when we press the button? I tried to calculate it with Lenz's law, but I didn't succeed. :/ Sub-questions: Does the induced signal duration depend on the resistance in series with the capacitor? If we have a resonant circuit instead of an inductor, is it practically possible for the signal to last for a few seconds? Thank you for your answers. AI: Is it possible to induce, for example, a short 3 V impulse on the secondary inductor when we press the button? Hopefully this should help: - It's all about dumping as much current as you can into the transmit coil and, with your circuit, you probably want to rethink the 1000 mF capacitor on the transmit side; it will have significant ESR and restrict current significantly. It might be better if you concentrated on making both coils resonant at the same frequency - not only does this vastly increase the current into the transmit coil but it makes the receive coil more sensitive by the Q-factor. So, will you get 3 volts with a 10 metre gap - no, you won't easily - just do the math following my equation and you will see that it's likely you will need hundreds (if not thousands) of amps flowing in the transmit coil to achieve volts at the receive coil. Much beyond the transmit coil the flux density falls as a cube law with distance. However, you might receive 3 mV that you can amplify up to 3 volts. If we have a resonant circuit instead of an inductor, is it practically possible for the signal to last for a few seconds? The better the tuning (higher Q) the longer the signal will last. The higher the frequency the higher the induced voltage will be because of the rate of change of flux, BUT the harder it is to dump the initial current into the transmit coil due to reactance. I would recommend wiring your transmit coil from high quality litz wire and making the diameter as large as you can.
H: Want square waves based on injection timing of a 4-stroke engine Recently, I have been working on the ECU of a CNG auto rickshaw with a 4-stroke engine. The image here shows the engine of the auto rickshaw. I have already taken readings of MAP, TPS and RPM, and now I am working on injection timing. For that, I am taking signals from the RPM socket (as I don't know which sensor it is). In the image there is the RPM socket, from which I am taking signals by connecting the oscilloscope probe to the yellow wire, and I am getting a signal like the one given in this image. Now, I want a signal as shown in the image below. I have tried getting this signal using a LOW PASS FILTER followed by a SCHMITT TRIGGER and am getting this kind of result. So, I need help in getting a proper square wave signal. Circuits that I have implemented: 1) Schmitt trigger 2) Low pass filter AI: It takes me 20x as long to explain this as it would for me to design it. The optimal design should minimize latency with adequate filtering of the resonant noise, i.e. a matched filter with maximum signal-to-noise ratio. My timing analysis indicates your pulse has a resonance in the 3.75 kHz region and the pulse interval is 32 ms (31 Hz), or 1875 RPM (if 1 pulse/rev; 938 RPM if 2/rev). A tolerance to latency of 1 degree at 6000 RPM is equivalent to 28 us, which needs to be accounted for in the filter's effect on ignition timing vs RPM. So T = 28 us is the maximum time constant for the low-pass filter (LPF). If there is 1 pulse/rev, then 6000 RPM = 100 Hz = a 10 ms interval. An alternative to the LPF is a non-retriggerable one-shot, much like a scope trigger, with a dwell of 6 cycles @ 3.75 kHz unless there are conditions that exceed this. Thus 6/3.75 kHz = 1.6 ms. The signal is the negative edge from +22 V for the leading edge of the Hall sensor, while there is resonant noise on the trailing edge of the sensor after a period of reaching +22 V. There is some DC dwell to be ignored after the sensor's leading active-low pulse and the trailing active-high positive pulse. The ideal hysteresis thresholds appear to be in the 2 to 4 V range. It is worth noting that the positive feedback ratio defines the threshold and hysteresis as a function of the output swing with respect to the reference level for a differential amp. So ground is a poor choice of reference. It should be ~V/2; if 2 V and 4 V thresholds are desired, then V- = (2+4)/2 = 3 V (not ground). Design recommendation: Confirm the signal and noise range, whether the sensor gives 1 or 2 pulses per rev, and the max RPM. Stage 1: a 28 us LPF with a series R of 100k and a shunt C = 28 us/100k = 280 pF max. Stage 2: a 74HC14 Schmitt trigger using Vcc = 5 V, with hysteresis thresholds of Vcc/3 and 2Vcc/3. Stage 3: a rising-edge one-shot of 1.6 ms with feedback to make it non-retriggerable (disabling the input), using a 74HC123. simulate this circuit – Schematic created using CircuitLab The 100k resistor limits the current to 0.22 mA, which satisfies the max 5 mA internal ESD clamp diode spec. The latency of 28 us needs to be checked for the max RPM. The one-shot syncs to the leading negative edge and filters out the trailing-edge glitch at all RPM. (T = 1.6 ms needs to be increased to satisfy the minimum RPM.)
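For reference, here is the timing arithmetic used above (the 1-degree latency budget at 6000 RPM, the resulting RC values, and the one-shot dwell) as a small C++ sketch.

```cpp
// Sketch of the timing arithmetic used above: the 1-degree latency budget at
// 6000 RPM, the RC values for the input low-pass, and the one-shot dwell.
#include <cstdio>

int main() {
    const double rpm_max = 6000.0;
    double deg_per_s = rpm_max / 60.0 * 360.0;          // crank degrees per second
    double t_1deg    = 1.0 / deg_per_s;                  // time for 1 degree
    std::printf("1 degree at %.0f RPM = %.1f us -> max filter time constant\n",
                rpm_max, t_1deg * 1e6);

    const double R = 100e3;                              // series resistor
    double C = t_1deg / R;                               // RC = latency budget
    std::printf("with R = 100k, C <= %.0f pF\n", C * 1e12);

    const double f_ring = 3.75e3;                        // observed ringing frequency
    double dwell = 6.0 / f_ring;                         // blank ~6 ringing cycles
    std::printf("one-shot dwell ~ %.2f ms\n", dwell * 1e3);
    return 0;
}
```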
H: Decreasing sensitivity of a touch paddle I made a simple touch paddle using the circuit here. It is too sensitive and also triggers when my finger is only near one of the sides. How can I decrease sensitivity? AI: Add a resistor of <=10M to ground on each Vbe, going down as far as 1M. Stray hum is needed to create the E field that triggers the transistor, so in some conditions 1M may be too low. Your body acts as the antenna. The resistor shunts the E field picked up by your body. Somehow this triggers your circuit (with non-retriggerable dit-dahs). Beware: ground yourself before Mr. Morse's finger burns out your transistor from excess peak inverse Vbe. A reverse clamp diode protects against this greatly, but I am not sure if that affects the pulse trigger.
H: maximizing IR range and peak current I ordered a bunch of infrared detector diodes (aka IR diode) and infrared transmitters. luckily the detectors are blue (and don't respond to normal room light). I am planning to order a bunch of infrared detector transistors (aka IR NPN) because I heard they respond to transmitters farther away. I understand that IR diodes respond faster than IR NPN's but when I read the datasheet of the IR NPN, I can put up with several milliseconds of waiting time in my application. I have heard of transmitters and detectors modulating their data over 38Khz. Since I connect my IR components to a microcontroller, I should be able to create a custom modulation all within a microcontroller with no problem. For a standard LED, there's a "Max DC forward current" and a "Peak DC forward current". The value of the former (normally 20 to 50ma?) is lower than the latter (150ma?). I tend to limit my current to all my LED's to the Max DC forward current to eliminate burn-out. I'm curious... Do IR circuit designers modulate data over a fixed frequency (such as 38Khz) in order to give the IR transmitter (diode) the "Peak DC forward current" instead of the "Max DC forward current" as an attempt to maximize range without blowing the part up? Why was 38Khz chosen as a standard frequency to modulate IR over? Why not 1Mhz or even a few hundred Hz? If the answer to question 1 is yes, then could I get away with sending a one-time short burst of raw data (byte switches every millisecond) to an IR transmitter (diode) with using the Peak current instead of the max current? I just feel that if I leave any type of LED or photo diode on for too long at peak current, it will blow up. I may be wrong. I'm tempted to use peak current in a photo-transistor and photo-diode in order to maximize IR range but I don't know if I'm on the right track. AI: Hang on... 'cause we 'bout tuh answerin' completely! Why was 38Khz chosen as a standard frequency to modulate IR over? Why not 1Mhz or even a few hundred Hz? The short answer is: some history and some technology (also, 38kHz is not the standard frequency, but one channel of many allowed by APA in that same band). First, Question #2 Consumer applications for wireless remote control technology first appeared in 1955, when Zenith, Inc. founder-president Eugene F. McDonald Jr. yearned for a wireless remote control that would mute the sound of commercials. So convinced was he of the imminent demise of commercial television that he viewed the development of the “Flash-matic,” designed by his engineer Eugene Polley, as a temporary fix until subscription television arrived. The “Flash-matic” was the first commercial demonstration of wireless remote control widely sold to the public. This television controller consisted of a focused electric flash light that the user aimed at one of four photo-sensors positioned in the four corners of the television set to control volume muting (1 corner), power (1 corner), and change the channel (the last two corners). However, its unmodulated visible light approach worked poorly during the day when incident sunlight would randomly turn on the television or cause other unwanted behaviors from the set. In accordance, Zenith released an improved controller in 1956 based on a design by the now widely revered “father of remote control” Dr. Robert Adler (Dr. Adler went on to hold over 180 patents worldwide, including critical breakthroughs in vacuum-tube technology). Dr. 
Adler’s remote, sold under the trade name “Space Command”, was based on ultrasonics (sound frequencies above the range of human hearing) and did not require batteries in the handheld unit. A set of light-weight aluminum bars, akin to tuning pitch forks, were individually struck when the user pressed a button positioned above the bar. Because the button had to strike the bar without damping it in order to maximize its audio output, a snap-action switch was used giving the remote its affectionate name, “the clicker”. The bar would ring when struck producing a fundamental pitch in the near-ultrasonic audio spectrum (20kHz-40kHz). We now interject! Notice this frequency band?! The ultrasonic bell design will be "upgraded" to IR emitters later in the story. However, modulating at the ultrasonic audio frequency band was retained so that they didn't have to redesign the baseband processing section of the television receiver. Clever, right? That's why APA (more later) will specify IR modulation frequencies between 20k-50k. ...and we resume... ;-) In Adler's television, an audio transducer fanned out to a set of six vacuum tubes forming a bank of bandpass filters to decode the incoming signal and discern which button the user had pressed (e.g. which bell they had rung in the remote). Despite its technical achievement, Adler’s ultrasonic remote control saw slow consumer adoption in the late 1950’s as the additional vacuum tubes raised the market price of the television set by 30%. Although invented in 1947 by Dr. William Shockley, John Bardeen, and Walter Brattain, the transistor did not begin appearing in consumer products until the early 1960’s. The advent of the transistor (solid state semiconductor) brought about dramatic reductions in the cost to manufacture remote control electronics. Adler’s ultrasonic design was reborn as a battery powered electronic version and gained widespread use. More than 9 million ultrasonic remote controls were sold by 1981. This large-scale adoption created market pressures for new products with ever greater capabilities and convenience. Engineers, in turn, began demanding greater range, longer battery life, and greater control over a larger number of features (i.e. more buttons on the remote) from their remote control interfaces, all the while reducing the cost to manufacture the assembly. Advances in semiconductor transistors occurring in parallel with semiconductor-based optoelectronics brought about the modern infrared (IR) remote, the first commercial product appearing in 1978. Electronics companies, Plessey and Philips, both of whom had divisions specializing in semiconductors, were the earliest manufacturers of chips that contained the entire IR transmitter and receiver. Unfortunately, they failed to predict the multiplicity and popularity of the medium and assumed that only one receiver would be in range of a remote at any given time. Their protocols and modulations schemes made no attempt to distinguish one receiver from another. In July of 1987, the Appliance Product Association (APA) standardized the protocol used by most commercial IR remote controls. The standard was subsequently adopted by the AEHA, a government consumer product regulatory authority, in Japan and Philips began a product registration service to further ensure correct operation across multiple vendors and devices. 
However, collaborative projects always branch and by 2000, more than 99 percent of all TV sets and 100 percent of all VCRs and DVD players sold in the United States were equipped with IR remote control based on one of five major variants of the APA protocol. All variants specified a fixed carrier frequency, typically somewhere between 33 and 40 kHz or 50 to 60 kHz. The most commonly used protocol is the NEC protocol, which specifies a carrier frequency of 38 kHz. The NEC protocol is used by the vast majority of Japanese-manufactured consumer electronics. The Philips RC-5 and RC-6 protocols both specify a carrier frequency of 36 kHz. However, the early RC-5 encoding chips divided the master frequency of the 4-bit microcontroller by 12. This required a ceramic resonator of 432 kHz to achieve a 36 kHz carrier, which was not widely available. Many companies therefore used a 455 kHz ceramic resonator, which is commonplace due to that frequency being used in the intermediate frequency stages of AM broadcasting radios, resulting in a carrier frequency of 37.92 kHz (essentially 38 kHz). Even documentation for Philips' own controller chips recommended an easier-to-obtain 429 kHz ceramic resonator, yielding a carrier frequency of 35.75 kHz. Modern IR transmitters typically use 8-bit microcontrollers with a 4 MHz master clock frequency, allowing a nearly arbitrary selection of the carrier frequency. So you see... Mechanical ultrasonic audio became electrical ultrasonic audio became infrared ultrasonic modulation. All the while, the processing electronics could evolve without disturbing the other parties and could remain compatible with their immediate legacy counterparts. Additionally, operating at MHz-scale modulations would severely curtail range and increase cost, as it would require shorter accumulation times in the receiver, which would make the receiver more vulnerable to noise (lower signal per pulse, ergo lower SNR). This has led to a plethora of commodity-priced IR emitters, detectors, demodulators, and encoders that are manufactured in staggering quantities. Back to Question #1... Do IR circuit designers modulate data over a fixed frequency (such as 38Khz) in order to give the IR transmitter (diode) the "Peak DC forward current" instead of the "Max DC forward current" as an attempt to maximize range without blowing the part up? Nope. They modulate data for three reasons: Eliminate false positive signals. Unlike the first remote controls (flashlights), modern IR communication uses modulation to make the transmitted signals look as unlike naturally occurring signals as possible. This makes it extremely unlikely that sunlight or other phenomena are mistaken by the receiver for data sent by the operator. Expand the code space. Using modulation means that many different commands can be transmitted on the same channel and the receiver can distinguish them. The data is now represented by a combination of carrier, sub-carrier, timing, and sequence. There are many valid unique combinations and that allows volume up and volume down to be sent with the same hardware and same environment -- unlike the early remotes where your command was just signal present or absent. Deconflict devices. Using modulation enables different devices and different manufacturers to identify to whom they intend to transmit. This prevents the experience of the early days of remote control where turning on your TV might turn off your VCR.
As to transmit power, many IR transmitters are, in fact, designed to their pulsed limits (which can be 10x-100x higher than their continuous limits). This allows the use of smaller/cheaper diodes. On the receive side, the amount of transmit power it would take to "burn out" an IR receiver (diode, photo-transistor, or otherwise) is extreme and doesn't play a role in practical earth-based system design. That output power level would be dangerous to humans long before it would be dangerous to the silicon. Finally, Question #3 If the answer to question 1 is yes, then could I get away with sending a one-time short burst of raw data (byte switches every millisecond) to an IR transmitter (diode) with using the Peak current instead of the max current? Yes. If you are careful. This is typically how it is done. That said, you don't have to design/build this system yourself. There are plenty of commodity parts that do all the physical layer protocol handling (modulation/demodulation) for you. Behold the Sharp GP1 family: It's got the complete demodulating receiver in there! :)
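To make the microcontroller side concrete, here is a minimal AVR C sketch of the kind of thing the question describes (a hedged illustration only: the pin choice, clock frequency, bit timing and the simple on/off-keyed byte format are assumptions, not a standard protocol). A hardware timer/PWM channel would normally generate the carrier more accurately than delay loops, and note that a roughly 50% duty-cycle carrier already halves the average LED current, which is part of why driving the emitter near its pulsed rating is workable.

#include <avr/io.h>
#define F_CPU 8000000UL        /* assumed clock; set to match your part        */
#include <util/delay.h>

#define IR_PIN PB0             /* assumed pin driving the IR LED via its resistor */

/* Send 'cycles' periods of an (approximately) 38 kHz carrier.
   One period is about 26 us: roughly 13 us LED on, 13 us LED off. */
static void ir_burst(uint16_t cycles)
{
    while (cycles--) {
        PORTB |= (1 << IR_PIN);   /* LED on  */
        _delay_us(13);
        PORTB &= ~(1 << IR_PIN);  /* LED off */
        _delay_us(13);
    }
}

/* Crude on-off-keyed byte: a burst means '1', silence means '0',
   one bit per millisecond (about 38 carrier cycles per bit). */
static void ir_send_byte(uint8_t b)
{
    for (uint8_t i = 0; i < 8; i++) {
        if (b & (1 << i))
            ir_burst(38);         /* ~1 ms of carrier */
        else
            _delay_ms(1);         /* ~1 ms of silence */
    }
}

int main(void)
{
    DDRB |= (1 << IR_PIN);        /* IR pin as output    */
    for (;;) {
        ir_send_byte(0xA5);       /* example payload     */
        _delay_ms(100);           /* gap between frames  */
    }
}

On the receive side, a demodulating receiver module (like the Sharp GP1 family above) strips the 38 kHz carrier for you and simply outputs the burst envelope, so the microcontroller only has to time the bursts.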
H: Resistors of 7 Segment display I'm trying to understand the 7-segment display. In the picture below there is a resistor on the GND pin of the 7-segment display. My question is: why does it need a resistor? The other display pins don't have resistors. Is it correct to assume that the resistors are embedded in the component? Thanks. AI: If I read this right, the schematic would look more or less like this. simulate this circuit – Schematic created using CircuitLab This works as long as only one LED is powered at a time. You could just multiplex them. If you want to power several at the same time, they will not all shine with the same brightness; the brightness will depend on the number of lit LEDs. You might want to check this question
H: Preferred way of interfacing relay coil with active low output from MCU My prototype would use active low output from 8-10 ESP8266-12E GPIO pins to control AC appliances. I have several options to interface the relay (or SCR phase control circuit) with the MCU. But I can't decide which one is best to use from a long-term performance and cost perspective. What I have thought of so far is as below: Using a suitable PNP transistor: simulate this circuit – Schematic created using CircuitLab Using an optocoupler simulate this circuit Using a darlington transistor array such as the ULN2803 I have around 8 to 10 GPIO control pins. If I look at it from a cost perspective, the discrete transistor solution seems cheaper than those wired around an optocoupler or ULN2803. However, I am not sure if hand soldering 8-10 transistors is worth the effort (I am no good at it, I admit). Also, I don't know how the discrete transistor based solution will perform in the long run. Any advice will be appreciated. AI: If the microcontroller and relay run from the same supply voltage, then your top option using a PNP transistor is probably the easiest. It's just like the typical NPN low side drive, just flipped around. The optocoupler circuit you show is not good. Most optos don't have the output current capability to drive a relay, especially relays with low voltage coils that can be run from the same supply as the microcontroller. Take a look at the CTR (current transfer ratio) of the opto. However, these digital signals are coming from a microcontroller that can drive them with arbitrary polarity. Unless these are very special open drain outputs or the like, simply invert the logic in the firmware. Now you have active high signals, which can drive a low side NPN switch directly. That also allows for powering the relay from a higher supply voltage, which is a good idea when possible. I go into more detail on that here. For completeness, here is a way to drive a relay from an active low digital output, but where the relay coil requires higher voltage than the digital logic: Again, it is only worth doing this if the digital signal can't be inverted to be high when the relay is supposed to be on, or you are stuck with an open drain output. This is very rarely the case when the signal is coming straight out of a microcontroller. Q1 is in the common base configuration. It acts as a switchable current sink, to voltages higher than the 3.3 V supply. When the digital output is asserted, the bottom end of R1 is held at ground. Figure about 700 mV for the B-E drop, so that leaves 2.6 V across R1. (2.6 V)/(2.4 kΩ) = 1.1 mA. Due to the gain of the transistor, most of that will come from the collector, not the base. That provides about 1 mA base current for Q2. The relay I used as example draws 27 mA at 12 V, so this circuit requires Q2 to have a gain of at least 27. That particular transistor has a minimum guaranteed gain of 100 at both 10 and 100 mA, so there is a comfortable margin. D1 is not optional. It gives the kickback current from the coil a place to go that doesn't require creating a high voltage and blowing out Q2.
H: Problems with deriving output resistance without using a model I was reading different texts about input and output resistance of the above circuit and trying to understand the Rin and Rout without using any transistor models such as the pi-model or T-model. I think I understand how we can comprehend what input resistance is. Here is the summary of what I understand for Rin without using any model: Let's assume the voltage at the point Vb is increased by ΔV. This means the voltage at the point Ve will also increase by ΔV. If we want to write an equation for the input resistance Rin (the resistance seen from the point Vb), we can use the β relation between the base current Ib and the emitter current and the fact that the same ΔV will appear both at Vb and Ve. So Rin can be derived by using the following steps: ΔIb = ΔV / Rin ΔIe = ΔV / (Re//Rload) since Ie = (1+β)*Ib Rin = (1+β)*(Re//Rload) So the input resistance Rin can be written in terms of Re, Rload and β. Question: I cannot find a similar step by step conceptual derivation of Rout without using a model. I mean I would like to clearly write down how Rout is obtained, as I have written for Rin above. The texts say that Rout (looking back at the emitter) is: Re // (Rsource/β) But I'm very confused at this point about how this is derived and how they obtain Rsource/β conceptually. How can we explain Rout step by step here, as in the Rin case? AI: How they obtain Rsource/β conceptually: let Rsource = Rb, and suppose Rb is returned to some supply Vcc so that the value of Rb sets Ie. Using KVL around the base loop, Ib = (Vcc - Vbe - Ve)/Rb. Since Ie = (1+β)*Ib ≈ β*Ib, this gives Ie/β ≈ (Vcc - Vbe - Ve)/Rb, or (Vcc - Vbe - Ve)/Ie ≈ Rb/β. Now hold Vcc and Vbe constant and define Zout = ΔVe/ΔIe, the resistance seen looking into the emitter. A change in emitter voltage forced by an external test current is opposed by a change in base current, and that base current change is multiplied by β in the emitter, so the source resistance Rb seen through the base appears divided by β at the emitter. Including the emitter resistor itself, Zout = (Rb/β) // Re. q.e.d. In conclusion, for the emitter follower: Zin = β*Re (more exactly, (1+β) times whatever impedance Ze sits in the emitter) and Zout = (Rb/β) // Re (Rb may be any complex source impedance Zb). For future reference: in a transistor amplifier with negative feedback through Rf from collector to base, driven through a series resistor Rb from the source, the voltage gain is Av = -Rf/Rb, and it turns out that the feedback also reduces the input impedance (by the feedback ratio) and the output impedance at the collector (the reduced value appears in parallel with Rc). The same is true for op amps, but the gain is so high that we treat the impedance at the (virtual-ground) input as essentially zero, and likewise the output impedance, until current limiting saturates the output driver.
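As a quick numerical illustration (component values assumed for the example, not taken from the texts): with Rsource = Rb = 10 kΩ, β = 100 and Re = 1 kΩ, the follower gives Zin ≈ β*Re = 100 kΩ and Zout ≈ (Rb/β) // Re = 100 Ω // 1 kΩ ≈ 91 Ω, a large impedance step-down from source to load, which is exactly why emitter followers are used as buffers.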
H: Does FT232RLMG support RS-485? I have the FT232RLMG device and a device that uses the RS-485 connection standard. I would like to know if I can connect the TX and RX pins of the FT232RLMG to the device that uses RS-485 directly, or do I need some more components? I have looked for information in the datasheet and found that it specifies, in the typical applications section: USB to RS232 / RS422 / RS485 Converters And in the technical characteristics it says: Data transfer rates from 300 baud to 3 Mbaud (RS422, RS485, RS232) at TTL levels. Link to datasheet: FT232R Thank you very much. AI: RS-485 uses differential signaling; RS-232 uses single-ended signaling. They are not directly hardware compatible. Note also that the FT232R's own TX and RX pins carry TTL-level UART signals (that is what the "at TTL levels" wording in the datasheet means), so you cannot connect them straight to an RS-485 bus; you need an external RS-485 transceiver between the FT232R and the bus. There are many off-the-shelf hardware devices that convert from RS-232 to RS-485, and single-IC transceivers exist as well, although the ones I've worked with interface to 5V and 3.3V logic-level (UART) signals, not full RS-232 voltage levels.
H: 4 ways DC power connector KPJX-PM-4S mounting method I came across a standard 4-ways DC power connector like the Kycon KPJX-PM-4S, for panel mounting. What I was not able to find on the producer website nor on the web is a standard method to secure this connector to a panel. If I have a panel cutout of the shape of the connector (and according to the specifications found in the datasheet) how can I secure it to the panel? I got a standard industry stainless steel panel of 1mm thickness. Should I use screws and nuts on the other end? Or are there specific mounting brackets? Or is the connector supposed to be mounted on a thicker panel?? AI: You can use standard 1mm stainless (what you use depends on the stress it will need to withstand). The actual cutouts are fairly simple: There will be a circular cutout for the main body (I have not downloaded the 3D stuff - available on the website) - the specific information is not on this print. Make sure you have at least 0.5mm oversizing. This print is not clear as to whether the keyways are exposed at the outer main connector body. The seating plane is where the vertical red lines are. Drill two holes based on the two screwhole centres from the drawing, suitable for self tapping screws (the recommended method, see above) or use a nut and bolt combination - the hole size depends on the specific screw you use. There is a note on the length of the screw to use.
H: "Where" does back EMF appear in motor? I've been trying to understand EMF, specifically back EMF in electrical motors, such as for example a setup like this: Say that we apply a voltage to the brushes. I understand that once the rotation gets going, a current will be induced by the changing flux inside the loop, and that this induced current is opposite to the current created by the external voltage. My understanding is that this results in a smaller net current in the loop, and this phenomenon is called back EMF. However, I don't exactly understand "where" the EMF (which is a voltage?) appears. One explanation I've seen is this: where the EMF appears as a voltage source in series with the motor resistance. While this may be a good way to explain how the current will behave, it doesn't seem to explain what actually goes on inside the motor. Surely, the EMF doesn't appear "before" the motor, but rather inside it. Should it then somehow be seen as being superimposed on the resistor? Where, in the first figure, would the EMF appear? Does it change the voltage across the brushes? Where does the "extra" voltage drop happen? AI: The back EMF is generated in the wire that makes up the coils of the motor. When a wire is swept sideways thru a magnetic field, a voltage is generated along the length of the wire. Spin the motor with just a voltmeter connected, and you will see it make a voltage. So yes, the resistance and the back EMF are actually distributed along the wire in the coil. There are lots (infinity, actually) of little resistances in series that each get a little voltage in series with them when the motor turns. Viewed electrically from the outside, this can't be distinguished from a lumped resistance in series with a lumped voltage source. Since this is simpler to draw, think about, and analyze, that's how motors are usually shown.
H: Through hole SMA parasitic effects I'm designing a PCB involving signals up to 6 GHz. I want to use through-hole SMA connectors like these: If I route the RF traces on the top of the board, will the center contact of the through-hole connector sticking out to the other side of the board have any significant effects (compared to something like an edge mount connector)? (I have been looking around for advice on this but have not really found anything, apart from USB 3.0 application notes suggesting that the superspeed traces better be routed on the opposite side of the board when using through-hole USB 3.0 connectors.) In case my description is not clear I'd be happy to provide pictures to illustrate. Thank you! Update: I think I'm going to go with edge launch connectors, just to be safe. Thanks for the answers. AI: When I worked at 1 and 2.5 GHz, we used these connectors without consideration for stub effects. Now, working at 25 GHz , we would just about never use them (even the variants with coax connector types appropriate for 25 GHz). However, your operating band is somewhere between there so these rules of thumb aren't especially helpful. I agree with Andy's analysis that, especially if you trim the stub protruding from the opposite side of the board, this connector is likely to work well at 6 GHz. But you haven't said what your application is and whether you have particularly stringent requirements for VSWR or for insertion loss flatness across a wide frequency band. So I'll add a few suggestions. Using an edge-mount SMA connector would produce a smaller discontinuity between the pcb trace and the connector. Routing the trace on the opposite side (or, on a buried layer close to the opposite side) of the board from where the connector is mounted would produce a smaller discontinuity. In comments you asked, Do you have any suggestions regarding the ground planes underneath the connector? I'd pull all copper away (maybe by 3 - 5 mm?) from the stub part of the connectors center pin. (The stub is the part that isn't on the path from the trace to the other side of the connector). This will reduce capacitance between the pin and other nets (mainly power and ground) which I'd expect to reduce the impedance discontinuity. (The trade-off is it could increase radiation from the stub) If you want to get really fancy, and your signals are not particularly wide-band, you could break out an EM simulation and probably find a way to add an inductive discontinuity (neck down the trace as it approaches the connector) to your trace to compensate for the capacitive discontinuity from the stub --- but I wouldn't advise doing this without being able to optimize it in simulation, and it's probably not justified if your application doesn't have especially strict VSWR requirements.
H: Joule-hour is Watts * 3600? Maybe this is a dumb question, but I am trying to get my units straight. I have an electrical system which has its consumption metered every hour, so I need units that are graduated in hours. So, given SI units, should I base it in "Joule hours" which would be Watts * 3600? In other words, let's say I have a 100 Watt light bulb and it runs for 3 hours. Then energy consumption would be 100 * 3600 * 3 = 1,080,000 Joule-hours. Is that right? AI: In other words, let's say I have a 100 Watt light bulb and it runs for 3 hours. Then energy consumption would be 100 * 3600 * 3 = 1,080,000 Joule-hours. Is that right? No, there is no need to say anything other than joules. Your lightbulb has consumed, over the 3 hour period, 1,080,000 joules of energy. It's as simple as that. If you had a tap running to fill up a 1000 gallon container and it took one hour to fill the 1000 gallon container you wouldn't say you have used 1000 gallon-hours of water. That would be stupid.
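For completeness, the general rule is simply energy = power × time, with time in seconds if you want joules: E [J] = P [W] × t [s]. The kilowatt-hour is the same idea with different scaling: 1 kWh = 1000 W × 3600 s = 3.6 MJ, so the 100 W, 3 hour example is 0.3 kWh, which is the same 1,080,000 J expressed in a different unit.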
H: Why is there no net current in a wire without a voltage applied? Atoms of materials with loosely bound outermost electrons constantly exchange charges between each other over time, and these materials are called conductors. Now, the conducting process is different from the one often described in the electrical engineering textbooks. This implies that in order for current to flow in the circuit, an electron has to move from one lead all the way to the other, which is simply not true. Reality is something like this: The electron at the far left coming from the negative lead of a battery, for example, is then colliding at the nearest atom and because of its acceleration it's knocking out the electron which is revolving at this shell level. The knocked electron is heading to its closest atom and in turn it's doing the same, knocking out an electron which creates a chain reaction. So, basically, electrons move just a little bit, but the overall outcome is virtually instantaneous. What I don't understand is if we take a regular conductive wire WITHOUT applied voltage on it, electrons still constantly bounce from atom to atom which means that literally there is "an electron flow" in the wire, but if we connect the wire to a LED diode nothing would happen. So, what I am really asking is how differs "an electron flow WITH applied voltage" from "an electron flow WITHOUT applied voltage" in a wire. AI: Statistically, there are as many electrons moving in one direction as there are in the 180º opposite so there is effectively no net current. What we know as "current" is the movement of more electrons in one direction than all the others (1D, 2D or 3D through a piece of metal). That's how you can have "tons of free electrons" but no net currents flowing or measurable. The random agitation of those electrons has a name: thermal noise. This agitation is proportional to temperature so you get more of it as you heat things up. However, the average motion is always zero so you can never do any useful "work" or equivalently extract usable energy from the process. This is in agreement with the laws of thermodynamics.
H: Resistors in serial communication In my still relatively short journey into the world of electronics, I have stumbled on multiple schematics that use resistors in serial communication and suspect that they are there for some kind of protection, but I have never been exactly sure. I have found these resistors to vary from 27R to 1K, or not being there at all, which is probably the most common case. Here is a picture of such an example from an Arduino schematic. AI: Regarding your schematic, I think the reason might be to avoid two data transmission chips/systems clashing or fighting. It appears that if the external TX pin is active then it will win the battle to talk to the chip on the right, i.e. the chip on the left is defeated. Other uses of resistors in series with data transmission output pins are usually to provide impedance matching and to prevent reflections - they are usually in the range 10 ohms to 33 ohms.
H: Processing an analog signal to square wave using a SN74LS14 Schmitt trigger? I'm looking at using the SN74LS14 for turning a 3V peak-to-peak audio signal into a square (rectangular) wave. If my reading is correct, the positive-going threshold (V_T+) is 1.6V and the negative-going threshold (V_T-) is 0.8V. Since the input has a 1.5V peak, which is less than V_T+, would the output be a constant LOW? If so, is there a more appropriate Schmitt trigger for a 3V peak-to-peak signal? In fact, what's the SN74LS14's recommended input voltage? I can't seem to find it anywhere. AI: The SN74LS14 is intended to be used with TTL logic levels, not with random analog voltages. The specs you quote say that a voltage above 1.6 volts will be seen as a High, and below 0.8 volts will be seen as a Low. The actual switching threshold will be somewhere between 0.8 and 1.6 volts, but there is no guarantee exactly where. You should use an analog voltage comparator such as an LM339 to "square up" your analog signal. With a comparator, you can set the switching thresholds and hysteresis to suit your application.
H: Is this alternative relay flyback diode path good enough? This question is about an alternative path for relay flyback diodes. I plan to drive a Panasonic 12VDC CP1-12V relay with a NUD3160LT1G power MOSFET, and the NUD3160LT1G datasheet says that because the MOSFET comes with protection diodes, there is no need for a "free−wheeling diode" to drive an inductive load. My question is: is it good enough to have this topology of flyback path through the high voltage zener diodes and the ground and +12 V rails, or should I connect a reverse diode directly across the relay's coil anyway? This is for a non-critical one-off audio application in my car. Cost is not an issue but I would prefer avoiding an additional diode if possible. The NUD3160LT1G will be driven by a 5V ATMEGA328P MCU, if it matters. AI: This is good: the zener from the drain to the gate turns on when Vds starts to rise due to the inductive load as the FET turns off. This partially turns the FET back on, dissipating the inductive spike safely in the FET. This is a recognized method of flyback protection.
H: Resistor in parallel to circuit is causing VREG to output higher voltage I am building a circuit to convert a 300Hz PWM signal to a 30Hz PWM signal with the same duty cycle. It's a pretty simple circuit: I'm having a weird issue with the R7 resistor though. It's in parallel to the GND_PWM_IN to 12V_IN part of the circuit. I have it there because the system that outputs the 300Hz PWM frequency tests the resistance of the circuit, and if it is shorted or over ~150ohms, throws an error code. In short, it wants the resistance to be about 100 ohms. When the resistor is put in the circuit, the output voltage as measured on the 5V line of the VREG is actually around 7VDC. When removed, it outputs a perfect 5VDC. I do not understand what is causing this, but it is causing problems for my microcontroller and the rest of the circuit which doesn't like that high voltage. Any insight into why it would be causing that voltage irregularity, or if there is a better way to fool the impedance sensing circuit without having R7 there, would be greatly appreciated! AI: R7 is pulling that input line towards 12 volts and, via the ATTINY's input protection diodes, is pulling the 5 volt line up. A 5 volt linear voltage regulator will pull its output up to 5 volts, but cannot pull the output down if something else tries to pull the output up. You should probably connect R7 to Ground, rather than +12. However, not knowing what sort of circuit is feeding GND_PWM_IN, I can't say for sure that it will work - at least it won't continue to risk damaging the ATTINY, and anything else using that 5 volts. If GND_PWM_IN is a 12 volt signal, you will need a voltage divider or level shifter to reduce the signal to the 5 volt maximum that the ATTINY can tolerate.
H: How does the MAX7219 in a LED Matrix work? I am wondering how the MAX7219 in a LED matrix actually works. I mean, the datasheet: http://tronixstuff.com/wp-content/uploads/2010/07/max7219.pdf only describes its use with 8 x 7-segment displays, not how to turn a dot on/off on an 8x8 matrix. I would be really thankful if anybody could explain to me how I turn a single dot on an 8x8 matrix on. Thanks AI: Set the decode mode register 0x9 to 0, which disables the BCD to 7-segment decoding on all digits, then set the appropriate bits for the segment rows in the addresses 0x1 to 0x8 for the first to 8th digit columns. By default, after a reset or on initial power up, the intensity is set to 0, the scan limit is 1 digit and the display is blanked (shutdown). So you also have to: set the 0xA intensity value, or the display will be at minimal brightness; set the 0xB scan limit to the number of digits you want to scan; and set the 0xC shutdown register to 0x1 to enable the scanning oscillator.
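To make that register sequence concrete, here is a minimal bit-banged C sketch for an AVR (a hedged illustration: the DIN/CLK/LOAD pin assignments on PORTB, the chosen brightness, and the example dot position are assumptions, not from the datasheet or the question; which physical row/column a given digit register maps to depends on how your matrix is wired to the DIG and SEG pins). In practice you would normally use the hardware SPI peripheral instead of bit-banging, but the register writes are the same.

#include <avr/io.h>

/* Assumed pin assignments for the MAX7219 on PORTB */
#define DIN  PB0
#define CLK  PB1
#define LOAD PB2

static void max7219_write(uint8_t reg, uint8_t data)
{
    uint16_t word = ((uint16_t)reg << 8) | data;   /* D11-D8 address, D7-D0 data */
    PORTB &= ~(1 << LOAD);
    for (int8_t i = 15; i >= 0; i--) {             /* MSB first */
        PORTB &= ~(1 << CLK);
        if (word & (1u << i)) PORTB |= (1 << DIN);
        else                  PORTB &= ~(1 << DIN);
        PORTB |= (1 << CLK);                       /* shifted in on CLK rising edge */
    }
    PORTB |= (1 << LOAD);                          /* rising edge on LOAD latches the word */
}

int main(void)
{
    DDRB |= (1 << DIN) | (1 << CLK) | (1 << LOAD);

    max7219_write(0x09, 0x00);   /* decode mode: no BCD decoding, raw segment bits */
    max7219_write(0x0A, 0x08);   /* intensity: mid brightness (assumed value)      */
    max7219_write(0x0B, 0x07);   /* scan limit: scan all 8 "digits" (columns)      */
    max7219_write(0x0C, 0x01);   /* shutdown register: normal operation            */
    max7219_write(0x0F, 0x00);   /* display test off                               */

    /* Turn on a single dot: row 3 (segment bit 2) in column 5.
       Columns are the digit registers 0x01..0x08; rows are the segment bits. */
    max7219_write(0x05, 1 << 2);

    for (;;) { }
}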
H: Do I need to wire the lamp via a relay? I'm wiring my brother's bicycle with a Philips Rally H4 Bulb 12V 130/100W. He needs it for about 15-25 minutes every day. Since I have only a 12V 7Ah battery and it is insufficient for this bulb (it would drain the battery really badly and damage it), I have connected the bulb to a 6V 10Ah battery. So the bulb wattage will be about 32.5W (current draw of 5.41A) and 25W (4.2A) at 6V, and it's quite bright in my opinion. I have a few questions: Can I run both filaments at once? (Total watts will be about 57.5W.) Right now I have connected it directly through a universal ignition key switch, but it does heat up after running for 20 minutes. Do I need a headlamp relay? Also, what wire gauge would be best? Right now I'm using 18 AWG I guess. Will a 12V 10A fuse be enough? AI: You need at least a 10A relay, and preferably 25A, because the bulb's cold-filament surge is around 9x the running current with no NTC soft start. Try this (at these surge levels the contact lifetime is reduced from about 60k to 10k operations): simulate this circuit – Schematic created using CircuitLab The bulbs can be switched on the high or low side. Use a 10A slow-blow fuse.
H: Why is the magnitude of the transfer function for my plot showing a band-pass filter, but a strange phase plot? I am trying to plot a transfer function, and plotting the magnitude appears to show that it is a band-pass filter, which seems reasonable. The problem is, the phase plot does not match with the magnitude plot at all. The phase plot just looks like the magnitude plot inverted. How is this possible? I've attached my Matlab code and a figure (the top plot is magnitude and bottom plot is phase) Here is also my Matlab code and the function that I want to plot. My question is: what is wrong with my code, and how is this possible at all in general? numElem = 2000; w = linspace(1,2*pi*10^20,numElem); wRes = 2*pi*10^9; %%Insert wRes (resonant frequency) for n = 1:numElem if w(n) > wRes w = [w(1:n - 1), wRes, w(n:numElem)]; resLocation = n; break end end freq = w ./ (2*pi); lambda = 299792458 ./ freq; %Beta B = pi/2 .* w ./wRes; B(resLocation) = pi/2; %Distance d = lambda ./ 4; Z0 = 50; %Ohms C = 10^(-12); L = 1/(wRes^2*C); Zl = 1 ./ (j .* w .* C + 1 ./ (j .* w .* L)); y = 1./((Z0./Zl).^2 .* j.*sin(B .* d)/2 + exp(j .* B .* d).*(1 + Z0 ./ Zl)) Amplitude=20*log10(abs(y)); phase = angle(y)*180/pi; subplot(211) semilogx(w,Amplitude) subplot(212) semilogx(w,phase) AI: Using a linear spacing for the samples on a logarithmic plot gives a strange result because in the plot the points are not equally spaced any more. Modifying the code to show the calculated points reveals the problem: It's better to use a logarithmic spacing as shown below. I've also unwrapped the phase. numElem = 2000; %% --> w = linspace(1,2*pi*10^20,numElem); w = logspace(0, log10(2*pi*10^20), numElem); wRes = 2*pi*10^9; %%Insert wRes (resonant frequency) for n = 1:numElem if w(n) > wRes w = [w(1:n - 1), wRes, w(n:numElem)]; resLocation = n; break end end freq = w ./ (2*pi); lambda = 299792458 ./ freq; %Beta B = pi/2 .* w ./wRes; B(resLocation) = pi/2; %Distance d = lambda ./ 4; Z0 = 50; %Ohms C = 10^(-12); L = 1/(wRes^2*C); Zl = 1 ./ (j .* w .* C + 1 ./ (j .* w .* L)); y = 1./((Z0./Zl).^2 .* j.*sin(B .* d)/2 + exp(j .* B .* d).*(1 + Z0 ./ Zl)) Amplitude=20*log10(abs(y)); %% --> phase = angle(y)*180/pi; phase = unwrap(angle(y))*180/pi; subplot(211) semilogx(w,Amplitude, ':*') subplot(212) semilogx(w,phase, ':*') Which gives the following result:
H: How to trigger a relay and LED when a certain threshold of 0.15 volts is passed I am learning electronics so this is a learner's question. What is a circuit (referred to as A) that does the following? It has two input wires (say from a DC battery, for example) coming from some circuit referred to as B, which is an input voltage source to A. When this source voltage exceeds 0.15 volts, a 5V relay is turned on and also an LED (that needs a minimum of 1.2 volts to work). The LED is turned fully on with no flickering. The input voltage into A can range between 0 and 6 volts. The power supply for circuit A is isolated from B's power supply, meaning it does not power B (the circuit from which the two input wires come). A has an independent power supply. The circuit is to use the LM339N - which seems right to use - which I don't fully understand yet; so I am asking this question here. I don't need a fancy answer - just an answer that shows how to use the LM339N in a clear way to make the circuit. AI: simulate this circuit – Schematic created using CircuitLab So in the schematic I've drawn, you get the behaviour you have requested. From left to right: The input signal (0 - 6V) goes through R1 as a current limiter, and is clamped to 5V maximum by a 5V Zener diode D1. This is useful to protect the input of your op-amp/comparator. The resistor R6 next to the zener diode ensures the signal input will be pulled low if it floats, and will stop strange behaviour on the output relay etc. The protected signal then goes into the non-inverting input of a general purpose op-amp which is in an "open loop" or basically "comparator" mode, where if: A (non-inverting "+" input) > B (inverting "-" input), OUTPUT = HIGH (~5V). If A < B, then OUTPUT = LOW (~0V). I have a voltage divider made from supply A's voltage level, to give 0.15V from 5V through the divider (10k and 310 Ohm). This is a "reference" voltage that lets us trigger the logic change above or below 0.15V. The op-amp is a general purpose, single-supply op-amp which will work fine in this diagram as shown. The supply capacitor C1 for it is important, for any IC you use - always have this as close as you can to the VCC and GND pins. The output of the op-amp has a pull-down resistor to give it a load, and a discharge path for the MOSFET M1's gate if for some reason the MCP6001 goes high-impedance. It may be removed if you don't want it. The output of the op-amp then goes to the gate of the N-channel MOSFET "M1". This MOSFET acts as a "power switch", in a "low side" configuration. This means that the power switch completes the connection of the "load" to "ground", to allow current to flow. The load is the relay coil (L1) (with protection diode D2) and the LED with a series resistor to limit the LED current. If it's a red LED using only 1.2V and with a 5V supply, to limit the current to 10mA the resistor should be around 330 or 470 Ohms. The part used in this first circuit is not really what you wanted, but actually the MCP6001 op-amp is probably better than the LM339 comparator for a newbie to start with, because the LM339 has an open-drain output, which is annoying because it inverts the logic, whereas the MCP6001 uses a standard 'push-pull' style output, which makes it easier to understand what a logic HIGH and LOW is.
You can simply use a P-channel MOSFET (or PNP BJT) with the LM339 though, and a different way of using the resistor R4; I'll show that version below: simulate this circuit This second version, with an open-drain comparator like the LM339, uses a P-channel MOSFET arranged as a "high side switch". This connects the load to the positive power supply (B), and ground in this case is always connected, unlike in the last circuit. The high side switch is actually preferable for many more types of loads than a low-side switch, just because some loads do not enjoy having their ground removed, but basically any circuit is fine with the positive power supply rail being removed (switched off).
H: When does the capacitor charge and discharge in an astable multivibrator circuit? I was reading this and couldn't figure out at what point, and how, the capacitors get discharged. AI: First some background. Bipolar Junction Transistors (BJTs) are devices in which the base-emitter current, that is the current flowing between base and emitter, controls the collector-emitter current. In other words, when a current flows through the base of the transistor, a typically larger current flows through the collector. While the current is variable, the voltage at the base of the transistor remains fairly constant when forward biased, regardless (*) of the current, and for a silicon device is typically in the region of 0.7V. Now let's look at the multivibrator circuit. Let's start by assuming that Q2 is off and Q1 is on. In this configuration Out2 is pulled up to 9V, and the base of Q1 is at 0.7V (so there is 8.3V across C2). Out1 is pulled down to 0V by Q1, and C1 is charged up to 8.3V. In this configuration the base of Q1 is being held on through R3, which allows C1 to discharge up through R2 and Q1's (grounded) collector. As it discharges, it will reach 0V, and then begin to charge up in the opposite polarity until the voltage at the base of Q2 eventually reaches 0.7V. At this point Q2 is starting to turn on as current flows into its base. The moment Q2 turns on, its collector is pulled down to 0V. However C2 is still charged up to 8.3V as it hasn't had a chance to discharge. Because the right hand plate has been pulled down to 0V, the left hand plate must be pulled to 8.3V below this, meaning the base of Q1 is now pulled down to -8.3V, which turns it off. Now what happens is that there is no current flowing into the base of Q1, but there is a pull-up resistor to +9V - R3. It is then through R3 that C2 begins to discharge until it reaches 0V. Once it reaches 0V, it now begins to charge again - remember it is pulled up to +9V. At the same time, C1 is now being charged up through R1 and the base of Q2 towards +9V because Q1 is now off. Having reached 0V, C2 charges until the base of Q1 hits +0.7V again. At this point Q1 switches on, which immediately pushes the base of Q2 down to -8.3V because of C1, switching it off. The cycle repeats. Basically the capacitors are charged and discharged by the two resistors. When a transistor is off, the corresponding capacitor charges up through the pull-up resistor on the collector. When the transistor switches on, it generates a negative voltage on the base side of the corresponding capacitor, which allows it to be discharged through the pull-up resistor on the base. There is a nice simulation of the Astable Multivibrator in the examples of the Falstad Java Circuit Simulator. (*) Technically it is variable, but only over a small range, similar to a diode's forward voltage
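As a rough design note (the standard approximation for this circuit, stated here from first principles rather than taken from the linked article): each half-period ends when a base capacitor, starting near -(Vcc - 0.7 V) and charging toward +Vcc through its base resistor, crosses +0.7 V, which takes close to t ≈ ln(2)·R·C ≈ 0.69·R·C. With equal base resistors and capacitors the oscillation period is therefore about T ≈ 1.38·R·C; for example, 100 kΩ and 10 µF give a period of roughly 1.4 s.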
H: Which motor should I use? High speed, reasonable torque I am currently taking on a project of building a small jet engine, and I wanted to know which kind of motor I should use to drive the shaft. Apparently steppers lose torque at higher speeds. Ideally I would like a motor which offers high speed with torque greater than or equal to 4Nm. I know very little about servos. Power consumption is not an issue. AI: I'd suggest you go back to the drawing board. 4 Nm torque will translate into an insanely high power output for your engine. I'm assuming here that it's not actually a jet engine, but uses a motor driven blower to compress the air... these are called ducted fan motors, and are common as electric RC model 'jet' engines. 4 Nm is about 11 HP (@20k rpm). That is a very, very big motor. Most large RC ducted fan motors are in the 1-3 HP range and typically run at 20000-45000 rpm. Here's one that is less than 1 HP for reference. These are BLDC motors, and like a stepper motor they have multiple phases, in this case normally 3 phase (and they are not like stepper motors in many ways).
H: How to protect the enable signal (EN) of a car battery powered device? I am developing an electronic circuit that will be installed in a car and get its power from the car battery. The circuit will use around 600mA. The device should be turned on when the car ignition is turned on. To protect the device against voltage transients and reverse battery connection I will use this power supply reference design from TI. I'm planning to only use the smart diode controller (LM74610) and the buck converter (LM53603) part of the design, and leave the supervisory circuit and the boost controller out of the design. My plan was to connect the car ACC wire directly to the enable pin (EN) of the buck converter, so that my device gets power when the ignition is turned on. My question is whether I also have to protect the EN pin from voltage transients and reverse battery connection. If yes, do I need the full transient and reverse polarity section of the reference design in front of the EN pin, or is there an easier way to safely detect voltage on the ACC wire? AI: There are two things I would suggest: If it's an EN signal, it doesn't need much current to function correctly (look at the input bias current specification). Putting a large resistor in series with this pin will help. As Tony mentioned in his comment, yes, automotive electrical systems can see huge transients as a result of load dumps. Imagine that your vehicle is driving around with your headlights on, and you then decide to turn them off. Your alternator will not respond fast enough to the sudden disappearance of that load, so you'll see a huge spike in voltage on what is nominally a 12V system until it recovers. Additionally, the nature of the loads (inductive, etc.) can contribute to this. The solution here is to find a nice TVS diode that can withstand say 100V, and protect a line around 12V nominal. It looks like that part is already designed with automotive in mind, and EN can tolerate up to 40V, so choosing a TVS diode that starts conducting around say 18V plus adding a 10K series resistor should be sufficient from a protection point of view. Choosing a TVS with too low a working voltage may cause it to start conducting earlier than expected, burning power / potentially destroying the device. The other way you could do this would be with an optocoupler, but I don't think that is necessary here. Edit: You can see here the minimum bias current: So if we back-of-the-envelope some margin on that (pretend it needs a minimum of 10uA), and then pretend our worst case low voltage is 10V, then a 1Mohm resistor would still allow up to 10uA of current to flow. So of course the 10K could work, or you could then use a 100K or 1M resistor in series with that input.
H: What is the thevenin voltage and resistance of the following circuit? What is the Thevenin voltage and resistance at terminals a,b? Specifically, how do you go about finding these? I tried this problem and I get a Thevenin voltage of 57.6 V and a Thevenin resistance of 6 Ohms. AI: The equivalent resistance, short-circuiting the voltage source is: $$R_{eq}=(5||20+8)||12=(5(1||4)+8)||12=(5\frac{4}{5}+8)||12=12||12=12(1||1)=6\Omega$$ where \$||\$ is the parallel resistance operator: $$r=\frac{r_1r_2}{r_1+r_2}$$ The equivalent voltage source is obtained by calculating the open-circuit voltage. There are two loops, one for the power source (\$i_1\$) and other for the \$12\Omega\$ resistor (\$i_2\$). Hence the equations are: $$ 5(i_1-i_2)+20i_1=72\\ 12i_2+8i_2+5(i_2-i_1)=0 $$ In matrix notation: $$ \left[ {\begin{array}{cc} 25 & -5 \\ -5 & 25 \end{array} } \right] \left[ {\begin{array}{cc} i_1 \\ i_2 \end{array} } \right]= \left[ {\begin{array}{cc} 72 \\ 0 \end{array} } \right] $$ Hence: $$ \left[ {\begin{array}{cc} i_1 \\ i_2 \end{array} } \right]= \left[ {\begin{array}{cc} 3 \\ 3/5 \end{array} } \right] $$ And thus, the voltage is: $$v_{eq}=20i_1=60V$$
H: OP-AMP RC multivibrator not producing a sine wave across capacitor. I have built my first ever op-amp RC multivibrator using the schematic above. It produces a square wave, but it is somewhat problematic. The square wave has a duty cycle of only ~25%, and has a non-zero base value. To inspect this, I put a probe up to the points on the schematic labelled PROBE + and PROBE -. This is what my scope read: What seems to be happening here is that the capacitor seems to be both charging and discharging through the resistor. This is problematic because the duty cycle is also based on the capacitor discharging to a non-zero value. Here's a picture of the output (yellow) compared to VCC (blue). The second problem is that the capacitor is charging to only 350mV, which results in a 350mV peak. Is this supposed to be the case? How would I amplify this? My first idea was to hook up the output to a second op-amp's inverting input, and the positive rail to the non-inverting input, which should produce the inversion of the wave. I would then hook that up to a NOT gate to get the original waveform. The result of hooking this up to the second op-amp is shown above. The yellow line is the output of the multivibrating op-amp, the blue line is the result of the secondary op-amp. As you can see, the secondary input messes with the first output signal. I'm not sure why. Any help with this would be appreciated. OP-AMP datasheet: http://www.ti.com/lit/ds/symlink/ne5532.pdf AI: You need to read your data sheet more closely. Specifically, read section 10. The NE5532x and SA5532x devices are specified for operation over the range of ±5 to ±15 V but you are trying to run it from a single +5 supply. So the op amp is not remotely happy. Furthermore, if you look at section 7.5, Maximum peak-to-peak output swing, you'll see that for +/- 15 volts the op amp is only guaranteed to swing +/- 12 volts, or within 3 volts of the power supply voltages. Notice that, if this applies to lower voltages (although there is no way to tell, since there is no spec for this, which ought to be an enormous red flag) there would be no output swing at all for power supplies less than 6 volts total. And this is pretty consistent with the 350 mV that you're getting. Worse, there is no obvious reason for your oscillator to work. Consider the situation when the output transitions from low to high. The high output level both charges the capacitor AND sets the trigger level, so in theory it will take infinite time for the capacitor to charge to the point where it will trigger an output change. Given that real op amps have input voltage offsets, it would be perfectly possible for the capacitor never to reach a trigger voltage. Furthermore, even if the circuit did work, why in the world would you expect a sine wave output? What you have made is a very bad version of a classic relaxation oscillator, and a square wave output is exactly what you'd expect. The capacitor voltage should then be exponential waveforms as the square wave charges and discharges it, and that (within the limits of your setup) is exactly what you're seeing.
H: ATmega168 GND and AVCC not needed when programming? I'm reading "MAKE: AVR Programming" and for their "basic LED blinking setup" I noticed in their diagram they don't have the 2nd GND and AVCC (or AREF, which I'm still unsure what it's used for) connected. I'm using an ATmega328 and everything I read said that BOTH GNDs and AVCC need to be hooked up, but in this picture (and it is indeed the "final" hookup picture) the GND on the right side of the chip and AVCC are not hooked up......is this correct? If so, can I do the same on my ATmega328? If so, what exactly is the point of the 2nd GND and AVCC? edit: I also note that a Crystal (with the 2 Ceramic Caps) is not needed here either? Any particular reason? When programming my ATmega328 it would not work at all unless I hooked up the Crystal and 2 caps to Xtal1/Xtal2 (But I pulled my ATmega out of an Arduino, not sure if that matters.) AI: Leaving AVCC or a GND pin disconnected is an error -- this diagram is wrong! The chip may work with these pins disconnected, but it may also cause damage. Don't chance it. A crystal is not always needed, as the AVR has an internal oscillator, which runs at roughly 8 MHz. However, the AVR must be configured to use that internal oscillator, and cannot be reprogrammed (even to change the oscillator settings!) if it isn't receiving a clock signal.
H: How to calculate the value of the fixed resistor in a light dependent voltage divider? I have made what I thought would be a simple circuit, an LED night light. The power source is 5V @ 1A. The schematic is below: simulate this circuit – Schematic created using CircuitLab The intended goal is to increase the brightness of the LED as light decreases. The LED should be completely off when the light level is high. When I built the circuit, I was very surprised to find out that it functioned in the COMPLETE OPPOSITE way. When I pointed the LDR towards a light source, it increased the brightness of the LED as it turned closer to the light source. If I put my finger on the LDR so that almost no light can enter, the LED looks as if it is completely off. Why is this happening? I suspect it has something to do with the voltage divider, whose output is connected to the base of the 2N3904. I have calculated the value of R1 using the voltage divider equation. I will post my calculations soon (within the next couple of hours); my final result of the inequality that I solved (will post my steps soon) is 1kΩ. I tested the LDR and here is how the resistance changes in different cases: 1. Almost complete darkness - 0.83MΩ 2. Dark - 0.44MΩ 3. In between - 8.66kΩ 4. Kind of bright - 2.45kΩ 5. Bright - 1.23kΩ 6. Very bright - 1.21kΩ The LED has a forward voltage drop of 2V. The peak brightness of the LED is achieved with a current of 9.09mA. In case 1, full brightness of the LED should be achieved. In case 2, half brightness of the LED should be achieved. In case 3, very little emission from the LED. In cases 4-6, the LED should be off. It seems as if my calculation (which I will post soon) for the fixed resistor of the light dependent voltage divider is wrong. How do I calculate R1 correctly? Sorry if my question is a bit vague; if you need any details please ask. Thanks. Edit: The positions of the LDR and R1 (the 1.1k resistor) are now switched in the actual circuit. After doing this, the LED is always on. I will calculate the value of R1 again. AI: JohnD is correct: the LDR and R1 need to be interchanged for the circuit to work as you want. Assuming you interchange the LDR and R1 (the LDR should be connected from the transistor base to emitter), here's a way to calculate the required value of R1: Let's say that the LED should turn off for a condition somewhere in between Case 3 (LDR = 8.6kΩ) and Case 4 (LDR = 2.5kΩ). Let's choose LDR = 5kΩ as a suitable condition. For this condition, the base voltage of the transistor should be approximately at the turn-on Vbe (about 0.6V from the 2N3904 datasheet available here http://www.jameco.com/Jameco/Products/ProdDS/783421.pdf). So Vout = 0.6 = 5 * LDR/(LDR+R1) = 5 * 5k/(5k+R1). Solving for R1 gives R1 ~ 36kΩ. Next we should check how much current can be supplied to the LED in full darkness: In full darkness, the LDR resistance is very high, and approximately an open circuit. R1 will therefore supply the entire base current, calculated as Ib = (5-0.6)/R1 = 122 µA. With a transistor gain of ~70, this would result in a maximum collector current through the LED of approximately 8.5 mA. This may be enough for a nightlight LED. If you find the LED is not bright enough, then using two transistors in a Darlington arrangement would provide a much higher current gain and thereby allow more LED current.
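If you want to redo this calculation for other thresholds or supply voltages, the arithmetic is easy to script. A minimal C sketch (the 5 V supply, 0.6 V turn-on Vbe, gain of 70 and 5 kΩ threshold resistance are the same assumed values as above; edit them to suit your parts):

#include <stdio.h>

int main(void)
{
    const double vcc    = 5.0;     /* supply voltage                              */
    const double vbe_on = 0.6;     /* assumed transistor turn-on voltage          */
    const double hfe    = 70.0;    /* assumed current gain                        */
    const double ldr_th = 5.0e3;   /* LDR resistance at the desired off threshold */

    /* From vbe_on = vcc * ldr_th / (ldr_th + r1), solve for r1 */
    double r1 = ldr_th * (vcc - vbe_on) / vbe_on;

    /* In darkness the LDR is roughly an open circuit, so R1 supplies all the base current */
    double ib_max = (vcc - vbe_on) / r1;
    double ic_max = hfe * ib_max;

    printf("R1 = %.0f ohm\n", r1);
    printf("Max base current            = %.1f uA\n", ib_max * 1e6);
    printf("Max LED (collector) current = %.1f mA\n", ic_max * 1e3);
    return 0;
}

With the values above it prints R1 ≈ 36667 ohm and a maximum LED current of about 8.4 mA, matching the hand calculation.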
H: Identify Symbols on Circuit Diagrams for VHF Radio Build I found these circuit diagrams for a VHF transceiver online, and am having trouble recognizing a number of the symbols. Would you please help me identify the boxed symbols? Here's what I've got so far: Drawing #1 (original image link): The red box with CLK should refer to clock, but I don't know which type or if it even matters. Completely clueless on the blue box with L12. Drawing #2: I think that the red boxes labeled CuAg and CuL are likely wire coils of copper/silver and enameled copper wire respectively. The blue box appears to be a transistor of some kind but I don't know which, and googling the label (looks like 25k1904) has gotten me nowhere. The yellow boxes should be inductors with a ferrite core but I don't understand the VK200 labelling. The purple box I'm thinking is a trimming potentiometer? The brown boxes are diodes but I'm not sure if the 12V is referring to a specific type or just that they are connected to the 12V GND. And the green box is a switch ('preklop' translates to 'switch' from Slovenian) but RX izhad and +VRX are throwing me off there. Drawing #3: The pink, red, yellow, and orange boxes look like relays, but I have no idea what kind. The blue box is a straight up mystery, as is the green. Lastly, I was going to use Amazon & a local RadioShack to track down most of the items, but if y'all have any sourcing tips for small electrical components, they are all welcome. Thanks in advance for any and all help! AI: Image #1 The two components in the red box are the two halves of a 74HC74 flip-flop. (The part number is a bit out of place, but is present -- look under the right half.) The component in the blue box looks like some sort of adjustable inductor or transformer. Image #2 The component in the blue box is a MOSFET. Andrew Morton's interpretation of the part number as "2SK1904" looks plausible. The component in the purple box is a 1MΩ potentiometer. (This diagram is using IEC resistor symbols, which look like a rectangle instead of a jagged line.) The component in the green box, on the right, is a coaxial connector of some sort. The component in the left brown box is using an uncommon symbol for a Zener diode; the "12V" marking is probably supposed to be its breakdown voltage. The brown box on the right is drawn as a normal diode -- this may be an error, given that it's also marked as "12V". The symbols in red and yellow boxes all look like inductors of various sorts. The ones with T's across the middle are adjustable; the ones with a line next to them have a core. I'm not sure what "VK200" would mean either; it may be an abbreviation in another language. Image #3 Blue box is an inductor, connected to ground (the inverted T below it). Red, yellow, orange, and purple boxes all look like adjustable inductors and transformers, similar to ones which appeared in previous images. I'm not sure what the significance of the box is supposed to be; it may mean that they should have metal cases. Green box is two crystals. I can't quite read the text, but it probably indicates the frequency and cut.
H: Photovoltaic cell bias? How does voltage bias affect a photovoltaic cell? I'm receiving conflicting opinions online, with some saying that photovoltaic mode is entered only with forward bias, some saying reverse bias, and some saying no bias. Furthermore, is there a breakdown voltage for photovoltaic cells? The Shockley equation as I understand it (\$I = I_0(e^{(eV/k_BT)} – 1) – I_p\$) physically cannot tend to negative infinity as it's an exponential equation. Thanks! AI: If you're asking whether light is converted into photocurrent only under specific photodiode bias voltages, the answer is that photovoltaic action is (mostly) independent of the PD bias. Notice that the photocurrent, Ip, is independent of the diode voltage, V, in your equation. In communications circuits, photodiodes are operated with negative bias or zero bias. PDs are not operated with forward bias, because it saturates the device (somewhat defeating the purpose of having a photodiode), as well as increasing junction capacitance (undesirable in high speed optical communications circuits). The usual classification is "photovoltaic" mode for operation with zero bias, and "photoconductive" mode for operation under negative (reverse) bias. Yes, PDs have a reverse breakdown voltage, as does every diode. In high speed PDs used in optical communications, this reverse breakdown can be quite low, about 4-5 V. For PDs used in power generation, the reverse breakdown voltages are much higher. Typical PIN diode characteristics: Some photodiodes have much higher reverse voltage ratings (up to 200 V) and are used very close to their avalanche point. This gives these diodes much higher effective optical gain (on the order of 100x).
H: Can I wire two switches so that the circuit is closed when one or both switches are thrown? I basically want my switches to act like a logical OR, so that when one, the other, or both doors are opened, a common light turns on. AI: Simply wire the two switches in parallel, like so: simulate this circuit – Schematic created using CircuitLab
H: What happens if the interrupt occurs during the execution of the HALT instruction? Consider the following program segment on a hypothetical processor. I have this question in my course CS201. Suppose this processor has 32-bit Load/Store operations, 16-bit ALU operations, and 16-bit Branch instructions. The program has been loaded in memory with a starting address of 3000 (in decimal). Assuming byte addressing, if the interrupt occurred during the execution of the HALT instruction, what will the return address pushed onto the stack be? My try: I referred to this link http://x86.renejeschke.de/html/file_module_x86_id_134.html According to this link, the return address saved on the stack must be that of the instruction after the HALT instruction, which gives me the saved address as 3000 + 4 + 2 + 2 + 2 + 4 + 2 + 2 = 3018. Hence, the stack saves 3018. But I don't have the answer with me, so I can't confirm it. Am I going right? AI: The return address saved on the stack must be that of the instruction after the HALT instruction Of course. Otherwise the CPU would go into a HALT instruction again after returning from the interrupt. Suppose this processor has 32-bit Load/Store operations, 16-bit ALU operations, and 16-bit Branch instructions. Some information is missing: either the information that "HALT" is a "branch" instruction on the given CPU (on modern automotive CPUs "HALT" is a special instruction class, not "branch") or the information about how long "HALT" is. 3000 + 4 + 2 + 2 + 2 + 4 + 2 + 2 = 3018 Assuming "HALT" is also 16 bits long: Yes, it's correct.
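As a sanity check on the arithmetic, here is a tiny sketch that just adds up the instruction lengths assumed above (4-byte Load/Store, 2-byte ALU/Branch/HALT); the actual program isn't reproduced here, only the byte counts.

```
# Sketch: reproduce the return-address arithmetic from the answer.
start_address = 3000
instruction_sizes = [4, 2, 2, 2, 4, 2, 2]   # last entry is the 16-bit HALT itself

return_address = start_address + sum(instruction_sizes)
print(return_address)   # 3018 -> address of the instruction after HALT
```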
H: Confusion with RCD principle of operation I understand that an RCD is a device that interrupts an AC circuit (usually in outlets) if the current between live and neutral/ground is not the same, indicating something is wrong with the circuit. It is in a closed state (the power is connected to the circuit) when two wire coils (live and neutral) on the same core oppose each other (as they normally do), cancelling out their magnetic fields. However, when these coils do not oppose each other or vary enough to produce a net magnetic field, they trigger a sensor which opens relays that disconnect the circuit. In practice this could happen when someone touches the live wire, for instance, and is connected to ground (which is connected to neutral on most modern systems). But my question is: in an AC circuit, wouldn't shorting out or allowing current flow between the two wires leave their relative currents unchanged, because they are already connected in the AC generator and the electrons that pass through the person (or connection point) would just join the alternating flow that already exists (at higher currents, but still at the normal 60 Hz oscillation/opposition between live and neutral)? AI: As Martin hinted in his comment, an RCD is most useful in a situation where a fault is likely to cause a current flow to ground/earth. This is actually a very probable situation for a fault: touching one wire (while being somewhat ground-connected) is much more likely than touching two wires simultaneously. And it doesn't need to be you: the compromised live wire could touch a grounded part of the appliance. In my country (230V, both wires live) there are generally two classes of appliances: metal casing earthed, and insulated casing 'double insulated'. In this situation an RCD is very useful for detecting a compromised wire touching a metal part of an earthed appliance.
H: Can I lay traces on internal layers of 4 layer PCB? In a 4 layer board it's advised to have power and ground planes as internal layers, but due to a space crunch on my board I want to lay a few tracks on the internal layer. Is it okay to do it? If yes, is there any precaution to take while doing so? AI: Tip. Keep ground as an inviolate layer, and use the 3 other layers for signals and power. Treat power as a signal. While there are theoretical benefits to having a power plane, it is rarely worth it. The cost of dedicating a whole plane to power is too high, in terms of real estate. The time saving of 'not having to think' about power distribution is actually a problem, because it means that you tend not to think about power distribution! A good layup is critical signals on the top (where you can keep an eye on them), ground as layer 2, then layers 3 and 4 as a strict Manhattan grid of power and less critical signals. The point about using a Manhattan wiring grid is that you always have a consistent way to get from A to B, you never have a tricky rip-up and reroute of half the board, late in the layout process. Those are the frustrating times when some people yield to the temptation to 'just run a little track through the ground plane', then you do it again and the next thing you know, the ground has fallen apart like a lace curtain. Route power like this. Put a bulk electrolytic capacitor at the power entry to the board. Put a 10nF at every package power pin. Now think. Where can you tolerate voltage drops? Where must you have isolation between groups of components conducted via the power rails? What is the power consumption of each block? Now route tracks sufficiently wide for the current, and use series resistors, inductors, ferrite beads, or LDO regulators to achieve the desired rail voltages and inter-block isolation. You don't always need a ground plane of course, only for RF and very high speed logic. A good alternative is to use a Manhattan pair of layers to lay an XY grid of ground tracks, connected at every intersection. Then you have 2 clear layers and two half layers for the rest of your signals and power. If you have a few critical signal tracks, then you can add a local ground under just those few.
H: Preserving state of variables in RAM on Power Failure I'm developing code with the NUC240 (ARM Cortex-M0) microcontroller, where I need to store the values of some variables to the internal Data Flash. I use the ISP procedure for this, but I have seen the following statement in the datasheet of the microcontroller about the Data Flash: 'The size of each erase unit is 512 bytes.' The Data Flash memory starts from 0x20000, and if I need to update this location I will have to erase it first. Does this mean that if I erase 0x20000, memory up to 0x20200 will also be erased, wiping out the other values stored in those locations? How can I overcome this situation? AI: Well, if you want to preserve data in an erase block you should first read that block, store it in RAM, make whatever changes you need, erase the flash block, and then write the block back from RAM. Simple enough.
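A minimal sketch of that read-modify-erase-write cycle is shown below. The flash_read/flash_erase/flash_write callables are hypothetical placeholders for whatever ISP routines your BSP provides; they are not real NUC240 API names.

```
# Sketch of the read-modify-write cycle for a 512-byte erase page.
PAGE_SIZE = 512

def update_flash_bytes(page_base, offset, new_bytes,
                       flash_read, flash_erase, flash_write):
    # 1. Copy the whole erase page into RAM.
    page = bytearray(flash_read(page_base, PAGE_SIZE))
    # 2. Patch only the bytes we want to change.
    page[offset:offset + len(new_bytes)] = new_bytes
    # 3. Erase the page (this wipes all 512 bytes).
    flash_erase(page_base)
    # 4. Write the patched copy back, so the other values survive.
    flash_write(page_base, bytes(page))
```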
H: NMOS Gate-Source voltage My current understanding of the NMOS device is that if you apply a big enough potential difference to the gate relative to the p-type substrate \$V_{GB}\$, free electrons are going to appear near the surface of the substrate, creating a channel that can potentially conduct a current if there is a non-zero Drain-Source voltage. However, the equations that describe the MOSFET, like \$V_{GS} > V_{T}\$, consider the difference between the gate voltage and the source voltage. My question: Why do those equations depend on \$V_{GS}\$ and not \$V_{GB}\$? Why does the source voltage matter in creating this channel of electrons? Wouldn't it work if \$V_{GB} > V_{T}\$ and \$V_{GS} = 0\$? What am I missing? AI: Your understanding is correct. For circuit design the source is usually the reference terminal. Consequently so-called source-referenced transistor models were introduced to formulate the drain current as a function of voltages relative to the source terminal (\$V_{GS}, V_{DS}, V_{BS}\$). In cases where the bulk source voltage \$V_{BS}\$ is zero the gate-source voltage \$V_{GS}\$ is equal to the gate-bulk voltage, so we actually see the effect of the gate-bulk voltage \$V_{GB}\$. For non-zero \$V_{BS}\$ the body-effect is used to model the behavior of the transistor. The body-effect describes a change of the threshold voltage \$V_T\$ and so the behavior of the transistor depending on \$V_{GB}\$ is obtained. The threshold voltage \$V_T\$ with a backgate-bias voltage \$V_{SB}\$ is given by the following expression (see Wikipedia). $$ V_T = V_{T0} + \gamma\left(\sqrt{|-2\phi_F + V_{SB}|}-\sqrt{|2\phi_F|} \right) $$ where \$V_{T0}\$ is the threshold voltage for \$V_{BS} = 0\$, \$\gamma\$ is the body-effect parameter and \$2\phi_F\$ is the surface-potential. Integrated MOS transistors are often symmetric. Therefore the source and drain terminals are not defined by the layout of the transistor but only by the applied voltages. In order to get equations that reflect that symmetry, body-referenced models are used, where indeed voltages are referred to the substrate and not the source of the transistor.
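To see how the source-referenced model recovers the gate-bulk behaviour, here is a small sketch that evaluates the body-effect expression above with illustrative, made-up process parameters.

```
# Sketch: VT = VT0 + gamma*(sqrt(|2*phi_F + VSB|) - sqrt(|2*phi_F|)),
# with illustrative process parameters (not from any real process).
import math

VT0      = 0.5    # zero-bias threshold voltage, V (assumed)
gamma    = 0.4    # body-effect coefficient, V^0.5 (assumed)
two_phiF = 0.7    # surface potential 2*phi_F, V (assumed)

def vt_with_body_effect(vsb):
    return VT0 + gamma * (math.sqrt(abs(two_phiF + vsb)) - math.sqrt(two_phiF))

for vsb in (0.0, 0.5, 1.0, 2.0):
    print(f"VSB = {vsb:3.1f} V  ->  VT = {vt_with_body_effect(vsb):.3f} V")
# With VSB = 0 (source tied to bulk) VT equals VT0, which is why the simple
# VGS > VT condition already captures the gate-bulk voltage in that case.
```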
H: What is "conditioning of a capacitor"? I got a flash for my DSLR camera: A mecablitz 52 AF-1 digital Canon with multi-language manual from here. In section 15.2 (page 187 for english), it states: 15.2 Conditioning the flash capacitor The flash capacitor built into the flash unit undergoes a physical change when the device has not been used for a long time. For this reason it is necessary to switch the device every three months for approx. 10 mins. The power supplies must deliver enough power so that flash standby lights up no later than 1 min after switching on. I'm wondering: What physical change could a capacitor undergo, which can be more or less avoided by charging it every three months? AI: Even when not in use, there are chemical reactions going on bettween the anode oxide layer and the electrolyte. This may increase leakage current and reduce withstand voltage over time. This effect however is reversable by performing a so called "voltage treatment" (also "conditioning"). Voltage treatment normally involves applying the rated voltage with a resistor in series to the capacitor for a extended period (say 1 hour). This reforms the dielectric.
H: 11.1v to 12v vs 14.8v to 12v I need to get either an 11.1v battery to 12v or a 14.8v battery to 12v for a camera. The current draw will be less than 250 mA. I was wondering what the cheapest and easiest way to do that is. I will be using either of my two lipo batteries for my power source. Thanks for any help. AI: Be aware that a nominal 11.1v battery is 3 LiPo cells, or 3S, which could range from as low as 8.1v to as high as 12.6v. Similarly the 4S battery could cover 10.8v to 16.8v. However, you don't need to use all of the range, at the cost of capacity. A standard charger will always tend to give you the top of the voltage range, but you can curtail the use before it drops to the bottom of the range. As the 3S would need both buck and boost to cover your 12v requirement, it's probably easier to use buck only from your 4S battery, and not use the bottom part of the range. Ideally you'd use a 2S or a 5S, and then the whole discharge voltage range will be covered with a single boost or buck converter respectively.
H: SMD MOSFET copper pad heatsink I am currently searching for an NMOSFET for driving a solenoid valve with a maximum coil current of 500mA at 24V. I am choosing between: FQT7N10L and FQT13N06L. The problem I am having is the thermal calculations and proper copper pad heatsink design. On both devices the maximum thermal resistance between junction and ambient is written to be around 60°C/W (when mounted on the minimum pad size recommended). But what is: "minimum pad size recommended"? I've also read through this app note by Fairchild. There is a table and graph on page 4 of the document. Surface area is written to be for 2 oz copper. But looking at the document, layouts 1-6 are done on a single side of a 1 oz copper PCB (page 6). So are the measurements only for top and bottom sides of 1 oz copper? Are there any ballpark calculations for the required copper surface area for heatsinking a SOT223 package? AI: The minimum recommended pad size is shown on page 8 of the datasheet for both (Fairchild do not let me take snapshots but this follows a specific footprint which can be found here) The minimum pads are the red ones in the picture below: To get better heat sinking, the simple answer is to use more copper, as outlined in blue. There is no single formula for effective \$R_{\theta\ ja}\$ as the internal structure of all these devices is a bit different. We can, however, use the application note to see how much of an improvement we can get based on total copper area. The minimum pads give a total copper area for the drain (the terminal of greatest interest in this case) of about 8mm\$^2\$. Taking 60°C/W for this pad area, then if we increase the effective pad area to about 40mm\$^2\$ (an increase of 5:1), then from the application note curve, we should achieve a decrease of thermal resistance of about 2:1 to about 30°C/W. All manufacturers have their own methodology and the only way to actually get figures is to slog through each manufacturer's application notes, because the data is all empirical (measured) from testing. There are standard PCB layouts for thermal resistance testing, but many parts simply do not use this methodology. Note that this gives a particular thermal resistance under very specific conditions and nothing other than that configuration is guaranteed. On a layout, it is not at all unusual to have surface layers plated up to 2oz for power hungry setups. In response to the comment, I would expect a doubling of copper thickness to yield the same type of curve as found in the application note, and that appears to show about 30% to 60% reduction in thermal resistance (depending on where in the curve we are looking). For this part, we are looking at the left area (in a relatively steep part of the curve) so a 50% reduction in thermal resistance would not be a bad starting point.
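As a rough sanity check, the sketch below turns those thermal-resistance figures into a junction-temperature estimate. The on-resistance and ambient temperature used here are assumptions; substitute the datasheet value and your own worst-case ambient.

```
# Sketch: rough junction-temperature estimate from conduction loss and Rth(j-a).
i_load = 0.5        # solenoid current, A
rds_on = 0.6        # assumed on-resistance of the FET, ohm (check the datasheet)
p_diss = i_load**2 * rds_on          # ~0.15 W conduction loss

t_amb = 40.0        # worst-case ambient, degC (assumed)
for label, rth_ja in (("minimum pad (~60 C/W)", 60.0),
                      ("enlarged copper (~30 C/W)", 30.0)):
    tj = t_amb + p_diss * rth_ja
    print(f"{label:26s}: Tj ~= {tj:.1f} degC")
```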
H: Possible to measure voltage while charging? Suppose we have a simple DC power source charging a battery or capacitor. If we try to measure the capacitor's voltage in this simple circuit we will instead read the power source voltage: simulate this circuit – Schematic created using CircuitLab Is it possible to directly measure the voltage of the capacitor without disconnecting the power source and without knowing the capacitor's time constant and capacitance? I.e., I'm wondering if there's a circuit that can accomplish this without exploiting the following solutions: If we temporarily disconnect or turn off the power source, the voltmeter will read the voltage across the capacitor. If we know the time constant and capacitance we could instead connect an ammeter and deduce the capacitor's voltage from that. (Or, if we knew it was fully discharged initially, we could integrate the amperage from the start of charging to calculate its voltage.) I've been trying to picture some clever arrangement of diodes that might allow for measurement during charging, but that has left me thinking that this is theoretically impossible using a static circuit. AI: As Peter says, in the ideal world with ideal components and circuits, you can't do this - the voltage measured on the capacitor is always the voltage source value (and the capacitor is instantaneously charged with an initial infinite surge of current - because these are ideal circuits). Of course in the real world you can measure the charging voltage if your meter's input impedance is much higher than both the source's Thevenin resistance and the capacitor's ESR. In the real world, these devices do not resemble the ideals so much. You have to model reality differently: the voltage source becomes a Thevenin source with a series resistor, the capacitor has a series resistance and the voltage meter has an input resistance/impedance in parallel with the ideal infinite impedance meter. So your actual circuit should have 3 resistors, an ideal voltage source, an ideal meter and an ideal capacitor, like this: simulate this circuit – Schematic created using CircuitLab If you simulate this circuit, it will come closer to reality. A key lesson here: circuit simulator results are always lies because they must assume a model of reality that always has less fidelity than the real world (the only model that can have total fidelity is the reality itself - so build it and you'll have that): a model is a map, not the territory itself. It's a universal flaw in modern thinking to confuse the map with the territory! Of course models can tell some truths as well: the responsibility of the user of simulators is to know the point where the lies start.
H: Op-Amp giving unexpected output I've connected an op-amp (TL071) as a buffer, with Vcc+ = 5V and Vcc- = 0V (ground). When the input voltage (v+) is greater than 1.44V I get the expected result (the output is the same as the input). However, if the input is below 1.44V, the output saturates. Does anyone know why this might be and how I can get around it? My goal was to amplify the signal of an LM35 (a temperature measurement). Thanks. simulate this circuit – Schematic created using CircuitLab AI: The TL071 is not designed to be used with an input any lower than 4V above its negative rail. When powered from +/-15V, the input common mode voltage (from the datasheet) is between -12V and +15V. In reality, you will probably get away with 2 diode drops above the negative power rail (about 1.4V - there is a huge clue when you see multiples of about 0.6 to 0.7V). The reason for that is quite clear from the functional block diagram in section 8.2 The saturation you are seeing is due to phase inversion; this is a common issue with JFET input devices. Most bipolar amplifiers will have a common mode range to the negative power rail (but only up to about V+ -1.4V); you could alternatively look for a rail to rail input / output amplifier. Some possible amplifiers: LTC2057. Vcm V- to V+ - 1.5V LTC6078 There are numerous offerings from TI, Maxim and ADI.
H: Bandgap Reference Circuit question Note: All the questions below are extracted from A Novel Wide-Temperature-Range, 3.9 ppm/°C CMOS Bandgap Reference Circuit I could not post all the imgur picture links as I am limited to two links only. What is the purpose of R1 as in Fig. 1 of the paper? Image Source For equation (6) in the paper, I am wondering where the extra term "VG(T)" comes from to be part of Vbe(T)? What is the purpose of the MosCAP (MPa8) connected to "Out" in the opamp topology below? Are there ways to derive maths equations for "biasp" and "casp"? Image Source AI: Question 1: The purpose of the circuit is a bandgap regulator, to provide a constant voltage across temperature. (I'm sure you know this, but I want to provide context for other readers.) What is the purpose of R1? You get the constant reference voltage at the top of R1: the voltage across R1 goes up with temperature, and the voltage across Q2 goes down with temperature, so the sum of the two is constant with temperature (achieving the purpose of the bandgap regulator). In more detail, Q1 is scaled by N relative to Q2, so the base-emitter voltage \$V_{be1}\$ will be smaller than \$V_{be2}\$. The difference \${\Delta}V_{be}\$ is proportional to the absolute temperature (called PTAT) for magic semiconductor reasons. Since the op amp inputs are the same voltage, R2 must make up the difference \${\Delta}V_{be}\$. So R2 must have a current proportional to temperature \$I_{PTAT} = {\Delta}V_{be} / R_2\$ through it. The MOSFET current mirror forces both sides to have the same current so the voltage across R1 is \$I_{PTAT}R_1\$, which we will call \$V_{PTAT}\$. Now, the base-emitter voltage \$V_{be2}\$ across Q2 goes down with absolute temperature, so call this voltage \$V_{CTAT}\$ (complementary to absolute temperature). So the voltage at the top of R1 is \$V_{CTAT} + V_{PTAT}\$. If the resistor ratio is correct, the voltage drop and the voltage rise with temperature cancel out, and you get a constant voltage. This turns out to be approximately the bandgap voltage of silicon (1.22V), giving the regulator its name. Question 2: equation 6 is \$V_{be}(T) = V_G(T) + \frac{kT}{q} \ln[\frac{I_C(T)}{CT^n}]\$. The first term in the sum is the base-emitter voltage at 0 kelvin, which is the silicon bandgap voltage. The second term is the term that decreases with temperature. The derivation of equation 6 in the paper is unclear to me. Here's an image from an article I wrote about the 7805 regulator that may help: The intercept on the left is \$V_G\$, the bandgap voltage. The slope is from the second term in the equation 6 sum. Q1 is the line in orange and Q2 is the line in red. Both lines drop with temperature (CTAT), but the difference between them increases with temperature (PTAT). Question 3: the capacitor across the op amp is for stability. The handwaving explanation is you don't want the op amp to respond too fast or the system might start oscillating. Question 4: I'm not positive about biasp and biasc, but I think they are the biases that generate the PTAT and CTAT currents. Thus, they aren't interesting values to compute, just whatever gate voltage the op amp ends up producing to create the desired currents, which are what matters. You could use an equation with the MOSFET properties and the currents to determine biasp and biasc, but I don't think it would tell you anything useful. It's the currents they generate that are important to analyze.
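To make the cancellation described in Question 1 concrete, here is a small numeric sketch. The emitter-area ratio, resistor values, Vbe and its slope are all illustrative assumptions, not values from the paper.

```
# Sketch: PTAT term (I_PTAT * R1) plus CTAT term (Vbe of Q2) gives a roughly
# constant reference.  All numbers below are illustrative only.
import math

k, q    = 1.380649e-23, 1.602176634e-19
N       = 8            # emitter-area ratio of Q1 to Q2 (assumed)
R2, R1  = 10e3, 112e3  # ohms; ratio chosen so the two slopes roughly cancel
Vbe_300 = 0.65         # Vbe of Q2 at 300 K, V (assumed)
dVbe_dT = -2e-3        # CTAT slope of Vbe, V/K (typical ballpark)

def vref(T):
    dVbe   = (k * T / q) * math.log(N)        # PTAT voltage across R2
    i_ptat = dVbe / R2                        # current mirrored into the R1 branch
    v_ctat = Vbe_300 + dVbe_dT * (T - 300.0)  # Q2's base-emitter voltage
    return v_ctat + i_ptat * R1               # voltage at the top of R1

for T in (250.0, 300.0, 350.0):
    print(f"T = {T:5.1f} K  ->  Vref ~= {vref(T):.4f} V")
# The ~+2 mV/K PTAT rise and the ~-2 mV/K CTAT fall cancel, leaving a value
# near the silicon bandgap voltage that barely moves over the whole span.
```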
H: Powering 33063 Buck Regulator near its Vmax I usually work with 12VDC or less, but I needed to get a quick ~350mA@5VDC supply out of a 24VAC transformer available to me. "No problem," I think: "I'll bridge rectify, filter a bit, and feed it to this Recom 7805-like switching regulator I have in my parts box." So I do that, and test it by feeding the rectifier both ways from my 12V testing battery, everything looks good. Hooking it up to the 24VAC transformer, though, releases the magic smoke. Turns out, you gotta figure you'll see ~80Vp-p when you're working with a 24VAC transformer (24 x 1.414 = 34, but unloaded you might see 28VAC or 30VAC). Rectify and you might still be looking at 40V. Recom is only good to 28V, so ... lesson learned. Next thought: I can use a trusty 33063. I have some of those, and I know how to work them at lower voltages. But Vmax is 40V, and I know that's close to what's lurking in the rectified transformer. So my question: what do people who work with AC in the range from ~25-75 do here? It seems like I could use a 5-10V linear regulator to drop 40V down to a comfortable range for the 33063. If I use an external transistor to feed 40V to the inductor, I should be OK, and the linear regulator only has to provide ~5mA to run the buck controller. Is there a smarter approach, or is this something an actual competent designer might do? AI: Output power: 5V x 0.35A = 1.75W. Estimated efficiency is 85%, so the required input power is 1.75 / 0.85 = ~2W. At 38VDC, input current is 2W / 38V = ~55mA. After rectification and filtering, if you get 40VDC then you can place a resistor with a value of RS = (40-38)/0.055 = 36R (Place 39R). Power dissipation of this series resistor is about PR = 39 x 0.055² = 0.11W. So you can put a 39R/0.25W resistor. You can also make a simple RC low-pass filter by putting a 220uF elco after this resistor to filter out some ripple.
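The same arithmetic, written out so it can be re-run for a different load or efficiency estimate:

```
# Sketch: series-resistor sizing arithmetic from the answer above.
v_out, i_out     = 5.0, 0.35       # required output
efficiency       = 0.85            # assumed converter efficiency
v_rectified      = 40.0            # rectified/filtered input, V
v_max_controller = 38.0            # what we want the 33063 to see, V

p_out = v_out * i_out              # 1.75 W
p_in  = p_out / efficiency         # ~2.06 W
i_in  = p_in / v_max_controller    # ~54 mA average input current

r_series = (v_rectified - v_max_controller) / i_in   # ~37 ohm -> use 39R
p_series = 39.0 * i_in**2                             # ~0.11 W -> 1/4 W part

print(f"Pin ~= {p_in:.2f} W, Iin ~= {i_in*1e3:.0f} mA")
print(f"R_series ~= {r_series:.0f} ohm, dissipation with 39R ~= {p_series:.2f} W")
```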
H: How to turn on an optoisolator with a desired resistance I am learning electronics, so this is a learner's question. I wired up the LM339N quad voltage comparator (powered using a 5V power supply through a current-limiting resistor of 40 ohms) in a straightforward, standard way, like at http://www.learningaboutelectronics.com/Articles/LM339-quad-voltage-comparator-circuit.php but for the potential divider I used two 4.7k resistors (not a pot) and I only used one comparator. I also had an output LED. I added the standard 0.1uF capacitor near the power supply. So the circuit worked as it should (for the input voltages to the comparator I used AA batteries that went through a current-limiting resistor of 500 ohms into the comparator input): at 1.5 volts or less the LED turned off. At higher input voltages the LED turned on. I want to replace the LED with a PC123 optoisolator, so the optoisolator behaves like a switch (just on or off, with a resistance of between 1 and 60 ohms when on, and effectively an open circuit when off), so what is the circuit to do this? So when the opto is off there is no conduction. There may be other components (I'm not sure of this) required in addition to the PC123 for answering this question. I replaced the LED with a PC123 optoisolator, but the smallest resistance I could get (directly from the phototransistor) out of it was between 5k and 6k ohms, and I used a 40 ohm resistor for limiting the current into the opto. If it's not possible to get between 1 and 60 ohms, then what's the best that can be done to control the output resistance? Here is the data sheet for the opto: http://www.ges.cz/sheets/p/pc123.pdf Also, for a different and related circuit, I want to replace the LED with a BC547 transistor to turn on a 5V relay, so what is the circuit to do this? I don't need a fancy answer. I want an answer that shows clearly how to answer the question using those components in a minimal way. Come on folks, try and answer the question! AI: You cannot change the characteristics of the transistor inside the optoisolator. If you need to switch more current than it can handle (and you need the isolation that the opto provides), then you will have to use the opto to drive a larger transistor that can switch the load you have. Measuring the "resistance" of a transistor C/E junction is not meaningful. That part of your question, as posed, cannot be answered. As to the second part -- if you want to switch a relay with the output of the comparator, using a small NPN transistor is a fine choice. Consider that you are going to want to use active-high output from the comparator in that case, so you will need to reverse the sense of the inverting and non-inverting inputs to the comparator. Simply drive the base of the BC547 through a suitable current-limiting resistor. Keep in mind - the maximum collector current is rated at 100 mA - just like with the photocoupler above, you're going to be limited in how large a load you can switch with such a small device.
H: Does noise match of an LNA mean minimum signal to noise ratio at output? Does noise match of an LNA mean minimum signal to noise ratio at the output? According to the definition it means minimizing the relative contribution of noise power with respect to the source. If not, why do we then do noise matching at the input of the LNA? AI: It means maximum SNR for the LNA stage. This, however, decreases the gain, i.e. you need more stages. Since the contribution of the first stage is the most important one, and the NF for each stage is divided by the gain of the previous stages, it is sometimes better to decrease the total number of stages instead of maximizing the SNR. In some applications you try to maximize the SNR, e.g. satellite communications when you have pushed the channel SNR close to the limits. This means that you need more amplifiers, but you cannot avoid it.
H: Base resistor on NPN transistor I have recently started meddling in the understanding of circuitry and have started creating this project. As I bought the SSR from eBay, I strongly suspect it is a fake and will probably not handle 25A of current nor be as safe as the official data sheet states [Even with an optocoupler]. Now, to fully let my heart rest, I tried to design this basic protection circuit, but I have been hitting a wall with how to calculate a resistor to limit current and voltage at the base of the transistor. I have googled around for a long time now and still can't seem to figure this out, so any help would be appreciated. :) Here is a link to the specific datasheet: http://pdf1.alldatasheet.com/datasheet-pdf/view/4856/MOTOROLA/MJE350.html (MJE350) AI: Rather than just send you away with criticism of what you do or don't know, let's work through your problem and help you learn something. Clearly you have a microprocessor with an output pin and you want to turn on/off an SSR. Whether it's fake or not is beside the point. You can learn much from its somewhat sparse datasheet. The block diagram tells you the basics of the switch: ...and here I've corrected the diagram so some won't get upset at not using conventions for voltage and I/O in a schematic. Let's deal with just the drive requirements for the moment. From the datasheet: From this you can within reasonable limits work out how much drive current is required to turn on the SSR. The switch drive is optically coupled to the output side, and you can see there are actually two LED's used (and they are almost invariably IR/Red with a forward voltage about 2.2 V). Given the datasheet defines the current as 7.5 mA @12 V input, we can get a rough idea of the resistor values. (12 - 2.2)/0.0075 --> 1.3k Ohm ...we can't establish what the value is for each since we don't know how much current flows in each LED, but we can now decide how much current would flow when driven by a 5 volt input signal. (5 - 2.2)/1300 --> 2.1 mA (approximate). From this low current at 5 V we can deduce that you don't need a drive transistor at all since most microprocessor I/O pins will typically support > 10 mA. But we'll deal with your actual microprocessor later. So you can drive this switch directly with no transistor and no series resistor from a 5 V supply. Note: My guess is that the drive is unevenly set between the visible status LED and the optocoupler LED, so it may be that the status LED is barely visible at 5 V drive. It appears that your microprocessor board is a Wemos D1, and from its datasheet this is a 3.3 V device. The board has a 5 V to 3.3 V regulator on it, but all the I/O signals are 3.3 V. Since your microprocessor is 3.3 V, you will actually be able to drive the switch directly. While you are very close to the minimum 3 V specification from the datasheet, notice that they actually break out separately and specify 2.4 V as the absolute minimum on voltage. However if you are nervous about temperature ranges etc, then it can be wise to provide a higher level of drive, so your original thought of a transistor drive is quite valid. However we now know the current requirements are very small when driving the switch input from 5 V so you could use almost any general purpose TO92/SOT23 NPN switch to do this task. Let's choose a 2N2222 which has more than enough current sink capability for our task and is cheap ($0.03). IC is 2.1 mA in this application and the 2N2222 has min Hfe of 50 @1 mA.
So the base current required is approximately 0.0021/50 --> 42 uA (a poofteenth). We can essentially ignore this base current requirement and simply set an overdrive level we are comfortable with. From the ESP8266 datasheet the I/O pins are able to sink/source 12 mA. If we set the base current to 1 mA @3.3 V then we have a series resistor of 3.3k Ohms. So the final circuit looks like this: simulate this circuit – Schematic created using CircuitLab Hope this helps.
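If you want to re-check these numbers for a different supply or SSR, the arithmetic above boils down to the following sketch (the 2.2 V LED drop and the 7.5 mA @ 12 V figure are the estimates used above).

```
# Sketch: drive-current arithmetic for the SSR input and the 2N2222 base.
v_led       = 2.2                                # estimated SSR input LED drop, V
r_ssr_input = (12.0 - v_led) / 7.5e-3            # ~1.3 kohm implied internal resistor

i_led_5v   = (5.0 - v_led) / r_ssr_input         # ~2.1 mA with a 5 V drive
hfe_min    = 50
i_base_min = i_led_5v / hfe_min                  # ~42 uA actually needed

# Overdrive the base to ~1 mA from a 3.3 V GPIO (Vbe drop ignored, as above;
# (3.3 - 0.7) V / 1 mA ~= 2.7k would be the stricter value).
r_base = 3.3 / 1e-3
print(f"SSR input current at 5 V : {i_led_5v*1e3:.1f} mA")
print(f"Minimum base current     : {i_base_min*1e6:.0f} uA")
print(f"Base resistor (approx.)  : {r_base/1e3:.1f} kohm")
```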
H: Adding breakouts for external components in Eagle PCB I have an external actuator which is connected to a component on the PCB. This connection will be through wiring. However, I can't seem to find the breakout component which will allow me to solder wires onto the PCB. Does anyone know how to do this in Eagle PCB? AI: You could use an appropriate connector footprint - you don't need to install the connector. Or you could make a suitable footprint yourself - it probably just needs a few pads with holes of a convenient size to solder the wires. You would also want an appropriate schematic component to go with the footprint. If you do any amount of PCB design, you will have to make your own PCB footprints and schematic components.
H: Why don't battery manufacturers make 5 V batteries? Why is it so? Is there something that cannot be crossed while manufacturing 5 V batteries? They can make billions though! Still... AI: They don't have a choice. The voltage of a battery is decided by the reactants in the battery. There are only so many viable battery chemistries out there. You can't just pick two random chemicals off this table; it also has to be possible/practical --
- To actually build it.
- At a competitive price-point.
- Out of readily available materials.
- Which are relatively non-toxic.
- And don't weigh too much.
- Endure many recharge cycles (if it's secondary).
- Never explode.
- Have enough storage capacity to bother.
- And not hopelessly lag behind other successful batteries in any category.
These restrictions soon winnow the thousands of combinations down to just a few, and none of them have a 5V or 2.5V output.
H: Soldering problem with heavier components I'm having difficulty soldering components with 1 mm thick or larger wires. The solder melts fine, but it does not 'stick' and just falls off. Components with smaller prongs like regular LED's are no problem. Right now I'm trying to solder some diodes to make a bridge rectifier and the solder is just not sticking. What could be the problem? I'm using a battery-powered RadioShack soldering iron that works fine for the small stuff, and lead-free flux solder. Any tips on how to solder the heavier stuff that seems to reject solder? I've looked all over for this issue and don't seem to be able to find an answer. AI: Short answer: you need a better soldering iron. Longer answer: Your little battery-powered soldering iron doesn't have enough power to heat a large mass of heat-conducting metal up to the melting point of solder. All of the heating energy from your tiny iron is being conducted into the mass of metal that you are attempting to solder and being radiated away. I would suggest that you need at least a 40 Watt iron to solder 1mm thick wire of any significant length. Others might suggest that you would need more power than that but it all comes down to what soldering iron you use. A really inexpensive iron may not work well just because the design is such that there is significant thermal resistance between the heating element and the object being soldered. But better irons will do the job nicely. A specific example of a soldering iron that works well at 40 Watts is the Metcal MX-500 series with a sttc-137 tip. This iron will solder the entire perimeter edge of a male DB-25 connector to a piece of copper-clad PCB material using 63/37 solder. It takes a while but it will get the job done. Your typical Radio-Shack 40W iron won't even attempt to work.
H: Does stranded wire reduce resistance from skin effect when strands are not insulated? I've seen many discussions where people mention multi-strand wire improving conductivity when skin effect is a concern, but I often see Litz wire mentioned in these discussions. Litz wire contains individually insulated strands of wire. I haven't found anyone clearly explain whether or not the individual strands have to be insulated to reduce resistance from the skin effect. Can someone clarify this for me? For my specific application, I have a lot of 10 AWG multi-stranded wire. The conductors are bare strands all held in a THHN jacket. This is common stuff used in regular electrical wiring, but I want to use it for an audio frequency application. Will the individual bare strands improve conductivity versus a solid wire in this case? AI: The answer is "yes"; even non-insulated strands are better than solid round copper, because the strands have limited contact area between each other, and the field distribution is better, reducing skin effect. See this study, Stranded Wire With Uninsulated Strands as a Low-Cost Alternative to Litz
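As background arithmetic (not something taken from the linked study), it helps to compare the standard copper skin depth with the radius of 10 AWG wire at audio frequencies, to see how large the effect can be in the first place.

```
# Sketch: copper skin depth vs 10 AWG radius, delta = sqrt(2*rho/(omega*mu)).
import math

rho = 1.68e-8           # resistivity of copper, ohm*m
mu0 = 4e-7 * math.pi    # permeability of free space, H/m
r_10awg = 2.588e-3 / 2  # 10 AWG diameter ~2.588 mm -> radius in m

for f in (1e3, 10e3, 20e3):
    delta = math.sqrt(2 * rho / (2 * math.pi * f * mu0))
    print(f"f = {f/1e3:5.1f} kHz : skin depth ~= {delta*1e3:.2f} mm "
          f"(10 AWG radius = {r_10awg*1e3:.2f} mm)")
# At 20 kHz the skin depth (~0.46 mm) is a good fraction of the conductor
# radius (~1.29 mm), so the effect is real but modest at audio frequencies.
```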
H: If serial ports are asynchronous, why do they have an SCL line? For instance, this tutorial has these two seemingly contradictory quotes: Because serial ports are asynchronous (no clock data is transmitted), devices using them must agree ahead of time on a data rate. and: Each I2C bus consists of two signals: SCL and SDA. SCL is the clock signal, and SDA is the data signal. Can someone help me to understand how these are compatible statements? How can it have a clock signal without transmitting clock data? AI: Serial ports exist in both synchronous and asynchronous forms. I2C is synchronous but the more familiar UART serial is asynchronous. Both have their own advantages. Synchronous serial ports allow for arbitrary timing, while asynchronous ports require precise timing but use fewer connections.
H: Constant current circuitry tolerance dependency I have the circuit given below. The circuit provides (12 - 3.3 = 8.7 V) 8.7 V / R3 of current. My specific question: What is the effect of the PNP transistor on the variation of current? Until now I have the following cases: more current case: R1 max, R2 min, R3 min; min current case: R1 min, R2 max, R3 max; supply voltage variation. Thank you in advance. Edit: the output is taken across the capacitor. The load is not shown; it can vary between 1k and 10k. AI: One parameter of the transistor that's not corrected by the feedback loop with the op-amp is its alpha, the common-base current gain. You are controlling its emitter current via R3, however the output current is from the collector. I have a problem working with alpha as I lose count of the 9s. However, as alpha = beta/(beta+1), it's almost as easy to work in terms of the beta or hFE. If the transistor beta changes from 100 to 200, the alpha will change from 0.990 to 0.995, and your output current will increase by 0.5%. As transistors do have this sort of variation, your sensitivity to transistor variation is similar to your sensitivity to using 1% components for R1, R2 and R3. As the reference voltage across R3 is directly proportional to the supply voltage (you're not using a reference or regulator here), any change in the supply voltage will give you a directly proportional change in the output current.
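A quick numeric sketch of that sensitivity, using the exact alpha = beta/(beta+1) and an assumed R3 of 1 kΩ (so 8.7 mA of emitter current); substitute your actual R3.

```
# Sketch: output (collector) current vs beta when the loop fixes the emitter current.
def alpha(beta):
    return beta / (beta + 1.0)

i_emitter = 8.7 / 1000.0          # example: R3 = 1 kohm -> 8.7 mA (assumed)
for beta in (100, 200, 400):
    i_c = alpha(beta) * i_emitter
    print(f"beta = {beta:3d} : alpha = {alpha(beta):.4f}, Ic = {i_c*1e3:.3f} mA")
# Going from beta = 100 to 200 moves Ic by ~0.5 %, comparable to the effect
# of using 1 % resistors for R1-R3.
```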
H: Why is this switching transistor heating up? I am building a control board for fan cooling. It runs from a 12V power source, and it is controlled by an analog input signal of 0.3V to 1.2V which just controls the speed of the fan. The problem is that the switching transistor Q2 gets hot. I tried to use an op-amp in the circuit and then a comparator. It takes a longer time to heat up with the comparator, but it gets hot as well. I switched from the op-amp to the comparator to minimize switching losses in the MOSFET. How could I minimize heat dissipation in that transistor? AI: This does not work as a switch. Instead, it forms a current source or a linear voltage regulator (you can work out how by yourself, but a detailed explanation is below). According to the notes on the schematic, the voltage across D and S of Q2 will be 12-3.0 = 9V at worst. If you multiply this with the load (fan) current then you'll find the power dissipated (\$P_D\$) by Q2. Multiply \$P_D\$ with \$R_{th j-a}\$ of the AO3401, which is given as min. 100 in the datasheet, and you'll find the temperature rise. This may explain the excessive heat. You can verify this by applying the maximum control input voltage (1.2V) and seeing that Q2 does not heat up. Now let me explain how this works as a linear regulator (according to the schematic in your question):
1) At the time of energizing the circuit (assuming the control input is 0), the output of the comparator will be 0 due to the pull-down resistor (R2). So, comparator output is low --> Q1 is off --> Q2 is off --> No load current/voltage --> Voltage across R2 is zero --> Output remains low.
2) When the control voltage is applied, the comparator will attempt to increase its output to 12V. When this output voltage approaches/reaches the Vbe threshold of Q1 (neglecting the 100R tied to the emitter), the CE resistance of Q1 starts to decrease. Thus, the G-S voltage (so, DS resistance) of Q2 starts to decrease, leading the load current (so, load voltage) to increase.
3) This load voltage is divided by 1+(R8+R3)/R2=1+90k/10k=10 and fed back to the negative input of the comparator (\$V_{in-} = V_L/10\$). When this FB voltage (i.e. the voltage across R2) reaches and exceeds the voltage on the positive input terminal (i.e. the control voltage), the comparator attempts to decrease its output to 0.
4) The comparator output starts to decrease, so the Vbe of Q1 starts to decrease, causing the CE resistance to increase and forcing Q2 to increase its DS resistance. This results in decreasing load current (and so, load voltage). This voltage is divided by 10 and fed back to the negative input of the comparator (\$V_{in-} = V_L/10\$).
5) Now the voltage at the negative input is lower than the positive input, so the comparator will attempt to increase its output to 12V. The output starts to increase and the cycle begins afresh from (2).
Consequently, the voltage across the load will be 10 times the control voltage: \$V_L = V_{ctrl} \cdot 10\$ and the voltage across the MOSFET is \$V_{DS} = 12V - V_L\$. We don't have any information about your load, so it's quite hard to guess how much the load current is. Anyway, the power dissipated by the MOSFET will be \$P_D = V_{DS} \cdot I_L\$. I made a simulation on Proteus 7. You can download it from here and here's a screen shot: (I used the LMV393 because the LM393 is not defined in Proteus, but the LMV393 is the low-voltage version of the LM393). Let's assume your fan current \$I_L=50mA\$ @ \$V_L=5V\$. So the MOSFET's power dissipation will be \$P_D = 7V \cdot 0.05 = 0.35W\$. Multiplying this with \$R_{th(j-a)}=100\$ will give a temperature rise of \$\Delta T = 100°C/W \cdot 0.35W = 35°C\$.
Assuming the ambient temperature is 24°C, the MOSFET's final temperature will be 24+35=59°C. Hope this explanation is enough and useful for you.
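The dissipation arithmetic above, collected into a small sketch so other fan currents and control voltages can be checked quickly (the fan currents used here are only examples).

```
# Sketch: linear-mode dissipation and temperature rise of Q2.
v_supply  = 12.0
rth_ja    = 100.0      # degC/W for the AO3401 on a minimal pad
t_ambient = 24.0       # degC

def q2_temperature(v_ctrl, i_fan):
    v_load = 10.0 * v_ctrl          # feedback divider makes V_load = 10 * Vctrl
    v_ds   = v_supply - v_load
    p_diss = v_ds * i_fan
    return p_diss, t_ambient + p_diss * rth_ja

for v_ctrl, i_fan in ((0.5, 0.05), (0.3, 0.08), (1.2, 0.05)):
    p, t = q2_temperature(v_ctrl, i_fan)
    print(f"Vctrl = {v_ctrl:.1f} V, Ifan = {i_fan*1e3:.0f} mA "
          f"-> Pd = {p:.2f} W, Tj ~= {t:.0f} degC")
# At Vctrl = 1.2 V (fan at ~12 V) the FET drops almost nothing and stays cool,
# which is the quick check suggested above.
```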
H: Getting code onto a microcontroller I have broken open a few circuits of some electronics (DVD player, handheld game device) and I can see the microcontrollers in them. However, I have no idea how they get their programs onto them! There is no micro usb or usb plug anywhere on them! How would they have gotten their program onto there? AI: How the code gets loaded will depend on the type of microcontroller being used. Some possible schemes used:
- The MCU gets its program code from a ROM (read only memory) that is manufactured onto the part.
- The MCU has FLASH memory built onto the chip that contains the program code. This FLASH may be programmed in a number of ways. (see below)
- The MCU may load its program code from an external memory chip. This external memory could be a ROM, serial FLASH, FRAM chip or for some older types of products an EEPROM or parallel FLASH.
Embedded products that have the program code on re-programmable memory such as the on-chip FLASH or serial FLASH chip as mentioned above can have their program code loaded via:
- The code is programmed into the bare component before it is soldered down to the board.
- Sometimes the MCU or FLASH device is placed into a socket where it can be removed for programming at the chip level using an external programmer.
- There may be a special connector or header that is used to connect an external programming device that allows loading of the program code.
- Some high volume products do away with the connector mentioned above and allow access via spring loaded pins to test points on the board to permit a programmer to access the FLASH programming pins.
H: Altera equivalent of the Xilinx Zynq UltraScale+ MPSoC I'm new to the FPGA world. I was wondering if anybody could tell me the Altera equivalent of the Xilinx Zynq UltraScale+ MPSoC? I'm looking to buy a development board but it needs to be from Altera. Thanks Tom AI: Altera don't seem to have a nice, easy-to-find selection table, but Stratix 10 seems to be their current high-end part with an ARM-v8 core. Hard to know if it's a good fit for your application without any more detail in your question. They don't appear to have their own development board (but maybe I missed something).
H: Risks of using device body as heat sink for IGBT and Bridge rectifier I want to know if there is any risk factor (electric shock or otherwise) in using the aluminum (4 mm) device body as a heat sink for an IGBT and a bridge rectifier that are working with 220 volt power. Is it OK to use a silicone layer between those surfaces (I guess it withstands temperatures up to 260°C (500°F))? Thanks a lot. AI: Obviously there are two primary risks: shock and temperature. For shock, any mains-connected circuitry MUST be double-insulated; in other words, there must be at least two independent layers of insulation, each of which is capable of withstanding the peak voltage on its own. For temperature, you must make sure that no exposed surfaces ever get hot enough to cause burns, or worse yet, to ignite any flammable materials. If the heatsink gets too hot on its own, sometimes it makes sense to put a shroud around it and blow air across it with a fan.
H: How to normalize PWM signal from RX and TX serial lines First off, I just want to apologize for any part of this question that may not be properly phrased. This is still a very new area for me. I am building an adapter for my Commodore 64 that lets it connect to an RS232 DB9 serial port over its user port. I've already made the connector and it works beautifully. What I want to do now is get fancy and create LEDs that light up when the adapter is either sending or receiving data. I've followed the schematics used in this picture here, but the LEDs are barely visible due to high flicker when data is coming or going. I assume this is PWM and the LEDs are barely lit because the rx and tx lines on the serial adapter are turning on and off at super high rates (it's a 9600 baud adapter). I've tried connecting a few different sized capacitors at different points in the circuit but I'm still not getting the desired effect, which is to see the LEDs light brightly at times of rx and tx signals going across. Is there a way to "tone down" the PWM so that the LEDs are brighter? One LED is wired to tx and another to rx. Thanks. AI: What I want to do now is get fancy and create LEDs that light up when the adapter is either sending or receiving data. This implies peak detect and slow decay. 9600 baud is 960 char/s and a dwell time may be desirable from 0.1 to 1s by changing 1M to 10M (as shown). simulate this circuit – Schematic created using CircuitLab Schematic fixed (brain fart). With a 50 ohm driver in 74HCxxx @5V and 100nF, RC = T = 5us attack time. Good 'nuf. It is possible to source 1mA, current limited, into ultrabright LEDs loaded directly on Tx/Rx. The logic level threshold is the same as TTL (1.4V) and the LED needs at least 3 to 5V with a 1k series resistor, which works for RS-232 (but blinks).
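The attack and decay time constants behind that peak-detect arrangement work out as follows (component values as described above; adjust to taste).

```
# Sketch: attack / decay time constants for the peak-detect LED driver.
C       = 100e-9                 # hold capacitor, F
r_drive = 50.0                   # source impedance of the TX/RX driver, ohm
attack_tau = r_drive * C         # ~5 us: charges within one start bit

for r_bleed in (1e6, 10e6):
    decay_tau = r_bleed * C      # 0.1 s ... 1 s visible "dwell" on the LED
    print(f"attack ~= {attack_tau*1e6:.0f} us, "
          f"decay with {r_bleed/1e6:.0f} M ~= {decay_tau:.1f} s")
```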
H: Problems with a back-to-back MOSFET dimmer for 230VAC I've been trying, unsuccessfully, to dim a 60W incandescent light bulb with two back-to-back MOSFETs, similar to this web page: http://easy-electronics4u.blogspot.co.uk/2012/02/switch-ac-loads-using-mosfets-as-relay.html My circuit looks like this: (there is also a 2A fast blow fuse and a 7D471K MOV on the mains input) Below is the output from my scope:
Channel 1: Live_230VAC
Channel 2: DIM_GPIO
Channel 3: Drain of Q5
Channel 4: Source of Q5 and Q6
Q6 gets burning hot, Q5 stays cool. The lamp (obviously) flickers as we are only getting every other half wave. Thinking about the circuit, I feel that it will be impossible to get a high enough Vgs, since the source pin will see the peak of the 230VAC. Is there a way (preferably simple & cheap :)) of solving this kind of double MOSFET dimmer? Edit 1: Added revised schematic as per suggestions from @RoyC for consideration (and so that RoyC doesn't have to click my pasteboard.co link) AI: On the right of the diagram, swap live and neutral. This is a safety thing: although you should never assume neutral is anywhere near earth, it is likely to be lower than live. Now your switch is floating on the neutral line, not the live. The drive voltage for your gates has to float referenced on the source connection between Q5 and Q6. Short out R19. Remove the bridge rectifier; it is not needed. Instead, connect D7 to the point that you have labelled in your current diagram as Neutral_230VAc. The ground end uses the body diode of Q6. Here is a rough diagram; component values are the same as yours.
H: 8 Ohm Speaker Gets Hot - Simple 555 Piano I have built this circuit and nearly everything works fine. I replaced the piezo with a small 8 ohm speaker and put a potentiometer in, so I can control the volume. Unfortunately the speaker gets really hot. What am I doing wrong? Can't I just replace the piezo? Do I have to add something else? Thanks for helping me AI: An ordinary 8 ohm speaker has a coil of wire inside, which will get hot if you pass a DC current through it. The DC resistance is likely to be considerably less than the nominal 8 ohms impedance to AC. By switching the output between 0V and (approximately) the battery positive, you are - on average - applying a significant voltage across the speaker. It's normal in such circuits to place an electrolytic capacitor in series with the speaker to block the DC and let through only the AC signal. Make sure the capacitor is the right way round. A rough back-of-an-envelope calculation says about 470 microfarad or more for the capacitor. That gives less than 2 ohms impedance at 200 Hz (and lower still at higher frequencies). We want the impedance to be significantly less than that of the speaker. Impedance = 1/(2 π f C), where f is the frequency and C the capacitance.
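The back-of-the-envelope impedance check above, written out for a few tone frequencies:

```
# Sketch: impedance of the DC-blocking capacitor, Xc = 1/(2*pi*f*C).
import math

def xc(f, c):
    return 1.0 / (2 * math.pi * f * c)

C = 470e-6   # suggested series electrolytic, F
for f in (200.0, 440.0, 1000.0):
    print(f"f = {f:6.0f} Hz : Xc ~= {xc(f, C):.2f} ohm (vs 8 ohm speaker)")
# ~1.7 ohm at 200 Hz, well below the speaker's 8 ohm, so the tone passes
# while the DC component is blocked.
```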
H: What is the purpose of everything to the right of the proton exchange membrane in a microbial fuel cell? A standard microbial fuel cell looks like: And my question is, why is the cathode and basically the entire right chamber necessary for electricity production if the load (or the multimeter) can be attached to that area that says "electricity" and can produce electricity? Would there be an actual problem if one were to leave the hydrogen cations just floating in the anodic chamber? AI: If you just left the H+ ions floating around, then the area around the anode would be positively charged. Eventually the potential will get high enough so that it prevents new positive charges coming off the anode. This is actually what happens when you leave a battery open circuit, and why that doesn't use up the battery. The reaction quickly reaches an equilibrium where the "waste" of the reaction builds up so that more can't be produced. The reaction stops and the battery is not depleted. When the charges are given a path to flow, they get swept away from the anode as they are produced. The reaction now acts as a pump. We can harness the energy in the charges being at an elevated voltage. The faster the charges are removed, the faster more can be made, and the faster the chemicals in the battery get used up.
H: Measure AC magnetic field strength between 100kHz and 300 kHz I have tried to find some answers on the web about measuring AC magnetic field strength in the range above 50 kHz, with no luck. Sorry, but if someone could help me it would be great. I need to measure the intensity or strength of the magnetic field (1 to 50 mT) between 50 kHz and 300 kHz approx. I have something like an "induction heater", with different LC combinations. L = work coil, radius = 2 cm copper tube. I have used a sensing coil in the middle and I have a strong signal, but I don't know how to calculate the mT I have. The small sensing coil is 10 turns of AWG32 copper wire with radius = 2.5 mm. The voltage is about 0.5 volts, and I can see the output on the oscilloscope (sine wave). I cannot measure the current of the sensing coil, and at this frequency my ammeter does not work. AI: As with any coil, induced voltage is N\$\dfrac{d\Phi}{dt}\$. So, armed with the number of turns (N), the voltage, the frequency and the area of your loop, you can calculate the average magnetic flux density seen by the coil. The coil should be measured open circuit and the fewer turns the better because coil parasitic capacitance can easily resonate the circuit and give big errors.
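Putting numbers to that: for a sinusoidal field, the open-circuit coil voltage amplitude is Vpk = N·A·2πf·Bpk, so the flux density follows directly. The sketch below assumes the 0.5 V reading is a peak value and that the heater runs at 100 kHz; substitute your measured values.

```
# Sketch: peak flux density from the sense-coil voltage, Bpk = Vpk/(N*A*2*pi*f).
import math

N    = 10                  # turns on the sense coil
r    = 2.5e-3              # coil radius, m
A    = math.pi * r**2      # loop area, m^2
v_pk = 0.5                 # measured voltage, V (assumed to be the peak value)
f    = 100e3               # operating frequency, Hz (assumed)

b_pk = v_pk / (N * A * 2 * math.pi * f)
print(f"Bpk ~= {b_pk*1e3:.2f} mT")   # ~4 mT with these numbers
# If 0.5 V is an RMS reading, multiply it by sqrt(2) first; if the scope shows
# peak-to-peak, divide by 2.
```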
H: Does it make sense to only mention amp requirements and not voltage requirement, or vice versa? When reading about the electrical needs of various electronic devices and/or appliances, I often see a requirement for only one of the various measures related to electricity. For example, it will say "requires 120 volts" and not mention anything about amps. Or it will say "takes 10 amps" but not mention anything about volts. Here is just one example from Lifehacker, in an article about the Raspberry Pi: A power supply: The Raspberry Pi is powered by a micro USB, much like the one you’ve likely used for your phone. Since the Pi 3 has four USB ports, it’s best to use a good power supply that can provide at least 2.5A of juice. Isn't a statement like that useless? There is no mention of required voltage. 2.5 amps can be delivered in many different ways - i.e. by 5 volt pressure, or 240 volt pressure. I suspect this can produce very different outcomes. So why are volts and amps often given as though they are completely independent? Why would there be an explicit requirement for one and not the other? AI: You are right in that both voltage and current must be specified for a DC power supply. However, sometimes one or the other is omitted because it is implied. In the case you quote, note that it mentions "The Raspberry Pi is powered by a micro USB". That implies 5 V. The only remaining question is now the current. As another example, consider a device that has a power cord attached ending in a wall plug compatible with outlets in your locality. Brief documentation for that device might only say that it draws 1.5 A. Since it obviously runs from line power, it might not explicitly say "120 VAC, 60 Hz" (or whatever the line power is in your locality). More thorough documentation someplace should specify the voltage, current, and any other relevant parameters.
H: The induced emf in a straight wire When I came to an explanation of the induced emf in a straight wire, it shows that the wire starts its motion from the position where it is perpendicular to the magnetic field lines, like this: And the equation used to measure the induced emf is (emf = Blv sin theta), where theta is the angle between the direction of motion and the magnetic field lines. And I was wondering what the equation would be if the wire starts its motion from a position where the angle between it and the field lines is less than 90°, something like this: In the figure above there are two angles: the angle between the wire and the field lines, and the angle between the direction of motion and the field lines, so which angle will be involved in the equation (emf = Blv sin theta)? Or are the two angles equal to each other? AI: General form of the equation: \$ V_{Ind} = N B l v \ sin\ \theta\$ Where:
\$ V_{Ind}\$ = Induced Voltage in V.
\$N\$ = Number of turns - in your case 1.
\$B\$ = Flux Density in T (Teslas = Wb/m^2).
\$v\$ = velocity in m/s.
\$l\$ = length in m.
Let \$ V_{MAX} = N B l v\$. Motion can be split into two components, \$x = V_{MAX}\ cos\ \theta\$ and \$y = V_{MAX}\ sin\ \theta\$. The x-component (\$cos\ \theta\$) would be in parallel with the magnetic field. No flux lines are crossed, no induced EMF. It can be ignored. The y-component (\$sin\ \theta\$) is perpendicular to the magnetic field. It is the only component which crosses the lines of magnetic flux, so it is the component which induces the voltage by Faraday's Law. Maximum induced voltage would occur at \$90\unicode{xb0}\$, and minimum at \$0\unicode{xb0}\$. This demonstrates how the motion of a wire in a magnetic field induces a voltage. It is the basis for generators. To get any meaningful voltage you will have to have many turns. Now, look at your drawing and concentrate on the motion. Horizontal movement would produce no voltage (parallel to the magnetic field); vertical movement produces the maximum. This should help you clarify which angle to use, \$\theta\$ or \$\theta_L\$: the one between the direction of motion and the field lines.
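A small numeric example of the motional-emf formula, using illustrative values for B, l and v:

```
# Sketch: emf = B * l * v * sin(theta), theta measured between the direction
# of motion and the field.  Numbers are illustrative.
import math

B, l, v = 0.5, 0.2, 2.0     # T, m, m/s (assumed)
for theta_deg in (0, 30, 60, 90):
    emf = B * l * v * math.sin(math.radians(theta_deg))
    print(f"theta = {theta_deg:2d} deg : emf = {emf*1e3:5.1f} mV")
# 0 at 0 deg (motion parallel to the field), maximum 200 mV at 90 deg.
```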
H: What's the difference between amplification and gain in a BJT? Considering the 3 basic BJT amplifier configurations: for example, consider the CB config. We have a high voltage gain but the input resistance is low, so there is no effective transfer of the voltage Vsupply to the BJT. If we needed to amplify the current, since the CB config has low input resistance, we have an ideal transfer of input current, but the current gain is terrible (~1). How do we qualify amplification then? I mean, what is it that the CB amplifier is amplifying here? Please help me understand the difference between gain and amplification. AI: The current through a BJT is controlled by the base-emitter voltage VBE; it depends exponentially on that voltage. From this point of view the BJT is a voltage-controlled device. The way a BJT works makes it necessary to also have a small current through the base. This base current is only a small fraction of the collector current. The ratio between these currents can be used to define a current gain and it is possible to regard the BJT as a current-controlled device. So we have two ways to deal with a BJT and depending on the situation one is better than the other. For a CB circuit we see that input and output current are almost equal and it could be used as a current buffer. Apart from that, current gain doesn't help much to gain further insight. Using the voltage-controlled operation approach, we see that the emitter gives direct access to VBE of the transistor in almost the same way as for the CE configuration, only the sign is different. So a change of the emitter voltage causes a change of VBE. This also changes the current through the transistor which is then converted into a voltage by Rc. So it is possible to achieve voltage amplification. However, usually voltage amplifiers with a larger input impedance are wanted and therefore the CE stage is preferred over the CB configuration. This leaves the use as a current buffer like in a cascode circuit.
H: Simple Shunt Voltage Limiter Following up on my previous question, I tried this simple circuit to ensure that there is a minimum load on my power supply (to prevent the unloaded supply from rising above Vmax of a part). I actually built this and it works as I expect, but I wonder if there are any additions that should be made to improve its performance? simulate this circuit – Schematic created using CircuitLab My idea is, if the load (Rload) is disconnected or not drawing at least a minimum current, the supply rail will rise because of inadequate current through Rsource. When the voltage reaches 9.6 V (Vzener + Vbe), Q1 will start to turn on, and will sink enough current (and turn on the "Overvoltage" LED) to keep the rail below about 9.6 V. If the load is drawing enough current, the zener/transistor circuit will be inactive. Incidentally, I believe that just a zener alone would be sufficient, except that it would have to shunt all the excess current itself. By adding a higher-power transistor Q1, a lower-power zener can be used.
AI: simulate this circuit – Schematic created using CircuitLab
Do this instead. A bridge is a voltage doubler unless you also connect the centre tap to 0 V, in which case it becomes a half bridge. A 33 Ω load is possible with a 33 Ω series resistor; the voltage is then 50%. With no load the output is always about 50% higher than the rated RMS, due to the RMS-to-peak ratio (√2 ≈ 141%) plus 8-10% transformer regulation: 141% + 10% − diode drop ≈ 150% of the rated RMS with no load. Choose the Vf of the LED to tune the active clamp.
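Written out as a formula (a rough estimate for a transformer-fed bridge; it ignores ripple, and the 10% regulation figure and the 6.3 V example winding are illustrative rather than values from the schematic):
$$V_{\text{no-load}} \approx \sqrt{2}\,V_{RMS}\times 1.10 - 2V_F$$
so a nominally 6.3 V RMS winding can float up to roughly \$1.41\times 6.3\times 1.1 - 2(0.7) \approx 8.4\:\textrm{V}\$ with the load disconnected - which is exactly the condition the shunt clamp has to absorb.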
H: How can I vary the output voltage of a "smart" USB charger? I have a Raspberry Pi 3 system that I need to be small, light, and portable. It uses a USB Wi-Fi adapter that plugs into one of the ports on the Pi. The adapter is quite particular about its supply voltage, and will not initialize if its supply voltage is below 5.0 V. The Pi and adapter draw about 400 mA. I've found that my standard 5 V 2 A power supply for the Pi puts out 5.0 V, which results in about 4.85 V at the USB port. I have another 6-port "high power" USB charger, but it is also well regulated to 5.0 V output. By powering from a variable bench supply, I've found that a supply of 5.3 V results in 5.1 V measured at the Pi's power connector, and 5.0 V available at the USB ports on the Pi. This is acceptable to the adapter but not very portable. Rather than having to create some sort of customized supply to provide the 5.3 V, I'd prefer to use a commodity "high power" "smart" USB charger (Anker, etc.) and provide whatever type of circuitry is required to cause the supply to provide 5.3 V. This is a problem that was solved decades ago (in the general case) with remote-sensing power supplies, but USB chargers approach it differently - more about current limiting than voltage control. Is there any smart-charging variant that enables output voltage regulation based on resistor values on D+ and D-?
AI: There is no "smart charger" that would allow you to negotiate the output voltage to a fraction of a volt, such as the 5.3 V you need. At most you can have 5 V, 9 V, 15 V, and 20 V. There could be an optional mode for custom voltages, but this is likely beyond normal reach and requires implementing the full Power Delivery protocol. You don't want to do this. Even in the simplest case of Qualcomm Quick Charge you would need to provide a certain sequence of voltages on D+/D- to get it into 9 V mode. Power distribution across the Raspberry Pi 3B is somewhat under-engineered, so better cables cannot possibly solve the voltage droop problem across the Pi. Your best option is to take a standard 5 V adapter, open it up, and adjust the feedback circuit for 5.5 V output - this level is the official top of the USB VBUS range, and it will give you some margin for your overly sensitive WiFi dongle.
H: Does radiative recombination occur in regular diodes as well? If so, then where do the photons produced go? If not, why not?
AI: Radiative recombination is generally not significant in silicon diodes because silicon is an indirect-bandgap semiconductor. This means that the highest-energy states in the valence band and the lowest-energy states in the conduction band have different momentum, so there's no way for an electron to transfer directly from one to the other while also conserving momentum. Recombination in silicon therefore generally involves intermediate states associated with impurities (this is called Shockley-Read-Hall recombination after the three researchers who first described it in the literature). Since the impurity sites are localized in space, they are spread out in momentum (due to Heisenberg's principle), and they enable the captured electron to transfer momentum to or from the crystal lattice.
H: Will a filtered 250V IEC socket still work given 110V? I have a very nice IEC socket I got at a swap meet a few years back: http://www.mouser.com/ProductDetail/Schurter/43025001/?qs=%2fha2pyFadujsDnoxV1Y8VQWgys%252bCu2GOjmVWnETFUeE%3d It has an integrated switch and filter, and I would like to incorporate it into my 3D printer build. However, according to the spec sheet (link), this socket is designed for 250 V. I live in the USA, and our standard wall voltage is 110 V. My question is: will this socket still work for my application? If the filtering circuit isn't provided the full 250 V, will it still produce a filtered 110 V sine wave on the other side? If not, I'll need to go hunting for another socket/switch combo.
AI: If you look into the spec sheet at your link, there are ratings for both 50 and 60 Hz:
Ratings IEC: 1 - 10 A @ Ta 40 °C / 250 VAC; 50 Hz
Ratings UL/CSA: 1 - 10 A @ Ta 40 °C / 125 VAC; 60 Hz
Those parts are used by companies in Europe and the USA to build electronic devices for the world market, which are exported from Europe to the USA or from the USA to Europe. But there are different versions of this socket for currents of 1, 2, 4, 6 and 10 A; I hope you got the right one for your project. If you look at the diagrams for attenuation, they all start at 10 kHz - there is no difference between 50 and 60 Hz.
H: Standard cables for Modbus protocol What are the standard cables I can use for the Modbus protocol? Please add the price details if anybody knows them.
AI: There are no universally standard cables per se, but the most popular implementation of Modbus is Modbus RTU, which will work over any 8-bit asynchronous serial line, like EIA-485. Unfortunately, EIA-485 doesn't specify any sort of standard cable or connector either, beyond that it be a shielded twisted pair. Modbus doesn't even specify a shielding requirement. So, what you use is not really dictated by anything beyond cost and availability.
For Modbus, or EIA-485, it is pretty hard to beat good old registered jacks and Cat5 twisted-pair cabling. You are already likely familiar with these cables and connectors, as they are used widely for Ethernet. Your computer or router is almost certainly using them right now. Economies of scale have made the price and availability of these jacks and cables very attractive, so a lot of things besides Ethernet use them. Now, Ethernet even at the lowest end is much faster than Modbus. For Modbus, you can use Cat5 cable. This is probably the cheapest twisted-pair cabling you'll find, as it is unshielded. If you need shielding, you can upgrade to a shielded Cat5e variant.
Another option of course is simply sending Modbus over actual Ethernet. There is a version of Modbus that operates on top of an Ethernet networking layer as TCP frames. This of course requires a full TCP/IP stack and Ethernet magnetics and whatnot on either end, so this is not going to be a cheap option... unless you happen to already have networking in your application, in which case it is effectively free, as you can just piggy-back it on something you already have.
So, in short, you can send Modbus over any twisted-pair cable. There is no agreed-upon standard cable or connector in use, nor is there a need. Modbus is a master-slave protocol, and so by nature is going to be very application-specific. There is little need for interoperability, so just use whatever is robust enough for the speed and distance in your application.
As for prices, the fancy stuff (shielded, Cat5e) can be had complete with connectors on the ends for almost nothing. There is probably even cheaper out there, but a quick google led me to this 3-foot Cat5e cable, which costs a princely sum of 74 cents. Yep, $0.74. If you don't mind crimping connectors on the ends yourself, you can get any length you want for 19 cents a foot here. But again, shop around. If you buy bulk 1000-foot spools, you can get Cat5 cabling for as little as 3-4 cents per foot.
H: Has anyone worked with the HX711 load cell amplifier / weighing sensor through Arduino? I am talking about this product: http://www.ebay.in/itm/232127458591?aff_source=Sok-Goog I need to know the following:
1) What outputs are represented by DT and SCK?
2) What inputs are B- and B+?
3) Where do I connect DT and SCK on the Arduino board?
4) I have a load cell with 5 wires (Red, Black, Black, Green, White). Which wires shall I connect to B+, B-?
5) Why are two wires black? How do I find which one of them is E-, and what is the purpose of the second black wire?
AI: If you look at the picture of that breakout board, you can see that it mainly exposes the pins of the ADC. For example, DT and SCK are connected to pins 12 and 11 of the IC. So check the datasheet for the HX711 and you'll find the description: Reading through the datasheet should tell you more than you need to know about how to use this device. Sparkfun also supplies a breakout, which has a tutorial. That board is very similar to yours, so you can see what connects where and follow the example given. The ADC has two channels but only one is used, so the B+ and B- inputs, which are for the second channel, are redundant and therefore do not need to be connected. Look at the equivalent Sparkfun schematic to see which pins are used for which purpose, and then do the same thing with those connections on your board. Then the Arduino library and example code should work in the same way.
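If it helps to see the hookup in code, here is a minimal sketch assuming the widely used bogde/HX711 Arduino library; the pin numbers and the calibration factor are placeholders for illustration, not values tied to this particular board or load cell:

```cpp
#include "HX711.h"  // assumes the common bogde/HX711 Arduino library is installed

// Assumed wiring: module DT -> Arduino digital pin 3, module SCK -> digital pin 2.
// Any two free digital pins will do; the HX711 is read by bit-banging, not hardware SPI/I2C.
const int LOADCELL_DOUT_PIN = 3;
const int LOADCELL_SCK_PIN  = 2;

HX711 scale;

void setup() {
  Serial.begin(9600);
  scale.begin(LOADCELL_DOUT_PIN, LOADCELL_SCK_PIN);  // channel A, gain 128 by default
  scale.set_scale(2280.0f);  // placeholder calibration factor - find yours with a known weight
  scale.tare();              // zero the reading with nothing on the load cell
}

void loop() {
  // get_units(10) averages 10 raw readings, subtracts the tare offset and applies the scale
  Serial.println(scale.get_units(10), 1);
  delay(500);
}
```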
H: Misunderstanding of output impedance and linearity When I read about the Thevenin equivalent, it is mentioned that it only applies to "linear circuits". But I also see texts and tutorials showing how to measure the output impedance of an actual, complex device such as a power supply or a transducer. They measure the output impedance and equate the device to a single power/signal source with a single output resistance/impedance. But these devices are composed of non-linear circuits, and the whole idea of Thevenin applies to linear circuits. How come we can employ Thevenin in these devices conceptually? Is a power supply or a transducer a linear circuit?
AI: You have to separate some things first. Thevenin by itself has nothing to do with output impedance. Thevenin is just a method for "shuffling around" sources and impedances (resistors) and representing them in a different way, which is sometimes convenient. For example: This only works for linear elements and therefore only for linear circuits.
Output impedance is also only defined assuming that a circuit behaves in a linear way. We can, however, define the output impedance of a non-linear circuit by linearizing it around a certain operating point. This is what is done, for example, for all transistor-based circuits, which have a (somewhat) linear behavior for small signals but a non-linear behavior for large signals. For that large-signal behavior we cannot define an output impedance, but for the small-signal behavior we can. And in case that results in an inconvenient circuit, we could use Thevenin to represent it in a different way which is more convenient.
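In practice, that linearized output impedance is what you get when you perturb the load slightly around the operating point and watch how the output voltage moves (the numbers below are purely illustrative):
$$Z_{out} \approx -\frac{\Delta V_{out}}{\Delta I_{out}}$$
For example, a bench supply that reads 5.02 V at a 0.1 A load and 4.98 V at a 0.5 A load behaves, in that neighbourhood, like a Thevenin source with roughly \$\frac{0.04\:\textrm{V}}{0.4\:\textrm{A}} = 0.1\:\Omega\$ of output resistance - even though the regulator inside is anything but a linear circuit.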
H: Measuring the strength of a magnetic field with a microcontroller I'm trying to make a very simple magnetic field detector. I have a very simple circuit (the transmitter/magnetic field generator) with an inductor and a push button. Whenever I press the button on and off really fast, the inductor produces a magnetic field. I have an ATMEGA328 with the ADC enabled (the receiver/detector circuit), with the voltage reference set to 1.1 V. I have a primitive antenna (a 6 cm wire floating in the air, plugged directly into the ADC input, no op-amp). If I move the microcontroller's ADC-input 'antenna' next to a 220 V power socket, I can see a sine wave being formed, so I believe the ADC is picking up the 50/60 Hz AC sine wave from the wall, i.e. it's detecting a magnetic/electric field. The problem is that when I place my inductor circuit next to the microcontroller and trigger it, the microcontroller's ADC doesn't pick up anything, even when the 'antenna' and inductor are right next to each other. I tested the same inductor in a DC-to-DC boost converter and it was working very well at 2 A / 0.3 V. I can't understand why it doesn't detect the magnetic field generated by the inductor, but picks up the field generated from the wall. Is it because I need to 'convert' the magnetic field/current into a voltage (as the ADC shows voltage levels), or is the magnetic field produced by the inductor too weak for the simple wire to pick up, so that I need either another inductor to pick it up or an op-amp for my 'antenna'? Thanks
AI: "so I believe the ADC is picking up the 50/60 Hz AC sine wave from the wall, so it's detecting magnetic/electric field" - Electric field, but not magnetic field. The wire and the AC mains wiring act like a low-value capacitor, allowing the 50/60 Hz to couple through to the ADC. "The problem is when I place my inductor circuit next to the microcontroller and trigger it, the microcontroller's ADC doesn't pick up anything" - It cannot pick anything up. The frequencies required for any measurable inductive coupling into a 6 cm wire are in the GHz range and way outside usable ADC frequencies on an AVR µC.
H: Super capacitor directly on the power supply simulate this circuit – Schematic created using CircuitLab I have an ATtiny with an RF emitter which should detect when the 220 V AC in the circuit goes off and then send a radio signal. To do this I use 2 supercapacitors (1.5 F, 2.7 V each) so the signal can be sent before the supply dies. I've done some tests putting the two supercapacitors in series on my 5 V power supply, and it is working. But it seems that supercapacitors need special handling for charging and discharging. Can I use them like this, or is it more complicated? Thank you
AI: You can use them like that. The only drawback is their internal self-discharge. For a battery application this can be disastrous; for a mains-powered application it's probably negligible. If you are using them in series to get a 5+ V rating, you might need to consider the difference in leakage current through them and balance them with external resistors.
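A rough way to size those balancing resistors (a rule of thumb with illustrative numbers - check your capacitors' datasheet for the real leakage figure): pick each resistor so it carries roughly ten times the worst-case leakage current, so the resistor divider rather than the mismatched leakage sets how the voltage splits:
$$R_{bal} \lesssim \frac{V_{cell}}{10\,I_{leak}}$$
For a 2.5 V per-cell target and leakage on the order of 10 µA that gives about 25 kΩ per capacitor, at the cost of a permanent ~100 µA drain - usually acceptable on a mains-powered rail.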
H: Making a voltmeter accepting bipolar input voltage, using a microcontroller I want to build a very simple voltmeter using the ATMEGA328 ADC. I can successfully measure up to 20 V (using a voltage divider), but the problem is that I don't understand how to measure the voltage properly if the negative and positive terminals are switched. The only way I could think of is adding another ADC input which is responsible for measuring the 'inverted voltage': join both ADC inputs, but add diodes in between, so one channel works with (+-) and the other channel works with (-+), and then 'merge' the input from both channels in code. Then I should be able to measure things like a sine wave, for example (where the polarity switches). Is the concept correct or is there a better way to do this? Thanks!
AI: As FakeMoustache explained, the diode idea isn't really a good one. The easiest way to achieve this is to adjust the input range (which includes both positive and negative voltages of high amplitude) to something usable by the ADC (a voltage between GND and VREF). For this, you need to reduce the input voltage amplitude and add a fixed offset. The aim is that when the input is at 0 V, you obtain a VREF/2 voltage to feed the ADC with. We could build this stage with an operational amplifier, but it can be done more effectively if you accept inverting the sign of the input voltage and fixing that later in the MCU firmware. For example, the adjusting stage will output +VREF when the input is -20 V and GND when the input is +20 V. Basically, the transfer function is reversed. This way, you can do it without needing a negative voltage bias, and with a single opamp. Here is the basic circuit:
simulate this circuit – Schematic created using CircuitLab
Now, how do you compute the values of R1, R2, R3, R4 and R5 so that you get the range you need and the correct offset? Well, basically, everything can be deduced from the following formula:
\$\dfrac{\frac{V_{OUT}}{R1}+\frac{V_{IN}}{R2}}{\frac{1}{R1}+\frac{1}{R2}+\frac{1}{R3}}=\dfrac{\frac{V_{REF}}{R4}}{\frac{1}{R4}+\frac{1}{R5}}\$
You must find appropriate values so that when VIN is +20 V, VOUT is 0 V, and when VIN is -20 V, VOUT is VREF. You must also ensure that the total input impedance is high enough to leave your input signal unaltered (values in the 100 kΩ range will probably be fine). But I'm a bit too lazy to solve this myself...
Last thing: choose an opamp with rail-to-rail inputs and output. Otherwise, you'll get incorrect results at the far ends of the input range.
Edit: It seems I'm not that lazy after all. Let's say the input range is +VMAX → -VMAX and you want to translate this to 0 → +VREF; the formula above gives the following relationships between the resistance values:
\$\frac{R1}{R2}=\frac{V_{REF}}{2V_{MAX}}\$ and \$\frac{R4}{R5}=1+2\frac{R1}{R3}+\frac{V_{REF}}{V_{MAX}}\$
So, a possible solution for VMAX = 20 V and VREF = 5 V would be: R1 = R3 = 12.5k, R2 = 100k, R5 = 10k, R4 = 32.5k. I checked with the CircuitLab simulator; it seems consistent. Here is the transfer function:
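On the firmware side, undoing the reversed transfer function is a one-liner. A minimal Arduino-style sketch, assuming the ±20 V range above, a 10-bit ADC referenced to the same 5 V VREF, and an arbitrary choice of pin A0:

```cpp
// Read the level-shifted signal on A0 and recover the original bipolar input voltage.
const float VREF = 5.0;    // ADC reference, same VREF used by the opamp stage (assumed)
const float VMAX = 20.0;   // full-scale input of the level-shifting stage (assumed)

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(A0);           // 0..1023 for 0..VREF
  float vout = raw * VREF / 1023.0;   // voltage seen at the ADC pin
  // The stage is inverting: VIN = +VMAX -> VOUT = 0 and VIN = -VMAX -> VOUT = VREF,
  // so map it back:
  float vin = VMAX * (1.0 - 2.0 * vout / VREF);
  Serial.println(vin, 2);
  delay(200);
}
```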
H: Colpitts oscillator not oscillating I have tried breadboarding a simple Colpitts oscillator, just to see how it works (and to get to use my 'scope for something more interesting than measuring static voltages). I have been following this example, specifically the second circuit design: (source: play-hookey.com) I'm feeding it 5 V, the resistors are all 1k, L is a .22 µH fixed inductor, Q is a 2N4401, C1 and C2 are .001 µF ceramics, and the unlabeled cap at the base is a 220 pF ceramic (maybe way too low?). I'm probing between emitter and ground. Now, admittedly, these values are more or less randomly chosen from my component drawers. In this case I am not interested in obtaining a specific frequency, as long as it's low enough for me to measure it (50 MHz), so I figured I could just throw in any values for the caps and the inductor, as long as they were high enough - I've read that this can actually be a pretty accurate method of measuring capacitance and inductance respectively, based on the frequency you get. Questions:
Why is the circuit not oscillating? I'm measuring a DC voltage of 1.87 V at my probe point.
How do you calculate the proper resistor values (or ratios)?
What's the base cap used for? Just power decoupling?
Am I probing in the right place?
AI: My answer:
1.) I do not know, because I didn't recalculate the circuit. However, it is YOUR task to find a suitable design (not using random part values) - see point 2).
2.) First, you must understand the circuit (why it can oscillate). There is a frequency-selective feedback network with a bandpass characteristic (L in parallel with C1 and C2). Do not overlook that the supply voltage is identical to signal ground. Hence, at the midband frequency the phase shift will be zero. A part of this signal (depending on the C1-C2 ratio) is fed back to the emitter, establishing the required positive feedback (loop gain).
3.) It is the task of the base capacitor to keep the base at signal ground (transistor in common-base configuration with positive gain, see 2).
4.) The classical (normal) output for common-base stages is at the collector.
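For reference, a back-of-envelope figure (ignoring transistor capacitances and breadboard strays): with the stated parts, the tank's nominal resonant frequency would be
$$f_0 = \frac{1}{2\pi\sqrt{L\,\dfrac{C_1 C_2}{C_1+C_2}}} = \frac{1}{2\pi\sqrt{0.22\:\mu\textrm{H}\times 0.5\:\textrm{nF}}} \approx 15\:\textrm{MHz}$$
so even a working version of this circuit would run fast enough that probe loading and breadboard layout matter a great deal.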
H: PWM DC motor drive with L293 and TBA820M oscillator In "Applications of Monolithic Bridge Drivers" from ST there's an application circuit for PWM control of a DC motor which uses an L293 and a TBA820M. Here it is: I know how the L293 works and I've read the 820M datasheet, but I don't get the PWM signal part. Why use an audio chip to get a triangle wave? How does this particular oscillator work? How are the DC offset and triangle signal added? Thanks for helping!
AI: Why use an audio chip to get a triangle wave?
Short answer - because you can. Most of the time we don't want an audio amp to oscillate, but they have a natural tendency to do so. The 'designer' probably had a few of these chips lying around and decided they would do the job. Strictly speaking it doesn't produce a triangular wave but two exponentials, as it charges and discharges the capacitor. One possible advantage of the audio amp chip is its drive capability, so a low-value resistor can be used to charge and discharge the capacitor. The 'triangular' signal is taken across the capacitor, so a high-impedance drive would be problematic.
How does this particular oscillator work?
Regardless of the pin names, the 820's internal circuit shows pins 2 and 3 as effectively the inverting and non-inverting inputs, but unlike an op-amp the output is internally biased to half the supply (making Vp1 = 2.5 V) so that it can give maximum swing. This allows the 820 to use a single supply without additional offset resistors. The circuits shown compare the relaxation-oscillator version and the audio amplifier TBA820M pinout. The output of the relaxation oscillator has two feedback loops. The resistors (R2 and R3) form a (switched) reference voltage at the 'non-inverting' input, against which the voltage across the capacitor at the other input is compared. The capacitor charges up towards the more positive threshold (output high); once that level is reached, the output switches low. The capacitor then discharges towards the lower threshold; once reached, the output switches high again, and so on. What looks odd is that we don't use the normal output (a square wave) but take the signal from across the capacitor.
How are the DC offset and triangle signal added?
The two 47 nF capacitors separate the triangle waveform from the DC level it carries (about 2.5 V); they are coupling capacitors. The potentiometers, P2 and P1, are connected by 10 k resistors to the other side of the 47 nF capacitor plates. The DC level of the wipers is then added back onto the triangular waveform.