H: Arduino USB Protection I am an absolute novice when it comes to electronics. I want to use a PC to control an Arduino UNO clone that will in turn control a strip of 122 NeoPixels. My fear is that I will somehow damage the PC. These are the WS2812B LED strip and Arduino UNO clone I plan to use: https://www.amazon.com/dp/B09736VKNN https://www.amazon.com/dp/B016D5KOOC This is the external power supply (5V 10A) I plan to use to power the LEDs: https://www.amazon.com/dp/B08763VWXM/ My questions are as follows: 1. Should I wire the power supply to the LED strip directly, or should I use the power jack on the Arduino UNO? 2. If I use the power jack on the Arduino UNO, what will happen if the LEDs are powered on and the external power supply is disconnected before the USB? 3. What do I need to do to protect the signal pins on the Arduino UNO? AI: Should I wire the power supply to the LED strip directly, or should I use the power jack on the Arduino UNO? You should wire it directly to the LED strip. The LED strip and the power supply form the "real" circuit, whereas the Arduino supplies only a low-power digital control signal. If I use the power jack on the Arduino UNO, what will happen if the LEDs are powered on and the external power supply is disconnected before the USB? The LEDs will draw excessive power from the USB port. The port would probably shut off due to internal protection. I don't think anything would be permanently damaged, but it's possible. Regardless, you are right to want to avoid this situation. What do I need to do to protect the signal pins on the Arduino UNO? It's actually pretty simple. The Arduino and the LED strip need to have their grounds connected, but they don't need to share a power supply. First connect the LED strip directly to the 10A power supply with decent-gauge wires, like 18-22AWG. This is the "power" part of the circuit. Connect your Arduino to the LED strip using 2 wires: GND and SIGNAL.
You can add a series resistor to the signal line for additional protection. A 1k will probably work. Now one additional thing you need to look out for is ground loops. The Arduino and LED strip are connected by a tiny ground wire, and if the ground potentials of those 2 systems are slightly different, it will cause problems (probably unreliable communication). The simplest way to prevent this is to plug the power supply and the PC into the same outlet or power strip. If you can't do that, or it's not sufficient, you can add an opto-isolator between the two signals, but don't worry about that until you've tried everything else.
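As a sanity check on the supply sizing (not from the answer above, just the common rule of thumb that a WS2812B can draw roughly 60 mA at full-white brightness), a quick calculation shows why a 10 A supply is a reasonable match for 122 LEDs:

```cpp
#include <cassert>
#include <cmath>

// Worst-case current budget for a WS2812B strip, using the common
// rule-of-thumb figure of ~60 mA per LED at full-white brightness.
double stripCurrentAmps(int numLeds, double mAPerLed = 60.0) {
    return numLeds * mAPerLed / 1000.0;
}
```

122 LEDs come out at about 7.3 A worst case, so the 10 A supply leaves headroom; in practice, animations rarely drive every pixel to full white.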
H: How to select current sense resistors for IR2136 ITRIP signal Looking at the datasheet of the IR2136, it suggests this little resistor network as input for the ITRIP pin. https://www.infineon.com/dgdl/Infineon-IR213-DS-v01_00-EN.pdf?fileId=5546d462533600a4015355c8a02116a5 I understand the basic principle of current-sensing resistors, but I don't get why the voltage divider is necessary. ITRIP has a threshold of 0.5V. Let's say I want to trigger the overcurrent protection at 5A. Why can't I use a single 100mOhm resistor? Like this: AI: I expect it might have something to do with the fact that the ITRIP input has an internal Zener diode clamping it to 5.2 volts above Vss. As with any Zener diode clamp, you need a resistor in series with the Zener to prevent excessive current flow and destruction. The way in which the two resistors are configured provides that needed resistance. There is also the absolute maximum rating for ITRIP that suggests it shouldn't fall below Vss - 0.3 volts, and the added resistor will help out here. But, if you need to fine-tune the exact trip value and don't want to alter your series current sense resistor, the potential-divider option helps you achieve that.
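To make the divider's effect concrete, here is a small sketch (the resistor names and topology are mine, not the datasheet's) of how the trip current depends on the sense resistor and on the attenuation of a divider feeding ITRIP, assuming the 0.5 V threshold from the question:

```cpp
#include <cassert>
#include <cmath>

// Trip current for a shunt whose voltage reaches ITRIP through a
// divider: rTop from the shunt to ITRIP, rBottom from ITRIP to Vss.
// The 0.5 V ITRIP threshold is from the question; the resistor
// topology here is illustrative.
double tripCurrentAmps(double rSense, double rTop, double rBottom,
                       double vThreshold = 0.5) {
    double attenuation = rBottom / (rTop + rBottom);
    return vThreshold / (rSense * attenuation);
}
```

With no divider (rTop = 0), the OP's single 100 mΩ shunt trips at 0.5 V / 0.1 Ω = 5 A; adding series resistance raises the trip point for the same shunt while also limiting current into the internal Zener clamp.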
H: Measuring input current of opamp I want to measure the current that goes into a noninverting opamp input. I've set up my experiment as follows: I can't measure the voltage drop across the 10M (R1) resistor directly, since my DMM has an input impedance in the megaohm range. So, I have to measure the voltage on the output of the opamp. R2 is a potentiometer, so the gain is not exactly 4, and V1 is 1.19V. When I connect the input to a voltage source of 1.19V, I get 4.04V on the opamp's output. When I connect the input via the 10M resistor (R1), I get 3.64V on the output. Now, my gain is: $$ 4.04/1.19 = 3.39 $$ so, the voltage that the opamp sees when connected via the 10M resistor is $$ 3.64/3.39 = 1.07V $$ this means I have a voltage drop across the resistor R1: $$ 1.19 - 1.07 = 0.12V $$ so, the current into the opamp's input is $$ 0.12V/10M = 12nA $$ Am I doing it right? Another question is: could 12nA of leakage current go through a home-made PCB that was vigorously cleaned of flux residue? The LMC6001 has a stated input current in the femtoampere range; it might be that it was overheated while soldering and went kaput. But first, I want to be sure that I'm doing the measurements the right way. AI: No, you are not doing this right. Your concept is good but your calculations are flawed. The gain of the opamp circuit is not R2/R3, but (R2+R3)/R3. Your gain is therefore (400 kΩ)/(100 kΩ) = 4. I'm using the values in your schematic because I shouldn't have to go looking elsewhere. If you don't like that, put the real values right on your schematic next time. You see a change of 4.04V - 3.64V = 400 mV on the output by switching in the 10 MΩ resistor. Divided by the gain of 4, this means a 100 mV change at the opamp positive input. By Ohm's law, (100 mV)/(10 MΩ) = 10 nA current thru the resistor, which is the opamp input current in this case.
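The corrected arithmetic from the answer can be captured in a couple of lines (values are taken from the question and answer; R2 = 300 kΩ and R3 = 100 kΩ are my assumption, implied by the answer's (400 kΩ)/(100 kΩ) = 4):

```cpp
#include <cassert>
#include <cmath>

// Gain of a non-inverting stage with feedback resistor R2 and
// ground-leg resistor R3.
double noninvGain(double r2, double r3) {
    return (r2 + r3) / r3;
}

// Input bias current inferred from the output drop caused by
// switching a large series resistor into the + input.
double inputBiasAmps(double vOutDirect, double vOutViaR,
                     double gain, double rSeries) {
    double dropAtInput = (vOutDirect - vOutViaR) / gain; // refer to input
    return dropAtInput / rSeries;                        // Ohm's law
}
```

With those values the gain is exactly 4, and the 400 mV output change through the 10 MΩ resistor corresponds to 10 nA of input current, matching the answer.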
H: "31WOC" on a boost converter, but what datasheet goes with it? Long shot, I know... I have this Boost convert 1-5V in out 5.1-5.2V and I hope to modify it for a different output voltage. So I'm trying to identify and find the datasheet for the main chip. I think the chip reads 31WOC, but I can't find a datasheet for that number. Anyone familiar what the type is of the chip? There is no more datasheet than the following: IN 1-5V OUT 5.1-5.2 Current: Rated 600mA (single lithium input), the maximum 700mA (single lithium input) € 1,50 AI: The 31WOC is probably not the actual part number, rather some type of code described in the datasheet for different variants of the chip. I think I would forget about what the actual part number is, and just locate the feedback pin and adjust the resistor divider as necessary (assuming it's an adjustable regulator). Most use an internal reference of ~1.2V. Look for the pin with this voltage on, attached to the centre of a resistor divider. If it's a fixed regulator then it's a little more tricky, you will probably have to experiment with a few resistor values from the feedback pin to ground (start with 1-10k?). You will only be able to raise the output voltage this way. EDIT - on the picture you have added, it looks like the divider is the two resistors at the top right marked "18C" (15k?) and "473" (47k). Measure these resistors, and also the voltage at the pin they are connected to (top right pin of IC). The output voltage formula should be 1.24V * ((Rtop/Rbottom) + 1) which in your case is 1.24V * ((47k / 15k) + 1) = 5.12V, which is around what you are seeing. Adjust as desired, but be aware if you are increasing the output, that the chip may not be rated for voltages much above it's current level.
H: Preventing Verilog module from being optimised away I have tried to put lots of inverters to stress-test my Spartan 6 power supply as recommended here. Here is the basic module:

module inverter(
    input wire clk
);
    reg [7:0] inverted;

    always @(posedge clk) begin
        inverted <= ~inverted;
    end
endmodule

I was told that because this module only has inputs, it will be optimised away, and indeed ISE is telling me: WARNING:HDLCompiler:1499 - Empty module <inverter> remains a black box. How can I prevent ISE from optimising away my inverters? AI: You can chain your inverters into a huge shift register, then e.g. tie its ultimate output to a pin. This should do the job while being vendor independent. You shouldn't have to instantiate every module manually; it can be done in a compact way using a generate statement, which I believe even XST understands (although I haven't tried it). See e.g. this forum thread for an example.
H: How does ESR affect cutoff frequency calculations for capacitors? I'm interested in estimating the cutoff frequency of a capacitor in a simple RC circuit. Since the capacitor and resistor are in series, can I simply add the ESR value to the resistor value? For example, if the ESR is 0.5Ω and my load is 1kΩ, then is the R value in my calculation 1000.5Ω? Is the ESR negligible in this case? Or is there an "Actually, in real practice..." addendum? AI: If you're trying to make an R-C filter, then your deliberate R should be much larger than the ESR (equivalent series resistance) of the capacitor, else you will hit other effects that will mess up your circuit anyway. Yes, in theory, you add the ESR to your external resistance as in your example. But if this actually matters, then you're too close to the limit. Your example is good in that it shows the ESR is well below the noise level. You've got much, much more slop in other areas than represented by the 1/2 Ohm added to the external 1 kΩ. Take a look at any good capacitor datasheet and you will see that every capacitor only works properly up to some frequency limit. For small surface mount ceramics, this is usually a few hundred MHz. Often this will be shown as impedance graphs, where the capacitor impedance magnitude is shown as a function of frequency. For the ideal capacitor, this would be inversely proportional to frequency forever. For real capacitors, there is a low impedance limit, then the impedance starts to rise again as the frequency increases. There are all kinds of effects factored into the impedance graph. These include particulars of the dielectric, unavoidable parasitic inductance, and probably only in a limited sense ESR. Remember the "equivalent" in ESR. Most of it is not a real series resistance due to the construction of the cap, but a simplified way of presenting a host of other effects, particularly details that go on in the dielectric.
In short, something as simple as a single ESR number doesn't hold anymore when you get near the minimum impedance frequency and beyond, or the self-resonance frequency. If you stay far enough away from those, then ESR will be noise to an R-C filter. Conversely, if you find that the little bit of ESR would actually make a significant difference, then it's a solid clue you are running the cap in a regime where it's not really just a capacitor anymore. Remember that even good caps are ±10%, so an ESR that is 1% of the deliberate external resistance had better not matter, else you've got a tolerance problem in your circuit anyway. There are two common places ESR does matter, neither of which has much to do with R-C filters. The first is to affect the stability of a linear regulator when the cap is across its output. Old LDOs were designed assuming there would be an electrolytic or perhaps tantalum cap on the output. These can be counted on to have some finite ESR. This ESR was considered in compensating the control loop in the regulator. Without it, some regulators become unstable. More modern LDOs are designed assuming ceramic caps on the output, which have very low ESR. These regulators are specifically designed to work with an output capacitance down to 0 ESR. Those are the only type on whose output you can safely put a ceramic cap, since you can't generally count on ceramics having some minimum guaranteed ESR. The datasheets generally only guarantee the maximum ESR, and that is quite low. The second place is when suddenly dumping large pulses of current onto a cap, as happens in many switching power supplies. The current times the ESR represents a momentary apparent rise in the cap voltage, which often must be carefully considered.
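For the original R-C filter question, the ESR simply adds to the external resistance in the cutoff formula, which makes it easy to see how little it moves the corner frequency. A sketch, using the question's 1 kΩ / 0.5 Ω example with an arbitrary 100 nF capacitor of my choosing:

```cpp
#include <cassert>
#include <cmath>

// Low-pass R-C corner frequency, with the capacitor's ESR lumped
// into the series resistance.
double cutoffHz(double rOhms, double esrOhms, double cFarads) {
    const double pi = 3.141592653589793;
    return 1.0 / (2.0 * pi * (rOhms + esrOhms) * cFarads);
}
```

With 1 kΩ and 100 nF the corner is about 1592 Hz; adding the 0.5 Ω ESR shifts it by only about 0.05%, far below component tolerances.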
H: Draw Bode plot of a transfer function I want to draw the Bode plot of a transfer function: $$ H(j\omega)=\frac{100j\omega T}{j\omega T + 10}, T=1s $$ Now I have $$ H(j\omega)=\frac{100}{1 + \frac{10}{j\omega T}}, T=1s $$ Using a double log scale: $$ 20*\log{H(j\omega)}=20*\log{\frac{100}{1 + \frac{10}{j\omega T}}}, T=1s $$ And can I just insert omega and compute the points for the plot? Like for omega = 1000 $$ 20*\log{\left(\frac{100}{1 + \frac{10}{2*\pi*1000}}\right)}=40-20*\log{\left(1 + \frac{10}{2*\pi*1000}\right)}=39.9... $$ Is that correct? AI: You've got to back up a step or three. The transfer function is complex-valued so, to plot it, you need two plots, usually magnitude and phase. The magnitude plot is usually log-log but the phase plot is lin-log. So, you need to find the magnitude and then take the log before plotting the Bode magnitude. To find the magnitude, multiply H by its conjugate and then take the root. $$|H(j\omega)|^2 = \dfrac{100}{1 + \dfrac{10}{j \omega T}}\dfrac{100}{1 - \dfrac{10}{j \omega T}} = \dfrac{100^2}{1 + \dfrac{10^2}{(\omega T)^2}}$$ Also, omega is the radian frequency while f is the frequency. So, if omega = 1000, you don't multiply by 2 pi. However if f = 1000, you do. UPDATE: fixed denominator of transfer function to match OP's original UPDATE, PART DEUX: We should try to put this transfer function in standard form so that we can identify the asymptotic gain, the type, and the pole/zero frequency. Since the variable \$\omega\$ appears with highest exponent 1, it is a 1st order filter. There are only two types of 1st order filters: low-pass and high-pass. In standard form the OP's transfer function is: \$H(j\omega) = 100 \dfrac{\frac{j\omega}{\omega_0}}{1 + \frac{j\omega}{\omega_0}}\$ \$ \omega_0 = \frac{10}{T}\$ Then: \$ |H(j\omega)| = 100 \dfrac{\frac{\omega}{\omega_0}}{\sqrt{1 + (\frac{\omega}{\omega_0})^2}}\$ Now, if we stare at this a bit and ask it some questions, we can imagine exactly what this looks like.
When \$ \omega << \omega_0\$, the denominator is effectively "1" and so, the transfer function is decreasing by a factor of 10 as \$ \omega\$ decreases by a factor of 10. On a log-log scale, this is a line with a slope of +1. When \$ \omega >> \omega_0\$, the denominator is effectively \$ \frac{\omega}{\omega_0} \$ and so the transfer function is effectively constant with a value of 100. If we plot these two lines on a log-log plot and have them intersect at \$ \omega = \omega_0\$, we've created an asymptotic Bode magnitude plot. In fact, it's easy to see that when \$ \omega = \omega_0\$, the magnitude is \$ \frac{100}{\sqrt{2}}\$ so the lines we plotted are actually the asymptotes of the (magnitude) transfer function. That is, the function approaches these lines at the frequency extremes but never actually gets to them (on a log-log plot, \$ \omega = 0 \$ is at negative infinity)
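The asymptotic behaviour described above is easy to check numerically. A sketch of the magnitude function with the question's T = 1 s, so ω0 = 10 rad/s:

```cpp
#include <cassert>
#include <cmath>

// |H(jw)| for H(jw) = 100 * (jw/w0) / (1 + jw/w0), i.e. a first-order
// high-pass with asymptotic gain 100 and corner frequency w0 = 10/T.
double magH(double w, double w0 = 10.0) {
    double x = w / w0;
    return 100.0 * x / std::sqrt(1.0 + x * x);
}
```

At the corner, magH(10.0) is 100/√2 ≈ 70.7; well above the corner the gain flattens out at 100, and well below it the magnitude falls by a factor of 10 per decade, matching the two asymptotes.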
H: Setting LMC6001 offset voltage I have got two LMC6001 samples. I will use them for pH sensor interfacing. With one LMC6001, I can cope with high input impedance errors. But I want to set the offset voltage to near 2 V. Can I make an offset circuit with the other LMC6001? AI: You don't need the second opamp. These are typical circuits for offset control in opamps which don't have offset null inputs, like the LMC6001: You'll want the lower circuit, because there you can use the fA input current feature of the opamp. The other one has a much lower input impedance. Without R3 and the potmeter this is just a non-inverting opamp where \$ \mathrm{ V_{OUT} = \dfrac{R1 + R2}{R1} V_{IN}}\$ Adding the potmeter: \$ \mathrm{ V_{OUT} = \left(\dfrac{R1 + R2}{R1} + \dfrac{R2}{R3} \right) V_{IN} - \dfrac{R2}{R3} V_{OFFSET} }\$
H: Recursion versus Tail Recursion on an Arduino In computer science, recursion chews up a valuable and limited resource – the stack – and can lead to an unrecoverable type of runtime error: the dreaded StackOverflow. Tail recursion, however, is a form of recursion that doesn't use any stack space, and thus is a way to use recursion safely. freeRam() is the function to test the usage of memory:

static int freeRam () {
    extern int __heap_start, *__brkval;
    int v;
    return (int) &v - (__brkval == 0 ? (int) &__heap_start : (int) __brkval);
}

I am testing on an Arduino, in order to see the difference in memory usage between the recursive way and the tail-recursive way:

// recursive way
int recsum(int x){
    if(x==1) return x;
    else return recsum(x-1) + x;
}

// tail recursive
int tailrecsum(int x, int total){
    if(x==1) return total;
    else return tailrecsum(x-1, total+x);
}

However:

void setup() {
    Serial.begin(9600);
    Serial.println( recsum(1000) );
    Serial.println(freeRam());
}

Recursing 1000 times still outputs 1858 bytes available.

void setup() {
    Serial.begin(9600);
    Serial.println( tailrecsum(1000, 0) );
    Serial.println(freeRam());
}

Tail-recursing 1000 times also outputs 1858 bytes available. The test shows that recursion and tail recursion on the Arduino don't differ in memory usage, so I am very confused about it and in doubt of my results. AI: In both cases, your setup() function is looking for a memory leak, but there is no memory leak. A recursive function will only increase stack usage as the recursion takes place. Eventually, the stack unwinds until the final return statement executes, leaving the stack as it was when the function was first called, so the free memory after the function call is the same as it was beforehand. EDIT At first I thought that your freeMem function checks the amount of memory statically allocated. Adam Davis has pointed out that this is not the case. I don't know the Arduino or its development environment, but the following pseudo-code might help.
The general idea is to determine the stack usage just before it "unwinds":

int recsum(int x) {
    if(x==1) {
        // Pseudo-code line follows ...
        // stack_used = (int)( &top_of_stack - stack_pointer );
        return x;
    }
    return recsum(x-1) + x;
}

This assumes that the stack grows 'downwards', and you will have to look at your documentation to see how to reference the top of the stack. You can use your freeMem function, but in either case you will have to subtract the stack usage before setup() is called.
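For reference, here are the two functions from the question in plain C++ (not Arduino-specific). Note one off-by-one in the question's tail-recursive version: returning `total` at x == 1 drops the final 1, so the base case below returns `total + x` to make both functions agree:

```cpp
#include <cassert>

// Plain recursion: the "+ x" after the call keeps every frame alive
// until the deepest call returns.
int recsum(int x) {
    if (x == 1) return x;
    return recsum(x - 1) + x;
}

// Tail recursion: nothing is pending after the recursive call, so a
// compiler that performs tail-call optimisation can reuse the frame.
// (Whether avr-gcc actually does so depends on optimisation flags.)
int tailrecsum(int x, int total) {
    if (x == 1) return total + x;
    return tailrecsum(x - 1, total + x);
}
```

Both now return 500500 for an input of 1000; the memory question is orthogonal to the return value, which is why freeRam() reads the same after either call.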
H: L200C voltage regulation I have a battery that's designed to power a MacBook Pro (it's not an internal battery; it's an external battery meant to act in place of a charger, so it's got all the monitoring electronics internally). Now, I wish to use this battery to power an iPhone dock. This dock has internal batteries itself, but they run out fast (only 3.6V, 1.8Ah), whilst this MacBook battery can provide 60Wh - which would increase the duration of playback many times, and stop the voltage dropping, etc. The only issue is that the iPhone dock DC in wants a 12V, 2A supply, whilst the MacBook battery will supply between 14.5 and 18.5V (4.5A max). I don't wish to play with fire by putting in too high a voltage (there could be no damage, or I might have to buy a new dock), so I was hoping to regulate the voltage of the MacBook battery so that it could safely power the dock. I've been reading 'A designer's guide to the L200C voltage regulator' and it seems ideal for this purpose, but some aspects seem a bit strange. Can I get someone to answer the questions I have about this circuit? The ST datasheet is here. Firstly, I'm not sure what the black rectangle at the bottom is. It seems like one plate of a capacitor, but that can't be right... Can that be explained? Secondly, am I to attach the positive lead of the battery to the top left line and the negative lead to the bottom left line? Is it the same for the output (top right for positive, bottom right for negative)? Finally, I'm not really sure what Vref means in all of the equations in the datasheet. The values for R1 and R2 given in the documentation for a 12V output are 1 kilo-Ohm for R1, and 3.3kW for R2 (I assume that's a typo, and they mean kilo-Ohm). Do these values seem reasonable, and should I make R2 a potentiometer (if so, what sort of potentiometer)? I'm sorry to sound really needy - I wish to get this working, and I also have barely any knowledge of electronics!
AI: The black rectangle is the symbol for ground, the reference level against which all the rest is measured. Bottom left and bottom right are connected to ground, and are the zero level of input and output. Some people call it the minus, but it isn't really negative, so that's kind of a misnomer. Positive input goes to top left, positive output is top right. Vref is specified in the datasheet, and is 2.77 V typical (page 3). R2 and R1 form a voltage divider, and the L200 will regulate the output so that on pin 4 the 2.77 V appears. That's where the equation top right of the image comes from. So if R1 = 1 kΩ and R2 = 3.3 kΩ then Vout = (3.3 + 1)/1 * 2.77 V = 11.9 V. (The equation says it's Io, for output current, but that seems to be a typo, since the dimension of the RHS is volt.) You should keep an eye on the input-output difference. Most regulators need a few volts difference, and the L200 is no exception. On page 3 you can read that it can be as high as 2.5 V. Then an input voltage of for instance 14 V may only give 11.5 V out. edit Something about power. The L200 and the Micrel are examples of linear regulators, and one of their properties is that the current through them is the same as the current of the load, so that's 2 A. I was keeping an eye on the 14.5 V minimum, and I lost sight of the 18.5 V maximum for a moment. If the input is 18.5 V then there's 6.5 V difference between in and out, times the 2 A flowing through it is 13 W. That's a lot more than any regulator can handle unaided. Aided means a heatsink. At 13 W dissipation the Aavid-Thermalloy 5336XX's temperature will rise by 45 °C. That's already too hot to touch. There's another alternative, which is not a linear regulator, but a switching regulator. This will only dissipate a few watt, and can do with a modest heatsink. But it's a bit more complex than a linear regulator, and since you already didn't like the L200's resistors this is not something I would let you make yourself.
Besides, it requires some experience in switchers to do it properly too. Small ready-made modules exist for lower power, like this one which can supply 500 mA. A 12 V/2 A model will be a bit larger, and have a small heatsink for the regulator. I'll look around and see if I can find one.
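The two numbers that matter in this answer, the regulated output voltage and the worst-case dissipation of a linear regulator, can be sketched as (values taken from the answer above):

```cpp
#include <cassert>
#include <cmath>

// L200 output voltage set by the R2/R1 divider against Vref
// (2.77 V typical per the datasheet).
double l200Vout(double r1, double r2, double vRef = 2.77) {
    return (r1 + r2) / r1 * vRef;
}

// A linear regulator drops (Vin - Vout) at the full load current,
// dissipating all of it as heat.
double linearDissipationW(double vIn, double vOut, double iLoad) {
    return (vIn - vOut) * iLoad;
}
```

l200Vout(1e3, 3.3e3) comes to about 11.9 V, and at the 18.5 V battery maximum the regulator must shed (18.5 - 12) x 2 = 13 W, which is why the answer insists on a heatsink or a switcher.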
H: Low Cost FPGA for 500MHz FIR Filter I need a 500MHz FIR filter for filtering ADC samples (500MSPS). After the filtering, a few samples need to be stored in a buffer for a simple peak detection algorithm. I have been developing some VHDL code, and used a Xilinx FIR filter library to create the filter. But after looking into the prices of Xilinx FPGAs that would do a 500MHz FIR filter, well above $100, my jaw hit the desk. Is there any good and low-cost (less than $50) FPGA for doing this? AI: Until recently, 500 MHz would have been considered a fairly fast clock, requiring a relatively high-end (and high-cost) FPGA. But nowadays a low-cost part ought to be able to do that. However, there are other specs that are just as important as the data rate in determining which part will work for you: What's the data width? A 16-bit adder requires a longer carry chain than an 8-bit adder and so requires a longer clock period in a given architecture and speed grade. How many taps in the filter? A very large number means working with RAMs instead of just registers, leading to a new set of timing requirements and new considerations for which parts will meet your needs. What are the weights? Equal weights on all taps means a much simpler calculation. If you have different weights on each tap, you might need to redo the complete set of add-multiplies for each new input sample, making for a much harder problem. But if your other specs aside from clock rate are fairly relaxed, you might be able to do this in a low-cost device. All the FPGA vendors have low-cost FPGA lines that can be priced as low as $5 each. For example, Xilinx has Spartan and Artix, Altera has Cyclone, etc. In recent generations, these parts should be able to do at least minimal logic at 500 MHz. But if you have to do wide add-multiplies or something, you may have to do some very careful pipelining or other tricks to get them to work.
Be sure to look at the most recent generation of parts to get the best performance, best pricing (unless a family is absolutely brand-new), and longest assurance of supply. Recent CPLDs from Altera and Lattice are really small FPGAs with built-in flash to allow automatic reconfiguration on power-up. For a simple filter, these might be sufficient. But without knowing your complete design we can't tell you what device will work. You'll have to just try designing it for each candidate part and use the vendor's synthesis tools to find out if you can meet timing in each case.
H: Bluetooth simple oscilloscope I was thinking of making a simple oscilloscope which can measure at least 2MHz signals; that would be enough, and it will be connected to a PC or Android device. First, I thought to use USB to connect the "oscilloscope" to the PC or Android USB-Host-capable device, which would give me about 12Mbps transfer speed, but I would like to do it with Bluetooth or another RF link. I saw some Bluetooth-UART modules and they are capable of 2Mbps transfer speed, and Wi-Fi modules only about 300Kbps. I have a 512K RAM memory for buffering the samples from my 12MSPS ADC, so at 2Mbps it will take: $$\frac{512{,}000\ \text{bytes} \times 8\ \text{bits/byte}}{2{,}000{,}000\ \text{bits/s}} = 2.048\ \text{s}$$ So I will lose 2 seconds of samples until I send all of the 512K buffer over Bluetooth. Over USB it is about 330 ms, and that is too much I think. Is there any way to avoid this? How do USB oscilloscopes do it? AI: If you treat the PDA as simply a display, then you can change your way of thinking about what data actually needs to be sent. It only needs a single trace of data, the width of the display, up to 30 times per second. If we assume 8-bit samples and a retina display width of 960 columns, then you only need to send 960 bytes 30 times a second, or 28.8 kbytes per second. If you are fine with 10Hz update rates, then the link only needs to handle 9,600 bytes per second. When the user zooms in, or changes any of the parameters of the measurement, send the new parameters to the microcontroller, and have the microcontroller prepare the data so you only need a low data rate stream to display the data. If you want to do analysis on the PDA, then you'll have to send a whole chunk of data, and that's simply going to be slow. But the more analysis you do on the microcontroller side, the less data you have to send, and the more frequently you can update the display.
Keep in mind that fast bluetooth data links will not connect to iOS devices (iPod touch, iPhone, iPad) without fulfilling the requirements of the Apple Made For iPod program, or jailbreaking the iOS device. This is why many similar devices are using wifi. If you cannot reduce your data rate, and need the PDA to have full access to all the data with no breaks, you should skip bluetooth entirely and use wifi. Inexpensive wifi adaptors might only handle low data rates, but there are wifi modules that will provide more bandwidth.
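The bandwidth arithmetic behind this answer (and the question's buffer-transfer estimate) is worth writing down once; these helpers just restate the numbers above:

```cpp
#include <cassert>
#include <cmath>

// Bytes per second needed to refresh one display-width trace.
double traceBytesPerSec(int columns, int bytesPerSample, double fps) {
    return double(columns) * bytesPerSample * fps;
}

// Time to push a full capture buffer through a serial link.
double bufferTransferSec(double bytes, double bitsPerSec) {
    return bytes * 8.0 / bitsPerSec;
}
```

960 columns at 30 fps needs only 28.8 kB/s, while shipping the whole 512 kB buffer over a 2 Mbps link takes the 2.048 s the question worries about; decimating on the microcontroller is what closes that gap.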
H: Testing Oscillator Over Temperature I want to characterize the frequency inaccuracy of several silicon oscillators over temperature to see which one will work best in my end application. If I populate the oscillator on a bare board and solder a coaxial pigtail to the trace/GND, will that suffice? Since the oscillator is designed to drive a high-impedance IC clock input, will it be able to drive the low (50Ω) impedance of the spectrum analyzer? Do I need to add some special matching, DC-block, or bypass network? I'd like to introduce the least amount of extra components possible, as I'll have to take them into account in my measurements as well. AI: the oscillator is designed to drive a high-impedance IC clock input In this case, you will want to add a buffer chip to be able to drive the 50 Ohm coax and the spectrum analyzer input. The appropriate chip depends on what type of load the oscillator is designed to drive (TTL, CMOS, ...?), the frequency, the available power supply voltages, etc. Possibly you could use a transformer instead of an IC buffer to impedance-match the cable, but the buffer has the advantage of providing exactly the type of load the oscillator is designed for (an IC input). Incidentally, if you want to measure the frequency accuracy of your oscillator and not other parameters, a frequency counter will give you a more accurate measurement more quickly than a spectrum analyzer. But it will still likely have a 50 Ohm input unless you're working at very low frequencies (below 1 MHz?).
H: Rewiring KVM Buttons Background: I'm working on a project that would allow a number of people to hook up their laptops to a single monitor through a KVM and use buttons on the desk to control who has control of the monitor. Originally, the plan was to use buttons with microcontrollers to send keyboard shortcuts to the KVM, which would then switch to a given input based on what keyboard shortcut was sent. I've run into a problem because the keyboard switching is a little off. The only way to do it without installing software is to cycle through the inputs 1 by 1, and it's a requirement that the user not need to install anything. So now I'm wondering if it's feasible to interact directly with the KVM hardware. Worst case scenario, we could just plant the KVMs on the table, but that's not quite as user-friendly. Problem: I want to have a bunch of buttons on a table, ~6ft away from the KVM, which would replace the buttons on the KVM board. I'm not sure how to do this, but I have a few ideas (not sure if they're feasible): Remove the buttons and solder wires in their place, which would be connected to the buttons on the table. Connect the wires from the table buttons directly to the leads of the buttons on the board, without removing them. Some kind of tiny motor or something that could push the buttons on the KVM. (Rather avoid this) So I guess the question is: which of these would be the easiest, and what's the best way to go about it? Also open to suggestions on entirely different solutions. First picture is of the top of the KVM, where the buttons are soldered onto the board; the second is the underside of the board. The button connections can be seen sticking out of the board. AI: You can simply solder long wires to the back of the board where the buttons connect and put buttons on the ends of the wires.
If you get phantom switching (button presses when you weren't pressing a button) or several switches when you only press the button once then you'll have to look more carefully at the design of the circuit to eliminate false presses and properly debounce it. But try long wires and buttons first - it should work fine. The buttons you show have 4 pins. Connect your 2 wires to two diagonally opposite pins.
H: Coils! Does the material used matter? I've been making some coils with wires and I found that sometimes they work and sometimes they don't work at all. The inside of those that don't work is somewhat more "silver-ish", more "metal-ish", like iron, while the coils that do work look thinner, like copper. Does the conductive material used for the coil matter? (If it does, I'm -greatly- interested in why) AI: The core is very important in making a successful inductor, as a good core massively increases permeability (the ability for magnetic currents to flow effectively in the core - a lot like inverse electrical resistance (conductance)). However, adding a core introduces a saturation characteristic; if the current approaches or exceeds the saturation point, the core will lose permeability and the inductor behaves more like a piece of wire. The actual wire is less important. Generally low resistance and high temperature resistance are desirable factors. Copper is good.
H: Choosing power supply, how to get the voltage and current ratings? Power supplies are available in a wide range of voltage and current ratings. If I have a device that has specific voltage and current ratings, how do those relate to the power ratings I need to specify? What if I don't know the device's specs, but am replacing a previous power supply with particular ratings? Is it OK to go lower voltage, or should it always be higher? What about current? I don't want a 10 A supply to damage my 1 A device. AI: Voltage Rating If a device says it needs a particular voltage, then you have to assume it needs that voltage. Both lower and higher could be bad. At best, with lower voltage the device will not operate correctly in an obvious way. However, some devices might appear to operate correctly, then fail in unexpected ways under just the right circumstances. When you violate required specs, you don't know what might happen. Some devices can even be damaged by too low a voltage for extended periods of time. If the device has a motor, for example, then the motor might not be able to develop enough torque to turn, so it just sits there getting hot. Some devices might draw more current to compensate for the lower voltage, but the higher than intended current can damage something. Most of the time, lower voltage will just make a device not work, but damage can't be ruled out unless you know something about the device. Higher than specified voltage is definitely bad. Electrical components all have voltages above which they fail. Components rated for higher voltage generally cost more or have less desirable characteristics, so picking the right voltage tolerance for the components in the device probably got significant design attention. Applying too much voltage violates the design assumptions. Some level of too much voltage will damage something, but you don't know where that level is.
Take what a device says on its nameplate seriously and don't give it more voltage than that. Current Rating Current is a bit different. A constant-voltage supply doesn't determine the current: the load, which in this case is the device, does. If Johnny wants to eat two apples, he's only going to eat two whether you put 2, 3, 5, or 20 apples on the table. A device that wants 2 A of current works the same way. It will draw 2 A whether the power supply can only provide the 2 A, or whether it could have supplied 3, 5, or 20 A. The current rating of a supply is what it can deliver, not what it will always force through the load somehow. In that sense, unlike with voltage, the current rating of a power supply must be at least what the device wants, but there is no harm in it being higher. A 9 volt 5 amp supply is a superset of a 9 volt 2 amp supply, for example. Replacing Existing Supply If you are replacing a previous power supply and don't know the device's requirements, then consider that power supply's rating to be the device's requirements. For example, if an unlabeled device was powered from a 9 V and 1 A supply, you can replace it with a 9 V supply rated at 1 A or more. Advanced Concepts The above gives the basics of how to pick a power supply for some device. In most cases that is all you need to know to go to a store or online and buy a power supply. If you're still a bit hazy on what exactly voltage and current are, it's probably better to quit now. This section goes into more power supply details that generally don't matter at the consumer level, and it assumes some basic understanding of electronics. Regulated versus Unregulated Unregulated Very basic DC power supplies, called unregulated, just step down the input AC (generally the DC you want is at a much lower voltage than the wall power you plug the supply into), rectify it to produce DC, add an output cap to reduce ripple, and call it a day. Years ago, many power supplies were like that.
They were little more than a transformer, four diodes making a full wave bridge (takes the absolute value of voltage electronically), and the filter cap. In these kinds of supplies, the output voltage is dictated by the turns ratio of the transformer. This is fixed, so instead of making a fixed output voltage, their output is mostly proportional to the input AC voltage. For example, such a "12 V" DC supply might make 12 V at 110 VAC in, but then would make over 13 V at 120 VAC in. Another issue with unregulated supplies is that the output voltage is not only a function of the input voltage, but will also fluctuate with how much current is being drawn from the supply. An unregulated "12 volt 1 amp" supply is probably designed to provide the rated 12 V at full output current and the lowest valid AC input voltage, like 110 V. It could be over 13 V at 110 V in at no load (0 amps out), and then higher yet at higher input voltage. Such a supply could easily put out 15 V, for example, under some conditions. Devices that needed the "12 V" were designed to handle that, so that was fine. Regulated Modern power supplies don't work that way anymore. Pretty much anything you can buy as consumer electronics will be a regulated power supply. You can still get unregulated supplies from more specialized electronics suppliers aimed at manufacturers, professionals, or at least hobbyists that should know the difference. For example, Jameco has a wide selection of power supplies. Their wall warts are specifically divided into regulated and unregulated types. However, unless you go poking around where the average consumer shouldn't be, you won't likely run into unregulated supplies. Try asking for an unregulated wall wart at a consumer store that sells other stuff too, and they probably won't even know what you're talking about. A regulated supply actively controls its output voltage. These contain additional circuitry that can tweak the output voltage up and down.
This is done continuously to compensate for input voltage variations and variations in the current the load is drawing. A regulated 1 amp 12 volt power supply, for example, is going to put out pretty close to 12 V over its full AC input voltage range and as long as you don't draw more than 1 A from it. Universal input Since there is circuitry in the supply to tolerate some input voltage fluctuations, it's not much harder to make the valid input voltage range wider and cover any valid wall power found anywhere in the world. More and more supplies are being made like that, and are called universal input. This generally means they can run from 90-240 V AC, and that can be 50 or 60 Hz. Minimum Load Some power supplies, generally older switchers, have a minimum load requirement. This is usually 10% of full rated output current. For example, a 12 volt 2 amp supply with a minimum load requirement of 10% isn't guaranteed to work right unless you load it with at least 200 mA. This restriction is something you're only going to find in OEM models, meaning the supply is designed and sold to be embedded into someone else's equipment where the right kind of engineer will consider this issue carefully. I won't go into this more since this isn't going to come up on a consumer power supply. Current Limit All supplies have some maximum current they can provide and still stick to the remaining specs. For a "12 volt 1 amp" supply, that means all is fine as long as you don't try to draw more than the rated 1 A. There are various things a supply can do if you try to exceed the 1 A rating. It could simply blow a fuse. Specialty OEM supplies that are stripped down for cost could catch fire or vanish into a greasy cloud of black smoke. However, nowadays, the most likely response is that the supply will drop its output voltage to whatever is necessary to not exceed the output current. This is called current limiting. 
Often the current limit is set a little higher than the rating to provide some margin. The "12 V 1 A" supply might limit the current to 1.1 A, for example. A device that is trying to draw the excessive current probably won't function correctly, but everything should stay safe, not catch fire, and recover nicely once the excessive load is removed. Ripple No supply, even a regulated one, can keep its output voltage exactly at the rating. Usually due to the way the supply works, there will be some frequency at which the output oscillates a little, or ripples. With unregulated supplies, the ripple is a direct function of the input AC. Basic transformer unregulated supplies fed from 60 Hz AC will generally ripple at 120 Hz, for example. The ripple of unregulated supplies can be fairly large. To abuse the 12 volt 1 amp example again, the ripple could easily be a volt or two at full load (1 A output current). Regulated supplies are usually switchers and therefore ripple at the switching frequency. A regulated 12 V 1 A switcher might ripple ±50 mV at 250 kHz, for example. The maximum ripple might not be at maximum output current.
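The selection rules from the first half of this answer - the voltage must match, and the supply's current rating must be at least what the device draws - are mechanical enough to write down. A small Python sketch; the 5% voltage tolerance is an assumption for illustration, not a universal rule:

```python
def supply_ok(device_v, device_a, supply_v, supply_a, v_tol=0.05):
    """A replacement supply is acceptable when its voltage matches the
    device's rating (within a small tolerance) and its current rating is
    at least what the device draws; extra current capability is harmless."""
    voltage_ok = abs(supply_v - device_v) <= v_tol * device_v
    current_ok = supply_a >= device_a
    return voltage_ok and current_ok

# A 9 V 5 A supply is a superset of a 9 V 2 A supply:
replacement_fine = supply_ok(9, 2, 9, 5)
```

Note the asymmetry: the current check is one-sided (more is fine), while the voltage check is two-sided (both lower and higher can be bad).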
H: How to drive high powered LEDs as efficiently as possible I have come across some very high powered LEDs. Datasheet for LEDs: http://www.cree.com/led-components-and-modules/products/xlamp/arrays-nondirectional/xlamp-cxa2011 For these high powered LEDs, how can I make a driver circuit that will be energy efficient, or is there a driver for high powered LEDs that I'm just not finding? The LEDs will eventually be used in a grow light for an aquaponics system, so efficiency is essential. AI: For best overall LED efficiency (which is your stated aim) you want high efficiency LEDs plus high efficiency drivers. There are many high power LED modules available, but in many cases they use LEDs that are less efficient than the best available. No amount of efficient driving of a very low efficiency LED will make up for its low efficiency. You mention 0.8 A x 40 V (which will use multiple LED dies or separate LEDs) = 32 watts. This input power could be obtained with about 5 to 8 of the top available LEDs, and it will often be better to use the best available if top efficiency is wanted. It will probably be most cost-effective to buy a commercially available LED driver rather than making your own, looking for good efficiency. eBay lists a vast quantity of possible candidates. You also need to specify your energy source. Is this 110 VAC mains, a 12 volt battery, or something else? The standard way of driving LEDs correctly and at maximum efficiency is straightforward (fortunately). Determine the required LED operating current. From data sheets, determine the maximum voltage drop that the LED will have at the desired current. Provide a switched-mode constant-current source that will produce the required current at at least the maximum possible voltage.
Extra points: Minimize LED temperature with cooling. Operating an LED below its maximum current rating will result in somewhat increased efficiency - an increase of perhaps 10% to 20% more light out per watt of input at 20% of full rated power compared to full power. LED lifetimes increase with decreasing temperature and with decreasing current. While more current will usually result in more heat produced, these effects are independent of each other. To get an LED driver of essentially any desired rated current, use an external MOSFET of suitable rating, driven by a matching controller; this will allow whatever current the MOSFET can handle, which can be very large if desired. For the very best efficiencies, use "synchronous rectification" - this replaces rectifier diodes with switched MOSFETs and gives lower losses. Efficiencies around 85% from a low voltage DC source to an LED string will be achievable -- over 90% in selected cases and with care. Comment on the above and we can provide better information.
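The power budget mentioned above (0.8 A x 40 V into the LED string, with roughly 85% driver efficiency) is easy to check numerically. A small Python sketch of that arithmetic:

```python
def led_input_power(led_v, led_a, driver_efficiency=0.85):
    """Source power needed for a driver delivering led_v * led_a to the
    LED string, at the ~85% efficiency the answer quotes as achievable."""
    return led_v * led_a / driver_efficiency

# The 40 V, 0.8 A string from the answer: 32 W into the LEDs,
# so the driver pulls roughly 37.6 W from the source.
p_in = led_input_power(40, 0.8)
```

The gap between the 32 W delivered and the input power drawn is what the driver dissipates as heat, which is why driver efficiency matters as much as LED efficiency in a grow-light budget.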
H: Transformer labeled 120v-60hz-35va-12v - What exactly does it do? I have a pool light whose bulb physically broke, but I've found a way to open the water-tight bulb enclosure to get at the actual bulb. I'm trying to either hunt down a replacement bulb (rather than the whole expensive assembly) OR make some LED magic. Either way, the transformer's somewhat permanently mounted and I'd like to use it as it's already there, water-tight, and has the specific fitting to match the pool light flange. I just can't figure out what 35va-12v means, and google didn't really help. I assume 35VA means 35VAC, but 12V is a common DC voltage and it's on there too! Of course, I tested it and I actually get 14 VAC. How would I, in the future, know that based on the label? Is there a spec for this kind of stuff? AI: If it is a plain transformer, without any additional electronics, then the 12V mentioned means 12V(AC). This is the voltage the transformer outputs at its rated load. Now what is this rated load I speak of? It is about how much power the transformer can potentially output. The 35VA indication is a measure of how much power the transformer can deliver. This is simply the product of voltage and current. The voltage is 12V, so the maximum current can easily be calculated: I(max) = 35VA / 12V ≈ 2.9A. When experimenting with this transformer, you might want to add a 3A fuse at the 12V(AC) side, just to prevent overloading. You wrote your lightbulb is broken, so your transformer has no load. This is a bit like you driving your car downhill. Your engine doesn't have to work very hard and you probably start accelerating a bit. An unloaded transformer increases its output voltage. 14V output voltage is perfectly normal with this transformer when unloaded. When you attach a 25W lightbulb to it, you'll notice the voltage drops to around 12V. With a conventional light bulb, you can use bulbs up to 35W with this transformer.
When using other kinds of devices, the maximum wattage you can draw may be lower, depending on the exact device. 120V and 60Hz relate to your mains supply; they are what you get from the wall socket.
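The VA arithmetic above is a one-liner worth keeping around. A Python sketch using the question's 35 VA / 12 V nameplate:

```python
def transformer_max_current(va_rating, secondary_voltage):
    """The VA rating is the product of rated voltage and rated current,
    so the maximum secondary current is simply VA / V."""
    return va_rating / secondary_voltage

i_max = transformer_max_current(35, 12)  # the 35 VA / 12 V unit from the question
```

This gives roughly 2.9 A, which is also why the answer suggests a 3 A fuse on the 12 V side as overload protection.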
H: How do I get started with uCLinux? Right now I have ported an application from LPC2478 to Cortex M3 (LPC1788) and things are working fine at 120MHz. I am thinking of porting uCLinux to my LPC1788 custom board. The board consists of around 4 MB of NAND flash and around 32 MB of RAM. Is this enough? Frankly, I don't have much idea of how to port uClinux onto Cortex M3. Where do I begin? I think the first step is the bootloader, then compile uCLinux and my application: am I right? AI: First try to build uClinux from source to get a feel for it (toolchain, memory consumption etc.). Then look at the tips for porting uClinux. The quick rundown: Architecture: Check whether your architecture is supported or not. Create a new branch in linux/arch/ if needed. That step involves creating/modifying 20-30 files to handle CPU-specific instructions. Platform: Your processor is already supported by an existing linux/arch branch, but has differences that impact the kernel and peripherals. Create a new branch in linux/arch//platform/. This step creates traps, vector inits and interrupt handlers and involves 6 files. Board: Your architecture is very close to existing platforms and only minor changes have to be made to the kernel (e.g. memory description/layout, external peripherals). Usually you have to adjust the Makefiles and then fix all the complaints until it builds.
H: What sensor to choose to track human presence indoors? This is a hobby project about home automation. The idea is that of a distributed system of cheap microcontrollers that take measurements, communicate, output data and control appliances. One important task that this system should perform is to detect, for every room, whether there are humans inside (an awake-or-asleep distinction would be an over-the-top bonus :) ). I am still at a loss as to whether this is feasible. PIRs? Web cameras? Microphones? Optical counters at the doors? Ultrasound? Available processing power per sensor output: 5 MIPS of AVR instructions (8-bit RISC) for 3 seconds = 15 million instructions. A typical usage scenario would be: The apartment is empty (the system should know that). The door opens and a group of 4 people enters the hallway (the system should know that someone is in the hallway). Two go to a room (the system should track their path by room, with a lag of no more than several seconds, plus know that there is still someone in the hallway). Then one of them returns, and the other sits on a chair (the first one is to be tracked; the system should also know which room the sitting person is in). In the meantime the other two wander randomly in a 2 x 2 meter area in the hallway, taking shoes off and hanging coats (the system should know they are still in that area). One goes to another room, turns the radio on and goes to sleep (the system should know that there is a sleeping person there). Someone leaves the apartment and returns 10 minutes later (here the only requirement is that the system know that there are still persons inside). The three leave (the system should know that there is still someone inside). Much later, the sleeper gets up, wanders around for half an hour and then leaves the apartment (the system should track his position by room with a lag of no more than several seconds, and know when he leaves that there is now no one inside). I have no pets.
The same question has been asked and received some nice answers on a sister SE: the question. AI: Especially for a home hobby project I'd probably start with PIR (Passive InfraRed) sensors. They are cheap and very effective at detecting something warm like a human body moving around. However, PIR sensors will not detect static warm objects like someone sleeping or sitting still on the couch. With enough PIR sensors around the place, you can probably infer where people are motionless by where you know there was movement and in what direction. PIR sensors don't inherently give you direction, but enough of them activated in sequence does. For example three sensors triggered in sequence in a hallway is a strong clue someone is walking down the hall in that direction. If you saw motion of someone entering a room and then motion in the room, but nothing at the doorway, then you can make a good guess the person that entered is still inside but motionless. This system isn't foolproof, but PIR sensors are cheap and remarkably sensitive, so with enough of them I think you can get to quite a useable level. One thing to keep in mind is that other warm moving things will trigger PIR sensors too, like pets moving about. If you have a dog, then aiming the sensors so they only see motion a few feet off the floor helps. Cats jump around a lot, but are smaller, so maybe there is a way to not trigger on cats. This system will be a lot easier if you know the only warm moving things are humans though.
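The inference trick described above - several PIR sensors triggered in sequence imply a direction of movement - can be sketched in a few lines. This Python sketch is illustrative only; the sensor indexing scheme and timestamps are assumptions, not part of any real deployment:

```python
def walk_direction(triggers):
    """Infer walking direction from PIR triggers along a hallway, given
    as (timestamp, sensor_index) with sensors numbered 0..N in order."""
    ordered = [sensor for _, sensor in sorted(triggers)]
    if ordered == sorted(ordered) and ordered[0] < ordered[-1]:
        return "toward_end"
    if ordered == sorted(ordered, reverse=True) and ordered[0] > ordered[-1]:
        return "toward_start"
    return "unknown"
```

A real system would add time windows (triggers too far apart are separate walks) and feed these events into per-room occupancy counters, but the core idea is exactly this monotonic-sequence check.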
H: How to make 95%+ efficiency voltage regulator? I need to make an 11 volt voltage regulator. I would really like this to be a step-down switching regulator, not a linear voltage regulator, because I am going to be drawing between 1-2 amps at 11 volts, and if I were using a linear voltage regulator it would get very warm. The input voltage that I am using is going to be about 12.8 volts. I know a linear regulator under these circumstances would have 86% efficiency, but they are not reliable when the input and output voltages are this close together. The last option I can think of (which is still linear...) would be to use resistors to lower the voltage, but I don't think that would be any cheaper than using a step-down converter, because the resistors would have to "handle" 3.6 watts. There are regulators like this on eBay (which I am bidding on, but only up to $2), should I just buy something like that? Or would it be cheaper to make it myself? Even if it is cheaper, I don't know how, and that's why I'm here :) AI: That eBay thing is a switching regulator, aka a "switcher", aka SMPS (Switch-Mode Power Supply). These things can indeed reach efficiencies of 95 %, exceptionally 96 %. A lot of it depends on input and output voltage, and the highest efficiency is with parts that are designed for a specific input and output voltage. So the eBay thing won't always be as efficient, especially not at low output voltage or a high input/output voltage ratio. You can make them yourself; as you can see they only require a few parts, but designing a high efficiency switcher requires some experience to choose the right parts and make the right PCB layout. So I would suggest you buy one. I guess the component cost for the eBay module will be around 6 or 7 dollars, so 2 dollars would really be a bargain. edit You can't use the series resistor to regulate to 11 V.
At 2 A and 12.8 V in you would need a 0.9 Ω resistor, but when the current drops to 1 A that resistor's voltage drop will be reduced to 0.9 V, and the output will rise to 11.9 V.
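The arithmetic behind that warning is easy to check. A Python sketch of the series-resistor "regulator", using the exact numbers from the answer:

```python
def dropper_output(v_in, r_ohm, i_load):
    """A series resistor just drops I*R, so the output moves with the load."""
    return v_in - i_load * r_ohm

# Size R for 11 V out at 2 A from a 12.8 V source:
r = (12.8 - 11.0) / 2.0  # 0.9 ohm
v_full_load = dropper_output(12.8, r, 2.0)  # 11.0 V at the design load
v_half_load = dropper_output(12.8, r, 1.0)  # rises to 11.9 V at half load
```

That 0.9 V swing with load is the fundamental reason a fixed resistor can never regulate: the drop is proportional to the current, not to the error between the actual and desired output.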
H: Will it work if I solder a thin wire to a thick wire? I have a simple power supply for an RC airplane. I need to connect the speed controller to my brushless motor. The motor has very thin wires without a connector. But the speed controller has thick wires. I would like to connect them. Will it heat the wires? AI: Well, the limiting point here would be the motor wires. I would expect whoever designed the motor to have used appropriate wires on the motor so that the losses in the wires are negligible. I don't think that in this case it would be a heat problem to connect the speed controller wires to the motor wires. The resistance of a wire depends on the length of the wire, the wire's cross-section and the specific resistance of the material. The resistance is \$ R= \frac{ \rho l}{A}\$, where A is the area of the cross-section of the wire, l is the wire length and \$\rho\$ is the specific resistivity of the material. Here is a handy list of resistances of copper wires using the AWG system. Here is a chart with resistances of cables with cross-section expressed in \$mm^2\$. For the exact amount of heat, we'll need the current and the voltage of the motor. The power formula is P=V*I and based on it, we may be able to guess if the wires will heat up or not. The exact amount of heat and temperature rise depend on lots of factors, such as the thermal conductivity between the cable and air, whether the cable is being ventilated, and so on.
H: PWM speed controller with rectifier as voltage regulator? I was looking around for a voltage regulator when I thought of this, so it might be completely crazy. And I'm sorry if it is. BUT... Could I use a PWM speed controller as a voltage regulator if I were to use a cap to "smooth" out the voltage, and a simple four-diode full wave rectifier to make sure the voltage never goes negative? And I apologize if this is a ridiculous question. I don't know much about PWM or voltage regulators. AI: No you can't. You need an inductor, which 'reduces' your idea to the well-known concept of a switched regulator. The problem with your idea is that there is some resistance in the path between your power source and the capacitor, and this resistance will dissipate the heat that would otherwise be dissipated by the linear element (pass transistor or the like). Switching does not help. An inductor does help, because it stores the energy (and later on releases it) instead of dissipating it. Just in case you wonder (as I did long long ago): reducing the resistance does not help. You can prove mathematically that as the resistance approaches 0 the problem stays the same. And no, you cannot get it to be zero, which is just as well, because then the current would become infinite, which (at the very least) would give 'interesting' magnetic effects. Using PWM to deliver a 'reduced' power or as a Digital-to-Analog converter works only when you either smooth the PWM output with a filter (you can use a simple first-order RC if dissipation and bandwidth are no issues), or the thing you deliver the PWM to does not need a smooth supply (motors, lamps, heaters, etc. all fall in this category).
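The "smooth the PWM output with a filter" option mentioned at the end is easy to quantify for a first-order RC. A Python sketch with illustrative component values (the 25 kHz / 1 kΩ / 1 µF numbers are assumptions for the example):

```python
import math

def rc_gain(f_hz, r_ohm, c_farad):
    """Magnitude response of a first-order RC low-pass at frequency f:
    |H| = 1 / sqrt(1 + (2*pi*f*R*C)^2)."""
    return 1.0 / math.sqrt(1.0 + (2.0 * math.pi * f_hz * r_ohm * c_farad) ** 2)

def pwm_average(v_high, duty):
    """The DC level a well-filtered PWM output settles to."""
    return duty * v_high

# 25 kHz PWM into R = 1 kohm, C = 1 uF (cutoff about 159 Hz): the switching
# component is attenuated by a factor of roughly 157, while DC passes through.
ripple_gain = rc_gain(25e3, 1e3, 1e-6)
```

This also illustrates the answer's point about dissipation: the R in the RC filter burns power whenever the load draws current, which is fine for a DAC-style signal but not for a power supply; that is exactly the role the (lossless) inductor takes over in a real buck converter.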
H: SelectMap: Who drives the pins: FPGA or Microprocessor I'm reading this application note regarding SelectMap image loading for my Spartan6. On page 2 I read: The device has properly powered up, but the internal configuration memory needs to be reset. This portion of the configuration flow is signaled by PROGRAM going High and INIT going High The wording "is signaled" is a little unclear to me. Is it the FPGA that drives PROGRAM and INIT high during the "Device Initialization" phase, or should it be the microprocessor? Same goes for the "Configuration load" phase: The start of the configuration load phase is signaled by the INIT signal going High. Is that the FPGA or the microprocessor that drives INIT high? AI: Assuming that SelectMAP programming hasn't changed much, nobody "drives" INIT_B or PROGRAM_B high. Rather, a pull-up resistor is pulling them high when no one is driving them low. Read the wording on page 2 again, paying particular attention to how the word "drive" is associated with "low" and how "going" is associated with "high". This is because it's open drain, and so it can only be driven low, or floated. Initially, the FPGA is driving INIT_B low and PROGRAM_B low, to indicate that it is powering up. Once it is ready to clear the configuration latches, PROGRAM_B and then INIT_B will be "released" so that they can float high. They are in fact driven by the external pull-up resistor. The Micro can then prolong this stage by driving either PROGRAM_B or INIT_B low. Again, nobody ever drives those signals high. That is the responsibility of the pull-up resistor. Both the FPGA and the Micro can drive it low or let it float, but neither of them will drive it high.
H: Electrical element instead of diode? What electrical element can replace diodes, other than transistors, in an AC circuit? AI: There are many theoretical methods you could use to replace a diode. The complexity and cost would mean it is not really feasible on any sort of scale, but it is certainly doable. Take for example a buck converter circuit. There are losses in the recovery diode that limit the efficiency of the circuit. Some buck converter ICs have the option of controlling synchronised rectification to allow replacement of the recovery diode with a MOSFET. In this situation, the intrinsic diode of the MOSFET is initially used to allow the recovery current to flow through, and then the controller turns on the MOSFET, creating a lower resistance path for the current (that is, Rds(on) vs the Vf of the body diode). This could be extended to make an active diode. A comparator across each end of the MOSFET with the output connected to a gate drive is used to turn the MOSFET on. So while one side is positive, the MOSFET will turn on and allow current to flow in parallel with the body diode, and when the voltage reverses, the comparator will detect that and switch off the gate, thus turning the MOSFET off. You effectively have an active diode in parallel with a regular diode. Going back to your original question, there is no electrical element that can replace a diode (a p–n junction) other than another p–n junction (whether in a diode, transistor or MOSFET package). This element can be improved upon using a MOSFET and associated circuitry to reduce losses. This is often the case in switched-mode power supplies and motor controls.
H: What might be the cause of high pitch sound coming from a switching regulator circuit? We designed a switching regulator circuit using a 1.5MHz, internal-switch switching regulator (semtech.com/images/datasheet/sc185.pdf). Vin is 5V, Vout is 3V3. We have an input capacitor (47uF), an output capacitor (47uF) and an inductor (1uH). The problem is that we hear a high pitch sound coming — presumably — from the inductor when we turn the system on. It appears that the sound is more noticeable when the circuit is drawing very small amounts of current. As the current demand increases, the sound usually becomes unnoticeable, but not always. Any ideas what we might have done incorrectly? Is there any other information I can provide to be more specific? I've looked at the regulator output, just before the inductor, and I see some ringing, but I can't tell whether the ringing is normal or not. AI: The usual places sound comes from in electronic circuits are inductors and ceramic capacitors. The cross product of current and magnetic field is a force. Forces always work on two things, which in the case of an inductor are the core and individual segments of wire that make up the windings. At the right frequency, this can make the winding vibrate a bit, which you hear as sound. Ceramic capacitors exhibit the piezo-electric effect to varying degrees. The more efficient ceramics capacitance-wise are also more susceptible to this. If I remember right, barium titanate is particularly good at this since the titanium atom in the lattice changes between two energy states, which also causes it to change its apparent size. Yes, the ceramic is actually shrinking and growing very slightly as a function of voltage. I just recently had a problem with this in prototypes of a new product. A power supply capacitor was subjected to 5-10 kHz ripple, which caused the whole board to make an annoying whining sound.
I tested five different models from different manufacturers, but all the ones that had sufficient capacitance had the noise problem. I have now reluctantly switched to an aluminum electrolytic for that part. In your case your switching frequency of 1.5 MHz is way too high to be audible, so it can't be the switching frequency directly. Most likely your power supply is meta-stable and you are hearing the control fluctuations. There may not be much output ripple at the audible frequency, but you can probably see a little difference in the duty cycle at that frequency. At very low currents the control loop may be causing bursts of pulses with some dead time between bursts, which could have a strong component in the audible range. At higher currents the system is probably running in continuous mode and is more naturally damped, which is why the control response in the audible range decreases. Also look at the current draw of whatever the power supply is driving. That may be in the audible range, forcing the power supply control response into the audible range too.
H: Using 4 digit 7 segment LED I have a 7 segment display that has 4 digits. What will I need to make use of this? Can a single microcontroller handle the operation? I mean a PIC16F690... I also saw a MAXIM chip that drives these kinds of displays... is it absolutely required? Here is a picture from the datasheet; I don't know why there are 2 pin diagrams. Here is a link to the datasheet AI: The two schematics are two versions of the display, common cathode at the top, common anode at the bottom. I'll assume you have the common cathode version. You connect the segments A..G, DP via 8 series resistors to 8 I/O pins of the microcontroller. Driving a pin high will light that LED on the selected digit. To select any of the 4 digits you make the corresponding common cathode low via an NPN transistor, which you again drive from an I/O pin via a resistor. If your supply voltage is 5 V and you're using red LEDs then you can use 150 Ω resistors instead of 330 Ω. Also decrease the transistors' base resistor values to 2.2 kΩ, and use for instance BC337s for the transistors. To drive the full display you first make pin 12 low by driving its transistor with a high level, and set the I/Os for the segments of that digit. Some time later you switch pin 12 and the segments off, switch pin 9 on, and again set the segments for the second digit. And so on. If you go from one digit to another in less than 2.5 ms, then the whole display cycles in 10 ms, or at 100 Hz, which is enough to avoid noticeable flicker. You can use the Maxim driver, like the MAX7219, but it's Damn Expensive™: $12.80 in quantities of one at Digikey. The good thing about it is that it takes care of the multiplexing for you, so you just have to load it with the segment data for the 4 digits. It also has software brightness control. I checked the PIC16F690 datasheet, and unlike other microcontrollers its I/Os don't seem to be able to source 20 mA (which is disappointing).
So you'll need transistors on port 2 as well: R1 was one of the resistors on port 2. So you insert Q1 and R2 between them, and repeat that for each of the 8 segments. Attention, Q2 is a PNP! Any general purpose PNP transistor will do.
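If you drive the display from the microcontroller directly, the multiplexing firmware reduces to a segment lookup table plus a digit scan. Below is a sketch in plain C of the lookup part; the bit-to-segment assignment (bit 0 = A ... bit 6 = G, bit 7 = DP) is an assumption and depends entirely on how you wire the 8 segment pins:

```c
#include <stdint.h>

/* Assumed mapping: bit 0 = segment A ... bit 6 = G, bit 7 = DP.
   Adjust to match your actual segment wiring. */
static const uint8_t SEGMENT_MAP[10] = {
    0x3F, /* 0 */  0x06, /* 1 */  0x5B, /* 2 */  0x4F, /* 3 */
    0x66, /* 4 */  0x6D, /* 5 */  0x7D, /* 6 */  0x07, /* 7 */
    0x7F, /* 8 */  0x6F, /* 9 */
};

/* Segment pattern for one digit of a value up to 9999.
   position 0 is the least significant digit. */
uint8_t segments_for(uint16_t value, uint8_t position)
{
    while (position--)
        value /= 10;
    return SEGMENT_MAP[value % 10];
}
```

In the firmware you would call `segments_for()` for one digit inside a timer interrupt that fires every 2.5 ms or so, write the result to the segment port, and drive the matching cathode transistor — cycling through all four digits gives the 100 Hz refresh described above.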
H: Smart charging circuit for NiMH battery pack I want to build a solar powered charger for a NiMH battery pack to run a microcontroller. What are the fundamentals of building a smart circuit that will not overcharge the battery and will also allow current to be pulled directly from the panel? AI: In most cases with a small panel and a NiMH battery the peak charge rate is below 1C and the supply varies with insolation (sunshine level). In addition, if the battery is near the panel the battery temperature varies with insolation. All these factors make most usual NiMH charge termination algorithms and methods inapplicable. Negative delta V detection is problematic at best when your charge voltage is variable. Delta temperature rate rise is utterly swamped if the cell is sun exposed, and absolute cell temperature is not a good measure of endpoint for the same reason. In such cases a very reasonable charging strategy is to terminate charge at 1.45 V per cell. This can be adjusted for temperature. In my designs I also add charge current sensing and adjust the threshold down if the cell is still accepting large charge currents when almost charged. This helps compensate for a degree of variability between cells. Cells of different models and different manufacturers are usually reasonably consistent with the 1.45 V/cell setting, but some are fully charged at as little as 1.35 V/cell. If the cell output is wanted for other purposes when not needed for charging then charge control can use a series regulator or an on/off series switch. If the battery is the only load then a shunt regulator to dissipate excess power can be a good choice for smaller panels. Note that if the panel is taken "off charge" its voltage will drop, and hysteresis will be needed to prevent endless on/off charge cycling.
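The voltage-endpoint strategy with a current-based adjustment boils down to a small decision function. This is only a sketch of the logic described above — the pack size, the "large current" threshold and the reduced endpoint are invented placeholders for illustration, not values from the answer:

```c
#include <stdbool.h>

#define CELLS               4    /* assumed pack size                         */
#define END_MV_PER_CELL     1450 /* 1.45 V/cell endpoint from the answer      */
#define HIGH_CURRENT_MA     100  /* "still accepting large current": placeholder */
#define REDUCED_MV_PER_CELL 1400 /* lowered endpoint: placeholder             */

/* Return true when charging should stop.  pack_mv is the measured pack
   voltage in millivolts, charge_ma the measured charge current in mA.
   If the cell is still drawing heavy current near full charge, the
   endpoint is adjusted downwards, as the answer suggests. */
bool charge_complete(unsigned pack_mv, unsigned charge_ma)
{
    unsigned limit_mv = (charge_ma > HIGH_CURRENT_MA)
                      ? REDUCED_MV_PER_CELL
                      : END_MV_PER_CELL;
    return pack_mv >= limit_mv * CELLS;
}
```

A real design would also apply the temperature correction mentioned above, typically by subtracting a few millivolts per cell per degree above 25 °C.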
H: What is the size of a normal potentiometer screw hole? I have a potentiometer that looks like this: http://www.parts-express.com/images/item_standard/023-606_s.jpg Fig.1 I don't want to solder wire onto the leads. Instead, I want to do something like this: Fig.2 What is the standard size of the hole that I need to put the screw in? AI: What's your rationale for not wanting to solder to the terminals of the pot in Fig.1? They are designed to be soldered to wires. The best practice is to hook the wire through the little hole, and then solder it. (Special MIL-spec bonus if you can prevent solder from wicking up into the stranded wire.) For strain relief, add heat shrink such that it covers the terminal and some length of wire. Update. In response to Blake's comment. I can give you a couple of tips on soldering. Solder should melt on the pin (that includes wires) rather than on the soldering iron. That means that the pin needs to be heated up sufficiently. Flux helps remove oxidation. If you use no-clean flux, you don't need to clean it. Leaded solder has a lower melting temperature, so it's easier to use than lead-free. Maybe I'll be able to find a good tutorial video and post a link in another comment. Also, there ought to be a good "soldering tutorial" thread or wiki on this board too.
H: How to determine the RC time constant in a PWM digital to analog low-pass filter? I'm looking for the best RC time constant, and the reasoning behind it, for a low-pass filter that converts a PWM digital signal to analog, based on duty cycle, frequency and other parameters. PWM frequency is 10 kHz. AI: The best RC is infinite, then you have a perfectly ripple-less DC output. Problem is that it also takes forever to respond to changes in the duty cycle. So it's always a tradeoff. A first-order RC filter has a cutoff frequency of \$ f_c = \dfrac{1}{2 \pi RC} \$ and a roll-off of 6 dB/octave = 20 dB/decade. The graph shows the frequency characteristic for a 0.1 Hz (blue), a 1 Hz (purple) and a 10 Hz (the other color) cutoff frequency. So we can see that for the 0.1 Hz filter the 10 kHz fundamental of the PWM signal is suppressed by 100 dB, that's not bad; this will give very low ripple. But! This graph shows the step response for the three cutoff frequencies. A change in duty cycle is a step in the DC level, and some shifts in the harmonics of the 10 kHz signal. The curve with the best 10 kHz suppression is the slowest to respond; the x-axis is in seconds. This graph shows the response of a 30 µs RC time (cutoff frequency 5 kHz) for a 50 % duty cycle 10 kHz signal. There's an enormous ripple, but it responds to the change from 0 % duty cycle in 2 periods, or 200 µs. This one is a 300 µs RC time (cutoff frequency 500 Hz). Still some ripple, but going from 0 % to 50 % duty cycle takes about 10 periods, or 1 ms. Further increasing RC to milliseconds will decrease ripple further and increase reaction time. It all depends on how much ripple you can afford and how fast you want the filter to react to duty cycle changes. This web page calculates that for R = 16 kΩ and C = 1 µF we have a cutoff frequency of 10 Hz, a settling time to 90 % of 37 ms for a peak-to-peak ripple of 8 mV at 5 V maximum. 
edit You can improve your filter by going to higher orders: The blue curve was our simple RC filter with a 20 dB/decade roll-off. A second order filter (purple) has a 40 dB/decade roll-off, so for the same cutoff frequency it will have 120 dB suppression at 10 kHz instead of 60 dB. These graphs are pretty ideal and can be best attained with active filters, like a Sallen-Key. Equations Peak-to-peak ripple voltage for a first order RC filter as a function of PWM frequency and RC time constant: \$ V_{ripple} = \dfrac{ e^{\dfrac{-d}{f_{PWM} RC}} \cdot (e^{\dfrac{1}{f_{PWM} RC}} - e^{\dfrac{d}{f_{PWM} RC}}) \cdot (1 - e^{\dfrac{d}{f_{PWM} RC}}) }{1 - e^{\dfrac{1}{f_{PWM} RC}}} \cdot V_+\$ E&OE. "d" is the duty cycle, 0..1. Ripple is the largest for d = 0.5. Step response to 99 % of end value is 5 x RC. Cutoff frequency for the Sallen-Key filter: \$ f_c = \dfrac{1}{2 \pi \sqrt{R1 \text{ } R2 \text{ } C1 \text{ } C2}} \$ For a Butterworth filter (maximally flat): R1 = R2, C1 = C2
H: How do the different kind of lamp dimmers work Hooked up the scope to a dimmer expecting to see how it altered the waveform, and didn't see it do much of any difference from all the way down to all the way up. I bought an LED bulb and was told it needs a "special" CL dimmer. How do the different kind of dimmers work, and how do I manage to see what they do with my scope? AI: Wikipedia has a nice article on phase angle control which is used for most house hold light bulb dimmers. They usually don't work for halogen, LED, TL. This is a typical case where a single image says more than a thousand words: It works by chopping a (varying) part from the sine wave using a triac. The image shows a rectified wave, but the mechanism works equally well with an unrectified one, the latter being most common in household light bulb dimmers. The dimmer is connected in series with the lamp and therefore needs to see a minimum load (in the order of 10W) to work. This is the classical dimmer. Not sure how halogen / LED / CFL differs from this, but I know from experience (with CFL) that an unmatched dimmer can make the lamp flash at a really annoying frequency.
H: Thicker solder harder to melt? I'm trying to do some soldering on a PCB with a 20 watt iron and 1.2 mm 63/37 solder. From what I can understand, I'm supposed to touch the iron (has been tinned) to the point and heat it up then touch the solder to the point and it should start melting. But it takes like 20 seconds for it to heat up enough to melt that way instead of like 5 seconds in the videos I see. I end up having to touch the solder to the iron and try to let it flow on to the board. Would 0.8 mm solder work better? AI: I used a 25 W soldering iron for some time before I got a 80 W temperature controlled one and one thing I've noticed that helps is to have a bit of solder on the tip of the iron. The story told to beginners is not to try to transfer solder from hot iron to the joint and to instead apply solder directly from the wire. I won't say that that is incorrect, but it often helps to have a bit of solder on the tip. That solder will improve thermal connection between the tip, device and the pad. The amount of solder should be just enough so that once the tip is in contact with the device and the pad, solder from the tip is in contact with all 3. At one point you'll see that the solder from the tip is starting to flow into the joint. That is the moment when you should add solder from the wire to the joint. If you've done everything correctly, it will easily melt and flow into the joint and the flux will have the chance to clean the joint this way.
H: Touchscreen: how to choose? I am working on a small touchscreen extension to implement into a robot (show debug info, command some basic functions, show points on a map where the robot has to go, etc...). The microcontrollers used are Cortex M3. I am pretty new to the world of touchscreens, and I see there are different types, but I really don't know how to choose between them, or on which criteria. Astonishing colours or resolution aren't needed, I just need something reliable and handy. A size of about 5", 4:3 format should be great I think, but I will happily look at any suggestion. Can you help me choose, tell me where to buy, at which price, indicate some references, tips? Do you also have some advice about how to interface it? Thanks a lot for your help, Ryl AI: If I had to choose a touchscreen for my project I would prefer a capacitive one. Resistive touch screens may be cheaper but have some disadvantages compared with capacitive ones: They need calibration Are less sensitive Have a shorter life cycle One of the downsides of capacitive touch screens used to be the complexity of the required software and hardware, but there are now capacitive touch screen display controllers that solve this issue. Take a look at the NHD-5.0-800480TF-ATXI#-CTP from Newhaven Display, it may be what you are looking for. It includes a controller with an I2C interface and it seems fairly simple to use.
H: Flip flop/latch with isolated differential input and differential output I am looking to make something that takes a pulse of floating voltage (say from a feedback coil of a transformer), and maintains a differential output voltage depending on the sign of the amplitude of the pulse. Then when another pulse comes along with a different sign, the output would change sign as well. I naturally looked into flip flops/latches, but after some experimentation and closer reading, I realized they do not really provide a differential output, and they do not seem to work with floating inputs. They seem to work instead with pulses of non-floating voltages and send a high to one of two NAND/NOR gates. In summary: My problem is I am given an isolated voltage in the form of a pulse. I want something that will "remember" the sign of that pulse in the form of a voltage difference between two output leads. The reason I emphasize the "differential" part is that the two output leads of a flip flop/latch will not form a complete circuit when connected through a load or something that senses voltage differences. It is just an open switch and a closed switch. I may just be doing this wrong, or misunderstanding, but based on what I have tried, this seems to be nontrivial. AI: If I understand your question right, a simple centre tapped rectifier like this should do: L1 is the primary winding, L2 and L3 are the secondary windings. OUT1 and OUT2 go to your FFs (or whatever you are capturing with). Ignore the 1 Meg R3, it's just there to keep SPICE happy. You can use the correct windings ratio to adjust the levels as desired (may be useful if high voltages are involved), and add e.g. a couple of zeners to protect your inputs. There are also many other ways to do this, depending on exactly what you are trying to do. 
Simulation of above circuit (light blue is input pulse waveform, blue is negative pulse output, green is positive pulse output) EDIT - to make it clearer I simulated with a pulse file rather than a square wave as shown in the schematic:
H: How long will it take to completely run out of power for two A23 batteries (12 volt each) in a series circuit with 6 LEDs? I'm connecting 6 LEDs with two 12 volt batteries. Each LED needs 3.2 volt and takes 20 mA of current. I'm using two alkaline MAX 12 volt batteries (A23 LR23) and ATC is their manufacturer. I tried to calculate the batteries' lifetime like below (taken from a website): To find out how much energy I'm going to use I multiplied the Watts times the number of hours to get Watt-hours. So a 12 V battery rated (? google search) 1.2 Ampere-hour, then the energy is: 12 volt * 1.2 Ampere-hour (battery) * 2 batteries = 28.8 Watt-hours. The LEDs run at 3.2 V and use 20 mA each. So 3.2 volt * 6 LEDs (each needs 3.2 volt and current 20 mA) * (20 / 1000) Amp = 0.384 Watt. And finally the battery lasts: 28.8 watt-hour / 0.384 watt = 37.5 hours. Is this calculation correct? If not is it possible to help me correct my mistake? AI: The setup in your question is a bad one. First you parallel two batteries, which you shouldn't, because their voltages are never exactly the same. Their low internal resistance will cause a current from one battery to the other. So feed three LEDs from one battery, and the other three from the other. Then you place all LEDs in parallel, which means that you lose (12 V - 3.2 V) x 20 mA = 176 mW per LED in its series resistor, while the LED itself uses only 64 mW. Total power loss is more than 1 W. That's because of the large voltage difference between battery and LED. The best way to get a longer endurance is to keep losses in the series resistors as low as possible. So better place two times three LEDs in series, so that their total current is 40 mA instead of 120 mA. The power loss in the series resistors is then (12 V - 3 x 3.2 V) x 20 mA = 48 mW per 3 LEDs, or 96 mW in total. That's less than 10 % of the power loss for all LEDs in parallel. Then your batteries will last 100 mAh / 40 mA = 2.5 hours or 150 minutes. This is pretty optimal. 
The batteries' capacity is 1200 mWh, and the LEDs consume 384 mW, so with an ideal conversion you can get a little over 3 hours out of them. But the most efficient conversion using a switching current regulator will get you maybe 85 % efficiency, and then you only gain 9 extra minutes. edit re comments An alkaline battery's voltage quickly drops by 10-15 %, and then remains more constant for a great part of the discharge cycle. So either you calculate the resistors for a larger current at the start, and 20 mA for the rest, or for 20 mA at the start, and a lower current later on. The latter solution will give you a longer battery life, but a bit less brightness. jippie suggests to use a switcher anyway to get more out of the batteries, and it's a thought. You'll have to place the batteries in series to get 24 V to allow a voltage drop as high as possible. The larger Vin/Vout ratio of the switcher will make it less efficient, but overall you should get some extra time from the batteries.
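The power-loss comparison in the answer is just arithmetic, and checking it takes only a few lines. This sketch assumes, as the answer does, roughly 100 mAh per A23 cell (the real figure varies by brand and load):

```c
/* Dissipation in the series resistors for `strings` parallel strings of
   `leds_per_string` identical LEDs, each drawing i_led amps. */
double resistor_loss_w(int strings, int leds_per_string,
                       double v_bat, double v_led, double i_led)
{
    return strings * (v_bat - leds_per_string * v_led) * i_led;
}

/* Idealized runtime: capacity divided by total draw. */
double runtime_hours(double capacity_mah, double current_ma)
{
    return capacity_mah / current_ma;
}
/* resistor_loss_w(6, 1, 12.0, 3.2, 0.020) -> ~1.056 W (all parallel)
   resistor_loss_w(2, 3, 12.0, 3.2, 0.020) -> ~0.096 W (2 strings of 3)
   runtime_hours(100.0, 40.0)              -> 2.5 h                     */
```

The two calls mirror the answer's numbers: putting three LEDs in series per battery cuts resistor dissipation by more than a factor of ten and yields the 2.5 hour estimate.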
H: SelectMap accepts FPGA image but does not enter the startup sequence I am loading the image of my Spartan 6 and it seems that it cannot go to the final step of the process: the "Startup Sequence". After I load the image byte by byte, and add a lot of extra clock cycles at the end, the status register of the FPGA reveals that there have been no errors, and GHIGH STATUS status being high means that: The device has properly received its entire configuration data stream. The device is ready to enter the Startup sequence. I have checked that the INITB and PROGRAMB are not pulled low preventing the startup sequence, and in the status register everything looks very promising. Why is my Spartan 6 not entering the startup sequence after it received and is happy with the image? (Below is the status register after configuration.) [0] CRC ERROR : 0 [1] IDCODE ERROR : 0 [2] DCM LOCK STATUS : 0 [3] GTS_CFG_B STATUS : 0 [4] GWE STATUS : 0 [5] GHIGH STATUS : 1 [6] DECRYPTION ERROR : 0 [7] DECRYPTOR ENABLE : 0 [8] HSWAPEN PIN : 1 [9] MODE PIN M[0] : 0 [10] MODE PIN M[1] : 1 [11] RESERVED : 0 [12] INIT_B PIN : 1 [13] DONE PIN : 0 [14] SUSPEND STATUS : 0 [15] FALLBACK STATUS : 0 AI: Solved it! The problem turned out to be that I was using the JTAG clock for the startup sequence, rather than CCLK. The choice of clock is specified inside the "Startup Options" inside ISE.
H: Logic Conversion 3.3V to 5V (Current Related) 74HC 74HCT A beginner style question. I'm attempting to perform some logic level conversions from 3.3 V signals to 5 V using the 74HC4050D powered at 3.3 V and the 74HC541 powered at 5 V. Is it OK to directly connect the outputs of the 74HC4050 to the inputs of the 74HC541? Or do I need pull-up/pull-down resistors from the output of the 74HC4050D to the input of the 74HC541 for each pin? What should I be looking for in terms of the datasheets when determining these things (in my case I don't really aim to source or sink current to other devices; I just aim to do the voltage conversion)? Is it only leakage current that would be output in this instance? If one didn't add pull-up/down resistors or a series resistor, would the only risk be that of an unknown logic state if VCC were to change abruptly or signal noise was present? PS I'm aware that the maximum current into these devices Icc is about 50 mA and the max output of a particular pin Io is around 4-5 mA. Links for your reference: 74HC4050D 74HC541 AI: Whether a 3.3 V signal can reliably drive a 5 V logic input depends on the minimum guaranteed logic high level of the 5 V logic input. Whether that is sufficiently below 3.3 V for reliable operation varies with logic families and manufacturers, so consulting the datasheet is essential. For example, Microchip PICs with Schmitt trigger inputs usually require 80% Vdd for guaranteed logic high. That would be 4.0 V, which is too high to be driven from 3.3 V logic. An easy hack is to use logic with "T" in its name, like 74HCT as opposed to 74HC. The T stands for TTL, and means the logic input levels are compatible with the old TTL logic. This has a guaranteed logic high threshold sufficiently below 3.3 V, so driving one of these from 3.3 V logic is fine, although it's a good idea to check the specs in any one instance. 
So if you need a logic gate in there anyway, make it an HCT type and you get the 3.3 V to 5 V conversion for free.
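The whole comparison boils down to checking the driver's output-high voltage against the receiver's minimum guaranteed V_IH. A tiny sketch, using typical datasheet figures (74HC V_IH is commonly specified around 0.7 × Vcc, 74HCT at a fixed 2.0 V — verify against your exact part):

```c
#include <stdbool.h>

/* Typical minimum guaranteed logic-high input levels at Vcc = 5 V.
   These are common datasheet figures, not universal constants. */
#define VIH_HC(vcc)  (0.7 * (vcc))  /* 74HC: a fraction of Vcc       */
#define VIH_HCT      2.0            /* 74HCT: fixed TTL-style level  */

/* True if a driver whose high level is v_out_high can reliably
   register as logic high at an input requiring vih_min. */
bool drives_reliably(double v_out_high, double vih_min)
{
    return v_out_high >= vih_min;
}
```

Plugging in 3.3 V shows the point of the answer: 3.3 V clears the HCT threshold of 2.0 V comfortably but falls short of the ~3.5 V a 74HC input needs at a 5 V supply.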
H: avr adc differential mode frequencies I know that in single ended free running mode the following equation holds: f_ADC = f_clk/N = 13f_s where f_ADC is the ADC frequency, f_clk is the AVR's clock frequency and f_s is the sampling rate. The above equation means that if I want to convert an analog signal with a max frequency of 4 kHz then I should choose f_s = 8 kHz (Nyquist theorem), and then choose N such that f_ADC >= 13f_s = 13 * 8 kHz. Now, I've read in the ATmega128 datasheet that in the case of differential gain channels (e.g. to convert the difference of two analog signals), an internal f_ADC2 frequency is produced which is f_ADC2 = f_ADC/2 (or is it f_ADC2 = 2*f_ADC????) but I can't really figure out what happens with timings. So the actual question is: what equation holds (like the one above) if I use differential gain channels in free running mode? AI: From page 235 of the datasheet: Differential conversions are synchronized to the internal clock CKADC2 equal to half the ADC clock. This synchronization is done automatically by the ADC interface in such a way that the sample-and-hold occurs at a specific edge of CKADC2. A conversion initiated by the user (i.e., all single conversions, and the first free running conversion) when CKADC2 is low will take the same amount of time as a single ended conversion (13 ADC clock cycles from the next prescaled clock cycle). A conversion initiated by the user when CKADC2 is high will take 14 ADC clock cycles due to the synchronization mechanism. In free running mode, a new conversion is initiated immediately after the previous conversion completes, and since CKADC2 is high at this time, all automatically started (i.e., all but the first) free running conversions will take 14 ADC clock cycles. This is the only paragraph where they talk about CKADC2. It's nowhere mentioned in the electrical characteristics either. 
Moreover, the table preceding this paragraph says 13 ADC clock cycles for single ended conversions and 13 or 14 for differential, also labelled "conversion time (cycles)". It looks like the same ADC clock is used, and that CKADC2 is only relevant for that 14th cycle if a conversion is started when CKADC2 is high.
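Whichever cycle count applies (13 single-ended, 14 for free-running differential per the quoted paragraph), choosing the prescaler is the same arithmetic: pick the largest power-of-two N with f_clk/N ≥ cycles × f_s. A host-side sketch, assuming a 16 MHz clock for the example:

```c
/* Largest AVR ADC prescaler N (powers of two, 2..128) that still keeps
   up with the sample rate: f_clk/N >= cycles * f_s.
   cycles = 13 for single-ended, 14 for free-running differential.
   Falls back to 2 (the fastest setting) if even that is too slow. */
unsigned pick_prescaler(unsigned long f_clk, unsigned long f_s,
                        unsigned cycles)
{
    unsigned n;
    for (n = 128; n > 2; n /= 2) {
        if (f_clk / n >= cycles * f_s)
            break;
    }
    return n;
}
/* pick_prescaler(16000000UL, 8000UL, 14) -> 128
   (16 MHz / 128 = 125 kHz >= 14 * 8 kHz = 112 kHz)  */
```

For the 4 kHz signal in the question this lands on N = 128, which conveniently also keeps f_ADC within the 50–200 kHz range the AVR datasheets recommend for full resolution.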
H: Integrating another binary I am building a binary for ARM using GCC and CrossWorks. I have a secondary binary image that needs to be loaded to another IC during start up. I'd like to integrate this binary into my tool-chain; in other words, I make this binary part of my image so that every time I build, this binary gets integrated into the final binary file that will be loaded to the CPU. What is the best practice for this? Thx AI: You can use objcopy to transform the binary image into an object file. The object file will contain symbols that can be accessed from C code. Then tell GCC to link the object file in, and you can access the symbols in the object file using extern in order to load the data into the other IC. Here is a good tutorial on how to link in a binary blob, including how to figure out the options you need to pass to objcopy using objdump. And of course the objcopy man page and the objdump man page.
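With GNU binutils the workflow usually looks like the sketch below. The file names and the target triple are illustrative; objcopy mangles the input file name into the symbol names (`_binary_<name>_start` / `_end`), so check the names it actually generated with `objdump -t`:

```c
/* Build step (GNU binutils), run as part of the makefile:
 *
 *   arm-none-eabi-objcopy -I binary -O elf32-littlearm -B arm \
 *       secondary.bin secondary.o
 *
 * then add secondary.o to the link.  objcopy derives the symbols
 * below from the input file name "secondary.bin". */
#include <stddef.h>
#include <stdint.h>

extern const uint8_t _binary_secondary_bin_start[];
extern const uint8_t _binary_secondary_bin_end[];

void load_secondary_image(void)
{
    size_t len = (size_t)(_binary_secondary_bin_end
                        - _binary_secondary_bin_start);
    /* Stream len bytes from _binary_secondary_bin_start to the other
       IC here, using whatever interface driver (SPI, UART, ...) the
       project provides. */
    (void)len;
}
```

Because the blob becomes an ordinary object file, the linker places it in flash with everything else and every build automatically carries the current version of the secondary image.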
H: SelectMap: Should HSWAPEN be high? I'm in the process of debugging microprocessor loading of a Spartan 6 image through SelectMap. The HSWAPEN pin has caught my eye. In my design it is pulled low via a 10K resistor. However, when reading the status register I notice that the HSWAPEN pin is read as high, and I've confirmed this with an oscilloscope. Is this behaviour normal? This user guide (see page 62) explains: HSWAPEN is a configuration-related multipurpose pin. When it is grounded prior to configuration, it enables internal pull-up resistors in all of the I/O pins of the device Is HSWAPEN high because the internal pull-up is stronger than my pull-down? AI: All of the diagrams that I saw in the user guide as I took a quick pass have HSWAPEN tied directly to ground without a pull-down resistor. I couldn't find the HSWAPEN internal pull-up resistance explicitly stated anywhere, but Table 4 in the datasheet gives the Thevenin equivalent resistance for programmable inputs and outputs. I'm assuming HSWAPEN would have a similar value since it can be used for user I/O after configuration. Depending on the grade of device you are using, the values are different, but most are below 100 Ω, which would definitely be stronger than your 10 kΩ. And here is a Xilinx forum thread where a user states that 10 kΩ is too high and that 0 Ω should be used instead.
H: Finding air wires in Eagle I am almost done routing a board. However, Eagle is telling me that there is still one more wire. I have looked but I just can't seem to find it. Is there a way to make Eagle tell me where it is? AI: I can think of three options: Zoom out as much as you can then use the route tool on the tiny board; this catches the air wire, then zoom in again and route it. You can also disable the top and bottom layers so the air wire becomes more visible. Yet another option is to run the provided "length.ulp" script (File->Run... or ULP button). This script shows a list of all the nets; on that list there is a column "Unrouted", and if some net is not completely routed a value will appear here instead of "--". You can then type "show net_name" on the command line to highlight it.
H: Inexpensive Frequency Counter and Display for clock generator I have the desire to add a frequency display to a clock generator. Certainly not necessary, I can use my DMM, but it would be nice. I found a veritable plug & play frequency counter and LCD display for in excess of $50 USD at Digikey. I didn't have to pay a tenth of that for the clock generator. I have a $20, el cheapo, backup DMM for when I need to measure more than one thing at a time. It has a frequency counter/display. Therefore I would think there must be a way to get this functionality at a reasonable price. I'm sure I'm over-thinking this and a simple solution is evading me. Any ideas on how I can count the number of clock pulses per second and display the results (7 segment LED would be preferred)? AI: A small micro would do this easily. You can set up a capture/compare peripheral to count input pulses and read/clear it at a regular interval (e.g. 50 ms - use a timer interrupt for this). Then write the result (after multiplying by the necessary value - e.g. in the above example: capture_result * 20 = Hz) to whatever display you have in mind. A non-micro solution could be produced with some discrete logic ICs; there are quite a few designs floating around on the web. It would be a pretty "old school" solution but may be fine for a one-off home project (and cheap). Here is a 0-40 MHz example (from this page) For higher frequencies a small CPLD/FPGA would work well and would be a quick/easy HDL program to write - may be an idea if you have a small dev board lying around. There are almost certainly dedicated ICs/modules available if you have a look around. Most DMMs probably use an ASIC of some sort to provide multiple functions, though some of the cheaper models may use off the shelf stuff.
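The count-and-scale step the answer describes is a single line of arithmetic once the timer interrupt hands you the pulse count. A sketch, with the 50 ms gate from the example (this integer version assumes gate_ms divides 1000 evenly):

```c
#include <stdint.h>

/* Convert a pulse count captured over a fixed gate time into Hz.
   With a 50 ms gate: frequency = count * 20, as in the answer.
   gate_ms must divide 1000 evenly for this integer version. */
uint32_t counts_to_hz(uint32_t count, uint32_t gate_ms)
{
    return count * (1000U / gate_ms);
}
```

In the firmware you would call this from the 50 ms timer interrupt with the captured count, then clear the counter and push the result out to the 7-segment display; for high input frequencies make sure the counter and the uint32_t product cannot overflow within one gate period.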
H: How do they calculate the output power? I'm building a quadcopter using a Cortex A8 and I would like to use this USB WiFi dongle. According to the description this is 3000 mW. What does this mean? 3000 mW consumption? In USB 2.0 the maximum power is the following: 500 mA x 5 V = 2500 mW. What is the 3000 mW in this case? I also saw other WiFi dongles with 6000 mW powered from USB 2.0. AI: From the less than complete information provided in the advertisement, and the use of "3000 mW" on the box and on the device itself and elsewhere, it is almost certain that they are referring to the peak transmit power of the WiFi RF output stage. This is liable to be 3 Watts (= 3000 mW) DC in and not RF out, and there is a moderately good chance that the actual power level is not even that high. They say "... The high power adapter greatly improves stability. ..." and "... Great coverage 3000mW ...". It is common to market equipment with stated power ratings that are either above actual, or which relate to the power at the peak of a cycle or over a very short transmission period and which do not represent power as it would usually be measured or actual mean power over even quite short periods of time. Peak RF power, mean RF power while transmitting and mean RF power overall can all be substantially different, and it is possible to have peak powers of 3 W from a 2.5 W DC source, with the extra being provided for very brief periods by capacitors. If I was building a quadcopter I'd want the performance of my comms link to be very well known and to be based on a reasonably complete set of relevant data. This unit cites "3000 mW" in a number of places but does not mention transmit power in the specifications and makes no mention of receiver sensitivity. About the only concessions to formal specifications are a statement of maximum VSWR and claimed antenna gain. 
For this utterly indispensable part of a quadcopter system I'd buy something else which had more complete specifications and which was produced and backed by a reputable manufacturer. One possible precaution against uncertain link performance would be a "fly back towards launch position on loss of signal" mode, with GPS, inertial and other relevant data input - but that's a whole new topic.
H: Thevenin equivalent circuit with current and voltage source I have the circuit below and I have to calculate the complex current i1 using the Thevenin equivalent circuit method: I tried to follow the rules and open the terminal AB across the resistor: I want to ask if this is a correct way to convert the circuit to a Thevenin equivalent. Also I am not sure: is the capacitor's reactance -4i the same as -4j, or 4 at 90 degrees? Then the voltage across the AB terminals would be equal to the voltage across the capacitor? Is it OK in this case to treat the voltage source as a short? AI: The Thevenin voltage looking into the terminals A & B can be found by inspection using superposition. \$V_{th} = (-j2A) \cdot (-j4\Omega) + j4V \$ The first term is found by zeroing the voltage source on the right and the second term is found by zeroing the current source on the left. The Thevenin impedance is easy too; simply zero both sources to get: \$ Z_{th} = -j4 \Omega\$ The current is then found to be: \$I_{resistor} = \dfrac{V_{th}}{Z_{th} + 8 \Omega}\$ I highly recommend reviewing the late Dr. Leach's notes on using superposition. I've taught it this way for years. If you practice using superposition, you will amaze your friends and professors alike by solving many circuits by inspection.
H: What would cause a watch battery to leak? I am using a CR1220 Lithium Manganese Dioxide battery to keep a real-time clock chip running during periods when batteries are removed from a device. The battery is only connected to two pins on the RTC that are designed as battery inputs. The circuit works as expected for a while, but after several months the battery backup stops working. The RTC is rated for \$V_{bat}\$ from 1.8 V to 5.5 V. The voltage measured on the battery's terminals while still in the circuit is 3.25 V, but at the chip's battery terminals, \$V_{bat}\$ is 0 V. There is a white crusty trail leading from the seal between the positive and negative terminals on the battery to the negative electrode on the battery connector, which is covered with a white crust. When I clean the white crust off, the voltage at the chip's battery terminals matches the battery voltage, and the RTC works as expected. It appears that something (probably electrolyte) is leaking out of the battery, coating the negative terminal, and insulating it. I have seen this on at least four or five of our devices of the same design. What could be causing the batteries to leak? The spec sheet lists the maximum battery input leakage current as 100 nA at 5.5 V (we are using a 3.3 V supply). Could current leaking from the supply into the battery be an issue? [ Edit: I now see that the battery data sheet says the max reverse current is 1 \$\mu\$A, so I am guessing that is not the problem ] AI: It is likely that your computer system is "charging" the battery when powered on and causing the problem. See (2) below. (1) If battery charging as discussed below is not the problem then the remaining choices appear to be bad environment or excessive load. The RTC load in backup mode should be under 2 uA (1.6 uA max at 25C, 5 uA max across temperature range) according to the ISL12020M RTC datasheet that you cited. 
Battery capacity of the Energizer CR1220 cell that you cite should be in excess of 45 mAh at a 1.8 V endpoint and over 42 mAh if you use a series Schottky diode. Even using a series silicon diode such as a 1N4148 (with much lower reverse leakage than a Schottky diode) will give more than 41 mAh. I say "more than" in each case as this is based on the supplied curve at 62k ohm load, or around 40 uA - so you should get usefully more capacity at around 2 uA. 5 uA at say 40 mAh = 8000 hours, and the more likely < 2 uA = 20,000 hours, so the experienced corrosion problems after a few months' use do not point to battery overload stress being the source of corrosion. Which leaves environment. Are the battery holders formal ones and correct for this cell - ie are the materials compatible? Is there a corrosive atmospheric component? You could try placing two cells in two holders loaded with say 330k ohm and 1 megohm for about 10 uA and 3 uA initial currents and place them in a sealed container with desiccant, and two more similarly loaded cells located within the equipment involved with exposure to the air. Spraying the battery assembly with a complete but not too generous coating of polyurethane clear enamel (or a formal conformal coating of your choice) would help establish whether the corrosion came from without or within. Does the corrosion product match the electrolyte of the cell? What temperature ranges are being experienced? (2) Battery charging? Measure the battery voltage out of circuit. Power the PC and measure the peak output voltage on the battery terminals that occurs with the battery out of circuit at any time during operation. If V_PC_max > V_battery by even a small amount then you are charging the battery and violating the reverse charge spec. If charging is occurring (which seems likely) the options are a different battery technology or a series Schottky diode. 
The diode will reduce available battery capacity relatively slightly (see calculations above) and may be acceptable, but may need system modification in some way. A small silicon diode would also probably be acceptable and would reduce reverse current even more. Sharptooth asked: Why will connecting a diode in series reduce the available battery capacity? The diode reduces the available voltage by "one diode drop". As the battery voltage decreases a point will be reached where it is no longer adequate for the task. When a diode is in series with the battery feed, the point of lowest useful voltage will be reached when the actual battery voltage is still one diode drop above this voltage. How much of the capacity is wasted by this loss of voltage depends on the general battery characteristics, the load minimum voltage requirement, and the shape of the battery curve at different load levels. For typical CR1220 Lithium Manganese Dioxide cell characteristics see this Maxell CR1220 datasheet. Two graphs from this data sheet are copied below as Fig 1A and Fig 2A. Modified versions with output voltage reduced by one diode drop are shown as Fig 1B and Fig 2B. I have changed the voltage by roughly 0.3 - 0.4V. A Schottky diode's voltage drop depends on current drain and diode characteristics, and as small as 0.3V to 0.4V is typical enough at low currents. Even lower may happen, but no guarantees. Fig 2A / 2B gives a good example of the effect of adding a diode. Fig 2A is a discharge curve set with the battery alone, and for Fig 2B I have offset the curves downwards by about 0.4V to simulate the drop in a series diode. As an example, imagine that a load required at least 2 V output. The blue vertical lines show where the output voltage drops under 2V for a 0 degrees C discharge with a 30 kOhm load: for the original battery ~= 380 hours, and with a diode ~= 325 hours. The reduction due to the diode = (380-325)/380 x 100% ~= 14%. 
At -10C the durations are about 340 and 280 hours, or a reduction of ~= 18%. The results will vary with Vcutoff, Rload, temperature and more. The most significant results will occur when a load needs a relatively high voltage that is not much less than the Vout over most of the capacity range. eg in Fig 1A, if the load required 2.75V minimum then an 82k load would last about 700 hours and a 5.6M load would last about 40,000 hours with the battery alone. BUT with a series diode as shown, anything under 1 MOhm would not run at all, and a say 5.6 MOhm load would be touch and go - ie it may not run at all, or may run for 10,000+ hours, depending on variations in cells.
H: NMOS FET selection for reverse polarity protection I am working on a reverse polarity protection circuit, similar to that in Figure 2 of SLVA139: Reverse Current/Battery Protection Circuits. Here is my circuit: My case is slightly more complex due to the possible input voltage ranging from 5-40V. Most MOSFETs seem to have a maximum gate-source voltage VGS of 20V, so I need the Zener clamp on the gate (or a very large/expensive FET). The maximum input current will be about 6A. What I'm wondering is, what FET characteristics actually matter in this configuration? I know that I definitely want a drain-source breakdown voltage BVDSS high enough to handle the full input voltage in the reverse polarity condition. I'm also pretty sure I want to minimize RDS(on) as to not introduce any impedance in the ground circuit. Fairchild AN-9010: MOSFET Basics has this to say about operation in the Ohmic region: "If the drain-to-source voltage is zero, the drain current also becomes zero regardless of gate–to-source voltage. This region is at the left side of the VGS– VGS(th)= VDS boundary line (VGS – VGS(th) > VDS > 0). Even if the drain current is very large, in this region the power dissipation is maintained by minimizing VDS(on)." Does this configuration fall under the VDS = 0 classification? That seems like a somewhat dangerous assumption to make in a noisy environment (this will be operating in the vicinity of various types of motors), as any voltage offsets between input supply ground and local ground could cause current to flow. Even with that possibility, I'm not sure I need to spec for my maximum load current on the drain current ID. It would then follow that I don't need to dissipate very much power either. I suppose I could mitigate the problem by Zener clamping VGS closer to VGS(th) to reduce drain current/voltage? Am I on the right track with this, or am I missing some critical detail that's going to make a tiny MOSFET blow up in my face? 
AI: The use of a MOSFET for reverse voltage protection is very straight forward. Some of your references are correct but of low relevance and are tending to make the problem look more complex than it is. The key requirements (which you have essentially already identified) are MOSFET must have enough Vds_max rating for maximum voltage applied MOSFET Ids_max rating more than ample Rdson as low as sensibly possible. Vgs_max not exceeded in final circuit. Power dissipation as installed able to sensibly handle operating power of I_operating^2 x Rdson_actual Power dissipation as installed able to handle turn on and off higher dissipation regions. Gate driven to cutoff "rapidly enough" in real world circuit. (Worst case - apply Vin correctly and then reverse Vin instantaneously. Is cutoff quick enough?) In practice this is easily achieved in most cases. Vin has little effect on operating dissipation. Rdson needs to be rated for worst case liable to be experienced in practice. About 2 x headlined Rdson is usually safe OR examine data sheets carefully. Use worst case ratings - DO NOT use typical ratings. Turn on may be slow if desired but note that dissipation needs to be allowed for. Turn off under reverse polarity must be rapid to allow for sudden application of protection. What is Iin max ? You don't say what I_in_max is and this makes quite a difference in practice. You cited: "If the drain-to-source voltage is zero, the drain current also becomes zero regardless of gate–to-source voltage. This region is at the left side of the VGS– VGS(th)= VDS boundary line (VGS – VGS(th) > VDS > 0). and Even if the drain current is very large, in this region the power dissipation is maintained by minimizing VDS(on)." Note that these are relatively independent thoughts by the writer. The first is essentially irrelevant to this application. The second simply says that a low Rdson FET is a good idea. You said: Does this configuration fall under the VDS = 0 classification? 
That seems like a somewhat dangerous assumption to make in a noisy environment (this will be operating in the vicinity of various types of motors), as any voltage offsets between input supply ground and local ground could cause current to flow. Even with that possibility, I'm not sure I need to spec for my maximum load current on the drain current ID. It would then follow that I don't need to dissipate very much power either. I suppose I could mitigate the problem by Zener clamping VGS closer to VGS(th) to reduce drain current/voltage? Too much thinking :-). When Vin is OK, get the FET turned on asap. Now Vds is as low as it is going to get (Ids x Rdson), and dissipation is set by Ids^2 x Rdson, where Ids = your circuit current. At 25C ambient, Rds will start at the value cited at 25C in the spec sheet and will rise if/as the FET heats. In most cases the FET will not heat vastly. eg a 20 milliohm FET at 1 amp gives 20 mW heating. Temperature rise is very low in any sensible pkg with minimal heatsinking. At 10A the dissipation = 10^2 x 0.020 = 2 Watts. This will need a DPAK or TO220 or SOT89 or better pkg and sensible heatsinking. Die temperature may be in the 50-100C range and Rdson will increase over the nominal 25C value. Worst case you may get say 40 milliohm and 4 Watts. That is still easy enough to design for. Added: Using the 6A max you subsequently provided. PFet = I^2.R, so R = P/I^2. For 1 Watt dissipation max you want Rdson = P/I^2 = 1/36 ~= 28 milliohm. Very easily achieved. At 10 milliohm, P = I^2.R = 36 x 0.01 = 0.36W. At 360 mW a TO220 will be warm but not hot with no heatsink but good airflow. A trace of flag heatsink will keep it happy. The following are all under $1.40/1 & in stock at Digikey. LFPAK 60V 90A 6.4 milliohm !!!!!!!!!!! TO252 70V 90A 8 milliohm TO220 60V 50A 8.1 milliohm You said: I suppose I could mitigate the problem by Zener clamping VGS closer to VGS(th) to reduce drain current/voltage? No! Best saved for last :-). This is the exact opposite of what is required. 
Your protector needs to have minimal impact on the controlled circuit. The above has maximum impact, and increases dissipation in the protector over what can be achieved by using a sensibly low Rdson FET and turning it on hard.
H: Safety precautions working with high voltages (Nixie Clock) I'm building a nixie clock kit with these instructions. I'm up to the part where I have to test (page 17) the high voltage components (the power supply is 12V 250 mA), which go up to 170 volts (page 5) and are potentially lethal. Now I don't know if they put that disclaimer in just to cover themselves, or if there's really a likelihood of killing myself assembling a nixie clock kit. I've done some electronics back in high school a few years ago, but I'm no electrical engineer or electrician, so I'm wondering what kind of safety precautions I should take? I don't suppose dishwashing rubber gloves suffice? AI: That supply could very easily kill you. That said, I've (long ago) had a number of shocks at that level with no long term ill effects - BUT some people manage to die on the first encounter. Try not to ever find out which category you will be in. (I've also had numerous 230 VAC mains shocks and a few at around 1000 VDC, with, in all cases, never more than a nasty experience at the time. The last was long ago and I aim not to repeat them if at all possible.) (230 VAC is probably the most "disturbing".) A DC supply has a "can't let go" effect, clamping the muscles. You don't want to EVER experience this. HV is not something to worry about overly much - but it must not be ignored. You may be able to receive say 100 shocks in quick succession from such a system and suffer no consequences apart from nightmares and a lifelong aversion to digital clocks of any sort, BUT it could kill you along the way. Anything in the area outlined in red in the diagram below will be potentially lethal, and anything outside it MAY be :-). Historical advice is to keep one hand in your pocket while testing, to prevent accidentally closing a hand-to-hand circuit via your chest/heart. That has some merit, BUT slow, deliberate, thought-out actions are at least as valuable. 
Rubber dishwashing gloves would offer very substantial protection if dry and not punctured. Puncturing can happen on a small wire end. They can be used as an added safety feature and PROBABLY make things much safer, BUT act as if you are not wearing them. If I "MUST" work near live conductors I try to keep my fingers curled inwards, so that a hand clench triggered by current will not cause grasping of a live conductor. Brushing the back of a hand against a live HV conductor is liable to cause a muscle contraction towards your body, BUT do not ever rely on this. The best method is to have HV turned off until it is needed for testing. Think about what you are going to do, and have tools, meters etc ready. If you can attach a meter with a test clip with power off, so much the better. If a test clip will not safely and reliably attach to the HV target, you can solder a wire to HV when power is off (of course) and connect that to the meter probe. After you have experience with such things you are liable to think nothing of measuring say 230 VAC mains or 500 VDC with two probes with power on - but resist the urge to leap in and do what will be safe enough with experience until you have some experience - or you may never acquire any :-(. It's really rather safe most of the time. But it's better to be safe than sorry as a beginner. Power on, think, test, think, power off. Be SURE power is off. Be SURE power is off. Be SURE power is off. I have seen power on when it was thought to be off happen often enough over the years that I am quite obsessive about checking. If a mains cord is involved I am liable to turn power off, pull out the plug, wave the cord to be sure the plug is the correct one, and place the plug near the gear being worked on as an indication that it is safe. That's obsessive. I'm alive. I recently installed a new domestic stove in place of an old one for a friend, with only a wall mains switch between me and mains. Fuse still in for various reasons. "Safe enough" but potentially lethal. 
Tested with meter. Shorted all leads (PNE) together to ensure no mains on. Treated all wiring as if alive as much as possible throughout job. Obsessive. Alive. Think carefully. Act slowly. Being very safe is easy.
H: Using a Shunt Resistor to measure current Usually when we measure current, we have to break the circuit and place the ammeter in series. However, if we want to make it easy to measure current at various points in a permanent circuit (for whatever reason), can we add very low resistances in the circuit at those points, measure the voltage across them instead, and use Ohm's law to find the current? If the effect of those resistors is negligible, and they have no effect on the functioning of the circuit, is doing this OK? edit: I can't think of a suitable title for this question, help on that would be appreciated as well :) AI: Yes, that's a good way of doing it, if the effect of the measurement resistors is indeed negligible. That could be a problem with low voltages: you want to have a resistance high enough to measure at least several tens of mV, but if that branch has only 100 mV between nodes then the resistor may affect the circuit's operation. But often you don't need a shunt resistor: if you already have a resistor in the branch, you can measure across that. Note that for AC measurements the current you measure will always be in phase with the branch's voltage, and that may be quite different from the current's real phase. For instance, if you want to measure the current through a capacitor, that will be 90° out of phase with the voltage, but what you measure across the resistor will have 0° phase shift.
H: Spotify controlled by Arduino I want to build a Spotify controller [next-song button for the moment] with Arduino, to build something like this song changer physical song changer What do I need to control my Spotify, using a Windows machine? [using Processing?] The examples I have seen use a Mac and AppleScript spotify box So do I need to connect to the Spotify API with Processing? Do you guys know of an example of how to do this? If not, what would be the best way to accomplish it? Thanks! AI: How about making your Arduino behave like a USB keyboard, and then adding a keyboard shortcut for the features you want, like "next song". I think you can use the LUFA USB library on the Arduino.
H: Electron flow in cables I'm trying to understand how the current flows and how network cards receive the electrons (packets) in a ring network (all the servers are connected to the same cable). My question is: in the case of a broadcast, one server will produce current and put it on the wire, and then the electrons will flow and get captured by the first server down the line, but how do the electrons reach the second server? Can someone provide some references on electron flow in cables (some basic stuff)? Thanks AI: In a ring, all communication is really one-directional and point to point. The ring topology only happens because each node has two ports, one for receiving from upstream and one for sending downstream. When a node does not consume a packet (and sometimes even when it does), it re-transmits the bits it receives from the upstream port to the downstream port. Electrically each link is point to point, making termination easier. Even modern ethernet is all point to point. The illusion of a shared network segment is created by hubs and switches. Hubs merely pass on what they hear on one port to the other ports, and are not really used anymore today. Switches look at the destination addresses of packets and pass them on to specific ports as appropriate. Everything is point to point again, but this time bi-directional at the cable level at least. Inside each cable are two twisted pairs, one for carrying the bits in each direction. That actually makes each pair point to point and uni-directional. The original ethernet and then ThinLAN were true buses, but you'll only find those in museums today. True buses are still in use, with CAN being the top example. However, it runs much slower (up to 1 Mbit/s) and is for a different purpose than general networking.
H: Explaining a car voltage regulator circuit I need to come up with a solution for a voltage regulator to be used in a vehicle, regulating ~12V from the car battery to 5V used by Atmel AVR microcontroller. I've found this schematic on the Internet: While I understand the most part of how this circuit works, I have a few questions about it: What's the purpose of R30 resistor on the input side? Why are there two capacitors on each side of linear regulator LM7805? This answer to another question might be the answer I'm looking for, but I'm not sure about it. If this answer is related to my question and the use of two capacitors is to reduce resistance and inductance, why are such different capacitor ratings used (0.1 µF and 470 µF)? Taking a single pair of capacitors, why is one of them polarized and the other one is not? Are there any drawbacks if using capacitors with larger capacitance instead of those displayed on the schematic? Are there any drawbacks if using capacitors with larger break-down voltage instead of those displayed on the schematic? Thanks in advance. AI: R30 limits the charging peak current to the capacitor somewhat, but at 1 ohm it still allows for 12 A, so of little use there. Also would limit the current through the zener if there are peaks above 20 V. The larger capacitors work less well at higher frequencies, and that's where the smaller ones take over. 470 µF in a non-polarized version would be expensive, but there would be nothing against it. All large capacitors are polarized. On the output it would give an extra load for the 7805 to charge it. On the input too for the battery, but that can deliver more than enough current. No, except that they're larger. Keep in mind that the input-output difference for the 7805 is about 5 V (12 V - 1 V for the diode and 1 V for R30 - 5 V out) and that at 1 A out the regulator will dissipate 5 W, so it will need considerable cooling (sizable heatsink) if you want to draw that much current.
H: What is a multiplexed network? What is a multiplexed network? Multiplexing means many to one, and sending the result over the network. As far as I know, Controller Area Network (CAN) supports multiplexing. How? CAN is said to support multicast, one to many (a kind of de-multiplexing). Then what is multiplexing and a multiplexed network? AI: A multiplexed network means that several messages use the same transmission medium at the same time (see the short version from Wikipedia below). In telecommunications and computer networks, multiplexing (also known as muxing) is a method by which multiple analog message signals or digital data streams are combined into one signal over a shared medium. The aim is to share an expensive resource. For example, in telecommunications, several telephone calls may be carried using one wire. Multiplexing originated in telegraphy, and is now widely applied in communications. The multiplexed signal is transmitted over a communication channel, which may be a physical transmission medium. The multiplexing divides the capacity of the high-level communication channel into several low-level logical channels, one for each message signal or data stream to be transferred. A reverse process, known as demultiplexing, can extract the original channels on the receiver side. I have not used CAN, but reading this white paper about CAN and multiplexing they seem to be using the same definition; if you look at, for instance, pages 9 and 7, you will note that CAN uses only one cable that connects all nodes.
H: Trying to get that tube sound from a gainclone amp A Gainclone amplifier makes for a nice simple benchmark/monitor amp. It is a class AB linear amplifier capable of 20 W - 120 W of power, and is based on a monolithic amplifier chip such as the LM1875, LM3875, LM3886, or LM4780. What different ways are there to add controllable even order harmonic distortion to its sound? AI: You ask about getting "tube sound" but then presume the answer by suggesting a specific method. In addition to what you suggest, try adding a small and variable amount of mains frequency signal to the output. This simulates heater hum, which is a low level but nearly unavoidable characteristic of AC heated filament* tube amplifiers. The ear/brain associates this sound with "tube amplifiers". This is NOT mentioned in the Wikipedia "Tube Sound" page, but I have long ago heard it mentioned seriously as a factor, so it is well worth looking at. Even harmonics are referred to (by some) as octave harmonics, as they share a 1:2 ratio with the original, and then 1:4 etc. Processing which emphasises this effect will influence the "octave sound". There are many ways of doing this - or trying to. This remarkably good page, A Musical Distortion Primer, discusses the underlying principles and then proposes about 15 ways or variants of achieving such effects. One obvious method is to use full wave rectification in a variety of ways. A full wave rectifier fed with the signal, and with a portion of its output summed with the input AC, will give even harmonic effects which may be deemed to be useful. Wikipedia - Tube Sound provides a very extensive overview of the subject, but with minimal circuitry. Some useful user discussion is here. Of some use: http://www.geofex.com/effxfaq/distn101.htm * The large majority of tube amplifiers utilise AC heated filaments.
H: On the use of "BLOCK INTERCLOCKDOMAIN PATHS" I based an FPGA design on Lattice reference code that, in the timing constraints .lpf file, specifies: BLOCK INTERCLOCKDOMAIN PATHS The design two main clock domains are 100Mhz and 125Mhz so I expect them to drift through phase relationships that violate cross-domain setup and hold times periodically. Each clock domain has appropriately constrained frequencies and the transfer of data between domains looks sound. Lattice themselves post advice that seems to discourage the use of BLOCK INTERCLOCKDOMAIN PATHS in their FAQ: Question How can I block clock domain transfers where I have synchronizer circuits? Answer ... trace analyzes all clock domain transfers in which it can relate the source and destination clock domains. This may not always be desired. You may utilize your own synchronizer to handle the transfer between the clock domains. In this case trace should not analyze the clock domain transfer. This can be done in three ways: BLOCK INTERCLOCKDOMAIN PATHS This preference will block all clock domain transfers in the design. If all of the clock domain transfers in the design are handled by the logic within the design then this can be used. Be careful with this preference since it will stop trace from analyzing all clock domain transfers. BLOCK PATH FROM CLKNET "src_clk" TO CLKNET "dst_clk" This preference will block all clock domain transfers from src_clk to dst_clk. This covers all of the transfers between these two domains. All other clock domain transfers will be reported and timed by trace. BLOCK FROM CELL "myff1*" TO CELL "myff2*" This preference will block the clock domain transfer from myff1* to myff2*. This is a very specific path and is useful if you have several different types of clock domain transfers between two domains. For example, a design may contain an asynchronous FIFO and a FF-to-FF domain transfer between the same domains. 
The asynchronous FIFO transfer can be safely blocked, but the FF-to-FF transfer needs to be timed. And: Question What is the "BLOCK INTERCLOCKDOMAIN PATH" preference used for? Answer This constraint is used to block paths that involve data transfer between registers that are clocked by different clock nets. If used without understanding, it will block all clock domains from each other, even if they are supposed to be related. This can cause your tools to not calculate the necessary routing delay between related clock and data paths, and can lead to intermittent setup and hold errors in your design. It's best to use a BLOCK CLKNET for each path that is unrelated, in case you have related clocks. I feel obliged to remove the catch-all constraint given this advice and explicitly determine the relevant cells in my synchroniser (and their path in the hierarchy) that cross the domains, then block analysis of each known crossing explicitly at the cell level. I know where these should be after all - I designed them in, but the exact hierarchy names will take a while to find from the timing report violations I expect when I remove this constraint. This will probably take a day and a few P&R cycles to get them all. The net effect will probably end up being the exact same design I have at the moment, with a longer constraint file and the same near-critical paths. So - would you bite the bullet and take out the catch-all BLOCK INTERCLOCKDOMAIN PATHS, and sub in BLOCK .. CELL constraints for every crossing? Or is it not worth the effort and keep the catch-all? AI: What you are doing is correct. The key part is the synchronizer. If the only inter-clock-domain nets are processed there, you can ignore the warning. Your plan to remove constraints on other nets isn't the right plan. If you get warnings apart from your synchronizer, that means you have clock crossings elsewhere. You need to fix them.
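For the poster's specific 100 MHz / 125 MHz pair, the middle option from the Lattice FAQ could look roughly like this in the .lpf file. The clock net names here are hypothetical placeholders; they must match the actual net names in the design, and the cell-level BLOCK FROM CELL form is still the finer-grained choice if un-synchronized FF-to-FF paths exist between the same domains:

```
# Replace the catch-all:
#   BLOCK INTERCLOCKDOMAIN PATHS
# with explicit per-pair blocks covering only the synchronized crossings:
BLOCK PATH FROM CLKNET "clk_100mhz" TO CLKNET "clk_125mhz"
BLOCK PATH FROM CLKNET "clk_125mhz" TO CLKNET "clk_100mhz"
```

Any inter-domain violations that trace still reports after this change would point at crossings outside the synchronizers, which would need fixing rather than blocking.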
H: ATmega328 UART program: compiler error on undeclared variable on register name ending with "n" I'm trying to get the ATmega328 to communicate over a serial port with my PC. I'm using HyperTerminal to echo back the keyboard character entries. I'm using the MAX3232 chip to do the communication with the computer. When I try to compile, it gives an error and says that the variables in this function are undeclared, but they aren't variables - they are data register assignments. If I replace the 'n' char in the register assignments with a number it will compile and program, but no info is displayed in HyperTerminal. What should the 'n' be set to? My RX is on pin 2 and TX is on pin 3, if that helps. It is being programmed, because I've done the blinky light programs. unsigned char USARTReadChar( void ) { //Wait until data is available while(!(UCSRnA & (1<<RXCn))) { //Do nothing } //Now the USART has got data from the host //and it is available in the buffer return UDRn; } THE ENTIRE SOURCE CODE #define FOSC 1843200 // Clock Speed #define BAUD 9600 #define MYUBRR FOSC/16/BAUD-1 //This function is used to initialize the USART //at a given UBRR value void USARTInit(unsigned int ubrr) { //Set baud rate UBRR0H = (ubrr>>8); UBRR0L = ubrr; //Enable the receiver and transmitter UCSR0B = (1<<RXEN0)|(1<<TXEN0); // Set frame format: 8 data bits, 2 stop bits UCSR0C = (1<<USBS0)|(3<<UCSZ00); } //This function is used to read the available data //from the USART. This function will wait until data is //available. 
unsigned char USARTReadChar( void ) { //Wait until data is available while(!(UCSR0A & (1<<RXC0))) { //Do nothing } //Now the USART has got data from the host //and it is available in the buffer return UDR0; } //This function writes the given "data" to //the USART, which then transmits it via the TX line void USARTWriteChar(unsigned char data) { //Wait until the transmitter is ready while(!(UCSR0A & (1<<UDRE0))) { //Do nothing PORTD ^= 1 << PINB2; } //Now write the data to the USART buffer UDR0 = data; } int main(void) { DDRB |= 1 << PINB2; //Variable declaration char data; USARTInit(MYUBRR); //Loop forever while(1) { //Read data data = USARTReadChar(); /* Now send the same data back, but surround it in square brackets. For example, if the user sent 'a' our system will echo back '[a]'. */ USARTWriteChar('['); USARTWriteChar(data); USARTWriteChar(']'); } } AI: The 'n' is meant to be a placeholder for the actual UART peripheral that you want to use when discussing the set of registers associated with the UARTs. So if you had two UARTs, you could read the status register from the first UART by replacing it with a '0' or the second UART with a '1' like: UCSR0A or UCSR1A - note that these two are #define'd for you in a register definition file. Since we only use the UCSRnA convention when we refer to these registers in discussion, it will not be #define'd to a register, and thus using it in code will throw an error. You can follow your #include statements to find where they are defined. The register definition file and datasheet are usually two things that I have handy when working. The ATmega328 only has one UART peripheral, but follows the same naming convention. So you would replace it with a 0. Regarding being able to read/write to a serial port - assuming that you have hooked up your MAX232 correctly, have you set your baud rate correctly on both the ATmega and the PC? Post the rest of your UART initialization code if you want further help debugging that. EDIT: Summary of follow-up comments. 
Check your clock frequency and make sure it's correct when calculating the baud register value Ensure that both uC and PC have same UART settings (baud, parity, stop bits) Try not to use random code samples that may or may not be valid for your hardware configuration.
H: Where does the Maximum Emitter-Base Voltage come from? The datasheet for a BD679 transistor lists amongst the absolute maximum ratings that the "Emitter Base Voltage" has a maximum of 5 V. This figure confuses me - my mental model of a (BJT) transistor has the path from the base to the emitter equivalent to that of a diode, and the potential difference is irrelevant - it is the current that controls the device. I have searched for this term and among the results get ones like this, which appear to be talking about a different property of the transistor. The notation ('Emitter Base Voltage' as opposed to 'Base Emitter Voltage') makes me think this might be referring to the maximum 'negative voltage' that could be placed across the base-emitter junction, instead of the maximum in normal operation. Is this correct? If not, what is this figure, and what causes this junction to have such a low maximum compared with the rest of the device? AI: "Emitter Base Voltage" is the maximum voltage that may be applied when the base-emitter diode is in reverse; not conducting. This is generally much lower than what a small signal diode can handle in reverse.
H: Why do ICs often have multiples of the same configuration, with different part numbers? Take the SN54AHC125/SN74AHC125 tri-state quad buffer, for example. Data sheet is here: http://www.ti.com/lit/ds/symlink/sn74ahc125.pdf Refer to page 7, and you will see this (snippet): Click here for a larger version SN74AHC125DGVR, SN74AHC125DGVRE4, and SN74AHC125DGVRG4 (the TVSOP entries; middle of image) have the same properties according to that chart. What's even weirder is that chips with the same properties, from the same manufacturer, might be priced differently! I looked through the datasheet to see if it had anything to do with packaging, but I still couldn't find anything. So why do multiple part numbers exist that reference the same part/configuration? Real example The reason I actually asked this question is that I'm interested in buying a small-ish quantity of said buffer chip. I want this one, but I also see a seemingly identical chip (with the addition of G4 at the end of the part number) for a few cents cheaper. I thought there must be a reason it's cheaper, but I (obviously) couldn't figure out why. AI: This may help: from my reading, the E4 or G4 designation seems to be based on the RoHS information; E4 and G4 are basically the same, but the E values are used for JEDEC marking while the G values are used for TI-Green marking. As for the pricing, I can only speculate that Texas Instruments would prefer to sell one part (the G4 part in this case) over the other options.
H: wireless door opening detection I need to find a way to detect when a door is opened, and I want to register that through a WiFi connection. So the 'door detection' module should send data to a PC over WiFi when the door is opened. There may be better solutions, but the first thing that came to my mind is an Arduino board. Is there an Arduino board that is easily capable of detecting when a door is opened and sending data to a PC on detection (wirelessly)? Or are there better alternatives? AI: The Digi XBee WiFi modules have 10 digital I/O channels and an ADC on board to operate autonomously, so you don't need an SBC like an Arduino. Connect one of the I/Os with a pull-up resistor to a microswitch to detect the door's state.
H: Uses for a Double Diode? Can anyone tell me what a 'double diode' is used for? For example, a BAV99. I'm not sure of the correct name for it and so can't find it in any of my reference books. I've come across one in a circuit, between an output pin on a micro and an input clock pin. AI: In the case you show, the diodes act as clamping diodes to protect the input against too high or too low voltages. The top diode will clamp the input to +Vs + 0.7 V, the other one will clamp the input to -0.7 V. The small advantage is that they have the anode of one diode in common with the cathode of the other. Other double diodes, like the BAT54, are available in different configurations, so that there's always one for when you need two connected diodes: Note that even the single BAT54 comes in a SOT23 package, so placing two single diodes requires six pins to solder, versus three pins for the double diode. Soldering cost is calculated per pin, and for simple components may exceed the cost of the part itself. And it also saves board space. Apart from the series, common-anode and common-cathode arrangements there are also independent double diodes, like the BAS40-07, which again may be used to save board space and cost. The BAV99 is also half of a bridge rectifier, and the BAV99S has two of them for making a bridge. Apart from less signal routing through the common connection, you can also use double diodes when they need to be well matched, since they're on the same die.
H: How to measure battery voltage from a microcontroller Today I had an interview where they asked me: given a microcontroller and a battery, how do you measure the voltage level? Write the steps from scratch with pseudo code. How can this be done? AI: If they were after the pseudo code then that was a bad question; it all depends on the microcontroller. I probably would have given the candidate the microcontroller's datasheet and observed how he works his way through that 300-page document. Anyway, you connect the battery to an ADC input (assumptions: the microcontroller has an ADC on board and the battery voltage is less than the microcontroller's supply voltage). Set the pin mode for that pin to ADC Select this ADC input if there are multiple ADC inputs Start the ADC conversion Wait for the end-of-conversion flag Read the ADC register Calculate Vbat = Vcc x (ADC reading)/(2^ADC resolution)
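The final formula can be sanity-checked numerically. Here is a small Python sketch; the 3.3 V supply and 10-bit resolution are example assumptions, not values from the question:

```python
def adc_to_voltage(reading, vcc=3.3, resolution_bits=10):
    """Vbat = Vcc * (ADC reading) / 2^resolution."""
    return vcc * reading / (2 ** resolution_bits)

# A mid-scale reading corresponds to half the supply voltage
print(adc_to_voltage(512))  # 1.65
```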
H: I2C: 3.3 V and 5 V devices without level-shifting on 3.3 V bus? Do I really need a level-shifter if I use 5 V-powered devices on an I2C-bus that has pull-ups to 3.3 V? In my understanding the devices will only pull the lines (SDA, SCL) low (to ground) and never drive their supply-voltage to the bus. So I don't see a reason for a level-shifter as long as all devices detect the voltage from the pull-ups (3.3 V) as logical high. That should be the case with devices using 5 V as supply. In my case I have an IC whose inputs are not 5 V-tolerant as master and I could power my slaves with 3.3 V but using 5 V is easier in my circuit and allows higher (internal) clock-rates for the slaves. AI: According to version 4 of the \$\mathrm{I^2C}\$ spec, "Due to the variety of different technology devices (CMOS, NMOS, bipolar) that can be connected to the I2C-bus, the levels of the logical ‘0’ (LOW) and ‘1’ (HIGH) are not fixed and depend on the associated level of VDD. Input reference levels are set as 30 % and 70 % of VDD; VIL is 0.3VDD and VIH is 0.7VDD. See Figure 38, timing diagram. Some legacy device input levels were fixed at VIL = 1.5 V and VIH = 3.0 V, but all new devices require this 30 %/70 % specification. See Section 6 for electrical specifications." (page 9) Deeper in the spec, you'll see that this \$ 0.7 \times V_{DD}\$ is the minimum logic high voltage: For your 5V system: \$ 0.7 \times 5 V = 3.5 V\$ \$ 0.3 \times 5 V = 1.5 V\$ To me, the 3.3 V pull-up looks marginal, especially if any of your 5V devices use the 'new' standard of \$ 0.7 \times V_{DD}\$ for logic HIGH. Your mileage may vary, but it's always best to be within the spec wherever possible...
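The 30 %/70 % thresholds can be checked in a couple of lines of Python (the function name is mine, not from the spec):

```python
def i2c_levels(vdd):
    """I2C input thresholds per the spec: VIL(max) = 0.3*VDD, VIH(min) = 0.7*VDD."""
    return {"VIL_max": 0.3 * vdd, "VIH_min": 0.7 * vdd}

levels = i2c_levels(5.0)
# A 3.3 V pull-up cannot reach VIH(min) = 3.5 V for a 5 V device
print(3.3 >= levels["VIH_min"])  # False
```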
H: Why is Embedded Strictly C/C++ I didn't like this question since it can't be easily answered, but perhaps I can rephrase: "What keeps Embedded from changing languages?" For instance, we pretty much see C/C++ for embedded (I think I've heard Ada mentioned before too? Correct me if I'm wrong). But what exactly keeps the embedded world from changing languages? Is it just that C is too easy to use, or is there just no real "need" for a change since C does everything fine? This has always kinda baffled me, not that I'm complaining, since keeping it to a few languages keeps things standardized. But still the question remains. I realize this is sort of a subjective question; however, my main question is "why" and not "if/when". AI: First of all: forget about "embedded" as that is not a useful distinction. The all-important property is "resource-constrained". The most important resource is often time, in which case we talk about real-time systems, but it can also be memory or power. New language adoption is hard and rare. It requires re-training, new tools, and finding a good way to work with the new language. This is costly, especially for the early adopters. It is also a chicken-and-egg problem: without a large user base there won't be good quality tools and libraries, but without those there won't be a large user base. Hence a new language must have a big advantage over the existing ones, otherwise it won't stand a chance. Most "recent" new developments in languages have been filling the gap between the available CPU power and what the user needs. In other words: they can be inefficient in speed, but compensate by being easier on the programmer. Think of the rise of languages like Java, Python, Perl, Tcl that are essentially run by an interpreter (maybe after some compilation) and make heavy use of dynamic memory management.
But this does not match well with the resource-constrained world, where we want to get a) the most out of the available resources, even at the expense of more programming effort, and b) a predictable use of resources. C and C++ (or a suitable subset) are still the highest-level languages in common use (enough that good tools, sufficient trained programmers, and extensive libraries are all available) that can meet predictable space and time requirements not too far from what is possible on current hardware. The only contender is, I think, Ada, but it has suffered from a bad start: the first implementations were (perceived to be?) too slow and inefficient, and now (even though good implementations are available) the language has fallen a bit behind in features (compared to C++). Personally I think this is a pity; other things being equal, I'd rather fly in a plane that is programmed in Ada than one that has been done in C or C++.
H: Can I safely use a HDD in a car? A few months ago I bought some new speakers and an amp for my Jeep. I made a little face plate with an ALPS potentiometer to control the volume and it has an iPod dock (no head unit, just the pot). A couple days ago I finally was able to order a Raspberry Pi, and I want to make my own head unit for my car that supports 192/24 for my vinyl rips (iPod only outputs 48/16). Can I use a HDD to store my music on? I know that the gyroscopic effect of the spinning HDD can cause some serious problems if the drive is twisted around a different axis than the platters (because the head of the HDD can scratch the disk). If I securely mount a HDD in my Jeep will it break? When I go off road the Jeep probably does up to ±4g of acceleration for short times in random directions. Can a spinning HDD tolerate that? I will use a SSD if I have to, but even a small and (relatively) slow SSD for my 100GB of music will cost A LOT of money, whereas a 256GB HDD costs about $10 on eBay. A more complicated option would be to cache my current playlist to the SD card on the RPi and install an accelerometer. I could monitor the accelerometer and make it stop reading from the disk when the acceleration goes above 1.2g or so. AI: 192/24 for my vinyl rips (iPod only outputs 48/16). The audio quality of vinyl discs is way below 48 kHz/16-bit due to the analogue and mechanical process - the "scratching" of the needle generates a relatively high background noise level. I strongly recommend using flash memory technology in a car instead of magnetic platters. While 2.5" notebook HDDs can take a bit more abuse, they are still sensitive - shock can damage them quickly. But audio needs low bandwidth, so you can try to use cheap memory like USB sticks or SDHC/SDXC cards. No need to use an expensive SSD.
H: How to measure pulse width of an IR signal using an 8-bit PIC? My friend and I want to design a universal learning remote controller, like this one, for learning purposes. What we want to do basically is store and replay infrared pulses. We want to detect 36kHz, 38kHz and 40kHz signals. A 40kHz signal will have a period of 25\$\mu\$s. We want to use an 8-bit PIC microcontroller; for now we have selected the PIC16F616, which will run from a 20MHz high-speed crystal oscillator. We have two options available: Use the Interrupt On Change module. Use the capture mode of the CCP module. The first option will be as follows: Assume a register is set as: unsigned char _10_us = 0;. This register will hold the time. The TMR2 module with period register is set to create an interrupt every 10\$\mu\$sec. When an interrupt occurs, it will increment the _10_us register and exit. This will give a maximum time of 2.55msec. If more time measurement is needed, additional registers such as _1_ms can be defined and incremented as needed. Now, every time an interrupt is generated by any kind of change (high-to-low or low-to-high), the program will note the current time, that is, the value of the _10_us register. After a while, when the next interrupt is generated, the program will subtract the saved value from the _10_us register, and thus will know the time that has passed in the meantime, with a unit of 10 \$\mu\$seconds. This option makes me scratch my head; the TMR2 interrupt will occur about every 50 instructions. Interrupt handling will take about 20 instructions. I am left with 30 instructions to calculate and save the period to an array. Will this method work? The second option will be as follows: Set up the capture mode of the CCP module so that it will generate an interrupt when an event (high-to-low) occurs on the CCP1 pin. In the interrupt routine, it will set a flag so that a task in the program can calculate (if needed) and save the value of CCPR1H (probably will not be needed) and CCPR1L.
Then we will change the configuration of the capture mode so that it will trigger the interrupt when a low-to-high edge occurs. And then it will wait for the next event. I cannot estimate the performance of this method since I've never used it. Another option? We can use an infrared demodulator IC such as the TSOP17xx series. That would solve our problem completely. However some questions come to mind. Our reading distance requirement is not much; 1 metre (~3 feet). If we select a TSOP1738 that is meant to work at 38kHz, how well will this work with 36kHz and 40kHz signals? Page 4 of the datasheet of the TSOP17xx series shows the "Frequency Dependence of Responsivity" graph. As far as we understand: 40kHz, which is ~1.053 of 38kHz, will give a relative responsivity of ~0.6. 36kHz, which is ~0.95 of 38kHz, will give a relative responsivity of ~0.65. What do these values mean? Can we use a TSOP1738 and be fine? AI: Yes, the TSOP1738 will do at this short distance. The 0.65 relative responsivity means that at 36 kHz your IR LED needs to be \$\sqrt{0.65}\$ = 0.8 times closer to see the same signal strength, due to the inverse-square law. So if your TSOP1738 sees a certain level for 38 kHz at 1 m, you'll have to hold the transmitter at 80 cm to get the same signal strength at 36 kHz. BTW, with a remote control with fresh batteries I measured perfect reception at more than 15 m distance, so no problem at all. Don't worry about the PIC's performance. The TSOP1738 won't output the 38 kHz signal. That's the carrier frequency, which is removed by the TSOP1738 to get back the baseband signal, which has a much lower frequency, with pulse durations in the order of 1 ms, so there's plenty of time to measure time between edges accurately. The following scope images illustrate this: This is one RC5 code. The top signal is the 36 kHz modulated signal, the bottom the baseband signal with the actual code. This is zoomed in on one pulse of the baseband signal.
You can see individual pulses of the 36 kHz carrier. One more word about the carrier frequency. You may be using a remote control whose carrier frequency you don't know. The TSOP1738 doesn't give it on its output, so if you want to read it you'll have to connect an IR photodiode or phototransistor to one of the PIC's inputs and read the time between two same edges. That's feasible. Period times for different carrier frequencies: 40 kHz: 25 µs 38 kHz: 26.3 µs 36 kHz: 27.8 µs A 20 MHz PIC16F616 has an instruction cycle of 200 ns (it divides the clock by 4!). So readings for the three frequencies should be about 125, 131 and 139. That should be enough to tell them apart. But if you want you can let a number of edges pass and only read the timer after the 10th interrupt, for instance: 1250, 1316, 1389. Not too much longer, because you have to keep the time shorter than one pulse of the baseband signal. Success!
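The instruction-cycle arithmetic above can be reproduced in a few lines of Python (a PIC executes at Fosc/4, so a 20 MHz clock gives a 200 ns instruction cycle, as stated in the answer):

```python
def cycles_per_period(carrier_hz, fosc_hz=20e6):
    """Instruction cycles per carrier period; a PIC executes at Fosc/4."""
    instr_cycle = 4.0 / fosc_hz          # 200 ns at 20 MHz
    return (1.0 / carrier_hz) / instr_cycle

# Yields roughly 125, 132 and 139 cycles for the three carriers
for f in (40e3, 38e3, 36e3):
    print(int(f), round(cycles_per_period(f), 1))
```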
H: What capacitance should be added over the inputs and outputs of a voltage regulator? Is there a nice rule/formula for this or do we always need to go off of the datasheet? I read today (in a datasheet, no less) that the capacitance should be customized for a given application (no further explanation provided), and I was also told that added caps are not required if the input voltage is stable. AI: There are basically 2 types of capacitors around a voltage regulator: relatively small ones (order of 100s of nF): These are required to get the regulator stable, to let it properly do its job, to prevent it from oscillating, and are always mentioned in the datasheet. relatively big ones (order of 100s of µF): These are to reduce ripple at the input of the regulator. This ripple is visualized in the image below. (Ripple@Wikipedia) Rule of thumb is 2200-4700 µF per ampere of output current at 50 Hz. The idea of these capacitors is to prevent the rectified input voltage from the transformer from dropping below the minimum input voltage of the regulator. In the case of an LM78xx-like regulator, the input voltage must always be some 3 V higher than its output voltage. It is also good to notice that a single big electrolytic capacitor cannot replace the smaller capacitor from the datasheet (unless the datasheet states otherwise). The smaller capacitors have a much better high-frequency response than the electrolytic, and therefore you want them both. Beware not to make the output capacitance too large: most regulators don't like it when the output voltage (due to a charged output capacitor) is higher than the input voltage (due to a switched-off transformer). Some datasheets do mention a maximum output capacitance. This is why you sometimes see a reversed diode across the regulator's input and output pins.
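The rule of thumb can be cross-checked against the common reservoir-capacitor ripple estimate ΔV ≈ I/(2·f·C) for a full-wave rectifier; the formula is standard practice, not taken from the answer itself:

```python
def ripple_pp(i_load_a, c_f, f_mains_hz=50.0, full_wave=True):
    """Approximate peak-to-peak ripple: dV = I / (n * f * C), n=2 for full-wave."""
    n = 2 if full_wave else 1
    return i_load_a / (n * f_mains_hz * c_f)

# 3300 uF at 1 A load, 50 Hz full-wave: ~3 V of ripple
print(round(ripple_pp(1.0, 3300e-6), 2))  # 3.03
```

So with a few thousand µF per ampere, the ripple stays small enough to keep the regulator's input above its dropout voltage.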
H: What is the DONE_cycle startup option? In ISE, it is possible to select various "Startup Options" for the generated FPGA image by right-clicking "Generate Programming File", selecting "Process Properties", and then clicking "Startup Options". The listed startup options are: StartUpClk DonePipe DONE_cycle GTS_cycle GWE_cycle DriveDone This document gives explanations for StartUpClk, DonePipe and DriveDone. What exactly is the DONE_cycle startup option? (And while we're at it, what are GTS_cycle and GWE_cycle?) AI: For the DONE option, it is the startup cycle in which the DONE pin is driven high - for more info on this, see "Sequence of Events" in the Spartan-X Configuration Guide (I'm looking at the Spartan-3 version here, as I happen to be working with one, but I'm sure there will be a similar document for Spartan-6. EDIT - yes there is, it's on page 76 of the Spartan-6 configuration guide)
H: 3rd lumped circuit abstraction postulate So I was reading Anant Agarwal's book Foundation of Circuits and stumbled upon this postulate: "Operate in the regime in which signal timescales of interest are much larger than the propagation delay of electromagnetic waves across the lumped elements" "The signal timescales must be much larger than the propagation delay of electromagnetic waves through the circuit." Why must the signal timescale be so much larger than the propagation delay of electromagnetic waves? I don't get the consequences if we were to bypass this limitation. Thank you AI: At high frequencies, you need to model connections between elements with a transmission line, as there are delays associated with the interconnects. This is what you are referring to. At low frequencies, these delays are negligible to the operation of the circuit as a whole as they act at very small timescales, but at higher frequencies the behaviour of the circuit needs to take these delays into account, as they cause reflections and inject unexpected noise into the circuit if it is not designed properly. For more information check out Wikipedia's article on transmission lines
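A back-of-the-envelope check of the postulate in Python; the 100× margin is an arbitrary example threshold, not a value from the book:

```python
SPEED_OF_LIGHT = 3e8  # m/s, upper bound on propagation speed

def lumped_assumption_holds(signal_period_s, circuit_size_m, margin=100.0):
    """The lumped abstraction is safe when the signal period >> propagation delay."""
    delay = circuit_size_m / SPEED_OF_LIGHT
    return signal_period_s > margin * delay

# A 10 cm circuit has a propagation delay of ~0.33 ns
print(lumped_assumption_holds(1e-6, 0.1))    # True: a 1 MHz signal is fine
print(lumped_assumption_holds(1 / 3e9, 0.1)) # False: at 3 GHz, treat it as a transmission line
```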
H: Verilog: XOR all signals of vector together Say I'm given a vector wire large_bus[63:0] of width 64. How can I XOR the individual signals together without writing them all out: assign XOR_value = large_bus[0] ^ large_bus[1] ^ ... ^ large_bus[63] ? I'm especially interested in doing this for vectors where the width is specified by a localparam. AI: Binary operators like &, |, ^ and some others can also be unary in verilog, which is quite convenient. They perform bitwise operations on the operand and return a single bit value. See e.g. reduction operators on asic-world.com. reg [63:0] large_bus; wire xor_value; assign xor_value = ^large_bus;
H: Alternatives to SPI because of EMI I currently make a system consisting of a plastic enclosure which contains an MCU talking to 7 ADCs using 2MHz SPI over wires about 5cm long. The problem is that I'm concerned about EMI. Everything I have read suggests that any kind of digital signal that's not safely on a PCB in a grounded metal chassis will radiate too much to pass EMI testing. I guess this would include I2C as well. Is this likely to fail EMI testing? What can I do about this? I am looking for any kind of answers, including "Use a different bus / ADC", but not including answers that involve mechanical changes like: "Put all the ADCs on the same PCB" or "Put the whole thing in a metal box". I am especially interested in Low-EMI alternatives to SPI including differential buses. Here is some relevant information about the application. Please let me know if you need to know more things: 6 wires go to each ADC board (Power, GND, CS, CLK, MOSI, MISO). ADCs are currently MCP3208 (Microchip 8-channel, 12-bit) I am working in a desperately space constrained application, so adding shielding to the wires isn't really an option. It would be nice to use some kind of differential bus (one or two pairs only), but the only ADCs with differential communication seem to be multi-MSPS LVDS types. CAN is probably too slow, and also kind of bulky for such a space constrained application. Sample rate: I need to sample every channel at 1kHz. Added: Just to give an idea of the space constraints: Here you can see one of the ADC PCBs. This one actually has an MCP3202 instead of an MCP3208, but it's compatible(ish). It's in a TSSOP 8 package. The PCB is 11mm x 13mm. The black cable is 2mm diameter. As you can see, there isn't even space for a connector and the wires are soldered directly to the PCB, then potted. The lack of connector is due to surrounding space constraints rather than PCB space constraints. AI: 2 MHz SPI over 5 cm cables is not huge. 
I do 30 MHz SPI over 10 cm cables a lot, passing FCC Class B and the CE equivalent. The key is to make sure that you have a good cable (controlling as best you can for loop area), and properly terminating your signals. You control for loop area by putting the power/GND signals somewhere in the middle of the cable: Both middle of the connector, but in the middle of the bundle of wires as well. Normally you'd have a power or GND per signal, but since that is rarely a practical solution you have to do the best with what you have. Also, make sure to put one or two decoupling caps on the PCBs at both ends of the cable. Properly terminating the signals is going to be a little tricky since you don't have a controlled impedance on your cable. What I would do is to put an RC filter on the PCB at both ends of the cable. The RC filter would have the C on the cable side and the R on the chip side. At the signal driver, I would start with an R of about 75 ohms and a C of about 1 nF. At the receiver, the R would be about 10 ohms and C still 1 nF. Once you have the prototypes built then you should try different values. Essentially you want higher values for R and C, but not so high that the thing stops working or the signal levels are too attenuated. The edges of your signals should look very rounded off, but there should be no ringing and the clocks should be nice while in the signal transition band (usually 0.8 to 2.0 volts). A cap value of at least 3 nF is ideal for ESD protection, but that might not be an issue in your application.
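For the suggested starting values at the driver (R = 75 Ω, C = 1 nF), the first-order RC corner frequency works out close to the 2 MHz SPI clock, which is exactly why the edges look rounded. A quick Python check of the arithmetic:

```python
import math

def rc_corner_hz(r_ohms, c_farads):
    """-3 dB corner of a first-order RC low-pass: f = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

print(round(rc_corner_hz(75, 1e-9) / 1e6, 2))  # ~2.12 MHz at the driver
```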
H: Classification of VDD and VSS pins I am creating a library component for KiCAD using a convenient online app at: http://kicad.rohrbacher.net/quicklib.php. The component I am making is the Microchip PIC24EP512GU814 in LQFP http://www.microchip.com/wwwproducts/Devices.aspx?dDocName=en554337. The app asks me to assign a type to each pin. The types available are: Input Output Bidir Tri-state Passive Unspecified Power Input Power Output Open Collector Open Emitter I must now assign one of these types to the VDD and VSS pins. My best guess is that VDD should be assigned as Power Input. Would VSS also be considered as a power input? Perhaps since VDD is analogous to VCC (and VSS to VEE), I could classify VDD as an Open Collector (and VSS as Open Emitter). However I'm not sure if VDD and VSS can be considered as "open" for an IC. What should VDD and VSS be mapped to? AI: I would recommend using Power Input for both VDD and VSS (or any power pins on an IC for that matter). If you choose to use these types of pin labeling, then it allows you to verify that all power supply pins are connected and that there is a source that can provide power. If you set VDD or VSS as a Power Output then the software will see it as something that provides power for the circuit. This would also give you an error if you have multiple chips on the same power supply (e.g. two of these chips). The output of a linear regulator would be a good example of a Power Output. Making both VDD and VSS the same is a bit counterintuitive compared to current flow, but these are being used as system terms, not electrical current/voltage terms.
H: Capacitor in series and parallel to diagnose APC Smart UPS 1000VA My brother has an APC Smart UPS 1000VA and I have noticed a high-pitched buzzing noise coming from the UPS. I think it is from the power filter capacitor - the tall one you see on the LHS of the picture: (Pictures all sourced from http://www.elektroda.pl/rtvforum/topic843513.html, as my brother won't allow me to open the UPS.) I say this because the UPS is a 2003 model, and I know that when electrolytic caps age, their electrolyte sometimes dries out - can that cause a high-pitched buzzing noise? Am I correct, or is it possible the transformer itself is making the noise? (Unlikely, as the noise is intermittent and goes away for a few seconds now and then.) That cap is 2700 µF, 40 V and I am having a hard time finding it locally. But I have 2200 µF, 35 V caps - so I am thinking: Connect two 2200 µF, 35 V caps in series to get a theoretical 1100 µF, 70 V cap: is the calculation correct? Connect two such theoretical 1100 µF, 70 V caps in parallel to get a theoretical 2200 µF, 70 V cap: is the calculation correct? I am guessing that the tall 2700 µF, 40 V cap is likely a power supply filter, so replacing it with the theoretical 2200 µF, 70 V cap to confirm my suspicion would not be a bad idea? I also have an ESR meter: what values would a bad cap vs. a good cap read to confirm my suspicion? What other components could possibly make such a high-pitched buzzing noise in a UPS? (The UPS has been out of warranty for the last 5 years and is now an obsolete model, so APC is not interested in feedback.) AI: For the replacement, use a 3300 µF, 40+ V part. That is a more standard capacitance value than 2700 µF, and you may find that parts rated above 40 V are available as well as ones at 40 V. For 2 your calculation is correct. However this reduces the capacitance, so I wouldn't go that way. I would go with 4 or 5 instead.
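The series/parallel arithmetic in the question is easy to verify. Note that in practice series electrolytics need balancing resistors so the voltage divides evenly between them; the ideal math, though, checks out:

```python
def caps_series(*caps_f):
    """Series capacitors: 1/C = sum(1/Ci); voltage ratings add (in theory)."""
    return 1.0 / sum(1.0 / c for c in caps_f)

def caps_parallel(*caps_f):
    """Parallel capacitors: capacitances add; rating is the lowest member's."""
    return sum(caps_f)

pair = caps_series(2200e-6, 2200e-6)   # 1100 uF, rated ~70 V
bank = caps_parallel(pair, pair)       # 2200 uF, still ~70 V
print(pair * 1e6, bank * 1e6)          # approximately 1100.0 and 2200.0
```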
H: What does NOM stand for in a datasheet and what does it mean? The question is in reference to: http://www.ti.com/lit/ds/symlink/sn74ls00.pdf AI: NOM = Nominal, that's the value you normally can expect, and what the device is designed to. Note that nominal values are often not the best to calculate with. If you want to go worst case you'll have to calculate with Minimum or Maximum, depending on the parameter and the application.
H: How to add temperature control to a soldering iron I see lots and lots of builds of "temperature controlled iron" that simply either: adds a timer to an uncontrolled iron adds a dimmer to an uncontrolled iron The first is the same as turning on the iron later. And the second is the same as paying too much for a 25W iron (because that's what your 75W iron will become). Are there any projects out there really adding closed loop temperature control to an unregulated soldering iron? I think this can be done way cheaper than buying an overpriced Weller, but before I start something I would like to see where others failed/succeeded, and I'm failing miserably in finding any preceding work on this. AI: For an automatic temperature control, you would need three basic components: A way to sense the temperature of the iron at the tip A way to control the power supply to vary the temperature A microcontroller to read the temperature and determine how to adjust the power supply to achieve the desired temperature Since you asked for preceding work here is a nice build log of a DIY mod to a soldering station including a lot of nice pictures, schematics, and firmware. And Dangerous Prototypes looks to be putting together a kit of some sort. But some of the links are dead and information is missing. But they do have a nice custom PCB for the project that is currently out of stock as of this writing. I'm not sure if this is a work in progress or just an unfinished and forgotten about project. You could probably email them and ask for an update. PS: I agree with AndrejaKo's assessment on the dimmer controlled iron. For some people, that's all they want/need. Here is a nice Instructable on how to do just that.
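The sense/adjust/repeat loop described by those three components can be sketched in a few lines. This is a minimal on/off (bang-bang) controller with hysteresis - one common starting point, not a specific project's firmware; the function and its interface are mine:

```python
def control_step(temp_c, setpoint_c, heater_on, hysteresis_c=5.0):
    """One iteration of bang-bang control; hysteresis avoids rapid switching."""
    if temp_c < setpoint_c - hysteresis_c:
        return True          # too cold: turn the heater on
    if temp_c > setpoint_c + hysteresis_c:
        return False         # too hot: turn it off
    return heater_on         # inside the band: keep the current state

print(control_step(300.0, 350.0, heater_on=False))  # True
```

A real firmware would call this in a loop with the tip-temperature reading and drive a triac or MOSFET with the result; a PID loop is the usual upgrade once bang-bang works.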
H: DAC: What waveform should I expect when ramping up? I have a 14-bit DAC, the AD9775BSV. Ramping the data pins linearly from the minimum of 0x0 to the maximum of 0x3FFF I see (using a scope) the analogue output voltage behaving as follows: In the first half (from 0x0 to 0x2000), the voltage ramps from 1V to 1.8V. In the second half (from 0x2000 to 0x3FFF), the voltage ramps from 0V to 1V. Is this the expected behaviour? What output voltage range should I be expecting for the analogue output? Why is there a "break" at 0x2000? AI: I've figured out part of the answer. The break is due to the two's complement option being turned on (see page 14): Logic “0” (default) causes data to be accepted on the inputs as two’s complement binary. Logic “1” causes data to be accepted as straight binary.
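The "break" at mid-scale is the classic signature of feeding a straight-binary ramp into an input configured for two's complement: the two formats differ only in the most-significant bit. A small Python illustration (the helper name is mine):

```python
def ramp_position(code, bits=14):
    """Where a straight-binary code lands on the output ramp when the DAC
    interprets it as two's complement: equivalent to flipping the MSB."""
    return code ^ (1 << (bits - 1))

print(hex(ramp_position(0x0000)))  # 0x2000: ramp starts at mid-scale
print(hex(ramp_position(0x1fff)))  # 0x3fff: first half ends at full-scale
print(hex(ramp_position(0x2000)))  # 0x0: the output jumps back down
```

So codes 0x0-0x1FFF cover the upper half of the output range and 0x2000-0x3FFF the lower half, matching the observed waveform.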
H: What analogue input voltages does this ADC understand? I have a 14-bit ADC for which I do not know what the "accepted input voltage" is. The datasheet specifies that there are "out-of-range" signals, but I cannot find what the range is. What is the range of my ADC? AI: From what I can understand from the datasheet, the input voltage can be defined from 1V p-p to 2V p-p Span = 2 × (REFT − REFB) = 2 × VREF -- (From page 17) VREF = 0.5 × (1 + R2/R1) -- (From pages 20/21)
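The two datasheet formulas combine into a quick span calculation; the specific R1/R2 values below are illustrative, not from the datasheet:

```python
def vref_from_divider(r2_ohms, r1_ohms):
    """VREF = 0.5 * (1 + R2/R1), per the datasheet formula."""
    return 0.5 * (1.0 + r2_ohms / r1_ohms)

def adc_span_vpp(vref):
    """Input span = 2 * (REFT - REFB) = 2 * VREF, in volts peak-to-peak."""
    return 2.0 * vref

# R2 = 0: VREF = 0.5 V -> 1 V p-p span (the minimum)
print(adc_span_vpp(vref_from_divider(0.0, 1.0)))  # 1.0
# R2 = R1: VREF = 1.0 V -> 2 V p-p span (the maximum)
print(adc_span_vpp(vref_from_divider(1.0, 1.0)))  # 2.0
```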
H: Designing my own Bus This question is further to my previous question: Alternatives to SPI because of EMI. I am toying with the idea of designing my own communication bus. I would be grateful if someone could cast their eye over my preliminary design, and tell me where I'm crazy... I am currently using 2MHz SPI carried over 10cm long wires to seven ADCs on separate PCBs (shared CS, but each ADC has its own MISO line. It's bit-banged), but would like to replace it with something differential to reduce EMI. Problem is that there aren't many ADCs with a differential bus, so I am wondering if it's possible to design my own bus. At least the physical layer, and possibly the protocol too. Design goals of the new bus: use physically small components low EMI no more than 4 data wires (two pairs) bandwidth of > 300kbps from each ADC. (>2.1mbps total) Before you write me off as crazy for thinking about this, consider that it may not be so hard to do on a PSoC5. On that chip I can certainly design my own protocol in Verilog and have it implemented in hardware. And to some extent, I may be able to include the physical layer components too. What's more, I may be able to have seven of these things at the same time, all running in parallel in the master, one to each slave so that I can get good overall bandwidth. And here is my preliminary idea: It would be based on I2C, slightly modified to help it connect to the physical layer components. The SDA and SCL lines are now differential pairs. The SDA pair has the OR-ing property. This is achieved using one pin which can only drive high, and one which can only drive low. The SCL pair is driven exclusively by the master. The data rate would be turned up to at least 1mbps. The master would be a PSoC5 with 7 master modules. The slaves would also be PSoC5s, with one slave module, and would use the integral ADC. Thoughts: Not too sure what's the best way to implement the pull up resistors and slew rate limiting components. 
I assume I don't need any termination. If I limit the slew rate to about 80ns, it should be good for a 10cm long cable. That is obviously not a proper differential transceiver. Am I wasting my time trying to make one out of gates? Those NOT gates cause some skew. Is this likely to be a problem? AI: Well, if you're going to have PSoC5's all around, I was recently reading about IEEE 1355. The DS-SE-02 signal link variant may do what you want. It calls for 4 signal lines: a data line and a strobe line in each direction. The only dedicated hardware I can find that implements 1355 is rad-hardened SpaceWire gear, but this IEEE presentation claims that it can be implemented on an FPGA with 1/3 of the gates of a UART and operating at speeds 100 times faster than a UART. A public copy of the full standard is available courtesy of CERN if you're up for a bit of light reading. I haven't had a chance to sit down with the full thing, so I can't speak for its EMI performance. Looking over the DS-SE section, it looks like you have to control for impedance in the transmission line and terminate at the receiving end.
H: How to analyze a display? So while walking around school today I found a neat little piece of hardware thrown in the trash. After a quick look over it, I noticed that the display was one of those 32-segment kinds (like a Speak & Spell). So I took it apart and started running some basic tests and looking things up on Google. Unfortunately I am stuck now and am not sure how to continue. The IC that controls the display has 40 pins and is too old to get hits on Google. To interface with the display, there are 8 pins. Two are for voltage, and two run to ground. Using an oscilloscope, I found square waves on two other pins as well. I don't really know what the last 2 pins do. Using a simple circuit for adjustable voltage on my Arduino, I was able to get the display to light up, but that is it. So my question is: how can I get this display to play nice with my Arduino? Do I need some other diagnostic tool? Or is it near impossible to do? AI: This looks like a 16-character 14-segment plus decimal point and comma (16 segments total) multiplexed vacuum fluorescent display. There appears to be a high-voltage anode supply located in the lower left corner (orange transformer coil and nearby circuitry). With only eight wires entering the board at J12, there must be some other part(s) we don't see that perform the multiplex function to accept segment data and clock it into serial shift registers that drive individual display segments. I have included a link below to one such integrated circuit. Note these were available in very small surface-mount packages and its presence may not be immediately obvious. The multiplexing circuit typically has one line for each character and one line for each segment, or a total of 32 lines. That also happens to be the number of lines that appear to connect the display to the circuit board. At a minimum, to interface with the display driver, there should be a clock line, a data/command line, a reset line, power and ground.
Those signals are probably to be found on the eight wires connected to your board. The multiplexing driver integrated circuit is probably mounted under the vacuum fluorescent display. You will need to figure out the serial data format (ASCII, EBCDIC, Klingon, or whatever), the clocking speed, and how to generate the serial data and the clock with your Arduino. The most significant bit of an 8-bit data stream typically signifies whether the next seven bits are to be interpreted as a command or data. I would start sending 7-bit ASCII streams with no parity to see how the display responds. Also read through the command descriptions on the datasheet to see if there is any sort of setup required before characters can be stored and/or displayed. There is a 16 page PDF data sheet on the OKI Semiconductor MSC1937-01 display driver that you can download from http://www.datasheetarchive.com/14%20segment%20display-datasheet.html# that may be of some help. It is near the bottom of that web page.
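To make the framing idea concrete, here is a small C++ sketch of the byte format described above — MSB selects command vs data, low seven bits carry the payload. The exact format is hypothetical until you confirm it against the driver's data sheet; this just shows how you would build the byte stream before bit-banging it out.

```cpp
#include <cstdint>
#include <string>
#include <vector>
#include <cassert>

// Hypothetical frame format per the data sheet convention described
// above: bit 7 selects command (1) vs data (0), bits 6..0 carry the
// 7-bit payload.  Verify against the actual driver data sheet.
uint8_t makeFrame(bool isCommand, uint8_t payload7) {
    return static_cast<uint8_t>((isCommand ? 0x80 : 0x00) | (payload7 & 0x7F));
}

// Build the byte stream for a text string: plain 7-bit ASCII data frames.
std::vector<uint8_t> textFrames(const std::string& text) {
    std::vector<uint8_t> out;
    for (char c : text) {
        out.push_back(makeFrame(false, static_cast<uint8_t>(c)));
    }
    return out;
}
```

On the Arduino side you would clock each byte out MSB-first on the data line, pulsing the clock line once per bit, and watch how the display responds.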
H: connecting devices with two different power supplies I wanted to know if this configuration is fine with two different grounds. The microcontroller is powered with 3.3 V and the transistor is powered by 5 V. Both grounds are separate. My question is whether it is safe to connect the microcontroller pin to the transistor to switch the motor on/off? AI: Everything in your circuit needs a common reference. That's the circuit's ground. You can compare the separate grounds and the trigger to the following: take two batteries, and connect the + of the first one via the 10 kΩ resistor to the - of the other one. No current will flow through the resistor since, as Alfred says, there's no closed loop. So for driving Q1 you need a current path starting at the +3.3 V through the microcontroller, R5 and Q1 to ground, and that should close the loop, so connect that ground to the - of the 3.3 V. It doesn't matter that this is also the ground for another supply voltage; if you have two separate circuits you can always choose one point on each of them to connect them together, and that's usually their grounds. Alfred mentions optocouplers and we discussed them in other questions. There you have two separate grounds because you have two separate circuits. There's an optical but no electrical connection between the 230 V AC side and the 3.3 V DC side, and there's no need for current to flow from one side to the other. Same with transformers. There we have a magnetic connection, again no electrical one.
H: dying battery, increasing resistance This is a graph of the v-i relationship of a battery (not an ideal one, since it has internal resistance). As you can see, the slope is 1/R, so as time passes R increases and the slope approaches 0. The graph tends to become horizontal over time, so the open-circuit voltage increases over time. But in real life, when a battery ages, its open-circuit voltage should be decreasing. Why don't the graph and my intuition agree? This is based on my experience, since when I measure the voltage of an old battery, I find it lower than the expected voltage of a brand-new one. Thanks AI: "so the open circuit voltage increases in time" That would be the case if the short-circuit current remained the same, but that decreases as R increases. So that point moves down the i-axis. At the same time the open-circuit voltage (the so-called cell EMF) decreases as well, moving that point to the left on the v-axis. That's another reason for the lower short-circuit current. The result will be that the slope of the line will become more shallow. So a battery is not a fixed voltage with a varying series resistor; the voltage is variable as well. Added (upon Russell's request) Like I said, the open-circuit voltage also decreases, which has nothing to do with internal resistance; in an open circuit there's no current, so also no voltage drop across the internal resistance. It's the cell's actual EMF which decreases: In practice, cell EMF depends on temperature and concentration of reactants and products. If the concentration of reactants increases relative to products, the cell reaction becomes more spontaneous and the EMF increases. As the cell operates, the reactants are used up as more product is formed, causing the EMF to decrease. (from here)
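The v = EMF − i·R picture above is easy to play with numerically. A minimal C++ sketch (the numbers for the "fresh" and "aged" cell are made up for illustration, not from any datasheet) shows both intercepts moving at once: the open-circuit voltage (v-intercept) drops because the EMF drops, and the short-circuit current (i-intercept) drops because both the EMF falls and R grows.

```cpp
#include <cmath>
#include <cassert>

// Simple battery terminal model: v = emf - i * r.
// Both emf and r drift as the cell ages.
struct Cell {
    double emf;  // open-circuit EMF in volts
    double r;    // internal resistance in ohms
};

// Terminal voltage at a given load current (amps).
double terminalVoltage(const Cell& c, double i) {
    return c.emf - i * c.r;
}

// Short-circuit current: the i-axis intercept of the v-i line.
double shortCircuitCurrent(const Cell& c) {
    return c.emf / c.r;
}
```

For example, a fresh cell {1.5 V, 0.2 Ω} short-circuits at 7.5 A, while an aged cell {1.3 V, 2 Ω} manages only 0.65 A — the line gets both lower and shallower, matching the answer.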
H: Is this ADC circuit safe for when my Pro Micro is off? I have the following little circuit: This voltage divider is used by my Pro Micro to detect when CB_VCC is powered. CB_SENSE leads to an ADC pin on my Pro Micro. Is this circuit safe for when the Pro Micro is off, but CB_VCC is powered? I'm dealing with 3.3 V (that's what's powering the Pro Micro and CB_VCC). Do I need a series resistor before the ADC pin? AI: It's not a good idea to have voltages higher than the supply voltage on a microcontroller. The controller indeed has clamping diodes to protect the inputs against too-high and too-low voltages, but it's a Bad Habit™ to use them intentionally. \$\mathrm{CB_{SENSE}}\$ won't be higher than 0.67 V and you would think this is harmless, but on page 378 of the datasheet it says under Absolute Maximum Ratings: "Voltage on any Pin except RESET and VBUS with respect to Ground: -0.5V to Vcc+0.5V" So even that low 0.67 V is too high. Your gut feeling may say that voltage and current are so low that it's probably harmless, and I would agree if the datasheet didn't say otherwise: "Stresses beyond those listed under “Absolute Maximum Ratings” may cause permanent damage to the device. This is a stress rating only and functional operation of the device at these or other conditions beyond those indicated in the operational sections of this specification is not implied. Exposure to absolute maximum rating conditions for extended periods may affect device reliability." (emphasis mine) So, while you might want to take the risk for your one-off project, I wouldn't do this for my 10k/year design.
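The arithmetic behind "even that low 0.67 V is too high" can be checked in a couple of lines. The resistor values below are assumed for illustration (the question only tells us the divider output is about 0.67 V from 3.3 V); the key point is that with the Pro Micro off, Vcc = 0, so the absolute-maximum window collapses to −0.5 V … +0.5 V.

```cpp
#include <cmath>
#include <cassert>

// Output of a two-resistor divider: vin * rBottom / (rTop + rBottom).
double dividerOut(double vin, double rTop, double rBottom) {
    return vin * rBottom / (rTop + rBottom);
}

// Datasheet absolute-maximum check: pin must stay within
// -0.5 V .. Vcc + 0.5 V.  Note the window depends on Vcc, so an
// unpowered part (vcc = 0) tolerates at most +0.5 V on a pin.
bool withinAbsMax(double vPin, double vcc) {
    return vPin >= -0.5 && vPin <= vcc + 0.5;
}
```

With assumed values 3.9 kΩ over 1 kΩ the divider gives about 0.67 V: fine while the Pro Micro runs at 3.3 V, but outside the rating the moment Vcc drops to zero.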
H: Best way to connect a RS-485 device I have to control a device with a C++ application (Windows) via RS-485. I'm wondering which would be the best way to connect it. Should I use a USB-RS485 adapter (which would be the most flexible solution) or a RS232-RS485 adapter? Any other recommendations? I am especially concerned about speed; I used to work with a USB-RS232 adapter, which was very slow (OK, it was also cheap). At the moment, I'm considering buying a NI adapter. Any other recommendations? Thanks. AI: I'm not going to give any specific device recommendations but I will offer some advice. As you say, a USB-RS485 adapter is the best solution. Adapters which use an RS232 port are available but these have some disadvantages. First, these devices generally use the RS232 RTS line to control data direction. If the standard Windows RS232 driver is used, you may find timing issues (because the RTS line was not intended for this purpose), although some adapters have their own device drivers to circumvent this problem. Other adapters claim to have 'automatic' direction control. The ones that I have seen do this by connecting the transmitted data to the data-enable line of the driver chip so that when the 485 driver should be driving the line to the high state, it actually goes tri-state and relies on pull-up/pull-down resistors to 'drive' the line. This solution gives poor drive capability and slow rise times (which may be the cause of your slow speed experience). One other potential problem is power. Since RS232 ports do not supply power as such, RS232 adapters get their power from data or flow-control lines. One adapter I know of has a curious 'bootstrapped' internal power circuit (I won't bore you with the details) which relies on some transmit-data transitions to 'kick it off', so it is completely 'deaf' until some data is transmitted.
H: Muscle wire considerations I am in the planning stages of a project for my new son/daughter, who will be arriving in December. The project involves using muscle wire to make a bug or bird slowly flap its wings. The form will be made of fabric or foam. Has anyone ever worked with muscle wire, and have some advice, warnings or painful lessons you learned that you could pass on? My plan is to use a micro to energize the muscle wire (through transistors if the current draw is too great). This will cause the muscle wire to retract, pulling on the fabric wing and making the wing look like it's flapping. This may not be the correct forum for such a question, but I'll ask anyway and hope for a favourable response. There is a lot of collective experience and knowledge among the users here, so I'm hoping to access some of that. EDIT: Thanks for all the suggestions. Lots to think about. I should have mentioned this was going to be a mobile. You know, the kind that spins around above the baby's crib? So my thought was muscle wire for low weight and silent movement. However some very good points have been made. AI: The experience I have had with muscle wire was never good:
- Massive power consumption
- Gets hot
- Slow
- Doesn't seem to last a long time, maybe stretches
- You can't solder to it
I once made a butterfly with flapping wings. For that I used simple ammeter-coil type actuators. You can buy these from a company called Plantraco Microflight. If I remember correctly, their resistance is so high that you can actually drive them directly from the pins of a microcontroller. You can put together a nice butterfly from a couple of these and a PIC. Don't forget to have some optical fibre antennae, illuminated by LEDs. And here's a terrible-quality video of it moving.
H: How does the QTR-8RC capacitor charging work? I would like to understand how the Pololu reflectance sensor works internally. I can make it work, but I would like to know what is happening electrically. http://www.pololu.com/catalog/product/961 This is the schematic (focus on the bottom left part, the squared area). http://a.pololu-files.com/picture/0J629.650.png?7975fd7128a0eb0861e253d9c7f439c0 And these are the steps to read a value:
1. Turn on IR LEDs (optional)
2. Set the I/O line to an output and drive it high
3. Allow at least 10 us for the 10 nF capacitor to charge
4. Make the I/O line an input (high impedance)
5. Measure the time for the capacitor to discharge by waiting for the I/O line to go low
6. Turn off IR LEDs (optional)
I guess initially the 10 nF capacitor is charged. Why does it say in step 3 to charge it again? Do you charge "the other side" of the capacitor? AI: The third point should read "Allow at least 10 us for the 10 nF capacitor to discharge." If the capacitor has Vcc on both sides it's discharged. (IR) light on the phototransistor will cause a current to ground, so that voltage builds up on the capacitor, making its lower side go lower, as the other side is fixed at Vcc. Since the charging is through a current source instead of a resistor, the voltage will decrease linearly with time.
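Because the phototransistor acts like a constant current sink, the timing you measure in step 5 follows the simple linear relation t = C·ΔV/I. A quick C++ sketch (the photocurrent and threshold values below are illustrative guesses, not Pololu specs) shows why a brighter reflection — more photocurrent — gives a shorter time:

```cpp
#include <cmath>
#include <cassert>

// Time for a capacitor discharged by a constant current to fall by
// deltaV volts: t = C * dV / I.  Linear, unlike an RC exponential.
double dischargeTime(double cFarads, double deltaV, double iAmps) {
    return cFarads * deltaV / iAmps;
}
```

For example, with the 10 nF capacitor, a 2.5 V swing down to a logic-low threshold, and an assumed 50 µA photocurrent, the I/O line goes low after about 0.5 ms; doubling the photocurrent halves that time, which is exactly the reflectance-to-time mapping the sensor exploits.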
H: Measure power supply ripple with Rigol DS1052E I recently received (as a birthday gift) a Rigol DS1052E oscilloscope. I have read about oscilloscopes but I have never really used one. So my question is: how can I measure the ripple voltage of a floating linear power supply using this oscilloscope? Can this be done with this oscilloscope (or will the common-mode noise be too high, since the ground clip of the probe is tied to earth)? To get a fairly accurate reading, would I need differential probes? Or can I use the two channels of the scope and add them together? AI: You should be able to safely measure the floating linear power supply, as long as you know what you're doing and you're sure that the supply is in fact floating. So the first step is to make sure that the supply is floating. It would be simple to use a multimeter to confirm that there isn't a conductive path between the power supply's rails and ground. If that is true, you can simply connect the oscilloscope probe's ground connector to some point in the circuit. Often this would be the circuit's negative line, but it does not need to be. If the power supply isn't floating (or, to put it more clearly, is grounded), you'd have to connect the oscilloscope probe's ground to the grounded rail of the power supply. That would usually be the negative rail, but it could be positive, so to be 100% sure, you'd need to confirm the ground connection with a multimeter. Do note that once you've connected the ground clip of the probe to some part of the circuit, that part is now ground-referenced! That is important, because the ground clip of the other probe is also connected to ground and if you touch another part of the circuit with it, you'll short it to ground, which could have very negative consequences. Here's a diagram of the usual probe connections inside an oscilloscope: So if you, for example, connect the ground clip of one probe to the negative rail of the supply and the other to the positive, you'd have a short circuit.
Now about the actual measurement itself: the first step would be to see if the probe can handle the voltages and to determine the appropriate probe setting. Usually 10x attenuation is used on probes, since that presents what is usually a negligible load on the power supply and provides more bandwidth for the oscilloscope. After that, connect the ground clip of the probe to the power supply and the probe tip to the point which you want to measure. Some sources recommend that the tested device be powered off while connecting, and that looks like a good idea to me since it minimizes the chance of shorting something while attaching the probe. Once you connect the probe, check that it is properly connected and isn't touching anything it shouldn't be, like heatsinks (which might be connected to the power supply's negative side). Next, activate the oscilloscope and make sure that the probe attenuation factor is set to the same setting as seen on the probe. Next make sure that the probe coupling setting is correct. It shouldn't be set to ground and it should be set to DC. More about that is in the manual under To Set up the Vertical System. The next step would be to set the oscilloscope trigger voltage for the connected probe a little bit higher (or lower) than the power supply's nominal voltage. This should make the scope trigger on ripple. After that, turn on the power supply. You should be able to see a (more or less) flat line representing the supply's output voltage on screen and you may see some interference riding on that voltage. The next part is a bit more difficult to explain and is a bit more experimental, but once you do it a few times, it will be easy. The idea is to zoom in on the interference you see. You could try automatic measurements and see how they work out. In case they don't show what you want to see, I'll explain how to do it manually.
The whole story is explained in the horizontal and vertical settings part of the manual. Basically you use the scale knob to zoom in on the wave you see and then you use the position knob to center the wave. I usually first adjust vertical settings, then horizontal, and repeat the procedure until I can clearly see the ripple. Once you see it, you can measure the ripple using the graticule or you can use cursors. Cursor use is explained in example 5 at the end of the manual for the scope and in the To Measure with Cursors section. When you're using the graticule, you simply look up how much time or how many volts each division represents and then multiply the number of occupied divisions by that value. Cursor measurement will usually give you a more precise result. So far I haven't mentioned the math menu, because there is no need to use it. You definitely need to reference some point in the circuit to the oscilloscope's ground, since the scope does all measurement with respect to ground. If you connect one probe to the positive rail of the power supply and the second to the negative and subtract them, you'll get the same result as if you measured against the probe clip's ground. Do note that in the case of an isolated linear power supply, you can't get a ground loop and its noise, since there will be no current going from the power supply's ground through the oscilloscope's ground to the mains ground, because the PSU itself isn't ground-referenced and there's no closed loop for current to go through. A bit about AC coupling: as Vorac says, if you set the probe to AC coupling, you'll remove low-frequency signals. This includes the DC component of the power supply voltage, which will leave you with only the ripple. This way, you can avoid the need to use vertical position controls to bring the noise into view because it will already be centered at zero volts, so you can just zoom in on it. Another handy thing is the trigger settings.
You can also apply filtering to the trigger circuit, so that it triggers on AC, DC, low or high frequencies. AC trigger coupling will remove all signals under 10 Hz from the trigger circuit, so slow periodic signals won't interfere with the trigger. LF reject will block all signals under 8 kHz and HF reject will block all signals above 150 kHz. This can sometimes be useful if you're trying to focus on only one component of the signal and trigger on it.
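The graticule arithmetic mentioned above is just a product of three numbers, but it is easy to forget the probe factor, so here it is spelled out in a tiny C++ helper (the example reading — 3.4 divisions at 10 mV/div with a 10x probe — is made up):

```cpp
#include <cmath>
#include <cassert>

// Peak-to-peak ripple from a graticule reading:
//   divisions occupied * volts per division * probe attenuation factor.
// If the scope's channel menu already accounts for the probe factor,
// set probeAtten to 1.0, or the result will be ten times too large.
double rippleVpp(double divisions, double voltsPerDiv, double probeAtten) {
    return divisions * voltsPerDiv * probeAtten;
}
```

So a waveform spanning 3.4 divisions at 10 mV/div on a 10x probe (with the scope set to 1x) is 340 mV peak-to-peak — the same multiplication the cursors do for you automatically.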
H: What is the difference between these resistors? Take a look at these two different resistors: Fig.1 source Fig.2 source The first one looks "normal" to me (the way the resistors I've always bought in the past look..), but the second looks kind of odd. Both of these pictures are of 5% carbon film resistors. What's the difference between them? AI: Note that power dissipation is not the only feature which may differ - see below. You can tell very little with certainty by looking at resistors externally. Knowing the manufacturer is liable to tell you far more than appearance does. While I am almost always in agreement with Wouter, and do not differ very substantially on this occasion, I note that in some cases small resistors from a given manufacturer can have larger dissipation ratings than larger resistors from the same manufacturer. An excellent example are the superb SFR16 resistors (originally made by Philips and subsequently sold on several times) and their companion SFR25 resistors. The combined SFR16 / SFR25 datasheet here shows that an SFR16 resistor is rated at 25% more dissipation than an SFR25 but is only about 50% of the length and 80% of the diameter. When placed side by side the SFR16 appears tiny compared to an SFR25, having only about 33% of the volume. Some other versions of the SFR16 had datasheets that advised up to 0.6 W dissipation. (Note that the SFR25H in the above datasheet, with the same dimensions as the SFR25, has 0.5 W dissipation). Why, then, ever use an SFR25? Compared to an SFR16, the SFR25 has a superior temperature coefficient, a 250 V (vs 200 V) maximum voltage rating, and much better noise characteristics in some ranges.
H: Sixteen 100 V isolated power supplies I need to create sixteen isolated power supplies on a single PCB. Input voltage 9 V DC. Output voltage 100 V DC. Output current 10 mA each. No precise regulation needed. Efficiency not terribly important. I was thinking that maybe I could just wind my own transformer with one primary coil and 16 secondary coils, drive the primary coil with an H-bridge, then use bridge rectifiers and capacitors to smooth the outputs. Does this sound sensible, or is there a better way to do it? Do I need a sinusoidal input waveform, or will a square wave do? Does it matter how many turns I use? Is more turns better? (As long as the ratio is correct) Does the frequency matter? My cost and space budget are small, but negotiable. So I'd rather not just buy a whole load of off-the-shelf supplies. Added: Schematic AI: The tone of your question implies that you have little-to-no experience with switching power supply design. You are going to have an incredibly difficult time if you want to make a transformer with a single primary and sixteen secondaries. The construction of a transformer is often more critical than the hard electrical/magnetic parameters (turns ratio and core material) due to there being so many degrees of freedom (leakage inductances, coupling ratios, copper loss in the windings, interwinding capacitances, etc.). If the secondaries have to be isolated from the primary, but can be common to each other, you can go with a single secondary winding rated for all the power you need, and use point-of-load converters (bucks or synchronous bucks) to regulate each rail and provide overload protection (to keep one rail from bringing down the entire bus). You can get complete synchronous buck stages in 2 mm square packages (a few external parts and you're done.) If all 16 rails have to be isolated from each other, I'd recommend not using more than four secondaries per transformer (obviously you then need four converters).
You could go with a flyback converter design, which simplifies the secondaries (no filter inductors needed) and allows for output > input with galvanic isolation. There are many integrated flyback controllers on the market that contain the MOSFET and control circuitry, just wire up some feedback through an opto and away you go. You (of course) need a properly-designed transformer, so "yes" the turns do matter as well as the actual number of turns used. The number of turns impacts the inductance, peak current and peak flux density of the transformer. A proper transformer design optimizes the number of turns to minimize core and copper losses, and requires a thorough design procedure. There is no 'magic' number, and more is not always better. For a flyback converter, there are more/different constraints, since the transformer has to be designed to store a certain amount of energy. Your space budget is small. Forget about sinusoidal waveforms. Forget about low frequency operation. You need high-frequency conversion to minimize the space, which (in its simplest form) involves square waves. Of course, there are efficiency tradeoffs with higher frequency operation. (Space doesn't come free.)
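To give a feel for how the turns ratio and duty cycle interact in the flyback suggestion, here is the ideal continuous-conduction transfer function Vout = Vin·n·D/(1−D) as a C++ sketch. The turns ratio n = 6 in the test is just an example; a real design also has to satisfy the energy-storage and flux constraints mentioned above, which this ideal equation ignores.

```cpp
#include <cmath>
#include <cassert>

// Ideal continuous-mode flyback output voltage:
//   Vout = Vin * n * D / (1 - D),  with n = Ns/Np.
// Ignores diode drop, leakage inductance and losses.
double flybackVout(double vin, double n, double d) {
    return vin * n * d / (1.0 - d);
}

// Duty cycle needed to hit a target output voltage (ideal case):
// solve Vout/(Vin*n) = D/(1-D) for D.
double flybackDuty(double vin, double n, double vout) {
    double x = vout / (vin * n);
    return x / (1.0 + x);
}
```

With 9 V in, a 1:6 turns ratio and a 100 V target, the ideal duty cycle comes out near 65% — already on the high side, which is one practical reason to pick a larger turns ratio rather than lean on duty cycle alone.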
H: C++ microcontroller/processor selection I am having trouble selecting a microcontroller/processor for a robotics project in C++. I have a program working on my computer that is 1.5+ KLOC and relies on data in twenty other files to function, so please do not suggest I use another language. I tried translating it to C, but could not get it to work, perhaps because of the program's heavy reliance on fstream and strings. The program is about 1 MB on my computer right now and takes up 3 MB while running, so I suppose the microcontroller/processor would need either to be capable of supporting 4 MB of RAM if it is von Neumann/MHA, or 1 MB of flash and 3 MB of RAM if it is Harvard. I need PWM, SPI and UART/USART on the processor to communicate with other sensors, and I plan to use a hard drive for the other files and external RAM for the program and its data. I will need at least 90 I/O pins (40 IDE + 40 servos + sensors). Summary:
- >90 I/O pins
- PWM
- SPI
- UART/USART
- if von Neumann/MHA, capable of supporting >4 MB of RAM
- if Harvard, >1 MB program flash and >3 MB of RAM
- supports C++
What do you suggest? Please also provide information on how to program the processor, if possible. So far, I have found Freescale's i.MX25, but I am not sure how to connect this processor to my computer for programming, whether it supports C++, or the details of how to turn my current Windows .exe program into a .hex compatible with this processor. @m.Alin I am using a hard drive because I started out with AVR and found a tutorial describing how to communicate with an HDD from an AVR. I could not find a similar SD card tutorial. @MikeJ-UK The program currently runs on my laptop, an x86-64 Windows 7. @darron "1MB binary implies more than 1500 LOC" The program is 643 KB now, not 1 MB. I apologize for the confusion. I said 1 MB because I am still working on and expanding the program, so the prospective processor will need to be able to handle its future larger size.
"add a peripheral board for the servos" "io offloading onto an FPGA..." I do not know how to do this. After a quick search, I was unable to find any affordable FPGA's. Do you know of any >$50? @Rocketmagnet 1500 lines. @vicatcu I do not think 8 io pins will be enough. @AndrejaKo For the most part, the servos will not need to be controlled at the same time. I like your multiple microcontroller/demultiplexer option, but I do not understand what is wrong with using an i.MX25 with Linux? It has 128 io pins. AI: This sounds like a job for embedded Linux. Persistent file system. Forget IDE (save yourself 40 pins) and go for a board that uses a flash card. More RAM and Flash. Typical embedded Linux boards have RAM in the megabytes. As for peripherals, driving 40 servos could be a question here on its own. How are you doing this now? For the rest of your peripheral requirements, here's a board that seems to fit that has a good community as well: http://beagleboard.org/static/beaglebone/latest/Docs/Hardware/BONE_SRM.pdf The tool chain has a C++ compiler, it has SPI, UARTs, and even a PWM. This is what's being claimed in the PDF at least, you'll have to make sure that there are drivers for all those peripherals available to you at the application level for whichever distro of Linux that you put on. Hopefully the one they provide has everything you need. So basically, if you can port whatever you've written to a Linux PC, there's a good chance you can port it to an embedded linux target. However, I'm willing to bet that if all you're using from C++ is <fstream> and <string>, you could probably do a C re-write and save yourself the overhead of Linux.
H: How do I calculate the proper width of a copper trace based on a given gauge (AWG) of wire? I am designing a PCB which I will etch at home, but I need to know how to convert a wire gauge into the proper corresponding width of a copper track on the PCB. Is there a standard formula for this? AI: There are formulas around to calculate current handling for various shapes/sizes of wire/trace, so rather than convert just calculate directly. There are various standards around (e.g. IPC2221 - was IPC-D-275 I think). Rather than memorising formulas from the standards I think many use a tool of some sort. MiscEl is quite a useful little tool, amongst many other things, it has calculations for wires in AWG/mm/in/mil/etc - number of strands or solid core. For traces you put in desired width, temp rise and copper thickness and it will give you the max current for inner/outer traces.
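Both calculations — the AWG geometry and the IPC-2221 current estimate — are short enough to sketch directly. The AWG diameter formula and the IPC-2221 external-layer formula below are the standard published ones; the "equal copper area" conversion is a rough equivalence, and the IPC current-based sizing is the better way to pick a width.

```cpp
#include <cmath>
#include <cassert>

const double kPi = 3.141592653589793;

// AWG wire diameter in inches: d = 0.005 * 92^((36 - awg)/39).
double awgDiameterInch(int awg) {
    return 0.005 * std::pow(92.0, (36.0 - awg) / 39.0);
}

// Copper cross-section in square mils (1 mil = 0.001 inch).
double awgAreaSqMil(int awg) {
    double dMil = awgDiameterInch(awg) * 1000.0;
    return kPi / 4.0 * dMil * dMil;
}

// Trace width (mil) with the same copper cross-section as the wire,
// for a given foil thickness (1 oz/ft^2 copper is about 1.37 mil).
double equalAreaTraceWidthMil(int awg, double copperThickMil) {
    return awgAreaSqMil(awg) / copperThickMil;
}

// IPC-2221 external-layer current capacity estimate:
//   I = 0.048 * dT^0.44 * A^0.725   (dT in deg C, A in sq mils).
// Use k = 0.024 instead of 0.048 for internal layers.
double ipc2221ExternalAmps(double areaSqMil, double tempRiseC) {
    return 0.048 * std::pow(tempRiseC, 0.44) * std::pow(areaSqMil, 0.725);
}
```

For example, AWG 24 is about 20.1 mil in diameter; matching its copper area on 1 oz foil needs a trace roughly 230 mil wide — which is why sizing by allowed temperature rise with the IPC formula usually gives far more practical widths than a literal area match.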
H: What are sacrificial components? The circuit diagram of a board I'm working on has parts labelled as "sacrificial components". These components seem to be pairs of probe points connected via a capacitor, and connected to nothing else. What are these "sacrificial components"? What is their purpose? AI: To elaborate on W5VO's comment about offering to the gods. +1 by the way. Sacrificial for protection In my experience, "sacrificial component" implies that the part will take some kind of damage and get destroyed in order to prevent some more precious part of the circuit from taking damage. Usually, a sacrificial part is designed so that it's easy to replace. One example would be a common AGU fuse. Another example: a certain instrument needs to measure an input with an expensive A/D converter. The input arrives via a connector, which is exposed to the outside world. Harm can come through the connector (ESD, overvoltage, reverse polarity). A sacrificial op-amp buffer in a socketed DIP package can be added between the connector and the A/D. http://en.wikipedia.org/wiki/Sacrificial_device On the other hand, that all doesn't make a lot of sense in the context of the OP, in which sacrificial parts are not connected to anything. How would harm come to them? A snippet of your schematic and even a portion of the PCB layout would help understand your context better. Sacrificial for fabrication During fabrication*, sacrificial means that something is destroyed in the process of making the product without becoming a part of the product. Sacrificial material is a part of the fabrication process. Simple example: when you want to drill a hole, you might put a piece of wood on the other side of your part, so that the drill bit doesn't over-penetrate into something important. * of anything, not just electronics. Maybe this is your case. Maybe the test points are used for some mechanical purpose.
The EDA package demands that they be connected to something (anything), so they are connected to the dummy capacitor.
H: Can acrylic latex spray paint be used as a DIY solder mask? I have recently begun etching my own circuit boards for miscellaneous Arduino projects and I need to apply a solder mask so that my copper traces don't oxidize. I bought this acrylic latex spray paint for a different project, but I am wondering if it could be used as a solder mask. My thought is that I would solder my components on first and then spray the board. Possibly, I would just cover any sensitive components with a small piece of masking tape first, but largely I wouldn't even think this would be necessary(?). Any thoughts or reasons why this would mean certain doom for the board or, more importantly, why this may be unsafe or a fire hazard? AI: It looks like it might work "okay", certainly unlikely to damage your circuit in any way (unless sprayed in pots/switches/etc) Check if it says flammable in the datasheet to ascertain fire safety. However, if you are just looking for a "sealant" to add after populating the PCB (as you mention in your question), then there are plenty of coatings around specifically for this purpose. They are usually know as Conformal Coatings. There are various types and methods of application, so you need to decide which is best for you. Here is a comparison of the various types (acrylic, epoxy, etc) Some will protect the PCB from moisture, oxidising, etc, but you can solder through them if you need to alter something (e.g. spray lacquers) More permanent and hard wearing solutions include potting compounds (e.g. put circuit in suitable potting box, pour compound in, leave to set) Look on places like Farnell, RS, Mouser for "PCB coating" or "Conformal Coating" and you will get plenty of options. Here is an example of a spray on conformal coating you can solder through. Conformal coating is not just to prevent oxidisation, it also helps prevent problems caused by contamination (e.g. acids/alkalis) or moisture (important for e.g. 
sensitive/high impedance circuits), and also can protect against arcing in high voltage circuits (with a suitably high breakdown-voltage-rated compound). If you are looking for something to apply before populating the board to stop trace oxidisation, then see the tinning suggestions in the other answers.
H: WiFi Signal Boosting a couple of days ago I had a discussion with a colleague when he told us he purchased a "WiFi Booster Adapter+Antenna". It was claimed that by plugging it into his USB port and using it instead of his built-in WiFi adapter, he'd be able to connect to those "far away wireless networks". My assumption about WiFi (or any form of 2-way connection): The two devices should be in each other’s range. There's no such thing as "receiving range". Even if you boost one device's transmission range while the other device is moving further, you should also do the same boosting (more power gain) for the moving device. Based on those assumptions I have the following in my head: (Both devices in each other's range; A's signals are reaching B, B's signals are reaching A) (B is going further and boosting its signal. While B's signals are reaching A, A’s signals aren't reaching B) The question is: away from the discussion itself, the brand of the booster or where he's installing it (one might purchase a super duper WiFi AP for example). Am I making wrong assumptions here, either about how sending/receiving signals works or how the booster works? If so, can you please correct me? AI: WiFi Signal Boosting - The two devices should be in each other’s range. - There's no such thing as "receiving range". - Even if you boost one device's transmission range while the other device is moving further, you should also do the same boosting (more power gain) for the moving device. Your premises are partially correct. An aside - Antenna gain: The following is liable to confuse more than help. Just accepting "antenna gain" as a focusing of signals, as with a magnifying glass, will suffice for this discussion. When I say below "increase antenna gain" I mean as far as the transmission between the two stations is concerned. Antenna "gain" is always only achieved by dealing with signals from a relatively smaller area.
You can get an inefficient antenna, which clouds the issue, and antennae may reflect a radiation image in the ground plane, but for practical purposes antenna "gain" is identical to what you get with a magnifying glass. On the receive side, signal from a wider area may be captured, but what is effectively being done is to acquire signal from a larger solid angle. If you increase the antenna gain of B relative to A then you will increase the apparent transmitter power of B. But, using the same antenna, you will also increase the apparent transmitter power of A, as B will have the signal from a wider area "focused" by the receiver. So, increasing antenna gain increases range due to apparent boosting of transmit power by both stations. If you increase the transmit power of B you will increase the B to A transmit distance, but the A to B distance will not be affected. To provide an equivalent receive boost you need to reduce the receiver noise level of B proportionately. This is usually best achieved by use of a lower-noise device in the receiver front end. This area involves the blackest of magic. It is usually easier to increase transmit power at both ends than to increase receive gain and transmit power at one end only. In a typical WiFi scenario the central station / master / access point / whatever, is shared amongst multiple channels and is able to bear a greater capital cost. By adding a booster that both increases transmit power and also adds a low-noise receive amplifier you benefit all channels and remote devices concerned. At least potentially, a unit which does not improve receiver performance and which increases transmit power only at the access point could support a greater outgoing data rate and slower incoming rate at lower power / signal to noise / worse BER. This would be useful for download-dominant data streams, which are what is commonly encountered. Whether the system used supports such split data rate configurations is protocol dependent.
H: Gate current direction during turn-ON of P-channel MOSFET In a lecture, my instructor told us that MOSFETs have various capacitances associated with them, e.g. for a switching application the important ones are the gate-source cap, drain-source cap, and gate-drain cap. Now he said that during turn-ON of a MOSFET a current flows through the gate of the MOSFET that charges all these 3 capacitances. The capacitances of these capacitors determine the turn-ON time of a MOSFET. He gave the example of an N-channel MOSFET. While I was applying the same concepts to a P-channel MOSFET, I came across a doubt. In the case of a P-channel MOSFET, do the gate-source cap and all the above caps charge or discharge during turn-ON? It seems to me that the caps should discharge during turn-ON and should charge during turn-OFF. Specifically, I'm driving a P-channel MOSFET as a high-side switch via an open-collector output pulled up to 5 V, so I need to know this. After searching through many docs on Google and many books, I still have this doubt unresolved, as all have examples of an N-channel MOSFET, not a P-channel one. Can anyone clear this doubt or provide any references or links? The circuit is similar to this, but the components are different. If required, the MOSFET I'm using is the IRF9392. Update1 on 2012-07-09: "This circuit will switch off slowly due to resistive turn-off gate drive. OK in most on/off cases - but not at SMPS frequencies. D3 not obviously useful." In the above schematic, why is D3 not useful after adding a D2 Schottky (anode to gate) and D1 zener (anode to gate) in parallel, physically and electrically close to Q2? Why will turn-off still be slow (even with diode D3)? Is it due to resistor R4, or an increase in gate-source capacitance due to the zener and Schottky? Update2 on 2012-07-14: Is a gate-to-source zener, like the D1 zener (anode to gate), required even with a flyback diode across an inductive load (instead of resistor R3) connected to the drain of Q2?
AI: No, for P-channel MOSFETs too, turn-on charges them. The confusion probably stems from the voltage you see at the lower side, which goes to ground (or near it). But that voltage isn't important; a capacitor's charge is determined by the voltage across it: \$ Q = \Delta V \cdot C \$ So decreasing the gate voltage increases the gate-source voltage difference, which increases the charge of the capacitor. When you switch off T1, there's current flowing from +12 V through R2 to the gate to discharge its capacitance. edit re the update of your question dd. 2012-07-09(*) Turning off means that you discharge the gate to +5 V, and this happens by current through R2 and D3. So you bypass R? but R2 is still the limiting factor. A solution would be to swap R2 and T1, so that there's more current/less resistance to discharge the gate than to charge it. (*) I'm using the ISO 8601 standard date format here. We have users from all over the world, and for some 9/07 means 9 July, for others it's 7 September. ISO 8601 is unambiguous.
H: What are the pins? I'm looking at a motor controller, the Motor Driver 2A Dual L298 H-Bridge, and since I'm pretty new at this I'm wondering what the pins are: IN1, ENA, IN2, CSA, IN3, ENB, IN4, CSB Now, I'm guessing INx stands for INPUT. But what type? Analogue or digital? AI: The answers are in the datasheet: Points 3, 4 and 5 tell you what the pins do. INx are digital inputs for controlling motor direction. ENx are digital inputs for enabling each motor. The CS pins are for current monitoring purposes. There is a small resistor (e.g. 0.5 ohms in this case) between this pin and ground, so the current will be translated to a voltage at the CS pin (e.g. 1A * 0.5 ohms = 0.5V) You can see the two large 3W, 0.5 ohm current sense resistors clearly in the picture below: Here is an example from the L298 datasheet: The quality isn't very good, sorry. Note this shows only 1/2 of the L298. Hopefully you can see the resistor at the bottom left, with a tap off to "control circuit" (e.g. current can be controlled if desired, and monitored for overcurrent) The truth table gives the same info as points 3, 4, 5. C and D equal IN1 and IN2 (or IN3, IN4 for the other half) Ven equals EN1 (or EN2..) and the pin with the resistor is CSA (or CSB) Hope this all makes sense.
H: I can't use digital ports with my raspberry pi? I have a raspberry pi that I really want to be able to control some motors and was thinking of this one at sparkfun: http://www.sparkfun.com/products/9670 The problem is that the Pi does not have digital ports? And the motors require it for control (low/high signal). Is the GPIO on the Pi just plain analog? If that's the case, is there an add-on that could transform the signal from the Pi to the motor digitally? AI: I haven't seen the datasheet(*) of the Raspberry's BCM2835 SoC, but I would be surprised if it didn't have digital I/O. I guess the I/Os are configurable for either analog or digital, and input or output. edit This document describes the Raspberry/BCM2835 peripherals; on page 90 ff it describes how to configure I/Os for either input, output or one of up to 6(!) alternate functions. (*) Apparently, Broadcom doesn't really want us to see the BCM2835 datasheet, since we have to fill in an information request form to get at it. Not for me.
H: Maximum input frequency of 74HC logic gates I have a clock that is running at 25.175 MHz and I want to run that signal through some basic logic gates from the 74HC series. I have looked at the datasheet for the 74HC08 (AND gate), and I can't seem to find any information about the maximum frequency I can run through these gates. Does anyone know how to find this information? Hopefully I am not being too stupid and it's on the datasheet; I have checked 3 times now! So my question is: what is the maximum input frequency of the 74HC logic gates? Thanks AI: No, it's not in the datasheet, it's in the HCMOS user guide, as it concerns all HCMOS components. On page 4: D flip-flop: 30 MHz minimum, 55 MHz typical. Counter: 25 MHz minimum, 45 MHz typical. For gates this will probably be in the same ballpark, since for those only rise and fall times are relevant, and not propagation delay. If you want something faster, NXP's Advanced Ultra-low Power series (AUP) quotes frequencies up to 619 MHz.
H: Why is the voltage regulator's output current not what I'm expecting? I've recently bought a couple of Sanyo LA5003 voltage regulators (datasheet). From the datasheet, I understood that the highest possible current on the output can be 60mA. However, when I connect it to several (2 or 4) series-connected 1.5V AA batteries, I get, respectively, around 120mA and around 140mA. Here's how I'm testing the current: (pin 1 is on the right) What am I doing wrong and/or misunderstanding? PS. I've tried measuring on more than one of them, obviously. AI: You are measuring the current incorrectly, and you are misunderstanding what the term "maximum current" means. You are measuring current like you would measure voltage. In order to measure current, you need to place your meter in series with your load. If you place it in parallel, then the meter will short-circuit your source (regulator in this example) and give you a (usually) nonsensical answer. The datasheet specifies maximum values, such as maximum current and maximum power dissipation, and it is your job to make sure that you don't exceed those parameters. The maximum current is the most current that you can safely use without damaging the circuit, but this part won't stop you from exceeding the maximum current.
H: Power supply design pattern with various capacitors I'm seeing the following pattern for power supplies on the board I'm working on: What is the purpose of the capacitors? Why are they of different values? What is the purpose of the design pattern? AI: stevenh already said it pretty well, but most likely they are decoupling capacitors. Decoupling capacitors, or bypass capacitors, are capacitors meant to smooth power flow into specific parts of the circuit or into specific ICs. Changing power demands will create a "sag" on the power supply as it changes to meet output current demands. This pulls down the voltage. These capacitors will act as "local storage" to the load during a transient event that effectively masks the sag on the power supply to the load being bypassed/decoupled. In a very dumbed-down way, think of it like a pipe. One end is your power supply, and the other end is your load. The power supply adjusts itself to supply what the load is demanding. If the load changes, it might temporarily take enough water (power) out of the pipe to the point where the pipe isn't entirely full. The pipe not being full is the equivalent of your voltage sagging. This is what happens all the time on a power supply... load changes, and the voltage sags slightly as the power supply tries to supply enough current to meet demands... then eventually the voltage comes back up once the power supply has changed its output current to meet demands. Now, a decoupling capacitor is like adding a big tank on top of the pipe. When the pipe is full... the capacitor can't empty any of its water out. However, when the load gets big enough and the power supply can't supply it quick enough... the tank lets some of its water out to keep the pipe full until the power supply can supply the given current. As far as why they are different values, different parts of the circuit will require different amounts of power.
Usually you'll see big caps (in the tens of microfarads, in this case, those big 100uF ones) near the power supply output itself... I usually see this referred to as "bulk"... this is for really big transients that pull a lot of power. Smaller values are for things with smaller current draws. There's also some math behind the capacitance, I believe, in regards to how fast the capacitor can give up its energy for a transient event. Smaller capacitance being better for high-frequency transients, etc.
H: While loop optimized away I have the following code in my microcontroller program: // Wait for ADC conversion to complete while ( ( ADCSRA && _BS( ADSC ) ) == _BS( ADSC ) ) {} Where ADCSRA is a register that will change its value once an analog conversion is completed and where I want to wait for a bit to be clear. This bit indicates conversion completed. Looking at the resulting assembly code, the whole loop is replaced by a single instruction: in r24, 0x06 ; ADCSRA The register is read, but its value isn't even tested!? How do I have to change my C++ code to instruct the compiler to keep rechecking the register, without unnecessarily delaying the program? I use the avr-gcc toolchain. EDIT: I changed the code as follows (Thnx: lhballoti): while ( ( ADCSRA & _BS( ADSC ) ) == _BS( ADSC ) ) {} Which changed the assembly code to: 38: 36 99 sbic 0x06, 6 ; 6 3a: fe cf rjmp .-4 ; 0x38 <__CCP__+0x4> Which apparently solves the problem. Check this page for the complete program and its disassembled resulting code. AI: You should be using a bitwise AND. A logical AND (&&) can only evaluate to 0 or 1, which can never equal _BS( ADSC ), so the condition in the first while loop is always false and the compiler removes the loop altogether.
H: Raspberry PI without OS on it? I've been doing a lot with Arduino recently. It's very simple because you can directly execute C++ code on it without the need of operating systems or drivers. I've done some research and AFAIK, you usually install Linux on the Raspberry PI and create python scripts or C++ binaries on it. Question: Is it possible to run code on it without any operating system, but still being able to use HDMI and SD cards? AI: Run code on the RaPi without OS: No problem. David Welch has done the grunt work, check his GitHub. Basically, you can take the standard startup files and replace the kernel.img with the file you want to run. To avoid the 'SD card dance' you can run a bootloader that receives the image-to-be-run over a serial line. I have added an auto-reboot, so you can run a new image 'hands-off'. If you want to go this route I have some more information. Run your own bare-metal application and use the screen and SD card: That's a problem. All information is available deep in the Linux sources, and there are persons working on this (check DexOs?), so I assume in half a year or so this info will be common knowledge. But for now I would say this is probably too much work.
H: How can I instruct my compiler to use 8 bit bytes instead of 16 bit integers? I have the following code in my microcontroller program: int analogValue = ADCH; // ADC Data Register // // Simple analog comparator. // If analogValue lower than threshold then toggle output high, // Otherwise toggle it low. // if ( analogValue > 128 ) { PORTB = 0; // Port B Data Register } else { PORTB = _BS( outputPin ); // Port B Data Register } Where: ADCH is the register that contains the value from the ADC PORTB is a digital output port that toggles an LED Looking at the resulting assembly code, I noticed that it is doing a 16 bit compare (lines 40-44) where strictly speaking only 8 bits would have been sufficient: 40: 90 e0 ldi r25, 0x00 ; 0 42: 81 38 cpi r24, 0x81 ; 129 44: 91 05 cpc r25, r1 46: 14 f0 brlt .+4 ; 0x4c <__SREG__+0xd> 48: 18 ba out 0x18, r1 ; PORTB 4a: f5 cf rjmp .-22 ; 0x36 <__CCP__+0x2> 4c: 28 bb out 0x18, r18 ; PORTB 4e: f3 cf rjmp .-26 ; 0x36 <__CCP__+0x2> I realize I declared analogValue as int, which indeed is 16 bit on AVR, but ... How can I instruct the compiler to use 8 bit comparison? The Arduino IDE allows me to use byte, but avr-gcc by default doesn't. Check this page for the complete program and its disassembled resulting code. EDIT1: Changing int to char changes the assembly code to: 14: 11 24 eor r1, r1 ; r1 = 0 3e: 18 ba out 0x18, r1 ; PORTB Basically skipping the test entirely. EDIT2: (Thnx: Wouter van Ooijen) Changing int to unsigned char changes the assembly code to: 3c: 85 b1 in r24, 0x05 ; ADCH 3e: ...
40: 87 fd sbrc r24, 7 ; compare < 128 (well optimized) 42: 02 c0 rjmp .+4 ; 0x48 <__SREG__+0x9> 44: 18 ba out 0x18, r1 ; 24 46: f7 cf rjmp .-18 ; 0x36 <__CCP__+0x2> 48: 98 bb out 0x18, r25 ; 24 4a: f5 cf rjmp .-22 ; 0x36 <__CCP__+0x2> AI: I actually think a better practice that avoids this architectural ambiguity is to include <stdint.h> and then use explicit fixed-width types like: uint8_t for unsigned 8-bit integers int8_t for signed 8-bit integers uint16_t for unsigned 16-bit integers uint32_t for unsigned 32-bit integers and so on...