H: FPGA Floating-point to Unsigned 32bits Regarding something I read in a Xilinx manual saying this: Because floating-point operations use considerable resources relative to integer/fixed point operations, the Vivado HLS tool utilizes those resources as efficiently as possible. I was thinking about the following option: if that is true about floating-point cores utilizing more resources, why can't we convert every float we have to u32 with a union like this: union converter { float f32; unsigned int u32; }; do all the operations with the u32, and then at the end convert everything back to float? I saw some examples of people doing this in really small designs, but never in big ones with lots of float operations. AI: Just because they use the same number of bits doesn't mean that you can perform operations on them in the same way. Look at how a float is actually constructed internally - how the bits are used (sign, exponent, fraction) - and then think about what would happen if you simply did integer operations on it.
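As a minimal illustration of that last point (my own example values, plain C): adding the raw bit patterns of two floats as integers does not add the numbers they represent.

#include <stdio.h>

/* Minimal sketch: the bit patterns of two floats, added as integers,
 * do not give the bit pattern of their sum. */
union converter {
    float f32;
    unsigned int u32;
};

int main(void) {
    union converter a = { .f32 = 1.5f };
    union converter b = { .f32 = 0.5f };
    union converter s;

    s.u32 = a.u32 + b.u32;                     /* integer add of the raw bits */
    printf("float add : %g\n", a.f32 + b.f32); /* prints 2 */
    printf("bit add   : %g\n", s.f32);         /* prints ~1.276e+38, not 2 */
    return 0;
}

A float's bits encode a sign, an exponent and a fraction, so integer arithmetic on the bit pattern scrambles all three fields; the only "free" operation is the reinterpretation itself, as in the union above, not the arithmetic.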
H: Transfer characteristic of p-channel JFET Considering the output characteristic of the n channel JFET, can we somehow obtain the same characteristic only for the p-channel type of this transistor? In R.Jaeger's book about microelectronics circuit design, only a vague description was given on how only the polarities are changed, and everything else is left untouched. I suppose that means that we will either be graphing the same curve in the first quadrant, or also in the third, only using the Vsg voltage. AI: Considering the output characteristic of the n channel JFET, can we somehow obtain the same characteristic only for the p-channel type of this transistor? It's not practical because the manufacturing tolerances for any specific N channel JFET are so wide as to make it a worthless exercise. See below from the 2N5486 data sheet: - The above describes the channel resistance against temperature WHEN the voltage across gate and source is zero. If you picked out one device that had Vgs(off) = -1V then you'd use the top curve, but who knows what device you have grabbed from the handy bag of JFETs. It could easily be a device whose Vgs(off) was -8V. Because of this, Fairchild (and other suppliers) have basically created three part names for the same device because the tolerances are so wide. If you read the data sheet there are: - 2N5484 having a Vgs(off) between -0.3 and -3.0 volts, 2N5485 having a Vgs(off) between -0.5 and -4.0 volts, 2N5486 having a Vgs(off) between -2.0 and -6.0 volts. Irrespective of all that, they are still anticipating that some devices may have a characteristic that is -8.0 volts! It's not unknown for Fairchild data sheets to contain a typo, and they may actually have meant -6.0V. So, trying to characterize a P-ch JFET from an N-ch device is fruitless.
H: Inductor in transformer In an inductor, a current waveform 90° out of phase with the voltage waveform creates a condition where power is alternately absorbed and returned to the circuit by the inductor. If the inductor is perfect (no wire resistance, no magnetic core losses, etc.), it will dissipate zero power. I wonder how this will change in the case of a transformer. Suppose the primary and secondary coils have zero resistance and the core is perfectly ferromagnetic: what should be the phase difference between the voltage and current in the primary coil? Power is delivered to the secondary circuit, so it should not be absorbed by the primary voltage source as in the case of a pure inductor. AI: In a transformer, the primary has two components of current: - Magnetization current i.e. the open-secondary primary current. Current that (ignoring small leakage inductances) finds its way (via the turns ratio) to the secondary. Point 1 - the phase angle of the mag current is 90 degrees lagging the primary voltage (just like an inductor). In fact, with the secondary open circuit, the transformer is just an inductor. Point 2 - the current in the secondary is in phase with the secondary voltage for a purely resistive load. The power delivered to the load is "X" watts and the power taken from the supply via the perfect transformer is "X" watts. Primary current is mainly\$^1\$ in phase with primary voltage when full load is on the secondary. As the secondary loading reduces, the primary current starts to look more reactive and eventually it is 90 degrees out of phase. \$^1\$ As Dave Tweed points out, for many low power transformers (on full load) the primary magnetizing current (at 90 degrees) is still quite dominant and therefore the phase angle won't be "mainly in phase with the primary voltage". The example he gives is a transformer with 1 henry primary mag inductance. This will take a reactive current of 120 V / XL. XL is about 377 ohms, therefore the current is 318 mA. Current attributed to the 12V load is 1A and, due to the turns ratio, produces a current of 100mA in the primary. In other words, the primary current at full load, for this type of transformer, is still mainly reactive and closer to 90 degrees than zero.
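As a rough check on that last point (my arithmetic, using the figures above): combining the two primary current components at right angles gives a phase angle of

$$ \theta = \arctan\left(\frac{I_{mag}}{I_{load}}\right) = \arctan\left(\frac{318\text{ mA}}{100\text{ mA}}\right) \approx 72.5^{\circ} $$

which is indeed much closer to 90 degrees than to zero.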
H: Solid state relay doesn't turn off I am trying to build a custom light setup for my computer desk. I want to use the scroll lock LED signal to control a pair of 12 volt DC CCFL lights run off the same inverter. The problem is that when I plug in the CCFLs, they just turn on, and no matter what I run across the signal pins, they don't turn off. The SSR I have purchased is from Crydom, part number CN048D05. This is a drawing of my circuit: I am not really an electronics guy, but I have some basic knowledge. The LED circuit that I am trying to use as a control works. I verified that. What am I missing here? AI: From the diagram on the SSR (as given in the datasheet), it uses a phototriac. The SSR will not deactivate until the load current drops below the sustain current for the device, which is not given in the datasheet but is usually very close to 0 regardless. This device is not suitable for controlling constant DC loads.
H: Protecting current measuring shunt resistor from overcurrent To measure the current "drawn" by a 12V PTC heater element and to be able to switch it on and off from an AVR micro controller, I have come up with the circuit below. The PTC heater element has about 2 Ohm cold and 8 Ohm at steady state and is powered by a 12V 10A switching power supply. I tested the circuit and it works okay, I can measure the expected 600mV (cold) and 150mV (steady state) across the 0.1 Ohm 9W shunt resistor. Now I suppose I should protect the shunt resistor from overcurrent, i.e. in case the PTC heater element has a short circuit. In code I'm switching off the relay when the measured current exceeds 9000mA and there is a 10A fuse in series with the heater and the shunt. But what if for some reason the power supply "delivers" only, say, 8A? In that case, neither the software protection nor the fuse will kick in, and the shunt will be severely overloaded. Am I right? If yes, how would I solve this? Would I have to add some thermal protection for the shunt resistor? AI: Why would the shunt be overloaded? P = I^2 * R. At 9A (Worst case) that is 8.1W dissipation. At 8A, it is 6.4W dissipation.
H: STM32F030 analogue input pin impedance (does value change during conversion period?) The analogue input pin impedance for the STM32F030 is listed as around 50K in the datasheet. Does this value apply only during the conversion period, or would it be higher when the pin is configured as an analogue input pin but no conversion is taking place? http://www.st.com/web/en/resource/technical/document/datasheet/DM00088500.pdf (see p. 64) AI: You've misread the datasheet. RAIN is the maximum external impedance that can be connected to the ADC pin in order to keep error below 1/4 LSb on a calibrated ADC. The impedance of the S+H circuit is RADC, listed below that. The impedance of the circuitry beyond the S+H circuit is not given, but based on a RAIN of 50kohm with an error of 1/4LSb with a 12-bit ADC, I do not expect it to be any less than approximately 820Mohm. See figure 23 for details.
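For what it's worth, here is one way to arrive at a figure of that order (my own estimate, treating the leakage beyond the S+H as a simple resistive divider against the 50 kΩ maximum source impedance):

$$ \frac{R_{AIN}}{R_{leak}} \leq \frac{1/4}{2^{12}} \;\Rightarrow\; R_{leak} \geq 50\text{ k}\Omega \times 4 \times 4096 \approx 820\text{ M}\Omega $$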
H: Why can't we test return neutral current with a tester? The current that a load takes up comes into it from the live and returns through the neutral. We know that. We test a live wire by touching it with a tester in one hand and being in contact with the earth. It GLOWS. But I tried to test a load's return current (my computer as the load, having an input power of 400W at 240V, so the current would be 400/240 = 1.67 A). I touched the neutral with tester "A" and was in contact with the live wire through another tester "B" to prevent me from getting a shock, and I was completely insulated from earth by a pair of electrically insulated shoes. Tester "A" didn't glow, and neither did "B". Why could this be? But in this case, when I measured the voltage between tester A and B, it was 0V. Maybe that's why. Why could it be 0V? I know this question seems useless but I was curious to see the return current. AI: The tester in your hand indicates live voltage is present. It doesn't indicate current thru the load. You need an ammeter for that, and it works in both live and neutral. But in this case when I measured the voltage between tester A and B it was 0V You said A was connected to neutral and B connected to live. This means that the AC voltage between A and B should be 240V, or else the power feed was switched off. Are you sure you are measuring with your meter on AC and not DC?
H: If I Incorrectly Wire USBASP to ATmega 328-PU Will I Burn Out My Chip This is a simple, noob question. I'm trying to connect a USBASP with an ATmega 328-PU. If I incorrectly connect them, such as putting ground into Vcc or MISO into MOSI, will I kill my chip? AI: Yes, that is quite possible, so do not miswire it. What you could do is try to defend the chip from yourself. For example, you could put some resistors, say 220 ohm, on the MISO, MOSI and CLK lines. This will give you some safety from bad connections. Of course, this won't help you if you connect GND to Vcc, since you can't have resistors on those two lines. If you have any doubt, use a multimeter. If you get to the point where you're not sure what you're doing, take a break and try again in 20 minutes, or the next day, or whenever you're a bit more rested.
H: LM324 for 10,000x gain? I'm looking to use the LM324N (http://www.ti.com/lit/ds/symlink/lm124-n.pdf) single-supply op-amp in non-inverting mode: The goal is to take a voltage of around 100uV and multiply it to around 1V (i.e. let R1 = 100 ohms, and let R2 = 1 megohm), while supplying the op amp with +5V on one end and GND on the other end. Will this work? AI: The op-amp is configured as an inverter and, with +100uV on the input, the output will try to go negative but it can't because you restricted the negative power rail to 0V. So this means your source has to have a negative voltage fed to the input (a possible constraint). The trouble with the LM324 is that the input offset voltage is about 2mV, i.e. about 20 times bigger than the signal, and this may well be a positive offset which, as said above, will force the op-amp output to go negative. If it's a negative offset then 2mV x 10,000 = 20V and the op-amp is saturated hard against the positive rail (or as near as it can get to it, i.e. about +3V). Also, when running from a 5V supply, input bias currents could be as high as 500nA (across temperature). This will flow thru the 100 ohm input resistor and create an offset of 50uV - that's half your signal. Stop this madness and use a proper op-amp suitable for the job like an ADA4528. It has input bias currents of <1nA and an input offset voltage of about 4 uV. It is also a rail-to-rail device on inputs and outputs.
H: Is it really OK to supply more current than what the component is rated for? In this heavily upvoted answer the answerer states that it is okay to supply a component with more current than what it's rated for. The analogy is that (paraphrasing here) "If Johnny wants to eat two apples, he'll only eat two regardless of whether you give him three or five, etc." However, one of the most basic circuits you can possibly make is to power an LED from some power supply. Since most power supplies provide a current that is higher than what most LEDs can handle, you must put a resistor in front of the LED in order to not burn it out. So which is it?!? Can someone explain to me when/where/how it is/isn't okay to provide higher (and lower, for that matter) current than what a component is rated for? AI: To answer the title of your question, the answer is no. It is not ok to supply more current to a component than its rated value. However, it is ok to have a voltage power supply rated for more current than the component's rated value, because the component will draw as much as it needs. If you are forcefully pushing more current into the component, then the component will exceed its rated value, heat up and be destroyed. This happens if, for example, you use a constant current source or too large a voltage (which will cause more current to flow). But if you use the rated voltage, then the load will only take what is required, regardless of how much current is available to be drawn from the source. The difference is in how you word your question.
H: How to estimate solar panel output as a function of solar radiation Apparently, according to EarthScience.SE, the measurement of how "bright" a given day is, is expressed in units of kWh/m^2, known simply as "solar radiation". Apparently, 3 kWh/m^2 is the average brightness of an American summer day (useless fact). This solar panel claims an output of 6V at 330mA. Obviously, solar panels will not perform the same on cloudy, darker days. What I'm looking to do is to put together an equation/algorithm that will take solar power (in Watts) and solar radiation ("brightness"; in kWh/m^2) as input, and tell me what the adjusted power output is for that panel, based on the current brightness. As an example, using that particular solar panel and a given point in time when, say, there is only 2 kWh/m^2 of solar radiation in the given vicinity, then the equation might yield something like this: // Example only AdjustedPowerOutput(normalOutput, solarRadiation) = normalOutput * (solarRadiation / 3) = 1.98 Watts * (2 kWh/m^2 / 3 kWh/m^2) = 1.98 Watts * .667 = 1.32 Watts So, if this equation was correct, at that point in time in the given day, the panel would only output 1.32 Watts. How can this "adjusted power output" actually be calculated? AI: Normally, solar panels are rated with the power output when irradiated with 1kW/m^2 - this is close to the max solar irradiance at noon on a clear day. You seem to be mixing up energy (kWh) with power (kW). The current output of a PV cell will be fairly linear with the intensity of solar radiation. The voltage will reduce with temperature, but if you keep the cell cool the power will then be proportional to the intensity of radiation. This assumes that the sunlight falls on the cell at 90 degrees - as the sun moves across the sky that cannot be maintained unless you have a tracking PV array. If not, there will be a cosine correction that needs to be applied. The total amount of energy you get per day from a cell will depend upon how long the light is present, i.e. how long the day is (which will vary with latitude and season) and how clear the air is (e.g. are there clouds). The link you gave has a map showing the integrated average energy per day for various locations. This graph shows how the current output of a PV cell varies with light intensity and voltage.
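A minimal sketch of that estimate in C (the function name, parameter names, the 1 kW/m^2 reference level and the example numbers are my own assumptions, not values from the answer): output is scaled linearly with instantaneous irradiance and corrected by the cosine of the sun angle, while temperature derating is ignored.

#include <stdio.h>
#include <math.h>

/* Rough estimate only: assumes electrical output scales linearly with
 * irradiance (referenced to the usual 1 kW/m^2 rating condition) and
 * applies a cosine correction for the angle between the sun and the
 * panel normal. Input is instantaneous irradiance in kW/m^2 (power),
 * NOT the daily kWh/m^2 (energy) figure. */
double adjusted_power_w(double rated_power_w,     /* rating at 1 kW/m^2       */
                        double irradiance_kw_m2,  /* instantaneous irradiance */
                        double sun_angle_deg)     /* 0 = sun square-on        */
{
    const double pi = 3.14159265358979323846;
    double cosine = cos(sun_angle_deg * pi / 180.0);
    if (cosine < 0.0)
        cosine = 0.0;                             /* sun behind the panel     */
    return rated_power_w * (irradiance_kw_m2 / 1.0) * cosine;
}

int main(void) {
    /* ~2 W panel (6 V x 330 mA) under 600 W/m^2, sun square-on */
    printf("%.2f W\n", adjusted_power_w(1.98, 0.6, 0.0));   /* ~1.19 W */
    return 0;
}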
H: Increasing wire gauge by twisting pairs I have a cable like this with 4 wires inside. Each wire is 20 AWG. I've been told that I can twist the ends together (i.e. green and red, white and black) on both ends of the cable and this will effectively increase the wire gauge. This would make perfect sense to me if the wires were naked (no green/red/white/black plastic coat around it) so it would be copper twisted the full length instead of just the ends... does the coat affect the current? I'm not an electrician or EE so I'm not too sure about this. I want to use this for sprinkler valves. AI: If you twist two wires together, each would carry half the current, so you'd "effectively increase the gauge." American Wire Gauges go down by about 10 for every factor of ten in cross-sectional area. If you had ten #20 wires connected in parallel, they could carry as much power as one #10 wire. With two #20 wires, you'd have the equivalent of one #17 wire. (A handy "rule of thumb" value: #40 copper wire has about an Ohm of resistance for each foot. By the rule above, #30 would have an Ohm for every ten feet, and #20 an Ohm for every 100 feet.) Note that connecting wires in parallel may work at DC or low frequency AC. For audio, RF, or other purposes, you'd just mess up the wire characteristics, and cause yourself problems.
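As a short worked version of that rule (my arithmetic, not in the original answer): paralleling two identical wires halves the resistance, and dropping about 3 AWG numbers also roughly halves the resistance per unit length, so

$$ R_{2\times\#20} = \frac{R_{\#20}}{2} \approx \frac{1\ \Omega/100\text{ ft}}{2} = 0.5\ \Omega/100\text{ ft} \approx R_{\#17} $$

using the "1 ohm per 100 feet of #20" rule of thumb from above.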
H: RIGOL Oscilloscope timescale "bug" I assume this is an already known "bug", but I'm unsure if it can be fixed or not. Straight to the problem: I feed my RIGOL DS1054Z a 20MHz sinusoidal waveform and set the timescale to 200ms/div. The output is the following: (Sorry for the bad quality picture, but the scope just doesn't want to read my USB stick for some reason.) It is displaying the waveform just wrong! I know some other scopes have this problem as well, but can this be fixed? Note: My KEYSIGHT MSO 3000 Series displays the waveform as I would expect: AI: Your Keysight scope is sampling at 1 MSa/s. I can't quite make out the Rigol's sample rate, but it looks like 5 MSa/s. That might account for some of the difference. Regardless, neither of those sample rates is high enough to correctly show a 20 MHz signal. You're seeing garbage on both scopes; it's just different kinds of garbage.
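A quick aliasing check (my own, using the quoted sample rates) shows why: the apparent frequency of an undersampled sine is

$$ f_{alias} = \left| f_{in} - N f_{s} \right|, \quad N = \text{round}(f_{in}/f_{s}) $$

which for a 20 MHz input is 0 Hz at 1 MSa/s (N = 20) and 0 Hz at 5 MSa/s (N = 4); any small frequency error then shows up as a slow, meaningless beat waveform instead of the real signal.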
H: Is ACS712 isolation sufficient for SELV? Can the Allegro ACS712 current sensor be used to measure mains current such that the isolation between the mains (230V) side and the logic side of the sensor is enough to achieve SELV-compliant isolation (see https://en.wikipedia.org/wiki/Extra-low_voltage )? Datasheet for ACS712: http://www.allegromicro.com/~/media/Files/Datasheets/ACS712-Datasheet.ashx As far as I can tell, the device has too short a creepage distance between the low-voltage and 230V pins, meaning that it won't be usable to achieve the required isolation for SELV circuits. I'm thinking one could have a slot in the PCB under the ACS712 and then fill this slot with a glue gun. In this way it might be possible to achieve a 7mm creepage distance. Would this be an acceptable solution? Or will the current creep in between the glue and the SO capsule body? AI: Welcome to the ambit claims & half-truths of electronic component datasheets :-( I believe you're correct that, depending on the contamination grade you're aiming for, this device on a solid PCB won't meet your requirements. Adding a slot in the PCB, along with appropriate PCB routing, will help in general (without reference to any specific standard you're trying to meet) by increasing the creepage distance, but I don't think you want to be filling the slot with glue, unless you're meaning to add a 'wall' of glue in the Z plane all around the top side of the device to increase both clearance and creepage, which is (a) ugly, (b) very difficult to service, and (c) probably more expensive to manufacture in a repeatable manner than a more appropriate selection of device.
H: PIC 12F508 Logic Level Calculations I am using a PIC 12F508 for a project and just want to make sure I understand the specifications in the datasheet for a logic low and a logic high. Here's the datasheet for reference. The specs I'm using are on page 73. The pin in question is the MCLR pin which I have set in the configuration bits as an input so it's not functioning as a reset for the micro controller. For a logic low it says 0.15 VDD I take it that means it's a percentage of the micro controllers supply voltage (VDD) right? So say my VDD is 3.50V so 3.50 * 0.15 = 525 millivolts. So as long as the input remains below 525 millivolts the PIC will read it as a logic low? I take it the same would apply to a logic high 3.50 * 0.85 = 2.975V or above would be read as a logic high right? AI: Correct on both counts. The default mathematical operation in an equation when none is given is multiplication.
H: Why PCB thickness in Microns I have created Gerber files for one of my application circuits. While sending it out for fabrication, the manufacturer is asking for the thickness in microns. My question is: why is the thickness expressed in microns and not in mils/mm? And what are the start and finish copper thicknesses in a PCB layout? Thanks. AI: He is probably referring to the copper layer thickness. He will probably start with FR4 clad with 18um Cu on both sides (start thickness) and eventually build it up by electroplating to 35um (finish thickness) as he creates the vias etc. You probably need to specify the finish thickness only (it depends on the current density in the traces; 35um or 70um are the most common) and the manufacturer will then select the material that best suits his technology.
H: black level calibration in image sensor For one camera, the black level of the sensor will make the output non-zero even when it is covered. For example, if the output of the sensor is 10-bit and the black level is 63, then if the output is reduced by the black level, the range of the output will be 0~960. So the dynamic range is reduced from 0~1023 to 0~960. Am I right? From one document, there is a figure (not reproduced here). I am not an electronics engineer, so I don't understand why the analog offset is added to the pixel voltage. The output of the ADC is reduced by the digital offset; will this reduce the dynamic range greatly? AI: The thing is that you want, on one hand, to make the best calibration you can, but on the other hand you don't want to insert nonlinearity or other artifacts. So you have some analog compensation, in order not to amplify anything that is not actual signal. But still, after amplification some offset may remain, so you have to calibrate digitally too. And you are right, the dynamic range does not span the full ADC scale. Still, if you have something like 90% of your scale it's fine. Hopefully linear calibration will be enough. I once had a sensor that required polynomial calibration, and even with that the performance was not perfect.
H: MAX14578 LDO connection not in use I want to use the MAX14578. I've looked into the datasheet (http://datasheets.maximintegrated.com/en/ds/MAX14578AE-MAX14578E.pdf) and found it has an integrated LDO. But I don't need to use this LDO. Since I use the BATT pin to power up this IC, can I leave the LDO unused? If yes, how should I connect these pins? To GND, or leave them floating? AI: That's the same configuration as on the first "Typical Application Circuit" on page 9 of the datasheet. Leave \$L_{OUT}\$ unconnected and connect \$V_B\$ to \$V_{BUS}\$ of the USB connector and via a 1\$\mu F\$ capacitor to ground. If you're using the version with \$L_{OUT\_SNS}\$, connect it to \$L_{OUT}\$ and let them float together.
H: LCD Display text disappearing I'm using an LCD display and a keypad to control my stepper motor. My stepper motor uses a 12 V, 2 A power supply and I used an L298N as its driver. My code functions well, but the problem is that every time I plug in the power source for the stepper, the LCD starts malfunctioning. The display text disappears but the backlight is still on. The LCD gets its power from the Arduino Mega 5V pin, and the L298N driver also utilizes the same 5V source from it. I think that's the problem, currents jamming in because the L298N also has a pin connected to the 12 V, 2 A power source... I'm not sure if that's the problem, guys, please help me fix it. THANKS AI: Clearly the combination of the loads that you are placing on the +5V pin is causing the voltage to sag to a level that is lower than what the LCD needs to remain functional. This could be a constant sag with too much load, or it could be a transient fault that happens when a load switches or you try to hot plug a load. A constant sag situation would require that you find a separate hefty 5V supply to drive some of your load requirement. For the transient type of situation, stop hot plugging. And for a transient due to the load switching, you can benefit from adding bulk capacitance from the 5V line to GND at the loads that are switching.
H: What is the purpose of the transistor in this circuit? "A zener diode voltage regulator is inefficient when the supply is used with equipment that draws high current. When a supply must deliver a lot of current, a power transistor is used along with the Zener diode as shown below" After reading this statement, I still don't understand the benefit/purpose of the transistor in this circuit. Can you please explain this statement further? AI: If the load requires high current and it is attached directly to the zener, then resistor R must be very low because all the load current must pass through it. The result is that the current through the zener will also be quite high, making it hot and/or requiring a high-wattage zener (for extra money). Adding the transistor separates the zener current from the load current. So resistor R can be high, the zener power can be low, and the heat becomes the problem of the transistor, not the zener.
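To put rough numbers on that (an illustrative example, assuming a 1 A load and \$\beta = 100\$, neither of which is given in the original): with the emitter follower added, the resistor and zener only have to supply the transistor's base current,

$$ I_B = \frac{I_{load}}{\beta + 1} \approx \frac{1\text{ A}}{101} \approx 10\text{ mA} $$

so R can be roughly 100 times larger and the zener dissipation falls by about the same factor, at the cost of the output sitting one \$V_{BE}\$ (about 0.7 V) below the zener voltage.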
H: Step down transformer I have a 12 volt, 2 ampere transformer and I need a lower voltage and lower current power source for my other components. I want to know how to make a step-down supply; I want to lower it to 5 volts and 750mA... Can I make it that low? AI: The below assumes you are using the correct term 'transformer' for a heavy, mostly metal device that outputs 12VAC from mains voltage. If you actually have a switching wall wart adapter that outputs 12VDC then you can omit the bridge rectifier and large filter capacitors shown below, but the solutions are otherwise the same. Your transformer will give you 12VAC (RMS) - actually probably more like 14V with a light load. If you add a bridge rectifier and capacitor you'll get approaching 20VDC with no load and more like 14VDC with a heavier load. One approach is to simply use a linear regulator such as an LM7805 to reduce the voltage to 5V. The main issue with that is that it's going to waste 2/3 of the power and get very hot with a 750mA load (and thus will require a large heatsink to not destroy itself). Another approach would be to buy a module that uses an LM2596 regulator (or build a circuit using that regulator). There are such modules available cheaply from China (e.g. eBay), though I believe the 'LM2596' is typically a counterfeit. The schematic would look something like this: Though you'd probably want to increase C1 to 2200uF/25V or more. If you buy a module, the parts after the bridge rectifier and C1 are in the module. You can also buy kit power supplies that are missing all but the transformer. The LM7805 circuit would be more like the below (again with C1 increased to more like 2200 or 3300uF/25V). The LM7805 will need a heatsink, maybe a 100mm x 100mm, 3mm thick aluminum plate or thereabouts. As is well explained elsewhere on this site (Olin has written a canonical answer), the voltage rating must match your device, but the current rating needs to be the same as or higher than your requirement. A 5V 10A power supply will not force 10A through your device if it is designed to draw 750mA at 5V. A 5V 1A supply will also work, but not a 5V 0.5A supply. The super-easy solution (and this is not a design thing) is to just buy a switching wall plug adapter that outputs 0.75A or more at 5.0V. For example, an Ethernet router power supply that outputs 5V@2A would work just fine. They're cheap and good enough for many purposes (and generally safety-agency approved, so you won't likely get a shock from the mains).
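A quick estimate of why the linear option runs hot (my numbers, assuming roughly 14 V DC after the rectifier and filter at full load):

$$ P_{diss} = (V_{in} - V_{out}) \times I_{load} = (14\text{ V} - 5\text{ V}) \times 0.75\text{ A} \approx 6.8\text{ W} $$

compared with only \$5\text{ V} \times 0.75\text{ A} = 3.75\text{ W}\$ actually delivered to the load, which is where the "wastes about 2/3 of the power" figure and the need for a sizeable heatsink come from.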
H: What is a 3 wire type cross linked serial cable? I'm trying to use a controller which uses Serial communication with the PC. The documentation says I need to get a 3 wire type cross link serial cable. Is this a null modem cable? The following image is from the documentation. AI: Yes - I would call that a Null Modem cable. If the cable you get (or make) doesn't work, my first debugging step for serial communications is to swap connections on pins 2 and 3. (Transmit and Receive data)
H: Voltage stabilizer input/output current What is the relation between the input and output current of a generic, efficient (not just a zener diode) DC voltage stabilizer? I.e.: if the input voltage equals 15 V, the stabilized voltage equals 12V and the output current equals 2A, what's the input current of the stabilizer? Does the input current decrease with increasing input voltage? AI: For linear regulators (such as 78xx, LM317) the input current is equal to the output current, plus a little bit to power the regulation circuit. For a switch mode regulator, the power input is equal to the power output plus a little for regulator inefficiency.
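Worked out for the numbers in the question (my arithmetic): a linear regulator would draw about the same 2 A at its input, dissipating \$(15\text{ V} - 12\text{ V}) \times 2\text{ A} = 6\text{ W}\$, whereas a switch-mode regulator draws roughly

$$ I_{in} \approx \frac{P_{out}}{\eta\, V_{in}} = \frac{12\text{ V} \times 2\text{ A}}{0.9 \times 15\text{ V}} \approx 1.8\text{ A} \quad (\text{assuming } \eta \approx 90\%) $$

so for a switcher the input current does fall as the input voltage rises, while for a linear regulator it stays essentially equal to the output current.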
H: Mosfet operating in cut off mode Suppose we have an nmosfet that is in cut-off. What is it most commonly used for in this operating region? As a switch or as a variable resistor? I believe it makes more sense to use it as a resistor, because it has a long turn-on/off time. AI: The subthreshold current is very small and typically an uncontrolled parameter wrt. \$V_\mathrm{GS}\$, as it is strongly dependent on manufacturing tolerances and the temperature of the device. (Indeed, the threshold voltage also is only controlled within a limited range.) Hence I think that the most common usage is as the off state of a switch, where the long turn on/off time (due to gate capacitance) is counteracted by using MOSFET drivers that are capable of sourcing/sinking large currents. The ohmic mode is the typical on state for such a switch, rather than the active region, since in this case usually \$V_\mathrm{DS} < V_\mathrm{GS} - V_\mathrm{threshold}\$ because of the small \$R_\mathrm{DS}\$ of the MOSFET. Here it behaves as a voltage-controlled resistor, although the resistance would be considered an undesirable quality in a switch and is therefore normally minimized. This is not to say that an N-channel MOSFET in the cut-off mode cannot also be used as a voltage-controlled resistor, but it is not completely straightforward to do so. So, for the most common usage, switching seems to be the clear winner. One can easily see this by examining the datasheets of N-channel MOSFETS: the vast majority of them are recommended for switching applications, and most newer ones are trench-type devices that can be expected to display consistent properties between samples only when used for switching.
H: Solving Schmitt trigger circuit using the superposition theorem In order to solve a Schmitt trigger circuit implemented with an op-amp (or a comparator) connected in positive feedback, this wikipedia page uses the superposition theorem. My question is : why can we use the superposition principle in this situation? Superposition is a property of linear system and this one is clearly nonlinear since positive feedback involves hysteresis and saturation... AI: The Wiki article is using superposition correctly. It starts by assuming the output is in one state, then assuming the output remains in that state, calculates the comparator input with respect to the circuit input. It uses this relationship to determine the input level (producing zero volts at the comparator) to determine the trigger point at the specified state. It then does the same for the other state. What it does not do is to model the behavior of the circuit as a trigger point is passed - it only identifies what the trigger points are.
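Concretely, the superposition step looks like this (generic labels of mine, not necessarily those on the Wikipedia page: \$R_1\$ from the circuit input to the non-inverting input, \$R_2\$ from the output back to the same node, output saturated at \$\pm V_{sat}\$):

$$ V_{+} = V_{in}\,\frac{R_2}{R_1 + R_2} + V_{out}\,\frac{R_1}{R_1 + R_2} $$

Setting \$V_{+} = 0\$ with \$V_{out}\$ frozen at \$+V_{sat}\$ or \$-V_{sat}\$ gives the two trip points \$V_{in} = \mp V_{sat} R_1 / R_2\$. Each term is an ordinary resistive-divider (linear) calculation made while the output is assumed constant, which is why superposition is legitimate even though the circuit as a whole is nonlinear.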
H: Overcurrent Protection Circuit I'm pretty new to electronics and the first thing I want to have on a PCB is a simple signal source (sinusoidal 12V RMS ; 100 Hz Frequency). To not blow things up I want a current monitoring circuit which should disconnect the source if the load exceeds about 500mA. I came up with the following idea, but since I'm new I'd like to have some review on this: (Better resolution) The circuit should output a LOW signal if the current on RL exceeds 500mA and a HIGH signal if the current is not exceeded: $$I_{max} = 500mA$$ The root mean square of the signal is 12V: $$U_{RMS} = 12V$$ The amplitude is approx. 17VDC $$U_{peak} = \sqrt{2} \times U_{RMS} = 16.97V \approx 17V$$ The maximum load is therefore 34Ohms: $$R_{Lmax} = \frac{U_{peak}}{I_{max}} = 34\Omega$$ I'm planning to use a one ohm resistor to monitor the current: $$R_{shunt} = 1\Omega$$ $$P_{shunt} = I^{2} \times R = 0.25W$$ If current is exceeded, the voltage drop across the one ohm resistor should be around 0.5V: $$U_{shunt} = 1\Omega \times I_{max} = 0.5V$$ I'm amplifying the voltage with the factor 5 to get exactly 2.5V if the current on the load is 500mA: $$V_{1} = \frac{R3}{R4} + 1 = 5$$ $$U_{OPV1} = U_{shuntmax} \times 5 = 2.5V$$ The comparators have the following supplies: IC1A & IC1C have +UB=5V and -UB=-5V. IC1B & IC1D have +UB=5V and -UB=0V. I know on the schematic all amplifiers have the same IC "name", I didn't bother to change this :( The comparator IC1C inverts the signal to detect 'reverse' overload current (-500mA results in -2.5V). Then I'm comparing the 2.5V with both comparators to check if the current is exceeded or not. AI: If the current through R1 is 500 mA, then the voltage dropped across it will be: $$ E = IR = 0.5A \times 1 \Omega = 0.5 \ \text {volt} $$ Then, since the voltages at U1A-4 and 5 must be equal, U1A must drive its output until it reaches: $$ Vout = \frac {Vin \times {(R3+R4)}} {R4} = \frac {0.5V \times {(100k\Omega+25k\Omega)}} {25k \Omega} = \ \text {2.5 volts.} $$ With the resistors in the voltage divider R5R6 being equal, they'll provide half of Vcc, 2.5V volts, to the + input of comparator IC1B. Then, as long as IC1A-2 stays lower than IC1B-6, IC1B-1 will stay high, signalling "NO OVERLOAD". If, however, there's an overload and IC1A-5 rises above 0.5 volts, IC1A-2 and IC1B-6 will rise above 2.5 volts, driving the output of IC1B low, signalling "OVERLOAD". YAY!, your circuit works but, as you noted, the OVERLOAD signal is choppy because the input is AC. I'll post an alternative circuit sometime today. EDIT: Since you've decided to use a latch or somesuch to shut down the supply when your output pulse goes low, there's no alternative circuit needed. Your latest circuit works, but as someone else pointed out, the DC supply connections to the opamps and comparators aren't shown, and - even though you mentioned it in the text - that can be confusing; especially with the opamps using minus five volts and the comparators using ground for the low side power inputs. There's a question as to the value of the load resistor, R1 in my schematic, and there's also an issue with the opamp gain setting, since the 2.5 volt reference for the comparators is DC and the AC input being sensed is RMS. 
More specifically, if the current through R1 and R2 is supposed to be 500 milliamperes when the voltage across them is 12 volts, RMS, then from Ohm's law: $$ R = \frac{E}{I} = \frac {12V}{0.5A} = \text {24 ohms.} $$ With R2 being one ohm, then, the voltage dropped across it will be: $$ E = IR = 0.5A \times 1\Omega = 0.5 \text{volts, RMS.}$$ Since the ratio of peak to RMS for a clean sine wave is \$ \sqrt{2} \$, the peak voltage across R2 will be 0.707 volts when the current through it is 500mA, RMS. Then, since the DC trip point of U2A and B is set at 2.5 volts (Vcc/2 by virtue of the voltage divider R7R8), the gain of U1A must be set so that when there's 500mARMS through R2 and it's dropping 0.7 volts, peak, the output of U1A will be 2.5 volts. U1A is an inverting amplifier with a bipolar supply, and its voltage gain is given by: $$ Av= \frac{R4}{R3} $$ So, since we must generate an output with a 2.5 volt peak when the input is at 0.7 volts, peak, we need a gain of: $$ Av= \frac{Vout}{Vin} = \frac{2.5V}{0.7V} \approx 3.6 ,$$ and arbitrarily picking R4 at 100K, then, means that R3 must be about 28k. In any case, I've taken the liberty of redrawing your schematic, below, using the LTspice schematic editor, with the change in the gain resistor, R3, shown and the load resistor changed to allow 500 mA through it with 12 volts, RMS across it. Finally, the LTspice .asc file is here if you want to play with the circuit.
H: Why does this nrf24l01 module have two VCC and two GND? This module has two GND and VCC pins: However, this module and this module only have one of each. Why is that? EDIT: I know most MCUs have a lot of 3V3, 5V and GND pins. This is required for connecting the most varied peripherals simultaneously without exceeding the max current output. But I have never seen a peripheral with this configuration. Since I have never seen an MCU with only one GND and one VCC pin, and there are a lot of nrf24l01 modules with just one pin of each, I would like to know the purpose, since presumably this module should be connected to only one MCU at a time. AI: There are several reasons why a designer can decide to use multiple pins. In order of importance: Connecting two pins in parallel lowers the resistance of the pin connection - a designer might consider low impedance important to avoid interference on the power supply lines. There is a limit to the current allowable through a single pin. If power is extremely critical, a designer might decide to parallel two pins to lower the probability of contact problems (improve reliability). Finally, if the host board already has 2 pins wired to each of Vcc and Gnd, why not use them? Note that the single-pin board is for soldering. There you take the responsibility. If you use a wired connection, you won't have problems. And your remark on CPUs having many pins for each is not correct. In CPUs the multiple pins are really there to lower the resistance to the power supply. If they used only one pin, the socket would probably burn because of the heat generated in the contact. Also, with a CPU, the power filtering capacitor is on the board, so it is vital that the impedance from the chip to the capacitors is very low.
H: Is using ground AND supply pours on a 2-layer PCB a bad design decision? Ground pour on the top, supply pour on the bottom. Does this typically increase noise? Is it considered a poor design decision? It certainly makes things easier from a routing perspective to pop a via down when you need to supply a pin with power. AI: No, it's not a bad design decision, but it's often avoided because it can be a bad design decision sometimes, and many simply choose to use only ground pours on outer layers of the board regardless of the number of layers. A lot of engineers choose to use only ground pours on outside layers for reasons of impedance (signal integrity), because ground is needed more often than power, and because it's common, many simply assume any visible pour is going to be ground. Many put power planes internal to protect them from damage - an errant screw rattling around inside a case coming into contact with a ground plane is less likely to cause a problem than one coming into contact with a power plane. It's very convenient to have power pours, though, for some designs, and in two-sided through-hole designs it does make things easier in some ways. Once you run into a problem, though, you too may adopt a "ground planes or no planes on the outside layers" policy.
H: Wire Wrap Joint in High Humidity How does a wire wrap joint compare to a soldered joint in high humidity conditions? Corrosion? Electrolysis? Joint resistance creeping up? AI: Properly made wire wrap joints are probably the most durable of electrical connections. I worked in the defense industry for many years during which most of the connections between circuit boards were done by backplanes containing thousands of wire wrap connections. Each connection is done by wrapping the wire several times around a rectangular pin with very sharp sides. Because of the tightness of the wrap, there is a gas tight connection made every time the wire touches each side of the pin. A typical wrap could have a dozen or more of these connections. In my experience, I never heard of a single failure due to a bad wirewrap connection even though these systems were exposed to severe environments including submarines.
H: For production boards, is a buffer IC best to translate single unidirectional 3.3v to 5v logic without inversion? I've read plenty of advice on how to deal with 5v <-> 3v logic translation and there seems to be a ton of methods, some of which I'm clear on and some I'm not. Most of the material is geared around things like Arduino, etc... one-off hobbyist solutions that don't necessarily translate well to volume production for cost, PCB real-estate or reliability reasons. So specifically for production at volume, what would be the best way to handle a single unidirectional 3.3v microprocessor output (in this case it is an STM32F205) to a 5v device (in this case it is an RGB LED driver with one-wire control) without inversion? My guess is a dedicated buffer IC, but I don't really know. My 5V device requires 0.7*5=3.5V for logic high and 0.3*5=1.5V for logic low. My options appear to be: Just chance a direct connection. I put this here for completeness' sake, because I think it would not be wise for production. There is anecdotal evidence (forum posts) suggesting the particular device can operate at 3.3V despite the datasheet info. I've been told a "simple pull-up resistor of 4.7k" should do the trick; however, I've not found any discussions showing a single-resistor solution, so if anyone could explain this (preferably with a schematic) I'd appreciate it. This solution is somewhat vaguely mentioned in an answer here. That same answer also mentions a couple of diodes and a resistor, although this method is discussed here too, with the article's author later stating in comments that he's not thrilled with the results. An MC74VHC1GT125, which is a dedicated IC built for the job. At a volume cost of roughly $0.06 to $0.07 it certainly isn't a cost issue (although % wise that is far above the other methods) but I wonder... is it the right way to go? With this option, I have a couple of "sub-questions": I'm a little unclear on what purpose OE serves. When would you want it in Hi-Z? I just tie that to GND, right? I've been unable to find any alternatives to this piece which specifically mention 5v to 3.3v translation, but any non-inverting buffer should do the trick, correct? Diodes Inc 74AHCT1G125, for example? AI: Because your microprocessor uses 5-volt tolerant IO pins, the simplest interface is simply to configure your pin as an open-drain, then use a pull-up to +5. From the data sheet, the pins have an absolute max rating of Vdd + 4 volts, or 7.3, so pulling up to 5 volts should be no problem. And, since the pins are rated for 8 mA per pin, you'll get best speed with a 1K pullup resistor, while dissipating a maximum of 25 mW. Using a '125 is also perfectly acceptable, and if you do, just tie the OE to GND, as you suspect.
H: Multiple Motors and one H bridge I am trying to make one H bridge for four identical motors. I do not want to make more than one H bridge by any means. I got a way to do that but I need your recommendation. I am going to make all the motors in parallel to each other so that the current, that comes from one branch in the H bridge, splits apart to all the four motors. Specifications: Each one of those four motors has the following features: "Voltage:DC 6V Current:120MA Reduction rate:48:1 RPM (With tire):240 Motor Weight (g):50 Motor Size:70mm*22mm*18mm" Purpose of use: Building a small robot that moves with 4 wheels. Each wheel is with one motor. Sounds a good idea? Tell me. AI: As a general concept I don't see any problems, as long as you don't expect the robot to move in a perfectly straight line. It may well be pretty good, but you seem to be aware of the problem of motor/wheel matching. And I presume you are aware that the motors on the left side must be wired the reverse of the motors on the right. When you graduate to differential steering, you will need 2 bridges. Get used to it. Actually, since you are talking about 4 motors I assume you have a 4-wheel vehicle with a motor on each wheel. Be aware that differential steering works very poorly on 4-wheel vehicles.
H: How to calculate input and output impedance of BJT Common-Emitter amplifier circuit Why is the input impedance not 260Ω in this circuit? I calculated this impedance as follows: \begin{align*} I_B &= \dfrac{18.7V - 0.7V} {180kΩ} = 0.1mA \\[1 em] r_{\pi} &= \dfrac{26mV}{0.1mA} = 260Ω \\[1 em] R_{in} &= \dfrac{180kΩ \cdot 260Ω}{180kΩ + 260Ω} = 259.6Ω \\[1 em] \end{align*} Similar question for the output impedance, which I calculated to be: \[ R_{out} = \dfrac{1kΩ \cdot 30kΩ} {1kΩ+30kΩ} = 968Ω \] UPDATE: Bad answer key AI: I'm posting this as an answer so the question won't appear in the unanswered queue. Your calculations look good - the answer key must be mixed up.
H: Calculation of a voltage in the frequency domain The current generator provides a constant current \$I\$ and the circuit is in steady-state conditions. At \$t=0\$, the switch T is closed. I have to calculate the voltage \$v_c(t)\$ by using the Laplace (unilateral) transform. (schematic not reproduced here) After having calculated the initial conditions (at \$t=0^-\$) \$I_0\$ and \$V_0\$, I can draw the circuit in the frequency domain, calculate \$V_c(s)\$ and inverse-transform to get \$v_c(t)\$. (schematic not reproduced here) In the above circuit \$R_1\$ doesn't appear because it's short-circuited, but why doesn't \$I\$ appear either? Could anybody explain just this to me, please? AI: The current source doesn't influence the circuit after the switch has closed because all the current it provides will flow through the switch itself, therefore it is not useful to the computation of \$V_c\$ in any way. To see why, think of two impedances in parallel with the current source: one is the switch, the other is the equivalent of the circuit of all the other elements to the right of the switch. Then apply the current divider formula and you'll see that the zero-impedance (ideal) switch will hog all the current from the source. (schematic not reproduced here) \[ I_x = I \cdot \dfrac {Z_{sw}} {Z_{sw} + Z_x} \] \$I_x = 0\$ if the switch is closed, i.e. if \$Z_{sw} = 0\$.
H: In depletion MOSFETs, what is the drain current value when \$V_{GS}\$ is zero? When \$V_{GS}\$ is very negative \$I_D=0\$, whereas it conducts a little current when \$V_{GS}\$ is a little negative. What is the value of \$I_D\$ when \$V_{GS}=0\$? AI: The value of \$I_{DS}\$ when \$V_{GS}=0\$ is called \$I_{DSS}\$ on datasheets. Consider for example the following excerpt from Supertex DN3545 datasheet.
H: Determine adequate speed of flyback diode for a relay I have already looked at this question which looked like it would provide the answer: How to choose a flyback diode for a relay? But it didn't, so I also checked out these: https://electronics.stackexchange.com/questions/ask Flyback Diodes and Relays https://electronics.stackexchange.com/questions/163104/what-are-important-parameters-for-a-flyback-diode Suppression diode for relays in ULN2803A https://electronics.stackexchange.com/questions/136896/pwm-and-flyback-diode-dependency Which didn't either. So my question is: When selecting a flyback diode for a relay, how can I determine whether the speed of the diode is adequate for the application? I am not looking for diode suggestions, but rather: 1. What specifications in the datasheet for a diode I would need to look at. 2. How I can calculate the required values from 1., given whatever information is needed about the circuit that controls the relay coil, and the specs of the relay. I presume it has to do with capacitances in the diode's datasheet among other things. Thanks Edit: to clarify, I don't mind if the relay takes some time to turn off. What I am concerned with is ensuring protection of the controlling circuitry from the spike the coil generates when current to it from the controlling circuitry ceases. AI: This has nothing to do with a relay, other than that its coil acts like an inductor. What you are really asking is how to choose the flyback diode across an inductor. There are three main parameters to look at: Voltage rating. This is the maximum voltage the diode can take across it backwards, and still block current and not get damaged. This must be at least the maximum voltage applied to the coil. Current rating. The maximum current thru the diode will be the same current that is going thru the coil when the coil drive is shut off abruptly. The maximum coil current must already be known to design the coil driver. Ideally the diode should be rated for at least this current. However, many diodes allow significantly higher currents for short times than the maximum allowed continuous current. This can be relevant in the case of a flyback diode. Flyback current will decay on its own, so if the coil is shut off only occasionally, it can be valid to consider the pulse current spec instead of the continuous current spec. If you are not sure how to calculate all this, use the continuous current rating. Reverse recovery time. This is how long it takes the diode to switch from conducting to non-conducting mode. If forward current is going thru a diode and you instantaneously change the voltage so that the diode is reverse biased, the diode will actually conduct in the reverse direction for a little while before it shuts off. Now think of when this situation occurs when driving a coil. If the coil was recently turned off and the flyback current is still flowing thru the diode and the coil driver is switched on again, then there is a short from the power supply thru the diode thru the coil driver until the diode catches up and stops conducting. If you are driving something slow like a relay, this probably doesn't matter since the time from off to on is always long enough that the flyback current has died down. However, in something like a switching power supply or a solenoid or motor being controlled by PWM, the off to on time can be a small fraction of the flyback current decay time. In that case, you have to consider this carefully.
Big fat power diodes meant to rectify line frequency (50 or 60 Hz) can often have substantial reverse recovery times. Sometimes the datasheet doesn't list this spec at all, since if it matters, you shouldn't be using that diode. Try finding the reverse recovery time of a 1N4004, for example. I just checked the On Semi datasheet, and it's not mentioned. It even calls these "standard recovery" diodes, which is marketing speak for "These diodes are slow, so slow that we're too embarrassed to even tell you. But instead of being up front and calling them "slow", we'll call them "standard" and then everything else we sell will be "fast" or "ultra-fast" or "super-fast" or "turbo" or whatever other terms our interns can dream up because we think you're dumb enough so that giving something a cutesy name will make you buy more of them.". There are rectifier diodes where reverse recovery has been taken into account, sometimes with terms like "fast" or "ultra fast" in their names. Don't use the names to guess speed, but at least the actual speed will be listed in their datasheets. For small currents, you can use small signal diodes, like the 1N4148, that have a reverse recovery time of only a few nanoseconds. Schottky diodes are usually so fast as to be effectively instantaneous to most circuits.
H: Where to find Advanced Schottky TTL Schematics I am currently having difficulty locating the circuit diagrams/schematics for the Advanced Schottky versions of the 7400 IC series. While the standard TTL data books seem to have depictions of the L, LS, H, etc. variants, the volumes dealing with both ALS and AS do not seem to contain circuit diagrams. As http://www.slideshare.net/Ajlaaa/logic-families-16246507 seems to have a diagram for the NAND gate (SN74AS00) (unfortunately it does not contain a reference), the others should presumably be published somewhere. Does anyone have any idea where these might be published so I can get a look at them? AI: I don't know if this Texas Instruments application note would fit your bill, but it contains some schematics and references for AS logic, like these excerpts: Another possibly useful resource is this older application note on AS and ALS logic.
H: DC Motor Rotate Direction What are possible solutions for changing DC motor rotate direction controlled by microcontroller? I know that H-bridge is one possible solution, but i want to use 2 pins for control only. AI: Two half bridges or two SPDT relays will allow you to control on/off and direction with two pins.
H: I don't understand transformer power ratings I'm befuddled. I have a wall wart power supply that supplies 65W (it's for an Intel NUC). I'm interested in building my own power supply that will put out... several different voltages and currents. So I go to mouser.com and try to find an appropriate transformer. I plug in several different power ratings. And then the trouble starts. They all cost much more than I expect. And, I notice, they're huge. So I search for something that ought to be in my NUC's power supply. What I find for 75VA is about 8-10 times larger than the entire NUC supply. And it costs twice what the whole NUC supply costs. So I cut open an unused wall wart. Yes, it has a transformer. No markings on it at all. So now I'm wondering whether I know what a power rating on a transformer is. If I believe mouser.com, then the NUC's transformer (if it has one) is wildly undersized. If I believe the NUC, then mouser's are wildly oversized. What am I missing? AI: I guess your wall wart is a switching power supply, which can use a high-frequency transformer. In searching for a transformer for your own design, you are looking at 50/60 Hz transformers, which need to be much larger and (hence) more expensive. A transformer roughly works by converting the input electrical energy to magnetic energy, and then converting that to electrical energy again, but at the output voltage. The size of the core of a transformer must be large enough to contain the magnetic field. At 50/60 Hz this energy transfer happens 50 or 60 times each second. In a high-frequency switched power supply this happens for instance at 10 kHz (200 times more often than at 50 Hz), so a smaller core can be used to transfer the same amount of power. I would advise you NOT to try and build your own switching power supply, because your situation requires isolation and a reasonable amount of power, which makes it a specialist's project. As a side note, it is a sad fact of life that the components you can find in a mass-produced product will together cost much more than the product itself when bought individually.
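The size difference can also be seen quantitatively from the standard transformer EMF equation (not from the original answer; sinusoidal case):

$$ V_{rms} = 4.44\, f\, N\, B_{max}\, A_{core} $$

For the same voltage and peak flux density, going from 50 Hz to the 10 kHz example above lets the product \$N \cdot A_{core}\$ shrink by roughly a factor of 200, which is why the high-frequency transformer inside a switching wall wart can be so much smaller and cheaper than a 50/60 Hz part of the same VA rating.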
H: How does an electronic shocker work? I'm new to electronics but I want to make this diagram from "Instructables": I have everything I need from a camera... I also realized, using a different transformer, that the sparks are made only when the battery is disconnected and reconnected repeatedly, like in this video. So, I was wondering if that transistor is acting as something like an oscillator (closing and reopening the circuit). Am I right, or is the transistor used for something else? And if I'm right, can you explain to me how the transistor closes and reopens the connection? Thanks in advance and sorry for my mistakes (this is not my native language...). AI: Yes, the transistor works as an oscillator. Let's see if I can explain how it works. The transistor starts by conducting, because it gets base current via the resistor and the small section of the secondary coil. For some time the current through the primary coil will increase. When it stops increasing, the sudden drop in the change of current is transformed to the (small) secondary coil and causes the transistor to switch off. This effect is only temporary, and when it is over the cycle starts again.
H: Using a capacitor to properly power a servo I recently attached a servo to my Arduino for the first time. I ended up needing a 470uF capacitor wired like this tutorial shows, because my servo was freaking out and causing my laptop to throw "power surge" warnings (I have my Arduino connected to my laptop via USB). Although I'm glad I got this working, this leaves me with several concerns. Why was a capacitor necessary to "stabilize" my Arduino? In other words, why was it the solution to my Arduino being able to properly power my servo? In that tutorial I linked above, there is a Fritzing diagram of the circuit with the capacitor. Can I assume this is a capacitor wired in parallel with the servo? Why parallel and not series? What's so special about a 470uF capacitor (vs., say, 100uF)? I just used a 470uF because that's what the tutorial said to use, but what math could this number have been based on? In other words, how might I have arrived at this number myself? I've heard capacitors can be extremely dangerous to work with, after all, they store energy. I'm now afraid to even touch the capacitor on my breadboard! How can I tell if it's safe to remove the capacitor (even after I've disconnected the Arduino from its power source)? I have a multimeter but I'm not sure what setting/range I could set it to. AI: "Why was a capacitor necessary to "stabilize" my Arduino?" A servo motor draws a substantial current in short peaks. And USB power is not designed to provide such current peaks (and the cable makes it worse), hence the current peaks cause the voltage to vary, which the Arduino is not designed for. A capacitor acts as a buffer for current, so as long as the USB power can deliver the average current, the capacitor will help by smoothing out the current peaks. "Why parallel and not series?" Think of a capacitor as a reservoir. But beware of over-interpreting analogies; better to read a decent electronics beginner's book. "What's so special about a 470uF capacitor (vs., say, 100uF)?" The uF figure is the amount of buffering. Enough is enough, but better to be on the safe side (for this use, more is OK). The value of 470 uF is probably based on experience. Just by accident, it is the same value I let my students use :) "I'm now afraid to even touch the capacitor on my breadboard!" Capacitors can be dangerous if they store either a high voltage or a large amount of energy that can be released in a very short time. In your case neither applies, so don't worry. But you might see a small spark when you short the leads of your capacitor, even after you have removed the USB power. A resistor (for instance 1k) in parallel to the capacitor solves this.
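On the "how might I have arrived at 470 uF myself" point, a common back-of-the-envelope approach (my sketch, with guessed rather than measured servo numbers) is to size the capacitor from the voltage droop you can tolerate during a current peak:

$$ C \geq \frac{I_{peak}\,\Delta t}{\Delta V} = \frac{1\text{ A} \times 0.2\text{ ms}}{0.5\text{ V}} = 400\ \mu\text{F} $$

With an assumed 1 A start/stall peak lasting a fraction of a millisecond and a tolerable 0.5 V dip, a value in the few-hundred-microfarad range falls out naturally, and 470 uF is simply the next common value up.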
H: Driving MOSFET with 555 timer I am building a plasma speaker using a 555 timer to generate the audio tone. I am driving a MOSFET with the output of the 555 timer, which in turn powers a flyback transformer to generate a plasma arc. I have heard conflicting information about putting a resistor between the 555 and the MOSFET. I have heard that the resistor prevents drawing too much current from the 555 timer. On the other hand, I have heard that a resistor keeps the MOSFET in a transition state for a long time, causing the MOSFET to dissipate additional energy as heat. My MOSFET is an IRFP460. Should a resistor connect the 555 to the MOSFET? AI: alexan_e's answer in the reference given by Dejvid_no1 is useful but does not fully answer your question. Use a small value resistor from the 555 to the gate - maybe 10 ohms. The purpose of the resistor is mainly to reduce gate ringing on switching transitions. A secondary use, probably not an issue here, is to very slightly slow the switching edges from a high-capability driver in order to tame extremely fast FET switching edge rates, at the expense of slightly increased dissipation. An addition which may not help but which might be of great value, and which costs very little, is to add a reverse-biased zener from FET gate to source, mounted with the minimum (reasonably) possible lead length from the FET. The zener voltage should be somewhat more than the driver's Vmax and less than the FET's Vgs max - e.g. typically a 15V zener for a 12V gate driver. This clamps the gate voltage to a maximum slightly above the maximum the driver can apply. The purpose is to clip any transient voltages coupled into the gate by Miller capacitance from the drain. This is most likely to happen when there are large positive inductive transients on the drain - which is a possibility with your application. While the driver MAY clamp the gate drive when it exceeds the driver Vdd, and while you probably have a catch diode on the drain, if you have small inductances due to PCB traces then inductive ringing spikes may manage to be faster than the protection provided. That's a series of "may"s required to get bad results. In practice I have found in worst case circuits that an application where the FET would survive for typically a few minutes without the zener would survive indefinitely with one. I now add one as a matter of course in any FET driver. If your load is purely resistive (not in this case) then the zener is theoretically not needed. Murphy works hard to try to ensure that no FET load is ever purely resistive.
H: Wire wrap wire quality I am looking at two kinds of wire wrap wire on Amazon. The first is the better, more expensive kind. It is silver plated and has Kynar insulation. http://www.amazon.com/dp/B006C4AGMU/ref=biss_dp_t_asn The second is a cheaper kind. It is tin plated and has PVC insulation. http://www.amazon.com/Amico-B-30-1000-Plated-Copper-Wrepping/dp/B008AGUDEY I am going to use it to connect to LED leads, which are tin plated brass, I believe. So I do not see much advantage in using silver plated wire if the LED leads are not equally noble. Or am I wrong? The application temperature should never exceed 150 Fahrenheit (65 Celsius), so is there much advantage to Kynar insulation? Will the cheaper wire do just as well for this application? EDIT: Additional info. I will be using this hand tool for both stripping and wrapping: Jonard WSU-30M on page 34 here and how to use here The LEDs are not specifically made for wire wrapping, but they have fairly stiff tin plated brass leads with a square profile section, although the edges may not be as sharp as purpose-built WW leads. I did not want to solder to the LEDs at all - using wire wrap only. At least that's the hope. AI: This is a solid "don't know - BUT": That's a very large $/foot difference - 12 cents versus 0.9 cents, or roughly 13:1! However, if the total length needed was under 1 roll I'd be tempted to use the Kynar. If it was a large installation where cost started to be annoying I'd do some more research. The question needs more information than provided to be answered really well - see below. The result depends on how you are terminating the wire, which you don't actually say. eg are you using a strip and wrap tool, or manual strip then wrap, or soldering? IF a strip and wrap tool - are the LEDs specifically manufactured for wire wrapping use? (pin edge "sharpness" matters). FWIW - you MIGHT manage to solder through PVC, although it is not at all recommended. You cannot solder through Kynar. You can wrap Kynar around a heated soldering iron tip and, while it gets very sad looking, it maintains its insulation, more or less - great stuff :-). Kynar is nasty to hand strip, and 30 gauge wire is somewhat fragile (both are 30 gauge) but OK once in place. If you are soldering it then the tinned copper is fine. If wrapping then you are presumably stripping it first and then relying on the wrapped pin-to-wire contact. WW pins have sharp edges designed to bite into the wire. LED pins, even if they are of square section, are not (AFAIK) intended to do this. Any help you can give an unsoldered joint may keep it alive years on (or possibly only months, in some environments). Vaguely relevant: Long ago the then sole national NZ telecom company started using a copper-to-copper twist join inside a melted PVC outer. In the variable fullness of time this was found to be a fatally bad idea and all such joins had to be re-made. They were then soldered.
H: torque of locked gear motor vs gearless motor Do gears increase the torque of a locked/stalled motor? I understand that when the shaft is rotating, the gears increase the torque because the transmission reduces the rpm. But what difference do they make when the shaft is stalled/locked? AI: A reducing gearbox can be modeled as a lever: the small torque of the stalled motor is multiplied by the reduction factor, to appear as a much larger torque at the output shaft.
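A quick worked example of that scaling (all numbers hypothetical): ignoring gear friction, the output stall torque is roughly the motor stall torque times the reduction ratio,
$$T_{out} \approx N \cdot T_{stall} \cdot \eta,$$
so a motor stalling at 0.1 N·m behind a 30:1 gearbox with, say, 70% gear efficiency holds about 0.1 × 30 × 0.7 ≈ 2.1 N·m at the output shaft.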
H: Choosing an Audio D/A converter for I2S digital audio based on 32fs I'm hoping someone can either verify or clarify my understanding of interfacing an audio D/A converter to a device that sends audio over I2S. I have an audio codec chip, the VS1063A (Datasheet), that is capable of sending 16 bit I2S data to an external DAC. Unfortunately, the VS1063A is constrained to 16 bits per channel, which means the bit clock runs at 32fs. Most audio DACs require a minimum bit clock of 48fs. For instance, TI's PCM1748 (Datasheet) states: BCK can be operated at 32 (16-bit, right-justified only), 48, or 64 times the sampling frequency. I found a different audio DAC from TI, the PCM5101A (Datasheet), which seems promising, but I want to confirm my understanding. The datasheet states that it has an "integrated High-Performance Audio PLL With BCK Reference To Generate SCK Internally." Furthermore, it describes operating without an external system clock. So here's my question: Unlike the PCM1748, which requires a 4-wire I2S interface (the fourth being SCK) with some additional circuitry to play the 32fs format coming out of the VS1063A, will the PCM5101A accept the 32fs format because of its ability to generate SCK internally? Any feedback would be appreciated. AI: The PCM1748 should be satisfied with the 32fs output of your VS1063A when the output sample rate is set to 48kHz. According to the documentation, you can generate a 12.288MHz MCLK output, from which a division by 8 will get you an SCLK output of 1.536MHz when 48kHz is selected, and the LROUT output will toggle at the sample rate of 48kHz, whereas SDATA will output the actual data. You wire MCLK into the SCK input of the PCM1748, the SCLK output into BCK, LROUT into LRCK, and SDATA into the DATA input. Figure 21 in the data sheet of the PCM1748 suggests that right-justified is the correct format setting for BCK = 32fs. Your misunderstanding might come from the fact that the pins are named a little bit differently by the two vendors. The difference between the PCM1748 & PCM510xA is that the latter can live without getting the 12.288MHz master clock - it can generate it for itself using its internal PLL.
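For reference, the clock arithmetic behind that wiring, using the 48 kHz case described above:
$$\text{MCLK} = 256 f_s = 12.288\ \text{MHz}, \qquad \text{SCLK} = \frac{\text{MCLK}}{8} = 1.536\ \text{MHz} = 32 f_s, \qquad \text{LRCLK} = f_s = 48\ \text{kHz},$$
so the 32fs bit clock coming out of the VS1063A lines up with what the DAC expects in this mode.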
H: Non-diode OR-ing solution for n equivalent power sources I wish to allow powering my circuit from at least one of 3 connectable peripherals. For example, my application could have 3 USB ports that each connect to different USB hosts. Any combination with at least one connected peripheral should be able to power the circuit, but obviously no current is allowed to flow back into any of the 3 supplies. The circuit should be able to draw 200mA at 5V. Unfortunately, the voltage drop even from Schottky diodes is unacceptable, so OR-ing diodes is not an option. I've come across a lot of power mux ICs, such as the LTC4412 [1] and TPS22933a [2]. The LTC4412 still requires a diode at one of the power sources, and is only a 2-input solution - still quite far from ideal. The TPS22933a seems like a better component for me due to the 3 inputs, but the maximum output current is rated at only 50mA (doesn't that make this thing utterly useless?). Many of these power muxes offer niceties that I do not need:
- no need for any kind of load balancing between the power sources
- no need for a wide range of input voltages; all of them will be 5V
- no need to source from the highest available voltage source; bias is fine
- not looking for a 3-input solution per se; expandability to n inputs is desirable
Given this, what kind of solution should I be looking at? Is there an IC that nicely accommodates my (seemingly simple) needs? [1] http://cds.linear.com/docs/en/datasheet/4412fb.pdf [2] http://www.ti.com/lit/ds/symlink/tps22933a.pdf AI: I believe you can use the LTC4412 for your purpose - specifically, three of them, one for each power input. Please look at figure 5 in the datasheet. This shows two of them doing load sharing for two inputs with a diode connecting a third input. Clearly, we can remove the diode and third input (equivalent to the "wall adapter" not being connected). This leaves two 4412s and MOSFETs acting as ideal diodes for two inputs. Isn't it clear that you can simply add a third 4412 and MOSFET for a third ideal diode and input?
H: Filtering Capacitors before the fuse or after the fuse while limiting inrush current? There are basically two questions I have for this circuit. 1) Which capacitor configuration is good: before the fuse or after the fuse? 2) How do I limit my inrush current? My thoughts: I am confused about whether to put the filtering capacitor before the fuse or after the fuse. Putting a capacitor at the IN (voltage in) pin of the voltage regulator (LM1085) is recommended by LTC. As I see it, if I put my cap after the fuse, it will act like a short when I connect my 12V PC supply to my board. The inrush current could kill my fuse every time I connect the PC supply. To me, putting a cap before the fuse makes sense, but I still have a feeling it could act like a short during inrush and harm my PC supply. What I want is nice filtering at the IN pin of my regulator while limiting my inrush current. The circuit powered by my regulator (U2) has max 300 mA loading. Any suggestion of what I need to do here? Here is the datasheet link for my: capacitor-> http://industrial.panasonic.com/lecs/www-data/pdf/ABA0000/ABA0000CE2.pdf fuse-> http://www.littelfuse.com/~/media/electronics/datasheets/fuses/littelfuse_fuse_154_154t_154l_154tl_datasheet.pdf.pdf simulate this circuit – Schematic created using CircuitLab AI: The 10uF cap can be near the regulator where it does the most good. The inrush current with that size of capacitor should be no problem for a 3.5A fuse. The fuse you have selected is a fast acting fuse, but the thermal mass of the fuse is unlikely to respond in the short time that it takes to charge a 10uF capacitor. You will always have a certain amount of series resistance in the wiring, connectors and PCB traces that also helps to limit inrush current. Plus, under normal circumstances I suspect that you would cycle the mains power switch of the 12V supply to power down your circuit instead of directly connecting the +12V. In this case the supply will have a fairly lengthy rise time at its output, which limits the inrush current to sane levels. For grins I ran a simulation assuming a wiring resistance of 0.1 ohm and the 12V supply coming up to full voltage in 10us. Under these conditions the inrush current is ~12A for the 10us rise time of the supply (linear rise used). My estimation is that under such conditions the fuse material may overheat and blow only if the power were cycled faster than the fuse can cool down from a 10us pulse. Do note that in the past I have had first-hand experience of seeing fuses crystallize and fail after years of service when subjected to inrush currents. That was on the rectified DC lines of a Cromemco S100 chassis that had enormous capacitors. Your 10uF caps would look like specks in comparison. The Cromemco fuse in question was the 30A fuse on the 8V rail as shown (in the photo here). Inside the back of the unit the associated capacitor was the large soup can in the (closer part of this image). That capacitor was a 130,000uF / 15V unit. Now if you had something like a 4700uF capacitor then there may be more concern. In that case you may want to select a time-delay SloBlo type fuse. In some electronic devices where a very large inrush current is indeed possible, the equipment is designed with a low-ohms resistive device in series with the input. As the device comes up, a special circuit either detects when the input caps are charged or just waits some nominal delay and then activates a relay that shorts out the low resistance device.
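As a sanity check on the simulated ~12 A figure (a sketch that ignores the 0.1 ohm wiring resistance, which in practice shaves the peak down a little): charging 10 uF to 12 V in 10 us needs an average current of
$$I \approx C\,\frac{\Delta V}{\Delta t} = 10\ \mu\text{F} \times \frac{12\ \text{V}}{10\ \mu\text{s}} = 12\ \text{A},$$
a large but extremely brief pulse, far too short to heat a 3.5 A fuse element.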
H: Why does the rectified voltage boost after adding a capacitor? So I found this 9V 1A AC power supply laying around and decided to make a DC source. I rectified it and added a capacitor to make things a bit more even. After adding the capacitor my voltage jumped from 9V to 14V. Can somebody explain why this happened? (maybe this has something to do with frequency?) According to the theoretical graph I should get approximately the same voltage even after adding the capacitor. And the voltage varies depending on how much capacitance the capacitor has (approx. 12-16V). Maybe there's some sort of equation to determine the actual output voltage depending on the caps? It would be great to get 12V out of this thing. P.S.: Personally I find this boost weird because this circuit doesn't have any switches and coils to boost the voltage, so I need an explanation, thank you in advance! AI: The rectified AC waveform catches the peaks. The input 9VAC is an RMS (Root-Mean-Square average) equivalent -- the actual amplitude of the sine wave is about 40% higher than the RMS average (the square root of 2 is 1.414). So on your picture the 9V equivalent is about 70% of the way between 0V and the peaks. The numbers don't work out exactly to the ideal square-root-of-two crest factor, because there is some voltage drop across the two diodes that are on, and also because there is some variation in the line voltage. The reason RMS is used to describe AC voltages is that the amount of power (heat) delivered to a resistive load is the same as it would be for a 9V DC 1A source. Edit: explaining the observed difference in load voltage measurement with and without the capacitor, and why the DMM gives a wrong measurement for a full-wave rectified waveform... Voltage is not actually being boosted in this circuit. When the capacitor is removed, the full-wave rectified signal doesn't sustain the peak voltages. As Ignacio Vazquez-Abrams mentions, the DMM may not be measuring the waveform correctly, especially in the case where there was no capacitor -- assuming you measured with the DMM's DC voltage setting, without the capacitor the full-wave rectified waveform would be confusing the measurement. The 9V DC measurement reported by the DMM matches the rated 9V AC RMS equivalent, so maybe the DMM was somehow measuring the RMS value. Then when you added a capacitor, the waveform peaks were sustained long enough for the DMM to start measuring accurately. Sadly, it is possible for measurement equipment to "lie" to us under some conditions. Happens to the best of us sometimes. The DMM is just an electronic machine, not a magic box that always gives the right voltage measurement. Most DMMs use a measurement technique called dual-slope integration, where a capacitor is first rapidly charged up to the voltage being sampled, and then the sampling capacitor is discharged through a constant-current source. The DMM counts how long it takes to discharge the capacitor back to zero. The value of that counter is what the DMM displays. Calibration depends on the current source, the comparator offset voltage, and the quality of the sampling capacitor. This technique is cheap to implement and it works great, as long as the input signal doesn't change very quickly. But when connected to that full-wave rectified signal, the sampling capacitor doesn't stay at the peak voltage. It's indeterminate where the sample interval begins and ends, so it's hard to know exactly how many counts the DMM might report.
So if the C is omitted, then it's not really a DC circuit, so the DMM DC measurement isn't valid. You also asked about using a different value of C. The bridge rectifier is not regulated; its output voltage can vary with different load impedance. Changing the load capacitance C affects the capacitor's reactance Xc, which also affects the load impedance. $$ X_C = \frac{1}{2 \pi f C} $$ A lower load impedance, just like a lower load resistance, will draw more current at a given voltage. But unlike resistance, the current and voltage waveforms may be out of phase. So it's possible to have voltage across a capacitor even with zero current, and it's possible to have current through an inductor even with zero voltage (under some conditions). Putting a reactance in parallel with a resistance is a bit more complicated than putting resistors in parallel, because the voltage and current waveforms are in phase for the resistor but are 90 degrees out of phase for the capacitor. In AC circuit analysis, we use complex numbers and phasor notation (yes, that really is a thing) to model these AC circuit elements. If you think of impedance as a vector, with the length of the vector acting similarly to resistance in Ohm's law, and reactance acting at right angles to resistance, then putting the resistor and capacitor in parallel gives the total load impedance Z. Although it's possible to go deeper into the maths, there's another important point worth mentioning: this circuit isn't regulated. If you want to get 12V DC output, you can't just select a capacitor value and expect this to always give 12V DC output regardless of how much load current is drawn. This circuit is a good building block to start with, but the full-wave rectified output voltage will vary with changes in the utility line-in voltage as well as the load current. If you really want it to be regulated, add a regulator circuit such as a 78M05 (or 78M12 if you really need 12V). In that case you'll need the full-wave bridge to provide a bit more than 12V so that the regulator has some headroom to work with (but not too much, because the linear regulator works by wasting the unwanted energy). AC circuit theory can be kind of mind-bending at first, because there are all these surprising mathematical things, like imaginary numbers and Euler's formula, that turn out to actually work in real life. The comment about how the capacitor "evens out the peaks" is... kind of true... but it's a major oversimplification. As you've discovered, a qualitative statement like that doesn't help you determine how much capacitance you need to achieve your design goal of making a 12V DC power supply. I'm not going to be able to fully explain AC circuit theory here, but here are at least some interesting breadcrumbs: See https://en.wikipedia.org/wiki/Electrical_reactance See https://en.wikipedia.org/wiki/Electrical_impedance See https://en.wikipedia.org/wiki/Phasor
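As a rough numeric check of the crest-factor explanation above (a sketch, assuming about 0.7 V per conducting bridge diode):
$$V_{peak} \approx \sqrt{2}\times 9\ \text{V} \approx 12.7\ \text{V}, \qquad V_{out(peak)} \approx 12.7 - 2\times0.7 \approx 11.3\ \text{V}.$$
A reading noticeably above that, such as the 14 V mentioned in the question, usually just means the small unregulated transformer puts out more than its rated 9 V RMS when lightly loaded.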
H: Is it ever useful to add a capacitor between op amp inputs? Is it ever a good idea to add a capacitor between the inputs of an op amp? I'm building an electronic load roughly inspired by some of the Dave Jones derivative circuits out there. I've adopted the standard of being able to fully explain the role of each component in the circuit so I can learn the most from the exercise, and so it's almost as if I've designed the circuit from scratch when I'm done. One of the popular builds is depicted here: (from http://mjlorton.com/forum/index.php?topic=29.msg92#msg92) Notice that C5 (10nF) is attached across the inputs of IC2B. The poster mentioned that he was only able to achieve stability by adding that capacitor. Something about seeing a capacitor there just seems wrong to me, but being something of a beginner with op amps, I thought maybe I was missing something and just hadn't come across it before. The reading I've done on op amp stability seemed to indicate that capacitance on the inputs was a negative factor and something to be compensated for, not added. I don't actually have a stability problem in my circuit, not yet at least, although I do have bypass capacitors (0.1uF) in there from Vcc to ground and from the non-inverting input to ground (to filter out EMI from the reference voltage pot). Is adding a capacitor across the inputs ever a good idea, or did the designer just get lucky in this case? AI: I think he got lucky - that hairball haywired circuit probably had some inductance that caused the oscillation. High currents only require a small layout issue to cause feedback voltage to show up - at the 750kHz that he reported and several amperes it wouldn't take much. In answer to your question, yes, there is a valid reason to use such a capacitor even though it generally reduces stability. It has to do with the EMI sensitivity of op-amp front ends, which can produce an apparent DC offset via nonlinearity of the front-end response. Since low-level signals generally require high gain, stability is less of an issue there than it would be with a very low gain circuit.
H: Is a resistor on LEDs necessary after the voltage drop? Let's say we have a perfect 5V DC 0.1A power supply. Obviously for one LED we would need a resistor (I'm well aware of Ohm's law), BUT what if we connect a bunch of LEDs in series, and the current drain is well above what the power supply can deliver, so the voltage on those LEDs drops to 2.5-3V? Would you still need to add a resistor if the LEDs' voltage is about 3.2V? It might be a silly question, but I thought it'd be cool to not have a resistor in such a case (because the LEDs would be brighter that way), and it's better to ask and be sure than be sorry, right!? Thank you in advance! AI: Short:
GIVEN: A power supply (psu) providing Vcv (V constant voltage) with maximum current = Icc, and providing Icc at reduced voltage when Rload < Vcv/Icc.
Define: Vpsu = voltage at the psu terminals.
(1) Placing N LEDs in parallel across the psu with no resistors will not damage any LED if the current in the highest-current LED is <= I_LED_max.
(2) Adding resistors can increase light out: for a load of many LEDs in parallel with no resistors, if Vpsu = Vload < Vcv (so Iload = Icc), then adding series resistors per LED, such that Vload is still < Vcv, will increase total light out. How much depends on several factors - see below.
"Let's say we have perfect 5V DC 0.1A power supply." Assume that means it provides Vout = 5.000...V for Rload >= 50 ohm, i.e. Iout <= 100 mA. Also assume it provides 100 mA at whatever voltage suits for Rload < 50 ohms.
"Obviously for one LED we would need a resistor" No. That depends on the LED. If the LED Vf is < 5V at 100 mA it will draw 100 mA at < 5V. If the LED is specified to operate OK at 100 mA it will be OK.
"BUT what if we connect bunch of LED's in series." The lowest common Vf for visible LEDs is ~2V for red (it varies widely in special cases). One LED = 2V. 2 in series = 4V. 3+ in series = too many. So "bunch" <= 2 for proper operation.
"And the current drain is well above the power supply, so voltage on those LED's drops to 2.5-3V." I assume this means Itotal >> 100 mA if all LEDs operated in the "normal" Vf range. Let's put the bunch of LEDs in parallel to make sense of this. Let's say 20 x 20 mA, 3.2V-spec LEDs in parallel.
"Would you still need to add resistor if LED's voltage is about 3.2V?" No from a ratings point of view. Maybe from others. If 20 LEDs rated at 20 mA at 3.2 V were connected to a 100 mA max power supply then they would draw 100 mA. The average current per LED = 100/20 = 5 mA. If the LEDs are reasonably modern and from a reputable manufacturer then, while they will not all draw exactly 5 mA, it is exceedingly unlikely that any will draw over 20 mA when 20 are placed in parallel. So no LED exceeds its rating.
"It might be a silly question, but I thought it'd be cool to not have a resistor in such case (because LED's would be brighter that way)" As above, if the LEDs are in parallel and none exceeds its current rating then they will be safe, BUT the light per LED will be uneven and the light output will probably NOT be maximised. Because: LED efficiency in terms of lumens/mA rises as mA falls. The result varies with the LED, but as a guide 5 to 20% more light per mA may be obtainable at 10% of rated current than at 100% of rated current. Then the brighter LEDs make LESS light per mA than the lower-brightness LEDs, i.e. the ones that "hog" the current make less brightness per unit of current but take a larger share of the current. If the spread of currents is small and/or the efficiency change with current is small, then the light output difference will be small.
As the current range rises and as the efficiency range rises, the loss of light due to current hogging rises. A simplistic worked example shows the likely order of the result (a code version follows after this paragraph). Example - if you have 20 LEDs, the highest-current one draws 20 mA, and each successively dimmer one draws 80.245% of the previous one, then the total drain will be 100 mA (actually 99.99936 mA), i.e. with currents rounded to 0.1 mA the LEDs draw 20, 16, 12.8, 10.3, 8.2, 6.6, 5.3, 4.2, 3.4, 2.7, 2.2, 1.7, 1.4, 1.1, 0.9, 0.7, 0.5, 0.4, 0.3, 0.3 mA. A spreadsheet calculation shows that the light loss is only about 3%, i.e. the loss of light from current hogging is not liable to be vast. Worse is that the brightness differences can be substantial and very visible. I have seen lights made with a single common series resistor and 6 parallel LEDs. At full power the differences were not visible. At lower powers the differences in brightness were immense. In the case of a CC supply, ADDING per-LED resistors such that Vpsu is still < Vcv will INCREASE the total LED brightness slightly. This is because the power in increases as V rises, the total LED current is still I_psu_max, V_LED rises, and so the LEDs operate at a more efficient point. SO: if operating N LEDs from a current-limited power supply where Isupply is much less than N x I_LED_max_rated, then no LED is liable to exceed its rated max current and no damage will be done.
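If you want to reproduce the spreadsheet estimate, here is a minimal C++ sketch. The efficiency model (10% more light per mA at zero current, falling linearly to the rated figure at 20 mA) is my own assumption picked from inside the 5 to 20% range quoted above, not manufacturer data; with it, the computed loss comes out at roughly 3%, in line with the figure given.

#include <cstdio>

int main() {
    const double ratedmA = 20.0;    // LED rated current
    const double boost   = 0.10;    // ASSUMED: 10% extra light per mA at zero current, linear in between
    const double ratio   = 0.80245; // each LED draws 80.245% of the previous one (from the text)

    double i_mA = 20.0, totalI = 0.0, lightHog = 0.0;
    for (int n = 0; n < 20; ++n) {
        lightHog += i_mA * (1.0 + boost * (1.0 - i_mA / ratedmA)); // relative light from this LED
        totalI   += i_mA;
        i_mA     *= ratio;
    }

    // Perfectly shared case: the same total current split equally (5 mA each)
    const double perLed  = totalI / 20.0;
    const double lightEq = 20.0 * perLed * (1.0 + boost * (1.0 - perLed / ratedmA));

    std::printf("total %.2f mA, light lost to current hogging: %.1f %%\n",
                totalI, 100.0 * (1.0 - lightHog / lightEq));
    return 0;
}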
H: Can someone explain a couple of lines of Verilog to me? I'm a student trying to learn Verilog on my own with a dev board. This is just a simple and short module; I just need some clarifications.

module BASIC (CLOCK_50, KEY, LEDG);
  input CLOCK_50;
  input [3:0] KEY;
  output reg [8:0] LEDG;

  reg [2:0] key0_dly;

  always @(posedge CLOCK_50)
  begin
    if (!key0_dly[2] && key0_dly[1])
      LEDG[0] <= ~LEDG[0];

    key0_dly <= {key0_dly[1:0], KEY[0]};
  end
endmodule

Now, I know the basic syntax of Verilog. My questions would be: What is the purpose of key0_dly? Why does the if statement check those 2 bits of it? Why is key0_dly assigned the value {key0_dly[1:0], KEY[0]}? What does that line do? Is the line key0_dly <= {key0_dly[1:0], KEY[0]}; inside the if statement? Am I correct in saying that if there is no begin and end then, like in C, only the one line of code that follows belongs to the if block? AI: What is the purpose of key0_dly? Why does the if statement check the 2 bits of it? It wants to complement LEDG[0] only if the trigger has occurred during the previous clock cycle but not 2 cycles ago. Looks like simple debouncing/edge detection to me. Why is key0_dly assigned the values {key0_dly[1:0], KEY[0]}? What does that line do? It left-shifts key0_dly one bit, shifting the value of KEY[0] in. Is this line key0_dly <= {key0_dly[1:0], KEY[0]}; in the if statement? Am I correct in saying that if there is no begin and end then like in C only the one line of code that follows in the if block? Sounds right. But you need the shift to happen on every clock, otherwise there would be no way to trigger the condition.
H: How to identify old microcontrollers? I have several old microcontrollers; I am sure I bought them over 10 years ago. It is my assumption that the numbers on these would still be catalogued, but I cannot find a way to search the internet by the number to find out what these parts can be used for. When I do a search for the number I generally either find a data sheet for a similar number or nothing at all. I have 14, so I won't list all the numbers here, but here are a few examples. GAL16V8D 15LP C914D16 74LS 138 661 T74LS74B1 98640A Is there a site where I can enter these numbers to find out what these parts do? AI: None of these are microcontrollers; they're all logic (programmable logic in the case of the GAL, fixed in the other two). Go to sites like Digikey & Mouser to enter full or partial part numbers - if there's ambiguity because a partial part number happens to match two entirely different types of components, it should be pretty clear which one you have (usually). For more generic googling, try just the first several alphanumeric characters of the part number (e.g. GAL16V8, 74LS138, 74LS74); the later characters are not usually helpful to a "just curious" search, and sometimes you need to remove the first one or two characters, but knowing when to do that is an 'acquired skill' :)
H: How or where can I get a 120VAC to 3.3VAC power supply? I'm using a safety gas valve (commonly found in household stoves) for a hobby project. The valve is rated for 3.3VAC, 3.6 amps. The problem is that AC-to-AC power supplies are quite rare, and I had no luck at all finding one that matches the rating above. So my question is, how easy/doable would it be to take an AC-to-DC power supply with the above ratings and modify it to output AC voltage? Is it as simple as bypassing some internal components? If not, then does anyone have ideas about where/how I could get the required power supply, or maybe whether I could use some different method to open and close the gas valve? Edit: Why all the downvotes? Did I ask this in the wrong place? Edit2: I apologize for my lack of knowledge of EE terms. I don't really have much prior experience in EE. AI: What you are looking for is a transformer with a 117 volt primary, 3.3 volt secondary, and at least a 4 amp current rating. You can't get this, exactly, but a standard 220 / 6.3 volt transformer connected to 120 VAC will give you a nominal 3.15 volts, which should be close enough.
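For the turns-ratio arithmetic behind that suggestion (a sketch; the exact result depends on whether the primary winding is rated 220 V or 240 V, and the output also rises a little at light load):
$$V_{sec} \approx V_{applied}\times\frac{6.3}{220} = 120\ \text{V}\times\frac{6.3}{220} \approx 3.4\ \text{V}$$
(or about 3.15 V for a 240 V-primary part), either of which is close to the valve's 3.3 VAC rating.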
H: Calculating gate resistor value for enhanced active-region stability I'm working on an electronic load design driving an n-channel MOSFET with an op-amp. I'd like to consider adding a gate resistor (R3 in the schematic) to improve the stability. I've searched quite a bit, but have been unable to find any hard description of just how to analyze that portion of the circuit. I understand the gate resistor forms a low-pass filter with the gate capacitance, and that limits the bandwidth of the signal applied to the MOSFET, improving the phase margin and making the circuit less susceptible to oscillation. I've also found some heuristics that the value ought to be somewhere between 10R and 1K, but I'd like to understand the design choice better. I suspect I'm calculating a Bode plot pole for the RC filter formed by the resistor and the gate capacitance. However, I'm not sure which capacitance value from the MOSFET datasheet to use (guessing Ciss = 2.4nF), or whether it's just a case of applying 1/(2πRC) to locate the pole or whether it's more complicated in this case. That works out to roughly 650kHz with a 100Ω value for R3, which makes me think maybe I'm on the right track. Also, I'd love any advice on where to reasonably locate the pole to maximize stability without negatively impacting the circuit performance. Just guessing, I would expect that a bandwidth of 100kHz would be plenty, but I'm not sure if there would be reasons to place the pole either lower or higher. AI: The pole formed by the gate resistor and the input capacitance would actually make the circuit less stable. This is because there will already be a pole in the op-amp giving a 20dB/decade gain change (up to 90 degrees of phase shift). If you add another pole in the loop you now potentially have a 40dB/decade gain change, with the phase shift being asymptotic to 180 degrees. I would add a resistor in the connection from the sense resistor to the op-amp negative input and then add a capacitor from the output of the op-amp to the negative input of the op-amp. This can then give you a dominant pole that rolls off the gain before the output stage contributes significant phase shift. It can be useful to put a resistor in series with the feedback capacitor to give better gain at higher frequencies. The resistor in the gate of the MOSFET can be useful for stopping high-frequency instability - a value of 22-100 ohms is appropriate there.
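To put numbers on both parts of this (the compensation values below are purely illustrative assumptions, not recommendations): the asker's gate-RC estimate is
$$f_{gate} = \frac{1}{2\pi R_{gate} C_{iss}} = \frac{1}{2\pi \times 100\ \Omega \times 2.4\ \text{nF}} \approx 660\ \text{kHz},$$
while a dominant-pole network of the kind suggested above, say a hypothetical R = 1 kΩ from the sense resistor into the inverting input with C = 10 nF from output to inverting input, has a characteristic frequency of
$$\frac{1}{2\pi R C} = \frac{1}{2\pi \times 1\ \text{k}\Omega \times 10\ \text{nF}} \approx 16\ \text{kHz},$$
i.e. the loop gain is rolled off well below the frequencies where the gate RC and the op-amp output stage start adding their own phase shift.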
H: JTAG boundary scan registers On the internet, it is always shown that each pin of the IC has its own boundary scan register (the yellow boxes). But looking at BSDL files, there are actually more BS registers than pins. So, for the questions: Is it right to say that there are always more BS registers than pins? What are the other BS registers used for, or why are they there? How can we use JTAG (TAP controller, state machine, etc.) to find out which is a pin's BS register? P.S. I have posted this question on Stack Exchange Reverse Engineering but there weren't any replies. https://reverseengineering.stackexchange.com/questions/9119/jtag-finding-bs-registers-for-ics-pins AI: 1) Usually yes, but that will depend on the device. Sometimes you have devices that only have a few boundary-scan-enabled pins (and, for example, lots of power/gnd/analogue pins), and then the size of the boundary register may be smaller than the number of pins. But that is rare. 2) The most common structure behind a bidirectional pin is 3 boundary scan cells: 1 input cell (for reading the pin), 1 output cell (output3, meaning it can be tristated) for writing the pin, and 1 control cell for enabling/disabling the output3 cell. There can also be bidirectional cells and purely internal cells, e.g. when the same architecture is used for multiple packages. If you have a pure output or input pin you may just find an output or input cell behind the pin. 3) That information is usually provided by the device manufacturer. There are ways to reverse engineer a BSDL file, but they are a lot of work. Usually that means you need the device on a board, powered up and functional, without anything around it (so you can freely toggle all pins). First you have to figure out the size and commands of the instruction register and the size of the boundary register. Then you shift known patterns into the boundary register to switch pin states, try to find the pin that actually changes state, and construct your BSDL file based on that information. As said earlier: tedious work!
H: TS2950CT50 Voltage Regulator +5V confusing specs / usage The datasheet for the TS2950CT50 voltage regulator only provides information on the series TS2950C and TS2951C. From what I can tell, TS2950CT50 is a model within that series, which is a fixed +5v regulator. But the circuit diagram in the datasheet only shows how the model TS2951A can be used to provide a custom voltage output. But it gives no information on how I might use the TS2950CT50 (which I think is purely a +5v fixed model, is that correct?) Please can someone give me a circuit diagram of how to use TS2950CT50? The input will be between 10 and 15 volts. I will be drawing current between 1mA and 90mA. Thanks AI: Page 5-6, left drawing. That's all you need to do. Capacitance value from the text is minimum 1uF (page 1). I'd say a 2.2uF input capacitance wouldn't go wrong either. So, basically just (and I feel I'm being very kind by redrawing it): simulate this circuit – Schematic created using CircuitLab
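One extra check worth doing with the numbers in the question (a sketch, assuming the full 90 mA is drawn at the 15 V end of the input range): a linear regulator dissipates the headroom as heat, so
$$P_{max} = (V_{in,max} - V_{out})\times I_{max} = (15\ \text{V} - 5\ \text{V})\times 0.09\ \text{A} \approx 0.9\ \text{W},$$
which is worth comparing against the package's thermal limits (and the copper area available) before finalising the layout.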
H: What kind of material of PCB should I choose to work at -40 degree centigrade? I need to design a PCB circuit to work at (or even below) -40 degree centigrade. Normally, I use FR-4 PCB board at room temperature. However, I have learned from the question "What is the minimum temperature for FR-4 PCBs?" that FR-4 PCBs may have problems below -30 degree centigrade. So what kind of PCB material can handle such low temperature? AI: I've used FR4 at 4K and others have used it at much lower temperatures. The physical characteristics go somewhat downhill at low temperatures, but board failure such as delamination does not normally occur from mere exposure to cold temperatures. The Charpy tests referred to in your linked answer are a measure of strength of a notched specimen to shock (add a stress riser then whack it with a hammer, basically). If you are in a severe mechanical environment you may have to consider the lower impact strength and use a thicker board or support it better. Solder joint failure due to differences in coefficient of thermal expansion can be a factor, especially with lead-free solder and things like large BGA packages. -40°C is just a nippy day in some parts of the world, and -55°C is the lower end of the military temperature range, both limits are within the normal range of epoxy-glass boards, and there are plenty of reasonably-priced components available that have guaranteed specifications at those temperatures (especially -40).
H: Could you explain how this current measurement circuit works? I have a schematic for current measurement, but I don't really know how to calculate its output. The op-amps run on ±12V. The output of the circuit is 0.2*I(Rs) [V] (this is what I don't know how to calculate), thus within a 5V range we can measure 25A with it. The V_PWR is 24V. How should I set up this circuit to measure a maximum current of 10A instead of 25A? What are the diodes and transistors for? Circuit measurement schematic http://img5013.photobox.co.uk/151954327f8d9d7d42f253c3a9c42b2e44e8b66e7687b35e9972598d1a88bc7f88b9ef41.jpg simulate this circuit – Schematic created using CircuitLab Thanks, Tamas AI: In a nutshell, the first op-amp and the transistor work together to draw a current through the 100Ω resistor such that the voltage drop across it matches the voltage drop across the 0.01Ω sense resistor. Since the former is 10000× the latter, the current through the transistor will be 1/10000 of the sensed current. The current through the transistor is then fed through the two 1000Ω resistors, which creates a voltage drop of $$2000\Omega \cdot \frac{I_{LOAD}}{10000} = 0.2 I_{LOAD}$$ The second op-amp is just a voltage follower (buffer) for this voltage. HOWEVER, the circuit cannot work as shown. The two inputs to the first op-amp are at a value very close to V_PWR (24V), while the power supply for that op-amp is only ±12V. No op-amp can work with its inputs that far outside the supply rails. You should use one of the dedicated current-sensing chips for this part of the circuit, which are specifically designed to deal with this situation. In order to change the range of operation from 25A to 10A, the simplest thing would be to change the 2K resistance to 5K.
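To see where the 5K figure comes from (keeping the 0.01 Ω shunt and the 100 Ω ratio resistor unchanged): the mirrored current is I_LOAD/10000, so for a 5 V full-scale output at 10 A you need
$$R_{total}\cdot\frac{10\ \text{A}}{10000} = 5\ \text{V} \;\Rightarrow\; R_{total} = 5\ \text{k}\Omega,$$
i.e. replace the two 1 kΩ resistors with a total of about 5 kΩ, giving 0.5 V per amp instead of 0.2 V per amp.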
H: Driving a large, noisy, free-wheeling DC motor (cooling fan) I have an automotive cooling fan which is nominally 12 V and draws about 15-25 A with a BIG inductive spike at startup (it will happily blow a 30 A fuse). These are traditionally wired via standard automotive relays, but I'd like to drive it from a FET or similar solid-state circuit, possibly PWM but that's a glory job for later; for now let's just switch it on & off with minimal magic smoke escape ;) This device presents a few challenges: Big inductive spike at startup / switch-off. When the vehicle is moving but the fan is NOT powered, we need to allow it to freewheel. When the vehicle is moving AND the fan is powered, it may be driven "overspeed" by the motion of the vehicle; again we want to either disengage (freewheel) it, or at least not try to brake it. In both cases the driver circuit will have to tolerate the conditions (freewheeling open-circuit, the motor may generate far more than 12 V, for example) as well as switch between them without letting the smoke out. I haven't started doodling circuits yet as I suspect the answers to this question will dictate things like device choice & circuit topology. I'm assuming the freewheeling requirement will mean going down the H-bridge route (or at least half an H-bridge; there's no real call to reverse the rotation of your cooling fan!) I mentioned that PWM control would be nice; other niceties would be soft start (to avoid the massive startup kick) and over-current protection (if the fan tries to kick in when something's jammed in it - the vehicle is 4x4 so the fan could easily be full of mud/sticks/squirrels at any given moment). So, I'm looking for guidance on what sort of topology I could use to drive it, which families of devices would be most tolerant of this sort of situation, parameters/configurations to look out for or avoid, and what's achievable or a right pain from the "wish-list". AI: We make DC fan controllers for large earth-moving machinery. There are 8 or 10 large fans (depending on the machine) running from the nominal 28 Vdc power supply. Inrush current is large - more than 50 amps per fan. The people who contracted us to build their controllers told us that any relay-based controllers they attempted to build had very short lifetimes, usually ending in violent death. We built controllers that handle the fans in pairs - two fans per controller output. One model has 4 output channels, the other model we build has 5 output channels. We used sense-FETs from IRC - IRCZ44. I don't even know if those parts are still manufactured - our controllers are a very old design. Nowadays, I'd use much beefier FETs with Hall-effect current sensors from Allegro. We always ramp the fans from full off to whatever the desired speed is. The PWM rate is relatively high at 25 kHz. Each PWM stage is followed by a large LC filter intended to keep the switching edges from radiating into the communications radios on the machine. Each FET is protected with a large Schottky diode. The combination of FET, diode, inductor and output capacitor forms a classic buck-converter power stage. The inductors also help with detecting over-current conditions - the short-circuit current rise time is slowed sufficiently that the system can turn the FETs off before they are destroyed. Wiring shorts are a common problem on the machinery and it is important to protect the controller.
This is also one of the benefits of splitting the controller into multiple channels - if one channel shuts down, the remaining channels still operate. We've learned a lot over the years with this project - this machinery has extreme levels of vibration under extreme environmental conditions. The controller boxes have been quite reliable, and the most common failures are with the actual temperature sensors that mount into the engine and transmission on the machine. The mechanics who service the machines tell us that is normal - they are replacing sensors on a regular basis for both our controllers and the machine controllers. The main take-away from our experience is that FET-based controllers are reliable, but you must use PWM to ramp the fans up to speed. [Edit] I realized that I didn't address your concerns about free-wheeling and free-wheel over-speed. Because this is a simple buck converter, the fan is free to free-wheel whenever there is sufficient airflow to cause the blades to spin. The circuit does not add any drag to the fan while the fan is supposed to be off. Over-speed is controlled by the Schottky clamp diodes in the PWM stages. If the fan begins to spin too quickly, the voltage generated by the fan rises to the battery voltage. When the fan speed causes the generated output to exceed the battery voltage plus the Schottky diode drop, the fan starts supplying energy into the vehicle electrical system. This controls the fan speed - the speed will rise to that point, and then the drag caused by the electrical load of the vehicle keeps the fan from exceeding that speed.
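For the soft-start the original question asked about, here is a minimal sketch of the ramp logic in Arduino-style C++. This is not the controller described above (that one runs at 25 kHz with LC filtering and current sensing); the pin number and ramp time are assumptions, and the stock analogWrite() PWM frequency is only a few hundred hertz, so treat it purely as an illustration of ramping the duty cycle instead of slamming the FET on.

// Minimal soft-start ramp sketch (assumed values: pin 9, ~2 s ramp).
const int FAN_PWM_PIN = 9;

void rampFanTo(uint8_t targetDuty, unsigned long rampMillis) {
  // Step the duty cycle up gradually so the motor's inrush current
  // stays within what the supply and fuse can tolerate.
  for (uint8_t duty = 0; duty < targetDuty; duty++) {
    analogWrite(FAN_PWM_PIN, duty);
    delay(rampMillis / targetDuty);
  }
  analogWrite(FAN_PWM_PIN, targetDuty);
}

void setup() {
  pinMode(FAN_PWM_PIN, OUTPUT);
  analogWrite(FAN_PWM_PIN, 0);   // fan off
}

void loop() {
  rampFanTo(255, 2000);          // ramp to full speed over about 2 seconds
  while (true) { /* fan running; real firmware would add temperature and current checks here */ }
}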
H: Why did the supply voltage get stuck when connected to LEDs? I was working with some SMD LEDs, trying to plot their V-I curve. For that purpose, I connected two segments in parallel with each other, each containing two LEDs as shown, directly to a regulated DC power supply. The voltage from the supply was increased gradually. The LEDs started to glow as the voltage reached around 4.8V, with 0.07mA of current in the circuit. As I increased the voltage, the LEDs became brighter, as expected. But after around 7.6V, I wasn't able to increase the voltage any further. The current reading at this point was 189.8mA. The supply is capable of providing up to 30V, 2A. Why did the voltage get stuck at 7.6V? AI: Your power supply can limit current - see the "coarse" and "fine" control knobs on the right side under "Current." Presumably, you had the current limit set to around 190mA. As Wouter van Ooijen says, this also provides a safe, simple method of getting the data for your V-I chart:
1. Set the voltage at a safe maximum (from the LED datasheet).
2. Set the current down to 0.
3. Read the voltage.
4. Increase the current.
5. Read the voltage.
6. Repeat steps 3 to 5 until the voltage reaches the maximum you set or your current reaches the maximum for your LED.
7. Finished.
It looks like you are measuring 1 LED from each module, with the two LEDs in parallel. You realize that your readings may also be influenced by whatever else is in the circuit? That's the other LEDs, the resistors, and what looks like it might be a regulator or controller of some kind.
H: How does this BJT transistor circuit work? I just wanted to know a little more about how this circuit works; a more in-depth analysis, I mean. I know there's some kind of differential amplifier and a Zener diode for voltage regulation connected to a voltage divider (correct me if I'm wrong). Also, I cannot get any voltage from Vo in a Proteus 8 simulation; what's wrong? (the simulation file) AI: The zener with series resistor R3 has about 10V on the anode with respect to ground. It is seeing 50mA, so the actual voltage will be a bit higher than the 10V nominal, maybe one percent on average. That voltage is buffered with Q7 and used to create a ~17mA current source for the current mirror composed of Q6 and Q5, which feeds the differential amplifier composed of Q3 and Q4 (so bias currents are in the 50uA range). The differential amplifier is fed with 5V from the R4/R5 voltage divider (minus about 25mV from the bias current). Q1 and Q2 form a Sziklai pair voltage follower. The output voltage is divided down by R1/R2 (poorly matched to R4/R5) so that the output voltage should be about 21.7V at balance with a 10V zener, so maybe 22V with the 55mA zener current. This circuit could be improved by bootstrapping some of the zener current from the output to make it more constant (a resistor from the output to the zener), and by making \$\dfrac{R1 \cdot R2}{R1+R2} \approx 500\Omega\$. The former improvement would improve line regulation (changes in output voltage with changes in input voltage), and the latter would improve temperature stability. Some emitter degeneration in the current mirror would also be a good idea (along with coupling the two transistors together thermally). Also consider a resistor on the output pass transistor to deal with high-temperature leakage. It's going to run quite warm - the zener is dissipating over half a watt, and Q7 about a watt - above its rating without a heatsink. In a modern design we'd not likely be nearly that wasteful.
H: Why are some common-mode chokes rated in ohms instead of henries? Some common-mode chokes, including this one, are rated in ohms instead of in henries. Why? Isn't the impedance injected by the common-mode choke entirely frequency dependent? Is there an assumed frequency at which the specified ohm-age is calculated? AI: It is specified in ohms because it doesn't behave like an ideal inductor. It's inductive at low frequencies, lossy/resistive in between, and capacitive at very high frequencies. The impedance is usually specified at 100MHz, but check the datasheet to be sure.
H: Reverse audio jack for front and back speakers My problem is a strange one. I have a computer connected to front and back speakers via 3.5mm audio jacks. Sometimes I use the computer with the monitor at my desk. Other times I'm using it to watch movies on a projector on the opposite side of the room. Depending on whether I'm looking at the monitor or viewing the projector, the front and back speakers need to be swapped, but it appears Windows does not provide an easy solution for such a simple task. What I'm looking into doing is creating a box with two 3.5mm inputs, two 3.5mm outputs, and a switch on top that swaps the audio signals. This should be a simple task, but how can I construct this while ensuring that I don't introduce a noticeable amount of static noise into the equation? Also, how would I organize this on a breadboard? Thanks for any input. AI: How about something like this? simulate this circuit – Schematic created using CircuitLab Tie all the grounds together; flipping both switches will swap the speakers. The switches are DPDT. Keep all the wires short and you shouldn't introduce too much noise. You could do it all with one switch if you use a 4PDT, like this one. Note that the left/right channels are not flipped by this switch; that's already done in software if your computer is expecting the speakers to be behind the person, as it is for surround-sound channels.
H: How to cut costs when fabricating large PCBs? I've looked online at PCB fabrication companies, and they invariably price their boards with the size of the board as the primary factor. Why is this? The physical board itself isn't that expensive, is it? I'm guessing it's because the size of the boards dictates how many they can produce simultaneously, and that's the primary limiting factor in their profitability - is that right? Anyway, is there a way to keep the costs down when fabricating large (~8x10 in.) but sparsely populated (~50 components) boards (other than just ordering from the cheapest Chinese factory I can find)? It seems silly to pay $50 for a board that's only gonna have $10 in parts on it. AI: 10x8 inches isn't really large, but you may get only 2 per panel, which will impact the cost. Panel sizes will vary from fab to fab, so it is worth talking to a few - and negotiating the details - as The Photon says, you may get 4 per panel and halve your price that way. And the economics of setting up a fab for small jobs dictate that buying 50 boards instead of 10 can halve the price again, 100 or more even lower. In addition to The Photon's good suggestions, use the simplest process possible for the large board. A single-sided PCB may be MUCH cheaper than 2-layer, especially since there's no through-hole plating stage, and omitting silk screen and solder mask may save a little more money. Some fabs may still offer phenolic material - much cheaper than FR4 fibreglass. You might be able to use an Arduino-sized full-spec PCB to hold the complicated stuff or customization, which you then plug into a much larger single-sided board which - because it omits the personalization for a specific project - you can buy in larger quantities and re-use for multiple projects.
H: Crystal Footprint Placement Pretty simple question, please point me in the right direction if this has already been asked. I am designing a board around freescale's IMX23 microprocessor which runs from an external crystal at 24 MHz. I know that I need to keep the crystal close to the proc, but does it matter if the crystal is on the other side of the board (4 layers)? I know that for routing DRAM traces you want to avoid vias like the plague but would I be okay putting the crystal directly underneath the processor? The traces would be extremely short, but the vias would pass through a GND and VDD plane. The alternative is placing the crystal slightly farther from the proc, about 250 mil. Thanks. AI: Either will work out fine. The XTAL is a high Q BPF so it does a good job of rejecting noise in most cases. You do want to try and limit high edge rate signals adjacent to those traces where you can, though. The XTAL also has very low emissions so won't induce noise into your planes. I've done it both ways and never noticed any difference in SI one way or the other. Place it where it makes sense for the remainder of your design.
H: Stop mask error upon running DRC in Eagle 7.3.0 I'm working on my first board layout using Eagle (7.3.0). I've got the layout done, but when I run DRC, I get a number of Stop Mask errors: These seem to be due to small rectangles on the tStop layer (layer 29). If I hide the tStop layer, the errors go away, but I don't understand them in the first place. It appears as if there are two rectangles for each capacitor pad on the tStop layer, and the smaller of the two (highlighted in the screenshot) is provoking the error. I'm using capacitors that use the C0402 package from Eagle's builtin resistor.lbr library. I don't see those additional small rectangles when I open the C0402 package directly in the library editor. They're on a few other pads (e.g. the pad at the top of the screenshot), but only for devices from Eagle's built in library. Footprints that I created myself don't show them, nor do they cause DRC errors. Has anyone run into this before? Is there a solution? Are they something to worry about? For what it's worth, I'm using SeeedStudio's DRC file as I intend to possibly have them make these boards for me. AI: Change the font of the name to be 'Vector', not 'Proportional' - you can do this by smashing the component with the smash tool and then editing the text. Otherwise Eagle requires clearance around the text. Alternatively, and especially in this case, you should move the C1 name label into the free space near the component so that it doesn't overlap the stop layers and importantly isn't partly covered by the IC outline otherwise readability is affected. Again this is done using the smash tool which allows the text to be moved separately from the component.
H: Are transistors interchangeable? I do know the question in the title is really stupid, but it is the best phrasing I could come up with. Let's say I am building a circuit and the schematic says to use a BD139 transistor. Would there be any issue in using any other NPN transistor? What would I do if I could not find this transistor? I can't buy any parts online, and I do not have access to a store where they would sell this stuff, so I am limited to what I can salvage from old broken electronics, most of which have older, outdated parts with limited information online. AI: Would there be any issue in using any other NPN transistor? If you are missing a screw, can you use just any screw? Wouldn't it depend on how long the screw is? What its thread is? How large its diameter is? What material it is made of? Similarly, all electrical components have electrical characteristics and parameters. Some components can tolerate higher voltages and currents than others. Others are set up for a particular application even though they are all part of the same family of components (capacitors, resistors, inductors, transistors, diodes). So yes, transistors are interchangeable, if the type (NPN/PNP) and the required specs match. What would I do if I could not find this transistor? You compare the parameters of the transistors you have access to against the one the schematic calls for. You keep searching until you find one that can handle it. Now, someone might have used a transistor that was overkill for the project, so knowing a bit about the circuit would help. If the current through the transistor is only 10mA and they used a part that can tolerate 1A, well, that's a bit much, and you can find a part that is better suited to what the circuit needs. But if you do not have experience analysing circuits, then you should probably match the component (to be on the safe side).
H: Determine input frequency of square wave w/ ICR in Atmega328p I'm trying to obtain the input frequency of a square wave using the input capture register of an Atmega328p. So far, it works sporadically -- which is to say, when I input a 75 kHz square wave, the output looks like this: 244 244 75117 74766 75117 75117 79207 80402 82051 82901 84656 85561 87431 244 244 244 88888 90395 244 244 244 -941176 -271186 244 -246153 244 244 244 Does anyone know why this might be the case? I've tried messing with the data types, but otherwise I'm not really sure what the problem could be. The code I've written is below.

// # of overflows
volatile long T1Ovs;
// timestamp variables (store TCNT at time of input capture interrupt)
volatile long Capt1, Capt2;
// capture flag
volatile uint8_t Flag;

volatile long ticks;
volatile double period;
volatile long frequency;

void initTimer1(void)
{
  TCNT1 = 0; // initialize timer to 0

  //timer/counter1 control register b
  TCCR1B |= (1<<ICES1); // input capture edge select; rising edge triggers capture

  //timer/counter1 interrupt mask register
  TIMSK1 |= (1<<ICIE1); // ICIE1: input capture interrupt enable
  TIMSK1 |= (1<<TOIE1); // timer/counter1 overflow interrupt enable
}

void startTimer1(void)
{
  TCCR1B = (1<<CS10); //start timer with pre-scaler = 1
  sei(); //enable global interrupts
}

ISR(TIMER1_CAPT_vect) // interrupt handler on input capture match (rising edge in this case)
{
  if (Flag == 0)
  {
    Capt1 = ICR1; // save timestamp at interrupt (input capture is updated with the counter (TCNT1)
                  // value each time an event occurs on the ICP1 pin (digital pin 8, PINB0)
    T1Ovs = 0; // reset overflows
  }
  if (Flag ==1)
  {
    Capt2 = ICR1;
  }
  Flag ++;
}

ISR(TIMER1_OVF_vect) // interrupt handled on timer1 overflow
{
  T1Ovs++; // increment number of overflows
}

void setup()
{
  Serial.begin(9600);
  initTimer1();
  startTimer1();
}

void loop()
{
  if (Flag == 2)
  {
    ticks = Capt2 - Capt1 + T1Ovs * 0x10000L; // (second timestamp) - (first stamp) + (# of overflows) * (ticks/overflow = 65535)
    frequency = 16000000/ticks; // ticks * seconds/ticks = seconds
                                // 1/seconds = Hz
    Flag = 0; // reset flags
    T1Ovs = 0; // reset overflow count
    TIFR1 = 0b00000000; // clear interrupt registers
    Serial.println(frequency);
    TIMSK1 |= (1 << ICIE1); // enable capture interrupt
    TIMSK1 |= (1 << TOIE1); // enable overflow interrupt
  }
}

Thanks in advance!

UPDATE***********************************

The second iteration of code, using enumerated types to make a state machine:

typedef enum {
  CAPTURE_1,
  CAPTURE_2,
  WAIT
} timer_state_t;

timer_state_t flag = WAIT;

volatile long Capt1, Capt2;
volatile long T1Ovs;

void InitTimer1(void)
{
  //Set Initial Timer value
  TCNT1=0;
  //First capture on rising edge
  TCCR1B|=(1<<ICES1);
  //Enable input capture and overflow interrupts
  TIMSK1|=(1<<ICIE1)|(1<<TOIE1);
}

void StartTimer1(void)
{
  //Start timer without prescaler
  TCCR1B|=(1<<CS10);
  //Enable global interrutps
  sei();
}

ISR(TIMER1_CAPT_vect)
{
  switch(flag)
  {
    case CAPTURE_1:
      Capt1 = ICR1;
      flag = CAPTURE_2;
      break;
    case CAPTURE_2:
      Capt2 = ICR1;
      flag = WAIT;
      Serial.println(flag);
      break;
  }
}

ISR(TIMER1_OVF_vect)
{
  T1Ovs++;
}

void setup()
{
  Serial.begin(9600);
  InitTimer1();
  StartTimer1();
}

void loop()
{
  flag = CAPTURE_1;
  while (flag != WAIT);
  Serial.println("loop");
  Serial.println(Capt2 - Capt1 + T1Ovs * 0x10000);
}

AI: Here is your code updated to work, with comments starting with "J:" explaining the changes...
typedef enum { CAPTURE_1, CAPTURE_2, WAIT } timer_state_t; volatile timer_state_t flag = WAIT; // J:This is a 16-bit timer, so these values will always fit into an unsigned int volatile unsigned int Capt1, Capt2, CaptOvr; // J:Might as well make this unsigned and give it 2x range since it can never be negative. volatile unsigned long T1Ovs; // J:This flag was missing from the original listing; it is needed for the overflow-of-the-overflow check below volatile byte doubleOverflowError = 0; void InitTimer1(void) { //Set Initial Timer value // J:All measurements against TCNT are relative, so no need to reset // TCNT1=0; // J: Note we need to set up all the timer control bits because we do not know what state they are in // J: If, for example, the WGM bits are set to a PWM mode then the TCNT is going to be resetting out from under us rather than monotonically counting up to MAX TCCR1A = 0x00; //First capture on rising edge TCCR1B =(1<<ICES1); //Enable input capture and overflow interrupts TIMSK1|=(1<<ICIE1)|(1<<TOIE1); } // J: Note that it would be ok to start the timer when we assign TCCR1B in InitTimer since nothing will happen when the ISR is called until we set flag to CAPTURE1 void StartTimer1(void) { //Start timer without prescaler // J: Note that we know that the other CS bits are 0 because of the assignment in InitTimer TCCR1B |= (1<<CS10); //Enable global interrupts // J: Interrupts are turned on by Arduino platform startup code // sei(); } ISR(TIMER1_CAPT_vect) { switch(flag) { case CAPTURE_1: Capt1 = ICR1; // J: Reset the overflow to 0 each time we start a measurement T1Ovs=0; doubleOverflowError=0; flag = CAPTURE_2; break; case CAPTURE_2: Capt2 = ICR1; // J: Grab a snap shot of the overflow count since the timer will keep counting (and overflowing); CaptOvr = T1Ovs; flag = WAIT; //J: Generally bad to print in ISRs //Serial.println(flag); break; } } ISR(TIMER1_OVF_vect) { T1Ovs++; // J: Just to be correct, check for overflow of the overflow, otherwise if it overflows we would get an incorrect answer. if (!T1Ovs) { doubleOverflowError=1; } } void setup() { Serial.begin(9600); InitTimer1(); StartTimer1(); } void loop() { // J: No need to bracket this set with cli() because the capture ISR will not do anything until flag is updated flag = CAPTURE_1; while (flag != WAIT); // J: Parentheses and explicit cast for good luck! ( and to ensure correct size and order for operations) if (doubleOverflowError) { Serial.println( "Double Overflow Error! Use a bigger prescaler!"); } else { Serial.println( ( (unsigned long) (Capt2) + (CaptOvr * 0x10000UL) )-Capt1 ); } }
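If you would rather see the result in hertz than in raw timer ticks, the final print can be replaced with something along these lines (a sketch only: totalTicks and freqHz are names introduced here, and the 16 MHz clock / prescaler-of-1 assumption simply matches the code above):

// Hypothetical replacement for the Serial.println() in loop():
// with a 16 MHz clock and prescaler = 1, one tick = 62.5 ns.
unsigned long totalTicks = ((unsigned long) Capt2 + (CaptOvr * 0x10000UL)) - Capt1;
if (totalTicks > 0) {
  unsigned long freqHz = 16000000UL / totalTicks;  // period in ticks -> frequency in Hz
  Serial.print(freqHz);
  Serial.println(" Hz");
}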
H: VHDL code for 4 bit binary division without using the division sign. Getting a warning that DIV.vhd(39): (vcom-1246) Range 3 to 0 is null -- logic taken from https://en.wikipedia.org/wiki/Division_algorithm LIBRARY ieee; USE ieee.std_logic_1164.all; USE ieee.numeric_std.all; ENTITY Div IS PORT( OperandA : in std_logic_vector(3 downto 0); OperandB : in std_logic_vector(3 downto 0); Errorsig : out STD_LOGIC := '0'; Result_Low : out std_logic_vector(3 downto 0); Result_High : out std_logic_vector(3 downto 0)); END Div; ARCHITECTURE behavioral OF Div IS signal Q,R : std_logic_vector (3 downto 0) := "0000"; signal i : integer := 3; signal diff,borr : std_logic_vector (3 downto 0) := "0000"; signal er,err : std_logic := '0'; COMPONENT SUB IS PORT( OperandA : in std_logic_vector(3 downto 0); OperandB : in std_logic_vector(3 downto 0); Result_Low : out std_logic_vector(3 downto 0); Result_High : out std_logic_vector(3 downto 0); Errorsig : out std_Logic); END component; begin Sub1 : SUB PORT MAP( R,OperandB,diff,borr,err); process(OperandA,OperandB) begin if OperandB = "0000" then i<=0; else for i in 3 to 0 loop -- where n is number of bits in N R <= R(2 downto 0) & '0'; -- left-shift R by 1 bit R(0) <= OperandA(i); -- set the least-significant bit of R equal to bit i of the numerator if R >= OperandB then -- Sub1(R,OperandB,diff,borr,err); R <= diff; Q(i) <= '1'; end if; end loop; end if; end process; END behavioral; --Someone please help me. Am a newbie and unable to figure it out on my own. Thanks AI: Besides the null range so ably pointed out by Brian and Eugene, your algorithm implementation won't work because it's depending on signal update values to occur in the same delta cycle. No signal is updated during the execution of a process, a new signal value assigned is available in subsequent simulation cycles, unlike variable assignment which takes effect immediately. There are two ways to address this, introduce delay between each time a signal is assigned and when it is next used or use variables. (There's actually a third way, evaluate before assign, but VHDL is specifically designed so you don't have to. Subsequent simulation cycles (Delta cycles) allow emulation of parallelism and are distinguished by not being preceded by the advancement of simulation time.) 
This demonstrates how to use variables: -- logic taken from https://en.wikipedia.org/wiki/division_algorithm library ieee; use ieee.std_logic_1164.all; use ieee.numeric_std.all; entity div is port( operanda: in std_logic_vector(3 downto 0); operandb: in std_logic_vector(3 downto 0); errorsig: out std_logic := '0'; result_low: out std_logic_vector(3 downto 0); result_high: out std_logic_vector(3 downto 0) ); end div; architecture foo of div is begin UNLABELED: process(operanda,operandb) variable quotient: unsigned (3 downto 0); variable remainder: unsigned (3 downto 0); begin -- if D == 0 then error(DivisionByZeroException) end -- Q := 0 -- initialize quotient and remainder to zero -- R := 0 -- for i = n-1...0 do -- where n is number of bits in N -- R := R << 1 -- left-shift R by 1 bit -- R(0) := N(i) -- set the least-significant bit of R equal to bit i of the numerator -- if R >= D then -- R := R - D -- Q(i) := 1 -- end -- end -- We errorsig <= '0'; -- allows successive operations if operandb = "0000" then -- i<= 0; assert operandb /= "0000" report "Division by Zero Exception" severity ERROR; errorsig <= '1'; else quotient := (others => '0'); -- "0000" remainder := (others => '0'); for i in 3 downto 0 loop remainder := remainder (2 downto 0) & '0'; -- r << 1 remainder(0) := operanda(i); -- operanda is numerator if remainder >= unsigned(operandb) then -- operandb denominator remainder := remainder - unsigned(operandb); quotient(i) := '1'; end if; end loop; result_high <= std_logic_vector(quotient); -- for error keeps result_low <= std_logic_vector(remainder); -- last value (invalid) end if; end process; end architecture foo; library ieee; use ieee.std_logic_1164.all; entity div_tb is end entity; architecture foo of div_tb is signal operanda: std_logic_vector (3 downto 0) := (others => '0'); signal operandb: std_logic_vector (3 downto 0) := (others => '1'); signal errorsig: std_logic; signal result_low: std_logic_vector (3 downto 0); -- remainder signal result_high: std_logic_vector (3 downto 0); -- quotient begin DUT: entity work.div port map ( operanda => operanda, operandb => operandb, errorsig => errorsig, result_low => result_low, result_high => result_high ); STIMULUS: process begin operanda <= "1000"; -- 8 operandb <= "0010"; -- 2 wait for 20 ns; operandb <= "0100"; -- 4 wait for 20 ns; operandb <= "1000"; -- 8 wait for 20 ns; operanda <= "1111"; -- 15 operandb <= "0011"; -- 3 wait for 20 ns; operandb <= (others => '0'); wait for 20 ns; operanda <= "1101"; -- 13 operandb <= "0111"; -- 7 wait for 20 ns; wait; end process; end architecture; And when run even works: (clickable) Using a subroutine doing subtraction instead of an operator We can look at the two involved functions in package numeric_std: function "-" (L, R: UNSIGNED) return UNSIGNED is constant SIZE: NATURAL := MAX(L'LENGTH, R'LENGTH); variable L01 : UNSIGNED(SIZE-1 downto 0); variable R01 : UNSIGNED(SIZE-1 downto 0); begin if ((L'LENGTH < 1) or (R'LENGTH < 1)) then return NAU; end if; L01 := TO_01(RESIZE(L, SIZE), 'X'); if (L01(L01'LEFT)='X') then return L01; end if; R01 := TO_01(RESIZE(R, SIZE), 'X'); if (R01(R01'LEFT)='X') then return R01; end if; return ADD_UNSIGNED(L01, not(R01), '1'); end "-"; function ADD_UNSIGNED (L, R: UNSIGNED; C: STD_LOGIC) return UNSIGNED is constant L_LEFT: INTEGER := L'LENGTH-1; alias XL: UNSIGNED(L_LEFT downto 0) is L; alias XR: UNSIGNED(L_LEFT downto 0) is R; variable RESULT: UNSIGNED(L_LEFT downto 0); variable CBIT: STD_LOGIC := C; begin for I in 0 to L_LEFT loop RESULT(I) := CBIT xor XL(I) 
xor XR(I); CBIT := (CBIT and XL(I)) or (CBIT and XR(I)) or (XL(I) and XR(I)); end loop; return RESULT; end ADD_UNSIGNED; First we're going to call the function sub. We don't need resizing both operands are the same size. We do need copies of the parameters, and we can consolidate the bits we need: architecture foo of div is function sub (L, R: unsigned) return unsigned is variable L01: unsigned(L'LENGTH - 1 downto 0); variable R01: unsigned(R'LENGTH - 1 downto 0); variable CBIT: std_logic := '1'; -- carry in '1' variable RESULT: unsigned(L01'RANGE); begin L01 := TO_01(L,'X'); R01 := not TO_01(R,'X'); for i in 0 to integer(L01'LENGTH) - 1 loop RESULT(i) := CBIT xor L01(i) xor R01(i); CBIT := (CBIT and L01(i)) or (CBIT and R01(i)) or (L01(i) and R01(i)); end loop; return RESULT; end function; (And we did this as a block declarative item in the architecture declaration). Then switch to the sub function: -- remainder := remainder - unsigned(operandb); remainder := sub(remainder, unsigned(operandb)); And all without using wait statements or adding operators: (clickable) And this gives the same answer as using the "-" operator. (Note the radix of OperandA has been changed to decimal). You could note there's a bit more optimization that can be done with a local function sub. We could get rid of the numeric_std package use clause and convert anything that's unsigned to std_logic_vector. You'd need a TO_01 function (or for loops to perform the same thing) for std_logic_vector. You could simply the sub function a bit more too. (Hint: it implies removing functionality, implied by the and_table, or_table and xor_table in the package std_logic_1164 body. Do we really need TO_01? The purpose in package numeric_std is to represent binary numbers in type unsigned.) In other words you could write your own subtract function. If you turn in something with a similar function you're going to be required to explain how it works.
H: Periodically reset microcontroller I have a tiny embedded system built around a Holtek HT48F06 microcontroller to monitor RFID tags (RDM630). Unfortunately it only detects the arrival, not the removal, of a tag. As a workaround, I want to reset the microcontroller periodically (say once per second), since that triggers a new attempt at detection (or does that seem like a bad idea?). Using a 555, I found some suitable values (22µF, 4+100kΩ) for a timer to trigger the reset. However, in order to delay the power-on reset, the low-active RESET pin of the microcontroller is connected to an RC net like the following: simulate this circuit – Schematic created using CircuitLab So my question is, what would be a good way to connect a timer to the reset pin, preferably without making any changes to the existing circuit (I am not too keen on fiddling with the tiny SMD PCB). AI: In the standard astable configuration the 555's output duty cycle cannot go below 50% (the output is high for at least half of each period), so you want to use a very high duty cycle (close to 100%) such that the low period is very short - that brief low pulse is what will reset the microcontroller. So that you don't interfere with the existing circuit, you want to make use of an open-drain buffer, like the 74LS07. The 100K resistor R1 will act as a pullup. The circuit will not interfere with the power-up reset since the output of the buffer will be high-impedance when it is a logic 1.
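For reference, the standard 555 astable timing is \$t_{high} = 0.693\,(R_A+R_B)\,C\$ and \$t_{low} = 0.693\,R_B\,C\$ (\$R_A\$ and \$R_B\$ here are the two astable timing resistors, not the reset pull-up R1). Reading the asker's "22µF, 4+100kΩ" as \$C = 22\ \mu F\$, \$R_A = 100\ k\Omega\$ and \$R_B = 4\ k\Omega\$ - an assumption - gives \$t_{high} \approx 0.693 \times 104\ k\Omega \times 22\ \mu F \approx 1.6\ s\$ high and \$t_{low} \approx 0.693 \times 4\ k\Omega \times 22\ \mu F \approx 61\ ms\$ low: roughly the once-per-second reset with a short active-low pulse described above.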
H: Why is RAM not put on the CPU chip? Modern CPUs are very fast compared to all things external, including memory (RAM). It is understandable, since CPU clock frequency has reached a point where it takes several clock ticks for an electric signal simply to run from from the CPU through the bus to RAM chips and back. It also complicates life on many levels: multi-level cache hierarchies are built to deliver data closer to the CPU, which in turn require complex synchronization logic in the chip. Programs must be written in a cache-friendly way to avoid wait cycles while data is fetched. Many of these problems could be avoided if a significant amount of RAM was located directly on the CPU chip. It doesn't have to an exclusive arrangement: maybe put 1-4 GB on the chip, depending on its class and allow additional memory installed separately. I'm sure there are good reasons Intel, AMD and the like are not doing this. What are these reasons? Is it that there's no room to spare on the chip? AI: Intel's Haswell (or at least those products that incorporate the Iris Pro 5200 GPU) and IBM's POWER7 and POWER8 all include embedded DRAM, "eDRAM". One important issue that has led eDRAM not to be common until recently is that the DRAM fabrication process is not inherently compatible with logic processes, so that extra steps must be included (which increase cost and decrease yield) when eDRAM is desired. So, there must be a compelling reason for wanting to incorporate it in order to offset this economic disadvantage. Alternatively, DRAM can be placed on a separate die that is manufactured independently of, but then integrated onto the same package as, the CPU. This provides most of the benefits of locality without the difficulties of manufacturing the two in a truly integrated way. Another problem is that DRAM is not like SRAM in that it does not store its contents indefinitely while power is applied, and reading it also destroys the stored data, which must be written back afterwards. Hence, it has to be refreshed periodically and after every read. And, because a DRAM cell is based on a capacitor, charging or discharging it sufficiently that leakage will not corrupt its value before the next refresh takes some finite amount of time. This charging time is not required with SRAM, which is just a latch; consequently it can be clocked at the same rate as the CPU, whereas DRAM is limited to about 1 GHz while maintaining reasonable power consumption. This causes DRAM to have a higher inherent latency than SRAM, which makes it not worthwhile to use for all but the very largest caches, where the reduced miss rate will pay off. (Haswell and POWER8 are roughly contemporaneous and both incorporate up to 128MB of eDRAM, which is used as an L4 cache.) Also, as far as latency is concerned, a large part of the difficulty is the physical distance signals must travel. Light can only travel 10 cm in the clock period of a 3 GHz CPU. Of course, signals do not travel in straight lines across the die and nor do they propagate at anything close to the speed of light due to the need for buffering and fan-out, which incur propagation delays. So, the maximum distance a memory can be away from a CPU in order to maintain 1 clock cycle of latency is a few centimetres at most, limiting the amount of memory that can be accommodated in the available area. 
Intel's Nehalem processor actually reduced the capacity of the L2 cache versus Penryn partly to improve its latency, which led to higher performance.* If we do not care so much about latency, then there is no reason to put the memory on-package, rather than further away where it is more convenient. It should also be noted that the cache hit rate is very high for most workloads: well above 90% in almost all practical cases, and not uncommonly even above 99%. So, the benefit of including larger memories on-die is inherently limited to reducing the impact of this few percent of misses. Processors intended for the enterprise server market (such as POWER) typically have enormous caches and can profitably include eDRAM because it is useful to accommodate the large working sets of many enterprise workloads. Haswell has it to support the GPU, because textures are large and cannot be accommodated in cache. These are the use cases for eDRAM today, not typical desktop or HPC workloads, which are very well served by the typical cache hierarchies. To address some issues raised in comments: These eDRAM caches cannot be used in place of main memory because they are designed as L4 victim caches. This means that they are volatile and effectively content-addressable, so that data stored in them is not treated as residing in any specific location, and may be discarded at any time. These properties are difficult to reconcile with the requirement of RAM to be direct-mapped and persistent, but to change them would make the caches useless for their intended purpose. It is of course possible to embed memories of a more conventional design, as it is done in microcontrollers, but this is not justifiable for systems with large memories since low latency is not as beneficial in main memory as it is in a cache, so enlarging or adding a cache is a more worthwhile proposition. As to the possibility of very large caches with capacity on the order of gigabytes, a cache is only required to be at most the size of the working set for the application. HPC applications may deal with terabyte datasets, but they have good temporal and spatial locality, and so their working sets typically are not very large. Applications with large working sets are e.g. databases and ERP software, but there is only a limited market for processors optimized for this sort of workload. Unless the software truly needs it, adding more cache provides very rapidly diminishing returns. Recently we have seen processors gain prefetch instructions, so caches are able to be used more efficiently: one can use these instructions to avoid misses caused by the unpredictability of memory access patterns, rather than the absolute size of the working set, which in most cases is still relatively small. *The improvement in latency was not due only to the smaller physical size of the cache, but also because the associativity was reduced. There were significant changes to the entire cache hierarchy in Nehalem for several different reasons, not all of which were focused on improving performance. So, while this suffices as an example, it is not a complete account.
H: How use DMM thermocouple probe hands-free to monitor heatsink temperature? I suppose this is more of a lab practice question than electronics per se, but I wonder if there's a standard-ish way to mount the K-type thermocouple that comes with my Fluke DMM so I can monitor the temperature of a power transistor and/or heatsink over time, hands-free? I'm certainly open to a special-purpose thermocouple or other sensor that could be screwed or perhaps preferably clamped in place so I could follow the temperature changes without having to poke the little temperature probe tip in there each time I want to take a reading and wait the minute or so it takes to stabilize on the right temperature. Is there something the pros do for this sort of thing? AI: Best way I think is thermal compound and glue it to the device, then just buy another thermal probe :) Short of that you can try and tape or clamp the flexible wire, I've had luck using a binder clip or some locking pliers like vice grips. Then curve it so the bend in the wire provides natural holding force onto the heat sink. That way what you're using to hold the probe doesn't affect the measurement. But really glue is the way to go. For more pro use you can use thermal loggers that use slightly cheaper thermal couples, there are some nice ones that help you measure air flow at the same time. That said for another project I just built a little wind tunnel and started using the bent wire approach so it's up to you how accurate you need to be.
H: Use SCR dimming info to feed 1-10VDC line Hello, I am modifying an electronic lighting fixture to use LED instead. I have everything I need already: 80W LED, power supply, etc. There are just two issues I cannot resolve. The original fixture uses an SCR dimming chip (controlled by electronics/computer) to power and dim a 24VAC halogen bulb at 250 watts. The LED driver accepts a DC voltage between 1-12VDC. I'm trying to somehow get that dimming information from the SCR, its inputs, or its outputs and convert it to 1-12VDC depending on the dimming amount fed into the SCR. The second issue may be easier. Since I'm powering the LED separately, it needs to know when to turn on and off. I figure some type of circuit which can detect ANY voltage on the 24V halogen bulb leads and trigger the LED power supply to turn on. I know relays do this, but from my understanding relays need a constant control voltage. The schematic for the dimmer circuit is here (page 14, labeled dimmer). I do see +5V and "MOC_EN" but I'm not sure what that means or if it can be used: https://www.highend.com/pub/products/automated_luminaires/Trackspot/Schematic/Tspotr15.pdf AI: simulate this circuit – Schematic created using CircuitLab A problem (of sorts) is that the AC supply is notionally isolated from the logic supply - see diagram below - BUT if it is the same supply as on page 14 this is not a problem. Two main choices for getting the dimming information are: Use the signal between (+5V - MOC_EN) to provide the dimming information remotely. Put a load on the dimmer output and derive the dimming signal from there. IC33 is an optocoupler. Use an identical or similar optocoupler with its input connected from +5V to MOC_EN via a suitable resistor. I cannot see where MOC_EN is derived from but it will probably allow a second opto to be used - one with more sensitivity and a larger resistor would help minimise the extra load. The dimmer shows a 24 VAC signal on page 14, used to supply power via a bridge rectifier to the logic circuitry. If this is the same as the one used for the lamp in the circuit below then this output can be used as a dimming voltage source. Using the AC lamp output for the dimming signal source: Connect a single diode (Dl) (or a bridge rectifier for less ripple) from the lamp output to a resistor (Rl) to ground. Add a "suitable" capacitor (Cl) across Rl. Rl should be as large as still works OK, i.e. a resistor presenting the full lamp load would replicate the lamp voltage but dissipate as much power as the lamp. Probably a 100 ohm to 1k resistor will work. 1k to 10k may. 10k is best if it works. Size wattage to suit. Cl filters this to a DC level. Using a bridge rectifier (see page 14 BR1 as an example) will provide a smoother DC signal. Extra filtering may well be needed - this can be discussed if this answer looks useful. System gain control or an external potentiometer (or both) will control relative brightness. On/off control can be provided by detecting a minimum DC level for LED turn-on (as you suggested). Try this - see text below and in comments. However - an "easy" possible solution (depends on LED power voltage) would be to power the LEDs directly from the existing output using the existing AC rectified with a bridge rectifier. I assume the LEDs use 12VDC for main powering - maybe not. If V_LED_power is >> 30V then the following does not work. LEDs would either need to accept up to 24 VDC or the system arranged to "not overdo it". A one-resistor addition to the optocoupler drive would allow lower maximum output voltage levels.
As the original unit can drive 250W at 24 VAC, driving a 12VDC 80W load would work.
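To put rough numbers on the Rl sizing above (assumed values, using the 24 VAC full-output figure): a 1 kΩ load resistor across the rectified lamp output dissipates roughly \$V^2/R = 24^2/1000 \approx 0.6\ W\$, so it should be a 1-2 W part, while a 10 kΩ resistor dissipates only about 60 mW and an ordinary 0.25 W resistor is fine - which is why the largest value that still gives a usable dimming signal is preferred.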
H: Optimizing a boolean expression to (NAND,AND,NOT) system I'm trying to simplify the expression given by the following Karnaugh map to an expression that is using only NANDs, NOTs and ANDs- the less gates (of any kind), the better. I know how to optimize it to an (OR,AND,NOT) system, but that doesn't seem to help. I also know how to create an OR gate using only NANDs and NOTs, but that creates a really complicated expression and I've been hinted there's a simple one. I tried using Wolfram Alpha, but no (AND,NAND,NOT) system exist. The closest one is a (NAND, NOT) system and that expression still looks complicated. I'm allowed 4 inputs per NAND and 2 inputs per AND. Thanks in advance! AI: Complement Law says \$\overline{\overline{X}} = X\$. $$\overline A\ \overline B\ \overline C + \overline A\ B\ C + A\ B\ D + A\ \overline B\ \overline D$$ Take Double Complement. $$\overline{\overline{\overline A\ \overline B\ \overline C + \overline A\ B\ C + A\ B\ D + A\ \overline B\ \overline D}}$$ Use DeMorgan's to remove lower complement. $$\overline{\overline{\overline A\ \overline B\ \overline C} ∙ \overline{\overline A\ B\ C} ∙ \overline{A\ B\ D} ∙ \overline{A\ \overline B\ \overline D}}$$ 4 NOTs, 4 3-input NANDs, 1 4-input NAND.
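If you want to double-check the equivalence by brute force, a few lines of C enumerating all 16 input combinations will do it (this is just a verification aid, not part of the required gate count):

#include <stdio.h>

/* Compare the original sum-of-products with the NAND-of-NANDs form. */
int main(void) {
    for (int v = 0; v < 16; v++) {
        int A = (v >> 3) & 1, B = (v >> 2) & 1, C = (v >> 1) & 1, D = v & 1;
        int sop  = (!A && !B && !C) || (!A && B && C) || (A && B && D) || (A && !B && !D);
        int nand = !( !(!A && !B && !C) && !(!A && B && C) && !(A && B && D) && !(A && !B && !D) );
        if (sop != nand) { printf("Mismatch at ABCD=%d%d%d%d\n", A, B, C, D); return 1; }
    }
    printf("Both forms agree for all 16 input combinations.\n");
    return 0;
}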
H: Operating flow switch on a DC voltage I plan to buy a low-flow switch for my heater protection from McMaster (2371K4, 0.1GPM set point). The catalogue says that the switch operates on a 120/240VAC supply. Would this switch be able to operate on a 14V DC supply and a max 40mA current? Regards, DPK AI: Maybe. Perhaps even probably. This is really a question for McMaster, as they hide the real manufacturer and the datasheet from their customers. It's fine for simple stuff but for more complex products it becomes a crap shoot. You want to know the minimum voltage and current the switch contacts are rated to switch reliably.
H: calculate Amp Hours when voltage is given I'm trying to wrap my head around how exactly amp hours work. I have been trying to figure out this question for a while now, but can't seem to find the information that I need. This is the circuit, and the information given is that the battery has 20AH capacity and a constant voltage of 11.5 V. The question is to calculate how long it would take the battery to fully charge from being completely discharged. What exactly is an amp hour, and what is the formula to calculate it? I have noticed on many web pages that this is calculated when the current of the circuit is given, but nothing about what to do when the voltage is given. Edit: I know I can calculate the current in the circuit using Ohm's law, but I don't know how to work out how long it will take to charge to full. AI: 12 V - 11.5 V = 0.5 V. I = V/R, so I = 0.5/1 = 0.5 A charge current. Battery = 20 Ah (note: amp-hours, not A/hrs!). 20 Ah / 0.5 A = 40 hours to charge. Amp hours = 20 amps for 1 hour, or 10 amps for 2 hours, or 0.5 amps for 40 hours. It is a measure of how much charge the battery can hold and is useful for calculating how long it will last (or, as here, how long it takes to charge).
H: does changing UART stop bit number configuration (receiver side) affect message correctness I'm using the USART on my board to display messages on a desktop PC terminal using Realterm. I noticed that when I change the stop bit number in Realterm, it does not affect the correctness of the message, contrary to the baud rate or bit number. Why does the stop bit number not affect correctness? AI: The stop bit has a logical high level; it's the last bit that gets transferred when you transmit a byte over RS232. When no transfer is in progress the line is idle, and idle is signaled as a logical high level as well. If you increase the number of stop bits on the sender side of an RS232 connection, the receiver will interpret the first stop bit as a stop bit and all following stop bits as idle time on the line. Effectively, for each additional stop bit you just add a small delay between the bytes transferred over the link. The same reasoning applies at the receiving end: the receiver just checks that the line is high for the configured stop-bit time before looking for the next start bit, and since the line sits idle (high) between characters anyway, asking Realterm for one or two stop bits makes no difference to the decoded data.
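For example, at 9600 baud each bit lasts about 104 µs, so an 8N1 frame (1 start + 8 data + 1 stop = 10 bits) takes roughly 1.04 ms and an 8N2 frame about 1.15 ms; the second stop bit is simply ~104 µs of guaranteed high time that a receiver set for one stop bit treats as idle between bytes.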
H: What are the different grounds on a PCB? I am working with boost converters recently. I have problem which you can find here. After going through the datasheet several times I encountered the special focus on the ground plane. It talks about different ground nodes. I have no idea what it means. I would be grateful if someone can point out what is the best practice as well. Any suggestions would be appreciated. My revised layout: Circuit diagram: C1 ceramic 10% 2.2uf C2 ceramic 10% 4.7uf L1 1uh 1206 package AI: It's suggesting that there is a "star-point" that "power ground" and "feedback/control ground" use as their points of reference to 0V. Anything that should/must connect to power ground i.e. input decoupling caps, output decoupling caps and main ground on the chip should be separate from ground on the feedback potential divider (not used on your design). These two separate grounds should make just one connection to each other and this is called the "star-point". This ensures that load currents and input currents (that might create millivolts of volt drop on their track) do not influence the voltage that the feedback resistor network measures. If these currents did influence the feedback resistors, then you can expect a noisy output and possible instability. The "star-point" is, by default, right at the main 0V connection for the chip. For buck regulators (using a fly-back diode), this diode should be naturally on the power ground and NOT connected to the control ground. Several other pins may need to use the control ground and these include, soft-start input capacitors, oscillator resistor and capacitors and incidental digital inputs (such as external switching clock if used).
H: Digital Input Clamp Circuit Protection To be able to connect a peripheral to a digital input, I'm designing circuit protection to avoid any possible failure due to over/under-voltage issues. So, I will add a clamp protection circuit. The requirements I need to accomplish are: Power source (Vcc): 3V3 Max voltage: 3V9 (3V3 + 0V6) Min voltage: -0V3 (GND - 0V3) Max logic input current: +/- 300nA The circuit is next: R1000 is added to limit the current to 10mA. R1001 is added to limit the input current below 300nA. My doubts are: Might the circuit be modified in case it needs to support up to 30V input? Which would be the best option for both diodes (forward voltage, breakdown voltage, forward current, etc.)? As digital inputs may need to produce a rising interrupt, would it be better to add a Schottky instead of a zener at D1000? AI: This is all you need: simulate this circuit – Schematic created using CircuitLab This also eliminates the problem that a high voltage at the input will lift up your local VCC. When the input is < -0.6 V, D1 will conduct and limit the GPIO to -0.6 V; R2 limits the current, and your GPIO input will be able to handle this (it also has input protection diodes!). When the input is > -0.6 V but < 3.9 V, D1 does nothing; the GPIO is also happy. When the input is > 3.9 V, D1 will conduct, R2 will limit any current, and the GPIO input will be happy. Someone complained this would not work but didn't explain why, so I figured it out myself: apparently I overlooked that 3.9 V zener diodes leak a lot in reverse, so I lowered R1 to 10 kohm. If that still doesn't fix it then you could replace the zener diode with 4 to 5 standard diodes in series, see the 2nd schematic.
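As a rough check of the current levels involved (assuming the input-side series resistor is the 10 kΩ value mentioned above - an assumption about the schematic, which is not shown here): with a 30 V fault on the input the clamp holds its node at about 3.9 V, so the resistor passes \$(30 - 3.9)/10\ k\Omega \approx 2.6\ mA\$ and dissipates about 68 mW, while the zener dissipates roughly \$2.6\ mA \times 3.9\ V \approx 10\ mW\$ - comfortably within ordinary part ratings, which is why a simple series resistor plus clamp copes with the 30 V case in the question.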
H: How can I evaluate a potentiometer susceptibility to change? I have a usage case where I would like a potentiometer for controlling the charge going to a lead-acid battery via a IC. It would be susceptible to some (although not extreme) vibration, such people carrying it around and various bumps. It would for most part be a set-and-forget thing hidden in a case that nobody will think of, unless the battery type was changed. I don't want the resistance to go shifting over time, it should ideally remain reasonably stable within a few percent. How can I evaluate if a potentiometer is stable enough to use for this purpose? AI: Don't worry about it. If trimpots shifted around a few percent with minor vibration they'd never be used. Just about any trimpot from a reputable maker (and quite a few from disreputable ones) will be fine. Minimize the range of the pot (don't require it to be set to 0.1%) and try to use it as a voltage divider rather than a rheostat. If you must use it as a rheostat, keep the setting away from the very ends of the range and try to use a relatively high value (avoid 10 ohm cermet for example, if you can) so that CRV (contact resistance variation) is not much of a factor. But a few percent stability is not a high bar.
H: DSP Cutoff vs Sampling frequency question on low pass filter Is it unrealistic to have a 125 MHz sampling rate with a cutoff frequency of 5 Hz? I downloaded a few filter programs and used their methods and it seems that even with thousands of taps it still looks pretty bad. Is there a better way to implement this digitally or should I just stick with an analog implementation? AI: The basic formula for a simple filter relies on the ratio of time between samples and the time constant of the filter i.e. T\$_S\$/CR. At 125MHz Ts is 0.000000008 and CR = 0.03 for a 5Hz cut-off. Quite a few digital systems are going to produce errors unless you are working with decent floating point numbers. This is a simple digital IIR filter with a nod to the CR time of an analogue low pass RC filter: - It's dead easy to implement in a spreadsheet and you can apply integer math (or whatever you are using) to see how it performs with the vast difference in sampling frequency and target cut-off frequency.
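To make the numbers concrete, here is a minimal single-pole IIR low-pass in C showing just how small that coefficient becomes at 125 MHz (the variable names and the DC-step test are my own illustration, not taken from any particular filter tool):

#include <stdio.h>

int main(void) {
    const double PI = 3.14159265358979;
    const double fs = 125e6;                 /* 125 MHz sample rate              */
    const double fc = 5.0;                   /* 5 Hz cut-off                     */
    const double Ts = 1.0 / fs;              /* 8 ns between samples             */
    const double RC = 1.0 / (2.0 * PI * fc); /* ~0.0318 s time constant          */
    const double alpha = Ts / (RC + Ts);     /* ~2.5e-7 -- a very small number   */

    printf("alpha = %.3e\n", alpha);

    /* Feed a 1.0 DC step through y[n] = y[n-1] + alpha*(x[n] - y[n-1]). */
    double y = 0.0;
    for (long n = 0; n < 10000000L; n++) {
        y += alpha * (1.0 - y);              /* needs decent floating-point      */
    }
    printf("output after 10 million samples (80 ms of signal): %f\n", y);
    return 0;
}

With single-precision or fixed-point arithmetic the tiny per-sample increments get lost against the accumulator's resolution, which is exactly the precision problem warned about above.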
H: Input Characteristics of MOSFET in triode region? When a MOSFET is operating in saturation region, i.e. \$V_{GS} > V_{th}\$ and \$V_{DS} ≥ ( V_{GS} – V_{th} )\$, the drain current equation clearly indicates the parabolic relation between the drain current and input voltage: \[ I_{Dsat}=Kn' \cdot \dfrac{W}{2L} \cdot (V_{GS}−V_{th})^2 \] hence \$I_{Dsat}\$ is directly proportional to the square of \$V_{GS}\$. However, when operated in triode region, the drain current equation is given by: \[ I_D=Kn' \cdot \dfrac{W}{L} \cdot \left[(V_{GS}−V_{th})V_{DS} - \dfrac {V_{DS}^2} {2} \right] \] Am I right in saying that, for a fixed \$V_{DS}\$, the drain current is linearly dependent on \$V_{GS}\$? Although the equation indicates a linear relationship between drain current and input voltage for a fixed \$V_{DS}\$ such that the device is in linear region, the results of simulation are quite different. Instead of a straight line, the simulation results in a parabolic curve that saturates for some value of \$V_{GS}\$ (even if there is no resistor connected between the drain and \$V_{DD}\$). Edit Here are a few questions: What I understand is that, if the MOSFET is in saturation the ID versus VGS curve would be a parabola as shown: Changing VDS has no effect on this curve (neglecting channel length modulation) right? But if ID versus VGS is plotted for the MOSFET in triode region, it would be linear(For a fixed VDS such that the device is in triode region) as evident from the equation: \[ I_D=Kn' \cdot \dfrac{W}{L} \cdot \left[(V_{GS}−V_{th})V_{DS} - \dfrac {V_{DS}^2} {2} \right] \] So in the above equation, if VDS is fixed, ID would vary linearly with VGS. Why isn't this characteristic of MOSFET exploited? Why do we settle for the "nearly-linear" ID versus VDS relation when we can have a perfectly linear variation of ID with VGS? AI: What do you mean by "input characteristics"? Textbooks and datasheets describe the behavior of MOSFETs using two graphs: Output characteristics: \$I_D\$ versus \$V_{DS}\$ with \$V_{GS}\$ as parameter. Transfer characteristic: \$I_{D}\$ versus \$V_{GS}\$ at a given fixed \$V_{DS}\$ value (this latter is chosen so that the MOSFET is in saturation region). There is no "input characteristic" (such as the \$I_B\$ versus \$V_{BE}\$ curve of a BJT) because the other input quantity besides \$V_{GS}\$, namely \$I_G\$, is virtually zero at DC (and all these curves assume DC operations). Therefore it wouldn't make much sense to plot \$I_G\$ versus \$V_{GS}\$, unless you wanted to analyze leakage gate current, but I assume you are not interested in that. So it is clear (also by a comment of yours) that by input characteristic you mean the transfer characteristic (TC). Note that the TC is plotted with a fixed drain-source voltage that guarantees that the MOSFET is in saturation for each \$V_{GS}\$ value on the horizontal axis. This is done because the TC is useful when the MOSFET is in saturation, i.e. when the output current depends solely on the input voltage (not considering "Early effect"), for example when you want to use the MOSFET as an amplifier and you need to draw a load line to design its bias circuit. If you plot the TC for different values of \$V_{DS}\$ you get a family of TC curves. For example consider this circuit simulation with LTspice: Plotting the TC for different \$V_{DS}\$ values you get: As you can see, the more you increase \$V_{DS}\$ the more the curve resembles a parabola, as you would expect for the TC in saturation. 
Notice that this part shows a threshold voltage \$V_{th} \approx 4V\$. Let's consider what happens if \$V_{DS}\$ is not big enough to drive the MOSFET in saturation for every \$V_{GS}\$ value, like in the lowest blue curve (Note: to present a more revealing plot I selected the curve corresponding to \$V_{DS} = 2V\$, whereas the lowest blue curve above corresponds to \$V_{DS} = 1V\$): As you can see, in saturation region you get a quadratic curve, whereas in triode region you get a linear curve. Everything as expected, except that real devices don't have an abrupt change between the two regions and that the linearity of the triode region is not perfect because of the device not being ideal (SPICE models usually take into account these effects). If you see in your simulation an abrupt departure from this behavior it could be that you tried plotting the curves outside the range of the voltages/currents admissible for your device. Notice that I limited the first plot to max 14A/20V which are the absolute maximum ratings for the device I chose. If you don't keep this in mind you will destroy the device (in real life) or get odd results (in simulations). EDIT (in response to a comment and a question edit) You ask why the "perfectly" linear curve for \$I_D\$ versus \$V_{GS}\$ in ohmic region is not exploited. Here is some insight: Why do you need a linear characteristic between input (\$V_{GS}\$) and output (\$I_D\$)? Usually to use the device as a (linear) amplifier. But what are the conditions that allows to have that linearity? \$V_{DS}\$ must be held constant. Therefore to make an amplifier this way you have to insert a load in the output circuit and still keep \$V_{DS}\$ constant. You can understand that such a load cannot be a simple resistor (which is the simplest kind of load). Therefore you need a much more complex circuit (with other active devices). On the other side, you can use the same MOSFET biased in saturation and get a decent linear amplifier: even if the behavior of the device is not intrinsically linear, but quadratic, there are linearization techniques (e.g. employ simple feedback schemes, like a resistor in series with the source terminal) that allow the overall amplifier to become more linear.
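To make the region boundary concrete, here is the idealised square-law model from the question written as a small C function (placeholder parameter values; no channel-length modulation or sub-threshold behaviour):

#include <stdio.h>

/* Idealised square-law NMOS model: cut-off, triode or saturation. */
double drain_current(double vgs, double vds, double kn, double w_over_l, double vth) {
    double vov = vgs - vth;                       /* overdrive voltage */
    if (vov <= 0.0) return 0.0;                   /* cut-off           */
    if (vds >= vov)                               /* saturation        */
        return 0.5 * kn * w_over_l * vov * vov;
    return kn * w_over_l * (vov * vds - 0.5 * vds * vds);  /* triode  */
}

int main(void) {
    const double kn = 100e-6, w_over_l = 10.0, vth = 1.0;  /* placeholder values */
    for (int i = 0; i <= 10; i++) {
        double vgs = 0.5 * i;
        printf("Vgs=%.1f V  Id(Vds=0.2V)=%.3e A  Id(Vds=5V)=%.3e A\n",
               vgs, drain_current(vgs, 0.2, kn, w_over_l, vth),
               drain_current(vgs, 5.0, kn, w_over_l, vth));
    }
    return 0;
}

Evaluating it at a small fixed \$V_{DS}\$ (0.2 V) shows the nearly linear triode behaviour versus \$V_{GS}\$, while at \$V_{DS} = 5\ V\$ the quadratic saturation law applies over the whole sweep.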
H: Resistor values for a passive audio mixer? I'm trying to come up with a schematic for building a passive audio mixer for an aviation headset, but I'm having some questions about resistor values. Figuring out the general circuit is easy enough, but I'd like to bounce this off people who know a lot more than I do so I can make sure I'm doing this right. The idea here is to combine the audio from the aircraft intercom system with the audio from a MP3 player and send it to the low impedance headset (10 ohms). I'm not that worried about sound quality because the headset itself has a small frequency range (100Hz - 5.5kHz). Here's what I have for a schematic so far: My question is, what should I be using for resistors here? Would 10 ohm resistors be sufficient for this? I feel like this is pretty basic stuff, but my Google Fu is failing to help me find the answer I need. AI: I'll assume this is not actually for use in an aircraft- if you are actually mucking with stuff in the cockpit, see your DAR for official advice that conforms to FAA etc. airworthiness requirements. If you use 10 ohm resistors and the impedance of the phones is really 10 ohms (often that's pretty variable) your av audio power will be reduced by about 12dB, which is quite a bit. Also the headphone impedance your MP3 player is expecting is typically more like 35 ohms. I would suggest trying more like 27 ohms on each MP3 output and 5.1 ohms on the av output. That will reduce the av power by less than 6dB. The MP3 sound may not be loud enough, but it's probably not really designed to drive a 10 ohm headset.
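As a rough check of those attenuation figures (assuming near-zero source output impedances and the nominal 10 Ω phones): with 10 Ω in every branch, the av signal sees the phones in parallel with the two MP3 branches, about 10 Ω || 10 Ω || 10 Ω ≈ 3.3 Ω, so only \$3.3/(10+3.3) \approx 0.25\$ of the voltage reaches the phones, i.e. roughly -12 dB. With 5.1 Ω on the av branch and 27 Ω on each MP3 branch the load becomes 10 || 27 || 27 ≈ 5.7 Ω, giving \$5.7/(5.1+5.7) \approx 0.53\$, or about -5.5 dB - the "less than 6dB" figure above.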
H: When using nodal analysis of a circuit involving CCCS, how do you know which currents are entering and which are leaving? I am trying to solve the following circuit: I believe the answer I'm getting for \$i_b\$ is wrong because I put it into LTSpice and I'm getting that \$i_b = -3.63636\$ This is my LTSpice diagram: I found \$i_b = 1\$mA by doing a loop voltage analysis on the left loop; for the voltage drop across the \$200\Omega\$ resistor I assumed that it would be \$i_b + 29i_b\$, which works out to be a nice number and in fact all of the numbers are nice in this case--usually when the numbers are nice, you know you're doing it right. At this point, I'm not sure if I incorrectly modeled this in LTSpice, or if I incorrectly assumed which way the current was flowing. Instead of giving me the answer directly, I would just like to know how to determine whether the current at a node is entering or leaving a branch. AI: Your paper analysis is correct, but your LTspice simulation is incorrect. I get the same (incorrect) result as you if I use a gain of \$+29\$ for the F device (your \$I_1\$). But the gain should be \$-29\$ since \$i_b\$ flows from the negative to positive terminal of \$V_{\text{ib}}\$. Changing the gain gives you the correct result. Circuit: F device attributes: Result: If I change the gain to \$+29\$ the result is: Note that the simulation result is \$v_y = v_{y1} - v_{y2} \approx 98\$V when using a gain of \$+29\$, which is clearly wrong. The two simulations highlight the importance of maintaining consistency in the direction of currents. The problem statement defines \$i_b\$ and \$29i_b\$ as both flowing toward the middle "T" node. LTspice defines \$i_b\$ as flowing away from it since it defines the current through \$V_{\text{ib}}\$ as flowing from positive to negative terminal. That means you also have to define the CCCS \$29i_b\$ as flowing away from the middle "T" node. In the incorrect simulation (with gain of \$+29\$), \$29i_b\$ is still flowing toward the "T" node while \$i_b\$ is flowing away from it. The correct simulation defines them both as flowing away from the "T" node. Alternatively, you could just switch the direction of the "F" device and use a positive current gain -- it would then also be defined as flowing away from the "T" node.
H: Driving a large array of LEDs (~600) A colleague and I had the idea to build Conway's Game of Life in a large 8x8' form factor using a 50x50 grid of RGB LEDs (RGB so that the grid can be re-purposed in the future). The idea is to build this thing in 4x4' sub modules (25x25 LED arrays) that can be driven separately and then arranged in whatever orientation. We were trying to come up with some details as to what we'd need to drive this kind of rig, and was hoping someone could offer some suggestions or steer us away from bad ideas. Some of the big questions that we have right now for building one of these sub modules are: Will some standard 5mm, 20mA RGB LEDs be bright enough to light up a 2"x2" cell with a thin diffusion layer on it? If we connect the LEDs in a grid and drive them one row at a time we would need 25x3 control lines for the RGB anode, and 25 lines for the GNDs. In this case, if an entire row was on the theoretical max current draw would be 20mA x 3 diodes x 25 LEDs = 1.5A. Is this a valid assumption? Would it make sense to use shift register such as the MC74HC595 to drive the individual LEDs and then use another shift register to drive FETs or similar to drive the high current common ground? Or vise versa for common anode LEDs? What kind of micro would we need to drive this? The plan is to do the processing on an laptop and then just send each frame we want to display serially to an arduino or similar to then drive the shift registers. The budget we have in mind is ~$400/ sub unit. Let me know what you guys think about feasibility. AI: I think this is a really great project, but I also think you need to check your arithmetic. A 50 x 50 grid is 2500 LEDs. You need very high brightness to do what you want. An example of what you might use is https://www.superbrightleds.com/moreinfo/through-hole/rl5-rgb-clear-tricolor-led/298/1225/. The problem is that these will run you (with quantity discounts) in the range of $1 apiece. I think you'd be much better off using separate LEDs. In part this is because standard Life displays only use red and green (red for birth, green for survival), so the blue goes unused. It's perfectly possible (and pretty) to use blue to indicate a cell which has just died, but it's unusual. Given your quantities, you should be able to get 2,000 - 10,000 mcd red and green 5 mm LEDs for around $.10 ea, although this will eat up $500. It's still better than $2500, and the LEDs will be brighter. If you mount the LEDs in a piece of 1/4" plywood, you can drill your mounting holes in pairs with a pair-spacing of about 1/4 inch, so I don't think a viewer will notice the offset with a cell spacing of 2". And two sheets of 1/4" plywood and 2 sheets of diffuser plexiglass is going to cost some, too. Plus, I hope you have access to a surplus source of cheap wire - you won't believe how much you'll need. Most of it will be small-gauge wire, so it won't be terribly expensive on a per-foot basis, but it adds up. Also, just as a construction tip, make sure you tie your wire harness to mounting plate with screws at regular intervals. If you don't, when you mount the display vertically, you risk the harness shifting and either shorting out exposed LED pins or even ripping wires loose. You'll want to pick your diffuser characteristics carefully, keeping the density as low as you can while getting an acceptable diffusion. Diffusers come in various degrees diffusivity, so check out the available choices. 
I'd recommend not trying to fill the cell spaces completely with LED illumination. For a 2" x 2" spacing, I'd go for a nominal 1" diameter spot. For a 30 degree LED, that would mean a 2" spacing between the mounting board and the diffuser. Smaller spacing will, of course, produce smaller, brighter spots. Multiplexing at this scale gets a bit iffy. On a 4' x 4' subpanel, if you multiplex between rows you will cut the brightness of each LED by a factor of 25. And, just as a practical matter, I'd recommend sizing your display to 48 x 48 cells. This will allow an even 1" spacing AND the efficient use of octal driver ICs like the 595. On this scale, anything which makes construction easier is a good idea. You have the right idea about doing the multiplexing, but be aware that 74HC595s are only rated for an output current of about 6 mA. The absolute max is 35 mA, and most Arduino projects routinely ignore the data sheet. You can probably get away with it, but if you want to be safe, buffer the column outputs with a transistor per channel. EDIT - Ignacio Vazquez-Abrams has kindly suggested the use of the TPIC6C595 in place of the 74HC595, and I'd recommend it. However, it does have its own tradeoffs, particularly price. The TPIC6C595 in DIP package is available from Digikey for about $1 ea. The 74HC595 in DIP from Jameco will run about $.40. If you go the risky route you can save about $35 for a complete display using 8x8 subregions. Whether or not this is worth it to you is a classic engineering tradeoff. Just thought I'd mention it. END EDIT I'd recommend that you subdivide your 4' x 4' panels into (9) 8x8 regions. This will keep your multiplexing losses down to 1 in 8. Your current calculation is correct, although with only 2 colors it drops to 1 amp. On the other hand, if you go to 8 x 8 subregions, each 4' x 4' panel will need a maximum of 3 amps for the LEDs. This reflects a basic, inescapable truth: if you want more light, you need more power. Also note that you may need to worry about cooling. If you use a 5-volt supply and 8 x 8 subregions, each panel will dissipate about 15 watts, including the current-limiting resistors which have simply been assumed and not discussed. While this may not seem like much, if each panel is enclosed, and then stood on end, 15 watts worth of warm air will collect at the top, and it may get pretty toasty for the upper LEDs. Overall, I think you can do what you want, and within budget. As long as your time is free, of course. Even assuming minimum wage for your time, this is going to go waaay over. FURTHER EDIT - If you need even more brightness, there is no need to multiplex. Simply use an 8-bit shift register for each 8 LEDs, connect them in series, then use each output to drive one LED to 10 mA. With a 1 MHz clock, updating the entire 2500 cells will only take 2.5 msec. At 1 update every second, the apparent brightness of a dark cell will only be increased by about 1/400 of full scale. A 74HC164 only costs about $.25 in hundreds, and the system would need 625, so total cost would be about $175. This would compare very favorably with the cost of a multiplexed system due to the lack of row drivers. Of course, worst-case current draw is now 25 amps, but as I've commented before, if you want more light you need more power.
One further complication of this scheme is the need to provide either repeaters at regular intervals to regenerate the clock, since it cannot propagate accurately over the long signal path which this would produce, or a central driver to send clocks out over a twisted pair to each shift register. But such a system should be fairly cheap. It would also require 8 to 25 more current limit resistors than a multiplexed system.
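To give a feel for the driving side, here is a bare-bones Arduino sketch for clocking one 8-bit column pattern into a 74HC595 and strobing one row - the pin numbers, frame buffer and row-driver details are all assumptions for illustration, not a tested design:

// Minimal row-scan sketch (illustrative only; pins and buffer layout are assumed).
const int DATA_PIN  = 2;   // 74HC595 SER
const int CLOCK_PIN = 3;   // 74HC595 SRCLK
const int LATCH_PIN = 4;   // 74HC595 RCLK
const int ROW_PINS[8] = {5, 6, 7, 8, 9, 10, 11, 12};  // drive the row transistors/FETs

byte frame[8];             // one bit per LED in an 8x8 sub-region; fill elsewhere

void setup() {
  pinMode(DATA_PIN, OUTPUT);
  pinMode(CLOCK_PIN, OUTPUT);
  pinMode(LATCH_PIN, OUTPUT);
  for (int r = 0; r < 8; r++) pinMode(ROW_PINS[r], OUTPUT);
}

void loop() {
  for (int r = 0; r < 8; r++) {
    digitalWrite(LATCH_PIN, LOW);
    shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, frame[r]);  // column pattern for this row
    digitalWrite(LATCH_PIN, HIGH);                      // latch the columns
    digitalWrite(ROW_PINS[r], HIGH);                    // enable the row
    delayMicroseconds(1000);                            // ~1 ms per row -> ~125 Hz refresh
    digitalWrite(ROW_PINS[r], LOW);                     // disable before moving on
  }
}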
H: Can I use an NPN Transistor as a Varactor Diode? I am currently building a crystal-controlled AM transmitter. The circuit that I'm using calls for an NTE-613 varactor diode (which will come in a few days). I was wondering if there is an alternative to the varactor diode that I could use in the meantime to test the circuit? Couldn't I use an NPN transistor as a varactor diode? Any help would be appreciated, thank you so much! I will attach the link to the schematic I am using below: http://www.radio-circuits.com/images/ab1.jpg AI: Yes, you can use the BE or CB junction of an NPN (or PNP) transistor as a varactor diode, however it may not have the capacitance or capacitance range you want. The BE junction may not have the breakdown voltage required. The NTE613 has a 22pF capacitance at -4V and a 2.9:1 range from -2V to -30V. That seems on the low side to start with for an AM-band radio - I would expect several hundred pF. Perhaps it is not operating in the usual ~530 to ~1600kHz MW band. Anyway, a couple of 1N4004's in parallel will be fairly close to the NTE part. As you can see, the higher voltage parts use a different (PIN) construction so they have less capacitance.
H: VHDL Counter does not update when desired For a project I am working on, I require a counter whose value increases as soon as the increment control goes high (i.e. on the rising edge). However, I have had trouble implementing this in VHDL. Here is some code that I have written: IF Reset='1' THEN count := 0; -- Asynchronous reset ELSIF rising_edge(clock) THEN IF Enable = '1' THEN count := count+1; -- Increment END IF; END IF; q <= std_logic_vector(to_unsigned(count,12)); Here is the ModelSim simulation waveform: As you can see, 'q' is only updated one clock cycle AFTER the Enable input goes high. I want q to increment as soon as Enable goes high. Is there a way to do this? AI: Everything is fine: at the rising clock edge the Enable signal must already be stable, not rising at that very edge. Your counter samples Enable on the clock edge, so the count (and q) can only change on the next edge after Enable is seen high - this is normal synchronous behaviour. You will encounter this kind of problem later, when the logic you build does not propagate a signal fast enough and the output is not ready before your next clock. So for now, in ModelSim, make "enable" stable somewhere in the middle of the previous clock cycle (i.e. change it well before the clock edge).
H: What is this SMD component Just out of curiosity I am trying to find out. What type of component is the green flat one in the picture? AI: It is a Polyfuse- a kind of self-resetting current limiting device. There are a few different names for them. Here is a datasheet. As far as names go, here is Wikipedia's take on it: These devices are sold by different companies under various trademarks, including PolySwitch (TE Connectivity), Semifuse (ATC Semitec), "Fuzetec" (Fuzetec Technology), Polyfuse (Littelfuse) and Multifuse (Bourns, Inc.).[4] PolySwitch is the earliest product of this type, having been invented at Raychem Corporation (now TE Connectivity) and introduced in the early 1980s. Due to common availability, electronics engineers and technicians often refer to this device as a "polyswitch", in the generic sense, regardless of actual brand.
H: What is this 60's-80's era component? My friend owns a factory with a very old machine. This component appears to have failed, but he does not know what it is. It is likely that it is a much larger version of a modern component. Here is some context: This panel has a few 3-way On-Off-On switches on it, which raise and lower a number of rollers on the machine. AI: That looks like a pair of old contactors with an overload relay fitted to one of them. Each contactor seems to have a set of auxiliary contacts fitted to the top. A 'contactor' is the term used for these large 3-phase devices which we in the electronics world would usually call a relay. Does much the same job, but on a larger scale. The addon device with the knob on it is the 'overload relay' and it causes the large contactor to which its attached to 'trip' when it senses excess current flow. When these contactors activate there is usually a very audible klunk as its electromagnet pulls the contacts down. You will also see the small black pin in the center of the white area on the auxiliary contact block get pulled down into the body of the device. Testing these would most safely be done with all normal sources of power disconnected from the machine. You will need to find the pair of terminals on each contactor which drive its electromagnet coil. I expect that you'll find wires running to these terminals from your toggle-switch, and the other terminals would probably be tied to neutral. Or they could be tied to live and neutral gets switched ... They will probably be labeled A1 & A2 on the body of the contactor somewhere. If you carefully apply a separate live & neutral mains supply to these terminals, you should hear & see the contactor activate. If the contactor itself appears to work, then it may be that 1 or more of the internal contacts have failed. This can be tested (still with all power disconnected) by using a meter to test for continuity across each set from top to bottom. Usually the 3 sets of main terminals on the base of the contactor will be normally-open and change to closed if you manually push the pin down. The auxiliary contacts come in both normally-open and normally-closed versions - but either way you should see the state change when you push the pin down. If the readings are confusing you may need to disconnect each set of terminals as you test them. I must reiterate - do these tests with the power turned off!!! Your contactors could be configured in such a way that phases are swapped over when one or the other activates, and if both are activated at the same time, or one of them is stuck, you will end up shorting 2 phases together. <--- Big Bang ...
H: Why is Leakage Current Important when a MOSFET switch is "ON?" Referencing ADG5206. ADI has specs for Drainage Current at the Source and Drain when the switch is OFF. They also have channel leakage for Source and Drain when the switch is ON. Why is it important to have specifications for leakage current when the switch is ON? Where does this leakage go? Does it simply get consumed within the MOSFET? Also, I understand why it would be important to have leakage specs when the switch is OFF; however, how much of a difference does it make when the values are so minimal? I am not designing anything as of yet, simply doing some research. I have found a few App Notes and am still not fully understanding. Thank you in advance for the help. AI: The datasheet asks you to refer to figure 34. Appears as though they mean the maximum current for the differential across the open switches +/-10V. This is important because the current across the open switches or to the power supplies can represent an error in a precision analog signal. The typical current is fairly high even at room temperature (20pA typical and 600pA over the industrial temperature range). If your signal is 100uA then 600pA represents a 6ppm error.
H: Transformer showing unexpected values I have set up a step-down transformer to transform my 115v AC down to 5v AC. I ordered a transformer and hooked it up in parallel (there are two coils) and my scope is showing 10v on the output side. There isn't any load on the circuit. Is this similar to how an unloaded DC circuit will show higher voltages when not loaded? Thanks in advance! Cheers, Gregg AI: As you have 0 to peak 10 volts, this means an RMS of around 7 volts (10 V / √2 ≈ 7.1 V). If you now add a small load it should drop to 5V. This is very common. In particular, the transformer is wound to deliver its rated voltage at rated load, compensating for its internal winding resistance, so the idle (no-load) voltage comes out noticeably higher. When you rectify it and add a suitable capacitor you should have something like 5.5V or the like.
H: Am I getting DC bandwidth correctly? I am computing Johnson and shot noise in a DC circuit, where I have an Arduino Due sampling with "delay(100);", i.e. at 10Hz. Sampling rate and bandwidth follow the Nyquist sampling theorem. According to this theorem, the sampling rate should be at least twice the bandwidth of the input signal. Since our sampling rate is 10Hz, the bandwidth should be 5Hz. Is this correct? Does this mean that many sources of noise just disappear if I add a large enough delay to my code? What is the correct answer for the bandwidth? AI: Sampling doesn't work quite like that. As pointed out in the comments to your question, the noise will just alias into the bandwidth you do have. Aliasing means that a frequency component above half your sampling rate gets interpreted as a lower frequency, because you aren't sampling fast enough to represent it correctly. The Nyquist criterion tells you what you will be able to see/detect in a meaningful way without mistaking the waveform - and strictly speaking only for a sine. If you want to see a 5Hz square wave, you need much more than 10 samples per second. As for your noise issue: you cannot "sample away" noise by just lowering your sampling rate. In fact, a great way to get rid of noise above your signal of interest is something called oversampling, not undersampling. Imagine, if you will, an 11Hz pure sine noise component. Let's say at t=0ms it crosses 0 going upwards, like a neat sine function of t. At t=22.7ms it's at its maximum value. At 45.5ms it's 0 again. At 68.2ms it's at its negative maximum. At 90.9ms it's 0 again. And so on. Let's also say your first sample was at t=0ms. At 100ms the noise has gone through 1.1 cycles, so it's a bit above 0, and you sample that. Then at 200ms it's gone through 2.2 cycles, so it's a bit further above 0, and so on and so on, until after 10 samples you have seen it go up once, come back down, cross through 0, go negative and come back up again. In effect, in 10 samples (one second) your system has seen 1 full cycle, while the signal actually went through 11 cycles. So your system says "Oh, that's a 1Hz signal!" Now imagine there being infinitely many different frequencies. Can you see how they all get mistaken for different frequencies that do exist within your 5Hz sampling band? In effect you are compressing the noise frequencies into a smaller band, so the noise level stays about the same, but the noise becomes "more dense". If you oversample and then take the average, because the noise is random with respect to your sampling system, it effectively adds to and subtracts from itself. So for every 1 sample you want, you take 100 and average them. To keep the arithmetic simple, let's say you take 30 samples and average them to get one value, sampling 10 times per second exactly, so you get one averaged value every three seconds. Let's now say you have a 1V DC signal and 1V of AC noise. Take a noise tone that goes through exactly one full cycle every 15 samples and is at 0 exactly at sample 1 (the exact frequency doesn't matter - what matters is that whole cycles fit inside the averaging window): Sample 1: 1V + 0.00V = 1.00V Sample 2: 1V + 0.41V = 1.41V ( noise = sin( ((n-1)/15) * 2 * pi ) with n = sample number ) Sample 3: 1V + 0.74V = 1.74V Sample 4: 1V + 0.95V = 1.95V Sample 5: 1V + 1.00V = 2.00V Sample 6: 1V + 0.87V = 1.87V Sample 7: 1V + 0.59V = 1.59V Sample 8: 1V + 0.21V = 1.21V Sample 9: 1V - 0.21V = 0.79V Sample 10: 1V - 0.59V = 0.41V Sample 11: 1V - 0.87V = 0.13V Sample 12: 1V - 1.00V = 0.00V Sample 13: 1V - 0.95V = 0.05V Sample 14: 1V - 0.74V = 0.26V Sample 15: 1V - 0.41V = 0.59V This repeats for the next 15 samples, so 30 samples added together give you 30: the sine terms cover whole cycles and cancel out exactly.
That 30 divided by 30 gives you back the DC voltage of 1.000V without ever reflecting any of that pesky noise tone - even though at its negative peak (sample 12) the noise completely suppresses your DC reading! If you oversampled by a factor of 100 or 1000 you'd be even better off, but even with a factor of 4 or 10 you'll already be suppressing a lot of noise. By increasing the sample frequency you do open up the bandwidth for your noise a little, but you'll still be much better off than sampling at your original 10Hz without oversampling: not oversampled, all the noise gets aliased into your 5Hz signal band; oversampled, your initial signal band becomes wider, which admits more noise components, but the averaging then smashes away the vast majority of them, plus most of the noise that you previously aliased in. Basically the best noise-free DC sampler would be an ADC sampling at infinite terahertz and averaging over infinite seconds, because all noise, from yoctohertz to yottahertz, would be perfectly sampled and represented but then averaged away against itself. Since you don't want to wait infinite seconds to see your signal, oversampling a little and accepting a few mV of error is your best bet. One important note: if your noise is so strong that it clips, i.e. hits the minimum or maximum value of your ADC, but not both equally as much, you will get an offset. But this answer is already way too long, so I will leave you with just the warning: make sure your noise is small enough to fit inside the voltage range of your ADC, and averaging will take care of the vast majority of it.
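If it helps to see both effects numerically, here is a small stand-alone C demo (desktop code, not an Arduino sketch; all frequencies are illustrative). Part 1 shows that an 11Hz tone sampled at 10Hz produces exactly the same sample values as a 1Hz tone. Part 2 oversamples a 1V DC level plus a 1V sine noise tone and shows that averaging over a whole number of noise cycles returns the DC value.

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979323846

int main(void) {
    /* Part 1: aliasing. 11 Hz sampled at 10 Hz is indistinguishable
       from 1 Hz sampled at 10 Hz. */
    for (int n = 0; n < 10; n++) {
        double t = n / 10.0;                         /* 10 Hz sampling */
        printf("n=%d  11Hz=%+.3f  1Hz=%+.3f\n", n,
               sin(2.0 * PI * 11.0 * t), sin(2.0 * PI * 1.0 * t));
    }

    /* Part 2: oversample and average. 1 V DC plus a 1 V, 15 Hz sine,
       sampled at 450 Hz. 30 samples span exactly one noise cycle, so
       the sine terms cancel and the average is the DC level. */
    double sum = 0.0;
    for (int n = 0; n < 30; n++) {
        double t = n / 450.0;
        sum += 1.0 + sin(2.0 * PI * 15.0 * t);
    }
    printf("Averaged reading: %.4f V (true DC = 1.0000 V)\n", sum / 30.0);
    return 0;
}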
H: RC snubber across switch or inductor What are the differences one should consider between designing an RC snubber across a switch and across an inductor? Is there any preference for putting it across the switch? AI: An RC snubber can be placed across the switch, sure. Just remember that electricity is not instantaneous; signals propagate at a large fraction of the speed of light (typically somewhere between about half and nearly the full speed of light, depending on the wiring). For most practical purposes it therefore wouldn't make much difference whether the snubber sits at the switch or at the inductor - unless you're using very long wiring, in which case transmission-line effects come into play. Also be sure to aim for a roughly critically-damped RC snubber for best results. If using DC, a snubber diode may be sufficient across the inductor, provided it is fast enough and the rest of the circuit can tolerate a slight negative-going pulse and a ramp-to-zero voltage lasting some milliseconds. If the inductor is a solenoid, it will release more slowly with a plain diode. Diode snubbers have the added benefit of wasting almost zero energy while the inductor is energized. (An RC snubber "charges" when the inductor is energized, wasting some power each time it is activated.)
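For sizing, one common rule of thumb (not from the answer above, and every number below is purely hypothetical) is to measure the ringing frequency across the switch, add capacitance until the ring frequency halves - which reveals the parasitic C and hence L - then pick the snubber resistor near sqrt(L/C), nudging it up towards 2*sqrt(L/C) if you want something closer to critical damping. A rough sketch under those assumptions:

#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979323846

int main(void) {
    /* Hypothetical bench measurements, for illustration only. */
    double f_ring  = 5e6;    /* Hz: observed ringing frequency            */
    double c_added = 3e-9;   /* F: added capacitance that halved the ring */

    double c_par  = c_added / 3.0;   /* halving f implies C_total = 4*C_par */
    double l_par  = 1.0 / (pow(2.0 * PI * f_ring, 2.0) * c_par);
    double r_snub = sqrt(l_par / c_par);   /* roughly zeta = 0.5; double it for critical damping */
    double c_snub = 4.0 * c_par;           /* a common 3x-10x choice                              */

    printf("C_par ~ %.1f nF, L_par ~ %.2f uH\n", c_par * 1e9, l_par * 1e6);
    printf("R_snub ~ %.0f ohm, C_snub ~ %.0f nF\n", r_snub, c_snub * 1e9);
    return 0;
}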
H: shift register RGB led brightness I'm currently using a 74HC595 shift register to light LEDs. Recently I decided to start using RGB LEDs instead of simple single-color LEDs, which means that I now have to use 3 output pins for each individual LED. So far so good. The problem, however, is that I cannot fully enjoy the power of the RGB LEDs. I would like to use the full color depth by mixing the 3 color components (red, green, blue) at individually distinct brightnesses. There is an Output-Enable pin on the 74HC595 which can be used to control brightness, but as far as I know it applies to all output pins at once and cannot set the brightness of individual outputs. Can it be done with the 74HC595 shift register, or is there a more appropriate component? AI: Although you could run a shift register fast enough to PWM some LEDs yourself, there are dedicated RGBA LED drivers which take a serial input and perform the PWM for you. One arbitrary example is the TLC5971: a 12-channel constant-current sink driver in which each output channel has individually adjustable 16-bit (65536-step) PWM grayscale (GS) dimming, and each color group has a 7-bit (128-step) global brightness control (BC) that scales the constant-current sinks. GS control and BC are accessible via a two-wire signal interface.
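For completeness, software PWM through the '595 is workable for a small number of LEDs. Below is a minimal Arduino-flavored sketch assuming one common-cathode RGB LED on the first three outputs of a single 74HC595 and made-up pin assignments; it illustrates the idea rather than reproducing anything from the question. Each color gets a 4-bit brightness value, and the whole register is refreshed 16 times per PWM frame, fast enough that the eye sees a steady mixed color.

// Software PWM through a 74HC595 - Arduino-flavored C, hypothetical wiring.
const int DATA_PIN  = 2;   // assumed pin numbers
const int CLOCK_PIN = 3;
const int LATCH_PIN = 4;

byte red = 15, green = 4, blue = 9;   // 0..15 brightness per channel

void setup() {
  pinMode(DATA_PIN, OUTPUT);
  pinMode(CLOCK_PIN, OUTPUT);
  pinMode(LATCH_PIN, OUTPUT);
}

void loop() {
  // One PWM frame = 16 refreshes of the shift register.
  for (byte step = 0; step < 16; step++) {
    byte bits = 0;
    if (red   > step) bits |= 0b00000001;   // Q0 -> red
    if (green > step) bits |= 0b00000010;   // Q1 -> green
    if (blue  > step) bits |= 0b00000100;   // Q2 -> blue

    digitalWrite(LATCH_PIN, LOW);
    shiftOut(DATA_PIN, CLOCK_PIN, MSBFIRST, bits);
    digitalWrite(LATCH_PIN, HIGH);
    delayMicroseconds(200);   // roughly 300 Hz frame rate, flicker-free
  }
}

With more LEDs you daisy-chain more '595s and shift out more bytes per refresh, but the refresh burden on the microcontroller grows quickly, which is exactly why dedicated drivers like the one above exist.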
H: Which 9 V DC Adapter Do I Need for the MB102 Breadboard Power Supply? Easy question. I'm a noob... I bought an MB102 breadboard power supply unit, but it doesn't come with a manual. I need to get a 9V adapter for it and I'm not sure what I need. Does the 9V adapter's current rating matter, or will any 9V wall wart work? What is the maximum input current for the MB102? AI: Here is the schematic: It needs to be rated for the total output current (current on the 5V line plus current on the 3.3V line) you want, plus a few tens of mA for the LEDs and regulators. Other than that, any regulated 9V adapter with the right polarity will work fine. Unregulated ones will also work provided the unloaded output voltage isn't too high.
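As a rough sizing example (the load currents here are assumptions, not measurements of any particular project): the regulators on these boards are typically AMS1117-type linear parts, so the input current is approximately the sum of the output currents plus the overhead mentioned above, and the voltage difference is burned off as heat.

#include <stdio.h>

int main(void) {
    /* Assumed example loads - substitute your own numbers. */
    double i_5v  = 0.300;   /* A drawn from the 5 V rail        */
    double i_3v3 = 0.100;   /* A drawn from the 3.3 V rail      */
    double i_ovh = 0.030;   /* A for LEDs and quiescent current */
    double v_in  = 9.0;

    double i_in   = i_5v + i_3v3 + i_ovh;
    /* Worst-case heat in the regulators, assuming both drop from 9 V. */
    double p_diss = (v_in - 5.0) * i_5v + (v_in - 3.3) * i_3v3;

    printf("Adapter rating needed: at least %.0f mA at 9 V\n", i_in * 1e3);
    printf("Regulator dissipation: roughly %.1f W\n", p_diss);
    return 0;
}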
H: Feedback in hybrid power supply During my research into PSUs I came across this schematic from a Linear Technology datasheet: 1) What is the purpose of the network running from the output back to the SW pin of the buck converter? 2) Is the 6V of headroom provided for the LDO sufficient for normal operation of the PSU? AI: 1) The two Schottky diodes and capacitors form a charge pump that generates a negative voltage. Together with the 1K resistor and the 2N3904 they form a current sink that allows the output to go right down to 0V with the 500K rheostat set to 0. 2) The dropout voltage of the regulator is on the datasheet; it is less than 6V.
H: Why do Samsung mobile phone batteries come with a chip-like paper? Are they spying on us? Yesterday I saw a WhatsApp message in which someone was showing Samsung phone batteries. He removes the wrapper and takes out a thin black paper or plastic sheet attached to the body of the battery, with some lines on it like a circuit, and he claims that this is a kind of chip to spy on our daily use of mobile phones. Is it true? AI: That coil is most likely the NFC antenna. If you read the docs for some of the Samsung phones, only specific batteries allow the phone to have NFC capability. In other words, the NFC antenna is part of the battery pack. NFC is the acronym for Near Field Communication. It is the technology that allows phones to transfer data back and forth just by touching them together. NFC communication is extremely short range; a gap of a few millimetres will stop it. It is extremely unlikely that this technology can be used to spy on people without their explicit permission - the user must make the conscious choice to place their phone in direct contact with another device.
H: Question about charge pump circuit My charge pump circuit gives an increased Vload in the simulator, about 22V, but when I breadboard it and test with Vsupply from a 12V power supply, the multimeter shows a Vload of about 11V. I've tried swapping in parts of the same model but still get the same voltage, so I think my parts are working. What might be the reason? Is there anything I should know about choosing my parts - power rating of the resistors, etc.? Update: I tested it and C7 is not charging, although C13 charges. What is causing the error? Oops - I had missed the 1.5k resistor. Now I get 20 volts. AI: Your first order of business in checking out the real circuit is to see whether the 555 output is able to maintain a reasonably good-looking square wave with a full swing from near GND up to the VCC level of 12V. If the output cannot achieve this then there is no hope of stepping the voltage up as you wish. The output should be a square waveform with a 50% duty cycle at a frequency of about 45kHz. If the output is acting up then you need to make sure that you have GND hooked up to the 555. Also check the VCC (+12V) rail: is it staying nice and steady at the 12V level? If the +12V level is sagging way down then there will be little chance of doubling the output voltage. Make sure that the battery or supply you are using is not some 3/4-dead unit with a high output resistance.
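To see why roughly 20-22V is the right ballpark for a 12V-driven doubler, here is a quick estimate in C. The diode drop and 555 output-swing figures are typical assumptions (silicon diodes around 0.7V, a bipolar 555 high level about 1.5V below VCC when sourcing current), not values read from the actual schematic.

#include <stdio.h>

int main(void) {
    /* Back-of-envelope estimate for a 555-driven diode/capacitor doubler.
       All drops below are typical assumptions, not measured values. */
    double vcc      = 12.0;
    double v_diode  = 0.7;
    double v_out_hi = vcc - 1.5;   /* realistic 555 high level under load */

    double ideal = 2.0 * vcc - 2.0 * v_diode;        /* ~22.6 V            */
    double real  = vcc + v_out_hi - 2.0 * v_diode;   /* ~21 V before droop */

    printf("Ideal doubler output : %.1f V\n", ideal);
    printf("Realistic (no load)  : %.1f V\n", real);
    return 0;
}

The ideal figure lines up with the ~22V seen in simulation; the realistic figure, minus some droop under load, lines up with the ~20V measured once the missing 1.5k resistor was fitted.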
H: ADC 'ADS1130' Wiring Problem I am hacking a commercial board which uses the ADS1130 for analog-to-digital conversion. The wiring on the commercial board is a bit different from what is specified in the datasheet: pin number 4 is connected to pin number 1, i.e. pin 4 is connected to VCC, whereas the datasheet says pin 4 is DGND. Yet the commercial board operates absolutely fine. I need to design a layout for my own ADS1130 too, so I am in a real dilemma over whether or not to copy the commercial board's layout. Here is the datasheet. Any suggestion would be appreciated. AI: In the commercial board, pin number 4 is connected to pin number 1, i.e. pin 4 is connected to VCC. But the datasheet indicates that pin number 4 is DGND. Then it can't be the chip you think it is. The data sheet is clear about pins 2, 3 and 4 - they are DGND: - Of course, you could have read the part number incorrectly, and it was in fact an ADS1131 with pin 4 as an input: - If you are trying to design something, the most important information you will get is the data sheet. Forget trying to hack a design and read the data sheet.
H: Depletion NMOS that's saturated at 0V This may seem like a weird request for a part, but are there any depletion-mode NMOS devices out there that are already fully conducting when you apply 0V to the gate, or when the gate is not connected? AI: Infineon make them. See this for example, but I wouldn't consider it sensible to leave the gate open on any MOSFET. Infineon also make others, so take a look at what they offer and use Google to search - it won't bite you. You might also consider the humble N-channel JFET. With 0V on the gate relative to the source it conducts, and it can be fully turned off (<1uA) with a negative voltage applied to the gate. You might find it easier to match one of these to your requirements. There are also P-channel versions.
H: Why does a SPI flash have a HOLD# pin, as opposed to just stopping the clock? A SPI flash like the Micron M25P16 (shown below) has a HOLD# pin, and any input data is ignored if HOLD# is asserted at a rising clock edge. But can't the clock just be stopped instead? AI: The HOLD# signal is for use on an SPI bus with multiple slaves. Stopping CLK would stop the whole bus, while asserting HOLD# only pauses the transaction to that one SPI slave; HOLD# is slave-specific, just like CS# is. Imagine you have a flash on the SPI bus, but also an SPI sensor that you need to read at a specific time. Now, while you are in the middle of a transaction with the SPI flash, the time comes to read the sensor. You could just finish the flash transaction first, but then you compromise the timing of the sensor reading. If you stop the clock, you cannot read the sensor either. But if you assert HOLD# to the flash, you can start a second SPI transaction to the sensor. Once you are done with the sensor, you deassert HOLD# and continue with your SPI flash transaction.
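A rough sketch of that interleaving in C, with stubbed-out gpio_write() and spi_xfer() helpers standing in for whatever your platform actually provides; the pin numbers are made up, and the real datasheet places timing requirements on when HOLD# may change relative to the clock, which a real driver must respect.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Stub helpers so this compiles and runs as a desktop demo; on real
   hardware these would be your platform's GPIO and SPI calls. */
static void gpio_write(int pin, int level) {
    printf("GPIO %d -> %d\n", pin, level);
}
static void spi_xfer(const uint8_t *tx, uint8_t *rx, size_t len) {
    (void)tx; (void)rx;
    printf("SPI transfer of %zu bytes\n", len);
}

#define FLASH_CS   10   /* made-up pin numbers */
#define FLASH_HOLD 11
#define SENSOR_CS  12

static void read_sensor_during_flash_read(uint8_t *flash_buf, size_t flash_len,
                                          uint8_t *sensor_buf, size_t sensor_len) {
    gpio_write(FLASH_CS, 0);                      /* start the flash transaction    */
    spi_xfer(NULL, flash_buf, flash_len / 2);     /* first half of the flash data   */

    gpio_write(FLASH_HOLD, 0);                    /* pause the flash; CS# stays low */
    gpio_write(SENSOR_CS, 0);
    spi_xfer(NULL, sensor_buf, sensor_len);       /* time-critical sensor read      */
    gpio_write(SENSOR_CS, 1);
    gpio_write(FLASH_HOLD, 1);                    /* resume the flash               */

    spi_xfer(NULL, flash_buf + flash_len / 2,
             flash_len - flash_len / 2);          /* rest of the flash data         */
    gpio_write(FLASH_CS, 1);                      /* end the flash transaction      */
}

int main(void) {
    uint8_t flash_data[64], sensor_data[4];
    read_sensor_during_flash_read(flash_data, sizeof flash_data,
                                  sensor_data, sizeof sensor_data);
    return 0;
}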