H: Current limiter for homemade arc welder I would like to control the AC current that will be consumed by a homemade arc welder. Website I used as a reference: http://www.mike-worth.com/2013/07/31/adding-a-current-limiter-to-my-mot-welder/ He converted a transformer to a rheostat (blue handle). I can't really make it out, but can someone explain the following: Looks like there is only a primary, no secondary winding. Is the current modified by laterally sliding the E and I sections of the MOT apart? Or is tilting a single side of the E off the I of the MOT enough to modify the current? Can the rheostat transformer endure 30-50 amps of current going through its primary winding without being damaged or melting the primary coil? I would like to know what kind of duty cycle to expect. I prefer a mechanical means to control the current. I'm open to more efficient/simple suggestions; simplicity is what I'm after. AI: Looks like there is only a primary, no secondary winding That is because it is an inductor, not a transformer. The idea is to limit the current by introducing inductive impedance. The impedance is adjusted by adjusting the air gap. Is the current modified by laterally sliding the E and I sections of the MOT apart? Or is tilting a single side of the E off the I of the MOT enough to modify the current? Either will have an effect. You will need to do more research to get the design details and determine the best scheme for the specific design. Can the rheostat transformer endure 30-50 amps of current going through its primary winding without being damaged or melting the primary coil? I would like to know what kind of duty cycle to expect. I suppose "rheostat transformer" is what you are calling the variable inductor. The safe level of current depends on the size of the wire, the insulation temperature rating, the core dimensions and material, the number of winding layers, external cooling, and whatever else I may not have thought of. You probably need to search for variable-reluctance inductor design information.
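For a rough feel of the numbers involved, here is a small sketch (not from the original answer; the 230 V / 50 Hz mains and 40 A target are assumed values, and the inductor is treated as the only impedance in the loop) of the series inductance such a variable inductor would need:

# Rough sizing sketch for a series current-limiting inductor (assumed values).
import math

V_mains = 230.0      # assumed supply voltage, volts RMS
f = 50.0             # assumed mains frequency, hertz
I_target = 40.0      # assumed desired primary current, amps RMS

# Treating the inductor as the dominant impedance in the loop:
X_L = V_mains / I_target            # required inductive reactance, ohms
L = X_L / (2 * math.pi * f)         # required inductance, henries

print(f"Required reactance: {X_L:.2f} ohm")
print(f"Required inductance: {L*1000:.1f} mH")
# Widening the air gap lowers L, which lowers X_L and so raises the current.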
H: Adding a DC bias to an AC signal for amplification and measurement I am trying to add a DC bias to this AC signal, but for some reason the schematics I found online for this do not work. The AC coupling is not happening. I have simulated the circuit here: http://everycircuit.com/circuit/5315107213279232 Or screenshot here: simulate this circuit – Schematic created using CircuitLab AI: Figure 1. OP's circuit. You have biased the op-amp input at 2.5 V DC. You have configured the amplifier as non-inverting with a gain of ten. Output will try to go to 25 V. You are powering the amplifier with only 5 V so it has saturated. I think you are expecting it to amplify the AC but not the DC component.
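As a quick sanity check on that saturation argument, here is a minimal sketch using the values stated in the answer (2.5 V bias, gain of ten, 5 V single supply):

# Why the op-amp output saturates (values taken from the answer above).
v_bias = 2.5        # DC bias at the non-inverting input, volts
gain = 10.0         # non-inverting gain
v_supply = 5.0      # single supply rail, volts

v_out_ideal = gain * v_bias                 # what the amplifier "tries" to produce
v_out_actual = min(v_out_ideal, v_supply)   # a real output cannot exceed the rail

print(f"Ideal output: {v_out_ideal:.1f} V, clipped to about {v_out_actual:.1f} V")
# Any AC riding on the 2.5 V bias is also multiplied by 10, so the stage stays
# pinned at the rail unless the DC component is removed or the gain is reduced.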
H: Mean time to failure (MTTF) of coreless DC motor? What is a typical MTTF of a small coreless DC motor (e.g. 4x12mm) like the one seen here? For a non-coreless 'toy' permanent magnet DC motor, the MTTF was shown to be in the 10s of hours at rated voltage, with MTTF decreasing exponentially as voltage was increased beyond spec: https://www.pololu.com/docs/0J11/all However, I have read that coreless DC motors can have longer commutator/brush life due to lower inductance. Would it be reasonable to expect a coreless DC motor to have an MTTF of 500-1000 hours if the applied voltage was half the spec voltage? Does continuous (i.e. multiple hours) operation wear down the brushes faster than shorter runs (in the minutes)? This information has not been available from any of the vendors I have interacted with, so I guess my next step would be to buy 100 or so and test them. AI: Would it be reasonable to expect a coreless DC motor to have an MTTF of 500-1000 hours if the applied voltage was half the spec voltage? lifespan is mainly determined by brush wear, which increases at higher rpm and higher current. Intermittent running is worse due to startup surge currents, but a lower duty increases the life of the motor by reducing its run time. Therefore it is difficult to give a MTTF because it depends on how the motor is operated. However you can get some idea from the intended application. The most popular uses for small coreless motors are cell-phone vibrators and small r/c quadcopters ('drones'). Drone motors are designed to produce the most possible power at high rpm, so they have a short life. Vibrator motors work at lower rpm and current, but are trending towards higher power to get stronger vibration. Typical no-load rpm is 40,000-80,000rpm, with smaller diameter motors generally running faster. At half voltage a DC motor will run at half speed, so lifespan should be dramatically improved if the load is light. However even with a light load a high power motor will draw relatively high current, so a drone motor is likely to have a shorter lifespan than a vibrator motor. A larger diameter motor may also have longer lifespan because the brushes and commutator can be larger and more robust, and a longer motor is usually more efficient (due to more copper in the 'active' part of the basket) so it draws less current for the same power output. I am running 24V 7x16mm coreless motors on 7.4V, doing ~9000rpm and drawing 10-20mA. I pulled one apart after about 50 hours operation and the brush wear was almost imperceptible. Unfortunately all the sources for this particular motor seem to have dried up, as modern devices typically run on 3.7V or lower. However a few manufacturers (eg. Namiki, Citizen) still make higher voltage low speed coreless motors for industrial applications. These are expensive, but should last much longer than the 'generic' coreless motors used in toys etc.
H: Audio mixing: crosstalk calculation Here is an interesting article about mixing stereo. The most important issues when mixing stereo are: interaction between channels, crosstalk, and noise. Interaction between channels is explained here in section 1.0 on passive mixing; quoting from that: It would be possible to make the output impedance of all equipment much higher, so direct mixing would not cause any circuit stress. The problem would then be that we are back to the position we had when valve gear ruled ... high impedance causes relatively high noise and high frequency rolloff with long cables. Cables can also become microphonic, and this is why so many pieces of valve kit used output transformers - to provide a low impedance (optionally balanced) output to prevent the very problems described. Low output impedance is here to stay, as are mixers, so now we can examine the methods in more detail. But for crosstalk, in the passive mixing circuit, how do we calculate it? We can find different methods to measure it, but for simple mixing circuits I have not found any documents describing how it can be computed. We can find an example here: The problem arising from using all three outputs (the two original and the new summed output) is one of channel separation, or crosstalk. If the driving unit truly has zero output impedance, then channel separation is not degraded by using this summing box. However, when dealing with real-world units you deal with finite output impedances (ranging from a low of 47 ohms to a high of 600 ohms). Even a low output impedance of 47 ohms produces a startling channel separation spec of only 27 dB, i.e., the unwanted channel is only 27 dB below the desired signal. (Technical details: the unwanted channel, driving through the summing network, looks like 1011.3 ohms driving the 47 ohms output impedance of the desired channel, producing 27 dB of crosstalk.) Some technical details are given, but it's not clear to me. Can someone give detailed steps to compute crosstalk for this example? A simulation example could also help. AI: Can someone give detailed steps to compute crosstalk for this example? It's quite simple. If signal A has an output impedance of 47 ohms and is passively mixed with a signal B using two 500 ohm resistors, then signal B contaminates signal A because there is a simple potential divider. simulate this circuit – Schematic created using CircuitLab If signal A is zero volts (to take the simple case) and signal B is 1 volt, then the voltage appearing at signal A (after the 47 ohm resistor) is: 1 volt x 47/1047 = 44.9 mV. This 44.9 mV shouldn't ideally be there, i.e. it is cross-contamination, and if you work out what this factor is in decibels it comes to -26.96 dB. The way to overcome this is not to bodge-mix by using a passive resistor network but to use a virtual earth summing amplifier.
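A few lines of arithmetic reproduce both numbers from the answer; the 1011.3 ohm figure quoted in the question comes from the same kind of divider calculation, just with that summing box's exact resistor values:

# Crosstalk of the passive mixing example: source B drives through the two
# 500 ohm summing resistors into the 47 ohm output impedance of source A.
import math

r_sum = 500.0 + 500.0   # series path through the two summing resistors, ohms
r_out = 47.0            # output impedance of the "victim" channel, ohms
v_b = 1.0               # test signal on the unwanted channel, volts

v_leak = v_b * r_out / (r_sum + r_out)       # simple potential divider
crosstalk_db = 20 * math.log10(v_leak / v_b)

print(f"Leakage voltage: {v_leak*1000:.1f} mV")       # ~44.9 mV
print(f"Channel separation: {crosstalk_db:.2f} dB")   # ~-26.96 dB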
H: Is there some kind of official standard for signal measurements? Judging from this question, it seems that different people/sources have different definitions for measurements like amplitude. I personally encountered people disagreeing on the definition of the rise/fall time, some stating that it should be computed using 10% and 90% of the amplitude, others stating that it should use the peak-to-peak values. Since this can lead to confusion at best (thinking about contract issues), is there some sort of recognized standard? Or should I explain how each measurement is computed? AI: There is no standard for exactly what rise and fall times mean. Even if there were, good documentation defines its terms. In the case of rise and fall time of a digital signal, the appropriate thresholds depend on how the signal is intended to be received. The first thing to look at is the levels of the minimum guaranteed logic high and maximum guaranteed logic low. For any specific receiver, these are the levels that matter. Of course you need these levels to be guaranteed at the receiver end of whatever transmission line is between the sender and receiver, so there needs to be some margin beyond the min/max levels at the sender end. If the spec is for something more general and various receivers could be used in the future to detect the signal, then you usually get more conservative. Even receivers with Schmitt trigger (hysteresis) inputs usually have their min/max levels no wider than 20% and 80% of the supply. In that case, you might specify rise and fall times between the 10% and 90% thresholds. For analog signals, rise and fall times take on a whole different meaning. These are often specified to settle within some minimum error of the final steady state. If the signal needs to be accurate to within 1%, then it hasn't "settled" until it gets to within 1% of its final value. You'd usually add some margin there too, since there are other sources of errors in the system. You might specify "settled" as within ½% of the final steady state value, for example. Again, don't leave things like this open to interpretation. Good specs include definitions for anything that matters that could be interpreted differently.
H: Need help DC biasing an AC source I am trying to simulate this circuit I saw here: However, when I simulate it, the "mid-point" does not behave like the blue line in the above picture. You can check the simulation here: Is the original picture wrong? Am I missing something? AI: You have connected the output of your simulated CT to GND. This is not how the original circuit is wired. One side of the CT output in the original circuit is connected to the bias point, the other side is the input to the Arduino. Connect the capacitor across the bias resistor, and one side of the CT output to the Arduino A/D input. https://www.circuitlab.com/circuit/595a448d3mdx/updated-circuit/ If you are trying to measure current, then don't forget the burden resistor. If you are trying to measure the voltage, then use a normal transformer with a 3V output. Never make measurements directly from the mains - remember mains can be lethal! Take care in what you are doing.
H: Gap detector does not register high speed projectiles I'm trying to build a gap detector-based ASG gun chrono. I successfully connected two gap detectors to Arduino Uno, wrote a program, which measures time between two voltage spikes (on each gap detector) and attached them to aluminium profile, such that ASG projectile must cross the IR streams of gap detectors (ASG projectile has 6mm diameter, the profile has around 8-9 mm of space inside, detector is positioned exactly at the center, vertically). The setup looks like following (without the profile): Now when I drop the projectile into that profile, Arduino correctly notices two spikes (from each detector) and measures time between them. But when I shoot the projectile from the slow spring-based ASG gun, nothing happens. My suspicion is that the time, when projectile covers the IR stream is so short, that detector does not have enough time to raise the voltage and thus Arduino does not register any change. Core part of the source code looks like following: #include "Display.h" #include "Keypad.h" #include "Menu.h" #include "EnableInterrupt.h" // Global variables ----------------------------------------------------------- LiquidCrystal lcd(8, 9, 4, 5, 6, 7); Display display(lcd, 16, 2); Keypad keypad; BaseMenuItem * mainItems[] = { new ActionMenuItem("\x01 Pomiar FPS", 1), }; MainMenu mainMenu(display, mainItems, sizeof(mainItems) / sizeof(BaseMenuItem *)); // *** FPS measurement *** unsigned long fpsStart; unsigned long fpsEnd; int fpsMode = 0; // Global functions ----------------------------------------------------------- void detector1Up() { fpsStart = micros(); fpsMode = 1; disableInterrupt(A1); enableInterrupt(A2, detector2Up, RISING); } void detector2Up() { fpsEnd = micros(); fpsMode = 2; disableInterrupt(A2); } void fpsDisplay(long ms) { lcd.clear(); lcd.setCursor(0, 0); lcd.print("ms: "); lcd.print(ms); } void fpsDisplayMissedError() { lcd.clear(); lcd.print(" *** Error *** "); lcd.setCursor(0, 1); lcd.print("Missed 2 sensor"); } void fpsMeasurement() { fpsMode = 0; enableInterrupt(A1, detector1Up, RISING); fpsDisplay(0); bool finish = false; while (!finish) { if (fpsMode == 1) { // Checking if particle missed second sensor unsigned long now = micros(); if (now > fpsStart && now - fpsStart > 1000000) { // If second passed after first measurement, assume, that // particle missed second sensor // Stop waiting for second sensor disableInterrupt(A2); fpsDisplayMissedError(); // Restart measurement fpsMode = 0; enableInterrupt(A1, detector1Up, RISING); } } if (fpsMode == 2) { // Checking if measurement was made if (fpsEnd > fpsStart) { unsigned long timeDist = fpsEnd - fpsStart; fpsDisplay(timeDist); Serial.print("Measured time: "); Serial.print(timeDist); Serial.print("\n"); } fpsMode = 0; enableInterrupt(A1, detector1Up, RISING); } else { // Waiting for keypress - exiting mode int key = keypad.ReadKey(); if (key != KEY_NONE) finish = true; } } disableInterrupt(A1); disableInterrupt(A2); } void setup() { Serial.begin(9600); while (!Serial); // (...) delay(2000); mainMenu.Show(); } void loop() { int key = keypad.ReadKey(); int result = mainMenu.Action(key); if (result == 1) { fpsMeasurement(); mainMenu.Show(); } } How may I improve either hardware or software to register the particle and thus be able to measure its speed? Edit: To avoid too big discussion in comments, I'll provide all requested information here: @PlasmaHH: The gap detector uses (I guess) IR diode and IR detector on the other side. 
When attached to the Arduino, it gives a low voltage with no obstacle and a high voltage with an obstacle between the diode and detector. I don't have values here, but from what I remember values were like 31-32 without an obstacle and around 800 when an obstacle was present (the Arduino scales voltage 0~5V to 0~1023). @JRE: Schematics of the detector are [on shop's page of the detector] (leaving link if more information is needed). Close-up on the detector itself: @OllinLanthrop: This is weird, Google Image Search on "Aluminium profile" returns exactly what I've used. But for clarity, I'm using the following piece of aluminium: I drilled holes in two places in the profile, such that the detector's beam goes exactly through the profile and the holes are centered vertically on the profile. I hope that image gives you more information on my build. Also, "dropping" the projectile should make more sense now. Also, a gun chrono (or chronograph) is a device which measures the speed of a projectile. A commercial one looks like the following: AI: Looking at the detector schematic, I see a 10k pullup on the phototransistor, and a "104" (100nF) capacitor to ground, to filter noise. The time constant of this RC filter is much too slow to detect a flying projectile. Simple solution: desolder the capacitor! If it still does not pulse, use a lower pullup like 1k for extra speed, and increase the LED current accordingly. The detector datasheet should mention its speed and max currents. Code-wise, you should really use pin-change interrupts, which will be both simpler and a lot more accurate. All you have to do is: When "detector 1" sees the projectile, grab the value from the timer and store it into a variable. When "detector 2" sees the projectile, grab the value from the timer and subtract the previously stored value. You can use "micros()" instead of a timer if you're real lazy. EDIT The docs for micros(): Returns the number of microseconds since the Arduino board began running the current program. This number will overflow (go back to zero), after approximately 70 minutes. On 16 MHz Arduino boards (e.g. Duemilanove and Nano), this function has a resolution of four microseconds (i.e. the value returned is always a multiple of four). On 8 MHz Arduino boards (e.g. the LilyPad), this function has a resolution of eight microseconds. I guess your UNO runs at 16 MHz. So 800µs is 12800 cycles, which is plenty. If you use a proper timer clocked at 16 MHz, you'll get excellent precision. If you use micros() with its idiotic rounding down to 4 clocks, you'll get 4x worse precision, but still okay. The important thing is to avoid interrupt jitter and other sources of jitter. So, if you do everything in software using a pin change interrupt, you must ensure no other higher priority interrupt can block it (like Arduino timers, serial or whatnot) and of course don't do anything that can take a variable amount of time in your interrupt. Now, the usual way to do this is to use a timer in capture mode. The hardware will trigger on the pin you specify and capture the value of the running timer into a register, THEN it will raise an interrupt which can read the captured value. In this case, since the value has been captured at the highest precision by hardware, it does not matter if the interrupt that will read it has jitter. There is an example here.
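The RC-filter argument is easy to put numbers on. The sketch below uses the 10k pull-up and 100 nF capacitor from the detector schematic, plus an assumed 6 mm projectile at roughly 90 m/s (a typical spring airsoft speed), to compare the filter's time constant with how long the beam is actually blocked:

# Compare the detector's RC time constant with the beam-blocking time.
r_pullup = 10e3        # pull-up resistor on the phototransistor, ohms
c_filter = 100e-9      # "104" filter capacitor, farads
tau = r_pullup * c_filter                 # RC time constant, seconds (1 ms)

v_projectile = 90.0    # assumed projectile speed, m/s
d_projectile = 6e-3    # projectile diameter, m
t_blocked = d_projectile / v_projectile   # how long the IR beam is interrupted

print(f"RC time constant: {tau*1e6:.0f} us")
print(f"Beam blocked for: {t_blocked*1e6:.0f} us")
# The beam is only blocked for ~67 us, far shorter than the ~1000 us filter
# time constant, so the output barely moves; removing the capacitor (or using
# a much smaller pull-up) is needed before the pulse becomes visible.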
H: What will be the output of an opamp non-inverting amplifier at 0 V input? I have a doubt regarding an opamp non-inverting amplifier. I am using a dual opamp, the AZ4580MTR. One opamp is configured as a non-inverting amplifier with a gain of 2 and the other is a voltage follower. The input of the opamp is driven from a DAC, the AD5675. When I reduce the input to 0 V, the output is about 11 V. If any other input voltage, like 1 or 2 V, is applied to the input, it gives the correct output (as per the gain). Why is this circuit behaving like this, and if I change the opamp to any other part, will it work correctly or behave the same way? Waiting for a reply. Regards, Sebastian AI: Google the datasheet. Look at page 6 and find "Input Common Mode Voltage Range ±12V (with ±15V supplies)." This means the input common mode must be at least 3 V away from the power rails. The opamp is not rail to rail. If you supply this opamp with a single supply (i.e. 0 V/+15 V) and not with ±15 V, then obviously its input common mode range will be +3 V ... +12 V and not ±12 V. Your 0 V signal is not inside the interval +3 V ... +12 V, therefore the opamp does not work as intended. Use a proper rail-to-rail opamp.
H: Simple MOS current mirror bias circuit Consider the basic MOS current mirror bias circuitry. I am really trying to understand how Vbias, when M3 is 'on', can be any value other than Vt, because M3 is diode connected (Vds = Vgs) and this means that the Vds of M3 will always be equal to Vt. So all that I can vary is only the W of M3 to control the current sunk, because Vds = Vbias = Vt is always fixed? AI: You have a misconception at this point. That the transistor is diode connected only means that \$V_{ds} = V_{gs} = V_{bias}\$, but that's all! This on its own doesn't mean that \$V_{gs}\$ will be equal to \$V_{t}\$. \$V_{t}\$ is just a threshold that must at least be reached for the transistor to conduct. You shouldn't mix this up with the case of a normal diode. For the transistor, we just say that in this configuration its I-V characteristic "looks like" that of a diode, but it doesn't mean that \$V_{gs}\$ will remain equal to \$V_{t}\$ no matter how much current flows. The "diode configuration" just makes sure that the MOSFET will be in saturation in all cases once it starts conducting, since \$V_{ds}\$ (aka \$V_{gs}\$) will always be higher than \$V_{gs}-V_t\$. What you actually achieve with the current mirror is that by forcing the current through the transistor to be \$I_{bias}\$, you basically define how much \$V_{gs} = V_{bias}\$ will be. Take a look at the typical I-V characteristics of a MOSFET. You can see it like this: you define how much \$I_d\$ will be with your bias circuit above the MOSFET. Then draw a horizontal line at this value of \$I_d\$ and you find how much \$V_{gs}\$ will be forced to be. So, all in all, with \$I_{bias}\$ having a specific and constant value set by your bias circuit, \$V_{bias}\$ will also have a specific and constant value that will normally be higher than \$V_{t}\$ and that depends on how much \$I_{bias}\$ is. In this way, your only free parameter to play with is indeed the width of the transistors.
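For a concrete feel, here is a sketch using the simple square-law saturation model (my assumption; it ignores channel-length modulation and subthreshold conduction, and all the numeric values are made-up examples) of how a given bias current forces a particular \$V_{gs}\$ above \$V_t\$ in the diode-connected device:

# Diode-connected MOSFET: given a bias current, the square-law model fixes Vgs.
import math

i_bias = 100e-6    # assumed bias current, amps
k_n = 200e-6       # assumed process transconductance k'n = un*Cox, A/V^2
w_over_l = 10.0    # assumed device W/L ratio
v_t = 0.5          # assumed threshold voltage, volts

# Saturation square law: Id = 0.5 * k'n * (W/L) * (Vgs - Vt)^2
v_ov = math.sqrt(2 * i_bias / (k_n * w_over_l))   # overdrive voltage
v_gs = v_t + v_ov                                 # = Vds = Vbias for a diode connection

print(f"Overdrive: {v_ov*1000:.0f} mV, so Vbias = Vgs = {v_gs:.3f} V (not Vt)")
# Doubling W/L halves the overdrive squared (overdrive drops by sqrt(2)),
# pulling Vbias closer to Vt, but Vbias only equals Vt in the limit of zero current.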
H: Switching a relay from its supply (2 Relay Module, VCC to IN1, doesn't work) I searched far and wide but couldn't find an answer to my question. I've read the datasheet and I've looked at schematics. I'm not very experienced with electronics. This is the module I will be talking about: Let's say I apply 5V to VCC from a given power supply, and then connect the ground. If I short IN1 and IN2 to VCC, nothing happens. Why is this so? Logic dictates it should work, because when an Arduino controls the relay it powers the relay from 5V and then uses the same power supply to switch the relay with a pin connected to IN1 and IN2. Why wouldn't it work with a simple 5V power supply? The power supply I used was my phone's adapter; it delivers exactly 5V. Is there a way to short something to make it work? Here's my schematic: The idea of the schematic is to switch between 5V from the wall and 5V from a battery. If power from the wall is available, then the battery is disconnected and recharged; if power is out, then the battery kicks in. I know MOSFETs would probably be better in this case but I've already soldered the thing. Thanks! AI: The module you are using needs the inputs to be pulled to 0V (ground) in order to switch the relay. Check the manufacturer's webpage for the module: https://www.sainsmart.com/sainsmart-2-channel-5v-relay-module-for-arduino-raspberry-pi.html This has a picture of the input circuitry on the module, which uses an optocoupler: You need to redesign the rest of your circuit to take account of the fact that the inputs use negative logic: input low: relay ON; input at Vcc or floating: relay OFF.
H: Force sensor on uneven shape I'd like to measure the force that a human leg exerts on a knee brace during normal gait. I have a problem in finding a suitable sensor for the job... The leg (and hence the brace) are not flat at their contact surface, and it seems FSRs change their response dramatically when bent... and load cells seem quite bulky and also have the same restriction. I looked at more exotic solutions, e.g. creating my own sensor out of Velostat (http://www.robotshop.com/uk/pressure-sensitive-conductive-sheet-velostat-linqstat.html) but my concern is it won't be accurate or reliable. Can strain sensors give me low creep and good repeatability? AI: Use a strain gage. Super-glue a full Wheatstone bridge onto the structure. Amplify it by 100 to 1000x and you're good to go. Calibration is easy with a precision shunt resistor (to get strain). Then use some maths to estimate load. It's a bit pricey for quality gages, and you will still need to acquire the voltages. But once it's in place, it will last a long time. If you design it as an in-series module, then you could directly calibrate it with dead-weights or a press, and it's even easier (I think that it's still a good idea to have a shunt resistor so you can detect if the gages are drifting or damaged). Vishay has some good app notes on the subject.
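To see why a gain of 100 to 1000x is needed, here is a rough sketch of the raw output of a full bridge; the gauge factor, strain level, excitation voltage and amplifier gain are assumed example values, not measurements from a knee brace:

# Expected output of a full Wheatstone bridge of strain gauges (assumed values).
gauge_factor = 2.0      # typical foil gauge factor
strain = 500e-6         # assumed strain on the structure, 500 microstrain
v_excitation = 5.0      # assumed bridge excitation voltage, volts
gain = 500.0            # assumed instrumentation-amplifier gain

# Ideal full bridge with all four arms active: Vout/Vex = GF * strain
v_bridge = v_excitation * gauge_factor * strain
v_amplified = v_bridge * gain

print(f"Raw bridge output: {v_bridge*1000:.2f} mV")     # ~5 mV
print(f"After x{gain:.0f} gain: {v_amplified:.2f} V")   # ~2.5 V
# Millivolt-level raw signals are why a gain of a few hundred is needed
# before the signal is useful to an ADC.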
H: Need help in understanding the Miller plateau region Will somebody please help me understand what the role of the Miller plateau region is in the switching of MOSFETs? AI: MOSFETs have several regions of operation: (1) off, where Vgate_source ≈ zero; (2) on, where Vgate_source ≈ 5 volts or 10 volts; (3) building charge in the gate, as the FET channel charge builds up with opposite polarity inside the region between source and drain; (4) the opposite of (3). During 3 and 4, the input (gate) voltage changes only minimally because the input gate voltage requires lots of charge AND the gate-drain capacitance also requires lots of charge. Unless the provider of that charge is powerful, the oscilloscope shows a plateau. Where is the charge going? View the FET as an amplifier, perhaps with a gain of 10. Thus one volt deltaV on the gate produces 10 volts (in the opposite direction) on the drain. As the gate is charged up (in an N-channel device), the drain is moving oppositely and demanding another 10X charge bundle into that gate-drain capacitance. Unless the gate driver has enormous current ability, there is a hesitation. simulate this circuit – Schematic created using CircuitLab In the first region, the input pulse charges 2nF + 1nF (the drain remains constant at 50 volts). In the second region, the input pulse charges 2nF + 1nF * (1 + 10) = 13nF, thus the slope is about 4X slower. With a gain of 10 assumed, 1 volt deltaVgate produces 10 volts deltaVdrain in the opposite direction; the result is that C_gate_drain demands the charge of an 11X larger capacitor. In the third region, once the drain has stopped moving, the input again charges only 2nF + 1nF.
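The effective-capacitance bookkeeping in that answer can be written out directly for the example values given (2 nF gate-source, 1 nF gate-drain, voltage gain of 10):

# Effective gate capacitance before/during the Miller plateau (example values).
c_gs = 2e-9      # gate-source capacitance, farads
c_gd = 1e-9      # gate-drain capacitance, farads
gain = 10.0      # assumed small-signal voltage gain while the drain is moving

c_before = c_gs + c_gd                  # drain not moving yet: 3 nF
c_during = c_gs + c_gd * (1 + gain)     # Miller effect: 2 nF + 11 nF = 13 nF

print(f"Before plateau: {c_before*1e9:.0f} nF")
print(f"During plateau: {c_during*1e9:.0f} nF")
print(f"Gate voltage slope is ~{c_during/c_before:.1f}x slower during the plateau")
# With a constant-current gate driver the plateau shows up as a flat region
# whose length is set by the extra charge the drain swing pushes through C_gd.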
H: Virtual Ground Paradox? I'm unable to come to terms with something I think is a paradoxical situation relating to the virtual ground of an operational amplifier. Please pardon me if this is a really stupid question. When the 'negative feedback' in an (ideal) op-amp makes the difference between its input terminals equal to zero, shouldn't the output become zero too? The op-amp is fundamentally a differential amplifier, and according to the equation: Vo = (Open loop gain)*(Differential voltage b/w the inputs). The explanations I've come up with so far are: 1) The op-amp output is indeed zero and it is the external circuitry (consisting of resistors Rf and Rin) that creates the voltage, which adds up to the op-amp output voltage (in this case zero) at point B to create the actual output of the system. 2) The virtual ground is not perfect and there exists a very, very small differential voltage at the input which gets multiplied by the very high gain and produces the output. I'm fundamentally unable to understand how the actual definition of op-amp behavior is consistent with the virtual ground phenomenon without making the output zero. Please help! AI: It's #2. For a "perfect" theoretical opamp, the open-loop gain is infinite, and this makes the difference at the inputs zero. When introducing opamp circuits, or when working out how things are supposed to work, people normally think about the "perfect" opamp. When thinking about the performance of a circuit, we usually have to start thinking about the imperfections of a real opamp. For a real opamp, the open-loop gain is not infinite, and there is some difference between the inputs. To take the example of an LM324, the open loop gain is about 115dB. That's roughly 560,000 volts/volt, so if there is a 1V DC output, then the inputs are different by only a couple of microvolts. Most of the time you can ignore that. It gets more complicated for AC. At higher frequencies, the gain drops. For the LM324, it goes to 0dB, i.e. 1V/V at about 1MHz. At that point, the inputs certainly will have a large difference. Practically speaking, the amplifier just doesn't work any more. For frequencies in-between, the gain of the amplifier (inc. feedback) will vary. The term "Gain Bandwidth Product" is used to describe what gain you can have at what frequency for a given opamp. This is just one of many imperfections a real opamp has. Another very relevant one is input offset voltage. This is the difference in inputs which results in a zero output, and it's not always exactly 0. This might be more important than the limited gain in many cases. Other imperfections you might want to consider are saturation/clipping, input current, PSRR, CMRR, nonzero output impedance and many more.
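The dB-to-gain conversion behind those numbers is a one-liner; here it is spelled out for the 115 dB typical LM324 open-loop gain quoted in the answer:

# How big the "virtual ground" error really is for a finite-gain op-amp.
open_loop_gain_db = 115.0              # typical LM324 large-signal gain, dB
v_out = 1.0                            # example DC output level, volts

a_ol = 10 ** (open_loop_gain_db / 20)  # convert dB to V/V (~5.6e5)
v_diff = v_out / a_ol                  # input difference needed for that output

print(f"Open-loop gain: {a_ol:,.0f} V/V")
print(f"Input difference for {v_out} V out: {v_diff*1e6:.2f} uV")
# A couple of microvolts of input difference is what keeps the output where the
# feedback wants it -- small enough to call the node a "virtual ground", but
# not literally zero.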
H: Any side effects when substituting a bigger capacity battery in place of a smaller capacity one? I have a car key fob whose batteries recently died. The coin slot says it accepts two CR2025 coin cell batteries. But I was thinking, since a majority of coin cell batteries run at 3 V, I should be able to simply substitute the battery with the biggest capacity available. So in this case I could use a CR2032. Fortunately, the bigger batteries fit in the key fob. But are there any side effects to using bigger capacity batteries in place of smaller capacities? I heard rumors that bigger batteries demand more current consumption from the load, or else the chemical reaction inside the battery will slow down, thus making the battery life shorter (assuming the load has a p-channel MOSFET to cut power from the load and a button to turn on the power). AI: If the thicker 2032 battery does fit into the 2025 coin slot of your key fob, there are absolutely no side effects. Your key fob will just last longer. The rumors you heard are utter nonsense. If a battery has no load, it means it is in "shelf" (storage) mode, and in shelf mode no "chemical reaction" happens other than the normal storage aging, as in any other battery (2016, 2025, 2032, or any other). ("Chemistry slows down", if you wish.)
H: Anti-Static Soldering Mat - Makes hair stand on end? I bought this (supposedly) anti-static soldering mat, because I was ruining my standard ESD mat when I would go to reflow things. https://www.amazon.com/gp/product/B06X97Y379/ref=oh_aui_detailpage_o03_s00?ie=UTF8&psc=1 When I received it, it made my hair stand on end as it brushed past. (static) I like the mat, and it doesn't burn, but I need some way to keep it discharged. How would I do this? AI: An anti-static mat should have a connection point where you can tie it to ground to bleed off the charge. The mat you linked to does not seem to be a true anti-static mat, but rather a silicone heat insulating mat. It says it has anti-static properties, but doesn't have a ground connector and does not specify exactly what those properties are. I personally would not trust it. From one manufacturer's instructions: GROUNDING: Sufficient ground cords should be used to reliably meet EN 61340-5-1 Table 3, less than 1 x 10^9 ohms for working surfaces. Industry recommendation is that continuous runs of ESD matting should be grounded at 10ft intervals to allow proper charge decay rates. Each individual ESD mat should be grounded with ground snaps located no further than five feet from either end.
H: Can I get all the energy from a PV system if it is only hooked up to a CC and batteries? What I mean is that I know a charge controller regulates the voltage and current into your batteries to protect them (and CCs can incorporate MPPT to extract more from your solar panels), but do the MPPT CCs take ALL the energy you could generate from the PV panels and put all of that energy in the batteries (neglecting efficiency here because that is irrelevant for the purpose of this question)? Or do they also tone down the rate at which the batteries get charged as they get close to maximum capacity (I know the CC will stop charging the battery once it reaches full capacity)? For example, would the below be something that could happen? A theoretical PV system could generate 12 watts with an MPPT CC at a given moment in time. If the batteries were at 5% charge, they could accept a charging rate of 12 watts, BUT since the batteries are at 75% charge, they can only be charged at a rate of 7 watts, so the MPPT CC sets the PV panels at 7 watts, thus losing you 5 watts from your PV panels. AI: An MPPT controller is effectively a buck/boost converter that tries to present the optimum load to the solar panels in order to maximize the harvested energy while supplying the harvested energy at the voltage required by the load. At times, the solar panels will produce more energy than what can be used by the load. At other times, the load could consume more energy than the panels can produce. An MPPT controller that is charging batteries is primarily monitoring and controlling the voltage to the battery. There is a standard temperature compensated charge profile for flooded and AGM batteries that it will follow as the state of charge of the battery progresses. If the battery is deeply discharged, the battery will attempt to draw more current from the controller. If the solar panels are not producing enough energy to supply the current the battery could draw, the MPPT controller will effectively restrict the current drawn in order to keep the necessary charge voltage across the battery. This has the effect of slowing down the charge rate while still allowing the batteries to continue to charge. If the solar panels are not producing enough energy for the controller to deliver the correct charging voltage, say in the early morning, then the charge controller will not charge the battery at all. In this case, potential energy from the solar panels is being wasted. If the solar panels are producing more energy than the batteries require for charging, as when the battery is nearly fully charged, then the controller only needs to keep the right charge voltage since the batteries will not draw all available current. In this case, energy that the solar panels could have produced is not being used by the batteries. Therefore potential energy is being wasted. In very hot climates, the charger may have to suspend charging altogether if the batteries are too warm. In this case potential energy from the panels is being wasted.
H: VHDL Simulation bug (am I losing it??) I've got a simulation that simply takes an address as an input and 64 clock cycles later it simply outputs it on another port. For some reason, when I register the output data, it is not delayed by a clock cycle (see waveform). Is this some crazy part of the standard or did I find a bug in the delta step of my simulator? Testbench: library ieee; use ieee.std_logic_1164.all; use ieee.numeric_std.all; entity bug_report_tb is end bug_report_tb; architecture TB of bug_report_tb is -- MIG UI signal declarations signal app_addr : std_logic_vector(29 downto 0); signal app_en : std_logic; signal app_rdy : std_logic; signal app_rd_data : std_logic_vector(29 downto 0); signal app_rd_data_r : std_logic_vector(app_rd_data'RANGE); signal ui_rst : std_logic; signal ui_clk : std_logic; begin process(ui_rst,ui_clk) begin if ui_rst = '1' then app_en <= '0'; app_addr <= (others => '0'); elsif rising_edge(ui_clk) then app_en <= '0'; if app_rdy = '1' then app_en <= '1'; if app_en = '1' then app_addr <= std_logic_vector(unsigned(app_addr)+1); end if; end if; end if; end process; process(ui_clk) begin if rising_edge(ui_clk) then app_rd_data_r <= app_rd_data; end if; end process; --********************************************************* module : entity work.bug_report_mod port map ( ui_clk => ui_clk, ui_rst => ui_rst, app_rd_data => app_rd_data, app_rdy => app_rdy, app_en => app_en, app_addr => app_addr ); end TB; Module: library ieee; use ieee.std_logic_1164.all; use ieee.numeric_std.all; entity bug_report_mod is port ( ui_clk : out std_logic; ui_rst : out std_logic; app_rd_data : out std_logic_vector(29 downto 0); app_rdy : out std_logic; app_en : in std_logic; app_addr : in std_logic_vector(29 downto 0) ); end bug_report_mod; architecture behavioral of bug_report_mod is signal clk : std_logic; signal reset : std_logic := '1'; signal app_en_sr : std_logic_vector(63 downto 0) := (others => '0'); signal dly_counter : unsigned(6 downto 0); signal rdy_counter : unsigned(6 downto 0); signal app_rdy_int : std_logic; type int_array is array(natural range <>) of integer; signal addr_array : int_array(63 downto 0); begin process begin clk <= '1'; wait for 2.5 ns; clk <= '0'; wait for 2.5 ns; end process; ui_clk <= clk; ui_rst <= reset; app_rdy <= app_rdy_int; process begin wait for 50 ns; wait until clk'event and clk = '1'; reset <= '0'; wait for 2 ms; end process; process(clk) begin if rising_edge(clk) then if app_en_sr(63) = '1' then app_rd_data <= std_logic_vector(to_unsigned(addr_array(63),app_rd_data'LENGTH)); end if; end if; end process; process(clk,reset) begin if reset = '1' then app_rdy_int <= '0'; rdy_counter <= (others => '0'); dly_counter <= (others => '0'); elsif rising_edge(clk) then app_en_sr <= app_en_sr(62 downto 0) & (app_en and app_rdy_int); addr_array <= addr_array(62 downto 0) & (to_integer(unsigned(app_addr))*4); rdy_counter <= ('0' & rdy_counter(5 downto 0)) + 1; app_rdy_int <= not rdy_counter(6) and dly_counter(3); if dly_counter(3) = '0' then dly_counter <= dly_counter + 1; end if; end if; end process; end behavioral; AI: You have done a very strange thing: The DUT is generating its own clock! This means that when both the clock and the data propagate out to the testbench, the data will have already changed before the clock edge is processed, effectively creating the "zero delay" effect you're seeing. 
While it's true that real hardware wouldn't behave this way, it doesn't at all surprise me that most if not all simulators would do exactly the same thing with this code. Try generating the clock (and the reset) in the testbench (the usual scenario) and I think you'll see the expected behavior. The alternative would be to add a nominal delay to the assignment of the output data bus inside the module (for example, app_rd_data <= std_logic_vector(to_unsigned(addr_array(63),app_rd_data'LENGTH)) after 1 ns;) in order to correctly model this interface.
H: What class of audio amplifier is this unusual configuration? Regarding Figure 1 on page: http://www.rason.org/Projects/transaud/transaud.htm I'm trying to get my head around what type of amplifier "class" Figure 1 is. The last part of it looks like a Class AB or B amplifier, except that it uses two NPN transistors (plus the 2N3904 that controls them) rather than one NPN and a PNP. The original page only states: "This circuit is really a current repeater formed by Q2 and Q3 and driven by Q1. This circuit is similar in performance to many IC amplifiers but requires an initial bias adjustment. R2 controls the bias and should be adjusted so that exactly ½ of the supply voltage is measured at the collector of Q3 with no signal. Once adjusted, heat sinks are not needed for Q2 and Q3 and a very high input impedance of approximately 47,000 ohms is seen at the input. Diodes D1 through D4 provide a constant voltage of 2.8 volts and form a constant current source through the base of Q1. This circuit is almost as good as some audio amplifier IC's and is preferred when a minimum power drain is needed." 1) What type is it and why do I never see this type of amplifier when I read about audio amplifiers elsewhere? What is the benefit or reason for it instead of AB? 2) It claims to be good for power-conscious applications. Is it more power efficient than an AB amplifier? How could I calculate the efficiency? 3) What impedance and power of speaker is the amplifier intended/able to power? 4) I actually only need to power a 0.5 Watt 8 Ohm speaker by battery. In this case could I use general purpose 2N3904 transistors, and could I increase the resistor sizes to reduce power losses, and decrease the 220uF capacitors to save money and bulk? 5) What capacitor size would be recommended for 0.5 W 8 Ohm? Is there a trick to calculating that? 6) Would the circuit still work at 5 V? 7) How can I calculate the gain, or at least avoid blowing a 0.5W speaker? 8) And how good is the sound quality likely to be from this? Many thanks! Figure 1 AI: It runs in class A as both output devices conduct for the entire cycle of the signal. It can provide twice the quiescent current into the load because as Q3 increases in current, Q2 reduces. The opposite occurs for negative-going signals at the input. The quiescent current is mainly defined by the value of R4 and the gain of the transistors. Approximately half of the current through R4 goes into the base of Q2, while the other half flows through Q1 into the base of Q3, with some getting diverted into R5. The quiescent current will vary with the gain of the transistors, although R5 and R6 will help stabilize it. That is not a very good version - it has no bootstrapping so it will be limited in output excursion, and it depends upon the Hfe of the two output transistors being matched as well as having to be adjusted for the specific transistors. One of the earliest well-known implementations of this configuration for audio was in Wireless World magazine in 1969. Note that, compared to the poster's circuit, the resistor feeding the collector of the driver transistor is split into two, with a capacitor driving the junction from the output. This bootstrapping allows the output to swing to the positive rail. In the poster's circuit the current through R4 reduces as the output voltage goes positive, starving the output transistor of base current. In non-audio use a similar configuration is used as the output of TTL gates. From "http://www.angelfire.com/sd/paulkemble/sound3b.html" Magazine article
H: How to connect two circuits to one light bulb with a switch in each I am not an expert, but I have two circuits, each one is 220 V, and both have a switch. I want to connect the two circuits to one light bulb. What do I have to follow or need to know to make this happen? If any one of the switches is turned on, the bulb goes on, and it doesn't go off until all switches are off. UPDATE: THIS DIAGRAM IS JUST FOR CLARIFICATION, NOT FOR ACTUAL USE. I did this to clarify what I mean and what I need to adjust to make this happen. I don't mind major editing. simulate this circuit – Schematic created using CircuitLab AI: As you said yourself, you should never directly connect two circuits together. But the behavior you want could be done with a relay: simulate this circuit – Schematic created using CircuitLab You can expand this to as many circuits as you want, as long as they are all isolated from each other with the appropriate relay arrangement. Of course, if the directly-connected circuit (LINE 2 as drawn here) trips its breaker, then none of the switches will turn the light on, but you'll have that problem with any safe solution. Just choose which circuit should be the direct one and use relays for the rest. AND CLEARLY MARK WHEN YOU BUILD THIS THAT THE RELAY'S CONTACTS ARE NOT FROM THE SAME POWER SOURCE AS THE COIL!!! This is important for the next person's ability to work on it safely.
H: How to create a zero crossing detector using a full-wave bridge rectified circuit Currently: 1) I have a full-wave bridge rectified AC waveform of 10 V max. 2) I also have a potentiometer-adjustable DC voltage. 3) My rail voltages are +12V and -12V. 4) I currently supply my full-wave bridge rectified wave into the non-inverting terminal and the DC voltage to the inverting terminal. 5) Using an opamp or a comparator, I want to compare the two signals and produce a square wave which varies from 0 to a positive voltage at the output. 6) At every zero crossing, my square wave should go to zero. At all other times it must stay positive. I have created an incomplete schematic below and have also tried many different methods, but I am having difficulties in achieving my result. Thank you. simulate this circuit – Schematic created using CircuitLab EDIT New schematic: removed the buffer and positive feedback resistor; the AC source is left floating and a load resistor has been added. simulate this circuit AI: simulate this circuit – Schematic created using CircuitLab Figure 1. Modified OP circuit. Your second circuit has no AC return path. This modification fixes that. 10 V RMS will peak at 14.1 volts, which will overload the input to the comparator. Rearrangement of the two 1k resistors will drop this to 7 V. Figure 2. The LM311 has the emitter of the open collector output available on a separate pin. Using the LM311 you can tie the emitter of the output transistor to ground (or anything else) to prevent negative output excursions. When I simulated this circuit in CircuitLab, it didn't give me any square wave at the output of the comparator. Any ideas on that? Figure 2. Running the simulation on Figure 1 with the potentiometer K factor set at 0.15 results in the above simulation. (Settings: 0 - 0.1 s, 0.001 s steps.) You probably had the pot setting too high in your simulation. Don't forget that most comparators have open-collector outputs, so I've added R3.
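The 14.1 V and 7 V figures in the answer come from straightforward peak and divider arithmetic; here it is spelled out, assuming (as the answer does) a 10 V RMS sine into the bridge and the two 1 k resistors rearranged as a 2:1 divider:

# Peak voltage of the rectified waveform and the effect of the 2:1 divider.
import math

v_rms = 10.0                      # sine amplitude quoted as 10 V RMS
v_peak = v_rms * math.sqrt(2)     # ~14.1 V peak after full-wave rectification
v_divided = v_peak * 1.0 / (1.0 + 1.0)   # two equal 1 k resistors as a divider

print(f"Peak of rectified wave: {v_peak:.1f} V")
print(f"After the 1k/1k divider: {v_divided:.1f} V")
# ~7 V stays comfortably inside the comparator's input range on +/-12 V rails,
# which is the point of rearranging the two resistors.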
H: MAX3485/MAX485 Low-Power Shutdown Mode I am using a MAX3485 for Modbus communication through UART between a device and a Modbus server. What I want to achieve is to put my MAX3485 breakout into low-power shutdown mode when I put my device into deep sleep mode. The wiring of my MAX3485 breakout is as follows: DI/RO => UART GPIO of my device RE/DE => two distinct GPIOs of my device Vcc/Gnd => respectively 3.3V/GND of my device A/B => to the A/B of my Modbus server that is powered by the 5V/GND of my device From the MAX3485 datasheet, I can read: Low-Power Shutdown Mode (MAX3483/MAX3485/MAX3486/MAX3491) A low-power shutdown mode is initiated by bringing both RE high and DE low. The devices will not shut down unless both the driver and receiver are disabled (high impedance). In shutdown, the devices typically draw only 2 nA of supply current. Bringing RE high and DE low is not an issue, but how do I achieve "the driver and receiver are disabled (high impedance)"? AI: Just bring RE high and DE low. From the datasheet: RO is enabled when RE is low; RO is high impedance when RE is high. and The driver outputs are enabled by bringing DE high. They are high impedance when DE is low. So when RE is driven high and DE is driven low simultaneously, the receiver and transmitter will be in the high-impedance state.
H: Linear power supply + zero-crossing detection on transformer secondary Having read this comment, I'm a bit concerned about whether one of my projects is going to work: simulate this circuit – Schematic created using CircuitLab The comparator is also inside the uC, but shown explicitly for clarity. The intent of this question appears to be an exact duplicate, but its only answer completely misses what I think is a critical point: The transformer is (almost) completely unloaded at the time that I want the ZCD pulse. I can understand that having both linear and reactive components, considering the transformer and the load together, can cause a load-dependent phase-shift, and that's what the other answer was about. But in my case, and in the case of the other question, the transformer becomes unloaded for all except the peaks, and we want to sense the zeros. Given that detail, is there still a phase-shift to worry about between the primary zero-crossing and the secondary one? AI: In an ideal transformer, the primary voltage and the secondary voltage are identical (except for scale) at all times. This is because it's the changing flux in the core that creates the voltages on all the windings. When we introduce reality into the transformer, it disturbs that slightly. The finite primary inductance means the transformer draws a magnetising current, in quadrature phase to the input voltage. The finite winding resistance means that the magnetising current creates a small voltage in quadrature phase to the input voltage. This creates a small phase offset between the primary and secondary voltages. The load currents are in phase, and so irrelevant. The phase error in a good transformer is small. It will be significant if you're trying to make a precision lab instrument for measuring phase shift. It should not be significant if you're triggering a zero-cross switching circuit, or phase-shift Triac dimmer, try it and see. Note that transformers get better as they get bigger. A 'good' transformer may have to be a big transformer, say 50VA and up. It's likely that PCB mount matchbox-size and down could be too non-ideal for this use.
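One rough way to put a number on that residual phase error is to model the unloaded transformer as its primary winding resistance in series with its magnetising inductance and look at the phase of the voltage across the magnetising branch (which is what the secondary reproduces). The component values below are illustrative guesses for a small mains transformer, not measured data:

# Estimate the no-load phase shift between primary and secondary voltages.
import cmath, math

f = 50.0            # mains frequency, Hz (assumed)
r_primary = 20.0    # primary winding resistance, ohms (illustrative guess)
l_magnetising = 5.0 # magnetising inductance, henries (illustrative guess)

z_m = 1j * 2 * math.pi * f * l_magnetising   # magnetising branch impedance
# Voltage across the magnetising branch relative to the applied voltage:
transfer = z_m / (r_primary + z_m)
phase_deg = math.degrees(cmath.phase(transfer))
time_shift_us = (phase_deg / 360.0) / f * 1e6

print(f"Phase error: {phase_deg:.3f} degrees (~{time_shift_us:.0f} us at 50 Hz)")
# A fraction of a degree, i.e. tens of microseconds: negligible for zero-cross
# switching, but visible if you are building a precision phase meter.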
H: MPPT for a load whose operating voltage is nearly equal to the solar panel's open circuit voltage Is it possible to implement MPPT for a load whose operating voltage is nearly equal to the solar panel's open circuit voltage? Edit: Is it possible with a buck converter alone? AI: Yes, it is. The MPPT converter will undoubtedly find a maximum power point that will lower the panel voltage. However, the boost circuit will raise this voltage to the specified voltage for the load. What is unknown from your description, however, is whether there will be sufficient current to fully power the load. You asked about a buck converter alone. Since applying a load to the solar panel will pull its voltage below the open-circuit value, a buck converter will not help in this case. A boost converter will be needed. By itself, a boost converter may work to some degree, but it depends on many factors so it may not prove reliable under conditions of varying sunlight and loads.
H: Passing and overwriting parameters to Verilog modules I'm studying how to pass parameters from one module to another and I have one question. I have this instance in the top level module: parameter a = 100; parameter b = 200; test#(b/(a*2)) test( .clk(clk), .reset(reset), .out(out) ); In the test module, I have this header: module test#(parameter max = 33 )( input clk, input reset, output out ); So, my question is: which value will the module take as its parameter, 33 or 1? I mean, is max=33 overwritten by the one I'm passing from the top level module? AI: Yes, the value will be overwritten by the one passed from the top module, so in your instance max becomes b/(a*2) = 200/200 = 1. The value 33 will only be used when there is nothing passed. You can think of it as a default value.
H: FTDI chip returns descriptor of unknown device I would like to debug the kernel of my Nexus 5, so I sent to a fab the design for a debug cable that internally uses an FTDI chip to create a USB-to-serial bridge. Since the chip is in a QFN package it is a little annoying to solder, but after some tries I managed it; one of the boards is seen correctly, as shown in the syslog: kernel: [12174.440550] usb 3-14: new full-speed USB device number 5 using xhci_hcd kernel: [12174.585763] usb 3-14: New USB device found, idVendor=0403, idProduct=6001 kernel: [12174.585766] usb 3-14: New USB device strings: Mfr=1, Product=2, SerialNumber=3 kernel: [12174.585767] usb 3-14: Product: DCSD USB UART kernel: [12174.585768] usb 3-14: Manufacturer: FTDI kernel: [12174.585768] usb 3-14: SerialNumber: A600ASO8 mtp-probe: checking bus 3, device 5: "/sys/devices/pci0000:00/0000:00:14.0/usb3/3-14" mtp-probe: bus: 3, device: 5 was not an MTP device kernel: [12174.606497] usbcore: registered new interface driver usbserial kernel: [12174.606569] usbcore: registered new interface driver usbserial_generic kernel: [12174.606633] usbserial: USB Serial support registered for generic kernel: [12174.608070] usbcore: registered new interface driver ftdi_sio kernel: [12174.608128] usbserial: USB Serial support registered for FTDI USB Serial Device kernel: [12174.608173] ftdi_sio 3-14:1.0: FTDI USB Serial Device converter detected kernel: [12174.608245] usb 3-14: Detected FT232RL kernel: [12174.608427] usb 3-14: FTDI USB Serial Device converter now attached to ttyUSB0 The other two boards are instead seen as a DCSD Status LED: kernel: [11309.878562] usb 3-14: new full-speed USB device number 4 using xhci_hcd kernel: [11310.024048] usb 3-14: New USB device found, idVendor=0403, idProduct=8a88 kernel: [11310.024051] usb 3-14: New USB device strings: Mfr=1, Product=2, SerialNumber=3 kernel: [11310.024052] usb 3-14: Product: DCSD Status LED kernel: [11310.024053] usb 3-14: Manufacturer: FTDI kernel: [11310.024054] usb 3-14: SerialNumber: A101FPA7 I googled it but haven't found anything related. Since two devices return exactly the same descriptor I don't think it's a transmission error but something else; does anyone have an idea of what could cause this behaviour? P.S.: The chips were bought from AliExpress, so I can't say whether they are original. EDIT As indicated by the solution, the chip was reprogrammed. BTW, thanks to the comment of @pjc50 I looked for a way of rewriting the EEPROM: on a Debian system you can install ftdi-eeprom, and after that you have to write a configuration file with the desired VID and PID vendor_id=0x0403 # Vendor ID product_id=0x6001 # Product ID and then use the program to reflash the chip $ sudo ftdi_eeprom --device i:0x0403:0x8a88 --flash-eeprom ftdi.conf (ftdi.conf is the name of the configuration file described above). Now I have the device correctly identified by the kernel. AI: Looks like the FTDI chip is preprogrammed with a custom PID. You can reprogram it with FT_PROG.
H: Shorting Triac gate to ground doesn't turn it off In the below circuit, 170 is a Q401E3 Triac: http://www.mouser.com/ds/2/240/E2Triac-18233.pdf The capacitors are ceramic disc. The Triac is rendered normally conducting when S1 is open. When I close S1 the Triac is still conducting when I don't want it to be. I'm not sure what is wrong or how to fix it; would appreciate any help. The only thing I can think of that is happening is that the path from Engine Magneto to ground through the S1 switch is of a lower impedance therefore making the Triac useless in the circuit below. The point of the circuit was that when S1 closes, it allows the spark plug to ignite. Edit: circuit based on patent US5190019 https://www.google.com/patents/US5190019 AI: The off-circuit magneto will be pulsed on and off by the engine points or equivalent. When switch S1 is closed, this will prevent the magneto from firing the next time the points open. This works because the the magnetic field from the last points opening has collapsed, bringing the current through the triac to zero. Now the points open again, raising the voltage on the magneto primary. If switch S1 is open, the gate is fired from this rising voltage and the triac turns on. If S1 is closed, the rising magneto voltage is shorted to ground so the triac will not fire. As a result, there is insufficient current developed in the magneto primary to generate enough voltage on the magneto secondary to fire the spark plugs.
H: Time constant in an RC circuit Is it possible for this circuit to have two different time constants? One from 0 to 30µs and the other from 30µs onward. I tried to calculate the values and I get τ1=20µs and τ2=95/7 µs. They seem quite different so I want someone to confirm or tell me the right way to calculate them. This is how I did it: For t=0: S1 is closed at A and open at B. S2 is open. I tried to find an equivalent circuit formed of a capacitor, a resistance and a voltage source. I got R=10 ohm, C=2µF, V=20V. So τ1 = RC = 10*2µF = 20µs. For t>=30µs: S1 is closed at B and open at A. S2 is closed. This means that there is no connection between the voltage source and the capacitor. I tried to find an equivalent circuit and I got: R=95/14 ohm, C=2µF, V=0V. So τ2 = 95/7 µs. Now, let's say I already have the capacitor voltage over a period of time (0 - 100µs). How do I calculate the output voltage (Vout(t))? I get two different expressions again... AI: Your answers are correct. Note that you should always number your components in a schematic (e.g. R1, R2, C1, etc.). It would make describing this circuit a less ambiguous task. The time constants in the following description refer to the discharge time constant when S1 is closed, thereby removing the 20 volt source and shorting the source side of the lattice to ground. So the key to solving this problem is to determine the discharge current path(s) for the cap. When S2 is open, the upper 20 ohm and the right 10 ohm resistor are in series, and this in turn is in parallel with the left 10 ohm resistor. This in turn is in series with the 2.5 ohm resistor. Solve this resistor lattice in that order and you can then calculate the time constant in that case. When S2 is closed, this simply puts the 10 ohm resistor in parallel with the lattice value computed earlier, before the addition of the 2.5 ohm series resistor. Recalculate the time constant with this new R lattice. To calculate the capacitor discharge voltage at any time, you use the following formula: \$V(t)=V_0e^{-t/(RC)}\$ where \$V_0\$ is the initial voltage, t is the time in seconds, R is the discharge resistance in ohms, and C is the capacitance in farads. Note that you can use this formula to calculate the voltage starting from any point in the discharge curve. You can calculate the capacitor charging voltage with the following formula: \$V(t)=V_S(1-e^{-t/(RC)})\$ where \$V_S\$ is the supply voltage. You may also observe that these two formulas are the basis for the simpler RC time constant rules of thumb of 63.2% for charging and 36.8% for discharging when -t/(RC)=-1.
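Putting the formulas and the two time constants together, here is a short numeric sketch using the asker's own values (τ1 = 20 µs while charging toward 20 V for the first 30 µs, then τ2 = 95/7 µs while discharging, with the capacitor assumed to start at 0 V). It evaluates the capacitor voltage piecewise; Vout then follows from whichever element the output is taken across at that instant:

# Piecewise capacitor voltage using the two time constants worked out above.
import math

tau1 = 20e-6             # charging time constant, seconds
tau2 = (95 / 7) * 1e-6   # discharging time constant, seconds
v_source = 20.0          # source voltage during the charging phase, volts
t_switch = 30e-6         # time at which the switches change state, seconds

def v_cap(t):
    if t <= t_switch:
        # charging from 0 V toward v_source
        return v_source * (1 - math.exp(-t / tau1))
    # discharging from whatever voltage was reached at t_switch
    v0 = v_source * (1 - math.exp(-t_switch / tau1))
    return v0 * math.exp(-(t - t_switch) / tau2)

for t_us in (0, 10, 20, 30, 40, 60, 100):
    print(f"t = {t_us:3d} us : Vc = {v_cap(t_us * 1e-6):6.3f} V")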
H: Minimum distance between inductor beads and an RF transceiver? Is there a minimum distance needed between the beads and the RF transceiver? Like below ... the connections do not really matter, just whether there are requirements regarding the distance between the bead and a nRF24L01 transceiver? MIDI data is sent/received through the bead (31250 bps). I drew only 1 bead; there will be 18. simulate this circuit – Schematic created using CircuitLab AI: The purpose of the beads is to keep common mode RF current off of the control lines. The first line of defense is to place the beads at the transmitter end. However, depending on line lengths, shielding, grounding, termination impedances, and transmitter antenna placement, common mode currents can still be introduced into the interconnections. A best practice is to provide suppression at both ends of the cable. It should also be noted that the ferrite material selection is important. It must have adequate suppression properties at the frequency of operation. But care must be taken that it does not provide any significant suppression at the frequencies used in the control cable.
H: Liquid level switch with normally open contact If something is described as a "liquid level switch", and contains a normally open contactor, would this imply that a dry sensor would cause the contactor to open, and a submerged sensor would cause the contactor to close? I get confused on the term "normally". To me, "normally" being dry or wet depends on the application, I suppose. AI: Depending on the sensor type, liquid level sensors can be either always submerged or always dry. They don't usually sense wetness or dryness. A sump pump level sensor has a switch that closes when the liquid rises above some level and remains closed until the liquid falls a certain amount below that level. The same sensor may have a second switch that closes when the liquid rises to an "alarm" level. I agree that "normally" is usually the condition when the system is dry or de-energized.
H: Are 12v power splitter cables device-agnostic? I am looking at purchasing this power splitter for the purposes of powering several identical 12V/0.5A devices: Henxlco Supply Adapter The goal is to avoid a mess of "wall warts" and the corresponding proliferation of power strips throughout my office. My questions: Can this product be used for other devices (not security cameras) with similar power requirements? If I hook up 8 devices that each take 12 volts and 0.5 amps then is this power splitter a safe option? There are other splitter cables on the market, but since this one is packaged with the power supply I am assuming it can handle a total of 4A draw (80% of the listed 5A). Will the total power current draw at the wall (120V circuit) be the stated 0.8 amps? Or will it be 1.6 amps, or 5 amps? Thanks! AI: Yes, such power splitter are just connectors and wires. There's no sort of "intelligence" in these. And as long as the devices get the voltage and can draw the current they need, this should work fine. 1) Yes, should not be an issue, do check that the polarity is correct or you could damage the devices you're connecting. 2) Yes, should not be a problem. Note that the cable where that total 4 A is running needs to be reasonably thick. A proper 4 A or 5 A power adapter will have such a cable. 3) Total maximum power is 12 V, 4 A so that is 48 Watt. Some power is lost in the adapter so total power consumption can be around 60 W that would be around 0.5 A at 120 V AC. On the adapter a higher current might be indicated, this is the peak current it can draw. The current at the 120 V side will be lower than on the 12 V side. A power adapter contains a circuit which can convert high voltage/low current into low voltage/high current efficiently. If that was not the case your adapter would get extremely hot due to power dissipation.
H: Resistor needed for: Arduino Micro + 12V Pump + HF3FA/005-ZTF relay I am using an Arduino Micro to turn On/Off a 12 V Water Pump. I am using a HF3FA/005-ZTF relay --> Link Here's how I connected everything: I control it with this code: const int GO = 12; void setup() { pinMode(GO,OUTPUT); } void loop() { digitalWrite(GO,HIGH); //WaterPumpON --> ON delay(60000); digitalWrite(GO,LOW); //WaterPumpOFF --> OFF delay(60000); } Everything is working as it is supposed to: QUESTION: Since I'm new electronics I would like to know if I need any kind of resistor somewhere within that circuit? I did not find similar circuits while googling and the only useful info was in this video: YouTubeLink_german and he does not use any resistors but he uses a different relay. AI: The relay you are using requires a coil voltage of 5 volts. Assuming that your Arduino Micro is able to supply the 5 volts at 72 milliAmps, then no resistor is needed. Normally a resistor is used when the driving voltage is greater than the coil voltage. The resistor then "drops" the extra voltage across it so that the coil receives the correct voltage. So in the case of your relay, if you were using this relay but wanting to drive the coil with a 12 volt source, you would place a ~100 ohm, 1 watt resistor in series with it (7 volts / 72 mA). simulate this circuit – Schematic created using CircuitLab
H: 50/60Hz Noise Harmonics I want to wirelessly communicate over low frequencies (<1kHz), and I noticed during some tests that 60Hz noise is showing up a bit at 180Hz (3rd harmonic), and also slightly at 300Hz (5th harmonic): (Ignore the stronger signal I was transmitting) Why are only the odd harmonics of 60Hz noise visible? Why are the harmonics of the signal I am transmitting not visible (e.g. 3rd harmonic of 125Hz signal is 375Hz)? More generally, what frequencies should I assume to be occupied when transmitting in this low frequency range? I originally assumed 50/60Hz and all their harmonics (i.e. 100/120Hz, 150/180Hz, 200/240Hz, etc) should be avoided. Do I not have to worry about even harmonics? Are there other low frequencies commonly occupied in an in-home environment? Edit Thanks for the answers. I just wanted to add that looking through a few papers on harmonic noise from appliances, it seems a foregone conclusion that only odd harmonics are significant. Here is a diagram showing some slight even harmonics (source): AI: I am communicating through air (wireless) using pure sinusoids via a magnetic transmitter and receiver (i.e. near-field) Chances are that the interference you receive is 60 Hz AND odd harmonics. This statement is based on the fact that the current taken by localized circuits (such as transformers) will be somewhat rich in odd harmonics due to core saturation. Most common powerful loads/components such as transformers and induction motors tend to take a symmetrical (same shape for positive and negative half cycles) but distorted current. When the waveform is symmetrical there tends to be only odd harmonics. Taken to extremes a square wave only has odd harmonics: - Why are the harmonics of the signal I am transmitting not visible (e.g. 3rd harmonic of 125Hz signal is 375Hz)? Because you are transmitting pure sinusoids and they don't have harmonics. Of course if you don't change frequency seemlessly then you'll see some harmonics creeping in.
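To make the "a square wave only has odd harmonics" point concrete: the Fourier series of an ideal 50% duty-cycle square wave of amplitude \$A\$ contains only odd multiples of the fundamental, \$\$x(t)=\frac{4A}{\pi}\left(\sin\omega_0 t+\frac{1}{3}\sin 3\omega_0 t+\frac{1}{5}\sin 5\omega_0 t+\dots\right)\$\$ Any waveform with half-wave symmetry, \$x(t+T/2)=-x(t)\$, shares this property, which is why transformer and motor distortion shows up at 180 Hz and 300 Hz but contributes essentially nothing at 120 Hz or 240 Hz.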
H: Does the impedance of a PCB track matter if the length of the track is far smaller than the wavelength of the signal? I have a PCB with tracks of no controlled impedance. The longest track is shorter than 1/5000 of a wavelength. Does the impedance of the track even matter? If not, then at what length would I need to start thinking about matching the track impedance to the source and load impedances? AI: I have a PCB with tracks of no controlled impedance. The longest track is shorter than 1/5000 of a wavelength. Does the impedance of the track even matter? No it won't matter. It starts to matter (as a rule of thumb) when the track (or wire) length becomes about one tenth of the wavelength of the highest frequency signal of importance. If not, then at what length would I need to start thinking about matching the track impedance to the source and load impedances? Well, not all scenarios like this require matching - for instance if you are designing a quarter wave impedance transformer you don't match on purpose. If, on the other hand, you are transmitting data then it makes complete sense to match the impedances to avoid reflections and the possibility of data corruptions.
H: Calculating VDS in circuit I need to calculate the \$V_{DS2}\$ in the following circuit, but am unable to solve it. I know that \$I_{D1}=\frac{1}{2}k_n(V_{IN}-V_t)^2=0.4\$ mA and that with this value I can calculate \$V_{DS1}=V_{DD}-R_1 I_{D1}=1.5\$ V. But then I'm stuck. I think \$V_x = V_{DS1}\$, but \$V_{DD}\$ also connects there so it needs to drop voltage somewhere? Also I'm not sure what to do with the current source; since it's pointing downwards it is just an infinite resistance, but then the circuit isn't grounded? Also note that \$\lambda=0\$. AI: As you noted (devices in saturation), $$ I_{D1}=\dfrac{1}{2}k_{n1}(V_{GS1}-V_T)^2$$ Same goes for \$I_{D2}\$. $$ I_{D2}=\dfrac{1}{2}k_{n2}(V_{GS2}-V_T)^2$$ And \$V_{GS1}=V_{IN}\$. They also gave you \$I_{D2}=I_{BIAS}=1\text{mA}\$. You could now use the equation for \$I_{D2}\$ and solve for \$V_{GS2}\$ (you have a value for everything else). Keep in mind that \$V_{GS2}=V_X-V_{S2}\$. \$V_X\$ is easy to find since you have the current \$I_{D1}\$ and you have the value of \$R\$ too; it is indeed \$V_{DS1}\$. So once you know \$V_{GS2}\$ from the \$I_{D2}\$ equation, you can solve for \$V_{S2}\$ (knowing \$V_X\$). And finally \$V_{DS2}\$ is just the difference between \$V_{D2}\$ and \$V_{S2}\$.
H: Raspberry pi 3 access point - broadcast power calculation / measurement? Using hostapd to turn a Rpi3 into a wireless access point on the 2.4 GHz band. How can I estimate the broadcast power of this AP? PS adapter is 5V 2.5 A. P=IV = 12.5 watts, but I think that's just the available source adapter power and not really what would be used. What could I do to measure the broadcast power of this? SoC: Broadcom BCM2837 Broadcom BCM43438 chip provides 2.4GHz 802.11n wireless LAN Networking: 10/100 Ethernet, 2.4GHz 802.11n wireless Looking at the specs of the BCM43438 I see: So looking at the table I see a range from 37 to 41 mA * 3.6V = max 0.1476 Watts. Does this look right? And, is that powerful for an access point if so? AI: The specifications to look at are the dBm transmit numbers. dBs are always an expression of a power ratio. In this case it is the logarithmic ratio of the AP output power compared to a 1 mW transmitter. To convert dBm back to power output in watts, divide the dBm figure by 10, then raise 10 to this power and finally multiply by 0.001 watt. For example, to convert the 20 dBm figure to watts, divide the 20 dBm by 10 to get 2. Then raise 10 to this power (\$10^2\$) to get 100. Then multiply 100 times 0.001 watts to find that the output power is 0.1 watts. Here is the same thing in its full formula: \$P_{watts} = 10^{(dBm/10)}\times 0.001\$ Of course power output is not the only variable affecting range or effectiveness. The gain of the transmit antenna, the gain of the receive antenna (often shared with the transmit antenna), the sensitivity and selectivity of the receiver, interfering signals, and the distance between APs all play a role.
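If it helps, the dBm/watt conversion is easy to script; a minimal C helper (note that the 0.1476 W figure from the question is DC input power to the radio, not radiated RF power, so converting it here is only to show the arithmetic):

#include <stdio.h>
#include <math.h>

static double dbm_to_watts(double dbm) { return pow(10.0, dbm / 10.0) * 0.001; }
static double watts_to_dbm(double w)   { return 10.0 * log10(w / 0.001); }

int main(void)
{
    printf("20 dBm   = %.3f W\n", dbm_to_watts(20.0));      /* 0.100 W   */
    printf("0.1476 W = %.1f dBm\n", watts_to_dbm(0.1476));  /* ~21.7 dBm */
    return 0;
}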
H: Convolution integral for LTI system We know that any continuous time signal can be expressed as follows: $$x(t)=\int_{-\infty}^\infty x(τ)δ(t-τ)dτ$$ I came across a certain relation regarding linear time invariant systems. Using $$x(t)*δ(t)=x(t)$$ we get, since it's an LTI system: $$x(t)*δ(t-t_0)=x(t-t_0)$$ How did we get here? Is there a property I'm not aware of? I try to imagine the convolution graphically, and since we have the x(t), how can the multiplication with the Dirac function give x(t-t0) for every point in the convolution integral? AI: How can the multiplication with the Dirac function give x(t-t0) for every point in the convolution integral? It can't; that is what the integral does. -- I try to imagine the convolution graphically. This is very helpful, but inside the integral it is unnecessary. Inside the integral it is just multiplication. The integral operation handles the infinite summation for all values of tau. So I agree, the multiplication cannot get every point; that is why the integral is needed. If you find a good video/gif of convolution somewhere, you can think of each frame as the multiplication for a particular tau. The combined effect is the integration. Hope that helps connect the visual to the equation.
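For completeness, the shift relation the question asks about drops straight out of the defining integral plus the sifting property of the impulse: \$\$x(t)*\delta(t-t_0)=\int_{-\infty}^{\infty}x(\tau)\,\delta(t-t_0-\tau)\,d\tau\$\$ The impulse is nonzero only where its argument is zero, i.e. at \$\tau=t-t_0\$, so the sifting property picks the integrand out at that single point: \$\$x(t)*\delta(t-t_0)=x(t-t_0)\$\$ No extra property is needed beyond the definition of convolution; time invariance is what then lets you say that a system which maps \$\delta(t)\$ to \$h(t)\$ maps \$\delta(t-t_0)\$ to \$h(t-t_0)\$.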
H: Dual gate driver inputs/outputs tied together I'm using an IR4427 dual low-side gate driver to drive a single IXEL40N400 IGBT. Since I'm not using the second side, I'm considering tying its input to the active input and output to the active output for higher current availability and thus, I hope, faster switching. Does anyone foresee a problem here? To me it just seems like I'm paralleling two (hopefully well-matched -- they're on the same die) MOSFETs, but I'd like to be certain. Here's the proposed schematic snippet: simulate this circuit – Schematic created using CircuitLab AI: No worries if you put a 10 ohm resistor on each gate driver output.
H: MOSFET (N) On resistance much higher than datasheet under load This is my first post on this forum and I am hoping to be much more involved in this forum. I am currently designing a circuit that needs to switch a load of 200 mA by controlling its GND via a MOSFET (N). I chose a MOSFET with a very low \$ R_{ds} \$ (On) of 25 mΩ (DMG6968U), but when the load switches on, the ON resistance goes up to 12 Ω, causing a large voltage drop. I am really confused. Does anyone know why this is happening? Edit: I am simply tying the gate to +5 V or GND to do the switching. Added the schematic. AI: Thank you all. I solved the problem. The problem was resistance caused by cheap contacts to the PCB that holds the MOSFET; they were adding about 4-5 ohms to the circuit (cheap breadboard). Thank you all for your responses. I changed the board and now it works perfectly.
H: Confusion regarding component JUMPER-PAD-3-2OF3_NC_BY_TRACE I am working on deploying an open source hardware project. In the list of components, they have mentioned one of the components as PAD-JUMPER-3-2OF3_NC_BY_TRACE_YES_SILK_FULL_BOX with value JUMPER-PAD-3-2OF3_NC_BY_TRACE. I tried to search the net but was not able to figure out what this is. Since I am a maths major, not an electrical engineer, I have a hard time figuring this out. Please help me figure out what this is and, if possible, how I can make arrangements for it. Thank you. AI: It isn't really a component at all. It's a solder jumper -- the component footprint forms three contacts, two of which are connected together by a trace. To change the jumper, it's possible to cut the trace between two of the contacts with a knife and drop a bit of solder on the other two to connect them. The picture below shows what these look like on a circuit board. The two bits with the arrows pointing to them (ignore the instructions) are solder jumpers, with the top two contacts connected together by default.
H: Zener over voltage protection from DC hand crank generator I have a board powered by a hand crank generator. Under normal conditions the voltage peaks at 18-20 volts. However, spinning the handle violently will peak higher at 30+ volts. The generator runs through a Schottky bridge rectifier and into a 5 volt regulator rated at a max of 26 volts. The circuit as a whole pulls less than an amp. In trying to keep the circuit simple I was thinking about a 20 volt zener (D5) and a tiny 10 ohm series resistor (R1). When it comes to protecting a voltage regulator, is this the ideal method or should I be looking at something different? AI: Answer to the question in the comment below - If you know the current and voltage you expect, and want to dump the energy into a resistor instead of the zener, you can build an emitter-follower or source-follower-type circuit running off a small zener with a load resistor. I don't think I can draw a schematic in the comments so I will put it in another answer. Follower Circuit: simulate this circuit – Schematic created using CircuitLab
H: Speed of FPGA fabric What limits the speed of FPGA fabric to the MHz range, while CPU pipelines are clocked much faster? Is it the interconnect delays that place a limit on signal timing? AI: Reconfigurability comes at a steep price. Logic cells in FPGAs are complex, making them much slower than the equivalent (but not reconfigurable) hard logic on an ASIC. Routing between logic cells requires long wires and pass gates, both of which increase parasitic resistance and capacitance, slowing signal propagation.
H: convert TTL/CMOS output signals to 0V and 5V Is there an easy way to convert TTL/CMOS output signals to 0 and 5V? For example, if an ic outputs 0.1V as "0", I'd like to convert it to 0V and if it outputs something like 3.9V as "1", I'd like to convert it to +5V. Thanks. AI: Generally a CMOS buffer, or two inverters in series, will do what you ask when powered from a precise reference supply. Anything more than 0.7 Vdd will give you a high output, and anything less than 0.3 Vcc will give you a low. Both will be very close (millivolts) to the supply rails if there is little loading. If you need to accommodate lower inputs you can use something like a voltage translator, an HCT gate or even a CMOS LVDS receiver. For more current and little voltage drop you can use a CMOS-output MOSFET gate driver- some of them can drive very high currents and have an output resistance in the ohms rather than more like 100 ohms+. Using a precise 5V reference for the buffer supply and feeding it a 5V-ish digital PWM input can give you a precise analog voltage when filtered.
H: How to find the data direction register given a port address? I am writing a custom library for Arduino (for example using the ATmega328P). I would like to get the data direction register given an address of the corresponding port address. For example, #include <avr/io.h> DDRB = get_data_direction_register(PORTB) How can I possibly achieve this? AI: If you look at the register map for the ATmega328 the DDR address is always one less than the port address so that is trivial to calculate. It may not be the same for other Arduino processors. You may need to have a table that is accessed by the processor type - that should be available at compile time.
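A minimal sketch of that address trick on the ATmega328P, assuming the avr-libc register definitions (the macro name DDR_OF is made up for illustration; it is not a standard avr-libc macro):

#include <avr/io.h>

/* On the ATmega328P each DDRx register sits one address below its PORTx
   register, so taking the address of the port register and stepping back
   one byte lands on the data direction register. */
#define DDR_OF(port) (*((&(port)) - 1))

int main(void)
{
    DDR_OF(PORTB) |= (1 << PB5);   /* same effect as DDRB |= (1 << PB5) */
    PORTB |= (1 << PB5);           /* drive the pin high */
    for (;;) { }
}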
H: why boost PFC topology is the most used? While searching about power factor correction topologies, the topology I came across almost all the time was the Boost PFC topology, and I'd like to know why? why is it the most widely used one, in comparaison to say, buck PFC or flyback PFC. AI: Active power factor correction (PFC) is all about controlling the circuit's input current waveform and making it as close to input voltage waveform as possible (Sinusoidal in the case of AC mains). Converter topologies such as Buck (Shown in next image) and Flyback work by quickly connecting and disconnecting themselves from supply source in order to modify their output voltage while keeping it continuous with stored energy from capacitors and inductors. Since they are completely disconnecting from power source while switching, input current is inherently discontinuous which is not desired if we are attempting to control it. On the other hand, converters such as Boost (Shown in next image) and SEPIC don't directly disconnect themselves from power source. This causes input current to be continuous. Furthermore, its value can be smoothly controlled from the converter's switching components, making these topologies ideal for active PFC. Active PFC methods for Buck-type topologies exist but tend to be less effective than Boost-type topologies. That said, this paper proposes a 2-stage Boost-Buck converter topology (as opposed to one-stage Boost-Buck), in order to control input current with the first stage and to reduce output voltage with the last one (Which could not be done with a Boost converter alone). Disclaimer: I'm one of the authors of this work.
H: get dynamic length packet from UART? I wrote a program to receive data from a UART ISR. My data packet is something like this: the first 2 bytes contain the Device ID and the type of command. The Data section contains some bytes that can have a different length, from a minimum of 1 byte to a maximum of 15 bytes, and the last byte indicates that the packet is finished. The UART ISR code is:
volatile char UartFlag = 0;
volatile unsigned char count = 0;
unsigned char coder[13];
ISR (USART_RX_vect)
{
    coder[count] = UDR0;
    if (coder[count] == 20)
        UartFlag = 1;
    else
        count++;
}
It receives each byte and saves it in coder. As you can see, when the value 20 is received the receiving process stops by setting UartFlag, and main() calls a function DoSomthing() to work on coder, as below:
while (1)
{
    if (UartFlag)
    {
        DoSomthing();
        count = 0;
        UartFlag = 0;
    }
}
But I have a problem: sometimes the Data section contains the value 20 in its bytes, and that means I don't get a correct packet. I tried using \0 (making the last byte 0 instead of 20), but that can hit the same error. I am looking for the best way to find the dynamic length of the packet. What is the best way to find the dynamic length of the packet? A way that might work is to put the data length in the first byte of the Data section, but it adds extra processing in the sender device. AI: In my opinion, to be 100% sure that it will work in all cases, you have to do what you thought of yourself: include the length of the data (the number of bytes you are sending) as an additional header field, for example after the Type field. This is actually what the Ethernet protocols do. About the extra processing on the sender: I don't know how the sender is implemented, but it doesn't seem to require much processing effort. The sender already knows how many data bytes will be sent, no? So, as you fill in the other two fields (Device ID and Type), you can fill in this additional field as well.
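A minimal sketch of a length-prefixed receive ISR along those lines (AVR-style, using the same register names as the question). The frame layout assumed here is [Device ID][Type][Length][Length data bytes] with no trailing terminator byte, and all names apart from the AVR registers are made up for illustration:

#include <avr/io.h>
#include <avr/interrupt.h>

#define MAX_DATA 15

volatile unsigned char frame[3 + MAX_DATA];  /* ID, Type, Length, then data */
volatile unsigned char idx = 0;
volatile unsigned char expected = 0;         /* total frame size once Length is known */
volatile unsigned char frame_ready = 0;

ISR(USART_RX_vect)
{
    unsigned char c = UDR0;

    if (frame_ready)                 /* drop bytes until main() consumes the frame */
        return;

    frame[idx++] = c;

    if (idx == 3) {                  /* the third byte is the Length field */
        unsigned char len = frame[2];
        if (len < 1 || len > MAX_DATA) {   /* implausible length: resynchronise */
            idx = 0;
            return;
        }
        expected = 3 + len;
    }

    if (idx > 3 && idx == expected) {
        frame_ready = 1;             /* main() processes frame[] and then clears this */
        idx = 0;
    }
}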
H: Increase low-frequency gain of a cascode amplifier I've been designing a noise generator (for fun) with the goal of getting a smooth frequency response between 10Hz and 10MHz. Since I don't have any "fancy" op-amps, I decided to go with a discrete design that will eventually use a cascade of cascode amplifiers. I the first stage, I would like to use a single zener to bias the amplifier and act as the noise source. Design Notes: I chose the cascode amplifier because it is not subject to the miller effect In order to take advantage of the avalanche effect, Vz > 6.5V (the 1N4737 drops 7.5V @ 30mA) One option is to use the zener only as a noise source, and perform the biasing separately, but I would prefer not to take this route The 1N4148 is used only for biasing and can be replaced with a resistor if necessary The transistors are set to have a bias current of 10mA (max gain point for the 2N3904) I realize op-amps can be used to do the same thing, but I wanted the additional challenge of coming up with a discrete design, and op-amps with a 100MHz bandwidth don't come cheap. Questions: Does my simulation provide a good approximation of the desired behavior? Recall that the goal is to amplify white noise between 10Hz and 10MHz. How can I reduce the value of the emitter bypass capacitor (C2) while maintaining the desired frequency response and passband gain? If I were to build this circuit using an aluminum electrolytic capacitor for C2, will I run into ESR problems at high frequencies? AI: If I were to build this circuit using an aluminum electrolytic capacitor for C2, will I run into ESR problems at high frequencies? Not only ESR but ESL (inductance) will cause a resonant frequency and a gain peaking somewhere in the mid kHz to tens of kHz range and above this, gain will gradually reduce as the inductor becomes more dominant. How can I reduce the value of the emitter bypass capacitor (C2) while maintaining the desired frequency response and passband gain? Difficult given that you need a gain of ten and that you don't want the output impedance to rise much more than 680 ohms - I would consider using ceramic 22 uF capacitors and parallel them up to give you 220 uF. Again, you have to watch out for resonant frequency problems and you may need to add some 100 nF capacitors across those 22 uF caps to get a fairly flat response. Try looking up a few and adding the parasitic components to your model. So, to answer your first question: - Does my simulation provide a good approximation of the desired behavior? Recall that the goal is to amplify white noise between 10Hz and 10MHz. No, but this is easily remedied by adding parasitic components.
H: Should I do PFC? I came across this question:why boost PFC topology is the most used? Now I ask myself, should I do PFC too? My system is a servo with diode bridge rectifier and different current ratings. Very price sensitive. So I thought, I can do a boost PFC in place of inrush current limit (resistor and relay). So at least in my mind it's allowed. But what other benefits will that do for the system and for the user? AI: The main benefit of a PFC is being allowed to sell your product and for the customer to be able to use your product. That's a fairly big one. This requirement kicks in at a predefined power level. Other than that it makes it easier to deal with universal input as others remarked and generally makes for a nicer operating profile as your power conversion is not pulsed. That's mainly relevant for higher power conversion, in other cases its really just added cost and complexity. For inrush current you may want to consider a triac instead of a relay. For background this kind of requirement originates with fluorescent lighting which is a nasty load for electric grid. Basically drawing lots of current to uncompensated inductive (or capacitive) load throws off the mains theta which more or less creates load for no benefit. For SMPS the deal is using the whole sine wave and not just the top n percent of it which also messes up the power grid The power quality side of things is a bit more complex having to do with real and "imaginary" load but it comes down to power distribution no likey.
H: Stability problem OpAmp and FET source follower I just found this site and read a few great interesting articles but nothing that would solve my problem I am hunting since a while. I have a OpAmp circuit with a N-FET source follower that regulates the current of some LED's. LED current is set at 700mA. The ripple on the LED voltage source is at around 590kHz and is seen at 20mV on the source shunt of 0.3R. This feedback goes directly to the OP-. The OP+ has a quite stable reference voltage. This means I can measure a 20mVpp ripple between OP+ and OP-! When I measure the OP output and OP- I see that both signals have the same phase and are NOT inverted. This means that 20mVpp on the 0.3R shunt generates 40mV on the gate of the FET with the same phase. But it should be the opposite. So I guess the OP is not stable at this frequency. Otherwise it works fine and regulates the current accurate and quick. It's just that there is lots of ripple noise (and thus current ripple) and I would like to get rid of it! What am I not seeing or did wrong here? Any hints would be appreciated! Here a scope picture. The reference voltage at OP+ is stable. But I can measure a 20mVpp ripple between OP+ and OP-! This means the OpAmp is not really doing what it should do. On the picture is the gate voltage with the shunt voltage as comparison. The shunt voltage (OP-) is yellow with about 20mVpp ripple. The pink is the gate voltage at around 1.75V and it has the same ripple but with the same phase and 40mVpp. I always measured with the short, spring type GND connectors on the probes and took the OpAmp GND as reference. So my question is here: why doesn't the OpAmp try to remove the ripple as the feedback of the source is directly connected to OP-? Why is it running at the same phase? The GBW is at 2.2MHz. Isn't this circuit running at unity gain? What am I missing? Thank you very much for any hints/inputs!!! AI: You have oscillations because there is a chance that the circuit is unstable due to the gate-source capacitance of the MOSFET adding significant phase shift within the negative feedback closed loop containing the op-amp. Simple analysis would make you think that because the MOSFET is connected as a source follower (regarding negative feedback) any gate capacitance will be neutralized but this isn't the case if you look into the detail. The gate source capacitance is about 1nF and approximately one-third of this will be "seen" by the circuit because MOSFETs are not perfect voltage followers. Your waveform shows this: - Source voltage (yellow) is about 2/3 of gate voltage (pink) implying that between gate and source is a voltage of about one-third hence residual capacitance projected back to the op-amp output via the 100 ohm resistor is also about one-third. It should be noted that I have assumed the capacitance to be about 300 pF but, on further inspection of the DS it would be more like 450 pF. So you have a 100 ohm resistor and approximately 300 pF (or maybe 450 pF) forming an extra phase shift. This will add 45 degrees phase shift at about 5 MHz. However, it doesn't mean that the circuit won't oscillate at lower frequencies - the phase shift will be less than 45 degrees of course but, if there is enough added phase shift to bring the phase margin down to zero at above unity gain, you get an oscillator. 
Clues lie in this picture taken from the op-amp data sheet: - It's not absolutely clear cut because TI (in their infinite wisdom) have not delivered a graph of pure open loop gain and phase so this is the nearest we can use. For the gain, anything above the red line is potentially able to oscillate. For the phase anything equal to the blue line will cause oscillations providing gain is above the red line. As you can see, when 100 pF is placed on the output, the op-amp avoids turning into an oscillator i.e. phase margin becomes zero at about 3 MHz but gain has just dropped to about 7 dB below unity = no oscillation. What is the loading effect of the 100 ohm resistor and residual gate capacitance (~300 pF)? Difficult to be precise but if it's like adding a 100 pF capacitor then it's a fair comparison BUT, remember the 100 ohm and ~300 pF add probably +30 degrees of phase shift around the loop at about 3 MHz so it'll oscillate, possibly about 1.5 MHz for a guesstimate. The ripple on the LED voltage source is at around 590kHz and is seen at 20mV on the source shunt of 0.3R This is significantly below the 1.5 MHz that I estimate but the mechanism outlined above cannot be ruled out given the lack of proper information in the data sheet. At 3.3 volts the phase margin may be worse for instance. My estimate for residual gate capacitance might be significantly low as well. Poor grounding may also contribute. Remember also that the LEDs are fed from an independant supply (9.5 volts) that may have significant ripple at 560 kHz and this will not be greatly reduced by the op-amp because it only has a gain overhead of about 15 dB i.e. it won't cope well with this as a disturbance.
H: Full Wave Rectifier Efficiency I was reading this article and extracted from it that "The maximum efficiency of a Full Wave Rectifier is 81.2%". Is that true, or alternatively is there some other fundamental limit on AC/DC conversion efficiency? Since the FWR is the canonical stage in converting AC to DC, does it stand to reason that any DC-operating electronics that are not run from battery power intrinsically waste at least 18.8% of the energy delivered to them by the grid? That sounds crazy to me; what am I missing? AI: The writer of your reference article has himself defined the efficiency differently than usual. In his lossless circuit (= transformer, diodes, resistive load) all power taken from the mains voltage goes to the load, but only 81.2 percent of it is DC; the rest is the AC component. The load gets it, too. It's not wasted as dissipation in the rectifying circuit. The writer's definition has some meaning if we could have a filter that only removes the AC component from the output voltage without any side effects. Inserting that filter causes the power to the same resistive load to drop by 18.8%. Is that drop lost as thermal dissipation? Yes, if the filter dissipates it. In practical circuits, especially if the filter is not dissipative, the power drop is difficult to know without complex calculations. ADDENDUM: Commentator @Dave Tweed is right. A lossless filter and diodes interact non-linearly, waveforms get distorted and the action changes radically when compared to the original with resistive-only loading => the presented "efficiency" of 81.2% loses its meaning.
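For reference, that figure is just the ratio of DC power to total power for an ideal full-wave rectified sine into a resistive load: with peak voltage \$V_m\$, the average of the rectified wave is \$2V_m/\pi\$ and its RMS value is \$V_m/\sqrt{2}\$, so \$\$\eta=\frac{P_{DC}}{P_{total}}=\frac{(2V_m/\pi)^2/R}{(V_m/\sqrt{2})^2/R}=\frac{8}{\pi^2}\approx 0.81\$\$ which is where the roughly 81% "rectification efficiency" quoted in such articles comes from - a statement about waveform composition, not about energy that must be lost.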
H: 50W LED with 10W power equal to 10W LED at full power? If I buy a 50W LED and power it at 10W will it have the same brightness as a 10W LED at full power? AI: All else being equal, yes. Of course, all things are never equal. First, the lower-power unit will probably be physically smaller than the 50 watt, and this will include the light-emitting area. If you look directly at the LED while it operates, there is a good chance that the lower-power unit will appear brighter, since it is emitting light from a smaller area. However, the appearance is irrelevant. What counts is the total light emitted, not the power density per unit area. Second, (again assuming a physically smaller LED) heat sinking is likely to be more effective on the higher-power LED. At the same power it is likely that the low power LED will run hotter, and this will reduce its efficiency somewhat. Finally, the power rating for an LED refers to the total input power, not the visible output. So the two LEDs may not use the same process, and may have different intrinsic efficiencies.
H: Radio clock sync circuit identification I took out a little board from a radio clock that looks a lot like this one. I believe this is used for automatic clock setting - the coil is probably used as an antenna. It looks like this: The contacts on the far sides I believe are ground and 3V power supply (since the device uses 2 AA batteries). The two in the middle, labelled T and P2, I have no idea what they do. There is a crystal resonator on the back of the board, and the label "JS-D65" (which yielded no useful search results) I tried bringing up one of the middle contacts and listening on the other, but it stayed on the voltage. Because of the chip under the epoxy, I assume the communication is digital. Any help would be appreciated. AI: That looks like one of the modules for receiving the radio time signals transmitted from various places including: UK - Time from the NPL (was MSF from Rugby) Frankfurt, Germany - DCF77 Colorado, USA - WWVB And others Receiver modules might receive the time signal from only one or a few sources, not all, as they are transmitted on a range of different frequencies. I tried bringing up one of the middle contacts and listening on the other, but it stayed on the voltage. It's not clear what you mean by "bringing up" and "on the voltage", but hopefully you haven't damaged the module by whatever you did. I assume the communication is digital. Yes. The protocols are explained on the different Wikipedia pages above. The two in the middle, labelled T and P2 If you search for similar modules, you will find that they typically have 2 signal connections (in addition to the two power connections): one signal (input to the module) to switch the module between "standby" (effectively "off") and "on"; the other signal (output from the module) is the encoded time signal output. That fits with you having 2 unknown connections. Based on the letters used, it would make sense if T is the digital time signal output, and P2 is the power control input. However you will need to investigate further, to see whether that hypothesis is correct. Different modules have different specifications for those signals. If you still have the original clock, you will reduce your reverse-engineering work by reconnecting that module to the clock and monitoring those 2 signals with an oscilloscope, when the clock is in use. It will be useful if you can determine that a device (probably MCU) elsewhere in the clock, is actively driving one of those signals (e.g. the P2 signal), but is not driving the other one (because this one is an input to the MCU in the clock). If you cannot connect that module back to the original clock, then my approach would be something like the following (but since I can't see or test your specific module, I cannot guarantee that this will work for you - use it as a basis for your own analysis): I would pull P2 low via a 10k resistor to 0V (don't connect it directly to 0V or 3V, in case it is actually an output on your specific module), and then monitor the T signal with an oscilloscope. If T does not seem to be driven, then move that 10k resistor to pull P2 high to 3V and re-test. (The idea is to try with P2 low and then high to try to enable (switch-on) the module, so that it drives the T signal, which I suspect to be the time output signal). If T still seems not to be driven, then the T signal might be open collector and require a pull-up resistor itself (as I explained, different modules have different signal specifications). 
To allow for that possibility, add another 10k resistor between T and 3V, then repeat the trial-and-error process with P2 as above.
H: How much power does an FPGA use Hello I have done a bit of research on google and on SE about how much power a FPGA uses and the answers are all unacceptable in my opinion. Therefore I am going to ask in another way. I need a generalized statistical analysis of the average power consumption to expect for all of my future FPGA related projects. I need just a ballpark figure so that I can use that information when making decisions about more specific elements of a project involving an FPGA I do not particularly care about actual real world parts, nor the manufacturers of those parts. I also do not care about getting an exact number. My preliminary research seems to suggest 90% of FPGA devices can be expected to operate between 10 to 100 watts. And to obtain detailed information would require designing a project around a specific FPGA as the variance can be much more or much less with certain chips in certain arrangements. AI: In addition, pointing me to the power estimator utilities doesn't help to solve my problem as I'm in such an early stage in development, that I don't have the info required to make the sheet give useful information. If you want a worst-case power estimate, just plug in worst-case values. All clock management blocks active at maximum frequency. All logic occupied and switching with 50% transition density at maximum frequency. All DSP blocks, Multiplier blocks, SERDES blocks, etc., operating at maximum frequency. All IO used and outputting with the worst-case logic standard at maximum frequency with 50% transition density. No number anybody can give you will be any more useful than the one you get this way. Alternatively, do more work on your design until you can get a better estimate of the resource requirements, then feed that to the estimator tool. What I want to know is the range between absolute minimum power consumed to the absolute maximum power consumed for various popular FPGAs The minimum power consumed isn't very interesting. That's just the case where the device is unprogrammed or you've programmed it with a single flip-flop that never changes state. If you really need a value (say, because your power supply has a minimum load to maintain regulation), use the estimator tool and specify no resources used. I looked at one estimator spreadsheet from Altera(Intel) and they specifically report a static power consumption --- so use that. In any case your device will be in the unprogrammed state briefly every time you turn it on, so your power system will need to handle it. I'm ... interested in ... how much power the device itself can theoretically consume before it disintegrates in a blinding flash of smoke. You can work this out from the package thermal data. For example, looking at the data for various packages available for Altera's high end FPGA's (here), the minimum \$\theta_{JA}\$ value is about 4 C/W (with maximum airflow). If we want to keep the junction temperature below 100 C with 40 C ambient, that means we need to keep the power consumption below about 15 W. But if you're really desperate you might consider adding external heat sinks or even water-cooling your FPGA. Edit The design decision of using or not using an FPGA being based on power consumption. With no prior experience with any FPGA at all, I need some sort of scale to determine if I should further investigate using an FPGA, or fall back to a slower MCU for the processing. 
In this case, the maximum power possible to use in an FPGA is totally irrelevant, and you need to do more study about how much power would be needed to solve your actual problem. The usual reason to choose an FPGA over an MCU isn't to save power (it won't); it's to achieve a required throughput (calculations per second) or to maintain control of the latency in the calculations. An FPGA can achieve greater throughput than an MCU for many problems due to its ability to do many calculations in parallel. If your problem is solvable by an MCU, that will typically be the lower-power and lower-cost solution, and you should choose the MCU. It's when you find a problem that an MCU can't keep up with that you'll turn to the FPGA.
H: Is this a reliable circuit to monitor AC mains voltage? I want to design a simple circuit to monitor AC mains voltage and take simple decisions if the voltage falls below a specific reference voltage. It's important to mention that accuracy isn't an issue at all. Circuit description : The input voltage is stepped down to 9 V peak voltage and rectified, the rectifier plus the capacitor is intended to act like a peak detector and the output voltage at C1 would represent the peak input voltage (of course after stepping down). This voltage is scaled via R1 and R2 then compared to a reference voltage using hysteresis comparator (using LM358 but not found in ltspice). Is this a valid concept ? AI: For reference and to protect against future edits, here is your circuit that will be discussed: The basic idea is OK, but there are some things to consider: Powering the opamp from the same signal you are trying to measure is asking for trouble. I don't like how this circuit looks at peaks instead of more a average. Ideally you probably want RMS, but that's difficult to do. You know the voltage is mostly a sine, so the average tells you what you need to know. The problem with grabbing peaks is that it's quite susceptible to noise. You're only getting information from the waveform at two points each cycle. This means you have little opportunity to filter out noise. Worse yet, the susceptibility to noise is non-linear. One positive glitch at a peak will make the measurement read high for quite a while. A single negative glitch at a peak will go largely unnoticed. Put a resistor before the capacitor so that you get low pass filtered absolute value, not recent max peaks. You may not need a opamp or comparator at all. If the output of OP07 is going into a microcontroller, then you might as well feed the filtered and clipped analog signal to the micro directly instead. The micro can trivially compare the result to some threshold in firmware. Then changing the threshold is a firmware change, not a hardware change.
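If the filtered signal does go into a microcontroller ADC as suggested above, the comparator-with-hysteresis part is a few lines of firmware. A sketch (Arduino-style; the pin and the two thresholds are placeholders to be set from your divider ratio and the mains level you actually want to detect):

// Software comparator with hysteresis on the filtered mains-sense voltage.
const int SENSE_PIN   = A0;
const int THRESH_HIGH = 620;   // ADC counts: above this, mains considered OK
const int THRESH_LOW  = 580;   // ADC counts: below this, mains considered low

bool mainsOk = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int v = analogRead(SENSE_PIN);
  if (!mainsOk && v > THRESH_HIGH) {
    mainsOk = true;
    Serial.println("Mains OK");
  } else if (mainsOk && v < THRESH_LOW) {
    mainsOk = false;
    Serial.println("Mains LOW");
  }
  delay(20);   // a few samples per mains cycle is plenty for this
}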
H: Verilog: Implementation Using Primitive Modules vs. Bit-wise Operators The textbook I'm reading implements 1-bit adders using built-in primitive modules:
module yAdder1(z, cout, a, b, cin);
  output[0:0] z, cout;
  input[0:0] a, b, cin;
  wire[0:0] tmp, outL, outR;
  xor left_xor(tmp, a, b);
  xor right_xor(z, cin, tmp);
  and left_and(outL, a, b);
  and right_and(outR, tmp, cin);
  or my_or(cout, outR, outL);
endmodule
But why not use bit-wise operators? Seems simpler.
module yAdder1(z, cout, a, b, cin);
  output[0:0] z, cout;
  input[0:0] a, b, cin;
  assign z = (a ^ b) ^ cin;
  assign cout = (a & b) | ((a ^ b) & cin);
endmodule
Unless bit-wise operators implicitly use primitive modules? AI: Why not use Boolean operators?
assign z = (a != b) != cin;
assign cout = (a && b) || ((a != b) && cin);
All three options are functionally equivalent ways of implementing a standard full adder. The simple answer to your question (and to my rhetorical one) is probably simply that your textbook is trying to build up knowledge, starting with low level design and working up. In the Verilog module we teach at uni, we start off with primitives (gate level), then move on to Verilog bitwise and Boolean operators, then move to higher levels of abstraction (e.g. {cout, z} = a + b + cin). We find that this helps introduce people who may not have a strong programming or digital background into FPGA design, allowing them to get a feel for how higher and higher levels of abstraction work.
H: Are radio waves naturally polarized like light or is that a function of how they are produced? I am thinking of let us say AM or FM radio being transmitted by a vertical antenna. Would the they be polarized vertically or horizontally and wouldn't the angle of the receiving antenna then determine the strength of the signal. AI: Radio waves emitted by an antenna have a specific polarization, and receiving antennas are generally sensitive only to a specific polarization. So in principle if the transmit antenna were strictly vertical and your receive antenna were strictly horizontal, you would receive nothing. But there are a couple of complexities: Partially-aligned linear radio antennas can receive each other with modest losses. A circularly polarized antenna can receive any linear polarization with modestly reduced efficiency, and vice versa. Short-wave signals are generally received after bouncing off the ionosphere, which randomizes the polarization. Similarly, Wi-Fi and other 2.4/5 GHz signals are often bounced off buildings or walls, which tends to randomize the polarization. Signals that are not narrow-band can have complex mixtures of polarizations, and polarization can change very rapidly with time. The key difference between radio waves and visible light is that most of the radio signals we are familiar with are produced by coherent emission processes, which (usually) produce fully-polarized radio waves. More, almost all detectors of radio waves coherently detect just one polarization; radio astronomers usually use pairs of crossed dipoles so we can record both polarizations and reconstruct the input signal's polarization state. Most of the visible light sources we deal with are incoherent and produce unpolarized light (an even mixture of polarizations) and our detectors mostly aren't sensitive to polarization anyway. Lasers are coherent and indeed are polarized, but unless the laser is designed to have a stable polarization, you tend to get random jumping around on very short time scales, averaging out to unpolarized. The human eye is in fact very slightly sensitive to polarization, though we don't usually pay attention, and there are processes - like reflection - that readily add polarization to light, hence the utility of polarized sunglasses (to preferentially block light reflected off horizontal surfaces).
H: Why put unpopulated components on a BOM? I've ran into quite a few engineers from unrelated backgrounds that put unpopulated components on the BOM. Some will do a section clearly labelled DNP at the bottom, others will leave them dispersed throughout the BOM, but highlight the rows. Having a DNP section seems like the way to go if you must do this, the only downside I can think of being that there will have to be more manual editing of the CAD package output. (Have personally witnessed this, the DNPs were changed at the last minute, the DNP section didn't get editted properly, and parts that shouldn't have been on the board were placed.) Leaving them throughout and highlighting the rows seems suboptimal because there could easily be duplicate rows for populated and not populated, and again, more manual editing. I don't see why this practice is necessary. A BOM by definition is a list of things required to build something. If a component is not on the BOM and assembly drawing, it should not be on the board. Adding components that aren't actually there just seems like a source of confusion further down the line for whoever enters the BOM into the ERP and purchasing. What does putting unpopulated parts on the BOM achieve that leaving them off the BOM and assembly drawing doesn't? AI: If you don't explicitly document that these components are not to be placed, you will inevitably have your manufacturing team notice that there is a location on the board with no corresponding line in the BOM, and delay the build to send an engineering query asking what is supposed to be placed there. Explicitly documenting not-placed components avoids these queries, much like "this page intentionally left blank" in the manual avoids people asking what was supposed to be printed on the pages that were blank in their copy.
H: Transformer with Single Turn Secondary in an Induction heater I was thinking of a circuit which essentially has a series LC circuit connected in between an H bridge. There were certain things, I wasn't sure if I understand correctly, and would appreciate any input about it. Neglecting the details of the L & C values. Now if I was to turn on M1 and M4 at the same time, while turning off M2 and M3 and the other way around in the other cycle. I would expect an AC square wave, which when passes through the LC low pass filter, I would expect a Sine wave in the middle of L and C. Here is the part I want to confirm I understand correctly: Now if I was to run this at resonance, then the voltage in the middle will have the highest peak at that frequency. If my Inductance was actually a work coil(that is used in an induction heater) and I was to play a steel rod in between or across it, then am I correct in assuming that this will behave like a step down transformer ? So my work coil will be the primary with the high voltage, and the steel rod would be the single turn secondary, thus voltage steps down and CUrrent multiplies and causes eddy currents in the rod to heat it up. Thanks for looking. AI: You have everything right. The only slight clarification I'd make is in this paragraph I would expect an AC square wave, which when it drives the LC resonator, I would expect a large sinewave current through the LC, and a large sinewave voltage at the middle of LC.
H: Is a current signal more robust against noise than a voltage signal? Why? I am trying to transmit a 3.3V digital signal over a 20 cm distance. The problem is that the environment might be noisy because of a high-current wire near it. I read that a current signal is more robust than a voltage signal. Does that apply to my case, and why? AI: It's not whether the signal is coded as voltage or as current that makes it more or less robust; it's the power involved. Receivers of voltage signals (e.g. a CMOS gate input) have extremely high impedance, so a noise voltage source does not need to provide much power to disturb the signal. That's why you shouldn't use it for "long" (some 0.1 m) distances: changing magnetic fields through the area circumscribed by the signal and GND lines can easily induce some volts, although not much power (BTW: that's why twisted pairs are used; the noise voltages of each twist roughly cancel each other out). Current signals (e.g. 4-20 mA) work against resistors at the receiver side that require quite a voltage to generate the current. That means a noise (voltage or current) source needs to provide considerable power to have an effect. This is what makes it more robust. But as thobie mentioned, there are other good ways of reducing noise interference than just requiring more power, e.g. using differential signaling. That way you can still use voltage signals with high-impedance receivers (but you need a second signal line). That's what is used, e.g., for USB.
H: What 3-input logic gate is this? I was experimenting with logic gates on my breadboard and tried a 3-input logic gate. I got the following result:
Input | Output
A B C |   ?
0 0 0 |   1
0 0 1 |   0
0 1 0 |   1
0 1 1 |   0
1 0 0 |   1
1 0 1 |   0
1 1 0 |   1
1 1 1 |   1
What logic gate could this be? Thanks AI: Functionally it is \$Q = A\cdot B + \overline{C}\$, i.e. the output is high whenever C is low, or when A and B are both high: - And it doesn't have a specific name unless someone somewhere in some obscure place has named it.
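A quick brute-force check in C that \$A\cdot B+\overline{C}\$ reproduces all eight rows of the table above (nothing assumed beyond the truth table itself):

#include <stdio.h>

int main(void)
{
    /* Expected outputs for ABC = 000..111, copied from the truth table. */
    const int expected[8] = {1, 0, 1, 0, 1, 0, 1, 1};

    for (int i = 0; i < 8; i++) {
        int a = (i >> 2) & 1, b = (i >> 1) & 1, c = i & 1;
        int out = (a & b) | !c;                 /* candidate expression */
        printf("%d %d %d -> %d %s\n", a, b, c, out,
               out == expected[i] ? "ok" : "MISMATCH");
    }
    return 0;
}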
H: Arduino-to-Arduino SPI unexpected characters I've been trying to follow the posts on Nick Gammon's page which are admittedly a bit old, but seem to be everyone's recommendation for trying out Arduino to Arduino SPI. I'm able to get communications working, but there's a small issue I can't seem to pinpoint. Every once in a while there are three characters that show up in my output that are completely unexpected. Where are these characters coming from? Here is what they are: Hello, world! Hello, wo&⸮BHello, world! Hello, world! Hello, world! Hello, world! In the second row, you'll see &⸮B in place of what should be rld!\n. I have two Name-brand Arduino UNOs both being powered via USB to my computer, and they each have pins 13,12,11, and 10 directly wired to each other. Here's the code I'm using: The unexpected characters are the same every time, and seemingly show up as UTF-8...the decimal values for those characters amounts to 38, 226, 66, 10. I'm not sure if that has any significance. Master: #include <SPI.h> void setup (void) { Serial.begin(115200); digitalWrite(SS, HIGH); SPI.begin (); SPI.setClockDivider(SPI_CLOCK_DIV8); } void loop (void) { char c; // enable Slave Select digitalWrite(SS, LOW); // SS is pin 10 // send test string const char p [14] = "Hello, world!\n"; for (int i = 0; i < 14; i++) { SPI.transfer(p[i]); Serial.print(p[i]); } // disable Slave Select digitalWrite(SS, HIGH); delay (2000); } Slave: #include <SPI.h> volatile char buf [100]; volatile byte pos; volatile boolean process_it; void setup (void) { Serial.begin (115200); // have to send on master in, *slave out* pinMode(MISO, OUTPUT); // turn on SPI in slave mode SPCR |= _BV(SPE); // turn on interrupts SPCR |= _BV(SPIE); // get ready for an interrupt pos = 0; // buffer empty process_it = false; // now turn on interrupts //SPI.attachInterrupt(); } // SPI interrupt routine ISR (SPI_STC_vect) { byte c = SPDR; // grab byte from SPI Data Register if (pos < sizeof buf) { if (c == '\n') { buf [pos++] = '\0'; process_it = true; } else{ buf [pos++] = c; } } } void loop (void) { if (process_it){ char * bufp = buf; Serial.println (bufp); pos = 0; process_it = false; } } AI: Despite the fact that both units were being powered from the same USB Hub, there was a difference in voltage on the GNDs of each Arduino. Jumping the GND pins between the two got rid of the stray characters and strange behavior. D'oh.
H: Difference between cheap and costly DAC chips? For same input parameters (resolution and sample rate) are costly DAC chips meant to produce signals closer to real world analog than cheaper ones? AI: The resolution may be the same spec, but the "noise" will be different; that means the step-size will vary away from ideal on the cheaper DACs. A 12 bit DAC, promising to cover 0.0 volts to 4.096 volts in precise 1.0000 millivolt steps, is a good test case. Or even 8 bit DACs. Read the datasheets. You'll also find the cumulative errors, the sum of those steps, will be larger on the cheaper DACs. Thus at mid scale, where you expect 2.048 volts, you likely will be nowhere near 2.048 volts. Additionally, the codes right before and right after 2.048 likely will show HUGE errors (the dreaded mid-scale nonlinearity). The two specs of interest are DNL and INL. DNL is differential nonlinearity, describing the finegrain wander of Vout (or Iout) away from ideal 1.00000 milliVolt steps. Integral nonlinearity described the cumulative wander of Vout away from ideal. Often the DACs are trimmed for ZERO and for FULLSCALE accuracy. That will be spec'd (if not, you should not consider that DAC for use). The INL describes the DAC output error, worst case, at voltages other than exactly ZERO and exactly FULLSCALE.
H: Wiring help for a DC LVDT? I have a DC LVDT here with a built-in signal conditioner: http://www.te.com/usa-en/product-CAT-LVDT0004.html I have a weak background in electronics, so I really have no clue how I'm supposed to extract signals from this sensor. Ideally I would want it hooked up to an Arduino. Anyways, three wires: Red: +VCC (Loop supply Input) Black: -LOW (Loop supply Return) Green: Case Ground Any help on setting this up would be much appreciated! AI: I'll assume you've read through the datasheet. Here is a decent article on 4-20mA sensors and how to work with them. Notice that you need to provide something between 10 V and 28 V DC to operate this sensor, which is not going to be particularly "Arduino-friendly," but you can use an interface module to convert the sensor response to a 0-5V signal. You can also roll your own using a trans-impedance amplifier circuit, but I'm thinking you'll do better with off-the-shelf tech, maybe something like this, which integrates an Allegro ACS712 current sensor. As for what to do with the chassis ground... I would connect it to your system ground (i.e. the Arduino ground). And you'd wire it something like this: simulate this circuit – Schematic created using CircuitLab
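If you do end up rolling your own instead of using the ACS712-based module linked above, the simplest receiver for a 4-20 mA loop is a burden resistor in the loop return, read by an Arduino analog pin. This is a different approach from the one in the answer, and the 165 Ω value, the pin choice, and the 5 V ADC reference are assumptions for illustration (4-20 mA across 165 Ω gives roughly 0.66-3.3 V, safely below 5 V):

// Read a 4-20 mA loop through a burden resistor to Arduino ground.
// The loop supply (10-28 V) is separate from the Arduino's 5 V.
const float R_BURDEN  = 165.0;  // ohms; the loop current develops a voltage here
const int   SENSE_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int   raw     = analogRead(SENSE_PIN);       // 0..1023 with the 5 V reference
  float volts   = raw * 5.0 / 1023.0;
  float loop_mA = volts / R_BURDEN * 1000.0;   // I = V / R
  Serial.print("Loop current: ");
  Serial.print(loop_mA, 2);
  Serial.println(" mA");                       // 4 mA = minimum stroke, 20 mA = maximum
  delay(500);
}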
H: Raspberry PI, PCF8574AT, MPU6050, BMP180 i2c voltage difference? I'm building a rpi 3 b+ based robot which collects data from a MPU6050 (GY-521 breakout board, I power it with 5V), a BMP180 (3V input) and displays some data on a LCD via a PCF8574AT i2c converter (which I want to power with 5V, because on 3V the LCD loses contrast). Currently it is utilizing an Arduino connected with a USB cable to collect the data and print to the LCD (so all the devices are connected to the Arduino, not the rpi, but I want to get rid of the Arduino completely and control everything from the rpi directly). The problem is that the rpi supports max 3.3V on the GPIO. Information about these modules' i2c voltage is hard to find, although I read in the datasheet of the PCF8574AT converter that it has a 'non-overvoltage i/o' - but I'm still unsure if that really means it can be safely connected to the rpi? And won't the mpu6050 and bmp180 also fry the rpi? Will I need a voltage level converter or are these devices compatible with the rpi's 3.3V GPIO i2c? Links to docs: pcf8574at: https://abc-rc.pl/templates/images/files/995/1426613116-konwertertwi.pdf Mpu6050: https://abc-rc.pl/templates/images/files/995/1443347244-dokumentacja.pdf AI: I2C is an open-collector bus. You could pull the bus up to the RPi's 3.3 V and that would be safe for the Pi, but then the 5 V devices won't work reliably. They expect a minimum input-high voltage of 0.7*VDD, which is 0.7 * 5 V = 3.5 V, so there is no guarantee that they will register a high driven only to 3.3 V through the pull-up resistor. You will need an I2C level converter. Please search the site, as there are hundreds of questions explaining these circuits.
H: Why doesn't my 555 + MOSFET setup drive my motor? I have a 3V motor driven by a MOSFET that gets its input off an inverter circuit that comes off the output of a 555: The idea is that when you press the button, it waits for about 20 seconds and then starts the motor. (It's for a rotating banner that goes under a drone; I need it to start spinning after the drone lifts off from the ground and not before!) So it seems to work if I put an LED instead of the motor, but with a motor it makes a mess and becomes flaky. Sometimes the motor turns on, sometimes it doesn't, sometimes it runs for a few seconds and then turns off. The motor is supposed to run on 3V, but I think it's not getting the full 3 volts. It's hard to measure as the voltage seems to change when I put the voltmeter on! When I replace it with an LED it seems dimmer than it would be if I hooked it up to straight 3V. (The "GDS" circle thing is an IRFZ34N N-type power MOSFET, sorry I didn't know how to draw it.) I tried replacing the 9V with 3V and 16V, still doesn't really work. Also tried replacing the 1k resistors with 100, they just got really hot. Any ideas what I might be doing wrong and how I can get this to work? (Thanks for your patience, I'm a bit of a noob) AI: I bet you will learn a lot more if you figure it out yourself. So, rather than telling you the answer, I am going to help you to get the answer yourself. First break the problem/circuit into parts and check each part against how you think it should work. I recommend checking, in this order: the 555 and its RC network, the BJT, the FET, and the motor. For each part, consider what is required for each state. Start at the motor and work backwards, checking each stage with a DVM until something doesn't respond as you expect it would. Example, for the FET:
Input (G)   Output (D)   Result
0 V         Off (3 V)    Motor not running
9 V         On (0 V)     Motor running
H: How does 16550 UART handle non-integer baud rates? The 16550 UART calculates the baud rate using the formula 115200 divided by the 16-bit number obtained by concatenating the High and Low DL registers. There are several well-known divisors that get you well-known baud rates, and they are easy to calculate. A baud rate of 9600 is just 115200/12. 57600 is 115200/2, 300 baud is 115200/384, etc... 115200 has 90 integer divisors. My question, which I don't /think/ is answered in the datasheet, is what happens when you input a value for the divisor that doesn't produce an integer baud rate, like, say, 7. 115200/7 = 16457.142.... I can see any one of the following being potential outcomes: The chip attempts to operate at the baud rate specified including fractional timings The divisor is considered invalid, the change is ignored, and the chip continues to transmit at the previous rate (perhaps raising an error?) The baud rate is rounded to the nearest integer (i.e., '7' would result in a baud rate of 16457) The baud rate is rounded to the nearest integer divisor of 115200 (one of the 90, so '7' would result in 14400 baud) Something else I haven't thought of. I know, for example, some datasheets warn about writing '0' to these registers, as 115200/0 is undefined. I was going to test this on a Raspberry Pi 3, only to discover the UARTs on there aren't real 16550s. I will attempt to test real hardware as soon as I can lay my hands on some, but that may be a while and my little project is stalled until I know the answer. Any ideas? AI: It might depend on whose 16550 chip you have (though I would expect they are all the same): The TI datasheet says this in section 8.5.1 The UART contains a programmable Baud Generator that is capable of taking any clock input from DC to 24 MHz and dividing it by any divisor from 2 to 2^16 - 1. The output frequency of the Baud Generator is 16 × the Baud [divisor # = (frequency input) ÷ (baud rate × 16)]. Two 8-bit latches store the divisor in a 16-bit binary format. These Divisor Latches must be loaded during initialization to ensure proper operation of the Baud Generator. Upon loading either of the Divisor Latches, a 16-bit Baud counter is immediately loaded. Table 4 provides decimal divisors to use with crystal frequencies of 1.8432 MHz, 3.072 MHz and 18.432 MHz, respectively. For baud rates of 38400 and below, the error obtained is minimal. The accuracy of the desired baud rate is dependent on the crystal frequency chosen. Using a divisor of zero is not recommended. So you can use any divisor you want with any clock you want. Non-standard baud rates would of course have to match at each end.
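For what it's worth, the divisor choice is normally done in software before anything reaches the chip. A small sketch of that arithmetic in C, using the classic 1.8432 MHz crystal and the 16x oversampling from the datasheet quote (substitute your own clock frequency):

#include <stdio.h>

/* Classic 16550 arrangement: baud = clock / (16 * divisor) */
int main(void)
{
    const double clock_hz = 1843200.0;   /* 1.8432 MHz crystal            */
    const double wanted   = 16457.0;     /* the "odd" rate in the question */

    unsigned divisor = (unsigned)(clock_hz / (16.0 * wanted) + 0.5); /* round */
    if (divisor == 0)
        divisor = 1;                     /* never program a divisor of 0   */

    double actual = clock_hz / (16.0 * divisor);
    double error  = (actual - wanted) / wanted * 100.0;

    printf("divisor %u -> %.1f baud (%.3f%% error)\n", divisor, actual, error);
    return 0;
}

Run with these numbers it picks a divisor of 7 and reports roughly 16457 baud, i.e. the chip simply runs at clock/(16*divisor) and it is up to both ends of the link to agree on the result.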
H: Is it safe to use a peak detection circuit on a 200 V AC sine wave? I have an AC waveform of which I want to detect the peak. The application is to find when the AC signal is above some amplitude. The amplitude I want to detect is around 200 V. I intend to use a peak detection circuit. Something like this: Will it be fine as long as I use a ceramic cap of a high voltage rating, and a power diode? Thanks for looking. AI: You are talking about a power-level circuit, so yes, use a power diode. Also consider the initial charging current of your cap: some capacitors are not rated for unlimited charging current, so you should limit the charging current by some means. A series resistor is a simple approach, and it should have a good surge power rating. Your proposed ceramic cap is more sensible than an electrolytic. Because you are just detecting a peak and not drawing any significant current, the series resistor does not have to be very low. A ceramic cap will be, say, 1 microfarad or less for cost and purchasing reasons, so around 100 ohms will give your peak detector a fast attack response of about 100 microseconds. If you do not want things to be that fast, you can increase the series resistor and use a smaller diode.
H: DC sweep analysis not matching transient analysis for amplifier I'm building a little sound amplifier for something and can't quite understand why it's not a powerful as I though it might be. This is the circuit, and a simulated DC sweep is shown below:- Vdrive is the meeting point of the output transistors Q1 and Q3. (Don't know what happened to Q2). The op amp is an LT1006. You can see that there is the possibility of a 1V to 11V output swing which makes kinda sense allowing for the op amp approaching it's rails. And it's linear wrt the input signal, symmetrical about a 6V mid point. This these are the transient analyses for signals of 1.5V (4Vp-p) and 4V (8Vp-p):- You can see that the first graph is slightly clipped. And the clipping just increases as the input signal increases above 1.5V, as in the second graph. Nevertheless the 4V signal should still fit into the output range as calculated by the sweep analysis. The clipping also occurs independently of the signal frequency. Why? Shouldn't that output signal range be identical to the sweep analysis? Otherwise what's a sweep analysis for? AI: Some notes, Paul: The LT1006 is weak. (Output impedance seems about \$300\:\Omega\$.) But that's not the cause of the clipping you see when supplying \$V_{PEAK}=1.5\:\textrm{V}\$. You are driving your circuit directly with an uncommon input signal that has a nice DC bias exactly half-way between your rails. But that's obviously just a convenience. So let's cut to the chase. The proximate cause for the clipping when supplying even so little as \$V_{PEAK}=1.5\:\textrm{V}\$ is \$R_2\$ and \$R_3\$. Those are the devices that are supplying base currents to your two output transistors, \$Q_1\$ and \$Q_3\$. I'll analyze this situation below, too. Hopefully you can see that the LT1006 isn't driving your output BJTs. It can't forward bias either BJT, because in both cases it's output is blocked from reaching and forward biasing either BJT by one or the other of those two diodes in the circuit. All the LT1006 can do is to sink added current from \$R_2\$ or source added current into \$R_3\$. That's how it acts to move the BJT bases around. But the base current supply isn't coming from the opamp. It's coming from the resistors. Let's look at the situation with your worst case \$V_{PEAK}=4\:\textrm{V}\$. This will help caricature the situation more bleakly and make the obvious stand out even more. Then you can return to the question of \$V_{PEAK}=1.5\:\textrm{V}\$, which will then be clear to see. (Keep in mind that we are talking about AC frequencies fast enough that \$C_2\$ can be treated as nearly a dead short compared to your load resistance, \$R_1\$. If you were to operate this at very low frequency -- say \$1\:\textrm{Hz}\$ -- then some of this problem would disappear because \$C_2\$ would present a high impedance and reduce the loading which will resolve some of the problem ahead.) With \$V_{PEAK}=4\:\textrm{V}\$, your input range to the opamp is from \$V_+=2\:\textrm{V}\$ to \$V_+=10\:\textrm{V}\$ (centered, of course, on \$6\:\textrm{V}\$.) With \$V_+=2\:\textrm{V}\$, the output also needs to match this so that the \$V_-\$ terminal can have the same value. But this means that the base of \$Q_3\$ must be perhaps \$850\:\textrm{mV}\$ lower, or about \$1.15\:\textrm{V}\$. But that only allows \$R_3\$ to sink \$1.15\:\textrm{mA}\$ of base current. 
(The LT1006 will be itself sinking \$\frac{12\:\textrm{V}-2.5\:\textrm{V}}{1\:\textrm{k}\Omega}-1.15\:\textrm{mA}\approx 8.35\:\textrm{mA}\$ in order to force sufficent drop across \$R_2\$.) With that low of a base current via \$R_3\$, \$Q_3\$ can't handle much collector current. As you can see, as the base voltage on \$Q_3\$ drives too close to ground, \$R_3\$ can sink less and less base current for \$Q_3\$ and this means lower current compliance out of \$Q_3\$. The same happens on the other side, where \$Q_2\$'s base is driven up towards the upper voltage rail and there is less and less base current available for \$Q_2\$ via \$R_2\$. Before we look at the situation with \$V_{PEAK}=1.5\:\textrm{V}\$, where you still saw clipping, let's just try and figure out where that clipping should occur. We'll do this by seeing just how low the output voltage can be, while at the same time providing adequate current for the load. The output voltage at the node with the shared emitters, \$V_{OUT}\$, less a forward biased BE junction \$V_{BE}=850\:\textrm{mV}\$, is the base voltage for \$Q_3\$. This base voltage, divided by \$R_3\$ is the total base current available. This base current, times an assumed \$\beta=100\$, provides the collector current. So the following equation applies: $$6\:\textrm{V}-R_3\cdot\frac{V_{OUT}-V_{BE}}{R_2}\cdot\beta=V_{OUT}$$ (The \$6\:\textrm{V}\$ present above is the quiescent output voltage.) That solves out as: $$V_{OUT}=\frac{6\:\textrm{V}\cdot R_2+V_{BE}\cdot \beta\cdot R_3}{R_2+\beta\cdot R_3}$$ And if you plug in your values, you should get the result of \$V_{OUT}\approx 4.53\:\textrm{V}\$. Which, after accounting for the quiescent output voltage, translates to \$V_{PEAK}\approx 1.47\:\textrm{V}\$. And that is about what you found, too, for the point at which clipping started occurring. So it is predictable!! Good thing. (I guess there's no need now to go look at the situation where \$V_{PEAK}=1.5\:\textrm{V}\$ since we just predicted that this is where the problems will start.) A solution to this problem is not necessarily a beefier opamp (well, the LT1006 is weak, so I'd pick something stronger regardless.) What you need is an adequate constant current supply. You can do this with an actual constant current supply built out of parts or else you can use bootstrapping (cheaper, easier.) The simplest fix is bootstrapping. It doesn't create a good current source but it does create a good-enough one to get by. simulate this circuit – Schematic created using CircuitLab The idea here is that \$C_2\$ will charge up to a DC bias that is about half your rail voltage of \$12\:\textrm{V}\$. So about \$6\:\textrm{V}\$ across \$C_2\$. In theory, discounting some minor details, this means that \$R_2\$ should have a constant \$5.2\:\textrm{V}\$ across it. (The emitters of the BJTs and the output of the opamp should have about the same voltage; and there is about \$800\:\textrm{mV}\$ across \$D_2\$ and, as already mentioned, about \$6\:\textrm{V}\$ across \$C_2\$.) This helps to create that constant current source I mentioned you needing there. (It's not perfect, but it is very much better than you had before despite the fact that it's mostly just a wiring change to get there.) In this case, the constant current is about \$\frac{5.2\:\textrm{V}}{180\:\Omega}\approx 29\:\textrm{mA}\$. Of course, this now means your opamp needs to be able to divert a fair portion of that current. So you need an opamp with output current compliance better than that. 
I've used such in the above circuit. Let's assume, for a moment, that you can only expect (at most) a \$\beta=100\$ from your BJTs at these currents. If you wanted to drive the above circuit with \$V_{PEAK}=4\:\textrm{V}\$, you'd need \$I_{PEAK}=1\:\textrm{A}\$ of collector current. This would mean at least \$10\:\textrm{mA}\$ of base current, which means at least \$1.8\:\textrm{V}\$ across those \$180\:\Omega\$ resistors (\$R_2\$ and \$R_3\$.) At those currents, of course, your small signal BJTs would be well beyond their specification range. But let's assume they are fine. We'd still have to expect more than \$1\:\textrm{V}\$ across their BE junction. So the output cannot swing lower than about \$2.8\:\textrm{V}\$. Which basically means you won't be able to drive this circuit using \$V_{PEAK}\gt 3.2\:\textrm{V}\$. Probably less, actually. And that's assuming these BJTs could operate with those collector currents. And we know they can't. But that gives you an idea of how to estimate the maximum swing to expect, even with bootstrapping added. Try the above circuit out in your simulator and use \$V_{PEAK}\le 3\:\textrm{V}\$. (It can't do better, so why try?) You'll probably also need to replace those BJTs, when you get around to thinking more practically. And there are other circuit changes to make, anyway. But that's for another day.
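If it helps to experiment, here is the closed-form clipping estimate from above wrapped in a few lines of C. The resistor values, V_BE and beta are placeholders only (the real ones come from the schematic in the question, which isn't reproduced here), so don't expect the 4.53 V figure until you substitute your own numbers.

#include <stdio.h>

/* Clipping-onset estimate: Vout = (Vq*R2 + Vbe*beta*R3) / (R2 + beta*R3) */
int main(void)
{
    double Vq   = 6.0;     /* quiescent output, half the 12 V rail      */
    double R2   = 1000.0;  /* placeholder - use your schematic's value  */
    double R3   = 1000.0;  /* placeholder - use your schematic's value  */
    double beta = 100.0;   /* assumed current gain                      */
    double Vbe  = 0.85;    /* assumed base-emitter drop                 */

    double Vout  = (Vq * R2 + Vbe * beta * R3) / (R2 + beta * R3);
    double Vpeak = Vq - Vout;   /* available swing before clipping starts */

    printf("clipping starts near Vout = %.2f V (about %.2f V of peak swing)\n",
           Vout, Vpeak);
    return 0;
}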
H: VHDL assignment and condition at the same clock edge on parallel processes Suppose that I have two processes in VHDL: One process is triggered on the rising clock edge and it is a state machine that sets a flag in one of its states. The second process is also triggered on the rising clock edge and it has a condition statement that reads this flag to see if it is '1'. When will the condition in the second process become 'true'? On the same clock that the flag is set, or at the next clock? Here is an example: -- state machine process (clk) begin if rising_edge (clk) then case StateMachine is when '0' => flagA <= '0'; StateMachine <= '1'; when '1' => flagA <= '1'; StateMachine <= '1'; -- loop forever end case; end if; end process; -- condition process (clk) begin if rising_edge (clk) then if (flagA = '1') then flagB <= '1'; else flagB <= '0'; end if; end if; end process; In this example, will flagB become '1' at the same clock that flagA became '1'? Or will flagB become '1' at the next clock? AI: Flag state will be sampled on the next clock edge. In the synchronous design, all the conditions are (or must be) "prepared" before clock edge occurs, and when it occurs, circuit transitions to new state within very short timeframe. Thus in situations like yours circuit will see previously "prepared" condition, thus previous state of the flag. On the next clock edge new flag value will be "prepared", and it will be sampled. You spend whole cycle to get the new propagated flag value; you may consider to trigger one process on rising edge, and another process on the falling edge, this way you will spend only half of the cycle. Update (as per comments below): when designing circuits driven by both edges of the clock, you must pay close attention to the duty cycle of the clock, not just its period.
H: How does the current remain constant in an ideal capacitor? In a tutorial given here, it is explained that the current remains constant upon sweeping the potential in an ideal capacitor. Can someone explain in a descriptive way (in terms of the role of electrons) how this happens? When applying a positive potential, we are charging the capacitor (right?). Why does the current remain constant during charging? Constant current means a constant flow of electrons in the circuit (right?). Then, how is this flow kept constant? AI: Can someone explain in a descriptive way If you had a perfect undriven flywheel spinning at 1000 rpm and applied a constant loading torque to it, the speed would reduce linearly to zero over a period of time. Imagine the flywheel is the capacitor, the speed is the voltage and the torque is the current. If on the other hand you drove that flywheel from rest with a constant torque, the speed would rise linearly with time. If you can understand intuitively how this happens with a flywheel then you have your answer because, apart from symbolic stuff in the formulas, the formulas are the same. Can someone explain in a descriptive way (in terms of the role of electrons) how it happens? Not really a good fit for this site - try SE.Physics.
H: Is this solderable? I have an Arduino Mega and one of the capacitors is broken off. It's the same as the one still attached (47, 25V, RVI) and the capacitor should be placed at location PC2. I think this is SMD technique, anyway. My soldering skills are not high, I was wondering if I can solder it... since if it's positions at the right location I cannot put my soldering iron to it anymore, also since I don't think I can make the solder overlap the lines coming from the capacitor. If not possible, would it be an idea to carefully try to solder two wires on the Mega at the connector points of PC2 and use 2 wires which I solder against the capacitor and put the capacitor somewhere else? Update: Does anybody know what would be the consequence to leave the capacitor out completely (like now)? (I did not dare to turn the Arduino Mega on because of the broken capacitor). AI: Yes, this is solderable. It is not difficult. You need a soldering iron with a small point, and you need fine solder. I use 0.5 mm solder for just about everything. Use solder wick to clean both pads and make them nice and flat. Use your iron to heat one pad, and melt some solder on that pad. Place the capacitor so the it sits properly. Pay attention to the polarity. This is an electrolytic capacitor. If you connect it backwards it will not work correctly. Hold the capacitor in place with a pair of tweezers. Just push down on the top of the capacitor with the tweezers. This will keep it in place. Heat the pad that you put solder on. It will melt, and the capacitor will pop down onto the pad - you can actually feel a "snap" through the tweezers when the capacitor seats. Remove the iron. Let the solder cool. Remove the tweezers. Apply the iron to the junction of the pad and pin on the unsoldered pin. Heat them both. Apply solder to the heated pad and pin. It will melt and flow to cover both. Remove the solder, them remove the iron. Let cool, then resolder the first pad. The first pad wasn't soldered well because there wasn't enough flux on it. Heat the junction with the iron until the solder melts. Apply solder to the junction. Remove the solder, remove the iron, let cool. Finished. This method can be applied to any SMD part with two pads. More pads can be done just as easily, but you have pay more attention to the alignment. Had a closer look at the picture. There is a piece of copper stuck to one pin of the capacitor, and a torn place on one of the pads. You'll have to remove the scrap of copper from the capacitor. You can solder it back in, no problem. The rest of the pad appears to still be connected to the ground plane, so when you solder the capacitor to what is left of the pad you will still have a good connection to ground for it.
H: Calculation of ripple voltage after rectifier (equation from TAOE) I am reading The Art of Electronics, Third Edition by Paul Horowitz and Winfield Hill but I feel like I am missing something when the book comes to talk about (begins to, instead) Power-supply filtering (Chapter 1.6.3_A, page 32) after a half/full wave rectifier with capacitor. In the A subparagraph, here is what it is said : It is easy to calculate the approximate ripple voltage, particularly if it is small compared with the DC. The load causes the capacitor to discharge somewhat between cycles (or half-cycles, for full-waves rectification). If you assume that the load current stays constant (it will, for small ripple), you have : $$ \Delta V = \frac{I}{C}\Delta t $$ Just use 1/f (or 1/2f for full-wave rectification) for \$\Delta t\$ (this estimate is a bit one the safe side, because the capacitor begins charging again in less than a half-cycle). You get $$ \Delta V = \frac{I_{Load}}{fC} $$ for half-wave $$ \Delta V = \frac{I_{Load}}{2fC} $$ for full-wave Ok, I don't understand where that comes from, more specifically the \$I_{Load}\$ term instead of \$I_{in} - I_{Load}\$ term that I found (see below). I tried to retrieve it, I don't. Let's take the following schematic (which is used in the book): The KCL and KVL gives respectively : $$ I_{in} = I_C + I_{Load} = C\frac{dV_{Load}}{dt} + I_{Load} $$ $$ V_C = V_{Load} = V $$ The latter is quite useless in fact. So, if we work with the KCL equation : $$ \Delta V = \frac{I_{in} - I_{Load}}{Cf} $$ for a half-wave rectifier. And : $$ \Delta V = \frac{I_{in} - I_{Load}}{2Cf} $$ for a full-wave rectifier. What am I doing wrong ? Why don't I find the same equation that the book gives ? Thanks ! AI: Commenter @carloc has it right. Just to go into a little more detail: KCL works for average current, and it works for instantaneous current, but of course you have to be consistent in what you are comparing. Looking at the average: The average (DC) value of current in a capacitor is zero, so then Iin (avg) = Iload (avg), and the average change is voltage is zero, since it increases and decreases by the same amount each cycle once you reach steady state. Looking at the instantaneous current is more useful because you are trying to find the amount the voltage drops during the time the diodes are off, Iin = 0, and the capacitor is supplying all the load current. As @carloc says, Iin is zero for most of the cycle, because the diodes are only forward biased for a small period of time near the positive and negative peaks of the input AC voltage. If you set Iin = 0 then your equation matches the book except for the sign, but peak-to-peak ripple is conventionally given as a positive number so you would take the absolute value. By the way, it is an approximate formula. If the ripple voltage is high and/or diode and transformer resistance limit the diode current, then the forward biased time is an appreciable fraction of the cycle and you can no longer assume the discharge time is T/2 (full wave) or T (half-wave).
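A quick numeric example of the book's approximation, with values picked purely for illustration: a full-wave rectifier at f = 50 Hz feeding a constant 1 A load from a 4700 µF reservoir capacitor gives $$ \Delta V = \frac{I_{Load}}{2fC} = \frac{1\;\textrm{A}}{2 \cdot 50\;\textrm{Hz} \cdot 4700\;\mu\textrm{F}} \approx 2.1\;\textrm{V} $$ of peak-to-peak ripple, and as noted above this slightly overestimates the real ripple because the capacitor begins recharging before a full half-cycle has elapsed.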
H: Need to divide variable DC signal voltage I have a CNC controller that outputs 0-10v PWM signal voltage for the purpose of controlling cutting spindle RPM (or in this case, laser power output). This is not the actual driving voltage, this is a signal voltage (I hope that makes sense). My laser engraver requires 0-5v PWM signal, so I essentially need to cut the controller's output voltage by 50%. I am not an electronics expert by any means, but I can successfully put a circuit together. I have op amps, resistors, capacitors, and diodes at my disposal, so hopefully it can be done with what I have on-hand. Thank you very much for any help. AI: Since you apparently just want to divide a signal voltage in two and have no current requirement: simulate this circuit – Schematic created using CircuitLab This is called a voltage divider.
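For reference, the division ratio is just the resistor ratio. With two equal resistors (say 10 kΩ each, purely illustrative values): $$ V_{out} = V_{in}\frac{R_2}{R_1+R_2} = 10\;\textrm{V} \cdot \frac{10\;\textrm{k}\Omega}{10\;\textrm{k}\Omega+10\;\textrm{k}\Omega} = 5\;\textrm{V} $$ This assumes the laser driver's PWM input is high-impedance compared with the divider; if it draws significant current, the divided voltage will sag and you would need lower resistor values or a buffer.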
H: Connect signals to N.C pins in diode array (for ESD protection) We created a circuit with a diode array for ESD protection, PN: IP4283CZ10-TBR (MFR: Nexperia) Data sheet As you can see in the datasheet, there are some pins which are supposed to be N.C., but we did connect them to the signal pins, as you can see here: We are afraid that this connection might explain some problems we see in our samples, for example - one PCB didn't communicate with UART until we removed the part, and another circuit failed during the first watchdog test (again, until we removed this part). In the datasheet all I could find that is related to the N.C. pins was this line that mentions that the capacitance between an N.C. pin and a signal pin is 0.07pF (sorry, can't upload another image). Correct me if I'm wrong, but it doesn't say much (just that the pins are separated by a small-value capacitance, nothing about shorting them together). Hope you could help me understand if this is what is causing the problem or if it is something else. We also use another 3 identical diode arrays with the same "connection problem", but they seem to work fine (connected to RS422 RX, TX and debugger lines). The debugger we are using is J-LINK (using JTAG communication). AI: When listed in a datasheet, "No connect" can mean "Don't connect this pin!" or "We didn't connect this pin." This looks like the latter. That sub-picofarad figure is just parasitic capacitance, and fractions of a picofarad will not affect a UART or a reset line. Your problems are most likely unrelated to these connections. Since you're seeing differences between copies of supposedly identical boards, there are probably manufacturing or assembly defects (but this isn't 100% certain - it could also be a marginal design). I would start with a thorough visual inspection under a microscope looking for soldering shorts and opens. Check polarity / pin 1 position of all active devices. Maybe when you removed this part from your defective boards, you also reflowed some other nearby parts.
H: IN-12B Nixie Comma Current Limit I'm looking into using a set of IN-12B Nixie tubes in a clock design I've been working on. If I tie my anode to the 170V striking voltage and use ~16kOhm to limit the digit current to 2.5mA then we're fine. However, I don't know how I am supposed to achieve the 0.3mA current limit for the comma cathode. I'd like to think that I can just add another resistor in series with my cathode in order to meet the current spec but I've been having trouble finding any examples of people using the IN-12B without ignoring the comma cathode entirely. What is the right way to go about this? AI: 2.5 mA × 16 kΩ = 40 V drop, so the sustaining voltage is about 130 V. You could measure this to verify. If we assume the comma has similar voltage characteristics at the lower current, then you need a 40 V drop @ 0.3 mA, or 130 kΩ. Or just use your resistance substitution box (start at 1 MΩ) and adjust it until it looks about right.
H: How do modules in an asynchronous circuit know when to signal that their output is ready? I have recently become fascinated by asynchronous CPUs, which have no central clock; instead each module sends a signal when its data has been processed. However, I have been wondering how such modules actually know when their output is ready and stable. In the following example of an asynchronous sequential circuit, some modules communicate using a simple handshake protocol as follows: A module is triggered by a READY signal from a previous module. The module then starts manipulating the input data. The RECEIVED signal is sent to the previous module when the input has been read and can be modified by the previous module. When the output is updated and stable, a READY signal is sent to the next module. When the RECEIVED signal is sent as a reply, the process starts over. simulate this circuit – Schematic created using CircuitLab Is it possible to send a READY signal when the output of a module is stable, without specifically timing the propagation delay of the module circuit? If not, what would be the simplest way of delaying a READY signal based on a circuit's worst case propagation delay? AI: Is it possible to send a READY signal when the output of a module is stable, without specifically timing the propagation delay of the module circuit? No. If not, what would be the simplest way of delaying a READY signal based on a circuit's worst case propagation delay? For each module, you have a matched delay line made up of a series of AND gates set up as buffers, sized so that the delay line's total propagation delay equals or slightly exceeds the module's worst-case combinational delay; the READY signal is routed through it so that it can only arrive after the data outputs have settled.
H: Counter in assembly, using interrupt to prevent multiple counts with single push? I am completely new to assembly, and I must develop a counter using PIC16F628A, a push button, and a display. Additionally it there will be an external oscillator (555). I made some progress on this, but I think I need some help from you people. At first I did a delay based on decrements in order to be able to watch numbers on the display. My problem now is that once I press the button, I need it to count only one number, independently how much time I keep it pressed. Something like, if it changes state up, it will increment 1. I believe this must be done with interrupts, I guess. Now, what is the best solution to my problem? External interrupt, by state interrupt? Any other thing? AI: Since you cannot use a timer (gathered from comments you've made), you need a suitable delay routine to provide a specific period of time. I like the period of \$8\:\textrm{ms}\$, from prior experience. But you can use any period you feel is appropriate. Assuming that your processor is using the factory calibrated \$4\:\textrm{MHz}\$ rate, the instruction cycle will be \$1\:\mu\textrm{s}\$ and it will take \$8,000\$ cycles to make up an \$8\:\textrm{ms}\$ period. The delay code should probably be made into a subroutine, to avoid having to replicate it over and over. DELAY8MS MOVLW 0x3E MOVWF DLO MOVLW 0x07 MOVWF DHI DECFSZ DLO, F GOTO $+2 DECFSZ DHI, F GOTO $-3 NOP GOTO $+1 RETURN The total time occupied by the above routine can be computed as: $$t=5\cdot\left[D_{LO}+2+256\cdot\left(D_{HI}-1\right)\right]$$ where \$1 \le D_{LO}\le 256\$ and \$1 \le D_{HI}\le 256\$, with 0 interpreted as 256. The CALL and RETURN instructions take up 2 cycles each and the above code takes all of that into account. Calling it should take exactly \$8,000\$ cycles and, at \$4\:\textrm{MHz}\$ this means \$8\:\textrm{ms}\$. You will have to create those two variables, \$D_{LO}\$ and \$D_{HI}\$ somewhere. That can be done like this, I think: CBLOCK DLO DHI ENDC There are, of course, other ways. And you can add an absolute address to the CBLOCK line if you want to place the block somewhere specific. Now that you have a delay routine, you can proceed to the next step. You need two new routines. One that repeatedly delays until the button becomes active and one that repeatedly delays until the button becomes inactive. Debouncing is included here: ACTIVE CALL DELAY8MS BTFSC PORTx, PINy GOTO ACTIVE CALL DELAY8MS BTFSC PORTx, PINy GOTO ACTIVE RETURN INACTIVE CALL DELAY8MS BTFSS PORTx, PINy GOTO INACTIVE CALL DELAY8MS BTFSS PORTx, PINy GOTO INACTIVE RETURN I don't know your port or pin number, so I just put in "dummy" values there. You need to replace them, properly. The above two routines assume that 0 is active and 1 is inactive. Now you can write your main code: MAIN ; <code to reset your counter value> GOTO LOOP_NXT LOOP CALL ACTIVE ; <code to increment your counter value> LOOP_NXT ; <code to display your counter value> CALL INACTIVE GOTO LOOP The above code resets your counter value to whatever you want to start at and then it jumps into the loop where it displays the value and waits for the button to become inactive. The effect here is that if you start up your code with the button pressed (it should not be, but what if it is?), then the code will still reset the counter and display it... but it will wait until you release it before continuing. So you have to let up on the switch. 
Then, once that has happened, the basic loop just waits for a debounced active state of the switch. When it sees that, it increments the counter immediately (on the press, not on the release) but then waits for the button to be released before continuing, again. That's about it. You still need to write appropriate code for the counter and display. But that gets the idea across for the rest.
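As a sanity check on the DELAY8MS constants loaded earlier (D_LO = 0x3E = 62, D_HI = 0x07 = 7), the timing formula gives $$ t = 5\cdot\left[62 + 2 + 256\cdot\left(7-1\right)\right] = 5 \cdot 1600 = 8000\;\textrm{cycles} $$ which is exactly the 8 ms target at the 1 µs instruction cycle assumed above.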
H: Regulate, convert to DC and store current from home made AC generator to charge laptop I am working on a small experimental project, learning more and more about electricity and electronics. So, I have found this broken mixer and, surprisingly, it has a 300W engine with magnets that produces current when rotated. Just what I needed. Now, what I intend to do is make it capable of charging a laptop battery, at least. The laptop's charger needs 100 V ~ 200 V and 1.6 A (50 ~ 60 Hz) AC and outputs 19 V and 4.5 A DC. Now, I need to take some decisions and understand several things better First, I need to decide if I should make my generator output at least 100V and provide at least 1.6 A to the laptop's charger then let the charger take care of the battery. Looks like the simplest and most secure solution. But if the generator creates 19 V and 4.5 A and sends this directly to the laptop, less work is necessary to rotate it, I guess. Why would I produce high voltage then have the charger reduce it? No matter, what option is better, I need to know: how to regulate the current, how to transform the voltage as needed, how to store it temporarily, at least to keep it steady. I have read a little bit about diodes, Zener diodes, regulators, induction coils, transformers and capacitors. Now, I have to find out how and in which order to connect these (I have bunches of them, of all sizes and colors) in order to achieve something like what I want. So, what should I do? Convert the AC current to DC using a diode bridge, and add a smoothing capacitor Regulate it with a Zener diode (and required resistors to limit current flow) or a regulator (preferably to 12 V) Convert the voltage from the regulated one to 19 V then use a capacitor to keep the amount if energy steady and provide it to the charger / battery? or maybe Use a regulator to stabilize the AC current Convert the voltage to 100 V and use a capacitor to store it Maybe this looks like I am asking for a circuit scheme (which would help, too) but I also want to understand every step AI: This would be an interesting project to learn some electronics but it may not be a practical one. For example, you don't say how you will power your generator. First understand that your laptop power supply's relationship between power in and power out is given by Power in = power out + losses If you have a good power supply the losses will be about 10 - 20%. You should also know that \$ P = VI \$ so that your PSU can output \$ P = 19 \times 4.5 = 85\; W\$. If it were 85% efficient it would draw 100 W from the mains. At 100 V this would be 1 A. At 200 V it would be 0.5 A. The 1.6 A figure is worst case surge on power up. 1a. First, I need to decide if I should make my generator output at least 100V and provide at least 1.6 A to the laptop's charger then let the charger take care of the battery. If I had to do it this is the way I would do it. The PSU regulator offers some protection to the laptop. 1b. But if the generator creates 19 V and 4.5 A and sends this directly to the laptop, less work is necessary to rotate it, I guess. It would, in theory, be 15% less because you wouldn't have the losses of the PSU. This would probably be replaced by the losses in your new circuit. 1c. Why would I produce high voltage then have the charger reduce it? Because your mixer windings are rated for \$ I = \frac {P}{V} = \frac {300}{230} = 1.3 \; A\$. They won't supply 4.5 A. (I have assumed you live in 230 V territory. You didn't add that information to your post or your profile.) 
I need to know: how to regulate the current, how to transform the voltage as needed, how to store it temporarily, at least to keep it steady. Yes you do. And you will need to research this rather than hope someone here is going to design it for you. [Should I] convert the AC current to DC using a diode bridge, and add a smoothing capacitor. If you want DC or a switched mode power supply this is essential unless you are generating DC. Regulate it with a Zener diode (and required resistors to limit current flow) or a regulator (preferably to 12 V). This would be a hopeless way of regulating. It is not used in commercial designs as it is inefficient and regulation is poor. Convert the voltage from the regulated one to 19 V then use a capacitor to keep the amount if energy steady and provide it to the charger / battery? You need to study SMPS power supply design. Your question implies that you don't understand enough to start this project yet. Cautions The risk of injury are high if you use voltages over 100 V. You have to factor in the risk of destroying a laptop if you get something wrong. I suggest you use the generator to power some LED strips instead. If they blow you've lost €10.
H: Difference between 2.4 GHz ISM and IEEE 802.11 frequency bands? It may be a silly question to ask the difference between a frequency band and a standard protocol defined by the IEEE. But I've been reading the Wikipedia page on IEEE 802.11, it says that the frequency band is 2.4 GHz ISM, and I was quite confused by this. What's the difference between 802.11 and 2.4 GHz, and why can't they be connected together (like connecting an nRF24L01 to a wireless LAN)? The Wikipedia page on 802.11 says that it uses the 2.4 GHz ISM band, so there should be no problem connecting an nRF to a wireless LAN. Thanks in advance. AI: For radios to communicate, the channels must match, the channel spacings must match, the type of modulation must be compatible (software radio compatible), and the packet timing and preambles must match, as well as the error correction. Sharing the 2.4 GHz ISM band only means the two radios are allowed to transmit in the same part of the spectrum; it does not make their protocols compatible.
H: Powering 5V LED strips using 12V I am planning on powering a set of RGB LED strips (I am leaning towards APA102 https://cdn-shop.adafruit.com/datasheets/APA102.pdf). I know that, at full brightness, each LED requires 60mA. So, at 30 LEDs/m, that is 1.8A/m, or (5V x 1.8A =) 9W/m. I will be using 13m, and so I need a (13*9W=) 117W power supply that can provide 23.4A at 5V. My question is this. Can I use a 12V 10A (=120W) power supply, and 5 LM1084 voltage regulators in parallel to convert the 12V to 5V? Each LM1084 is rated for 5A output. http://www.ti.com/lit/an/snva558/snva558.pdf Please excuse my ignorance. Perhaps my reasoning is off, or there is a glaring safety hazard here. Please feel free to offer any helpful advice at all. Thanks in advance. AI: If I understood you correctly you basically want to put an LM1084 roughly every 13m/5 = 2.6m, so each one of them will need to supply a current of 2.6m * 1.8A/m = 4.7A (I just took your numbers, I didn't check if they are correct). The thing with LDOs like the LM1084 is that you have to be very careful about the maximum power dissipation they can handle, because of their low efficiency. The reason is simply that the excess power dropped across each of them is turned into heat. That is equal to \$(Vin-Vout)*Iout = (12V-5V)*4.7A = 32.9W\$! All this power is turned into heat, which is enormous! Check chapter 10.3 ("Thermal considerations") of the datasheet on this very important issue. Basically you have to find out whether this 33W of heat can be dissipated by the package. If not, consider using a heat sink. If even with the heat sink the dissipation is still too high, you'll have to use a switching regulator instead, where the efficiencies achieved are usually around 80% to 90%. UPDATE: Of course another issue is that your input power supply, the 12V/10A, will not be able to provide enough current! The LDOs need (almost) as much current at their input as they provide at their output, meaning the five LM1084s will need a total current of 5*4.7A = 23.5A, which is far higher than the 10A of your power source.
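To put the 33 W per regulator in context, the thermal check in the datasheet's "Thermal considerations" chapter boils down to an estimate of the form $$ T_J = T_A + P_D\cdot\left(\theta_{JC} + \theta_{CS} + \theta_{SA}\right) $$ where the individual thermal resistances come from the LM1084 datasheet and your heat-sink choice (no specific values are assumed here). Even a total of only a few °C/W multiplied by roughly 33 W means on the order of a hundred degrees of rise above ambient, which is why a switching regulator is the usual choice at this power level.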
H: Adding a DC bias to an AC voltage at the base of a transistor I came across circuits probably using an AC at the base to probably amplify it. I also came across the voltage divider to add a DC bias. simulate this circuit – Schematic created using CircuitLab Neglecting the values of the components and supplies. Is the reason we add this DC bias to not have the voltage go to a negative value because that would damage a transistor? And just to be sure, the voltage at the base would be a sine wave with the DC bias as the baseline, correct? AI: Neglecting the values of the components and supplies. Is the reason we add this DC bias to not have the voltage go to a negative value because that would damage a transistor? The reason for the DC bias is to ensure the transistor stays in the linear region and therefore works as an amplifier. The idea with biasing is that you set the DC operating point of an amplifier somewhere in the linear region, preferably in the middle, and you superimpose a "small" varying ac signal so that it gets amplified. If the signal isn't small enough, you may end up steering your transistor out of the linear region, it will no longer work as an amplifier. Take a look at the following graph. If you bias the transistor so that it works in active mode, you can utilize it to amplify small varying signals. In the graph \$v_{be}\$ is the ac signal to amplify and \$V_{BE}\$ sets the DC bias point and the amplified signal output is \$v_{ce}\$. And just to be sure, the voltage at the base would be a sine wave with the DC bias as the baseline, correct? Yes. You essentially have \$V_{BIAS}+v_{ac}\$ at the input. That's why the input signal has to be small enough so that you don't disrupt the DC bias point.
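If you do want to put numbers on the divider despite the question's "neglect the values" framing, here is the usual approximation with purely illustrative values (VCC = 12 V, R1 = 82 kΩ, R2 = 18 kΩ) and base-current loading ignored: $$ V_{BIAS} \approx V_{CC}\frac{R_2}{R_1+R_2} = 12\;\textrm{V}\cdot\frac{18\;\textrm{k}\Omega}{82\;\textrm{k}\Omega+18\;\textrm{k}\Omega} \approx 2.2\;\textrm{V} $$ so the base would sit near 2.2 V with the AC signal riding on top, exactly the "sine wave with the DC bias as the baseline" described above.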
H: 7408 AND Gate, Odd Behaviour I'm putting together a simple DTACK circuit for a 68k homebrew machine, and part of that requires me to know when either the Upper Data Strobe (UDS) or Lower Data Strobe (LDS) is asserted LOW. I've only got a few random ICs to hand, so I'm attempting to use a 7408 AND and driving the output into two inputs of a 7402 NOR gate. For some reason the output of the AND gate seemingly isn't correct; it seems to miss pulses and I don't know why. It's a fast chip and I believe we're way within its tolerances, so what should I be looking for? The output below from my logic analyser shows 3 address lines (D0-D2), UDS & LDS (D3 & D4) and the trace from the bridge between the two NOR inputs (D5) (the same signal is present directly on the output of the AND). I know unused inputs should be tied LOW or HIGH but I assume that's when an input isn't used for an individual gate in a quad package? AI: You do not need the NOR gate to detect a low on either UDS or LDS or both. A single 7408 AND gate can do that by itself:
UDS  LDS  OUTPUT
L    L    L
L    H    L
H    L    L
H    H    H
As you can see, you get a low out of the 7408 when either LDS or UDS (or both) is low.
H: Manual Demonstration Circuit for 74HC595 Shift Register I am trying to create a manual demonstration circuit for a 74HC595 shift register. See my circuit below. The problem I am having is that I am only getting a +0.7V signal to the data, clock, and latch pins of the 74HC595 when one of the buttons is pushed. The S4 RESET switch works fine (normally high at 5V, press switch to bring it down to 0V), however S1, S2, and S3 do not accomplish the reverse scenario (normally LOW, press switch for HIGH) correctly. You will note that I am also trying to debounce the switching with a capacitor or resistor. I suspect this debouncing setup is creating my problem. Does anybody have any advice on how to create a debounced normally LOW, momentarily HIGH switch that switches between 0V and 5V? AI: 0.7V is the base-emitter voltage of the transistors. To verify: remove the transistors from the circuit. Fix: add a resistor (1k to 10k) to the transistor bases. Better yet: connect the transistors (with resistor) directly to the switches.
H: How can I access a TigerSHARC DSP processor through the host PC? I am working on the design of a test board which involves both FPGA (Xilinx Kintex-7) and DSP (TigerSHARC) based signal processing. Now the end user wants to access the DSP processor's programming options / boot configuration through the host PC itself. As I went through the datasheets of these devices, I got the idea that I'll have to use PPLP for communication with the FPGA. My question is: is there any device or connector which supports communication between the DSP processor and a host PC, such as PPLP to USB (and is this a good idea to use)? Would really appreciate it if anyone has any good and concise ideas about this kind of device. Thanks! AI: If the user wants to boot the DSP as well as communicate with it, then the TigerSHARC has LinkPorts (tm), which can be used for both high-speed communication and booting. The question is how to connect those to the PC. The PC doesn't have link ports, and the DSP doesn't have PXIe. The obvious answer is to put whatever glue logic is needed into the FPGA. Presumably you have already sorted out a PC-to-FPGA solution? I'd be using PXIe for that: fast, universal, supported with hardware at both ends and by stacks of software, long history of use. That, or Ethernet, with the same list of benefits but a different flavour. Now your task is to design an interface and API that sits in the FPGA and exposes the DSP facilities, without feeling like an FPGA to use, but instead feeling like the DSP's peripheral devices. Perhaps a bit-bang register to control any mode pins, and some dual-port memory to put the boot code into. High-speed comms will depend on what the DSP is running: an RTOS, or something closer to the metal? Ideally Analog will provide drivers to make its link ports look like TCP/IP or PXIe for high-speed comms with the PC, and you'll simply put a hub or bridge in the FPGA. If not, then you're designing a special interface.
H: Contactor vs Relay As is well known, a contactor is used to switch higher capacity than a relay. But some relays can switch high current too; for example, some power relays can switch currents beyond 100A, while there are contactors that switch only 160A. So, if a relay has the same switching current as a contactor, which one should you choose? And can relays be used in parallel to achieve a high switching current and replace a contactor? AI: Wikipedia's Contactor article explains it pretty well. Unlike general-purpose relays, contactors are designed to be directly connected to high-current load devices. Relays tend to be of lower capacity and are usually designed for both normally closed and normally open applications. Devices switching more than 15 amperes or in circuits rated more than a few kilowatts are usually called contactors. Apart from optional auxiliary low current contacts, contactors are almost exclusively fitted with normally open ("form A") contacts. Unlike relays, contactors are designed with features to control and suppress the arc produced when interrupting heavy motor currents. [Emphasis mine.] Further down the same article ... Differences between a relay and a contactor: Contactors generally are spring loaded to prevent contact welding. Arc-suppression relays usually have NC contacts; contactors usually do not (when de-energized, there is no connection). Magnetic suppression and arc dividers are typically utilized when switching multi-horsepower motors. Magnetic suppression is accomplished by forcing the arc to follow the longer field lines of a fixed magnet placed in close proximity to the contacts. The longer path is specifically designed to force an arc length that can't be sustained by the available inductive energies. Figure 3 shows a schematic representation of magnetic arc suppression. Source: Automation Direct, Electrical Arcs - Part 1 of 2 part series. The article linked above is well worth a read. Your questions: So, if a relay has the same switching current as a contactor, which one should you choose? Look carefully at the application and contact rating - particularly for motor or inductive loads. If you are satisfied that either will suffice you can choose based on some other criteria such as cost. And can relays be used in parallel to achieve a high switching current and replace a contactor? Generally not. While doing this does reduce the long-term heating of the individual contacts due to the steady current running through them, it is a problem during switching due to timing differences. Even wiring contacts of the same relay in parallel is risky, as they are never perfectly aligned and the first one to make and the last one to break carry the full switching action.
H: Powering DC 5v 120w from AC 110v I would like to power a large set of LED strips. Doing my calculation, at maximum output, I would require 120w at 5v. I am relatively new to electronics, so please excuse the noob question. In my research, I have found significant limitations in what I am trying to do. I have not found a wallwart capable of doing this. I have also found that using voltage regulators will produce too much heat, and would be dangerous. I have been searching transformers, but I am not finding this much help either. I have found, however, a 110 V converter switching power supply that is rated at 350w. I suppose this would be what I need to use. I can only find this on eBay or AliExpress. Using something like that direct from China makes me nervous. This will be installed in my house. Could anybody please point me in the right direction? I am looking for theory, as well as safety information. Thanks in advance. AI: There are a number of ways of doing this, with tradeoffs for efficiency, ease of design, cost, space etc... But if you're new to electronics, then consider buying something like this: LS150-5 - AC/DC Enclosed Power Supply (PSU), Compact, 1 Outputs, 130 W, 5 V, 26 A - from Farnell UK This has the advantage that all the (dangerous) mains electronics is done for you and bundled up in a box, so there's less chance for mistakes. This particular one is from the UK, but you should be able to find one wherever you live. The trick is not to look at ebay or amazon, but at an actual electronics supplier. In the UK, that would be Farnell, Rapid, Mouser or RS. In the US, you might look at Digikey.
H: 18V/5W solar panel to trickle charge a 12V car battery? I'm running a small Arduino project out of an old 12V car battery (that was on my car since 2008 till 2015). I also have a 18V/5W solar panel. Can I connect them together so the solar panel charges the car battery? The Arduino project is really really low on consumption so even a trickle charge will be enough to keep it running "forever" with the help of 1-2h of sun per day. Thanks! AI: With a conventional solar setup, solar panels charge deep-cycle batteries. A charge controller is used to prevent overcharging and correct voltage and current. Car batteries are for a surge of current to start the vehicle. I'm not sure if a conventional charge controller could charge your battery correctly because deep cycle and car batteries have different chemistry. But from what I found Here, you should be able to. I would just try it with a charge controller, or try to make one. Just experiment with some different setups and find something that works for you. Also, I don't know if 5w could power your battery because it depends on the capacity and the current draw from the Arduino.
H: Can't understand concept, range of the pulse (waveform) I am working on a lab for which we are allowed to ask other people for hints. In the image, a program called "Audacity" is used to capture the waveform. Question 1 In the image it says, "The peak-to-peak span of the signal is about 2.0 V". It says the span is 2.0 V, but I think that it spans from around -0.2 (lowest point in the waveform picture) to +0.8 (highest point in the waveform picture) according to the picture. So why does the description say it is 2.0 V? Question 2 In the image it says, "the graph below can show a range of about 4 volts top-to-bottom". Similarly, I think the total range of top-to-bottom in the image is from -1.0 to +1.0, which I think is 2 V in total. So why does the description say it is 4 volts? AI: The Y axis in Audacity's graph is scaled in units, where +/- 1.0 is the full-scale range of the DAC or ADC. In your link, where it says 'the graph below can show a range of about 4 volts top-to-bottom', you are being told what the graph's scale means in terms of real-world volts. This obviously depends on what amplifiers or attenuators are connected externally.
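Making the arithmetic explicit, using only the figures quoted in the lab text: the whole graph is 2.0 units tall (-1.0 to +1.0) and is said to represent about 4 V, so one graph unit is about 2 V. The waveform spans about 1.0 unit (-0.2 to +0.8), hence $$ 1.0\;\textrm{unit}\times\frac{4\;\textrm{V}}{2.0\;\textrm{units}} = 2.0\;\textrm{V peak-to-peak} $$ which is where both of the quoted statements come from.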
H: Capacitor with Current Source in parallel with a resistor [![enter image description here][1]][1] I know that in a current source in series with just a capacitor (without the parallel resistor), it would charge until forever in an ideal case with the fomrula I= (dV/dt)*C but in this case since it is in parallel, the capacitor in t=0 will start with 1mA but it will be decreasing so I couldn't use the formula above and neither the Vc=Vi(1-e.....) either since no resistor in series or voltage source. so I know the current will decrease (but can't know the slope since its variant) and that the voltage will increase until 1V( 1K*1mA). but I would like to know the formula for it so I could know the timing for other similar cases. THANKS SO MUCH for your time! AI: I couldn't use the formula above and neither the Vc=Vi(1-e.....) either since no resistor in series or voltage source. 1mA in parallel with 1 kohm in the fullness of time produces 1 volt, so change the current source (in parallel with the 1 kohm) to a 1 volt voltage source in series with 1 kohm: - It's called source transformation and is related to Norton's and Thevenin's theorems.
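For the record, once the source transformation is done the standard first-order RC result applies directly. With I = 1 mA, R = 1 kΩ and the capacitor initially discharged: $$ v_C(t) = I\,R\left(1-e^{-t/RC}\right), \qquad i_C(t) = I\,e^{-t/RC}, \qquad \tau = RC $$ so the capacitor voltage rises toward the 1 V you calculated while its current decays from the initial 1 mA, with time constant τ = 1 kΩ × C.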
H: Why is it not possible to measure voltage between one battery pole and ground? If a DC multimeter is attached to the plus pole of a battery and the other pin to ground, then in my understanding there is a positive potential on the positive battery pole and a zero potential at ground. Shouldn't the multimeter show the difference in potential between those two points of contact? What is the difference between connecting the negative terminal of my multimeter to the negative terminal of the battery vs connecting it to earth ground? AI: This illustration may help: Figure 1. Without a ground reference the car screw-jack is unable to provide any lift. In a similar manner to the car-jack analogy, a battery without a ground reference cannot provide any lift / thrust / force / current. We need to close the circuit. So you have one DC multimeter and attach one pin to, let's say, the plus pole of a battery and the other pin to ground/mass (<- is there a difference between those btw.? haven't found anything googling). simulate this circuit – Schematic created using CircuitLab Figure 2. In (a) the battery is not referenced to ground other than through the meter. No current will flow and the voltmeter will read zero. (b) In this case the circuit is completed through the ground and current can flow around the loop through the meter. In electrical circuits ground is some reference point and, by definition, is the zero-volt reference. This is analogous to a surveyor picking a zero reference from which to reference all his/her other measurements. Note that voltages can be positive or negative with respect to the reference point. In my understanding there is a positive potential on the positive battery pole. It is only positive with respect to the other pole. Again, a height can only be positive with respect to a reference height. Shouldn't the multimeter show the difference in potential between those two points of contact? Since there is no complete circuit, the act of connecting the multimeter is the only thing tying the battery + to ground, and no current can flow. In practice you will see a stray voltage reading due to capacitance between the battery and the earth, but there is no power behind it and it couldn't, for example, light an LED. Have a look at my answer to Few questions about basic concepts in electronics for more on this.
H: Input Current to Output Current for a step-up voltage regulator? I am buying one of the many step-up regulators available on Amazon, the Icstation 2577. In my project plan, I will be using either a 150 mAh or 300 mAh 3.7 V 25C LiPo battery. I will be stepping that up to ~5 V and drawing between 1-1.5 A on the 5 V output. What I am trying to figure out - and I can't find the seemingly simple answer by searching around - is how the input current will relate to the output current. If I am drawing 1.5 A at 5 V from the regulator output, what can I expect the input current from the battery to look like? AI: The power formula \$ P = VI \$ gives the relationship between power, voltage and current. Your required output is \$ P_{OUT} = 5 \times 1.5 = 7.5 \; W \$. Assuming you could get a booster with 85% efficiency, your input power requirement will be \$ P_{IN} = \frac {7.5}{0.85} = 8.8 \; W \$. Going back to the power formula, your input current at 3.7 V will be \$ I_{IN} = \frac {P_{IN}}{V_{IN}} = \frac {8.8}{3.7} = 2.4 \; A\$. Run time will be given by \$ hours = \frac {mAh}{mA} = \frac {300}{2400} = 0.125 \; h \approx 7.5 \; minutes\$. At this point you need to refer to the battery data sheet. The 300 mAh is probably a 10 hour discharge rate - i.e., 30 mA for 10 hours. At higher currents, and particularly when exceeding the one-hour discharge rate as you intend to, the capacity will be greatly reduced - possibly to about 50%.
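As a rough sketch of the same arithmetic in C (the 85% efficiency and the 300 mAh capacity are the same assumptions used above; everything here is illustrative, not measured):

#include <stdio.h>

/* Estimate battery current and run time for a boost converter.
   Efficiency and capacity are assumed values, as in the answer above. */
int main(void)
{
    const double v_out = 5.0, i_out = 1.5;        /* required output: 5 V at 1.5 A */
    const double v_in = 3.7, efficiency = 0.85;   /* assumed battery voltage and converter efficiency */
    const double capacity_mah = 300.0;            /* nominal battery capacity */

    double p_out = v_out * i_out;                 /* 7.5 W */
    double p_in  = p_out / efficiency;            /* ~8.8 W */
    double i_in  = p_in / v_in;                   /* ~2.4 A drawn from the battery */
    double run_min = capacity_mah / (i_in * 1000.0) * 60.0; /* ~7.5 min, before capacity derating */

    printf("I_in = %.2f A, run time = %.1f min (optimistic)\n", i_in, run_min);
    return 0;
}

In practice the run time will be shorter still, because (as noted above) the usable capacity drops sharply at such high discharge rates.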
H: What is the impact of package on SMD capacitor Most electronic components exist in different packages for different reasons: power rating, achievable values, power dissipation... As far as I understand, the size of a resistor principally affects its power rating. So, a 1kΩ resistor in an 0805 package will have a different power rating than a 1kΩ resistor in an 0603 package. If the package matters when selecting a resistor, is the same true for MLCC capacitors? AI: You are correct that for resistors power rating is one of the big factors affecting size, but it is not the only one. Voltage rating is also important to be aware of. If you are running a circuit at 100 V, for example, you wouldn't use an 0402 resistor because the breakdown voltage of 0402 resistors is generally much lower than this (i.e. it can break down if you put too high a voltage across it). The larger the package, typically the larger the voltage rating. In the case of capacitors there are several reasons for choosing a larger package over a smaller one. For one, larger packages typically allow for higher capacitance because there is more physical space - you couldn't, for example, get a decent 10uF cap in an 0402 package. If you compare two capacitors of the same value (e.g. two 100nF caps), the larger one will again typically have a higher rated working voltage. This is beneficial for two reasons. The first is obvious: you may simply need a higher working voltage for your circuit - you wouldn't choose an 0402 rated at 10V if you need to run a circuit at 25V. The second is more subtle and I'll come to it in a moment. A third reason is that there are different dielectrics. X7R is typically the best performer in terms of stability and has better DC performance; X5R is less good in that regard. Generally X7R capacitors are physically larger than X5R capacitors for the same voltage/capacitance ratings. Furthermore, if you put an MLCC in a circuit charged to a DC level, such as with a decoupling cap, you actually want to pick a rated voltage much higher than your working voltage. The reason for this is that the actual capacitance of an MLCC is heavily dependent on the DC bias voltage. A 10V MLCC might have a capacitance 50% lower than rated when running at 5V DC, whereas a 25V MLCC might be only 10% lower than rated at the same running voltage. You can, for example, get a 100nF 0201 6.3V MLCC, but if you try to use that to decouple a 5V power line, you might find that the actual capacitance is only 10nF or less! As such, if you have the space, you typically want to go with a larger package with a higher rated voltage.
H: AP4306 has paralleled two op amps in its block diagram, is that possible? Thank you for taking some time to read this. I've recently come across a nice constant voltage/current source (http://www.mouser.com/ds/2/115/AP4306A_B_C-271586.pdf) called an AP4306. Upon reading its datasheet, I was interested in the fact that it could potentially (I think) be used as a voltage comparator with a built-in voltage reference. I'm putting together an over-discharge circuit for a lithium polymer battery, which will be done with a voltage reference, voltage divider, MOSFET, and an op amp. My major hang-up is this block diagram: I'm not interested in actually using this device as much as I'm interested in why it can work when it ties the outputs of those two op amps together - wouldn't that just make this device a magic smoke generator? (Obviously not, but right now I don't know why.) Let me know what your thoughts are if you'd like, thank you for your time! AI: The data sheet states that the outputs can only sink current. It is very common for comparators to have open-drain (or open-collector) outputs. Since they only sink current they do not have the usual op-amp push-pull output stage, so it is acceptable to tie them together. It also means that there must be an external current source feeding the output - normally this would be a pull-up resistor to the positive rail of the power supply.
H: Why does my GPS and Wi-Fi work inside a sealed copper clad box? I have just finished assembling a Raspberry Pi-based project into a DIY box made from copper clad FR4 PCB, with the edges soldered together and the copper surface connected to ground. I expected, when I put the lid on the box, the onboard Wi-Fi and the USB GPS receiver would stop functioning - that is, the Pi would drop off Wi-Fi and the GPS fix would be lost. Instead, there is no discernible effect. Wi-Fi and GPS function as if the metal lid is not present. Given the entire reason I put this project in a copper clad case was to shield it from RF (it will be operating in the near field of a 5W VHF FM transmitter), I could do with understanding what's going on here. Here's a pic with the temporary lid on... (note the thin stripped wire is just to ensure the lid is making electrical contact, and the USB power bank on top is just providing some downwards pressure, also to ensure contact) AI: For the box to be an effective Faraday shield, the entire perimeter of the top lid must be in electrical contact with the rest of the box. Failing this, RF can easily couple from the largely isolated plane of the lid to the internal electronics. Also don't overlook the large opening you have on the side of the enclosure. I doubt it would allow the GPS to work, but it could enable the ingress/egress of a 2.4 GHz signal.
H: Understanding Transceiver vs buffer Using this OctoWS2811 as an example, when should you use a transceiver vs a buffer? It uses a 74HCT245 (bus transceiver) with the direction pin tied high. Why not use a line driver like the 74HCT541 (buffer)? Specs look very similar other than a few nanoseconds here and there. AI: They could use either part (or any of a number of similar ones). The 74HCT541 would work equally well. Often there are reasons that go beyond the technical, though. For example: Cost - one may be cheaper. Availability - the other part may have a long lead time, or the chosen part may already be in stock. Minimization of the number of parts in a design - it is often desirable to reduce the number of different part numbers in a design; in particular this can reduce the setup on the pick-and-place machine. Commonality of design - using the same parts as on other designs.
H: Why does a BJT enter saturation? I'm a beginner in EE, learning about the forward active mode of transistors. The equations Ic = Ib * BETA and Ic = Ie * ALPHA are referred to as the way to figure out what base resistor you should use when running in forward active mode. What are Ic and Ie? Do they mean the amount of current it lets in, or some ratio? Along with this, how do I use these numbers? On this site, https://learn.sparkfun.com/tutorials/transistors they give an example where they say Ic is 100mA. From my understanding, that means that if it were hooked straight up to the positive and negative terminals it would pass 100mA. But what if there are resistors in series? What effect does this have? Thank you very much. I'm sorry if there is a term for this I don't know; by all means give me something to google. -edits- Thank you for the name suggestion. AI: The equations Ic = Ib * BETA and Ic = Ie * ALPHA are referred to as the way to figure out what base resistor you should use when running in forward active mode. What are Ic and Ie? \$I_c\$ is the collector current. That is, the amount of charge flowing in to the collector per unit of time. \$I_e\$ is the emitter current. In a circuit theory class, this will be the amount of current flowing in to the emitter terminal. But in every-day usage, we might define it to be the current flowing out of the emitter terminal as that is more likely to be a positive number. A good source will provide a schematic with arrows indicating the direction to be considered positive for every current they want to talk about. Do they mean the amount of current it lets in, or some ratio? \$I_c\$ and \$I_e\$ are currents. \$\beta\$ and \$\alpha\$ are ratios. Usually in EE a variable named \$I\$ will be a current. When talking about BJTs, the important ratio is \$I_c/I_b\$. This is often designated by \$\beta\$ or \$h_{fe}\$. (\$\alpha\$ might sometimes also be important, but it's much rarer for it to come up in a practical circuit solution.) On this site, https://learn.sparkfun.com/tutorials/transistors they give an example where they say Ic is 100mA. From my understanding, that means that if it were hooked straight up to the positive and negative terminals it would pass 100mA. If they say the current is 100 mA, they mean the current is 100 mA in whatever circuit they're talking about. If that circuit has the collector connected directly to the power supply, that's what they mean. If they're talking about a circuit with resistors, then they mean the current flowing in to the collector in that circuit. But what if there are resistors in series? What effect does this have? If there's a resistor (and nothing else) between the power supply and the collector, then any current into the collector will have to flow through that resistor to get to the transistor. Ohm's law will tell you what the voltage drop must be across the resistor for that current to flow.
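A hypothetical worked example (the supply, resistor and \$\beta\$ values below are assumed for illustration; they are not taken from the Sparkfun page): with \$ V_{CC} = 9\,V \$, a collector resistor \$ R_C = 90\,\Omega \$ and the transistor driven hard on (\$ V_{CE} \approx 0 \$), Ohm's law limits the collector current to \$ I_C \approx 9/90 = 100\,mA \$. With \$ \beta = 100 \$ you need at least \$ I_B = I_C/\beta = 1\,mA \$ into the base; driving the base from a 5 V signal through \$ R_B = (5 - 0.7)/0.001 \approx 4.3\,k\Omega \$ just achieves that, and in practice a smaller \$ R_B \$ is chosen to guarantee the transistor saturates.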
H: Should I use resistors for LED Hi all, this might possibly be the silliest question you have come across, but it's bugging me. Recently I bought a 1 W LED, and when I enquired with the vendor he informed me that the maximum allowed voltage is 3 V. So I was thinking of connecting three LEDs in series across a 9 V battery as shown in the above figure. Here the 9 V of the power source is equal to the total voltage dropped by the LEDs. So my question is: do I really need to use a current limiting resistor for the LEDs in the above circuit? Is it safe to use this circuit without resistors? Thanks in advance. P.S: Apologies if it was too silly. AI: You should always have a way of defining the current through an LED. I prefer to call the resistors "current defining" rather than "current limiting" - current limiting sounds as if it is just a protection mechanism. The resistors are there to set the desired current level. Without them the current will depend upon the particular LEDs, the battery level and temperature. To use just a resistor to set the current level you need to provide a certain amount of excess voltage. A 9 V battery only provides 9 V when it is new (actually it may be slightly more than 9 V), but as it runs down the voltage will drop. An alkaline battery is not considered run down until it reaches 0.9 V per cell; for a 9 V battery with 6 cells this is only 5.4 V. If you put 3 LEDs in series they would probably dim appreciably when the battery was only 10% discharged. You could instead use 2 LEDs in series, which would drop at most 6 V, and then use a resistor to drop the remaining 3 V. For example, a 30 ohm resistor would cause about 100 mA to flow when the battery was new, dropping to a few tens of mA as the battery discharged. If by 9 V battery you mean the normal 9 V battery, they are not designed for such currents and are not a very good source of power for this application. At low currents they can provide about 500 mAh, but at 100 mA they will probably provide less than half of that. A resistor drops the voltage by throwing away the energy as heat; an electronic controller is more efficient and will get better life from the battery.
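A quick worked version of the answer's own example (using the ~3 V per LED from the question and the ~100 mA target mentioned above): with two LEDs in series, \$ R = \frac{V_{bat} - 2V_F}{I} = \frac{9 - 6}{0.1} = 30\,\Omega \$, and that resistor dissipates \$ I^2R = 0.1^2 \times 30 = 0.3\,W \$, so a 0.5 W part would be a sensible minimum.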
H: Question on diode in the energy storing circuit Please consider the following circuit. For this case, is my following notion correct? When v(t) is greater than the diode turn-on voltage (v_d), a forward current (i_d) flows and energy is harvested. When v(t) is less than v_d, i_d is zero and no energy is harvested. Thus, the amount of harvested energy depends on i_d only during the intervals when v(t) is greater than v_d. For example: the first signal's average power (E[|v(t)|^2]) is less than the second one's. However, the first one turns the diode on for a longer time than the second (that is, the first signal's total red-bracket time is longer). In this case, although the second signal has more average power, the first signal harvests more energy - right? Thank you for reading my question. AI: The answer to your question is more or less yes, but I'm not sure you're asking the right question. 1) Only when there's a voltage across and a current through the load is energy being delivered to the load, which you've drawn as a capacitor. 2) I_d is zero when the source voltage is too low. 3) The amount of energy delivered to the load depends on i_d and v(t)-v_d. The input signal does not 'have a power', as the power it delivers depends on the load, and the load the signal sees changes with the load's voltage, due to your diode. The input signal has a voltage and an output impedance, the latter complicating what happens to the voltage that appears across a load when a current flows.
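One hedged way to write this down (assuming an ideal diode with a fixed drop \$ v_d \$ and ignoring the source impedance, neither of which is specified in the question): the harvested energy is \$ E = \int_{T_{on}} v_C(t)\, i_d(t)\, dt \$, where \$ T_{on} \$ is the set of times for which \$ v(t) > v_d + v_C(t) \$ (the red-bracket intervals). Both the length of \$ T_{on} \$ and the current that flows during it matter, which is why a signal with lower average power but longer conduction time can deliver more energy to the capacitor.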
H: A problem regarding connecting the grounds within two separate circuits Figure 1 I am trying to design the above circuit for my project to create a zero crossing detector as shown in the above figure. This configuration was proposed originally by the member named "Transistor" in this community. But the problem is I cannot connect the grounds at Point A and Point B as shown in the above diagram, due to the nature of my full circuitry. My full circuit is shown below Figure 2 simulate this circuit – Schematic created using CircuitLab 1) The red squared portion is a different section of my full circuit; its functionality does not need to be understood and it has nothing to do with my zero crossing detector circuit. But it plays a significant role in preventing me from implementing the circuit shown in the previous image. 2) I have a +12V and -12V power supply with the common point in the middle acting as ground (POINT B). 3) My AC source is 3V 50Hz, providing AC signals to the red squared portion of the circuit. But I am also using that particular AC source to supply power to my rectifier circuit as shown in the above image. 4) I have connected my AC source to the common point of the two +-12V supplies (I have to do this to provide a reference). 5) Now comes the problem; please compare figures 1 and 2. In figure 1, my point A has been connected to point B (the common point of the two +-12V sources). But in figure 2, point A cannot be connected to point B since point C has already been connected to point B to provide a return path for the AC current. How can I change my circuitry in figure 2 so that the results in figure 1 can be achieved? AI: One means of solving the problem is to replace OA3 with a precision full-wave rectifier. Figure 1. For negative inputs, the first section operates as a closed-loop inverter (A=-1) and the second stage is simply a buffer for the positive output. When the input signal is positive, the first op-amp output remains saturated near ground and the diode becomes high-impedance, allowing the signal to pass directly to the buffer stage non-inverted. The composite effect is a full-wave rectified waveform at the output of the buffer. Source: Linear. Another option would be to add a second LM339 comparator. The outputs can be paralleled as they are open-collector type (as far as I remember). simulate this circuit – Schematic created using CircuitLab Figure 2. Dual comparator option. The two pots could be replaced by a resistor chain.
H: How to correct Warning: array subscript has type 'char' [-Wchar-subscripts] Hi, I wrote a function to reset a character array in Atmel Studio as below: char PacketData[50]; void Reset(){ for (char i=0; i<=50;i++) PacketData[i]='\0'; }; But I always get this warning when compiling the program: Warning: array subscript has type 'char' [-Wchar-subscripts] Does anybody know how I can remove this warning? AI: The warning is fairly self-explanatory. One of your array subscripts is a char. Specifically: for (char i=0 ... PacketData[i] = ... Notice how i, your array subscript (index), is a char type. The reason this is bad is that chars are often unexpectedly signed, which means you can end up with negative subscripts if you don't realise that - i.e. if you didn't realise it was signed, instead of accessing index [128] you would be inadvertently accessing index [-128]. You should be able to fix the warning by simply replacing char i with unsigned char i if working on an 8-bit processor. For a 32-bit processor, you are better off using an unsigned int as it will be much more efficient. Alternatively, given your code doesn't ever set i to be negative, simply ignore the warning. However, make sure you check your for loop conditions. Arrays are 0-indexed, meaning an array declared with a size of [50] has elements numbered 0 through 49. Your for loop does i <= 50, which means it will try to access a non-existent element at index 50. You should change that to i < 50. As a side note, why reinvent the wheel: memset(PacketData, '\0', sizeof(PacketData)/sizeof(PacketData[0])) (for a char array, sizeof(PacketData) alone gives the same byte count).
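Putting both fixes together, here is a minimal corrected sketch (the PACKET_LEN macro is just a convenience name introduced here, not part of the original code):

#include <string.h>

#define PACKET_LEN 50

char PacketData[PACKET_LEN];

/* Clear the packet buffer: unsigned loop index, and stop before index PACKET_LEN. */
void Reset(void)
{
    for (unsigned char i = 0; i < PACKET_LEN; i++) {
        PacketData[i] = '\0';
    }
}

/* Equivalent one-liner: memset(PacketData, 0, sizeof PacketData); */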
H: Adding a constant load to the input of an Opamp I was looking at this circuit, and there was a resistor connected to ground after the output of a signal generator (the 50 ohm on the top left). The explanation was that adding it allows the signal generator to see a constant load. Note: on the right is an op-amp. Can someone explain what that means or why it's added? AI: Short answer: because the signal generator was designed to work with a 50 ohm load. The reasons behind this constraint dive into the complex world of transmission lines, impedance matching and a lot of complicated mathematics. What you need to know is that without this 50 ohm load resistor, your signal generator will not behave as expected.
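For completeness, and going slightly beyond the answer: the standard transmission-line result is that the reflection coefficient at a termination is \$ \Gamma = \frac{Z_L - Z_0}{Z_L + Z_0} \$. With \$ Z_L = Z_0 = 50\,\Omega \$ we get \$ \Gamma = 0 \$, so nothing is reflected back towards the generator and its output amplitude stays as specified, which is what "seeing a constant load" buys you.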