H: Electronics riddle: After dropping on the floor, LED flashlight only works with a 3V supply instead of its normal rated 1.5V I'll keep it brief. I've dropped a flashlight on the floor, after which the casing broke a bit. The flashlight normally works with a single 1.5V 'D' battery. I was looking into it and tried these steps in chronological order (number 1 is the one I tried first): 1) Connected a 1.5V 'D' battery (with juice left in it) directly to the circuit with no luck. 2) Connected my power supply set at 3V (that's the MIN voltage I can set it at) directly to the circuit and the LED worked. 3) Connected two 1.5V 'D' batteries as a test, and it worked again. So after the drop the LED lights up only with two batteries at a total of 3V instead of its original 1.5V. There is nothing wrong with the PCB (it's a very basic design); to test this I repeated 1-3 with wires directly touching the LED. Same results again. So I guess something must have 'changed' in the LED. Is this possible? It's all a bit strange and interesting at the same time :). Some photos follow. This is my first post here, so I hope all is clear! AI: If the flashlight worked on a 1.5 V D-cell it has to have a DC-DC boost converter, otherwise it could not make an LED light up, as most LEDs need at least 3 V for that. The drop could have caused a fault in the DC-DC converter. Depending on what breaks in such a DC-DC converter and how, the input voltage might still reach the output. In that case, applying 3 V at the input might indeed light up the LED. The circuit you can see might be simple because it is just a switch. My guess is that there is a more complex circuit close to the LED.
H: What is this chip? Labeled "ATMEL 60824C08" This chip came from a box of assorted ICs from ebay. I have looked up the code on the top and I can't work out what this actually is. AI: It's a 24C08 EEPROM memory chip. The 608 is a date code for 2016, week 08.
H: Add aux input to Audio Amplifier circuit I have bought a ready-made audio amplifier using the TDA7498 chip and now I need an AUX input other than the default one. Is there any circuit for this? https://www.st.com/en/audio-ics/tda7498.html https://handsontec.com/index.php/product/tda7498-2x100w-class-d-audio-amplifier/ AI: It is impossible to see what's optimal because we do not know the exact schematic. If you are a beginner, do not modify anything on the board; insert a mixer at the input, as already suggested by others. The simplest possible mixer is to connect 2 signals to the same input through a few kOhm resistors. You easily lose 50% of the input signal voltage, but that can still be fixed by cranking the volume upwards. I have used 3.9 kOhm resistors in series when two line-level signals had to be connected to the same input. The signal sources gave about 2Vpp and less than half was needed, so the resulting 6dB attenuation wasn't a problem. I made a cable which had the series resistors. Using 2 different resistors you can get some control over the balance of the combined signals. But the sum of the resistors must be a few kOhms. If you need continuous adjustability, you can use a few kOhm linear potentiometer instead of 2 resistors. The wiper is connected to the amp input. A stereo potentiometer adjusts both channels. If you cannot understand what the text above means, do not try it. Get some local help! A properly designed summing mixer with independent volume potentiometers, and possibly tone controls too, gives much more flexibility, but the cost of a home-made 2-resistor cable is unbeatable.
H: Is a STM32F103 able to handle 74.25MHz from an image sensor? So I have this project in my mind where I want to read pixel data from an image sensor via a microcontroller (e.g. STM32F103) and then display the image on a screen. The image sensor I have in mind is the AR0103CS which is capable of max. 1280 x 960 @ 45 fps. I am aware that the Cortex-M3 is not able to handle 45fps but I am pretty sure that one or two pictures per second are possible. Now there is a problem: The image sensor expects either a direct clock input of 74.25MHz or something in between 6 - 50 MHz (the AR0103CS would then increase this input clock to 74.25MHz with its own built-in PLL). In either case it is going to put out one pixel of data (12-bit parallel) after the other in a progressive scan-line fashion at a rate of 74.25MHz. I currently don't know if the STM32F103 is able to generate something above 50MHz as an output clock (I think the GPIOs max out in high-speed mode at 50MHz)? Is there a way to route a timer to a pin which could then be used as the input clock? Even if the STM32 isn't able to create 74.25MHz for the image sensor, I can still go with something lower in the range of 6 - 50MHz as the mentioned PLL will generate the necessary system clock for the sensor. But in this case the Pixel-Clock-Output signal from the sensor (PXCLK) is still going to be 74.25MHz. This means I have to be able to read values from 12 GPIO pins at a rate of 74.25MHz for a short time frame (16.55ms to be exact). The way I thought this could work is to create an interrupt for a specific GPIO which is in input mode and receives the PXCLK signal from the sensor. Every time the input value goes from low to high, an interrupt is generated. In this interrupt handler I then read from 12 GPIO pins. In the 8-bit uC I worked with, the values of 8 GPIOs were combined into a single register such that one is able to read them in parallel. As the STM32 is a 32-bit CPU, I don't know if there is a port with 12 or more GPIOs which can be read in parallel? Maybe there is another kind of hardware on the STM32 which is more high-speed, because I assume doing this with GPIOs is not the best way possible. Anyway, is there someone here who has more expertise with this controller and knows what it is capable of? AI: No, it simply is not capable in any way. The F103 MCU has a maximum frequency of 72 MHz so it cannot do anything past that, and it has no DCMI camera interface to assist in receiving parallel video data either.
H: How to analyze this voltage doubler circuitry? What is this circuitry used for? How should I analyze it? (It is inserted in the capacitor voltage doubler circuitry between the two stages) AI: Provided that \$R_1\$ is a pull-up connected to a voltage larger than \$V_{TH,N_2}+V_{D,N_2}\$, this circuit behaves as a charge pump. In phase one (\$u_1=low\$), \$N_5\$ and \$N_6\$ are off, and \$N_2\$ is on. Hence the capacitor \$C_2\$ is charged to the potential connected to the drain side of \$N_2\$. Once it is charged, \$u_1\$ can be pulled high (phase two). At this moment, \$N_5\$ and \$N_6\$ are conducting. \$N_5\$ ensures that \$N_2\$ is off, therefore the capacitor cannot discharge through it, and \$N_6\$ grounds one of the plates of the capacitor, forcing it to reverse its polarity and discharge towards the node \$N_3\$.
H: Is this output normal for a capacitor filter? I have a question regarding oscilloscope outputs and capacitor filters. I'm trying to build a voltage regulator. I've already calculated and tested the transformer and rectifier, and also the filter part. But when I looked at the output waves on the oscilloscope, the output is straight. Like this: From the photos I have seen online: The output waves are very different, so is this a normal output from the filter? Take note though that I have a very high value for the farads. The equation I used to solve this is: C = Iload / (2 * frequency * Vpp) where Iload = 1 A, f = 1 Hz, Vpp = 1 V. The result is 0.5 F or 500000 uF. Is it an acceptable output from the capacitor filter? Here is my schematic so far: AI: To see the ripple on the output you must add a load in parallel with the capacitor. Add a resistor that will draw about 1A at your desired output voltage. Also, you must adjust the timescale of your simulation so that you can see the ripples. Have your AC source start at 0.1s and simulate for just one second.
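As a quick cross-check of the ripple formula quoted above, here is a minimal sketch (the 1 Hz / 1 V figures are simply the asker's values; the 60 Hz case is added only for comparison and is an assumption):

```python
# Ripple-capacitor estimate for a full-wave rectified supply:
# C = I_load / (2 * f * V_ripple)   (a half-wave rectifier drops the factor of 2)
def filter_cap_farads(i_load_a, f_hz, v_ripple_pp):
    return i_load_a / (2.0 * f_hz * v_ripple_pp)

print(filter_cap_farads(1.0, 1.0, 1.0))    # 0.5 F = 500000 uF, the value in the question
print(filter_cap_farads(1.0, 60.0, 1.0))   # ~8300 uF for a 60 Hz mains rectifier
```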
H: Choosing resistors in series to make standard values I am working on designing a precision analog filter for a university electronics project. The resistance values I am using (to meet the filter specifications) are not E96 values, so I am combining smaller value resistors in series to make the non-standard resistor. Is there any guidance for choosing these resistor values? For example, to create an 11.9 kΩ resistor, I could use a 100 Ω resistor and an 11.8 kΩ resistor; or I could use a 7.15 kΩ resistor in series with a 4.75 kΩ resistor. Intuitively I feel having the resistors a similar size may be better, but I am not sure if that is correct, or why it may be the case. Also, would this change significantly based on the precision of my resistors? For example if I moved from 1% tolerance resistors to 0.5% or 0.1% values? AI: If you are making these combinations based only on the marked (nominal) value of the resistors then you may not like the results. For example, an 11.8k\$\Omega\$ resistor with a 1% tolerance can actually have a resistance that is +/- 118\$\Omega\$ different from the marked value. Adding a 100\$\Omega\$, 1% resistor in series will give you a nominal resistance of 11.9k\$\Omega\$ +/-119\$\Omega\$. So, the actual resistance value could be as low as 11.781k\$\Omega\$. If you really need 11.900k\$\Omega\$ you will need to buy resistors with a much smaller tolerance or measure each one. Oh, and be sure to calculate the accuracy of your ohmmeter.
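To make the tolerance argument concrete, here is a minimal sketch using the values from the answer; worst-case errors of resistors in series simply add:

```python
# Worst-case value of toleranced resistors connected in series.
def series_worst_case(parts):
    """parts = [(nominal_ohms, tolerance_fraction), ...]"""
    nominal = sum(r for r, _ in parts)
    error = sum(r * t for r, t in parts)
    return nominal, nominal - error, nominal + error

print(series_worst_case([(11800, 0.01), (100, 0.01)]))    # (11900, 11781.0, 12019.0)
print(series_worst_case([(7150, 0.01), (4750, 0.01)]))    # (11900, 11781.0, 12019.0)
```

Note that with equal percentage tolerances the worst-case spread is the same however the total is split, which supports the point that only tighter-tolerance parts (or measuring each resistor) really help.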
H: RTD Temperature Reading I developed a circuit to read a temperature value in the range [30, 60] °C with a PT100 using a PIC32. The circuit has a 1 mA current source connected to the PT100. The PT100 is then connected to a differential op amp to scale the voltage to the 3.3 V range of the ADC. The other input is the value of V across the PT100 at my minimum temperature (30 °C) (since V = RI, I calculated the PT100 resistance at 30 °C with the Callendar-Van Dusen equation and multiplied it by 1 mA). So I have Vo = Gain*(PT - PTmin). The question is: now that my output is [0, 3.3] V, can I calculate the temperature in the PIC by just linearizing the ADC value (something like 1023 → 60 and 0 → 30), or do I still need to calculate the resistance of the PT100 and from that get the temperature? I'm trying to get at least 1 °C accuracy in the readings, if that makes a difference. Thanks AI: As a quick check, you can assume that if you scale the reading such that 30°C and 60°C are exact, the maximum error will occur near 45°C. So you can calculate that error and see if it is close enough for your purposes. Using the DIN curve \$\alpha=0.00385\$, 30°C is 111.67\$\Omega\$ and 60°C is 123.24\$\Omega\$ so mid-scale is 117.455\$\Omega\$. The actual resistance for 45°C is 117.47\$\Omega\$, for a difference of 0.015\$\Omega\$ or about 0.04°C, or roughly doubling the error of a 1/10 DIN sensor.
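A small sketch of the answer's mid-scale check, using the DIN PT100 resistances quoted above:

```python
# Linear interpolation between the 30 degC and 60 degC endpoints,
# compared with the true DIN-curve resistance at 45 degC.
R30, R60, R45_true = 111.67, 123.24, 117.47   # ohms, values from the answer

R45_linear = (R30 + R60) / 2                  # 117.455 ohm at mid-scale
err_ohm = R45_true - R45_linear               # ~0.015 ohm
slope = (R60 - R30) / 30.0                    # ~0.386 ohm per degC over this span
print(round(err_ohm / slope, 3))              # ~0.04 degC worst-case linearisation error
```

So a straight-line mapping of the ADC reading onto 30-60 °C costs well under the 1 °C target.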
H: Car Faraday cage I have been reading about Faraday cages in the last couple of days. My reading raised another question about charge accumulation on a car frame. Air friction can raise the amount of charge on the surface of the frame, which acts like a Faraday cage. That charge can escape by giving a shock when someone touches it. However, if no one touches it for a very long time after it has accumulated charge, will the charge remain there so that whoever eventually touches it discharges the car by getting a shock, or will it discharge in another way? AI: All charge imbalances will eventually equalize since no perfect insulators exist (unless maybe floating in a vacuum). It's just a matter of how long. Everything also becomes a conductor at high enough voltage, so if you keep accumulating charge faster than it can dissipate, eventually the potential difference between it and the next nearest thing in the universe will become high enough to arc to it in order to equalize, whether through air or vacuum, if something doesn't touch it first.
H: Terminal resistor function on JTAG SMT2 schematic I'm studying the Digilent JTAG-SMT2 datasheet, in particular the schematic of Example 3 (Interfacing a Zynq-7000 while retaining the Xilinx JTAG Header). I have a doubt about the function and value of the highlighted resistors. Can someone help me to understand it? AI: I think they are to let the external JTAG programmer remain connected without disrupting the JTAG line between the SMT2 and Zynq. The SMT2 can overpower the external JTAG with the resistors there. Unsure how the SMT2 doesn't always overpower the external programmer though. I would have put resistors to allow the external programmer to overpower the SMT2 because you can unplug the external programmer and make it go away.
H: Calculate ideal unloaded rotor speed I have the following problem: MY ATTEMPT Okay, I've found two formulas that will definitely help me solve this problem: The first is \$S=\frac{N_s-N}{N_s} \$, where \$S\$ is the slip, \$N_s\$ is the synchronous speed, and \$ N\$ is the rotor speed. The second formula is \$N_s=\frac{120f}{P}\$, where \$ f\$ is the frequency and \$P\$ is the number of poles. Combining these two formulas and isolating \$N\$ we get: \$N=\frac{-120Sf}{P}+\frac{120f}{P}=\frac{-120 \cdot 0.05\cdot350 \text{Hz}}{8 \:\text{poles}}+\frac{120\cdot 350\text{Hz}}{8\:\text{poles}}=4987.50 \: \text{RPM}\$ So a rotor speed of 4987.50 RPM. But is this what is meant when they ask for the "ideal unloaded rotor speed", or have I calculated something different? I hope someone can clarify this for me. AI: Slip in an induction motor is what makes the motor capable of producing torque. Nominal slip is the slip that the motor has when it is producing the nominal, AKA rated or nameplate, torque. In the normal load range, the slip of an induction motor is a fairly linear function of torque. In most engineering contexts, "ideal" means neglecting all losses and characteristics that make things complicated. However, neglecting those things does not mean that they can always be neglected or forgotten. An unloaded motor is one that has nothing connected to the motor shaft. That does not mean that no torque is being produced; there are still mechanical losses consisting of bearing friction and aerodynamic drag inside the motor and due to the cooling fan that may be on the end of the shaft opposite the drive end. Those losses are neglected for an "ideal" unloaded motor. That means that the speed of an ideal unloaded induction motor is the synchronous speed. Mention of the slip could be considered something thrown in to make this a trick "question." Asked of someone who should have learned a lot about motors including their losses, it is a question that is designed to make the student think carefully about what is being asked.
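As a minimal numeric sketch of that conclusion (the 350 Hz frequency and 8 poles are taken from the attempt above):

```python
# Ideal (lossless) unloaded induction-motor speed equals synchronous speed.
def synchronous_speed_rpm(f_hz, poles):
    return 120.0 * f_hz / poles

ns = synchronous_speed_rpm(350.0, 8)
print(ns)               # 5250 RPM: the ideal unloaded speed
print(ns * (1 - 0.05))  # 4987.5 RPM: the speed at *rated* load with 5% slip
```

In other words, the 4987.5 RPM figure from the attempt is the full-load speed, not the ideal unloaded speed.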
H: Couplers and dividers as baluns Can every divider and coupler that provides 3dB amplitude division and 180 degrees phase difference be used as a balun? Rat-race couplers and Lange couplers provide the two conditions mentioned above. If I place a λ/2 transmission line after a Wilkinson divider, the overall structure satisfies the conditions mentioned above. My question is, why do we use a different name for these kinds of devices if I can already implement them with couplers and dividers? AI: why do we use a different name for these kinds of devices if I can already implement them with couplers and dividers? We often name things for their function, even when we already have a name for their construction. An "emitter follower" is just another name for a BJT used in a particular way. A "quarter wave stub" is another name for a transmission line connected in a particular way. And a "balun" is a transformer or other device used for a particular purpose (to convert between balanced and unbalanced signals).
H: How to tell flat flexible cable (FFC) alignment when inserting? I have an MSR with an 8 pin FFC with contacts on one side. I don't know if there is a top and bottom and, if so, how to tell the orientation. There is writing on the contact side, which I assume indicates alignment, that looks like <- D1| D2| D3| D4| D5| Does this indicate that with the contacts and writing facing up, the left most pin is pin 1? The header designed to accept it is an 8 pin FH52-8S-0.5SH, with contacts on top and bottom. The header has a triangle on one side to indicate pin 1 on the header. Am I correct in assuming the FFC should be inserted with the contacts up so the left most contact is on the same side of the header as the triangle? In general, is there a rule of thumb that contacts should always go up or down or does it just depend on the design? AI: The header designed to accept it is an 8 pin FH52-8S-0.5SH, with contacts on top and bottom. Check the datasheet available here. This connector has contacts on the bottom only. In general, is there a rule of thumb that contacts should always go up or down or does it just depend on the design? It depends on the design. Most connector series include variants with contacts on top and on bottom, providing designers with flexibility in the mechanical orientation of the components. Many series also include a variant with contacts on both top and bottom. That does not mean you can insert the FPC either way. Although it is technically possible, it most likely simply means a 2-sided FPC is used to reduce the overall width.
H: Overall Q of a fourth order LPF and higher order LPF cascaded If I end up cascading low pass filters to derive higher order filters, how should I calculate the Q of the overall filter? For example, if I have the following transfer function $$ H(s)=\frac{1}{\left(s^2 +\frac{s}{Q_1}+1\right)\cdot\left(s^2 +\frac{s}{Q_2}+1\right)} $$ then what should the Q of the LPF be? I think the Q of the last stage should dominate the overall Q, so it should be \$Q_2\$; am I correct? In general, if I were to cascade more stages, what would the overall response be? AI: Each 4th order filter is itself composed of a pair of 2nd order sections. Typically, one of those sections has a fairly high Q, the other has a low Q (overdamped). Thus, the rising response of the higher Q section below resonance cancels the early roll-off of the lower Q section; above resonance, both filters are rolling off, with a relatively sharp corner at resonance. An 8th order filter would NOT be composed of two 4th order filters but of four 2nd order filters. Typically two would be high-Q (underdamped) to different extents, while two would be overdamped - each matching its under-damped counterpart. (As pointed out in a comment, there are other realisations, especially in passive L-C filters: this answer is biased towards active filters such as Sallen & Key. Even in the FDNR implementations I've seen, it is possible to distinguish sections with the same spread of Q values). Exactly where the resonances and Qs are placed determines the filter alignment; choosing these is commonly done by following polynomials such as Bessel, Butterworth, Chebyshev or Cauer polynomials according to the compromise you want between phase response, passband flatness, or steepness of attenuation. See for example the 2nd, 4th, 6th and 8th order polynomials under "Normalized Butterworth polynomials" in the above page, where for example the 4th order polynomial is given as: \$(s^2+0.7654s+1)(s^2+1.8478s+1)\$ Characterising high order filters is done in terms of these polynomials, rather than a specific overall Q value.
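As a sketch of how the per-section Q values fall out of those factors: each normalised second-order section has the form \$s^2 + s/Q + 1\$, so Q is just the reciprocal of the middle coefficient.

```python
# Q of each normalised 2nd-order Butterworth section (even filter orders only).
# e.g. 4th order factors as (s^2 + 0.7654s + 1)(s^2 + 1.8478s + 1).
import math

def butterworth_section_qs(order):
    return [1.0 / (2.0 * math.sin((2 * k - 1) * math.pi / (2 * order)))
            for k in range(1, order // 2 + 1)]

print(butterworth_section_qs(4))  # [~1.31 (underdamped), ~0.54 (overdamped)]
print(butterworth_section_qs(8))  # four sections, Q from ~2.56 down to ~0.51
```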
H: Differential to single-ended converter Originally I wanted to setup a simple curve trace circuit to measure the characteristics of a BJT I salvaged. This circuit operates by measuring the voltage and current through the transistor for different applied voltages on the base and collector. For this, I found a simple tutorial which uses the same USB oscilloscope that I have (https://www.instructables.com/id/Semiconductor-Curve-Tracer-With-the-Analog-Discove/). The hitch arose when trying to implement the circuit on the breadboard; I found that their circuit measures the differential voltage across the collector resistor to determine the current. This wouldn't have been a problem for me until I (rather permanently) connected a BNC probe adapter to my USB oscope, which ties the negative inputs of all BNC probes together. So as a simple remedy I figured I could just build a differential to single-ended converter to avoid the issue of ground loops in my probes. For even more background, I did a quick google search on the terms 'differential to single-ended converter' and it gave me this article as a top result (https://www.analog.com/en/analog-dialogue/raqs/raq-issue-145.html). In my haste to get to my primary project, I threw this circuit together on a breadboard. My results were decently accurate for differential signals, but I found the common-mode signals weren't completely attenuated. Great, so I guess this has something to do with the non-idealities of the opamps. Then I got to thinking back on what I know about circuits, and I realized that an OPAMP is essentially a differential to single-ended converter itself. I quickly found a 'differential amplifier' circuit that would essentially achieve the same thing with only one opamp (https://www.electronicshub.org/differential-amplifier/). So my question (finally) is, what advantages does the dual opamp circuit offer versus the single opamp solution? Furthermore, would either one be better for simple (near-DC) current measurement? Thanks in advance AI: The common-mode rejection ratio (CMRR) is limited by many things. I expect in your case the tolerance of the resistors is the major problem. To get a rejection of 40dB in common mode the gain from each of the two input signals must be the same within 1%. This requires close tolerance resistors and for dynamic signals even the capacitances need to be matched. The circuit you are using currently has the same input impedance on each side, the single opamp circuit a different input impedance so any source impedance will cause a common-mode voltage to be present. The classic differential amplifier actually uses 3 amplifiers and often called an instrumentation amplifier. It has infinite (theoretically) input impedance. It is possible to get high input impedance with 2 amplifiers with this arrangement: With all variations, you not only need to be aware of the balance that can affect the CMRR but also the absolute value of the common-mode voltage. If there is too much one of the amplifiers could exceed its capability before the output reaches its limit.
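For a rough feel of how resistor tolerance limits the single-op-amp difference amplifier, a commonly quoted worst-case estimate (a sketch, not taken from the linked article) is CMRR ≈ (1 + G)/(4·t) for differential gain G and resistor tolerance t, assuming an otherwise ideal op-amp:

```python
# Worst-case CMRR set purely by resistor mismatch in a difference amplifier.
import math

def worst_case_cmrr_db(gain, tol):
    return 20.0 * math.log10((1.0 + gain) / (4.0 * tol))

print(round(worst_case_cmrr_db(1.0, 0.01), 1))    # ~34 dB with 1% resistors
print(round(worst_case_cmrr_db(1.0, 0.001), 1))   # ~54 dB with 0.1% resistors
```

This is consistent with the answer's point that percent-level resistor matching, not the op-amp itself, is usually what limits common-mode rejection at these gains.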
H: Microcontroller power supply problem I'm using an LM7805 regulator in my circuit to supply power for my microcontroller (ATtiny13) and another module. I have to decrease the 5V down to 3.3V for the microcontroller, so I use two resistors (1k & 2k) as a divider to drop the voltage from 5V to 3.3V, but sometimes the microcontroller does not work well (noise occurs on the GPIO). I measure the voltage between VCC and GND of the micro and it's about 3.0V, which means I have a 0.3V voltage drop caused by the microcontroller. I want to know how I can calculate the maximum resistor values (minimum current) to supply the micro. Of course we have a voltage drop with any value of resistors, but which parameter of the microcontroller tells us how much voltage drop is acceptable for it? Is there any standard parameter for it? AI: Use an LDO regulator rather than a resistive divider. For example, the Microchip MIC5504-3.3YM5. All you need is one tiny part and two small 1uF ceramic capacitors. The resistive divider will cause unstable voltage when the MCU draws different currents, and it wastes power. This part (and there are many other choices, this is just a particular inexpensive option) will accept up to 5.5V in and supply 3.3V out even if the output current is relatively large. Inspired by @ChrisStratton, here is one way to think of how a linear LDO works (with apologies to Winfield Hill et al.) "Regulator man" adjusts the pot to maintain the output voltage at 3.3V. If the load is heavy the pot gets turned to a low resistance, if there is little load then the pot gets turned to a very high resistance. All Regulator Man can do is turn the pot to almost zero resistance, so the output voltage can never quite reach the input voltage. Iq is just a bit of current the regulator needs to work.
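To see numerically why the divider makes a poor supply, here is a minimal sketch that models the 1 k / 2 k divider as its Thévenin equivalent and applies a few example load currents (the current values are assumptions for illustration):

```python
# Output of a resistive divider feeding a load, via its Thevenin equivalent.
def divider_output(vin, r_top, r_bot, i_load_a):
    v_th = vin * r_bot / (r_top + r_bot)       # 3.33 V for 5 V and 1k/2k
    r_th = r_top * r_bot / (r_top + r_bot)     # ~667 ohm source impedance
    return v_th - i_load_a * r_th

for i_ma in (0.0, 0.5, 1.0, 5.0):
    print(i_ma, "mA ->", round(divider_output(5.0, 1000.0, 2000.0, i_ma / 1000.0), 2), "V")
```

Even a 0.5 mA draw pulls the rail down to about 3.0 V (which matches the measurement in the question), and a few milliamps collapses it completely; an LDO does not have this problem.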
H: What material is used for the PCB traces? Are the traces of a PCB always made of copper, or can we use another conducting material? Also, is there another conducting material that can be used for the cladding layer instead of copper, or do we always use copper clad? AI: The PCB process generally assumes copper conductors. This is somewhat by definition, because all the equipment is designed around a copper etching and plating process. And the technology - the common set of equipment, standards, practices, and knowledge - also generally assumes a copper process. This is the "technology brand" known as printed circuit boards aka PCBs, for now at least. The reasons for this are described in the comments; copper conductors are superior for many applications. That being said, other conductors can be printed/plated/etched and otherwise attached to various substrates. For example, here is a silver-plated process with ENIG finish on a ceramic substrate produced by Remtec. They certainly look like PCBs to me, but the manufacturer does not refer to them as such.
H: Difference between "cladding" and "traces" in PCBs? I am kind of confused about the difference between copper traces and copper cladding in PCBs. Are they the same thing? Is it necessary for a PCB to have them? AI: Cladding is the copper layer applied to the surface of a board substrate: it's the starting material. This layer is selectively etched away to form traces in the finished PCB.
H: How does a non-designer understand a chipset's functionality? I once met a person who told me that, as a part of his work, he was testing and characterizing analog chips which include all sorts of components like ADCs, DACs, regulators etc. embedded in one chip. But he said he doesn't know how to design such a chip, nor anything at all about VLSI design. How can such complex transistor-level circuit functionality be understood by a person who only wants to test it? Do they provide extra diagrams? And let's say one wants to check only the ADC components, how is that possible in such a complex system? AI: All chips, from the very simple to insanely complex, have functional specifications. These describe what the chip does. The IC designer will make a circuit to implement that functional spec, while a validation engineer or test engineer will develop a set of tests to check the implemented chip against the same functional spec. It's not necessary for a test engineer or validation person to know 'what's inside'. In fact, good practice avoids having these folks know the implementation details, lest any design assumptions creep into their tests. In some very limited circumstances a test engineer may need to know a low-level detail, such as an I/O pad structure, to test it properly. They may also need to know about a larger block, like an A/D or D/A converter, in order to access its design-for-test capabilities. But otherwise they treat the chip as if it were a black box.
H: Finding each UART pin with multimeter I am trying to find out the pinout for a serial connection to a router. However, I wasn't able to differentiate the pins other than GND. Pinout: PIN | RESISTANCE (UNPLUGGED) | VOLTAGE (PLUGGED) 1. ? -> 730ohm to the ground -> 3.35-3.37v 2. GND 3. ? -> Infinite resistance -> 3.34-3.36v 4. ? -> Infinite resistance -> 3.32-3.34v As far as I have read online, Rx should have been 0 (low). However all 3 pins are high. I don't know if this is a different interface or something is broken. Edit: Added info about when these measurements were taken. UPDATE: I tracked the pins to the SoC and they go to a location where the UART pins of the SoC are. There are a few more findings: There was a marking near the 1st pin which indicates a Vcc pin, which I wasn't able to make sense of before. After checking the pins for a longer time, I realized that the 3rd pin fluctuates between 1.5-3.3v, which indicates it's Tx. Therefore, that makes the 4th pin Rx. Altogether, Jasen's answer is the correct one. AI: If forced to guess from the above: 1 as VCC. 3 as TX. 4 as RX.
H: Relay coil in series with resistor I want to control a relay RSB1A120BDS that works on 24V DC using my Arduino. But my power source that I use for other parts is 41V AC connected to a bridge rectifier. So my idea was to put a resistor in series with the relay coil to get the right voltage. This is roughly my setup: simulate this circuit – Schematic created using CircuitLab The transistor is a TIP 120 STM NPN-Darlington TO-220 60V 5A 65W. (I know now MOSFETs are better for switching, but I didn't know that when I bought the transistors.) I added a 220Ω resistor, because I read somewhere that this is necessary to limit current. I measure 3.27 V over R2 and 1.37 V between the base and the emitter. So we have 3.27 / 220 * 1000 = 15 mA base current. The datasheet says the average resistance of the relay coil is 1440 ohm and the rated operational voltage is between 19.2 V and 26.4 V. So then by using a resistor of 1 kΩ, the voltage over the relay coil should be 41 * 1440 / (1440 + 1000) = 24 and that should be alright. The flyback diode is a 1N4001 50V 1A. I removed it and measured it with my multimeter, and it seems to be working. (low resistance one way, very high resistance the other way) However, it didn't work. When I turn the transistor on, sometimes I measure 23.7 V over the relay coil, but most of the time I measure 0 V, and all 41 V are over R1. So maybe the relay coil is short circuited. I also hear a sound sometimes while I'm working with the multimeter; I think the relay is turning on and off. I thought maybe the issue is that I use pulsed DC instead of normal DC, and the peak voltage of 24 * sqrt(2) = 34 is too high. But this answer says that shouldn't be a problem. A thing is that 26.4 / 19.2 = 1.375 < 1.41 ≈ sqrt(2). So I can never pick a resistor R1 to keep both the peak voltage and the RMS within the rated operational voltage limits, but it's close. The relay makes a sound when I touch it with scissors. See this video. When I try the simpler circuit below (same diode and relay, different but same type of resistor), it does work as expected. No noise when I touch it with the scissors. simulate this circuit What is the problem here? AI: Your problem may be caused by any of the following: the transistor not being driven to saturation; the surge current, at the point of relay actuation, causing a higher voltage drop across the resistor; or a loose connection. This simpler circuit, not requiring dropping of voltage to the extent of 35V, may be tried out.
H: Transformer vs SMPS for bench supply One of the basic circuit to step down input voltage is to use a buck converter topology. Some power supplies reduce input voltage using non isolated type while others use isolated type (example use toroidal, EI transformer). If I am designing a bench supply, which should I prefer? AI: Mains (low) frequency transformer first is the old school way. Depending on the application, the whole supply can be extremely simple to design as the transformer already provides isolation. It results in a heavier unit because of the mains frequency transformer and unit cost is higher. A switching PFC is the modern day way. It involves significantly more design (usually 2 switching stages), but it's better in terms of the aspects discussed above. Also the PFC stage shapes the current drawn from the mains plug, making it that of a resistive load. I'm not familiar with the regulations with regards to this, but you may not be allowed to go above a certain power level without doing something for the current to look good (PFC). I guess I'd choose based on: Power level (nominal and peak). If it's high, the transformer can be massive and also you don't want to draw current that's not PF correct. How many units will be made, as to me there's a trade-off between development cost and unit cost. I understand I'm not providing a definitive answer here, but hopefully I'm helping you get a view of some factors involved.
H: Is it reliable to use old computer hard disk as an external hard drive to store data? I have a hard drive which cannot boot windows but works for even at higher transfer rates also its 6 years old, I found cheap enclosure SATA to USB 3.0 enclosure that works for even 100Mbps transfer rate. I have doubt will I lose data someday in the future? since the drive is old or the enclosure circuit could damage the HDD? AI: Your drive is not reliable - & - It MAY last for a long time. The answer to your question as the term reliable is usually understood is No!. The drive MAY last months or even years. But there is much less certainty that it will not die within months to a year than there is with say a one year old drive. You should ALWAYS backup data which you do not wish to lose irrevocably at any moment. Using the drive as a backup or a duplicate of another drive is an acceptable use. Use for sole storage of anything should be on the basis that you are completely prepared to accept its total loss. Keeping the drive reasonably cool may help longevity - either in a well ventilated space in a comfortable temperature room or perhaps in the air flow of a cooling fan. ___________________________________ Your drive will die! - All hard disks fail - typically between about 4 hours and 8 years after first use. I've seen several examples very near the low end (on the first day of use) and have a few at about the high end of that range. Your disk is past the end of what could be considered a reasonable lifetime. It may operate well for another six years - but or if failed at any moment you should not be at all surprised. Some statistics: Cloud storage provider Backblaze publish ongoing hard drive failure and reliability statistics for their storage array of over 100,000 drives with statistics for a range of brands. They have a number of pages providing statistics and discussion and more. Here is a good entry point to their site. They see failure rates in the 1 to about 2.5% per annum with some outliers. replacements are usually due to availability of increasingly data-dense newer drives and they are unlikely to have many drives as old as your one.
H: Pitfalls in charging a supercapacitor from a small solar panel I'm designing a circuit which occasionally requires small amounts of power (it's switched off most of the time). I want to use small solar panels to charge a supercapacitor, and the cap then serves as an energy reservoir in the absence of full sunlight. I have already set up a basic circuit with a EDLC supercap (VINAtech, 100F, 3V), a small solar panel (3V, 270mA) and a 1N4001 diode. It seems to work fine, the supercap voltage appears to stabilise at around 2.85V with the panel pointed at the sun, full sunshine and the panels clean. Such ideal conditions will be rare though, the panel may be shaded most of the time. I know that an MPPT charger would be more efficient but I want this to stay as cheap and simple as possible. simulate this circuit – Schematic created using CircuitLab In this schematic, V1 is the solar panel, C1 is the SuperCap and U1 is a generic boost converter based on a CE8301A50T. I'm not sure about the final circuit yet, but there will certainly be a microcontroller (Atmega16A) and an HD44780-based LCD with LED backlight. SW1 will only be closed for a few minutes at a time (no more than 15 minutes), very few times a day (probably not at all for several days in a row). Datasheets: Boost converter IC: https://www.mpja.com/download/ce830.pdf SuperCap: https://www.vina.co.kr/winko/data/product/SS_EDLC_VEC_3R0_107_QG.pdf Solar panel: Sorry, there is no datasheet for the solar panel. I just want to know if there might be any pitfalls I might have missed. The circuit will operate outdoors, i.e. it will have to survive temperatures of -25°C (-13F) with no problems. Unfortunately, manufacturers don't seem to test the behaviour of their supercaps at low temperatures (as far as leakage current or change in capacitance goes) so I'm not sure if there might be issues. I don't think temperatures will rise above 50°C (122F) in the summer. Any thoughts? Thanks! AI: Any thoughts? The devil will be in the detail and you have provided little. Start with defining your load and the current consumption profile. You also need to state how low the capacitor voltage can drop before your load fails on under-voltage. define peak currents in terms of amps and duration - i.e. the full load profile. Data sheets for everything is needed. Leakage currents for the supercap WILL be an important consideration. Anything that doesn't support your temperature range is going to make decisions on the circuits ability to fulfill your needs extremely dubious. Make no assumptions that something "should" work - prove it by due diligence. and a 1N4001 diode I'm unsure about this - I think you need to show your proposed circuit.
H: Why is the output DC voltage of full wave rectifier not equal to the RMS of the original AC input? https://www.electronics-tutorials.ws/accircuits/average-voltage.html https://www.electronics-tutorials.ws/accircuits/rms-voltage.html I understand that the RMS of an AC voltage is the magnitude of the equivalent DC voltage in terms of power produced,so I expected that when the AC voltage is rectified, the output DC voltage would be equal to the RMS of the original AC (assuming no other voltage drops for diodes.) When used to compare the equivalent RMS voltage value of an alternating sinusoidal waveform that supplies the same electrical power to a given load as an equivalent DC circuit, the RMS value is called the “effective value” and is generally presented as: V eff or I eff. In other words, the effective value is an equivalent DC value which tells you how many volts or amps of DC that a time-varying sinusoidal waveform is equal to in terms of its ability to produce the same power. V DC = 0.9 * V RMS AI: The \$\boxed{\text{average of }|x|}\$ is \$\color{red}{\text{not mathematically the same as the}}\$ \$\sqrt{\text{average of } |x|^2}\$ One computes an average value of a signal but the other computes the power associated with that signal.
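Spelling that out for a sine wave of peak \$V_p\$ (a short derivation consistent with the 0.9 factor quoted from the tutorial):

$$V_{avg}=\frac{2}{\pi}V_p\approx 0.637\,V_p,\qquad V_{RMS}=\frac{V_p}{\sqrt{2}}\approx 0.707\,V_p$$
$$\frac{V_{avg}}{V_{RMS}}=\frac{2\sqrt{2}}{\pi}\approx 0.9\;\Rightarrow\;V_{DC(avg)}\approx 0.9\,V_{RMS}$$

The rectified waveform still delivers the same power into a resistor (rectification does not change \$|x|^2\$), but its average value, which is what a DC-responding meter reads, is only about 0.9 of the RMS value.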
H: FQPF5N60C MOSFET turns on but limited current I am a software engineer by trade and an electronics hobbyist, so please be gentle :) I am using a FQPF5N60C N-channel MOSFET to switch on a 12V motor using a Raspberry Pi. The spec shows VGS(th) at 2 - 4V and I am feeding 4.47V into the gate, and the MOSFET indeed turns on, but it seems to be quite current-restricted, resulting in the motor turning very weakly (compared to when the motor is connected directly). Any idea what I am doing wrong? Thanks! EDIT: added schematic AI: The Rds(on) is only guaranteed with 10V drive so the 3.3V the Raspberry Pi can output is nowhere near enough (and neither is the 4.47V); also, that's a high-voltage (600V) MOSFET which isn't really well suited to switching a low-voltage 12V motor. For the same die size/cost you get much much lower Rds(on) by using a MOSFET with a lower maximum Vds, so usually you do not want to over-rate the Vds by too large a margin. Get a logic-level MOSFET rated for maximum Rds(on) with 3.3V or less drive (and be sure to put a diode across the motor). For example this one. Edit: the Vgs(th) is better used as an indication of when the MOSFET is almost fully "off" rather than "on". In the case of your datasheet, it's only conducting 250uA with (as much as) 4V in. Edit 2: Your drive circuit is okay, except you should have a series base resistor on the NPN (eg. 1K). You can simply move the R3/2N3906 emitter connection from +5 to +12 and you'll get sufficient drive voltage for that MOSFET or other more appropriate ones. There's no unusual danger to the Pi if nothing mechanically shorts. That particular MOSFET is still not great (2.5 ohms when cold, so maybe 3-4 ohms hot), so you can work out the voltage drop from your motor current draw. With 12V drive you could use a much better through-hole part like a IRFB7437PBF with 1200 times lower voltage drop. Don't forget the diode across the motor.
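As a rough illustration of that last point (the 1 A motor current is an assumption, not a figure from the question):

```python
# Conduction drop and loss in the switching MOSFET at an assumed motor current.
def drop_and_loss(i_a, rds_on_ohm):
    v_drop = i_a * rds_on_ohm
    return v_drop, v_drop * i_a        # volts lost, watts dissipated in the FET

print(drop_and_loss(1.0, 3.0))         # under-driven FQPF5N60C, hot: ~3 V and ~3 W
print(drop_and_loss(1.0, 0.0025))      # low-Rds(on) logic-level part: ~2.5 mV, ~2.5 mW
```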
H: What's the purpose of the components in this MOSFET driver? I see these double-ended transformer drivers in many places. I know the working principle of these gate drive transformers but still don't have an idea about the external components that I've marked in red. What's the purpose of the capacitor connected in series with the primary, how is the PNP BJT working here, and what's the purpose of these diodes and why two of them? Thank you. AI: What's the purpose of the capacitor connected in series with the primary The capacitor prevents DC current flowing in the transformer primary. DC current in the primary is to be avoided because it will cause the magnetic core to saturate and performance may seriously degrade. how is the PNP BJT working here It quickly turns off the MOSFET that follows it by actively dragging current from the charged gate. what's the purpose of these diodes and why two of them? Well you need at least one diode in order to produce the maximum gate drive voltage when turning on the associated MOSFET. Two diodes may be used to provide some dead-band area to avoid both MOSFETs conducting at once when the drive changeover occurs. Maybe have a look at this website for extra details about driving MOSFETs via transformers. There is also this app note by MAXIM.
H: Hall-Effect Current Sensor Output Voltage Difference (26.4 mV/A) I am using a hall effect current sensor. Its part number is ACS781LLRTR-050B-T from Allegro MicroSystems. In my circuit, I'm reading its output voltage to calculate current with MCU. With no load, I read output voltage of current sensor as 2215 mV and use this value as zero current value. When the load is connected, it draws 2.3 A which means the output voltage of the current sensor must be 2286 mV, because the sensitivity of the sensor is 26.4 mV/A. However, in the MCU the voltage read is 2331 mV, which is higher than the expected 2286 mV. What is the problem? What might be the solution? Hall effect sensor datasheet AI: We solved the problem. After trials with different loads (1A, 2A, 3A and so on), we realised that current sensor sensitivity is 36mV/A. We think that it might be because of the op-amp gains after the current sensor. Thank you all for your efforts.
H: How to detect 3-way slide switch positions? With one of these 3-position 8-pin slide switches, I want to detect all three positions: Required function: Position 1: Node1 to ground Position 2: None Position 3: Node2 to ground If I pull a digital I/O pin HIGH and connect it after SW2, HIGH means position 1 and LOW means position 2: simulate this circuit – Schematic created using CircuitLab I can't connect another digital I/O pin to any of the nodes (I have spare I/O pins, but the node voltages are higher than 5V; even if I use a voltage divider, I can't get an ideal HIGH from them because their voltages will vary) and the nodes shouldn't be connected to each other. How can I detect all three positions? AI: I think you're misunderstanding how the switch operates. simulate this circuit – Schematic created using CircuitLab Figure 1. Each pole of the switch consists of a common wiper contact, 'C', which can touch one of the other contacts, 1, 2 or 3, at a time. The switch is shown in position 1. I think your ON-OFF-ON requirement can be satisfied as shown on the right using only one half of the switch. After details of switch added: simulate this circuit Figure 2. Does this arrangement using one pole of the switch satisfy your requirements? Table 1. Truth table assuming pull-up resistors on the GPIOs: position 1 → pin 1 Low, pin 2 High; position 2 → both High; position 3 → pin 1 High, pin 2 Low.
H: What happens when this integrator circuit reaches saturation? I'm trying to understand integrator circuit timing, specifically at the point of op amp saturation. Here's my circuit: The op amp is dual supply +/-12V. Vin is a square wave varying from -5V to +5V with a 50% duty cycle and period of 40ms - slow enough so that Vout will reach saturation. Here's the simulation output: Here's my understanding: At 6ms the op amp reaches negative saturation voltage and can no longer maintain 0V (virtual ground) at its negative input. V(R3) is still 5V so current flows into C1 (it can't flow into the "infinite" impedance op amp input). As charge collects at C1 its voltage increases meaning less current flows through R3. What is the formula for calculating the time taken for V(R3) to drop to zero? Is it just 5RC (which is 5ms) and tallies with the simulation or do I have to take into account the fact that there's charge already accumulated on C1 plates due negative voltage at Vout? Additionally, what happens when Vin changes to -5V (at 21ms). What's the formula for calculating the time taken for V- to reach zero? AI: What is the formula for calculating the time taken for V(R3) to drop to zero? Is it just 5RC (which is 5ms) and tallies with the simulation or do I have to take into account the fact that there's charge already accumulated on C1 plates due negative voltage at Vout? It's an exponential decay function, so it never really reaches zero. You just have to decide how close is "close enough". 5×RC is the time that it takes to be within 1% of its final value. 3×RC gets you within 5%. The initial charge has no appreciable effect on this. The general formula is this: The time it takes for the exponential decay curve to be within 1/N (we're talking about a fraction of the difference between the initial and final values) of the final value is $$t = \ln(N)\cdot\tau$$ where tau (\$\tau\$) is the "time constant", equal to RC in this case. So, ln(20) = 3 and ln(100) = 4.6, which we generally round up to 5. Additionally, what happens when Vin changes to -5V (at 21ms). What's the formula for calculating the time taken for V- to reach zero? This is just the first part of the exponential decay, where the value reaches the point halfway between its initial and final values. As anyone who has used a 555 timer knows, this is 0.693×RC. (Because ln(2) = 0.693147)
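A small numeric sketch of those two timings, taking \$\tau = RC = 1\,\text{ms}\$ (implied by the 5 ms figure above):

```python
import math

tau = 1e-3                  # RC time constant, from 5*RC = 5 ms
print(math.log(100) * tau)  # ~4.6 ms to settle within 1%, usually rounded up to 5*RC
print(math.log(20) * tau)   # ~3.0 ms to settle within 5%
print(math.log(2) * tau)    # ~0.69 ms for V- to get halfway, i.e. back to 0 V
```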
H: Calculate motor current consumption I have the following problem: MY ATTEMPT Okay on this site, https://www.jcalc.net/motor-current-calculator, I found a formula for the motor current with full load and a 3-phase supply: \$I=\frac{P\cdot 1000}{\sqrt{3}\cdot V\cdot pf \cdot \eta} \$, where \$ P\$ is the motor power rating, \$ V\$ is voltage, \$pf \$ is the power factor, \$\eta \$ is the efficiency. Plugging my values into the equation we get: \$I=\frac{P\cdot 1000}{\sqrt{3}\cdot V\cdot pf \cdot \eta} =\frac{12\text{kW} \cdot 1000}{\sqrt{3} \cdot 220 \text{V} \cdot 0.89 \cdot 0.87}=40.67 \text{A}\$ This seems like a VERY large current and it made me doubt my answer. Have I used the correct formula, and used the information correctly - at the moment I don't use the fact that there is a slip of \$5 \% \$. I hope someone can help me with this. AI: Your calculation is correct. Your use of line-to-line voltage and division by the square root of 3 takes care of the per-phase aspect of the problem. You can probably find motors that size that have a bit higher efficiency and a bit better power factor, but that current is about typical for a 3-phase motor that size. The slip is not needed, but it is a hint that this motor is not as efficient as you might expect. Typical slip for a 3-phase motor is about 2% to 3%. Slip is directly proportional to power lost in the rotor resistance. Another hint about efficiency and power factor is the number of motor poles. 4-pole motors generally have the highest efficiency and power factor. Motors with more than 6 poles generally have a significantly lower power factor.
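For completeness, the same calculation as a minimal script:

```python
import math

def motor_line_current_a(p_kw, v_line_line, pf, eff):
    """Full-load line current of a three-phase motor."""
    return p_kw * 1000.0 / (math.sqrt(3) * v_line_line * pf * eff)

print(round(motor_line_current_a(12.0, 220.0, 0.89, 0.87), 2))   # ~40.67 A
```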
H: DMA Continuous Request functionality What is the difference between these STM32G0 ADC features: "Scan Conversion mode", "Continuous conversion mode" and "DMA continuous request"? When you go into CubeIDE you will see these features in the ADC configuration. AI: I haven't checked the output of CubeIDE if you select that option, but I think they are referring to the feature which is described on page 325 of the reference manual (Section 14.5.5, Managing converted data using the DMA). Basically the ADC can run in two different modes when coupled to the DMA. It can run in a one-shot mode, which will stop the ADC as soon as all transfers which were configured in the DMA are finished. So if you set the DMA transfer count to 10, the ADC will stop after 10 samples, during the 11th conversion. DMA one shot mode (DMACFG=0) In this mode, the ADC generates a DMA transfer request each time a new conversion data word is available and stops generating DMA requests once the DMA has reached the last DMA transfer (when a DMA_EOT interrupt occurs, see Section 9: Direct memory access controller (DMA) on page 241) even if a conversion has been started again. The second mode is the circular mode. In this mode the ADC will continuously generate DMA requests even if the last DMA transfer is done. This is because you can set up the DMA to work in a circular way, resetting to the first position after the last transfer and starting again automatically. This generates a ring buffer of continuously updating ADC samples. DMA circular mode (DMACFG=1) In this mode, the ADC generates a DMA transfer request each time a new conversion data word is available in the data register, even if the DMA has reached the last DMA transfer. This allows the DMA to be configured in circular mode to handle a continuous analog input data stream. The corresponding DMACFG bit is located in the ADC_CFGR1 register.
H: Do I need a diode to protect my ESP32 from a connected relay? My ESP32 is controlling a relay which I switch on for 500ms (controlled through the internet). Do I need a diode to protect my ESP32 from voltage spikes when the magnetic flux collapses? I'm using an ESP-WROOM-32, and the relay specs are (taken from here): Small and light weight.Specifications:- Contact form: SPST normally open- Coil Resistance: 1050ohms- Nom Volts: 12 - Pickup Voltage: 8.4V DC- Rated Current: 11.4mA - Allowable Voltage: 160%- Dropout Voltage: 1.2V DC - Carry Current = 500mA.- Power Consumption (W): 0.1 - Contact Rating: 1A- 12 VOLT I've been using it without any problems for quite a few days, but I'm wondering if there's a chance it could be damaged randomly, or if there's something in the ESP32 pins that protects it from things like this. Here's a circuit diagram of my setup: AI: Basically, yes you need to protect any switching device from the inductive kickback. Yes, the IO pins have internal protection diodes, but they are meant for ESD protection, they are not meant to be used as part of normal operation, that would be considered bad design. On most chips, the amount of injected current would exceed the safe limits. So the problem is, you should not be driving the relay power directly from the IO pin to begin with, but with a transistor. As per the relay specification, the voltage of the IO pin is not even guaranteed to operate the relay. Even if it seems to work now, it may not be reliable, especially now that the internal ESD protection diode has already had extra burden on it for which it is not designed for.
H: How to obtain R1 from the MAX31740ATA+T equation? How do I obtain R1 from the MAX31740ATA+T equation? The equation: AI: You need to choose the Dmin that is appropriate for your system. After that, it's just algebra. Rearranging the equation gives R1 = ((200*R2)/Dmin) - R2. The datasheet doesn't give any information on input leakage current for the Dmin pin and the website doesn't have reference designs, so the order of magnitude for the resistors is hard to know. Tens of kΩ or hundreds of kΩ might be acceptable. The best answer is to contact a Maxim FAE.
H: Microcap BJT amplifier DC and AC analysis I am using Microcap to simulate the DC and AC behaviour of a 2N3904 amplifier but it does not match my theoretical calculations. My simple circuit is in the attachment. Normally IB should be 28.12 µA but it appears to be 14.9 µA. Also, for the AC analysis, Vout/Vin should be about -268 but according to the simulation it is about -180. Could you please help me solve this problem? AI: In the Analysis Limits dialog, enter "100,0,20" in the "Temperature" box. This will step temperature from 0°C to 100°C in 20°C steps. This changes the transistor Vbe a bit: Q1 bias current is (Vb-Vbe)/R1 so it depends on Vbe. Q1 transconductance depends on bias current. Since there is no emitter resistor in AC, the AC gain of this amplifier is completely determined by Q1's transconductance (the inverse of its emitter resistance). So stepping temperature acts a bit like picking a different transistor: each transistor will have a slightly different Vbe. I could also step Q1 hFE but this would have no effect here as the signal source impedance is zero ohms. If the signal source were high impedance, then Q1 base current, and thus its hFE, would also matter... so gain would vary according to the dispersion of transistor parameters. So, the gain of this circuit depends on transistor parameters, which means it won't be accurate at all. This is not the circuit to use if you want accurate gain. You could add resistor R6 to reduce transistor-dependent gain variation (and lower the gain). Note capacitor C1's ESR also matters since it is in series. Another option is to use feedback. C1's ESR is set to 1 ohm. With this circuit, gain is about 25, and it depends much less on transistor parameters. Basically you can't get accurate gain above 10-30 with a single transistor. You need multiple stages. You can get inaccurate high gain though, which is useful for maximum open-loop gain if feedback is used.
H: Is it possible to read calculator LCD values? Hi all, I'm thinking about a project, but I have no idea if it's even possible... So the general idea would be to have a calculator that can display letters and other characters. I have a very basic calculator (Sharp Elsi Mate EL-376S) with a 7-segment LCD. Is it possible to read the values from the calculator's chip and write them to another LCD? (both read and write with an arduino or any micro controller) The "old" display would be detached. Letter/Character mode could be changeable by pressing a key on the calculator. Calculator PCB: (Solar cell power on the left(wires detached), battery power on the right) Please share any thoughts, tutorials, concepts I should read/know! Thanks, in advance! AI: It's possible to do, though painful and tedious. The display is "bare glass" and no doubt a multiplexed type. There are 28 conductors, including probably 4 commons for a total of 88 possible segments total (including decimal points, indicators etc). The LCD will use a complex stepped waveform (probably 4 possible levels) on the commons and segment lines to determine whether a given segment is on or off. You would have to capture the levels on each line and determine whether the segments were supposed to be on or off, then relay that information. For example, you could trigger off of the edges in one of the commons and then sample all the other lines using an ADC. See application notes for "multiplexed LCD" display for more information on the waveforms to expect. You would need an oscilloscope or this would be too challenging, I think. Edit: for example: http://ww1.microchip.com/downloads/en/appnotes/01428a.pdf
H: Which solution is right for this question? I think both of the solutions make sense There is a multiple choice question: $$V_{G1}=10\cos(10t+60°)$$ and $$V_{G2}=5\cos(10t)$$ The resistance of the wire is \$R=10Ω\$; which one is the power that \$G_1\$ transmits? A. 1.25W B. 1.5W C. 2.5W D. 3.25W The answer given for this question is (C) 2.5W. Solution provided by the book: $$V_{G1}=10\cos(10t+60°)=10∠60°$$ So the \$V_{rms}\$ of \$V_{G1}\$ is \$V^{(1)}_{rms}=\frac{10∠60°}{\sqrt{2}}=5\sqrt{2}∠60°\text{ V}\$ $$V_{G2}=5\cos(10t)=5∠0°$$ So the \$V_{rms}\$ of \$V_{G2}\$ is \$V^{(2)}_{rms}=\frac{5∠0°}{\sqrt{2}}=2.5\sqrt{2}∠0°\text{ V}\$ Now \$S_{12}=V^{(1)}_{rms}I^*=V^{(1)}_{rms}\left(\frac{V^{(1)}_{rms}-V^{(2)}_{rms}}{R∠0°}\right)^*=5∠0°-2.5∠60°\$, so \$P_{12}=5\cos 0°-2.5\cos 60°=3.75\text{ W}\$. But there is no option for \$P_{12}=3.75\text{ W}\$, so this question has a little mistake. Now my classmate said no, there is no mistake in this question; the book provided the wrong solution, which is why the author thinks the question has a little mistake. My classmate's solution is as below: My classmate's solution: \$I=\frac{10-5}{10}=0.5\$, so \$P=I^2R=(0.5)^2 \times 10 =2.5\text{ W}\$, so the answer for this question should be (C) 2.5W. The author of this book didn't design this question, and I want to ask which solution is right, because I think both of the solutions make sense. What I don't know is which one is correct for this question. If the answer is 2.5W, what should we modify in the question to make the answer become \$P_{12}=3.75\text{ W}\$? My friend taught me how to calculate this question, so my solution is the same as my classmate's, but not the same as the solution from the book. Both solutions can convince me that they are right; however, there is only one true answer, and that is why I ask here. AI: Unless you know for a fact that you can ignore the phase relationship when the frequencies are the same, do not use a method that assumes that. I could not quite tell whether it was true or not by visualizing how the two voltage sine waves overlap in my head, but writing out the phasor math makes it pretty clear that you cannot. The author's answer is right. The answer list is wrong. I was too lazy to write out my own math but read the author's answer and checked in a simulator: https://www.falstad.com/circuit/circuitjs.html?cct=$+1+0.000005+135.0639324902207+50+5+50%0Av+192+544+192+416+0+1+10+10+0+1.0471975511965976+0.5%0Av+624+544+624+416+0+1+10+5+0+0+0.5%0Ar+192+416+624+416+0+10%0Aw+192+544+624+544+0%0Ao+2+64+7+12289+0.0001+0.0001+0+1+7.500000000000051%0A There are multiple ways to get this answer and most of them do not need to assume anything about the phase. The first is the author's method using complex power calculations. The second way is to do a circuit analysis with phasors (i.e. frequency-domain circuit analysis where voltage sources appear as phasors \$V_{pk}\angle\theta = V_{pk}e^{j\theta}\$, inductors appear as \$j\omega L\$ and capacitors appear as \$\frac{1}{j\omega C}\$). The third way is to just do a time-domain circuit analysis, which is easy since you just have a resistor and no capacitors or inductors. You would work out the time-dependent equation for the current along with the time-dependent equations for both voltage sources: Instantaneous power: \$ p(t) = v(t) \times i(t)\$ Energy over one period = \$ \int_0^T{p(t)}dt\$ Average Real Power = \$\frac{Energy}{T}\$
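A quick numeric check of the book's phasor result, using complex arithmetic with RMS phasors:

```python
import cmath, math

V1 = cmath.rect(10.0 / math.sqrt(2), math.radians(60))  # RMS phasor of G1
V2 = cmath.rect(5.0 / math.sqrt(2), 0.0)                # RMS phasor of G2
R = 10.0

I = (V1 - V2) / R               # current delivered by G1 through the resistor
S1 = V1 * I.conjugate()         # complex power supplied by G1
print(round(S1.real, 2))        # 3.75 W, matching the book's worked solution
```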
H: How can I find the time at which the diode charges the capacitor every cycle (half-wave rectifier)? A half-wave rectifier is given with the values: V_p = 10V, V_d = 0.7V, 1/T = 60Hz, R = 10. For C = 0.1F, how can I find the time during which the diode charges the capacitor in each cycle? I failed to approach this problem because I'm not sure which parameter I should solve for. My understanding was that the capacitor charging time is T, but I don't think T = 1/60 is the answer. What I have tried so far: $$i_{DMAX} = \frac{V_p-V_d}{R} + C\cdot V_p\cdot \frac{2\pi}{T}\sqrt{\frac{2V_R}{V_p}} = 65.02A$$ $$ dv/dt = V_P\cdot \omega \cos(\omega t)$$ ---redo--- Now I try to find the charging time \$ \Delta T\$ from $$v_0(T-\Delta T) = V_p-V_d-V_r$$ and $$V_r =\frac{(V_p-V_d)T}{RC}$$ $$v_0(T-\Delta T) = V_p\cos\left(\frac{2\pi}{T}\Delta T\right)-V_D$$ AI: [Wrong answer] The capacitor never charges completely; we often calculate the time at which the capacitor will charge to 63% of Vs. First, take the charging formula: \begin{eqnarray*} V(t)=V1(1- e^{\frac{-t}{RC}}) \end{eqnarray*} simulate this circuit – Schematic created using CircuitLab \begin{equation} V(RC)=V1(1- e^{-1})= 0.63\times V1\\ \end{equation} So now you have your time constant in seconds, which is RC. You still must verify a condition on RC: it should be smaller than the period of your signal
H: Precautions while handling DC Series and Shunt motors I have been searching the net and have only found the following precaution for DC series and shunt motors: never start a series motor without load, and never start a shunt motor on load. Are there any more precautions? AI: A series motor may reach a speed that is too high if it is started without a load. A shunt or separately excited motor can reach an excessive speed if the field becomes disconnected or reduced to a current that is too low. There may be some difficulty that makes a DC motor undesirable for use with centrifugal pumps or fans. Prior to the availability of electronic speed control, wound-rotor induction motors with rotor resistance speed control were somewhat preferred for those uses. High-slip induction motors with primary voltage speed control were also used. DC motors rated at more than 50 to 100 watts or so need a starting method that limits the starting current. Prior to the availability of electronic speed control, resistors were connected in series with the armature for that purpose. Later, such motors were rarely used without electronic speed control. For variable speed use, DC motors may need a separately-powered blower for cooling, depending on the low-speed duty cycle.
H: Under-clocking a 10Gb Ethernet SFP+ module The Xilinx device on the Picozed board is a version of the Zynq 7030. The 7030 in general has a GTX transceiver, which in principle is rated to 12.5Gb/s SERDES with the relevant line coding for 10gig Ethernet. Unfortunately, the specific chip on the Picozed has its GTX transceiver down-rated to a reduced 6.6Gb/s. Is it possible to use a reduced line rate with off-the-shelf 10gig Ethernet networking hardware (modules and switches)? Without knowing enough about the precise tech, at first glance I can't see a problem in principle with the TX side, since it would result in a reduced implicit clock frequency. On the RX side it seems more problematic: the other end would need to either realise or be told to transmit at a reduced rate. Is this something that can be made to work? AI: A lot of the simpler SFP+ transceivers (direct attach copper cable, short-reach optical fiber) are really little more than analog devices and wouldn't care about the clock (fun fact: friends of mine use SFP optical transmitters to simply distribute a 10 MHz clock, and it works). None of the stock networking hardware would be prepared for a 50% rate mismatch, and I doubt their PLLs would lock reliably. So, while point-to-point between devices using relatively dumb SFP+ transceivers might work with arbitrary clocks, I'd not expect functionality with stock networking hardware. It's more likely that you can convince your Zynq's transmitter to work at the full rate than that... I don't know whether that's viable on your board, but maybe you could use the PCIe endpoints to have 2 or 4 lanes to a 10 GigE network card.
H: Why is the switch drawn like this (NRST, STM32)? The below part of a circuit is from the AN2586 Reference design for an STM32F1: https://www.st.com/resource/en/application_note/cd00164185-getting-started-with-stm32f10xxx-hardware-development-stmicroelectronics.pdf RESET goes to the NRST pin of an STM32, and the left clipped line towards a JTAG connector. B1 is a button, but what is the advantage of having the lines near 3 and 4? What is the difference with a simple 2 pin button? AI: Many pcb mount push buttons are 4 pin, mostly for mechanical stability (as far as I'm aware). They are often internally connected like this (i.e. the pins on each of the two sides are connected together). Since you have the pins anyway, you might as well connect them together externally as well just in case you want to use a different switch where they aren't connected together internally.
H: Raising ground level of Wemos ADC from Multiplexer The Setup I'm using a 74HC4051N multiplexer to interface with several analog sensors. I'm using a Wemos D1 Mini to read Z (output from the mux) using the Wemos ADC. I've only illustrated one (of 4) TMP36 temperature sensors. I'm also monitoring the 12 V line from the battery. I use a 1.6 M ohm resistor to bring the 12 V down to 3.3 V at A0 in conjunction with the internal potential divider of the Wemos. The Problem If I disconnect all sensors (y0-y7) and iterate through all mux addresses, reading off values I get something like the following: Address 0: Val 12 Address 1: Val 8 Address 2: Val 9 Address 3: Val 13 Address 4: Val 12 Address 5: Val 15 Address 6: Val 10 Address 7: Val 8 The low values are expected, the Wemos ADC is a bit noisy. If I now connect the 12 V sense line only to say y1, the readings look like this: Address 0: Val 476 Address 1: Val 802 Address 2: Val 436 Address 3: Val 410 Address 4: Val 469 Address 5: Val 398 Address 6: Val 401 Address 7: Val 451 The 12 V reading (value of 802) on address 1 looks to be correct. The Mux chip appears to be fine as using the TMP36 sensors on their own I see the expected output: Address 0: Val 240 Address 1: Val 920 Address 2: Val 786 Address 3: Val 870 Address 4: Val 12 Address 5: Val 15 Address 6: Val 10 Address 7: Val 8 What am I missing? Why does it appear to raise the ground level of the ADC? AI: When anything other than address 1 is selected Y1 will attempt to go to +12V. This will exceed the input rating of the 74HC4051 and almost anything can happen. Most analog multiplexers do not allow any input voltage to go outside the supply rail limits or current can leak through unintended paths within the device. It will work correctly when Y1 is selected because the voltage is within range because of the load of the resistors in the D1. It looks like the 12V is leaking to all the other inputs under the fault condition. If you want good accuracy I would not recommend relying on the resistor values of the D1 Mini. I would instead use a relatively low-value resistor potential divider to bring down the voltage to one acceptable to the D1. The resistor values in the potential divider would need to be much lower than the values in the D1. Say 10K ohms. If you just want to avoid the fault you could use a diode clamp to +5V to avoid the overvoltage condition.
H: Solar panel to battery, dc to dc converter question I need to design an MPPT and the converter that uses it. the thing is: the Solar panel has a max voltage of 40V and the battery that is going to be charged is around 100V, the most easy thing would be to use a Boost converter, but from what i understand, an MPPT changes the duty cycle of the dc-dc converter according to de current and voltage of the panel, so it couldn't produce a stable output voltage to charge the battery, my idea would be to use a buck converter for the MPPT and boost the output of the buck with a boost converter, each converter with independent duty cycles, is this a bad idea?, why?, what would be a better solution? and if it's the correct route, should i put a capacitor between the 2 converters as an energy reservoir? AI: the Solar panel has a max voltage of 40V and the battery that is going to be charged is around 100V, the most easy thing would be to use a Boost converter Correct but from what i understand, an MPPT changes the duty cycle of the dc-dc converter according to de current and voltage of the panel, so it couldn't produce a stable output voltage to charge the battery To charge a battery you don't need a voltage source, you need a voltage-limited current source. Current should be set to the battery's specified charging current. Max voltage to the final charging voltage. This gives you a nice CC-CV characteristic. is this a bad idea?, why? Yes, too complicated, inefficient, expensive. Why use two DC-DCs when you need one? what would be a better solution? Use one boost DC-DC acting as a current source, controlling output current. Look at boost LED drivers for inspiration. Or you can use a boost current sink, controlling input current. Pick a FET and inductor that will not smoke if the FET is ON continuously and shorts the solar panel. Also use the minimum input capacitance, so it does not store enough energy to explode the FET if it is continuously ON and shorts the cap. This will make debugging cheaper. In this boost converter, output voltage is fixed: it is the battery voltage. The converter can't change that, so first thing you have to do is forget that the duty cycle acts on the output voltage. Here duty cycle controls current, not voltage. Higher duty cycle (FET ON longer) increases input current. Lower duty cycle decreases input current. If output voltage is kept constant by the battery, this translates to output current. So it's quite simple. Measure output voltage to determine if battery is charged or still needs charging. If it is charged, set PWM to zero and do nothing. If it needs charging, you can be subtle and calculate the charging current required depending on its state of charge, using a lower current when it is almost full... or you can be not subtle and set current to maximum then stop when output voltage reaches the "charged" target. In both cases, you have a "desired output current" setpoint. Do a bit of math to translate this into a "desired input current" setpoint. Continuously measure input current. Adjust PWM until the desired input current is reached. Continuously measure input voltage. Multiply with current, you get power. Wiggle the PWM value around its set point while measuring input power. Adjust PWM in the direction that increases power, while not going over the input current set point, because that would exceed the desired battery charging current. And... repeat. 
Personally I'd use a hardware comparator to stop the PWM immediately if the output voltage goes over a certain limit (because the battery disconnected) and also perhaps if the FET current goes over a limit (but that entails measuring the FET current, which means a resistor which will reduce efficiency). This will make debugging cheaper.
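For what it's worth, here is a minimal perturb-and-observe sketch of the control loop described above, written for an Arduino-class controller. Every pin number, scale factor and limit below is a placeholder assumption, not a value for any particular panel, converter or board, and it is no substitute for the hardware over-voltage/over-current protection mentioned above:

```cpp
// Hypothetical pin assignments and scale factors -- adjust for real hardware.
const int   PWM_PIN     = 9;
const float V_BATT_FULL = 100.0;  // stop charging above this output voltage
const float I_IN_MAX    = 8.0;    // input-current ceiling (sets the charge current)

int   pwm       = 0;              // 0..255 duty
int   direction = +1;             // perturbation direction
float lastPower = 0.0;

float readPanelVolts()   { return analogRead(A0) * (50.0 / 1023.0); }   // assumed divider scaling
float readPanelCurrent() { return analogRead(A1) * (10.0 / 1023.0); }   // assumed shunt-amp scaling
float readBattVolts()    { return analogRead(A2) * (120.0 / 1023.0); }  // assumed divider scaling

void setup() { pinMode(PWM_PIN, OUTPUT); }

void loop() {
  float vIn  = readPanelVolts();
  float iIn  = readPanelCurrent();
  float vOut = readBattVolts();

  if (vOut >= V_BATT_FULL) {        // battery charged (or disconnected): stop switching
    pwm = 0;
  } else if (iIn > I_IN_MAX) {      // over the charge-current ceiling: back off
    pwm -= 2;
  } else {                          // perturb & observe: climb the panel's power curve
    float power = vIn * iIn;
    if (power < lastPower) direction = -direction;
    lastPower = power;
    pwm += direction;
  }

  pwm = constrain(pwm, 0, 240);     // never allow 100% duty on a boost converter
  analogWrite(PWM_PIN, pwm);
  delay(20);                        // let the converter settle between perturbations
}
```

The 240 cap on the duty cycle is there because a boost converter with its switch held permanently on simply shorts the panel through the inductor.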
H: How does the formula for cross sectional area accurately determine which gauge wire should be used without including amperage or voltage? p = resistivity L = length R = resistance A = cross sectional area A = p * (L/R) What I don't understand is how it's physically possible to determine the appropriate gauge wire without knowing the volts OR amperage. I learned this formula in a book for apprentices of electrical work and in the book this formula was used in a couple examples to solve for cross sectional area and from that determine which gauge wire is appropriate. This may be an odd question which I'm sure you guys get plenty of sense you're working in a field of magic but to clarify my conundrum: right now the only way I can make sense of this formula is to assume that the volts you are working with is an assumed variable. Also I'm an apprentice for electrical work so I think a big source of my confusion comes from the fact that the which gauge wire I run in a home depends on amperage and amperage solely. What am I not getting? AI: This formula gives resistance per unit length (for example ohms per meter) which does not depend on volts or amps. A wire has the same resistance whether current flows in it or not. which gauge wire I run in a home depends on amperage and amperage solely That's a different thing. When current I flows in a wire of resistance R, power R*I^2 is dissipated as heat. Minimum wire gauge for construction/installation work doesn't depend on the wire, it is determined by the insulation melting point temperature. For a given current, if the wire is too thin its resistance will be too high, so at the intended current it will heat too much and the insulation will melt, which will cause shorts and maybe fire. So this means the minimum wire gauge for a specific current does not depend on the wire itself, rather it depends on the insulation material temperature characteristics and how the wire is cooled. For example a non-insulated wire in good airflow may carry a lot more current than a PVC insulated wire inside a conduit. Enameled wire (as used in transformers and motors) withstands high temperatures. PVC-insulated cables and wires used in construction don't tolerate high temperatures, and they are installed in conduits without airflow to cool them, so the building codes take this into account and say "this wire gauge will not start a fire at XX amps." If the cable is long enough, its resistance can matter a lot regarding losses. In this case you may want to use a thicker cable if it's worth it. This is unrelated to the previous paragraph, as resistive losses over the entire cable matter for efficiency purposes, but what matters for safety (ie, melting insulation) are losses per unit length of cable.
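To put numbers on the heating argument (figures are typical textbook values, so treat them as illustrative): copper has \$\rho \approx 1.7\times10^{-8}\ \Omega\cdot\text{m}\$ and 12 AWG wire has \$A \approx 3.3\ \text{mm}^2\$, so $$R = \rho\frac{L}{A} \approx \frac{1.7\times10^{-8}\times 1}{3.3\times10^{-6}} \approx 5\ \text{m}\Omega\ \text{per metre}$$ Carrying 20 A, each metre of that wire dissipates about \$P = I^2R \approx 20^2\times0.005 = 2\ \text{W}\$, while at 5 A it dissipates only about 0.13 W. The formula itself contains neither volts nor amps, but the amperage decides how much heat the resulting resistance produces — which is why the code tables are written in terms of current.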
H: How to estimate a linear power supply efficiency in LTspice? I have a little transformer that I would like to turn into a 5V power supply. Before building it I decided to model the device in LTspice: The inductances of the primary and secondary windings were measured using an LCR-meter. The 20 Vpp wave on the output in LTspice agrees with what I see on the oscilloscope. The linear voltage regulator is not shown. Since V(OUT) is 5.6-8.5V I would like to use the L4941 LDO, which has a maximum 0.6V drop. But I couldn't find a model for it. What bothers me is that the calculated efficiency is about 8%: pin: AVG(abs(v(in,ingnd)*i(v1)))=5.46445 FROM 0 TO 500 pout: AVG(5*i(i1))=0.45 FROM 0 TO 500 eff: pout/pin=0.0823504 Which brings up a question: is such low efficiency expected for such a simple power supply? Maybe I'm calculating it the wrong way? Or is there something wrong with the design itself? I've uploaded the model here. UPD: After replacing the diodes with 1N5408 and C1 with a 1200 uF capacitor I could increase the current through D1-D4 to 3A and the load current to 500 mA. This gives an efficiency of about 35.6%. Still it doesn't look right. AI: You are basically facing a power factor problem when you are measuring the input power. Since there is some lag between the current and voltage, the measured input power (apparent power) is not exactly the power which is being delivered to the rest of the circuit. This happens when you are driving an inductive load (transformer) and/or a capacitive load (filter capacitor) with an AC source. In order to evaluate the effective power being drawn from the source, you would have to measure the real power: $$P_{REAL} = PF \cdot P_{APPARENT}$$ where: \$PF\$ is the power factor, defined as \$PF=\cos \theta \cdot \dfrac{1}{\sqrt{1+THD^2}}\$ \$\theta\$ is the phase difference between the supplied voltage and current. \$THD\$ is the total harmonic distortion of the current waveform (it would be zero if the current were a pure sinusoid). I could think of two approaches you could use: run the simulation, and measure the phase difference of the waveforms. Run an FFT and measure the THD for the frequency of interest. Calculate the power factor and then work out the effective input power. Parametrize everything and try to measure the phase difference between two signals (a bit tricky, I must say, to determine where to select the phase). I tried the following simulation, and apparently it works fine, but you would need some fine tuning for your final application. As for the THD, it can be obtained in a first simulation through the command .four 60 I(v1) For this simulation, I obtained an efficiency of \$4.7\%\$ when using the apparent power (wrong) and an efficiency of \$62.9\%\$ when using the real power, which sounds much more realistic.
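An alternative cross-check that avoids estimating the power factor at all (a sketch, assuming you export simultaneous samples of input voltage and current spanning a whole number of mains cycles): the average of the signed product \$v(t)\cdot i(t)\$ is the real input power directly, while \$V_{rms}\cdot I_{rms}\$ is the apparent power. Note that the abs() in the original measurement rectifies the instantaneous power, so it over-reports the real input power whenever \$v\cdot i\$ goes negative during the cycle (which it does here, given the largely reactive magnetizing current).

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Power { double real, apparent; };

// v and i are simultaneous samples spanning a whole number of cycles.
Power inputPower(const std::vector<double>& v, const std::vector<double>& i) {
    double pAvg = 0.0, vSq = 0.0, iSq = 0.0;
    const std::size_t n = v.size();
    for (std::size_t k = 0; k < n; ++k) {
        pAvg += v[k] * i[k];          // signed instantaneous power
        vSq  += v[k] * v[k];
        iSq  += i[k] * i[k];
    }
    pAvg /= n;
    const double vRms = std::sqrt(vSq / n);
    const double iRms = std::sqrt(iSq / n);
    return { std::fabs(pAvg), vRms * iRms };   // real W, apparent VA
}

int main() {
    // Synthetic check: 230 V RMS, 1 A RMS, current lagging by 60 degrees.
    const double pi = std::acos(-1.0);
    std::vector<double> v, i;
    for (int k = 0; k < 1000; ++k) {
        const double wt = 2.0 * pi * k / 1000.0;
        v.push_back(230.0 * std::sqrt(2.0) * std::sin(wt));
        i.push_back(1.0 * std::sqrt(2.0) * std::sin(wt - pi / 3.0));
    }
    const Power p = inputPower(v, i);
    std::printf("real = %.1f W, apparent = %.1f VA, PF = %.2f\n",
                p.real, p.apparent, p.real / p.apparent);
}
```

The synthetic test prints 115 W real, 230 VA apparent, PF = 0.50, as expected. In LTspice itself the same idea is simply averaging v(in,ingnd)*i(v1) over whole cycles without the abs(), then taking the magnitude of the result.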
H: What does macromodel mean in this context? I was looking for the meaning of IC "macromodel", and I came across the following: They call the above schematic of the op-amp IC its "macromodel". But what is modelled here? They show all the components explicitly, with all nodes. I thought I would find something like a simplified diagram. Is the above schematic correctly called a macromodel? If not, what is an IC macromodel? AI: The 'macromodel' is a SPICE representation of an IC or other complex block. It's a 'macro' as it's a collection of individual models connected together into a larger entity, in this case, an op-amp. The schematic you show is a diagram of that SPICE macromodel.
H: How can I calculate the power dissipation of an IC from datasheet values? I have to make a circuit using ICs on a PCB. But before that, I want to calculate the power dissipation of each IC. I looked at the datasheet of the 7400, and it gives me an input voltage, supply voltage, input current and output current, but I can't work out what is needed for the calculation. How can I calculate the power dissipation? Or is there other information I have to know? AI: Generally, the quiescent power dissipation of an IC is the product of its supply voltage, VCC, and its supply current, ICC; on top of that you add whatever power is dissipated in driving the output loads.
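For a rough worked example with the 7400 (the numbers are typical-datasheet ballpark figures — check your exact part and logic family): with \$I_{CCH}\approx 4\ \text{mA}\$ (all outputs high) and \$I_{CCL}\approx 12\ \text{mA}\$ (all outputs low) at \$V_{CC}=5\ \text{V}\$, $$P \approx V_{CC}\cdot\frac{I_{CCH}+I_{CCL}}{2} \approx 5\times 8\ \text{mA} = 40\ \text{mW}$$ An output sinking 8 mA at \$V_{OL}\approx 0.4\ \text{V}\$ adds only about 3 mW more, so for a simple TTL gate the supply-current term usually dominates unless the outputs drive LEDs or other heavy loads.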
H: Op Amp for DC Voltage Measurement I'd like to measure 0-800VDC using the ADC of a PIC18F and opto-isolated from the high voltage. Since optocoupler's CTR is variable and non-linear, I'll use the IL300. Checking this application note, I've found the following schematic: That's the schematic I've made: From that, I'd like to understand what parameters I have to check to choose the right op amps for this application. Any general purpose rail-to-rail 12V single supply would fit the project? Also, is IL300 the best option for this? AI: The op-amp is non-critical- it just needs to be able to drive a few mA through the LED, and it needs to be "single supply" (input common-mode range includes ground) unless you have bipolar supplies. And reasonable bandwidth. IL300 does not make a particularly good isolation amplifier, the gain mismatch is 0.56 to 1.65 so it definitely requires calibration (your 800V could read as 448V or 1320V or anywhere in between, plus resistor tolerances), and it can shift several percent for temperature changes. You might want to consider buying a commercial isolation amplifier or digitizing the signal and passing the value over the isolation barrier in digital form.
H: What's a conceptual way to understand ohm-cm? Resistivity is defined in units of \$\Omega \times \textrm{cm}\$. I don't conceptually understand what is meant by the unit. If it was \$\Omega / \textrm{cm}\$, that would be easy to understand - a certain number of ohms for every centimeter. How can one understand \$\Omega \times \textrm{cm}\$? AI: Imagine a block of material with a uniform density to it. Something like this: The material also has, let's say, a uniform "resistivity" to it. Now, suppose we cover the entire face pointed at by the arrow, and the face opposite to it that we cannot see, by plating them with silver (which is very conductive.) We then measure the resistance between these two silvered faces on opposite ends using an ohmmeter. There will be some value for that in Ohms. Now, let's consider three modifications: Suppose we doubled the length. Here, since the silvered faces touched by the ohmmeter have the same area as before, but are further apart, we should expect that the resistance we'd measure between the opposite X faces would double. Suppose we doubled the height. Here, since the silvered faces touched by the ohmmeter have doubled in area but are the same distance apart as before, we should expect that the resistance we'd measure between the opposite X faces would be cut in half. Suppose we doubled the width. Here, since the silvered faces touched by the ohmmeter have doubled in area and are the same distance apart as before, we should again expect that the resistance we'd measure between the opposite X faces would be cut in half. So, we postulate the following about the resistance we'd measure: \$R\propto \text{Length}\$ \$R\propto \frac1{\text{Width}}\$ \$R\propto \frac1{\text{Height}}\$ \$\therefore R\propto \frac{\text{Length}}{\text{Width}\:\cdot\:\text{Height}}\$ Now, if we call the length, \$L\$, the width, \$W\$, and the height, \$H\$, and introduce a constant of proportionality, we can say: $$R=\rho \cdot \frac{L}{W\cdot H}$$ Let's now express the above only looking at the SI dimensions: $$\begin{align*}\Omega=\rho \cdot \frac{\text{m}}{\text{m}^2}, &&\therefore \rho=\Omega\cdot\frac{\text{m}^2}{\text{m}}=\Omega\cdot\text{m}\end{align*}$$ Just simple dimensional analysis.
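As a quick numerical capstone (copper's resistivity is about \$1.7\ \mu\Omega\cdot\text{cm}\$): a copper bar 10 cm long with a 1 cm × 1 cm cross-section has $$R=\rho\frac{L}{W\cdot H}=1.7\ \mu\Omega\cdot\text{cm}\times\frac{10\ \text{cm}}{1\ \text{cm}\times 1\ \text{cm}}=17\ \mu\Omega$$ The centimetres cancel exactly, leaving plain ohms — which is the whole reason \$\rho\$ carries the odd-looking unit \$\Omega\cdot\text{cm}\$.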
H: Input Voltage vs Output Voltage for a Shunt Positive Clipper with Bias Voltage Assume voltage drop across diode is 0.7V in this case. From what I understand When V(input) is positive, but less than 0.7+V OR When V(input) is negative AKA When the diode is acting as open circuit: V(output) will equal some fraction of V(input), depending on the value of R and the value of the Load Resistance. For example if R=RL, V(output) will equal 1/2 of V(input). And if R is very small compared to RL, V(output) will be approximately equal to V(input). This makes sense to me intuitively. My confusion arises from the information I've read from varying sources, which all seem to indicate that V(output) will always be equal to V(input), when the diode is acting as an open circuit. When V(input) is positive, and greater than 0.7+V AKA When the diode acts like a jumper wire Vo is equal to 0.7+V. This makes sense because Vo is the voltage parallel to 0.7+V. I tried simulating a circuit like this in falstad, and noticed that Vo will get clipped when Vo>(0.7+V), instead of when Vi>(0.7+V). I also noticed that Vo does not equal Vi when the diode is open circuit, Vo = 1/2Vi (I made the load resistance and the other resistor equal) But every source I've looked to says that Vo will always be equal to Vi AND that Vo will be clipped when Vi tries to go above 0.7+V. My question is, what am I misunderstanding? Is Vo clipped when Vi is greater than 0.7+V or is Vo clipped when Vo is greater than 0.7+V. Is Vi always equal to Vo? The only situation in which I can think of Vi being equal to Vo for a circuit like this, is when R is small compared to RL, as mentioned earlier. AI: It would be helpful to put an explicit RL load resistor on that diagram. All that you say is correct, except for the fact that you're confusing the times when RL is absent, or large, or small. The circuit always clips when Vout > V + 0.7 V. If RL is absent or very very large, then Vout = Vin, so the same eqaution applies to Vin. If RL is significant, you need to take account of its voltage divider action with R, before you write clipping conditions for Vin.
H: Troubles on understanding datasheet of a 24V DC permanent magnet motor Can you help me on understanding this datasheet of a DC Permanent Magnets motor? Note that the deleted parts do not add information, they are those not translated into English or referring to other models. PAGE 1 PAGE 2 Precisely, my doubt is about current, voltage and rpm of this motor. Consider the first table with these specifications. You may see that this motor can work at S1 (continuous service) at 100 W and at S2 for 25 minutes at 140W. The supply dc voltage is the same (24V), while current is 8.4A in S2 and 6A in S1. First question: How can I change the current from 6A to 8.4A or viceversa, to set the power? I have physically this motor, it does not have any switch, potentiometer or other tuning mechanism. The only way I think to reduce power (whose nominal value is 140W) is to lower supply voltage, which is not the method shown in this datasheet. Second question: I do not understand these numbers. Power computation seems not to be correct. 24 V × 8.4 A = 201.6W (and not 140W) 24 V × 6A = 144W (and not 100W) Now, let's go to the last graph. Third question: from this table it seems that I can supply the motor also at 50V, and that it is good to do this since efficiency becomes 1. It seems in contrast with the fact that the nominal voltage has been declared to be 24V in the previous page. Fourth question: Consider for instance 25V of power supply, which from the table near the graph corresponds to 100W, 2500 RPM and 0.5 efficiency. It seems totally in contrast with the initial table of the previous page, since: power was declared to be 140W (S2) or 100W (S1) RPM was declared to be 3000 AI: Precisely, my doubt is about current, voltage and rpm of this motor. Consider the first table with these specifications. You may see that this motor can work at S1 (continuous service) at 100 W and at S2 for 25 minutes at 140W. The supply dc voltage is the same (24V), while current is 8.4A in S2 and 6A in S1. First question: How can I change the current from 6A to 8.4A or viceversa, to set the power? I have physically this motor, it does not have any switch, potentiometer or other tuning mechanism. The only way I think to reduce power (whose nominal value is 140W) is to lower supply voltage, which is not the method shown in this datasheet. You alter the current the motor takes by altering the load on the motor. Neglecting losses, voltage corresponds to speed, and torque corresponds to current. On no load, the motor will draw little current at all. Second question: I do not understand these numbers. Power computation seems not to be correct. 24 V × 8.4 A = 201.6W (and not 140W) 24 V × 6A = 144W (and not 100W) The power is the mechanical output power, not the electrical input power. Now, let's go to the last graph. Third question: from this table it seems that I can supply the motor also at 50V, and that it is good to do this since efficiency becomes 1. It seems in contrast with the fact that the nominal voltage has been declared to be 24V in the previous page. The voltage scale goes up to 50 V, which is for convenience as the graph has 10 divisions. Given that the voltage is constant, they could have simplified the graph, and removed the voltage trace and scale. However, thy probably use standard graph software, which automatically generates all traces. You'll notice that there is a constant trace labelled 'volts' across the graph at V=24. 
Fourth question: Consider for instance 25V of power supply, which from the table near the graph corresponds to 100W, 2500 RPM and 0.5 efficiency. It seems totally in contrast with the initial table of the previous page, since: power was declared to be 140W (S2) or 100W (S1) RPM was declared to be 3000 This graph is for V=24 only. Multi-scale / multi-trace graphs can be a bit tricky to read. The entire graph is plotted against the torque variable on the x axis. You recall that in the first question, we vary the current by varying the torque. This graph is for a motor at constant input voltage (which means almost constant speed) operated at varying torque. Let's find the S1 conditions, where Pout = 140 W. It's on the right side of the graph at a torque of 0.35 Nm. At that torque, we can then read off the other values. The efficiency is about 80%, the current about 6 A, and the speed about 3200 rpm (it's dropped a little from the no load speed due to IR voltage loss).
H: Electrostatic discharge on ungrounded components My understanding of ESD is that there has to exist a path to ground for static charges to discharge. Take a MOSFET as an example of a component that is vulnerable to ESD and packaged in a way to protect against it: as long as the component is free-standing, or none of the leads is connected to ground or a lower potential, I should be safe to hold it with my hand without taking any precautions. Essentially my hand and the transistor form an open circuit, both sharing the same electric potential with no current. But it is a fact that touching sensitive components, even when they are not powered or are just being held on their own, can cause damage through static discharge - and that I just can't understand. As if a plane can safely fly through a densely charged cloud (me holding a sensitive component), until lightning strikes through the fuselage on its way to reach ground (the component has a path to ground). [EDIT] A drawing should help clarify my question. (A) A hand accumulates some static charge (5000V of static charge to ground). Assume the body is not properly grounded and the static charge on the hand has no path to discharge. A transistor has 0V, i.e. no static charge present. (B) A finger comes in contact with the drain of a transistor. The drain is now 5000V above the source and goes beyond the maximum voltage rating. Thus the transistor is damaged. (C) All three terminals of the transistor come in contact with the hand at the same time. The drain, source and gate go from 0V to 5000V together. There is no potential difference between the terminals and the transistor is not damaged by static discharge. AI: The damage happens when you touch the component. Just at that moment a spark can jump between your hand and a leg of the component. If that hasn't happened and the component is inside your hand, it is safe. The next dangerous moment is when you let the component touch something: for example, you hold it between your fingers and one of the legs touches something that you are not touching. If you have neither an ESD-safe workplace nor an ESD-safe package and you must handle a component anyway, at least have a metallic platform for the component. Touch the platform first when you take the component from it, and touch it again before you put the component back down on it. It's not fully safe, because it's far too easy to have a one-second gap between the touches - that's enough for you to generate a new potential difference, for example by moving your feet on a plastic floor. Components are often in ESD-safe bags which are said to be conductive. That can be true for the interior of the bags, but the outer surface can be ordinary plastic. Thus placing a component in an ESD-safe bag is effective only if you touch the interior when you insert or remove the component. BTW, all semiconductor parts and circuit boards can be damaged by ESD. A component can still work after an ESD hit, but its operation can be less reliable; its operating voltage range and temperature range can be reduced substantially. Handling a computer circuit board with poor ESD protection is a surefire way to get a computer which either does not work at all or suddenly stops and must be reset.
H: Why signal coming out of a Constant K filter takes 0.5ms to stabilize? I made a simulation of an audio amplifier with an input signal of 200 microVolts. The question is why my filter does not deliver the desired output, which operates between 512 Hz and 2048 Hz. From the Bode plot, I get a gain of -7dB which I assume it's from the matching impedance from input and output which is later corrected with an amplifying op-amp. With the values from the schematic, running a transient analysis, I get the following data: The larger signal is the input and the smaller one is the output on R24. We can see that at 0.5 ms the signal is not perfect, why does that happen and how can it be corrected? AI: Why signal coming out of a Constant K filter takes 0.5ms to stabilize? and why does that happen It's called the transient response - you can't expect a filter to immediately start producing the steady AC response from an initial application of a sinewave. At the point of initially applying a sinewave, the input signal goes from a steady value of zero to a half sine pulse. The filter does not know in those first few moments that what is being applied to it is a sinewave. It cannot predict the future hence, it does what it does and eventually settle to the steady-state AC response (as all filters do). how can it be corrected? It can't. It is what it is.
H: Synthesis error in module using Verilog (Xilinx Vivado 2015.4) I am facing an issue with the post-synthesis implementation of a special Serial-In-Serial-Out kind of buffer. It receives inputs and stores them in registers and can later output the stored inputs. When iReset is HIGH, the contents of the buffer are cleared. When iValid is HIGH, the values of the registers in the buffer are shifted and buffer[0] receives iResidual. When oEnable is HIGH, the buffer outputs the values of the last register to oResidual (in this case buffer[2]) and the values are shifted accordingly. When oValid is HIGH, it means that oResidual is a valid output. Here is the full code: module ResidualBuffer ( input iClock, input iValid, input iReset, input oEnable, input signed [15:0] iResidual, output signed [15:0] oResidual, output oValid, output signed [15:0] bufferReg0, output signed [15:0] bufferReg1, output signed [15:0] bufferReg2 ); parameter blkSize = 3; reg signed [15:0] buffer [0 : blkSize - 1]; integer i; reg signed [15:0] oResidualReg; assign oResidual = oResidualReg; reg signed [blkSize : 0] oValidReg = -2; // for blkSize = 3, oValidReg = 4'b1110 assign oValid = oValidReg[0]; assign bufferReg0 = buffer[0]; assign bufferReg1 = buffer[1]; assign bufferReg2 = buffer[2]; always @(posedge iClock or posedge iReset) begin if (iReset) begin for (i = 0; i < blkSize; i = i + 1) begin buffer[i] <= 0; end oResidualReg <= 0; oValidReg <= -2; end else if (iValid) begin for (i = 1; i < blkSize; i = i + 1) begin buffer[i] <= buffer[i - 1]; end buffer[0] <= iResidual; oResidualReg <= oResidualReg; oValidReg <= oValidReg; end else if (oEnable) begin oValidReg <= oValidReg>>1; oResidualReg <= buffer[blkSize - 1]; for (i = 1; i < blkSize; i = i + 1) begin buffer[i] <= buffer[i - 1]; end end end endmodule The behavioural simulation is working as expected: The post-synthesis simulation is different and confusing: The problems I am facing are: When iValid is LOW, buffer[0] continues to receive iResidual and the values in the registers are shifted and I don't understand why. When oEnable is HIGH, it is reflected only in the next clock cycle. However this is not the case for iValid as explained in (1) and I don't understand why. ~Edit: For those who want to look at the testbench I used `timescale 1ns / 100ps module ResidualBufferTB; reg iClock; reg iValid; reg iReset; reg oEnable; reg signed [15:0] iResidual; wire signed [15:0] oResidual; wire oValid; wire signed [15:0] bufferReg0; wire signed [15:0] bufferReg1; wire signed [15:0] bufferReg2; ResidualBuffer #(.blkSize(3)) DUT ( .iClock(iClock), .iValid(iValid), .iReset(iReset), .oEnable(oEnable), .iResidual(iResidual), .oResidual(oResidual), .oValid(oValid), .bufferReg0(bufferReg0), .bufferReg1(bufferReg1), .bufferReg2(bufferReg2) ); always begin #0 iClock = 0; #125 iClock = 1; #125 iClock = 0; end initial begin iReset = 1; iValid = 0; oEnable = 0; #125 iReset = 0; #250 iValid = 1; iResidual = 20; #250 iResidual = -123; #250 iResidual = 31; #250 iValid = 0; iResidual = 0; #250 #250 oEnable = 1; #1250 $stop; end endmodule AI: You didn't provide the testbench so I can't be certain but it looks like you are changing the values of the inputs at the same time the clock rises. That will often cause differences between behavioral and post-synthesis simulations. You may be seeing the results of setup time violations in the logic. The behavioral simulation has no delays but the post-synthesis simulation does, so their behavior will be different. 
Try moving the changes in the inputs to be coincident with the falling edge of the clock rather than the rising edge.
H: ΔΣ ADC Programmable Gain Without Amplifier There are some ΔΣ ADCs of which one can set an input referred gain. Like if there was a PGA between the input and the ADC, but in fact, there seems to be no PGA between the ADC and the inputs but instead the gain is set inside of the ΔΣ modulator somehow. For example this one form Analog Devices: https://www.analog.com/media/en/technical-documentation/data-sheets/2480fe.pdf Am I right, that there is no amplifier between the input and the ADC here and the gain is set somehow in the modulator? How is that even possible, setting the gain in the ΔΣ modulator. I am interested in this, because, if I am reading the datasheets correctly, there will be an input current I need to account for even though if I set a gain to the ADC, since there is no amplifier on the input. But, if I am wrong, then the amplifier should provide a high Z input and decreased input current. AI: Don't see how you can do it in the modulator, but it's trivial to do in the decimation filter. And the linked device does have a buil-in decimation filter. If you do it in the decimation filter, does that increase the input referred resolution? I'm not very familiar with the mathematics of it... Because tables 2a and 2b clearly says that buy increasing the gain you increase the resolution so it can't be a simple digital multiplication. But maybe you average more samples and get a higher resolution like that? But than why does the input dynamic range decrease with the higher gain? I think you answered the question yourself in this comment... Digital filters tend to involve word growth : if you multiply two N-bit numbers the product is effectively either 2N or 2N-1 bits. (For signed data, 2N-1 bits is the practical result because multiplying 2 sign bits produces ... 1 sign bit) The decimation filter (following the delta-sigma modulator) is generally an FIR filter. which involves multiplying some large number like 100 samples with 100 coefficients, and summing all the results. If the samples from the modulator and the coefficients are each 16 bits, and you sum 2^7 samples, the result is thus a (31 + 7) = 38 bit quantity. Now you need to extract a 16 bit result from this... Word format (s = sign, d = useful data, n = noise) of the filter output... 37 ... 31 .... 23 ... 15 ... 7 ... 0 ss_sddd_dddd_dddd_dddd_dddd_dddd_ddnn_nnnn_nnnn | low gain setting | Above is a typical filter output for a large amplitude input ss_ssss_ssss_sddd_dddd_dddd_dddd_ddnn_nnnn_nnnn | high gain setting | And for a low amplitude input The "programmable amplifier" is simply a selector, choosing 16 of the (in this case) 38 filter output bits. The gain steps are constrained to be powers of 2. By selecting lower order bits for the higher gain setting, naturally large input signals will overflow (thus dynamic range is reduced) but the last couple of bits contain noise, as shown in Table 2A. This is just an illustrative example : the details of your device will vary but it is highly likely to follow this principle. The lower order bits (7 downto 0 here) will simply never be calculated, to save logic. And there may be saturation logic so that any positive overflow saturates at + full scale instead of wrapping to a spurious -ve output (and vice versa). Reading the data sheet, this device apparently uses a much simpler Sinc4 filter instead of the FIR filter I described above (common in more critical applications like audio) but the principle is similar. 
I have shown the top couple of bits unused : DSM performance usually degrades catastrophically near the top of the input range where the "noise" from noise shaping and the input signal sum to an out-of-range value. Simply ignoring the top bit or two is an easy way to achieve this. Quoting the datasheet for this device: " The proprietary architecture used for the LTC2480 third order modulator resolves this problem and guarantees a predictable stable behavior at input signal levels of up to 150% of full-scale. "
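To make the bit-selection idea concrete, here is a small illustrative sketch — emphatically not the LTC2480's actual internal logic, just the principle described above: a wide filter accumulator is reduced to a 16-bit code by choosing a different 16-bit window per gain setting, with saturation instead of wrap-around on overflow.

```cpp
#include <cstdint>
#include <cstdio>

// Reduce a wide (e.g. 38-bit) decimation-filter accumulator to a signed
// 16-bit code. 'gainStep' 0 selects the highest-order window (gain 1);
// each additional step keeps one more low-order bit, i.e. doubles the gain
// and halves the usable input range. 'lsbAtGain1' is an assumed bit index
// standing in for the device-specific word layout.
int16_t selectOutput(int64_t acc, unsigned gainStep, unsigned lsbAtGain1) {
    const int64_t shifted = acc >> (lsbAtGain1 - gainStep);

    if (shifted > INT16_MAX) return INT16_MAX;   // saturate, don't wrap
    if (shifted < INT16_MIN) return INT16_MIN;
    return static_cast<int16_t>(shifted);
}

int main() {
    const int64_t acc = 1234567;                 // pretend filter output
    for (unsigned g = 0; g <= 3; ++g)
        std::printf("gain x%u -> code %d\n", 1u << g, selectOutput(acc, g, 10));
}
```

Doubling the gain here is literally a one-bit shift of the selection window: nothing is amplified, you just keep a lower-order bit that was previously discarded — and once those bits sit at the noise floor, the extra "resolution" is noise, which is why the tables show the dynamic range shrinking as the gain goes up.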
H: Higher cutoff frequency of op-amp circuit is different in simulation and on paper I have the circuit below: This is an inverting amplifier, the op-amp that is used is 741 as shown in the circuit. Open-loop gain of the op-amp is \$ A_{0}=2\times 10^5 \$ and the cutoff frequency of the op-amp is \$ f_{c}=5 Hz \$. I am going to provide what I have done so far both on paper and using HSpice simulation. Correct me if I'm wrong in each step. Step 1 - Maximum Output Swing I have calculated that if we set the voltage \$ V \$ (which is connected to R1) to be \$ V_{CC} \$ it would provide a DC offset in the output equal to \$ V_{CC}/2 \$ : $$ V_{3}=V\times \frac{10k}{10k+10k}=V/2 $$ $$ V_{2}=V_{out} $$ $$\Longrightarrow V_{out}=(V/2-V_{out})A_{0} $$ $$\Longrightarrow V_{out}=\frac {V/2\times A_{0}}{1+A_{0}} $$ Where \$ A_{0} \$ is the 741's gain which is said to be \$ A_{0}=2\times 10^5 \$. So because \$ A_{0} \$ is very larger than 1, we assume the equation for output voltage becomes: $$ V_{out}\approx \frac {V}{2} $$ Which I believe if \$V\$ is set to be equal to \$V_{CC}\$ , the output can have the maximum symmetrical swing. Step 2 - Frequency Response I will now try to calculate the closed-loop gain of the circuit. So if we want to analyze the circuit in AC we would have \$ V_{3}\approx 0 (V) \$ because the capacitor is short circuit and we assume no current flows through the op-amp's non-inverting pin. So in order to calculate the closed-loop gain: $$ I=\frac {V_{in}-0}{1k} =\frac {0-V_{out}}{10k}\rightarrow \frac {V_{out}}{V_{in}}=-10 = 20 dB$$ Given the cutoff frequency of the op-amp and the closed-loop gain and the "Gain-Bandwith Product" which is: $$ GBWP=2\times 10^5\times 5=10^6 $$ I can now calculate the higher cutoff frequency for the inverting amplifier circuit using the closed-loop gain and knowing that the GBP must be constant for frequencies higher than \$ f_{c} \$ of the op-amp. I am going to call the higher cutoff frequency, \$ f_{-3dB} \$ by definition. Therefore: $$ f_{-3dB}\times (20dB-3dB)=10^6 \Longrightarrow f_{-3dB}=141.242\; KHz $$ Step 3 - Simulation I found this HSPICE model for the 741 amplifier by searching in google and honestly I can't remember where I got it from because I found it like a month ago and just didn't use it until now. So it's basically called a "Subcircuit" model, here is the code: %741 Op-Amp subcircuit model **%Note: There is no connection for the Rp resistor in this SPICE model (i.e., the "offset null" connection). This resistor is used to balance out asymmetries in real op-amps caused by transistor mismatch. Since simulations ignore the random variations between devices that is introduced in manufacturing, there is no need for this resistor in simulations. Just leave it out. *----------------------------------------------------------------------------- * * To use a subcircuit, the name must begin with 'X'. 
For example: * X1 1 2 3 4 5 uA741 * * connections: non-inverting input * | inverting input * | | positive power supply * | | | negative power supply * | | | | output * | | | | | .subckt uA741 1 2 3 4 5 c1 11 12 8.661E-12 c2 6 7 30.00E-12 dc 5 53 dx de 54 5 dx dlp 90 91 dx dln 92 90 dx dp 4 3 dx egnd 99 0 poly(2) (3,0) (4,0) 0 .5 .5 fb 7 99 poly(5) vb vc ve vlp vln 0 10.61E6 -10E6 10E6 10E6 -10E6 ga 6 0 11 12 188.5E-6 gcm 0 6 10 99 5.961E-9 iee 10 4 dc 15.16E-6 hlim 90 0 vlim 1K q1 11 2 13 qx q2 12 1 14 qx r2 6 9 100.0E3 rc1 3 11 5.305E3 rc2 3 12 5.305E3 re1 13 10 1.836E3 re2 14 10 1.836E3 ree 10 99 13.19E6 ro1 8 5 50 ro2 7 99 100 rp 3 4 18.16E3 vb 9 0 dc 0 vc 3 53 dc 1 ve 54 4 dc 1 vlim 7 8 dc 0 vlp 91 0 dc 40 vln 0 92 dc 40 .model dx D(Is=800.0E-18 Rs=1) .model qx NPN(Is=800.0E-18 Bf=93.75) .ends And here is my code for simulating the given circuit, which is used the 741 subcircuit in it: Nothing here **************************************** .option accurate post .inc U741.sp X1 2 3 7 0 6 uA741 R4 6 2 10k R3 2 4 1k C1 5 4 10u R5 3 1 1k C2 1 0 10u R2 1 0 10k R1 8 1 10k Va 8 0 12v Vin 5 0 ac=1 sin 0 50m 1k Vcc 7 0 12v ******************************************* .op .tran 1u 5m start=0 .ac dec 40 0 40x .end And this is the frequency response that I get for the output node: The X-axis is Log10 and Y-axis is dB. As you can see \$ f_{-3dB} \$ is shown to be 85.6 kHz which is a little different from what I calculated (141.242 kHz.) Questions Why does the cutoff frequency differ between the simulation and on paper? Am I calculating it wrong or is there something wrong with the simulation code? How can I calculate the lower cutoff frequency" for this circuit? Because there is a capacitor \$ C_{1} \$ present, does it mean the circuit should have a lower cutoff frequency? Also, how can I calculate the low frequency poles for this circuit? AI: Your method of going from an open-loop bandwidth to a closed-loop bandwidth is wrong. If the unity gain open-loop bandwidth is 1 MHz then for a closed-loop gain of 10 the closed-loop bandwidth will be 100 kHz. In more detail: as the frequency approaches 100 kHz the actual gain starts to drop from 10 to 7.071 (3 dB down) at 100 kHz before merging with the open-loop gain response: - Picture from here. For a closed-loop gain of 11, the 3 dB point in the response will be around 91 kHz. Because there is a capacitor C1 present, does it mean the circuit should have a lower cutoff frequency? Yes, certainly - you now have two stages of filtering and they are multiplicative in that their respective transfer functions can be multiplied together to get the overall transfer function.
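For reference, the usual single-pole estimates for this circuit (taking GBW ≈ 1 MHz, \$R_f=10\ \text{k}\Omega\$, \$R_{in}=1\ \text{k}\Omega\$, and \$R_3=1\ \text{k}\Omega\$, \$C_1=10\ \mu\text{F}\$ from the netlist): the upper corner is set by the noise gain, $$f_{-3\text{dB(high)}}\approx\frac{GBW}{1+R_f/R_{in}}=\frac{1\ \text{MHz}}{11}\approx 91\ \text{kHz}$$ reasonably close to the ≈86 kHz the 741 macromodel reports (the macromodel is not a perfect single-pole device). The lower corner is set mainly by \$C_1\$ working against \$R_3\$, since the inverting node is a virtual ground: $$f_{-3\text{dB(low)}}\approx\frac{1}{2\pi R_3 C_1}\approx\frac{1}{2\pi\cdot 1\ \text{k}\Omega\cdot 10\ \mu\text{F}}\approx 16\ \text{Hz}$$ \$C_2\$ mostly just decouples the bias divider rather than setting the signal corner.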
H: What is an RF switch used for in lab equipment? I'm only familiar with scopes, DMMs and, a little, spectrum analyzers. But I couldn't understand what an "RF switch" is used for, particularly in a characterization lab. Can you give an example of its use and function and describe it in layman's terms? AI: One place RF switches are used is to switch, or multiplex, multiple RF sources into a single piece of test equipment. Here's an example. Most modern-day antenna systems have multiple RF outputs that may include SUM, Delta-Az, and Delta-El channels. All of these need to be characterized for phase and amplitude response as a function of frequency, scan angle, temperature, etc. These measurements are done using complicated/expensive test gear like spectrum analyzers and vector network analyzers (VNAs). It is usually cost prohibitive to provide one set of this test gear for each of the 3 channels mentioned earlier. So it is very common to have a single set of test gear and an RF switch, or set of RF switches, that routes the desired RF channel into the test set to make the needed measurements for each channel.
H: How to PWM a load regardless of the direction of current? I'm helping a student at our art academy with an interesting project. She is building an array of peltier elements (let's say 25 to start with). She wants to control them individually using PWM and be able to switch the polarity on them every now and then. We are aware this is a terrible idea in terms of thermal stress and the life expectancy of the elements, but we'd like to try it anyway for the sake of this project. We could do it with an H-bridge for each one of the elements, but that would take a lot of work and components. I was thinking that we might be able to simplify it. Would it be possible to use four relays to reverse polarity on the power supply side, and then connect the peltier elements with optocouplers or something to do the pulse width modulation? That way, for each heating element we only need one component (which we open and close from an Arduino with a shift register), instead of a whole H-bridge module or four mosfets for each. Which component would be a good fit if we want to send 12V @ 1 amps (peak 3 amps) through it? At first I thought a MOSFET like IRLZ44N would work, but they only work in one direction I think, with the source connected to ground. Would something like the TLP175A enable us to PWM the elements even when the direction of the current reverses? Thanks for any advice! EDIT So I was thinking of something like this: But this has the problem that the MOSFETS will not work with the current in both directions. So I was thinking there would be an alternative component that would. AI: You may connect relay coils in parallel or series for polarity control as long as you connect relay main contacts as opposite (normally open on one and normally closed on the other). You can also just use one DPDT relay instead of two SPDT relays. simulate this circuit – Schematic created using CircuitLab
H: How to decide which key was pressed on a calculator? Hi all, I'm trying have an arduino read key presses on my calculator. The calculator does it by pressing the keyboard membrane to the PCB and connecting two points, so each key has two of these points. I've soldered wires to two key's contact points(four wires in total), and connected the arduino's GND to the calculator's -ve. I read the values in analog, then calculate the voltage, so I would see a change in voltage, when a key is pressed, and the key would be identified by which contact point had the v. change happens on. Now here's what actually happens... Every contact point has a voltage change(to an equal degree, too) when either key is pressed. One and three jumps to 1.4, two and four jumps to 0.0. There's a chance I somehow damaged the circuit, but other than that, I have no idea what I'm doing wrong... Here's my code: int one = A0; int two = A1; int three = A2; int four = A3; float refVolt = 5.0; void setup() { // put your setup code here, to run once: Serial.begin(9600); } void loop() { // put your main code here, to run repeatedly: int oneR = analogRead(one); int twoR = analogRead(two); int threeR = analogRead(three); int fourR = analogRead(four); float oneV = (oneR/1023.0)*refVolt; float twoV = (twoR/1023.0)*refVolt; float threeV = (threeR/1023.0)*refVolt; float fourV = (fourR/1023.0)*refVolt; Serial.println("One: "+(String)oneV+" - Two: "+(String)twoV+" - Three: "+(String)threeV+" - Four: "+(String)fourV); delay(100); } PCB with the wired points marked: If this is how the whole thing should work, is there any way to read and decide which key was pressed? If my way should work, what am I doing wrong, and how can I fix it? Please share any thoughts, solutions, ideas! Thanks in advance! AI: This will not answer your question but will give you some background information why your project will be difficult. Figure 1. Keyboard scanning matrix. Image source: The Cuckoo's Nest. The animation is showing that (for this example) the rows are switched on in sequence 0, 1, 2, 0, 1, 2, .... If a key is pressed then that COLumn will turn high. To know which key was pressed you have to know: Which ROW output is on. Which COLumn input is on. So for you to do this you need to monitor each row and column and you have to be faster than the calculator so analog might not be a good idea. The problem is more difficult because the calculator is probably not the same voltage as your micro-controller. This might not be worth the trouble!
H: Beginner circuit diagram of superposition I get 2amps*(10ohms)/(10+25)ohms-3.5/25=.666-.14amps Why did I not get .66? AI: You wrote: 2amps*(10ohms)/(10+25)ohms-3.5/25=.666 The right-hand side is correct (\$\dfrac{3.5}{25}\$) but the left one (\$\dfrac{2\cdot10}{10+25}\$) is incorrect, since it should be (\$\dfrac{2\cdot10}{10+15}\$), which does get you to .66 A.
H: Why will we get the wrong answer if we use \$I_1=\frac{V_1}{Z_1}\$ to calculate \$I_1\$ directly? There is a transformer with \$600V/120V\$, the impedance in the high voltage side is \$52-30j Ω\$, and there is a load, whose impedance is \$0.8∠10°Ω\$, connected to the low voltage side. Now, when \$V_2=120∠0°V\$, what is the value of the current in the primary side? My thinking \$V=IZ,\$ so \$I_1=\frac{V_1}{Z_1}\$, that is, current in the primary side \$I_1=\frac{600}{52-30j}=\frac{600}{60∠\theta }=10∠-\theta A\$ Solution Current in the secondary side \$I_2=\frac{V_2}{Z_2}=\frac{120∠0°}{0.8∠10°}=150∠-10°A\$, so current in the primary side \$I_1=I_2 \times \frac{N_2}{N_1}=I_2 \times \frac{V_2}{V_1}=150∠-10° \times \frac{120}{600}=30∠-10°A\$ The solution just uses \$I=\frac{V}{Z}\$ to calculate the current in the secondary side, \$I_2\$, then uses the voltage (turns) ratio to calculate the current in the primary side, \$I_1\$. However, we have the voltage and impedance on the primary side, so why don't we just use \$I_1=\frac{V_1}{Z_1}\$ to calculate the current in the primary side? Why will we get the wrong answer if we use \$I_1=\frac{V_1}{Z_1}\$ to calculate \$I_1\$ directly? AI: You haven't drawn the circuit, but how do you know that the impedance on the primary side is across the input voltage of 600V? I=V/Z would only hold true if that 52-j30 Ohm impedance were in parallel with the input source. And even if that were the case, the current through that impedance still would not be the primary current going into the primary side of the transformer. This is what you're actually doing: simulate this circuit – Schematic created using CircuitLab \$I_1\$ is the current through the 52-j30 impedance, but is not the one that actually flows into the primary side. Maybe, just maybe, this is what you really have: simulate this circuit And in this case, all the primary current flows through the 52-j30 impedance, and it makes sense to find that current with the \$I_P\cdot N_1=I_S\cdot N_2\$ relationship since you already have all you need.
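Put differently (and assuming the \$52 - j30\ \Omega\$ really is in series with the source, as in the second schematic): refer the load to the primary, $$Z_2' = \left(\frac{N_1}{N_2}\right)^2 Z_2 = \left(\frac{600}{120}\right)^2\times 0.8\angle 10° = 20\angle 10°\ \Omega$$ Since \$V_2 = 120\angle 0°\$ fixes the ideal-transformer primary voltage at \$600\angle 0°\$ V, $$I_1 = \frac{600\angle 0°}{20\angle 10°} = 30\angle -10°\ \text{A}$$ which matches the book's solution. The \$52 - j30\ \Omega\$ carries this same current in series; 600 V is not the voltage across it, which is why \$I_1 = V_1/Z_1\$ gives the wrong answer — that formula silently assumes the impedance has the full 600 V across it.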
H: Amplification Circuit for Generating High-Current Sinusoidal Signal to Feed a Coil I am trying to implement a control algorithm using Arduino Due. I need to generate high-current constant-frequency sinusoidal signals to feed a coil using some batteries. The coil will generate electromagnetic force. I want to design a circuit that amplifies a reference sinusoidal signal from the DAC0 and DAC1 of the Arduino Due and feeds the coils with the same-frequency but high-current sinusoidal signal. What is a simple, reliable way to do that? The amplified current should be the same over a frequency range of the reference sinusoid, say 10Hz to 100Hz or some other range, and I want to generate the almost largest possible electromagnetic force. I have two 12V Li-Po batteries but I can buy other batteries. I also have DC-DC converters if any needed. There is no restriction on the frequency range. Basically, I have 3 coils and each coil can be fed with an arbitrary frequency but different from other coils. The resistance of each coil is approximately R=16 ohms and the inductance is L=0.070 H. The impedance is (R^2+w^2*L^2)^(1/2), where w is the frequency of the sinusoidal current in rad/s. I need about 3 A but I would like to get higher. The coil is made with 22 AWG magnet wire with a polyamide overcoat as the material for current-carrying conductor of the coil. Multilayer multirow winding is used. Right now I have 500 turns and the radius of coil is 0.1 m. Max current about 5A and about 0.01 N force. Duration about 30 s. Design Goal: A simple, reliable circuit that accomplishes the above force Hope that I have provided enough information Any help is appreciated. AI: Force is proportional to current and Inductance. For 100Hz ( 628rad/s ) your R/L ratio must be T<=1ms . So using Litz wire with the most turns and diameter and core permeability is what you need with an inductance and resistance calculator for the best Litz wire you can get. (most strands) Matching the ESR of the batteries to the DCR of the Coil will maximize your power transfer but also load you batteries to the max. which must be minimized by prudent low duty cycle. Now if you know how to measure battery ESR, and capacitance in kiloFarads , you can use the same conjugate impedance for the coil for the theoretical max. power transfer. Everything is a compromise when you realize how big that is, then you have to cascade coils or think outside the box.
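Before choosing an amplifier, it is worth putting numbers on what the coil demands across the band. Here is a small sketch using only the figures from the question (R = 16 Ω, L = 0.070 H); the 24 V "available voltage" is just a placeholder for two 12 V packs in series:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);
    const double R = 16.0;       // coil resistance, ohms (from the question)
    const double L = 0.070;      // coil inductance, henries (from the question)
    const double Vavail = 24.0;  // assumed drive voltage: two 12 V packs in series

    for (double f = 10.0; f <= 100.0; f += 10.0) {
        const double w = 2.0 * pi * f;
        const double Z = std::hypot(R, w * L);   // |Z| = sqrt(R^2 + (wL)^2)
        const double Imax   = Vavail / Z;        // peak current with Vavail applied
        const double Vfor3A = 3.0 * Z;           // peak voltage needed for 3 A peak
        std::printf("%5.0f Hz: |Z| = %5.1f ohm, I(24 V) = %4.2f A, V(3 A) = %5.1f V\n",
                    f, Z, Imax, Vfor3A);
    }
}
```

At 10 Hz the coil looks like roughly 17 Ω and 24 V only pushes about 1.4 A; at 100 Hz it looks like roughly 47 Ω and 3 A peak would need on the order of 140 V. So before the amplifier is designed, either the coil (fewer turns, lower L and R), the supply voltage, or the current target has to be revisited.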
H: Sensor Connection: DIN 43650 vs M12 Preface, I'm a plant engineer and my background is mechanical engineering. My current project requires me to find some pressure sensors for my compressed air lines and this is a pretty new territory for me. These sensors will be located in a dry room, same room as our compressors and PLC; so no running long wires or having to worry about water. I'm pretty set with the specifications of the sensors, now what I'm stuck with is choosing between two connection types. As an ME I'm not terribly familiar with the different connection types. The site I'm looking at offers two connectors: a DIN 43650 (DIN 175301-803C) connector an M12 Connector So my questions are: What would be the advantages/disadvantages of both? From what I've found, M12 is better for harsher environments, like from a washdown; but other sites mention that the DIN connector is also capable of sealing from water. It also seems to me that M12 is more secure due to screw connection. How does the DIN connector keeps itself secure? For my application, does it matter which one I choose? Are there any effects to the electrical signal between the sensor and my PLC when choosing one connector to another? I'm using current to sense the pressure. (bonus) What other connection options are commonly used among electrical engineers for connecting small sensors and actuators? AI: ... the DIN connector is also capable of sealing from water. The compressible cable gland waterproofs the cable entry. The square gasket waterproofs the electrical terminals. It also seems to me that M12 is more secure due to screw connection. How does the DIN connector keeps itself secure? The long screw visible in your photo goes through a central hole in the rear of the shell and into the threaded insert in the transducer. It's good. For my application, does it matter which one I choose? Probably not. The DIN is less fiddly to wire up. You might consider calibration and what type of leads you or your calibration contractor have to hand but either is easily fabricated. Are there any effects to the electrical signal between the sensor and my PLC when choosing one connector to another? I'm using current to sense the pressure. No. What other connection options are commonly used among electrical engineers for connecting small sensors and actuators? The M8 version of the circular connector is most common. Your two and the M8 are the most popular.
H: How to automatically disconnect from battery when USB +5v is present I'm creating a SMD PCB where space is extremely limited. Usually, the application is powered by a battery (Single cell 3.7v lipo). To charge the battery, I'm using a USB cable. What I'd like is for the application to automatically switch to USB power instead of the battery whenever power from the USB is present. A normally closed "switch" between the battery and load sounds ideal. The +5v from the USB would be used to open the switch. When the USB is disconnected (left floating) the switch would close and connect the battery again. Is there a simple way to achieve this? Am I overcomplicating it - would a diode on the positive terminal of the battery serve the same purpose? As I say, space is extremely limited so I'm trying to achieve this with as few components as possible. I'm hoping I can find a single solution in say a SOT23 package or similar. Here is a simplified schematic drawing. AI: There are IC's designed specifically for this, and in small packages too. One example is TPS2113A from TI. The TPS2121 is another option if you need higher current. Other suppliers like Maxim make devices as well. Search for "power mux IC".
H: How do I convert 48V to 3.3V? What is the proper method or device to convert 48V to 3.3V? Total current consumption at 3.3V would be from 60mA to 150mA. I found several voltage regulators, but they do not have the required capacity (7W, 45V * .15A). Likewise I tried doing it with voltage dividers but the capacity is likewise too small. AI: As you've observed, a voltage regulator won't work for the power levels you're working with. A buck converter is the device you're looking for, it's used to step down DC voltage. It's the opposite of a Boost converter, which is used to step up DC voltage.
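To see why a linear regulator (or a divider) is a poor fit here, a rough comparison may help; this is a sketch only, and the 85% buck efficiency is an assumed typical figure, not from any particular datasheet:

```python
V_in, V_out = 48.0, 3.3
I_load = 0.150           # worst-case load current, amps

# Linear regulator: the whole voltage difference is burned as heat
P_linear_loss = (V_in - V_out) * I_load

# Buck converter: loss set by efficiency (85% assumed here)
eff = 0.85
P_out = V_out * I_load
P_buck_in = P_out / eff
P_buck_loss = P_buck_in - P_out
I_in_buck = P_buck_in / V_in     # current actually drawn from the 48 V rail

print(f"linear loss ~{P_linear_loss:.1f} W, buck loss ~{P_buck_loss:.2f} W, "
      f"buck input current ~{I_in_buck*1000:.0f} mA")
```

Note that many small buck modules top out at 28 to 40 V input, so whatever part you pick should be explicitly rated for a 48 V input with some margin.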
H: How to use resistor wattage formula I've read about how to determine resistor wattage but it is not clear to me which V we need to consider in the calculation. For example, this is a schematic of some electronics I am getting familiar with. The 75R resistor (I assume to drop voltage before the 5V regulator) is rated at 5W. The input voltage is around 26VDC from the 18VAC transformer winding after rectification. As such, when calculating P=V^2/R, do we use 26VDC, or just the voltage that is dropped across the resistor? Thanks! AI: Only the voltage across the resistor is important, as that's the only voltage it "sees." Either the current through the resistor multiplied by the voltage it drops (\$P=IE\$) or the voltage squared over the resistance (\$P=\frac{E^2}{R}\$) will get you the nominal power dissipation. If you don't know the voltage drop, but know the current through the resistor, \$P=I^2R\$ also works. Choose a resistor that has some margin of extra power-handling capability, perhaps 10-20% more than the calculated dissipation.
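As a worked example with a made-up drop (the actual drop depends on the regulator's load current, which isn't given in the question):

```python
R = 75.0        # ohms
V_drop = 10.0   # volts ACROSS the resistor -- an assumed example value,
                # not the actual drop in the original circuit

I = V_drop / R            # current through the resistor
P_from_V = V_drop**2 / R  # P = V^2 / R
P_from_I = I**2 * R       # P = I^2 * R (same number, different known quantity)

print(f"I = {I*1000:.0f} mA, P = {P_from_V:.2f} W (= {P_from_I:.2f} W)")
# Using the full 26 V instead would wrongly give 26**2/75 ~ 9 W,
# which is why only the voltage across the resistor matters.
```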
H: When does a transistor act as a switch, and when as an amplifier? I'm a beginner with electronics, but I'm experimenting a lot with it. I still don't understand how you can use the same transistor, for example, to amplify an audio signal or to act as a switch for turning the current of a motor on and off. Do you use different types of transistor for different purposes? Any help is appreciated; please be tolerant and try to explain it in a simple way, I'm new to electronics. AI: simulate this circuit – Schematic created using CircuitLab We know that for a BJT the collector current increases with the increase in base current. For example, a 0.01mA increase in base current may cause a 10mA increase in collector current. Now, let's assume you have connected the collector and emitter through a copper wire (i.e. short-circuited them). Then the current through 'RL' will be: \begin{equation} i_L= \frac{VCC}{R_L} \end{equation} This is the maximum value of the current that can be passed through the resistance if the collector and emitter work as a short circuit. The condition in which the collector and emitter work like a short circuit is called the saturation condition, and the current in that condition is called the saturation current, which is defined as: \begin{equation} i_c(sat)= \frac{VCC}{R_L} ; when V_{CE} = 0 \end{equation} But in real life the collector-emitter voltage will never be zero. So the equation will be: \begin{equation} i_c(sat)= \frac{VCC-V_{CE}}{R_L} \end{equation} So with the increase in base current, the collector current will increase until it reaches saturation. As soon as the transistor reaches saturation, it is fully on. Similarly, if you reduce the base current, the collector current will decrease. For a certain base current, the collector current will be almost equal to zero. That point is called cutoff. At that point, your transistor is fully off. The region between cutoff and saturation can be used for amplification, because in that region the collector current changes with the base current. The cutoff and saturation conditions act as a switch. Image source: Output Characteristics Curves of a Typical Bipolar Transistor from Electronics Tutorials
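Here is a minimal numeric sketch of that idea; the Vcc, RL and beta values are assumptions for illustration, not the values in the schematic above:

```python
# Toy model: ideal BJT with a fixed current gain (beta = 100) and Vce(sat) ~ 0.2 V.
VCC, R_L, beta, VCE_SAT = 12.0, 1000.0, 100, 0.2

i_c_sat = (VCC - VCE_SAT) / R_L   # saturation (fully-on) collector current

for i_b_uA in (0, 20, 60, 150, 300):
    i_c = beta * i_b_uA * 1e-6                 # active-region prediction
    if i_b_uA == 0:
        region = "cutoff (switch OFF)"
    elif i_c >= i_c_sat:
        i_c = i_c_sat
        region = "saturation (switch ON)"
    else:
        region = "active region (amplifier)"
    print(f"Ib = {i_b_uA:3d} uA -> Ic = {i_c*1000:5.2f} mA  {region}")
```

The middle rows are where the collector current follows the base current (amplification); the first and last rows are the two switch states.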
H: What changes when you modify gyro/accelerometer/magnetometer range? In hobby-grade MEMS IMUs and MARGs such as the MPU-6050, you can change certain registers on the chip and modify the sensor's range, for example 2 g or 8 g for an accelerometer. Of course, because the ADC resolution doesn't change, you get lower maximum accuracy for higher ranges. What exactly changes when you modify the range registers? Obviously the mechanical and physical structure of the sensor can't change, but I don't know what enables it to measure higher or lower values. AI: There are very different ways of building a MEMS accelerometer, but the MPU-6050 uses a proof mass method. Imagine a plate capacitor, with one plate held in place using springs (~):

      ~~~~~~|~~~~~~~
            |
            |
    - ------|  |------ +
            |
            |
      ~~~~~~|~~~~~~~

As you can imagine, an acceleration in the left or right direction shifts the spring-hung plate and thus changes the capacitance of that capacitor. You can have one such mechanism per axis and build a 3-axis accelerometer. The adjustability in sensitivity can be achieved in different ways, for example: Adjusting the sensing range of the capacitance detector. Capacitance is usually measured by observing the frequency of an oscillator built using the capacitor. If one changes e.g. the reactive component in that oscillator, then that changes the frequency, and higher frequencies are better for measuring low capacitances, but make it hard to measure large ones accurately. You can even make the springs part of the oscillating system: applying a voltage between the plates of a capacitor very much applies a force between them! Actually adjusting the springs: in some MEMS accelerometers the mass is not standing still but is kept in mechanical oscillation, the springs being driven by piezo crystal components at an external frequency. As you might remember, "ideal" spring forces are proportional to elongation, and thus you can extract the spring "pre-tension" caused by external acceleration from the oscillatory properties. Simply having a different system for each sensitivity range, if that's easier than adjusting the physics of one system. So, be careful with the assumption that "obviously the mechanical and physical structure of the sensor can't change", since this is a micromechanical system.
H: Inductor as current source In this schematic for an induction heater, there are two inductors used for current sources. My question is what happens if I increase/decrease the inductance? For example, what would happen if I replaced the two 100uH coils with 1mH coils? Source: https://markobakula.wordpress.com/power-electronics/500w-royer-induction-heater/ AI: The inductors limit the maximum output current of the inverter. Replacing the inductors as you suggest will reduce the maximum current to 1/10 of the original maximum current (at resonance). It is common in the induction heating industry to refer to an H bridge with inductors in series with the output of inverter as a "current source". The word "constant" is Not used. The inductors provide current after the active devices are turned off. The series inductors do not substantially affect the resonant frequency of the tank.
H: Overheat from voltage dividing circuit? I want to create a circuit that is capable of delivering 1V, 3A from a 5V, 3A power supply. To do this im using a voltage dividing circuit with a 1k ohm resistor (r1) and 220ohm resistor (r2). The resistors im using are rated 1/4W. Will they overheat? Thanks AI: No, it won't overheat, but it won't work either. Let's analyze this voltage divider design from 5V using R1=\$1000\Omega\$ and R2=\$220\Omega\$. Open-circuit voltage (no-load condition) at midpoint of divider: \$ Vo = {5V}\times\frac{R2}{(R1 + R2)}= {5V}\times\frac{220\Omega}{(1000\Omega + 220\Omega)} = 0.901V \$ Bias current through the series connection of R1 + R2: \$ iBias = \frac{5V}{(R1 + R2)} = \frac{5V}{(1000\Omega + 220\Omega)} = 0.004098A \$ Power dissipation in each resistor depends on Ohm's law and the definition of power: \$ V = i\times{R} \$ \$ P = V\times{i} \$ \$ P = i^2\times{R} \$ \$ P_{R1} = {iBias}^2\times{R1}= {0.004098A}^2\times{1000\Omega} = 16.79 mW\$ \$ P_{R2} = {iBias}^2\times{R2}= {0.004098A}^2\times{220\Omega} = 3.69 mW \$ Now what happens when you connect a load that demands 3A from a 1V source? That is like putting a \$0.333333\Omega\$ resistor in parallel with the R2 \$220\Omega\$ resistor, which changes the design of the voltage divider... Voltage at midpoint of divider under 3A load: \$ R_{L} = \frac{1V}{3A} = 0.333333\Omega \$ \$ R2_{L} = \frac{1}{\frac{1}{R2}+\frac{1}{R_{L}}} = \frac{1}{\frac{1}{220\Omega}+\frac{1}{0.333333\Omega}} = 0.3328\Omega \$ (Notice that since \$R_{L}\$ is so much smaller than R2, it almost doesn't matter what the value of R2 is.) \$ Vo_{L} = {5V}\times\frac{R2_{L}}{(R1 + R2_{L})}= {5V}\times\frac{0.3328\Omega}{(1000\Omega + 0.3328\Omega)} = 0.001663V \$ Bias current through the series connection of \$R1 + R2_{L}\$: \$ iBias_{L} = \frac{5V}{(1000\Omega + 0.3328\Omega)} = 0.004998A \$ \$ P_{R1} = {iBias_{L}}^2\times{R1}= {0.004998A}^2\times{1000\Omega} = 24.9833 mW\$ \$ P_{R2_{L}} = {iBias_{L}}^2\times{R2_{L}}= {0.004998A}^2\times{0.3328\Omega} = 0.008 mW \$ So even though power dissipation seems not to be a problem in this case, the output voltage is not well regulated. The output voltage is sensitive to the amount of load current drawn. With no load, the output voltage is close to the design target of 1V, but with a load that draws 3 A from 1 V, the output voltage drops to near zero. The Thevenin equivalent of this resistor divider is an ideal voltage source with some source resistance in series, which models how much the voltage drops depending on the load current. 3A is a lot of load current, so this needs to be considered. \$ V_{Thevenin} = {5V}\times\frac{R2}{(R1 + R2)}= {5V}\times\frac{220\Omega}{(1000\Omega + 220\Omega)} = 0.901V \$ \$ R_{Thevenin} = \frac{1}{\frac{1}{R1}+\frac{1}{R2}}= \frac{1}{\frac{1}{1000\Omega}+\frac{1}{220\Omega}} = 180\Omega \$ The short-circuit output current of this model is then \$ i_{shortCircuitThevenin} = \frac{V_{Thevenin}}{R_{Thevenin}} = \frac{0.901V}{180\Omega} = 0.005A = 5mA \$ This is far less than the design target of 3000mA. Notice that this short-circuit current is also close to the bias current. This is a design flaw with using a resistor divider this way: there's no way to get a large output current from a resistor divider, without burning an impractical amount of power in the resistor divider. In reality, resistor dividers are normally used only to set up reference voltages with fairly low bias currents.
The bias current needs to be quite a lot larger than the load current, otherwise the accuracy is impacted. By the way, the maximum amount of power transfer happens when the load resistance equals the source resistance. Higher resistance draws less current, lower resistance draws more current but at less voltage. Open circuit draws zero power because V x 0A = 0W, and ideal short-circuit draws zero power because 0V x i = 0W. (A real short-circuit does draw some power because wires have some resistance, so we never get exactly to short circuit condition. There could be little or no power delivered to the load, but still very damaging amount of power burned in some internal structure of the source.) \$ P_{Maximum} = \frac{(\frac{V_{Thevenin}}{2})^2}{R_{Thevenin}} = \frac{(\frac{0.901V}{2})^2}{180\Omega} = 1.127 mW \$ This resistor divider approach can only provide about 1mW or less to the load, and your design target is 3000mW at the 1V@3A operating point. You could make this circuit work better as a voltage regulator, if you add an NPN BJT transistor configured as an emitter follower (there is lots of info on web). This is a circuit building block that uses a small signal (the output of the voltage divider) to control a larger signal (the 3A load current). Of course there's no free lunch, you now have to worry about power dissipation of the emitter follower transistor. And the emitter voltage is not identical to the base voltage, so the resistor divider has to compensate for the Vbe voltage drop. However this technique is much less sensitive to variations in load current. If you've never used transistors before, there are a couple of intermediate steps you will need to go through before you can drive a 3A output current. First get some 2N3904 and 2N2222 transistors; these are old but widely available small-signal transistors. The 2N2222 is good up to about 0.5A, so start with small-scale load testing of 0.01A, 0.02A, 0.05A, 0.1A, 0.2A, 0.5A. Most likely you will need a heatsink on the power transistor. There are other techniques that could be used to make a regulated 1V@3A voltage regulator, like replacing the bottom part of the resistor divider with a Zener diode for improved accuracy, but you still need a large power transistor to handle the high current output.
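For anyone who wants to replay the divider arithmetic above, here is a short script using the same component values as in the answer:

```python
R1, R2, Vs = 1000.0, 220.0, 5.0
R_load = 1.0 / 3.0                       # the 1 V / 3 A load, ~0.333 ohm

def parallel(a, b):
    return a * b / (a + b)

V_open = Vs * R2 / (R1 + R2)             # unloaded divider output
R_bottom = parallel(R2, R_load)          # R2 in parallel with the load
V_loaded = Vs * R_bottom / (R1 + R_bottom)

R_th = parallel(R1, R2)                  # Thevenin source resistance
P_max = (V_open / 2) ** 2 / R_th         # best-case power into a matched load

print(f"open-circuit: {V_open:.3f} V")
print(f"with 1V/3A load: {V_loaded*1000:.2f} mV")
print(f"Thevenin resistance: {R_th:.0f} ohm, max deliverable power: {P_max*1000:.2f} mW")
```

The printed numbers match the hand calculations above: roughly 0.9 V unloaded, a couple of millivolts under the 3 A load, and about 1 mW of deliverable power at best.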
H: Identify this 2-pin component - markings include AUPO A4-F This part went bad, I have no idea what it is to order it on Mueller site, can you identify it for me? Thanks to anyone that does. . AI: That is an AUPO A4-F 130°C 2A thermal fuse. Datasheet available here. Here is a copy of a relevant part of the datasheet:
H: USB-C connector locking mechanism? Is there an internal component in USB-C that helps hold the connection tight? I have 2 USB cables from 2 different manufacturers. One I bought last year from some random company named 'SUNGUY' on Amazon, and more recently I splurged on fancy Anker ones. The Anker ones don't stay in as well as the SUNGUY one. There's a subtle click of sorts for both, but it is much, much more pronounced for the SUNGUY one. I can dangle my phone by the cable with the SUNGUY one, for example, but it'd fall out with the Anker one. I measured both connectors with a decent caliper and both came out to 2.41mm x 8.31mm +/- 0.02mm. So I don't think it's the external part that's making the difference in the security of the connection. I have tried other cables since; all behave like the Anker one, which is unfortunate because I want more like the SUNGUY one. https://www.amazon.com/gp/product/B07Z4BMRXG is the SUNGUY cable. Nothing there seems to suggest it's special in any way? AI: The USB-C plug has retention springs inside the shell: (diagram excerpted from USB Type-C Cable and Connector Specification, Release 2.0). The specified extraction force for a USB-C connector, from the same specification document, is between 8 and 20 newtons (1.8 - 4.5 lbf) when brand new and 6 and 20 newtons (1.35 - 4.5 lbf) after 10,000 insertion/removal cycles. The lower end of this is enough to "dangle" most phones (my phone masses 190g, which means it weighs about 1.9 N) as long as you don't swing or jerk it around.
H: Is it possible to take the transfer function of a transistor? I am using a common emitter amplifier to bias and amplify the signal from an electric guitar, and in the process I started wondering whether it is possible to include a transistor in a transfer function. If it is assumed that the transistor is acting in the linear region of operation, then it is my understanding that the component is approximately linear. As the transfer function only applies to linear elements, I was wondering if this is possible. In the linear region of operation, I learned in class that we can approximate this component as a resistor, where the value of resistance is: $$ R_{on} = 1/(k*(V_{GS} - V_{TH})) $$ I've also seen this written as: $$ R_{on} = 1/(k*(V_{BE} - V_{cutoff})) $$However, because the resistance is dependent on voltages at other points, I was unsure as to whether or not this is again possible. To be thorough, I attached a picture of my circuit in LTspice. My transistor is a PN2222. Thank you very much. AI: Biased like that, the transistor is in saturation, and will not amplify. Set r2 = r/2, and you'll have a large and nearly linear output range, with a gain of -1. Now add a series RC in parallel with the emitter resistor, and make that new resistor be r/10; you'll have a rather linear gain = 10x. Now for some values: R1 = 100K ohms (brown/black/yellow) R2 = 47K ohms (yellow/purple/orange) R3 = 5.1K ohms (green/brown/red) new resistor in series RC: R_gain_set = 510 ohms (green/brown/brown) and that new capacitor in the series RC: C_low_corner = 22uF (and check the polarity) for a 16 cycles-per-second 3dB rolloff at the low/bass end. R4 = 5.1K ohms With the base at VDD/3, and the emitter a little lower, we guarantee the collector will be a little higher than 2/3 VDD, and we are guaranteed the transistor Vce is about VDD/3, which is very nicely far from saturation. Life is good. If beta >> 20, life is really good. ================================ Let's discuss distortion. If the emitter is grounded, such that the input AC signal appears across base to emitter, then a 4 millivolt peak-to-peak input (0.004 volts) causes about 10% distortion. In our added RC series network across the R3 resistor, if the R portion is ZERO, having only a capacitor in parallel with R3, you've in effect AC-grounded that emitter, and a 4mVpp input will cause 10% distortion. However, the distortion of the base-emitter junction is predicted by a polynomial that gives the designer the coefficients for each order of distortion. Thus a polynomial like Vout = K0 + K1 * Vin + K2 * (Vin * Vin) + K3 * (Vin * Vin * Vin) + .... can be very useful to people evaluating low distortion audio circuits or low distortion ADC preamplifiers or low distortion radio frequency amplifiers, or high distortion guitar amplifiers. The 2nd order term predicts A * B intermodulation distortion, as well as predicting A * A (self) distortion. The 2nd Order Intercept is a useful figure to examine. The 3rd order term predicts A * A * A (self) distortion, as well as predicting A * A * B and A * B * B (which you will find are symmetric effects). The 3rd Order Intercept is a useful figure to examine. NOW THE KEY IDEA: if you have a resistor in series with the emitter, and there is NO CAPACITOR across that resistor, you can dial up the desired amount of distortion. A resistor with a DC drop of 10 * 0.026 volts will greatly reduce the distortion. A resistor with a DC drop of 100 * 0.026 volts produces even less distortion.
You can re-derive the polynomial, with a linear resistor added onto the exponential base-emitter diode non-linearity, and have the revised math for this.
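As a numeric illustration of that point, here is a toy model (my own idealized sketch, not a simulation of the circuit above): it assumes an exponential base-emitter law, a 1 mA quiescent current, and estimates the harmonic distortion of the collector current for a few-millivolt sine drive with different amounts of emitter degeneration.

```python
import math, cmath

VT = 0.026          # thermal voltage, volts
I_BIAS = 0.001      # 1 mA quiescent collector current (assumed)
IS = I_BIAS / math.exp(0.65 / VT)   # scale current chosen so Vbe ~ 0.65 V at 1 mA

def collector_current(v_in, r_e):
    """Solve ic = IS*exp((v_in - ic*r_e)/VT) by bisection (taking ie ~ ic)."""
    lo, hi = 0.0, 0.1
    for _ in range(60):
        mid = (lo + hi) / 2
        if IS * math.exp((v_in - mid * r_e) / VT) > mid:
            lo = mid
        else:
            hi = mid
    return lo

def thd(r_e, amp=0.002, n=1024):
    """Rough THD of the collector current for a sine of 'amp' volts peak at the base."""
    v_bias = 0.65 + I_BIAS * r_e        # shift bias so quiescent current stays ~1 mA
    wave = [collector_current(v_bias + amp * math.sin(2 * math.pi * k / n), r_e)
            for k in range(n)]
    spec = [abs(sum(w * cmath.exp(-2j * math.pi * h * k / n) for k, w in enumerate(wave)))
            for h in range(1, 6)]       # fundamental and harmonics 2..5
    return math.sqrt(sum(s * s for s in spec[1:])) / spec[0]

for r_e in (0, 260, 2600):   # DC drops of 0, ~10*26 mV, ~100*26 mV across the emitter resistor
    print(f"Re = {r_e:5d} ohm: THD ~ {100*thd(r_e):5.2f} %")
```

The distortion shrinks by roughly the factor (1 + gm*Re) as the DC drop across the emitter resistor grows, which is the behaviour the answer describes.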
H: How do I find op amp specifications regarding voltage limits? This question is perhaps a bit unusual and is about how to read/interpret specifications. I have been doing some testing with a common difference amp (LM324) and have also used LTspice to double check that my actual breadboarded results are what the simulation expects. The testing I have been doing is to measure the effect of changing Vcc on the value of Vout. I am comparing 2 input voltages, say 29V and 26V, and expect Vout to be 3V (I am using a unity gain configuration with the op amp). The power for the op amp is varied from 27V down to zero and what I see is that as the voltage gets to around 14-15V, Vout drops from a steady 3V to below 1V. The same holds true if my input voltages are 26V and 23V, so I'm clearly hitting some limiting value of the op amp. My question is this: where in the LM324 documentation can I find the details that explain this behaviour, and how can I use this to predict the behaviour before I start building the circuit? I'm guessing it's something to do with common rail details, but this is only a guess. My motivation is to find out the lowest Vcc I can drive my LM324 with if I am measuring a whole range of different input voltages from 28V down to 0V. As per the suggestion, I have now added a screenshot showing the circuit and the test results in LTspice. Thanks for your guidance. Mike AI: For sensible results the signals on the input pins of the op-amp must be between Vee - 0.3 and about Vcc - 2 (this is from memory; it will be in the datasheet somewhere). Assuming you're using the normal unity gain circuit with four equal resistors, you'll hit that limit when the high input is about twice the supply voltage. simulate this circuit – Schematic created using CircuitLab
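A back-of-the-envelope check based on the answer's rule of thumb (the exact headroom figure comes from the LM324 datasheet's common-mode spec; the 2 V number here is the answer's approximation, and the inputs are assumed referenced to 0 V):

```python
# Unity-gain difference amp with four equal resistors: the non-inverting
# pin sits at Vin+/2, which must stay below roughly Vcc - 2 V.
def min_supply_for_input(v_in_high):
    return v_in_high / 2 + 2

for v in (23.0, 26.0, 29.0):
    print(f"Vin+ = {v:4.1f} V  ->  Vcc must stay above ~{min_supply_for_input(v):4.1f} V")
```

That lines up with the droop you observed when the supply fell to around 14-15 V with 26-29 V inputs.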
H: What can cause an oven’s heating element to deform? What can cause a heating element to deform this way? My oven is tripping the breaker whenever I turn on the heating; the LCD and light are working fine, although the front LED flickers! The model of the oven is IKEA Framdit OV9. One of the elements has 30 ohm resistance and the other has 60 ohm, and given the data on the elements they have 1650 W and 800 W power ratings respectively. So for 230 V the resistance seems to be fine. AI: It is shorted to the case; check the resistance between a contact and the outer metallic sheath. This resistance should be infinite; in your case it will be something near zero if you test it with a megger. You should replace the damaged heater.
H: A general purpose way to connect cellphone line out to a microcontroller ADC I have looked at many questions with similar titles but none cover my query. Question: How can I protect an STM32 Cortex M4 series ADC input when measuring the AUX output voltage from typical smartphones (e.g. Samsung Galaxy)? Here is a circuit I used to sample the mobile audio output through the AUX jack; the measurement will be done by an ADC with a maximum input voltage of 3.3 V (maybe 3.6 V, but I keep it lower). The ADC is part of an STM32F446RE microcontroller. simulate this circuit – Schematic created using CircuitLab I am advised that this circuit can damage the ADC. I (and many others) am looking for a safe way (safety is more important than quality of the signal) to sample audio from the AUX jack (headphone output/line out/etc) where the solution is independent of the phone and is standard for most of them. AI: I'd be tempted to do this: - You certainly need the added 10 kohm resistor to limit the current pulse that could flow into the ADC's input if the audio level was uncontrolled. Most ADCs will have a maximum limit of around 5 mA beyond which you can damage the ADC. So, if a 10 kohm resistor was used, the peak voltage at the junction of R2, R3 and C1 would have to rise 50 volts above the ADC's supply rail before damage could occur. This is unlikely to happen but you still need to check what that maximum p-p output level could be from the circuit that drives the input. Another option is to use a TVS diode that will clamp at say 10 or 20 volts - this ensures that the peak current into the added 10 kohm resistor could not rise to anywhere near 5 mA. But, you need to adequately design this - don't just choose some piddling TVS that could be damaged by a sustained signal from the audio source - design it properly and look at data sheets and calculate the energy withstand requirements for the TVS.
H: Applied 370 °C at the input of an ADC in a microcontroller I am using this microcontroller (100-pin). I have configured pin 82 as an ADC input. The ADC reference voltage is 5V (provided by an LDO whose input voltage is 12V). To test my ADC accuracy, I attached a thin 45cm copper wire to that pin and started to measure. But while the board was running, I mistakenly touched the soldering iron to the other end of the wire connected to the ADC pin. The soldering iron was at around 370 °C, with about 3 s of contact with the wire. Noticing this immediately, I turned off the module power and restarted it. Upon restarting, the power module started consuming an excess current of 80mA. What would have happened at the ADC input pin due to this mistake? And what would happen in the same scenario if that pin were an output (open drain or push pull)? AI: What would have happened at the ADC input pin due to this mistake? It's likely that an ESD event occurred - maybe the circuit was unearthed and had attracted some charge (not a problem by itself but you have to know how to handle these events). The worst thing you can do is earth one single pin and, by the sounds of it, that is what happened when the soldering iron tip touched the wire connected to the pin. And what would happen in the same scenario if that pin were an output (open drain or push pull)? More than likely it would still damage the chip. Even an unearthed soldering iron tip can be a huge problem - capacitive coupling to the tip from the AC supply can push a few mA into the chip's pin and cause damage. See this from the data sheet for the part you used: -
H: TI TPS61161 Output Voltage I am designing an LED backlight driver as a sub-circuit for a DPI interface. The backlight in question requires 12.8V. I am trying to use the TPS61161 and have read through all of the documentation quite extensively. What I don't understand is that the output voltage is "38V Max" and there is no reference to how the output voltage is set. Vout is used in a few equations but, as far as I can tell, not in a way that actually affects Vout itself. Am I missing something? Does the chip just sink any excess voltage as long as the load is in range? https://datasheet.lcsc.com/szlcsc/1811061825_Texas-Instruments-TPS61161DRVR_C324075.pdf Thanks in advance! Edit: This is all the information I have for the LEDs: I have just wired up the following with a variable power supply that I have and got no illumination and a very hot power supply! I thought this would work to dim the backlight from the Pi? AI: Am I missing something? The TPS61161 can produce a maximum of 38 volts but its main aim is to provide a constant current for any LEDs connected. It does this by monitoring the current through \$R_{set}\$ below: - In effect, the duty cycle of the TPS61161 is adjusted to force the required current through \$R_{set}\$, and the voltage at the top LED anode becomes whatever it needs to be to push that current through the LED string. The FB pin needs to reach 200 mV for the chip to regulate; so, in the circuit above, if 20 mA flows through the LEDs and \$R_{set}\$ is 10 ohms then 200 mV is produced at the FB pin, and the chip regulates that voltage by continually adjusting the duty cycle to maintain 20 mA through the LEDs. The backlight in question requires 12.8V. The devil is in the detail so, without any other knowledge about the LED in question, it's impossible to say if the chip and its circuit are suitable.
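If it helps, here is the sense-resistor arithmetic in one place, using the 200 mV FB reference quoted above (check the actual target current against your backlight's datasheet before picking a value):

```python
V_FB = 0.200          # feedback reference voltage quoted in the answer, volts

def r_set_for(i_led_mA):
    """Sense resistor that makes the boost LED driver regulate to i_led_mA."""
    return V_FB / (i_led_mA / 1000.0)

for i in (20, 50, 100):
    print(f"{i:3d} mA target -> Rset = {r_set_for(i):5.1f} ohm "
          f"(dissipates {V_FB * i / 1000 * 1000:.0f} mW)")
```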
H: Does 12 V DC to +6 VDC and -6VDC with discrete components require conversion to AC? I have a 12 volt battery and want to power a 741 Op Amp, but that requires + and - voltage supplies. Is there a more efficient way to give a + and - supply to the op amp from the battery without converting to AC first. By converting to AC first I mean: DC Battery-> AC Inverter-> Transformer:Primary Two Tap, Secondary Center Tap:Three Tap with center Tap Grounded-> DC Rectify Top and Bottom Tap ~ +6 V Top and ~ -6V Bottom AI: It requires conversion to AC, but not necessarily a transformer. Maybe an inductor or capacitors will work fine, depending on the current requirements. The easiest thing is to buy a DC-DC converter module with an isolated output or +/- output. You can also use a switchmode power supply chip as an inverting boost converter with only a single inductor. In keeping with your retro-space-age \$\mu A741\$ theme, you could use an MC34063. That could give you +/-12V Alternatively, you could split the 12V into +6V/-6V. The easiest way to do that is to use a rail-splitter IC such as the TLE2426. The disadvantage of that is that you no longer have ground-referenced inputs and outputs relative to the +12V/0V supply. Whether that's an issue or not depends on your application. You could also have noise issues if the +12V supply is not clean (for example, if it's a vehicle +12V supply).
H: How do I amplify the difference between two voltages? Starting at the beginning: I have a shunt that carries DC current, and the voltage drop is very slight. I want to amplify it in order to see it better on an oscilloscope, and to have it scaled correctly for e.g. an Arduino ADC. This is to measure the current. So one side of the shunt has voltage V (~12V), the other side has V minus epsilon. I have a pile of NPN and PNP transistors, capacitors, resistors, diodes, etc… It would be more obvious to me as a newbie if I were trying to scale up a signal around 0V, but here I'm trying to measure "epsilon". I would like to power the whole thing (transistors, Arduino, etc…) off of the same circuit, so I'm guessing I'd need a common ground. This means I don't think I can use "V-epsilon" as ground. So, how do I amplify, and/or "re-base" this voltage drop across the shunt? Edit: Ideally not with a ready-made component dedicated for this (though thank you for the pointer to that), since part of the point is for me to understand how to build circuits with transistors, and I have transistors on hand and stores are not exactly open and convenient at the moment. (Oops, obviously the buck converter also has access to ground. And "magic happens here" can also have access to both ground and 5V from the buck converter.) AI: If you want a canned solution you can use a "current sense amplifier". This is the name of the category you should dig into at your favorite supplier. These amps are characterized by the usual parameters: offset, bandwidth, acceptable common mode range, etc. Make sure you check. But they have extra features on top of that. They work from a low impedance source (usually a current shunt) so the effects of source impedance imbalance on CMRR can be neglected. This allows a circuit that is simpler and cheaper than an instrumentation amplifier. An important bit is that usually a current sense amp is able to sense a tiny voltage on the high side (on a shunt in the power supply) at a voltage above its own power supply. Some can sense with large negative common mode too. Some sense current at one polarity only, others at both polarities. So, check the specs. For example if you have a 3.3V supply for your ADC you can power a MAX4378 from +3V3 and sense current in a 24V supply. Here is one: the MAX4376, but there are a lot. For a high side shunt you could also use an opamp wired as a difference amplifier; however, if you want high gain, the acceptable common-mode range of this circuit means that it requires a rail-to-rail-input opamp powered from the rail you want to sense. Also the output is referenced to the power rail, not ground, which is inconvenient, and it is unidirectional. A canned current sense amp chip is much simpler to use. Now, if you want a discrete circuit (since you mention discrete transistors...): one could use a low offset discrete opamp to implement Spehro's circuit. This would need matched transistors like the DMMT3904. The input common mode has to include the positive rail, which means a Darlington to add a bit of voltage headroom to avoid saturating the BJTs in the current mirror and differential pair. This is an interesting circuit (gain is R6/R2) but quite complicated versus a SOT23 ready-made amplifier. Another crude circuit could be this one. Its output is referenced to ground, but there is an inconvenient DC offset voltage at zero current. This could be an advantage, as it means it can sense current in both directions, but you have to calibrate the offset out somehow.
This one has a differential output, with a common mode you can choose. If you need a single-ended ground-referenced output, an extra opamp is needed. Note the offset of all these circuits will depend on matching between transistors in the differential pair. You can always use a DMMT3904 which I think is specced for 1mV offset, but that is not as good as a decent current sense amp. Linearity should be good for the first one (it's an opamp with feedback). The others don't use feedback so they'll be a bit less linear, but they are much simpler. EDIT: The last two circuits exploit the fact that the signal to be amplified is presented at a very low impedance on the low value shunt resistor, which means instead of using transistor bases as input we can use the emitters... because as you know the emitter is an output, but it is also a low impedance input ! This isn't really an opamp, rather it's a simple feedback circuit with a differential input pair Q6,Q7 biased via Q5. Current mirror Q8,Q9 directs the output current of the diff pair into Q10. It tries to keep voltage on Q6,Q7 emitters equal by injecting a small adjustment current via Q10 which changes Q7 emitter voltage to make it the same as Q6 emitter voltage. So this current is proportional to the voltage being measured on the shunt, and since this current is created via R8, we get out amplified output voltage on R8. Again, don't expect miracles wrt accuracy or offset voltage, but it's a much simpler version of the first schematic and similar performance.
H: Recommendations for On-Hand 74xx (and others?) ICs I've been learning about logic gates (from Ben Eater's videos, etc.) and I find myself constantly having to order more ICs, especially 74xxs. Is there a list of ones I should keep on hand to help avoid unnecessary delays from ordering? This question helped a lot but it didn't seem to cover logic gates. AI: A lot depends on what you want to do. I like to keep a fair amount of glue logic around. My most commonly used parts are '00 (NAND), '32 (OR), '08 (AND), '04 (NOT), '06 (open collector inverters) and '14 (inverters with Schmitt trigger inputs), and I really like the SN74LVC1G99 configurable logic gate with which you can implement many Boolean functions. I do keep a few bus latches / drivers around for interfacing projects ('244, '245, '373 and the 16-bit variants). There are some items apart from that, but those are the parts I personally use the most. There will always be a few parts I do not have to hand, but I can usually get them within a day anyway.
H: Op-amp astable multivibrator with single supply I want to create a simple circuit to flash a LED with one op-amp. I already simulated the circuit (astable multivibrator) but there is one problem with the supply. I don't know what to change so that it works for single supply. However, when I use dual-supply it will burn my LED when placed like on the circuit below. The output will oscillate between 10V and -10. Unfortunately, -10V is below the max. allowed voltage for an LED (which is maybe 5V?). I have two questions now: What would I have to change when I want it to work with dual-supply (currently it would kill my led)? How do I need to change the circuit to get it work with single supply? Thank you! AI: What would I have to change when I want it to work with dual-supply (currently it would kill my led)? Put a reverse protection diode across the LED (or maybe a reversed LED of a different colour to get a nice flashing two-colour effect). How do I need to change the circuit to get it work with single supply? Try this: - The frequency might need a bit of an adjustment if it's critical.
H: SystemVerilog same function, different simulation results Why does the simulator calculate different results for these signals: prova1 and prova2? In my opinion, those represent the very same function. The signal declaration is:

    logic [12:0] sub_ton;
    logic [11:0] sub_ton_neg;
    logic prova1;
    logic prova2;

Here are the simulation results: AI: Your problem is that the rules for Verilog expression bit length say that operands get extended to their context-determined lengths before applying the operators. In the expression ~(sub_ton[11:0])+1 the +1 is really +32'sd1, a signed 32-bit decimal number. sub_ton[11:0] gets 0-padded to 32 bits, then the bitwise negation happens, followed by the addition. When you make the assignment to sub_ton_neg it gets truncated back to 12 bits. That truncation doesn't happen in your 3rd assignment to prova2. This little example should give you a better picture of what's happening.

    module top;
      bit [12:0] sub_ton = 1;
      bit [11:0] sub_ton_neg;
      initial begin
        sub_ton_neg = (~(sub_ton[11:0]) + 1);
        $displayh(sub_ton_neg);
        $displayh((~(sub_ton[11:0]) + 1));
      end
    endmodule

The rules are explained in section 11.8.2, Steps for evaluating an expression, of the IEEE SystemVerilog 1800-2017 LRM. These rules have been in place since before the IEEE Verilog 1364-1995 LRM.
H: Dual channel DAC - Combined output mode - Possible downsides Can a dual channel DAC be operated with both output channels tied together to achieve a higher output current? What would be the possible downsides? Background: I require a programmable voltage source with up to 25 mA, and would like to do so with the minimum of parts. :-) I have an MCP4822 dual channel DAC with inbuilt op amps for both channels. Each channel typically has a "Short Circuit Current" of 15 mA (25mA is the absolute max. rating for each output). http://ww1.microchip.com/downloads/en/devicedoc/20002249b.pdf I probably could use the Latch DAC Input (LDAC) to keep both outputs in sync. AI: DACs are not meant to provide current. Short-circuit current is, by definition, measured with the output dragged all the way down to 0V. Just because it can survive that doesn't mean the output voltage will be anywhere near accurate while doing so. Expect a 100% voltage droop at 15mA. Use a buffer.
H: Is this level shifter a dud or am I doing something silly? I am very new to hardware circuitry and am attempting to connect a Wiegand RFID and PIN code reader to a Raspberry Pi for door access control logic. More details at: https://gist.github.com/grenade/d7289df5c83172b82052b2b87df9f978. In the 2nd photo below, there are 3 wires connected to a bidirectional level shifter (5V - 3.3V). From left to right they are connected as follows:
wiegand data 0 - level shifter hv 1
wiegand gnd - level shifter hv gnd
wiegand data 1 - level shifter hv 4
I intend to connect the level shifter to the Raspberry Pi as follows:
raspi gpio 04 - level shifter hv
raspi gpio 06 - level shifter lv gnd
raspi gpio 11 - level shifter lv 1 (lower left pin)
raspi gpio 12 - level shifter lv 4 (lower right pin)
raspi gpio 17 - level shifter lv
I am following a related post which describes a schematic that uses similar hardware. The multimeter shows 12V coming from the power supply at the top of the first picture. The multimeter shows 6V coming from both data 0 and data 1 on the Wiegand reader; I was expecting 5.5V, but whatever. The multimeter shows 6V coming from both lv 1 and lv 4 on the level shifter; I was expecting a level shift down to 3V ~ 4V. Have I misunderstood what level shifters do, or is this one a dud, since it outputs the same voltage on the lv end as is going in on the hv end? AI: Michael Karas' comment contained the answer I needed. Once the level shifter was powered on both the HV and LV sides from the respective 5V and 3V sources, the shifter gives the correct 3.3V shifted voltage from the data0 and data1 pins.
H: Differential amplifier (transistors): Which input is inverting? For example, on the following differential amplifier it is clear which of the inputs is the inverting one. If the voltage on the non-inverting input (base of Q1) is higher, the current through Q1 will increase. That leads to a higher voltage across R2 and the collector potential will go down. In comparison, the current through Q2 is decreased, which means less voltage across R4, and the collector (= output) will become higher. Is that correct? However, on this circuit it looks different: the inputs are swapped, as the output is being taken from the "non-inverting transistor". However, by that logic the output should become lower when increasing the positive input? I don't understand how they came to the conclusion that the inputs are labelled correctly. simulate this circuit – Schematic created using CircuitLab AI: \$Q_4\$, \$D_1\$, \$D_2\$, \$R_2\$ and \$R_4\$ form a constant current sink. Increasing the voltage on the base of \$Q_2\$ causes the voltage at its collector to fall. This in turn turns \$Q_3\$ on more, increasing the output voltage. So clearly \$Q_1\$ base is the inverting input, \$Q_2\$ base is the non-inverting input and \$Q_3\$ collector is the output.
H: Double Sided Printed Circuit Board I'm new to this forum and literally do not know anything about electrical engineering. I have a simple question for you, as I am looking for some informed opinion in order to decide whether to invest in a stock or not. This company: https://www.nano-di.com/investor-news/hensoldt-and-nano-dimension-achieve-breakthrough-in-electronics-3d-printing just announced that it's capable of 3D printing a double-sided circuit board. Apparently that was not possible before, and it is a world first. Could you tell me how much of a breakthrough their technology is? Is it something remarkable or not so much? Could you help me understand in a nutshell what it is about? Thank you so much in advance for your patience. AI: Standard double-sided PCBA manufacturing requires 2 passes through the reflow oven. This means the components on the 1st side and the board itself go through two reflow thermal profiles. Not all components can tolerate that, most notably modules and older T.H. (through-hole) connectors. Nano-Di probably managed to invent a material and 3D printing process that allows more than one pass.
H: How to interpret the S-parameters of this result? I'm designing an FSS in CST MWS that should provide maximum reflection for the band (12 to 15) GHz and hence minimum transmission in this band. I've got the results as shown here. How do I find the return loss and transmission loss? Out of these 8 data legends like SZmax(2),Zmax(2), which are the return loss (S11) and transmission coefficient (S21)? By transmission coefficient, the forward transmission coefficient is meant. A Floquet port model is used. Fig.1 S-parameter output Fig.2 CST port illustration AI: According to the CST MWS help page on FSS, it is mentioned that the reflection is specified using SZmax(1),Zmax(1) while the transmission is specified using SZmin(1),Zmax(1). When the magnitude of the reflection is greater than the transmission by a fair margin, then the designed FSS is said to function as a ground plane for the designed band.
H: Control Systems - Pole Location Hello there, I am trying to solve a question in which I have a system with two complex conjugate poles. The question asks me to analyse the maximum % overshoot, the settling, peak and rise times, and the damping ratio if those poles were moved to the left. Here are my conclusions: Natural frequency (wn) increases since the radial distance of the poles is going to be greater. Angle theta is going to decrease. Since theta decreases, the damping ratio (zeta) is going to increase, since it is equal to cos(theta). Maximum percent overshoot is going to decrease. Settling time is going to decrease. I could not reach a firm conclusion about the peak time and rise time. I know that the peak time is equal to \$t_p = \pi/(\omega_n\sqrt{1-\zeta^2})\$. And I also know that the rise time cannot be calculated directly. Can you help me with that? Here is the question: I am of course not trying to get the answer. I am just trying to get some feedback. AI: Natural frequency (wn) increases since the radial distance of the poles are going to be greater. Angle theta is going to decrease. Correct. Since theta decreases damping ratio (zeta) is going to increase since it is equal to cos(theta). Correct. Maximum percent overshoot is going to decrease. Yes, because \$\zeta\$ has increased Settling time is going to decrease. Because \$\zeta\$ has increased, overshoot has decreased hence the settling time to within "some percentage" limits is decreased. But it depends on how you define settling time. Once the step has caused the output response to "hit" the first overshoot, it will settle more quickly to stability. But as \$\zeta\$ approaches 1 (the limit for conjugate poles) it does take a long time to hit that first overshoot. For instance if \$\zeta\$ is 0.90 it will hit the first overshoot in about 70% of the time it would take if \$\zeta\$ was 0.95. But, once that overshoot is hit, a \$\zeta\$ value of 0.95 will settle down more quickly. So, it depends on how you define settling time.
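To make the trends concrete, here is a short script using the standard second-order formulas; the pole values are illustrative (they are not taken from the original exercise), and the poles are moved straight left with the imaginary part kept constant:

```python
import math

def second_order_metrics(sigma, wd):
    """Standard 2nd-order metrics for poles at -sigma +/- j*wd (underdamped)."""
    wn   = math.hypot(sigma, wd)          # natural frequency
    zeta = sigma / wn                     # damping ratio = cos(theta)
    Mp   = math.exp(-math.pi * zeta / math.sqrt(1 - zeta**2)) * 100  # % overshoot
    tp   = math.pi / (wn * math.sqrt(1 - zeta**2))                   # peak time
    ts   = 4.0 / sigma                    # ~2 % settling-time approximation
    return wn, zeta, Mp, tp, ts

for sigma in (1.0, 3.0):                  # original pole pair, then moved further left
    wn, zeta, Mp, tp, ts = second_order_metrics(sigma, wd=4.0)
    print(f"poles -{sigma}±4j: wn={wn:.2f}, zeta={zeta:.2f}, "
          f"overshoot={Mp:.1f}%, tp={tp:.2f}s, ts~{ts:.2f}s")
```

Notice that if the poles are moved horizontally (imaginary part unchanged) the damped frequency, and therefore the peak time \$t_p = \pi/\omega_d\$, does not change at all, while overshoot and settling time both drop; if the poles were instead moved radially outward, the peak time would shrink too.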
H: Do digital signal isolators shift the level of the signal? I am using 2 digital signal isolators, the ISOW7841 and the ISO7730. The datasheets do not seem to explicitly say whether a signal coming in on the 5V side comes out at 3.3V on the other side, or vice versa. I just want to make sure. Do digital signal isolators inherently shift the voltage level of the signal? AI: To me, shift = offset. But I assume you actually mean translate, as in logic level translation. Yes they do, and yes it does say. Look at the input and output logic thresholds in the datasheet. ISO773x ISO784x
H: Flashing program into microcontroller, usb-ttl, idea behind? I have limited knowledge on this topic (MCUs, communication protocols, etc.). I previously worked with an STM32 MCU and did some UART (or USART, I can't remember) communication between a PC and the MCU. While I was doing that, I was using the on-board ST-LINK programmer/debugger to load my programs into the MCU (an F4 Discovery board, to be specific). Now, while I was doing U(S)ART, I had to configure the corresponding pins for the U(S)ART alternate function along with configuring the U(S)ART peripheral registers. Recently, I have been looking at programming the so-called Blue Pill board based on the STM32F103Cxx MCU. The point I did not understand is that they (the tutorial authors) don't do any configuration for the U(S)ART or the pins. Generally, they just move a jumper from one location to another and are able to load programs via USB-to-TTL converters. So, what exactly is going on behind this? Why don't they need to set some pins as U(S)ART Rx or Tx, for example? Another little question: I see some products sold as an "FTDI programmer". Now, I do have a Prolific USB to TTL/serial converter. Will there be a difference in functionality? Can I use any USB-to-TTL converter for programming the MCU? I mean, does "programmer" mean something specific, or is it just a name? Thanks. AI: It's hard to know exactly what you are describing, but there's a likely case which would fit your description. Now, while I was doing U(S)ART, I had to configure the corresponding pins for the U(S)ART alternate function along with configuring the U(S)ART peripheral registers. This would be the case when the program you are writing yourself wants to do serial communication. Recently, I have been looking at programming the so-called Blue Pill board based on the STM32F103Cxx MCU. The point I did not understand is that they (the tutorial authors) don't do any configuration for the U(S)ART or the pins. Generally, they just move a jumper from one location to another and are able to load programs via USB-to-TTL converters. This sounds like uploading new code to be received by the chip's factory ROM bootloader. In that case, your custom code is not yet what is running on the chip; rather, the factory ROM program that functions as a bootloader is. And since that program is trying to do serial communication, ST's engineers wrote it to appropriately configure the various peripherals it is trying to use. Another little question: I see some products sold as an "FTDI programmer". Now, I do have a Prolific USB to TTL/serial converter. Will there be a difference in functionality? Can I use any USB-to-TTL converter for programming the MCU? I mean, does "programmer" mean something specific, or is it just a name? This could mean many different things, particularly with hobby/gray-market online sellers. FTDI (as well as Prolific) are IC makers, not particular ICs. Often references to an FTDI converter would mean a part such as the FT232RL or similar, which is mostly a USB to asynchronous serial bridge with a few added capabilities. That, or the competing chip from Prolific (or the counterfeit fake of it that you likely actually have), would be generally suitable for asynchronous serial communication with a bootloader. But some of FTDI's other parts like the FT2232 series can also do a fairly good job of arbitrary synchronous serial communication, which lets them do things like implement JTAG or SWD protocols (or ATmega ISP) for more directly communicating with the chip, programming it without going through the bootloader, doing breakpoint debugging, etc.
These capabilities are necessarily chip- and brand- unique in how they function and are used by whatever PC software is operating them. Though that PC software itself may abstract that out to a fairly similar set of user commands/operations.
H: Steering wheel controls with Zener diodes? I am looking at my steering wheel schematics and trying to figure out what these Zener diodes are doing there. I want to be sure there's no problem feeding 5V from an Arduino to check the ADC values without having 12V from the control backlights backfeeding into the resistor network. I measured with a multimeter and couldn't get any voltage at all, which is good. AI: Without a more inclusive schematic to give it context, I'd guess that the Signal- line is actually ground, and the buttons are read by measuring the resistance from Signal+ to ground. The bottom diode would likely be there to protect the backlight from voltage spikes. The one on top I'm less sure of. It may be protecting against spikes from nearby circuitry, or preventing spikes on Signal+ from developing sufficient voltage to damage the switches. Since it only has voltage on it when no switch is closed, it may just be regulating the "default" level at which no switch closure is detected.
H: How do resistive type fuel level sensors work? Resistive-type fuel level sensors, which are used in fuel storage tanks (such as those at fuel stations), comprise 2 visible parts: dual probe rods and a connecting float. When the float is at the very bottom the output of the sensor is Vmin (possibly ~0V) and as the float progressively approaches the top the output increases up to Vmax (possibly ~Vsupp). I would like to understand how these sensors work, possibly via a simple circuit model. For example: the XM/XT 66400 Series Continuous Level Transmitter (note the dual rods/probes). AI: The datasheet says: These rugged transmitters are designed for tanks up to 18 feet (549 cm) in depth. Heavy duty stems resist turbulence, and float options accommodate liquids with minimum specific gravity as low as 0.53. Standard resolution is 1/2 inch; higher resolutions are available on request. [Emphasis mine.] simulate this circuit – Schematic created using CircuitLab Figure 1. The innards. The clue is in the 1/2" resolution: the sensor is using reed switches spaced at 1/2" intervals. The magnetic float is big enough to ensure that at least one switch will be on at any position, and it can be expected that there will be overlap in transitioning from one switch to the next. When the float is at the very bottom the output of the sensor is Vmin (possibly ~0 V) and as the float progressively approaches the top the output increases up to Vmax (possibly ~Vsupp). The sensors come in a variety of options including 4 - 20 mA and 0 - 5 V output, for example. This means that there is a conditioning circuit in the head which converts the resistance scale to the required output.
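As a toy circuit model of this idea (my own sketch with assumed values, not the actual 66400 internals): picture a chain of equal resistors with a reed switch at every 1/2" tap; the float's magnet closes the nearest switch, so the tapped resistance, and hence a simple divider output, tracks the float height in 1/2" steps.

```python
R_STEP = 100.0          # ohms per 1/2" segment (assumed)
N_TAPS = 24             # 12" probe at 1/2" resolution (assumed)
V_SUPPLY = 5.0
R_PULLUP = 1000.0       # conditioning resistor in the head (assumed)

def output_voltage(level_inches):
    tap = min(N_TAPS, max(0, round(level_inches / 0.5)))   # closed switch index
    r_tapped = tap * R_STEP                                 # resistance below that tap
    return V_SUPPLY * r_tapped / (R_PULLUP + r_tapped)

for level in (0, 3, 6, 9, 12):
    print(f"{level:4.1f} in -> {output_voltage(level):.2f} V")
```

A bare pull-up like this gives a monotonic but non-linear voltage, which is one reason the real transmitter puts conditioning electronics in the head and offers linearized 4-20 mA or 0-5 V outputs.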
H: Issue with low voltage for mosfet gate I am a beginner when it comes to electronics so I would appreciate your help with a simple project I am working on and have been struggling with for several days. The idea is that I have a PIR Sensor outputting to GPIO 24 on my Pi. Via a python script, I would like to switch on a 12-volt LED strip with its own external power supply. I am able to see when motion is detected via the script, so that is working fine. The issue however is that I am not able to properly use my mosfets to act as a switch. In order to take it step by step, for now I am trying to output signal to my mosfet Gate via GPIO 24, and manually alter the PWM via terminal (instead of using the PIR output). My scheme is similar to this: https://screen.sh/cAaSbZiM - however not using the battery, and using the GPIO 24 output instead of PIR. I have tried this with a IRLB8721 mosfet, but realized it needs a higher voltage than what the GPIO can provide. The most suitable one I could find then was the IRF3205, seeing that it is used more frequently with Raspberry PI/Arduino applications, but unfortunately this is also not working. It seems like my GPIO voltage is still too low. Alternatively I have a couple of 2N3904 transistors available but I am not entirely sure how I can use these without frying them. I have noticed that when using the IRLB8721, my LEDs are always on, and I am not able to switch them off. I did make sure not to mix up the Drain and Source, yet still not working. Could you please guide me in the right direction? I am not able to get any other components for another week, so I am stuck with a diode, couple of 1kOhm resistors, and these 3 types of transistors/mosfets/ Is there anything I can do to fix this? I have looked this up extensively and tried it on my breadboards but with no success. Only lead I have is pointing to a fairchild mosfet which I am not able to get to for weeks. Thank you. AI: What made you think the IRF3205 was suitable? It is not a logic level MOSFET. For use as a switch, ignore the gate threshold voltage. This is the gate-source voltage where the MOSFET just barely starts to conduct. You don't care about that. You care about the MOSFET conducting as much as possible so instead of gate threshold voltage, look at the gate-source voltage used to achieve the desired RDson. You need a 3.3V logic level MOSFET so the 3.3V on the RPI GPIO can actually turn it on and off.
H: Hole size of BGA stencil I need to design a stencil for reballing. 0.76 mm is the diameter of the ball. What should be the size of each hole in the stencil? I was thinking about 0.8mm. Is that correct? AI: This is a very complex matter and a highly professional task. Please be aware that (in normal cases) BGA pad dimensions will mainly depend on the BGA pitch. Once the pad dimensions are defined, the stencil aperture should normally be smaller than the pad. In case you need more solder volume, you have to increase the stencil thickness. The complexity of the design also depends on whether the BGA pads are solder-mask-defined or not.
H: Why does this part of an OLED filter light only when orientated correctly? Note: The screen is an OLED screen. I had assumed it was LCD, therefore the post mentions it as such. I was disassembling a Samsung phone (GT-I9305) and was able to separate the LCD into two sections. The top section consists of the top glass surface and a tinted film wedged between the surface and another piece of glass. The bottom section includes all the other layers of the LCD with a bronze glass layer on top with a very fine grid-like pattern, barely visible to the naked eye. When light is shone on it, a colourful but distorted reflection is produced. Below this is a black film-like layer with fairly clear and thin light grey horizontal and vertical dashes, similar to those seen in a segmented LCD. Each of the dashes, consistent in size, is approximately 1 mm in length. Based upon this Samsung website, I'm assuming that the layers, from top to bottom, are as follows: Top section Glass surface - Top chassis Tinted film - Top polariser Glass layer - Glass substrate Bottom section Bronze layer - Color filter (RGB) Black film - Common electrode When the top section is placed on the bottom section it acts as a filter, blocking a significant amount of light. This filtering occurs as long as the top section has the glass surface facing up. If the top section is flipped, such that the glass surface is facing down, no filtering occurs and the view is only darkened to the extent that the tinted layer normally darkens it. I initially thought that the glass surface had an effect on the filtering, so I removed a small section of the tinted film and observed the same effect. If the glass is orientated how it was designed, it filtered; if it was flipped, it wouldn't. I thought polarisation should work either horizontally or vertically no matter whether the film is flipped or not. That's what I've experienced with sunglasses, so clearly something else is going on. If the molecules were orientated along the z axis, that would explain how the top section can be rotated without a change to the filtering, but it doesn't explain why it only works in one direction. So my questions are: Have I correctly identified the layers? How does the tinted film only filter in one direction? What is the name of this effect? AI: How does the tinted film only filter in one direction? That is an OLED screen, not an LCD, so the structure is a little different. Instead of having just the top polarizer, an OLED screen has the top polarizer attached to the top of a quarter waveplate: When oriented the correct way, the polarizer converts incoming ambient light to be linearly polarized. The waveplate then converts it to circular. After reflecting through the display, the light passes through the quarter waveplate a second time, accumulating 1 half wave of retardance. This turns it back into linearly polarized light, but with the perpendicular polarization, causing it to be absorbed by the polarizer. In the incorrect orientation, the light hits the linear polarizer before the waveplate and is not rotated one half wave. Without that rotation, nothing is absorbed, which is what you observe. What is the name of this effect? It is sometimes informally referred to as the poor man's isolator because it can be used to mimic part of the effect of a Faraday isolator (allowing light to pass in one direction but not the other) but without the cost, weight and magnetic field. Aside from that, I don't think it has a name, or is even considered to be one specific effect.
It is simply due to the fact that order matters in this particular sequence of filters. Is there a simple explanation of how linearly polarised light can be converted to circularly polarised? Not really. It also isn't very important here. The key point is that passing through a half waveplate will rotate the polarization by 90 degrees (vertical becomes horizontal) while the polarizer absorbs one state and passes the other (vertical passes, horizontal is absorbed, for instance). Passing through a quarter waveplate twice adds up to one half wave, so ambient light hitting the screen that reflects off the display panel will pass through the quarter waveplate twice, be rotated 90 degrees, and then (if the plate is in the right orientation) hit the polarizer last and be absorbed. It doesn't matter what the polarization state is in the middle. But then, how do they ensure the material produces a 1/4 wavelength delay? Does it have something to do with thickness? But then, white light is made up of many different wavelengths. If very high rejection of light is required, a waveplate can be achromatized by taking two waveplates made of different materials and combining them. By selecting materials with very different dispersions and setting the ratio of retardance between the two materials correctly, the variation of refractive index with wavelength in each material can keep the combined retardance nearly constant over a range of wavelengths. This is usually not so important for what is basically just an anti-reflection coating, though.
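For anyone who wants to check the polarizer-plus-quarter-waveplate argument numerically, a Jones-calculus sketch is shown below. It uses textbook matrices for an ideal horizontal polarizer and an ideal quarter waveplate at 45 degrees, treats the display as a perfect mirror, and simply compares the two stacking orders; it is an idealized model, not a description of Samsung's actual film stack:

```python
import numpy as np

# Ideal linear polarizer with horizontal transmission axis
POL_H = np.array([[1, 0],
                  [0, 0]], dtype=complex)

def qwp(theta):
    """Jones matrix of a quarter waveplate with fast axis at angle theta (rad)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    retarder = np.array([[1, 0], [0, 1j]], dtype=complex)  # 90 deg retardance
    return R @ retarder @ R.T

QWP45 = qwp(np.pi / 4)
MIRROR = np.eye(2, dtype=complex)   # idealized specular reflection off the display

def reflected_power(stack_in, stack_out, e_in):
    """Power coming back out for a given inbound/outbound stack and input field."""
    e_out = stack_out @ MIRROR @ stack_in @ e_in
    return float(np.vdot(e_out, e_out).real)

e = np.array([1, 0], dtype=complex)  # horizontally polarized component of ambient light

# Correct orientation: polarizer outermost, waveplate next to the display
correct = reflected_power(QWP45 @ POL_H, POL_H @ QWP45, e)
# Flipped orientation: waveplate outermost, polarizer next to the display
flipped = reflected_power(POL_H @ QWP45, QWP45 @ POL_H, e)

print(f"reflected power, correct orientation: {correct:.3f}")  # ~0 (blocked)
print(f"reflected power, flipped orientation: {flipped:.3f}")  # ~0.5 (passes)
```

The correct stacking order returns essentially zero reflected power, while the flipped stack lets the light that made it through the polarizer back out again, matching the observation that the flipped film only darkens the display as much as the tint alone.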
H: Schmitt trigger offset voltage I found this circuit which adds an offset voltage to the hysteresis (so the threshold voltages are, for example, not -3 V and 3 V but -2 V and 4 V). It would be the same as if you removed R1 and put a voltage source between R2 and GND, right? The circuit above just realizes that? How exactly does this work and how can I calculate the resistors? I don't understand how it works with R1. The voltage R1 is connected to is the same as the positive supply voltage of the op-amp. The output will switch between 5 V and -5 V. By my logic, 2.5 V should drop across R1 and 2.5 V across R2. However, there is also 5 V (or -5 V) that needs to drop across R_hyst and R2. How does that work? What will the final voltages be and how do I calculate them? An explanation would be awesome. Thank you AI: I found this circuit which adds an offset voltage to the hysteresis (so the threshold voltages are, for example, not -3 V and 3 V but -2 V and 4 V). No, they won't be +/- 3 volts, because to get symmetrical hysteresis voltages (when you have a split +/- 5 volt supply) you will need R2 taken down to -5 volts. At the moment it is connected to 0 volts and this won't give symmetrical-about-ground hysteresis voltages. Connect R2 to -5 volts. With a feedback resistor of 576 kohm as shown in the diagram AND R2 taken to -5 volts, the positive hysteresis voltage will be: $$\text{10 V}\cdot \dfrac{R_2}{(R_1\,||\,576\text{ k}) + R_2} - \text{5 V}$$ And, using R1 = R2 = 100 kohm, the positive hysteresis threshold is +400 mV. By the same token, the negative hysteresis threshold will be -400 mV (due to symmetry, now that R2 is connected to the -5 volt supply). The above calculation assumes that the comparator output can swing rail-to-rail. It would be the same as if you removed R1 and put a voltage source between R2 and GND, right? Here's where your words are confusing - if you put a voltage source between R2 and ground, that could mean two things: (1) disconnect R2 from ground and insert a voltage source, or (2) add a voltage source across R2. I think you mean the first one and, if you do, then that is an acceptable way to set the levels of hysteresis.
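To see where the +/-400 mV figures come from, the threshold at the non-inverting input can be computed by superposition of the three sources: the +5 V reference through R1, the -5 V rail through R2, and the comparator output through the 576 k feedback resistor. A small sketch using the values quoted above and assuming an ideal rail-to-rail output swing:

```python
def parallel(a, b):
    return a * b / (a + b)

def threshold(v_out, v_r1, v_r2, r1, r2, r_fb):
    """Voltage at the non-inverting input by superposition of the
    three sources seen through their respective resistors."""
    v = v_r1 * parallel(r2, r_fb) / (r1 + parallel(r2, r_fb))
    v += v_r2 * parallel(r1, r_fb) / (r2 + parallel(r1, r_fb))
    v += v_out * parallel(r1, r2) / (r_fb + parallel(r1, r2))
    return v

R1, R2, RFB = 100e3, 100e3, 576e3

# R2 returned to -5 V (the symmetrical case recommended in the answer)
print(threshold(+5, +5, -5, R1, R2, RFB))   # upper threshold, ~ +0.40 V
print(threshold(-5, +5, -5, R1, R2, RFB))   # lower threshold, ~ -0.40 V

# R2 left on 0 V (the circuit as originally drawn), for comparison
print(threshold(+5, +5, 0, R1, R2, RFB))    # ~ +2.70 V: thresholds are no longer
print(threshold(-5, +5, 0, R1, R2, RFB))    # ~ +1.90 V: symmetrical about ground
```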
H: Switch power on/off when SD card is inserted/removed in slot I have an ESP32 and I'm using it to read/write data from an SD card that is directly connected over SPI. This works fine. Now I want to switch on the ESP32 automatically, by switching on the 3.3 V power supply, when an SD card is connected. When the SD card is disconnected, the power should be switched off. I've tried to accomplish this using a 2N2222 transistor but I'm stuck now. My plan was to place the ESP32 on the collector and GND on the emitter. I wanted the base to be switched by a connection from the GND pin of the SD card (using a 220 ohm resistor between the SD card's GND pin and the power supply's GND to divert part of the current to the base). This leads to the transistor switching just fine, but with this setup the SPI doesn't work anymore. What can I do to make this work? Is there another way to switch power on/off when an SD card is connected? Sorry for the bad schematic, it's my first try. simulate this circuit – Schematic created using CircuitLab AI: Logic circuits rely on good ground connections to function properly, and putting anything in the way is going to lead to problems. Sensing current on the supply pins has a better chance of working, as supply pins are generally more tolerant of a wider range of voltage. But there's a far better way to achieve what you want to do - use an SD socket with a built-in switch to detect card insertion, like this one: Most SD card sockets have a similar facility; it shouldn't be hard to find one.
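If a socket with a card-detect switch is used, the detect contact can also simply be read by firmware rather than hard-switching the 3.3 V rail. A minimal MicroPython sketch for the ESP32, assuming (purely as an example) that the detect switch pulls a hypothetical GPIO 27 to ground when a card is inserted; the pin number and switch polarity depend entirely on the socket chosen:

```python
from machine import Pin, deepsleep
import esp32
import time

# Hypothetical wiring: card-detect switch between GPIO 27 and GND,
# internal pull-up enabled, so the pin reads 0 while a card is inserted.
CARD_DETECT = Pin(27, Pin.IN, Pin.PULL_UP)

def card_present() -> bool:
    return CARD_DETECT.value() == 0

while True:
    if card_present():
        # ... mount the card over SPI and do the read/write work here ...
        time.sleep(1)
    else:
        # No card: approximate "power off" with deep sleep, waking again
        # when a newly inserted card pulls the detect pin low.
        esp32.wake_on_ext0(pin=CARD_DETECT, level=esp32.WAKEUP_ALL_LOW)
        deepsleep()
```

Deep sleep drops the ESP32 to microamp-level current, which for many battery-powered uses is as good as switching the supply off, without touching the SPI ground path at all.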
H: Does foam have the ability to broaden the bandwidth of an antenna? In some designs of aperture-coupled patch antennas, foam is used between the ground plane and the patch substrate. What would be the main logic of using it there? Could the reason for using it be that the feed line substrate and the patch substrate are the same? The side view of the design is as follows:
Patch
Patch substrate (\$\epsilon_r = 3.55\$)
Foam
Ground plane (with aperture)
Feed line substrate (\$\epsilon_r = 3.55\$)
Feed line
AI: The foam is there because many foams can have a very low relative permittivity (essentially \$\epsilon_R = 1\$) as they are mostly filled with air. If you look at the fundamentals of patch antennas, you can see that, in general, a larger distance between the patch and the ground plane results in a higher bandwidth. One way to do this is to keep the patch further away from the ground plane with standoffs and have air in between. The foam has electrical performance similar to that of air but, unlike air, the foam can also be used structurally: it keeps a specific and precise distance between the patch and the feed/ground plane, and it ensures that nothing gets in between the patch and the ground plane to interfere with the performance.
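To get a feel for how the low-permittivity region changes the design, the usual first-order resonance condition for a rectangular patch, \$L \approx c/(2 f_0 \sqrt{\epsilon_{eff}})\$, already shows one effect: with foam under the patch the effective permittivity drops towards 1, the patch becomes physically larger, and the fields occupy a bigger, mostly air-filled volume, which goes hand in hand with the wider bandwidth. A minimal sketch, assuming a hypothetical 5.8 GHz design and (crudely) taking \$\epsilon_{eff}\$ equal to the permittivity of the material under the patch, with fringing corrections ignored:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def patch_length_mm(f0_hz: float, eps_eff: float) -> float:
    """Half-wavelength resonant length of a rectangular patch, fringing ignored."""
    return C / (2 * f0_hz * math.sqrt(eps_eff)) * 1e3

f0 = 5.8e9  # hypothetical design frequency
for label, eps in [("solid substrate, eps_r = 3.55", 3.55),
                   ("foam spacer,     eps_r ~ 1.07", 1.07)]:
    print(f"{label}: patch length ~ {patch_length_mm(f0, eps):.1f} mm")
```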
H: Can this driver be simplified? I have this driver. It has two MOSFETs and two complementary signals to drive them: The load (a bunch of LEDs) sees 5 V or 0 V and, on average, it takes a maximum of 100 mA. However, the top MOSFET needs a drive signal that is not referenced to ground, which makes things a bit complicated. I am looking for a similar driver that:
- is simple (it does not have many components).
- does not need floating signals (and ideally only one control signal).
- does not have any big resistors in the 100 mA path, to avoid losses (in voltage or in power).
- is an all-or-nothing driver (5 V or 0 V), so the LEDs light up properly when they are ON.
I am open to any suggestions (with BJTs, with extra components to create the floating drive signal from the pins of the microcontroller, etc.). Thank you for your help! AI: This wasn't mentioned in the original question:
- \$\color{red}{\text{That the LEDs couldn't be electrically disconnected from ground}}\$
Speed of operation wasn't mentioned either, and neither was there any mention of the input logic circuit being inverted, so try this as a simplification. You will still need to ensure that the LEDs can handle 5 volts with no chance of an over-current situation. I am looking for a similar driver that:
- is simple (it does not have many components). \$\color{red}{\text{YES}}\$
- does not need floating signals (and ideally only one control signal). \$\color{red}{\text{YES}}\$
- does not have any big resistors in the 100 mA path, to avoid losses (in voltage or in power). \$\color{red}{\text{You need to be more forthcoming about what you currently do.}}\$
- is an all-or-nothing driver (5 V or 0 V), so the LEDs light up properly when they are ON. \$\color{red}{\text{YES}}\$
PMOS load switch: Image from Using MOSFETs in Load Switch Applications.
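Since the referenced load-switch image is not reproduced here, the sketch below only illustrates the kind of sizing arithmetic a generic high-side PMOS load switch involves: a small NPN pulls the PMOS gate low, a pull-up holds the gate at the rail when the NPN is off, and the only element in the 100 mA path is the PMOS itself. All part choices and values are hypothetical:

```python
# Illustrative sizing for a generic high-side PMOS load switch
# (NPN collector pulls the PMOS gate low; a pull-up keeps the gate
#  at the rail, i.e. the PMOS off, when the NPN is off).
V_RAIL = 5.0        # LED supply, V
I_LOAD = 0.100      # worst-case LED current, A
V_BE = 0.7          # NPN base-emitter drop, V
V_LOGIC = 5.0       # microcontroller logic-high level, V

R_PULLUP = 47e3     # gate pull-up: only sets off-state behaviour and turn-off speed
R_BASE = 10e3       # base resistor from the GPIO pin

i_collector = V_RAIL / R_PULLUP          # current the NPN must sink when on
i_base = (V_LOGIC - V_BE) / R_BASE       # base drive actually provided
i_base_needed = i_collector / 10         # rule of thumb: forced beta of about 10

print(f"collector current : {i_collector * 1e6:.0f} uA")
print(f"base current      : {i_base * 1e6:.0f} uA (need ~{i_base_needed * 1e6:.1f} uA)")

# Why the 100 mA path needs no big series resistor: the only series
# element is the PMOS itself, so the drop is just I * Rds(on).
R_DS_ON = 0.1                            # hypothetical on-resistance, ohms
print(f"PMOS drop at {I_LOAD * 1e3:.0f} mA : {I_LOAD * R_DS_ON * 1e3:.0f} mV")
```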