H: How can cache be that fast? Here is a screenshot of a cache benchmark: In the benchmark the L1 cache read speed is about 186 GB/s, with the latency being about 3-4 clock cycles. How is such a speed even achieved? Consider the memory here: the theoretical maximum speed is 665 MHz (memory frequency) x 2 (double data rate) x 64 bit (bus width), which is about 10.6 GB/s, close to the benchmark value of 9.6 GB/s. But with the L1 cache, even if we could read on every cycle with the processor at its maximum frequency (3 GHz), we would need about 496 data lines to achieve such a throughput, which sounds unrealistic. This applies to other caches as well. What am I missing? How do we calculate the throughput of a cache from its parameters? AI: This CPU has...
2 cores
A 32-KB instruction and 32-KB data first-level cache (L1) for each core
Since there are two cores, we can expect the benchmark to run two threads in parallel. Their website gives remarkably little information, but if we look here, CPUs with more cores seem to give correspondingly higher L1 throughputs. So I think what is displayed is the total throughput with all cores working in parallel. So, for your CPU, we should divide by two for one core and one cache:
Read 93 GB/s
Write 47 GB/s
Copy 90 GB/s
Now, the fact "copy" is 2x faster than "write" is highly suspicious. How could it copy faster than it can write? I'm going to bet that what the benchmark displays as "copy" is the sum of read+write throughput, and in this case it would both read and write at 45 GB/s, but display 90, because it's a benchmark, and who the hell trusts benchmarks? So let's ignore "copy".
Read 93 GB/s => 30 bytes/clock
Write 47 GB/s => 15 bytes/clock
Now, one 128-bit register is 16 bytes, close enough, so it sounds like this cache can do two 128-bit reads and one write per clock. This is exactly what you'd want to really streamline those SSE number-crunching instructions: two reads and one write per cycle. This would most likely be implemented with lots of parallel data lines, which is the usual way to haul around lots of data very fast inside a chip.
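As a quick sanity check of the per-clock figures above (assuming the 3 GHz core clock quoted in the question):
$$\frac{93\ \text{GB/s}}{3\times 10^{9}\ \text{clocks/s}}\approx 31\ \text{bytes/clock}\approx 2\times 16\ \text{bytes},\qquad \frac{47\ \text{GB/s}}{3\times 10^{9}\ \text{clocks/s}}\approx 16\ \text{bytes/clock}$$
which is consistent with two 128-bit reads plus one 128-bit write per cycle.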
H: why does current flow from (+) psu A to (-) psu B? I am a self-taught hobbyist, and I have a question about an experiment that I did. I got two ATX power supplies, a and b, and hooked them up as follows: simulate this circuit – Schematic created using CircuitLab I was not expecting current to flow, but to my surprise, the LED lit up. This is my first question: Why does current flow from PSU a to PSU b? Is this because the PSUs are connected to the same power strip, and are not actually insulated? So after seeing the results of that, I had another idea. I wondered if the PSUs could sink current from a 9v battery. So I connected (-) of the 9v battery, and 12v from the PSU according to the following schematic: simulate this circuit Again, the LED lit up, but just barely. I could only just see a tiny bit of light from the led. Why does this happen? AI: Why does current flow from PSU a to PSU b? If they're ATX supplies then the output GND is connected to protection earth, so both GNDs are connected together to the power strip's earth (or the house wiring). So, no surprises here. So after seeing the results of that, I had another idea. I wondered if the PSUs could sink current from a 9v battery. So I connected (-) of the 9v battery, and 12v from the PSU according to the following schematic This sounds like your switching PSU is leaking a bit of high frequency common mode noise. This is normal and expected for a switching PSU, although the actual current should be very low to meet with EMC/EMI regulations. To have current flow, you need a closed circuit. In your case, what closes the circuit would be the parasitic capacitance between the 9V battery and the ATX power supply metal case. If I'm right, the LED illumination will vary if you bring the 9V battery closer to the ATX supply. It would also work if you replace the 9V battery with any metallic object (like a kitchen spoon, whatever) since what matters isn't the fact it is a battery, but rather a conductive object large enough to form a capacitor with the PSU case. Try it ;) I find this question interesting, because the first part is really basic, while the second is rather high-level ;)
H: Can anyone identify this resistive component? I got these components: Resistance about 9 to 10 Ohms. I am unable to identify these components. edit: Thank you so much for answering this question. Yes, it is a 0.15A PTC fuse. ITrip is about 0.5A and it is very slow. AI: I believe they are Tyco/Raychem TRF600-150, high-voltage resettable fuses, rated for 250VDC nominal (600VAC interrupt rating) and 150mA nominal. I found a Tyco/Raychem datasheet that seems to match the general part package and specs. Excerpt from page 10 (note the minimum resistance of 6Ω in the datasheet, which is consistent with your measurement): Raychem was acquired by Tyco in 1999, so these may now be available under that brand name. If you search Ebay for "Raychem 600 150", you'll find various listings for parts that look exactly like yours. One example, from this listing, claiming to sell "RAYCHEM PTC Resettable Fuse Radial Leads .16A/.32A 250V 3A NEW 5/PKG": The bottom row on these says XS1G rather than your WN1U. This is probably a production series code, which differs from batch to batch. Here is another listing, claiming to sell "( 25 PC. ) RAYCHEM/TYCO PTC TR600-150-2 RESETTABLE POLYSWITCH CIRCUIT PROTECTOR":
H: Calculating correct resistor for an optocoupler I need to convert 12V inputs to a 3.3V electric imp005. I have a bunch of TLP521 optocouplers I bought to repair something (I've never used one in a circuit before). From the data sheet it looks like the recommended forward current for the LED is 16 mA at 1.15 V, so I think I need a 680 Ω resistor to power it from 12V. As for the detector resistor, is this just a pull-up to stop the output floating? Does this seem right? Thanks guys and sorry for the noob question. Dean. AI: The 680 Ω is fine. The 10 kΩ resistor on the detector may or may not be fine, depending on the speed you want to achieve. Larger resistor values will yield a smaller bandwidth. Smaller resistor values will yield higher speed, but they might increase the low-level voltage when the detector is on. See the figure from the datasheet:
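For reference, the LED resistor value follows directly from the numbers quoted in the question (a quick check, assuming the LED drops about 1.15 V at the recommended 16 mA):
$$R \approx \frac{12\ \text{V} - 1.15\ \text{V}}{16\ \text{mA}} \approx 678\ \Omega \approx 680\ \Omega\ \text{(nearest standard value)}$$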
H: How to calculate the current of an MCU and CAN bus circuit? I have a circuit which uses a STM32F205RGY MCU, two SN65HVD230 CAN bus transceivers, one 20mA LED with a 100Ω resistor and a USB port. The MCU will run from a 24MHz crystal. Each CAN bus transceiver will run at 1Mbps. The USB is standard speed. The circuit is 3v3 and the working input voltage is 8-30v. I'm struggling to work out what current my regulator needs to supply. Due to size constraints I will probably use an LDO regulator. I have a similar circuit built, but with 4 LEDs and a STM32F105, and that uses 0.04A-0.1A with no data on the CAN bus as I have no CAN analyser at my current location; this seems awfully low though. What current can I expect my circuit to use? Will a 300mA LDO be ok? AI: Power budget:
SN65HVD230 datasheet table 8.3 - maximum current value 48mA (times two)
STM32F205RGY datasheet table 12 - maximum total current 120mA. This value probably assumes that absolutely everything is running at the same time. Your real consumption will be lower than that.
LED - 20mA
Total: 2 × 48 mA + 120 mA + 20 mA = 236 mA. A 300mA LDO should be fine.
H: Will this circuit work? 100 LEDs project I am new to electronics. I have been given a 100 LEDs project by my teacher at college. Can anyone here validate this circuit? Thanks in advance. AI: No, that's a really bad circuit. You have 100 LEDs, all in parallel. Bad idea. Since this is an assignment, I'm not going to give you a better circuit outright. However, consider that LEDs want to be driven with a fixed current, not a fixed voltage. The change in current due to a change in voltage is very large when the LED is lit. Conversely, the voltage changes little as a function of changing current. Now also consider that every LED will end up at a slightly different voltage when run at the same current. If you put two such LEDs in parallel, then they both get the same voltage by definition of parallel. The small difference in forward voltage of the two LEDs leads to a much larger difference in current between the two LEDs. Even worse, the LED that gets the higher current dissipates more power. That makes it get hotter, which decreases its forward voltage, which makes it take a larger share of the current, which makes it get even hotter, etc. Think of a way to run LEDs, or groups of LEDs, at a fixed or reasonably controlled current.
H: Propagation delay in case of synchronous counters Consider the 4-bit counter shown below: Suppose the propagation delay of each flip flop is 2ns and the propagation delay of each AND gate is 3ns; what will be the total propagation delay? As all flip flops get the same clock, we add the propagation delay of a flip flop only once, but my doubt is whether to add the propagation delay of the AND gates once or twice. So will the propagation delay be 2+3 = 5ns or 2+3+3 = 8ns? AI: The propagation delay will be 2ns because that is the time between the changing input (clock) and the outputs (Q0-Q3). The propagation delay of the AND gates is not relevant because their outputs should be stable when the flip flops are clocked. However, they may limit the maximum frequency that the counter can be clocked at, because their outputs need to be correct when the clock occurs. The JK input to the fourth flip flop (FF3) is determined by the states of Q0-Q2. These outputs are stable 2ns after the clock, but then pass through up to two AND gates. If the next clock occurs too early then FF3's JK input will not have had enough time to stabilize and the counter will malfunction. The total delay between the clock input and FF3's JK input is up to 2+3+3 = 8ns. Therefore the minimum acceptable time between clocks is 8ns + the JK setup time. So long as this timing is met the counter should work correctly, and the propagation delay will be 2ns because the Q outputs only change in response to clock inputs.
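Putting the same reasoning into one expression (using the delays given in the question), the minimum clock period and maximum clock frequency are:
$$T_{min} = t_{clk \to Q} + 2\,t_{AND} + t_{setup} = 2\ \text{ns} + 3\ \text{ns} + 3\ \text{ns} + t_{setup} = 8\ \text{ns} + t_{setup},\qquad f_{max}=\frac{1}{T_{min}}$$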
H: Playing around with I2C does not yield expected results for SSD1306-OLED I'm trying to use an OLED screen with I2C, but nothing happens (literally), so I come here to see if anyone can shed some light on what I've done wrong. My first approach was to write a software I2C driver, after I read up about the protocol from an appnote by Texas Instruments, but somehow my signals from SDA and SCL occurred completely separately (the clock pulse came after the data). After 4 h of bug searching I found out that 2 of the 5 GPIO pins on my PIC12F683 were damaged. At that point I was fed up, so I grabbed a PIC16F886 with hardware I2C instead, because my main point was to use the OLED screen rather than write a protocol handler. However, not even that works. In the main loop I start/stop an LED just to see if the loop is running, and it is indeed. This means, I think, that the I2C itself is working (since I have a while-loop waiting for the ACK, and it comes), but the screen somehow stays off. The screen is brand new and unused, so it should work, but it is of course possible it's broken. The main suspect here is my code though; it's most likely me who's forgotten something. So, here it is, written for the PIC16F886 using the XC8 compiler:

#include <xc.h>

// CONFIG1
#pragma config FOSC = INTRC_NOCLKOUT // Oscillator Selection bits (INTOSCIO oscillator: I/O function on RA6/OSC2/CLKOUT pin, I/O function on RA7/OSC1/CLKIN)
#pragma config WDTE = OFF       // Watchdog Timer Enable bit (WDT disabled and can be enabled by SWDTEN bit of the WDTCON register)
#pragma config PWRTE = OFF      // Power-up Timer Enable bit (PWRT disabled)
#pragma config MCLRE = ON       // RE3/MCLR pin function select bit (RE3/MCLR pin function is MCLR)
#pragma config CP = OFF         // Code Protection bit (Program memory code protection is disabled)
#pragma config CPD = OFF        // Data Code Protection bit (Data memory code protection is disabled)
#pragma config BOREN = OFF      // Brown Out Reset Selection bits (BOR disabled)
#pragma config IESO = OFF       // Internal External Switchover bit (Internal/External Switchover mode is disabled)
#pragma config FCMEN = OFF      // Fail-Safe Clock Monitor Enabled bit (Fail-Safe Clock Monitor is disabled)
#pragma config LVP = OFF        // Low Voltage Programming Enable bit (RB3 pin has digital I/O, HV on MCLR must be used for programming)

// CONFIG2
#pragma config BOR4V = BOR40V   // Brown-out Reset Selection bit (Brown-out Reset set to 4.0V)
#pragma config WRT = OFF        // Flash Program Memory Self Write Enable bits (Write protection off)

#define OLED_CONTRAST              0x81
#define OLED_DISPLAY_ALL_ON_RESUME 0xa5
#define OLED_NORMAL_DISPLAY        0xa6
#define OLED_DISPLAY_ON            0xaf
#define OLED_DISPLAY_OFF           0xAE
#define OLED_ADDRESS               0b01111000
#define OLED_WRITE                 0b01111000
#define OLED_READ                  0b01111001
#define LED RC7

void delay()
{
    volatile unsigned long korv;
    for (korv = 0; korv < 2*65535; korv++) {
        NOP();
    }
}

void init_mcu()
{
    OSCCON = 0b01110001;        // 8MHz intosc
    ANSEL = 0;                  // Digital I/O
    ANSELH = 0;
    CM1CON0 = 0;                // Disable comparators
    CM2CON0 = 0;
    TRISA = 0;
    TRISB = 0;
    TRISC = 0b00011000;         // SDA/SCL - inputs
    SSPCONbits.SSPEN = 1;       // Enable serial module
    SSPCONbits.SSPM = 0b1000;   // I2C-master
    SSPCON2bits.RCEN = 0;       // Disable receive mode
    SSPADD = 4;                 // 8MHz Fosc -> 400kbps I2C
    SSPSTATbits.SMP = 0;        // Enable slew rate control for 400kHz
}

void i2c_wait()
{
    while (!SSPIF) {}
    SSPIF = 0;
}

void i2c_start()
{
    SSPCON2bits.SEN = 1;
    i2c_wait();
}

void i2c_stop()
{
    SSPCON2bits.PEN = 1;
    i2c_wait();
}

void i2c_write_byte(unsigned char b)
{
    SSPBUF = b;
    i2c_wait();
}
void oled_command(unsigned char c)
{
    i2c_start();
    i2c_write_byte(OLED_WRITE);
    i2c_write_byte(0x00);   // D/C
    i2c_write_byte(c);
    i2c_stop();
}

void main()
{
    init_mcu();
    while (1) {
        oled_command(OLED_DISPLAY_ON);
        LED = 1;
        delay();
        oled_command(OLED_DISPLAY_OFF);
        LED = 0;
        delay();
    }
}

Are there any obvious errors in my I2C usage? UPDATE Schematics, photo of oscilloscope and breadboard as follows. Scoping SDA/SCL yields a more or less constant 5V, no pulses at all. Surmising the chip was broken, I replaced it with a brand new, factory-fresh one. Same error. I'm at a complete loss here. AI: You seem to be using an SSD1306 OLED (worth mentioning!). To bring such a display to life, you must initialize it. The sequence I use is

static constexpr const uint8_t init_sequence[] = {
    CMD_MODE, DISPLAYOFF,
    CMD_MODE, SETDISPLAYCLOCKDIV, 0x80,
    CMD_MODE, SETMULTIPLEX, 0x3F,
    CMD_MODE, SETDISPLAYOFFSET, 0x00,
    CMD_MODE, SETSTARTLINE | 0x00,
    CMD_MODE, CHARGEPUMP, 0x14,
    CMD_MODE, MEMORYMODE, 0x00,
    CMD_MODE, SEGREMAP | 0x01,
    CMD_MODE, COMSCANDEC,
    CMD_MODE, SETCOMPINS, 0x12,
    CMD_MODE, SETCONTRAST, 0xCF,
    CMD_MODE, SETPRECHARGE, 0xF1,
    CMD_MODE, SETVCOMDETECT, 0x40,
    CMD_MODE, DISPLAYALLON_RESUME,
    CMD_MODE, NORMALDISPLAY,
    CMD_MODE, DISPLAYON
};

After that you can start writing to the pixel RAM. I am not sure the on/off switching you use will show anything at all (even with correct initialization) when the pixel RAM happens to be all zeros. But as next-hack hinted in his comment, there might be lots of problems. Start with a simple I2C chip, like a PCF8574A. Once you get that to work, copy the code from a known-working driver for your display and get it to work in your context. My driver can be found at https://github.com/wovo/hwlib/blob/master/library/hwlib-glcd-oled.hpp It is in C++, but you should be able to adapt it to C. (Or better: switch to a modern chip that supports C++! Cortex modules or the Arduino Due are very cheap from Aliexpress. Or take an LPC1114 if you want a DIP chip.)
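To make the above concrete in the question's own C environment: below is an illustrative sketch of how that initialization could be sent through the question's existing oled_command() helper. The raw byte values are my reading of the SSD1306 command set (they correspond to the symbolic names in the sequence above), so treat them as assumptions to verify against the module's datasheet rather than as the answer's exact code.

// Illustrative SSD1306 init table for the question's PIC/XC8 code.
// Byte values are assumed from the SSD1306 command set; verify for your module.
const unsigned char oled_init[] = {
    0xAE,        // display off
    0xD5, 0x80,  // set display clock divide ratio
    0xA8, 0x3F,  // set multiplex ratio (64 rows)
    0xD3, 0x00,  // set display offset
    0x40,        // set start line to 0
    0x8D, 0x14,  // charge pump on (internal supply)
    0x20, 0x00,  // memory addressing mode: horizontal
    0xA1,        // segment remap
    0xC8,        // COM output scan direction: remapped
    0xDA, 0x12,  // COM pins hardware configuration
    0x81, 0xCF,  // contrast
    0xD9, 0xF1,  // pre-charge period
    0xDB, 0x40,  // VCOMH deselect level
    0xA4,        // resume display from RAM contents
    0xA6,        // normal (non-inverted) display
    0xAF         // display on
};

void oled_init_sequence(void)
{
    unsigned char i;
    // Each byte (commands and their parameters alike) is sent as a command,
    // i.e. preceded by the 0x00 control byte inside oled_command().
    for (i = 0; i < sizeof(oled_init); i++) {
        oled_command(oled_init[i]);
    }
}

Calling oled_init_sequence() once before the on/off loop, and then writing something non-zero to the pixel RAM, should make it visible whether the panel responds at all.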
H: DC motor constant acceleration I am looking at sizing a motor for lifting a mass a set distance in a set time and have found multiple design notes that have allowed me to complete this task. For example https://www.maxonmotor.com/medias/sys_master/root/8815460712478/DC-EC-Key-Information-14-EN-42-50.pdf?attachment=true or http://www.machinedesign.com/motorsdrives/how-pick-motors-linear-motion However, my main qualm lies in understanding how to dictate and obtain a trapezoidal profile. Taking the second source as an example, for acceleration/deceleration the torque required is 0.3582 Nm and 0.3309 Nm for constant velocity. 1) I understand the concept of the torque constant, but how do you control the current supply to obtain these constant torques for acceleration and constant velocity? Could it be implemented by measuring the current to the motor, turning off the voltage supply when it hits this limit, and turning it back on when it falls below, until it reaches the target speed and then turning off this limiter? AI: To control acceleration you need to measure one of two things: current or velocity. Acceleration is the effect of torque being applied to some inertia, and torque is proportional to current. Thus if you measure current and control current you will control acceleration. Acceleration is the rate of change of velocity. Thus if you measure the velocity you can deduce acceleration. To implement this you will need a controller to generate an error term (be it current or velocity) and, via a PI controller, to generate a voltage demand and then the PWM needed to synthesize that voltage; a sketch follows below. I would suggest controlling based upon velocity, simply because controlling acceleration via torque (i.e. current) is extremely dependent on the load profile. Equally, if a DC motor is being used you can take advantage of the fact that velocity is proportional to the applied voltage and, while this "constant" isn't really constant, its variation with stator current may be low enough that you could operate open-loop with just a voltage profile.
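Here is a minimal sketch of the velocity-based PI loop described above, in C. The gain values, bus voltage and function names are assumptions chosen purely for illustration (nothing here comes from the answer's sources), and real code would also need anti-windup and a proper encoder/velocity measurement:

/* Minimal illustrative velocity PI loop. Gains, V_BUS and names are assumed. */
#define V_BUS 24.0f                /* assumed DC bus voltage */

static float kp = 0.5f;            /* proportional gain - must be tuned */
static float ki = 0.1f;            /* integral gain - must be tuned */
static float integral = 0.0f;      /* integrator state */

/* Called at a fixed rate; returns an H-bridge duty cycle in the range -1..+1. */
float pi_velocity_step(float vel_setpoint, float vel_measured, float dt)
{
    float error = vel_setpoint - vel_measured;    /* velocity error */
    integral += error * dt;                       /* accumulate the I term */

    float v_demand = kp * error + ki * integral;  /* voltage demand */

    /* clamp to the available bus voltage */
    if (v_demand >  V_BUS) v_demand =  V_BUS;
    if (v_demand < -V_BUS) v_demand = -V_BUS;

    return v_demand / V_BUS;                      /* duty cycle for the PWM */
}

Feeding vel_setpoint with a ramp-hold-ramp sequence (accelerate, cruise, decelerate) is what produces the trapezoidal profile; the PI loop then supplies whatever torque the load requires to track it.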
H: Why sometimes I have voice but no data on a GSM network? From what I understand, both voice and data use the same antenna (at least on 3G). Sometimes in a remote area I have plenty of signal but no data service. Is it possible some cell towers don't offer data (voice only)? AI: GSM by itself does not support data, just voice (even SMS is put on top of GSM). Data service, beyond the 9600 baud CSD (which doesn't seem to be supported by most phones/mobile providers anymore), requires the availability of GPRS or EDGE. Although both systems are very old, depending on where you are, there are still base stations around that only support GSM without any data service.
H: SMD Elements on the back of THT I'm designing a PCB right now and found out I can save a lot of space by using the back of THT elements: In the picture there's IC1 (ATMega) and an LCD. I design only 1-sided PCBs for now. IC1 and the LCD are placed "flipped", so their pins are on this side, and the big things (screen, IC1 main body) are on the back (easier to solder). My PCB has limited size, and because of that I thought of using the area between the pins of IC1 and the whole empty area on the back of the LCD. It's a lot of space to use and, as you can see with the C2 example, easier and shorter. I know that for ADC elements a problem may occur, because of the shared GND grid or something like that, but that's not the issue in this situation. Is it legitimate to design like that? Are there any problems that might occur? Any opinion appreciated. AI: Yes, you can do this. In general, there is no problem with putting components on both sides of the board, in the same place. If you had a two (or more) layer board, you'd probably want to put a ground or power plane between them, but you can't do that with a single layer board. So you should watch out for a fast digital chip interfering with an analog circuit. It would be inadvisable, for example, to put a microcontroller on one side and a high input impedance buffer on the other. But putting decoupling caps opposite the chip is probably a good plan. If you want to meet CE requirements for noise immunity and emissions, you will have difficulty with this type of design, and a two layer board would help a lot. For hobby use, what you have looks fine.
H: Charge distribution in an open circuit This is a pretty simple question about electric charge flow, but it's something that I just need a clear answer on. So if net charge on a conductor will spread out over the surface (because like charges repel and want to get as far away from each other as they can). But what about for an open circuit without a net charge? Obviously the electricity wouldn't be able to complete a circuit from the positive to negative. Would the positive and negative charges from the power source spread through the wires, or remain at the power source? AI: Power sources don't create charge, they create voltage. Nearly equal and opposite charges already exist in every wire. The effect of the applied voltage is to push a very small amount of net positive charge on the positive side and negative charge on the negative side. It does this by forcing a very few electrons from the positive side to the negative (these may or may not be "the same electrons" depending on the type of source — it doesn't matter). This flow stops as soon as the force of the unbalanced charges wanting to spread out equals the force exerted by the source. Within one side of the open circuit, the electrons will distribute themselves according to the electrostatic principles you have already learned. If you were to disconnect the two wires from the source, then they would have a net positive and net negative charge. It's just so small (assuming the source is not supplying kilovolts) that there are typically no detectable effects. We usually don't talk about this excess charge in wires, because it is so very small — as soon as a circuit is closed and current flows, far more electrons than that slight excess/shortage flow. But this is exactly the same thing as "parasitic" capacitance of wires, and so it can have a significant effect on the behavior of circuits — it's just not thought of in explicitly electrostatic terms. If you make the two wires closer together and bigger — make them a better capacitor — then there will be a greater net charge on each wire/plate.
H: Buffering a Digital Microcontroller Signal for Connecting to an Optocoupler I frequently work on projects in which I use optocouplers for isolating digital +5VDC control signals (for example, from a microcontroller) from the rest of the circuit. However, since these work by illuminating an LED inside the device, there can be several tens of milliamps of load on the microcontroller pins. I am looking for advice on what would be the best practice for buffering this control signal with an additional stage, so that the microcontroller effectively sees a high impedance, thereby reducing the current that it needs to provide. Just naively off the top of my head, I can think of a few things which might work:
Simply use an op amp as a unity gain buffer amplifier.
Use a dedicated comparator chip to compare the input signal with, for example, +2.5VDC.
Use a MOSFET as a kind of signal amplifier.
However, upon doing some reading, I have come across a whole bunch of chips that I have never used before, but which sound like they may be designed for this kind of thing. For example:
A Differential Line Driver (MC3487)
A Differential Line Receiver (DS90C032)
A Line Transceiver (SN65MLVD040)
Buffer gates and drivers (SN74LS07, SN74ABT126)
I really have no experience with any of these and am a little overwhelmed by the amount of stuff available! So can anyone help me learn the differences between these devices, and which of them would or wouldn't be suitable in this case? Is there a best / standard way of achieving what I describe? edit: Since I could be switching up to around 30 outputs, I do not want to be concerned at all about loading the microcontrollers, and so will not be considering connecting directly to the DIO pins. Therefore, I think I will go for a logic buffer IC. I am going to try using the SN74LVC1G125 "Single Bus Buffer Gate With 3-State Output" for each input, and see how that works out. AI: You have many options. If you need to connect very few optocouplers, you can connect them directly to the GPIO of your microcontroller (through a resistor), provided that:
You do not exceed the GPIO output current.
You do not exceed the total port current.
You do not exceed the total gnd/vdd current.
If you need to connect more optocouplers, you can try to use low-current, high current transfer ratio optocouplers such as the SFH618 (https://www.vishay.com/docs/83673/sfh618a.pdf), and connect them directly to your GPIOs (through a resistor). Or, you can use a BJT or MOSFET (see schematics below). Some notes: Remember to include the pull-down/pull-up resistor, which ensures that the MOSFET/BJT is OFF when the GPIO is not initialized yet (e.g. during reset). The pull-up or pull-down resistor might be omitted if your MCU has GPIO pins with the pull-up/down always enabled during reset. If using MOSFETs, remember to use logic level MOSFETs (e.g. BSS138). If you use the active-low solution, make sure that the high level voltage of the GPIO is VDD, i.e. do not use a 3.3V GPIO and VDD = 5V in the active-low solution! Still, if you need to drive many optocouplers (e.g. 6) you can use the 74LS07 you mentioned, as it allows 40mA per pin, and you'll have to mount only one component (instead of 6 BJTs/MOSFETs). Remember that, unlike CMOS, TTL ICs are intrinsically pulled up! However, you might still want the pull-up resistor (the datasheet also recommends not to leave inputs floating). And, since the '07 is not inverting, this solution will be active LOW. The 74ABT126 is CMOS, so you MUST use the pull-up resistor anyway!
simulate this circuit – Schematic created using CircuitLab
H: Relationship between base-emitter voltage, base current and collector current I'm reading Sedra's Microelectronics book and I can't understand the following: We can get a relationship between \$I_b\$ and \$V_{BE}\$ (DC): $$I_B=\frac{V_{BB}}{R_B}-\frac{1}{R_B}V_{BE}$$ This tell us that \$I_B\$ and \$V_{BE}\$ are inversely proportional, but how can this be if: $$I_C=I_S\exp(V_{BE}/V_T)$$ and $$I_C=\beta I_B$$ I simulated a circuit with this exact configuration and it shows that \$I_B\propto V_{BE}\$ which is what I expected. Snipping of the oscilloscope's graph. Levels and magnitudes are adjusted just to fit the image, but clearly they are proportional. What am I missing? Thanks... AI: I guess the first thing I want to do is to point out that there are several entirely equivalent level-1 DC Ebers-Moll models for the BJT. If you want to skim through them, see my answer to "Why is Vbc absent from bjt equations?", where I list them in some detail. Engineers have generally settled on the non-linear hybrid-\$\pi\$ version, where a linearized version of it is also quite convenient for small-signal analysis. If you ignore the portions related to the (usually) reverse-biased \$V_{BC}\$ junction, then the collector current equation can be simplified. Usually, another parameter, \$\beta_F\$, is simply applied (by dividing) to get the base current. In simplified terms then, the base and collector currents are both determined by \$V_{BE}\$, with \$\beta_F\$ divided into the computed collector current to get the base current: $$\begin{align*} I_C&\approx I_S\left(e^{\frac{V_{BE}}{n\cdot V_T}}-1\right)\\\\ I_B&\approx \frac{I_C}{\beta_F} \end{align*}$$ (\$n\approx 1\$ and \$V_T\approx 26\:\textrm{mV}\$, in many cases.) For some purposes, it doesn't need to get much more complicated than that. For your circuit, setting signal \$v_i=0\$ for now, you have to subtract \$V_{BE}\$ from \$V_{BB}\$ in order to get the voltage across the resistor \$R_B\$. Knowing the voltage across the resistor, you can compute the resistor's current (and therefore the BJT's base current) as \$I_{R_B}=\frac{V_{BB}-V_{BE}}{R_B}\$. That's the same expression you provide in your question. However, you describe this using an English term "inversely proportional" and then later use the symbol \$\propto\$ when talking about your observations. This is usually taken to mean the multiplicative inverse and not the additive inverse, within the context you provided. I'm pretty sure that you instead meant the additive inverse, though. But this only means that if the magnitude of \$V_{BE}\$ increases, that it leaves a smaller remaining voltage across \$R_B\$, so the base current declines. But the thing that increases \$V_{BE}\$ (other than lower BJT temperatures) is higher base and collector currents. So, if \$V_{BE}\$ slightly increased for a moment for some reason, then the base current would decline slightly (because of the smaller remaining voltage across \$R_B\$) and this would then lower \$V_{BE}\$ back to where it was. (In that sense, \$R_B\$ provides a stabilizing negative feedback.) Technically, though, the complete equation (assuming active region of operation for the BJT) is: $$V_{BB}-\frac{I_C}{\beta_F}\cdot R_B - n\cdot V_T\cdot\operatorname{ln}\left(\frac{I_C}{I_S}+1\right)=0\label{eq1}\tag{Eq 1}$$ Which you need to solve for \$I_C\$, and therefore also \$I_B=\frac{I_C}{\beta_f}\$, and therefore also \$V_{BE}=n\cdot V_T\cdot\operatorname{ln}\left(\frac{I_C}{I_S}+1\right)\$. 
If you want to try and mathematically solve the above equation, you can do so using the LambertW function. I show how to solve a similar equation as an answer to "Differential and Multistage Amplifiers(BJT)". It's actually quite straight-forward, once you get used to the relatively simple algebraic manipulation required. However, most folks don't bother. Instead, they just realize that a \$60\:\textrm{mV}\$ increase in \$V_{BE}\$ will mean a ten-fold increase in the collector current (or base current) and deduce that \$V_{BE}\$ won't change much and can be treated approximately as a constant once it is estimated. So the above equation instead becomes: $$\begin{align*}V_{BB}-\frac{I_C}{\beta_F}\cdot R_B - V_{BE}&=0\label{eq2}\tag{Eq 2}\\\\ \therefore I_C&=\beta_F\cdot\frac{V_{BB}-V_{BE}}{R_B}\\\\ I_B&=\frac{V_{BB}-V_{BE}}{R_B} \end{align*}$$ But please feel free to work out the following equation from \$\ref{eq1}\$ above: $$I_B=\frac{n V_T}{R_B}\operatorname{LambertW}\left(\frac{I_S R_B}{n V_T \beta_F}\:e^\frac{V_{BB} \beta_F + I_S R_B}{n V_T \beta_F}\right)-\frac{I_S }{\beta_F}\label{eq3}\tag{Eq 3}$$ It's just modest algebra. (The last term is because of the y-intercept and is likely too small to bother with and can probably be discarded.) Yes, if \$V_{BE}\$ increases in your circuit, then the base current will decline. So if you lower the temperature of the BJT, then the base current will decline. But usually the temperature of the BJT rises with use and so the base current will probably increase, causing the collector to pull harder on the collector load. In general you use an estimate for \$V_{BE}\$ during design and just stick with it. And then you take into account that each individual BJT will have slightly different values for the same currents; that your circuit will be used in various weather/climate conditions; and that the circuit will have to deal with varying loads (often); and so you verify that your circuit still operates well regardless of these variations. Which makes \$\ref{eq3}\$ superfluous, in practice, and explains why you don't see its usage pushed much. It's over-kill (unless you are taking a mathematics course.) So your book uses a rather normal-looking equation. As it should.
H: Capacitance on the output of an op amp I have read a lot about capacitive loads on the output of an op-amp and the possibility of unwanted oscillations and instability. My knowledge of phase margin and frequency domain analysis is a little weak and I can't figure out if my circuit below is going to be unstable or not: The op-amp supply is connected to 5vdc and GND, and here is its datasheet: I read in some application note that placing R19 would help the stability issue by increasing the phase margin or something like that, and fortunately I already had that in my circuit. But as you can see, the C10 capacitor also has a very high capacitance value. Is there an analysis or rule of thumb which would determine if this circuit is safe? Should I change my op-amp IC? Or should I use other methods as well to ensure my circuit's safe operation? I can provide additional information if you need it. Thanks. AI: Most OAs have limited capacitive load capabilities. The origin is quite intuitive. Consider the simple circuit below. On the left there is a real OA. On the right, the OA has the same characteristics (poles, etc.) but the output resistance is shown externally. This will have the effect of creating a pole, with time constant \$R_{out}\cdot C\$. Such a pole will reduce the phase margin of your system. Limiting case: C is so large that it can be considered a short. Then you're actually removing the feedback, and you get a comparator (assuming non-zero feedback Z)! Instead, if there is a "large enough" resistor in series with C (R19 in your circuit), the series R-C will add a pole (with time constant \$(R_{out}+R_{19})\cdot C\$ ) and a zero (with time constant \$R_{19}\cdot C\$ ). If, as said, R19 is large enough, the pole and zero time constants are close enough and the overall effect is negligible (limiting case: R19 is infinity). Intuitively, after the zero frequency, the series R19-C will act as a load, and it will not open the feedback (it will change the feedback attenuation factor, but if \$R_{19}>>R_{out}\$ then this variation will be negligible as well). EDIT: In your case, the circuit will be stable. EDIT2: the following consideration is no longer valid with the new edits to the schematics made by the OP: I assume that "Input Signal" in your schematics is a current signal (i.e. not a connected voltage source), otherwise the circuit won't work. simulate this circuit – Schematic created using CircuitLab
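Expressed as frequencies, the pole and zero introduced by the series R19–C10 branch (using the time constants above) are
$$f_{pole}=\frac{1}{2\pi\,(R_{out}+R_{19})\,C},\qquad f_{zero}=\frac{1}{2\pi\,R_{19}\,C}$$
so the larger \$R_{19}\$ is compared with \$R_{out}\$, the closer together the pole and zero sit and the smaller the net phase loss.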
H: Unexpected plots for active low pass filter response in LTspice I was trying to compare the differences between passive and active filters; below is a comparison for almost the same cut-off, using an ideal opamp: The above was what I expected. But then I used a real opamp, the LM324, and obtained the following outputs: And for the same circuit, if I switch to an opamp called the LMC6482, I obtain the following output: Why, with real opamps, is there such a loss in voltage or power at very, very low frequencies, unlike in the ideal case? Is that related to my using a single supply? Does the vertical axis show peak-to-peak voltages or amplitudes in LTspice? And why do the two real opamps give totally different losses? edit: The opamp is clipping, yes, but why is the amplitude 0.6V at 0.2Hz in the Bode plot when it is 1V in the transient analysis? And for the LMC6482 the amplitude is 0.1V at 0.2Hz in the Bode plot, which is 1V in the transient analysis. There must be a meaning behind these numbers (?) in the Bode plots. AI: In the first circuit the active low pass filter is a 2nd order filter while the passive low pass filter is a 1st order filter, so I totally expect the curves to be different, and they are what they should be. In the second circuit you're expecting the impossible from that poor LM324 opamp. You give it a single (positive only) supply yet the circuit expects it to be able to output negative voltages as well. That's not going to happen unless you supply the LM324 with a symmetric power supply, for example +5 V and -5 V. In your circuit the LM324 cannot amplify properly and that's what we see in the plot as well. So connect the negative supply rail of the LM324 not to ground but to a -12 V (for example) supply. You have to add another voltage source for that!
H: Raspberry PI 3 - LiFePO4 power supply I'm working on a mobile computer unit with a Raspberry PI 3 motherboard. The computer unit needs to run on battery so I can bring it with me anywhere. I'm going to solder both the battery pack and the power supply (buck or buck-boost) myself, since it needs to fit as perfectly as possible in a custom-made 3D-printed case. I have decided to use LiFePO4 battery cells, either 2 (6.4VDC) or 3 cells (9.6VDC). I guess 3S is best since I then only need a buck converter to get 5V, whereas 2S may need a buck-boost. I'm thinking of putting a) the power supply and b) the cut-off (high voltage/low voltage) on the same PCB. But I'm not very experienced with electronics, other than some basic electronics education, so I wonder if someone has advice or input on this project. I mainly have three questions. Is it better to use a 2S or 3S battery for this project? I'm thinking of the efficiency of the power supply: as far as I understand, 6.4VDC is closer to Vout (5VDC) so it will make for a more efficient power supply than a Vin of 9.6VDC. Runtime is important, so a more efficient power supply means a longer runtime. I'm looking for power supply circuits that can deliver 2-3A; is that enough to run just the RPI 3 board? I have found several DC/DC converters that deliver 2A@5VDC, but these use Linear Technology ICs that are pretty expensive. I have been told multiple times that LT's ICs are overrated; what other IC brands can I search for? And if you have done similar projects with the RPI 3, or have other advice, please let me know. AI: ...6.4VDC is closer to Vout (5VDC) so it will be a more efficient power supply than Vin as 9.6VDC. That is true if you were using a linear regulator, since then the current remains constant. But then the 3 cell solution would give a longer battery life, as the individual cells can drop to a lower voltage before the regulator cannot provide 5 V anymore. But you should use a switched regulator since that is going to be much more efficient for the currents needed by an RPi. And for a switched regulator, efficiency will be better when the input voltage is higher. The most efficient solution is 3 cells and a switched regulator. I suggest not making your own switched regulator as that might end with all kinds of issues. There are cheap ready-made switched converter modules for sale which will just work, saving you a lot of trouble. My suggestion: find an LM2596-based module; these are cheap as chips and will just work. These can also supply up to 3 A or so, more than enough for an RPi.
H: Increase EEPROM capacity and data lines Question: I need to increase the capacity and data lines of an EEPROM board using a decoder and/or basic logic gates. The EEPROM is a Hitachi HN58C1001; it is 128K by 8 bits. The task is to make it 1M by 32 bits. My guess is to use a 3-to-8 decoder and connect it to the CE pins of 8 of these chips; that'll give me 1M of data but also 64 data lines, not 32... What is the correct way to do this? AI: You are confusing address space with data space. You do indeed need to multiply your address space by 8, but you only need to multiply your data width by four. As such you need a matrix of memory devices that is 8 rows by 4 columns, that is, 32 devices. You are correct: you need to decode the upper three bits of the 1M address bus to select the appropriate row. Then feed each device across that row into the appropriate byte of the 32-bit data bus. If the memory is byte addressable you will need to offset the address on the devices to ignore the bottom two bits. That is, feed address bus bit 2 to device A0... etc. Also, make sure whatever is driving the address, control and data lines has enough fan-out to drive all those devices in parallel. If not, you will need additional buffers in there with consequent timing delays.
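The device count follows directly from the two capacities:
$$\frac{1\,\text{M}\times 32\ \text{bit}}{128\,\text{K}\times 8\ \text{bit}}=\underbrace{\frac{1\,\text{M}}{128\,\text{K}}}_{8\ \text{rows (selected by the upper three address bits)}}\times\underbrace{\frac{32\ \text{bit}}{8\ \text{bit}}}_{4\ \text{columns}}=32\ \text{devices}$$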
H: Logic Level Translator Operation I'm trying to analyze the Sparkfun Logic Level converter found here: https://www.sparkfun.com/products/12009 A rough schematic is shown below: simulate this circuit – Schematic created using CircuitLab Now, I've looked at the application note mentioned on Sparkfun's page, and I understood most of it in regards to how it shifts levels, but I'm having trouble understanding the third part, going from the high logic voltage to the lower logic voltage. I searched for some similar questions on the site, but the answers that I found didn't seem to explain this part well. My question is: how does it shift the logic level from the high logic to the low logic? The app note says that when the voltage on the high side is low, it utilizes the diode between the drain and substrate; at this point, the substrate (body) is at 3.3 V, so when the high input is 0 volts, the 'diode' conducts, dropping the voltage at the low port to 0, turning on the diode, further dropping the voltage. When the 'diode' conducts, why does it drop the source pin (connected to the body/substrate) down from 3.3 to 0? I would have thought that it stayed at the same voltage of 3.3 V. AI: One characteristic of MOSFETs is that the drain pin can act as a source (and vice versa) depending upon the voltages applied to the terminals. That is exploited in this circuit. When the source pin is more positive than the drain pin it will act as a drain, with the drain pin acting as a source. If the source pin is more than 0.7V more positive than the drain, the body diode will conduct, but that doesn't interfere with the FET action. When there is a low level on the high-voltage port and the voltage between the gate and the drain pin exceeds the threshold voltage of the device, the device will start to conduct. The drain pin acts as a source pin in this case. The positive bias causes the MOSFET to conduct from source to drain and pull the voltage on the low-voltage port down to a very low voltage (only millivolts above the voltage on the high-voltage port). I measured a BSS138 and discovered that the threshold voltage in this inverse mode is very similar to that in normal mode - I couldn't find that detail in any data sheets. The method of fabricating MOSFETs creates the diode (referred to as a body diode) as a side-effect - in this case the body diode does not affect operation significantly, except for the case where the voltage on the high-voltage port is low enough to cause the body diode to conduct but there is not enough voltage difference between the gate and the drain pin to turn the device on - in this case the low-voltage port will be at a voltage about 600mV above that on the other port. As commented by @Michael Karas, the threshold voltage needs to be significantly lower than the low voltage source to operate correctly. Also as commented by @Michael Karas, the physical construction of discrete devices is not symmetric, but for basic analysis of the circuit it can be treated as such with the body diode in parallel.
H: Can a single bjt PNP be an or gate? I'm curious, with a transistor being equivalent to two diodes, could one wire a single PNP with a pull down diode to be an OR gate? My thinking is as follows: Is this right or wrong? Why? AI: simulate this circuit – Schematic created using CircuitLab Figure 1. A tidy version of the OP's 'OR' gate. It might work in certain circumstances. The problem is that the base current affects the emitter-collector resistance. If A is high then an emitter-base current will flow. This will turn on Q1 and provide a low-resistance path between A and B. What happens next will depend on the output impedances of A and B. If they are the same then Q1 will tend to pull them together towards mid-supply voltage. It's not going to work as you planned. simulate this circuit Figure 2. Simulation circuit. Running a simulation using the CircuitLab tool results in the following readings for R1 = R2 at 1 kΩ and 100 Ω:

      1 kΩ       100 Ω
A     1.927 V    2.469 V
B     1.909 V    2.365 V
Q     1.163 V    1.663 V

Note that at lower source resistance the effect of R3 is less and A and B settle down close to mid-supply. Q is 0.7 V below the emitter voltage as would be expected.
H: Measuring voltage 10 times every second with HP 34401A DMM I'm trying to measure voltage continuously for a short time using the HP 34401A digital multimeter. To do this I have connected the DMM to my PC via RS-232. The PC is running Windows 10 and I use Termite, which is a serial monitor. With this setup I can control the DMM remotely without errors. Now I want to measure the voltage 10 times per second and receive the readings on the PC. I used the following settings:
VOLTage:DC:RANGe 0.1
VOLTage:DC:RESOlution 0.0001
SAMPle:COUNt 1
TRIGger:COUNt 10
TRIGger:DELAy 0.1
(Trigger source is 'Immediate' by default. I didn't change it.) The DMM sends 10 values to the PC when I send the 'READ?' command, but I have no idea what the actual sampling rate is. With the trigger source 'immediate' and a trigger delay of 100 ms, can I expect my sampling rate to be 10 samples/second? If the above setting is valid, could I achieve a sampling rate of 100 samples/second too? AI: You can find out from the manual: :NPLCycles? Choose 1 power line cycle for a 50 Hz sample rate (20 ms integration time), or 10 cycles (PLC) for a 5 Hz sampling rate (200 ms integration time), if you are in 50 Hz land ;) I guess if you want a 10 samples/s rate it is NPLC=1 and Delay = 80 ms.
:NPLCycles {0.02|0.2|1|10|100|MINimum|MAXimum}
Select the integration time in number of power line cycles for the present function (the default is 10 PLC). This command is valid only for dc volts, ratio, dc current, 2-wire ohms, and 4-wire ohms. MIN = 0.02. MAX = 100.
For 1k readings/s:
*RST
*CLS
DISPlay OFF
FUNCtion ""
:RESolution MAXimum
:NPLCycles MINimum
ZERO:AUTO OFF
:RANGe:AUTO OFF
CALCulate:STATe OFF
TRIGger:SOURce IMMediate
TRIGger:DELay 0
READ?
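To see why NPLC = 1 plus an 80 ms trigger delay gives roughly 10 readings per second on 50 Hz mains (a rough estimate that ignores the meter's own per-reading overhead):
$$T_{reading}\approx \underbrace{\frac{1\ \text{PLC}}{50\ \text{Hz}}}_{20\ \text{ms integration}}+\underbrace{80\ \text{ms}}_{\text{trigger delay}}=100\ \text{ms}\;\Rightarrow\;\approx 10\ \text{readings/s}$$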
H: Finding replacement for transistor, what to look for I have a schematic that uses the BS270 FET, but I want to know what to do if one is not available, or if one wants to adapt the schematic to a different transistor. What quantities should I look for in order to match one transistor to another? Gate, source, drain voltages, etc.? For example, I have a Toshiba 2SK246 FET, and I have both datasheets open, but I am not sure how to go about modifying the schematic, or which values to consider for the calculations. Edit: the application uses the FET as a switching device for an LED with a BeagleBone. AI: First of all, you must be sure that the maximum current, voltages, and power dissipation are OK for your particular use, to avoid the magic smoke. Then, each application might have other requirements, in addition to current, voltage and power ratings. It's very difficult to cover all the possible cases. I will cite some cases, which are not at all meant to be exhaustive and should be considered just as examples. For instance, in a digital application, you might want to have a look at:
The switching characteristics (i.e. the speed).
The on-state resistance (if used with a pull-up resistor).
The threshold voltage and whether it is logic level compatible.
In an analog application, you might want to have a look at:
The transconductance.
The threshold voltage.
Also the output characteristics.
The capacitances (for bandwidth/stability estimation and/or compensation).
In power applications you might need:
The on-state resistance.
The switching characteristics (speed).
The input/output capacitances, and gate charge.
The threshold voltage.
The peak power as a function of duty cycle.
In other applications (e.g. if you use the MOSFET as an analog switch), you might also need to know the OFF-state characteristics.
H: Reverse voltage flowing from drain to source in P channel MOSFET The circuit below works as a switch. The P-channel MOSFET has max Vdss = -30V and Vgss = +/- 20V. MOSFET DATASHEET The voltage divider protects the gate of the MOSFET from exceeding its Vgss in case a high voltage is applied. Now the issue is, when an external +12V is applied to the Output 2 connector as shown in the image, current flows from drain to source, through the regulator, and generates +5V. [Ground is always there] Whereas +5V should only be generated when 12V is applied to the 12V connector. The simplest solution is to use a diode at Output 2, which will block any reverse voltage and only allow forward current through it (while switching). But why does current flow from drain to source when the circuit has no power, and why does the LED glow (meaning 5V is generated)? What else can be done to rectify this issue apart from using a diode? AI: In each MOSFET, the body forms diodes with the drain and source junctions. The body in a pMOSFET is n-type, while the drain and source are p-type. The opposite is true in an nMOSFET. Discrete MOSFETs have the body and source terminals shorted together. This forms a diode between source and drain. To see the direction of the diode, just have a look at the arrow in the symbol (in your schematics, the diode is also explicitly shown). On a pMOSFET, the anode is at the drain, the cathode at the source. The opposite occurs in an nMOSFET. Therefore, in your circuit, current will flow from drain to source regardless of whether the pMOSFET is ON or OFF. To avoid this, you can use the back-to-back MOSFET connection shown below. simulate this circuit – Schematic created using CircuitLab
H: Transistor's Input Current This circuit is shown without coupling capacitors and source, which means I am referring only to DC. If I define Ib1 (Q1) with the voltage divider, so that Ic1 (Q1) equals 10mA and Ib2 (Q2) equals 5 mA: Will the 10mA current be flowing into the base of Q2, or the 5mA current? Or will the 10mA current be flowing into R4 and the differential base-emitter resistance + Re2? Should Ic1=Ib2? simulate this circuit – Schematic created using CircuitLab AI: I guess you are asking how to design such a circuit. You start by assigning approximate amplification factors to each stage. How you choose them depends on your application. Having the same for each stage is a good start. If your supply voltage is on the low end, you want to have more gain at the earlier stages than at the later ones. You know what current flow you want to have at the second stage and you know the gain you want. The gain is approximately Rc2/Re2. The current flowing will be at most Vcc/(Re2+Rc2). These two equations give you the upper limit for Re2 and Rc2. You should choose them a bit smaller than that. How much smaller depends on how much collector-emitter voltage you want to have. This depends on the voltage capability of the BJT and the power dissipation capability. The former is usually not the limiting factor, unless you are doing a high-power application, while the latter is. You should have an idea how large your output signal is. Add some margin to that (10-100%) and use that as your Vce. As a rule of thumb, do not go below 1V of Vce. Then you can plug this into the above formulas again: A=Rc2/Re2 and Ic2=(Vcc-Vce)/(Re2+Rc2). The only remaining thing left now is to set Vb2. This simply follows from what you've chosen as Re2 and Ic, as this defines the voltage over Re2. Vb2 (relative to GND) should be V_Re2 + V_be(Ic2), where V_be(Ic2) is the base-emitter voltage needed to get the collector current Ic2. As a decent approximation you can choose 0.6V for this with silicon transistors. Armed with that, you go to the first stage. Here we have an additional constraint: the output voltage has to be V_b2. First of all, I would leave out R3 and R4, at least at first, as they only complicate the circuit without giving you anything (besides lower impedance between the two stages). So we choose Rc1 with respect to Ic1 such that the voltage over Rc1 gives us the right V_b2. Now you can choose Re1 to give the right amplification. The last thing to do is to choose R1 and R2 such that you get the right V_b1 (similar to how it's done for V_b2). The current through R1 and R2 should be at least a few times I_b1 (use at least a factor of 2, a factor of 10 is better), which you can calculate from Ib=Ic/beta. As a last step, you check whether the circuit fulfills all your requirements and whether there is anything weird (V_be, V_ce and such at too high or too low values) and adjust your circuit if it doesn't meet one of these.
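A purely illustrative pass through the second stage with assumed numbers (these values are not from the question; they only show the order of the steps): take \$V_{cc}=12\:\textrm{V}\$, a target gain of 4, \$I_{C2}=5\:\textrm{mA}\$ and \$V_{CE2}=4\:\textrm{V}\$. Then
$$R_{E2}+R_{C2}=\frac{12\:\textrm{V}-4\:\textrm{V}}{5\:\textrm{mA}}=1.6\:\textrm{k}\Omega\;\Rightarrow\;R_{E2}=320\:\Omega,\;R_{C2}=1.28\:\textrm{k}\Omega,\qquad V_{B2}\approx 5\:\textrm{mA}\cdot 320\:\Omega+0.6\:\textrm{V}=2.2\:\textrm{V}$$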
H: What is current in RL-circuit? When t<0 the components in the circuit are Jt = 3 A, R1 = R2 = 2 Ω, R3 = 4 Ω and L = 10 H. At t=0 switch K will be closed and I need to figure out what iL(t) is when t=4. So I tried to figure out iL(t) with a differential equation. First I combined R1 and R2 to get R12 = 1 Ω. Then I transformed the power supply from a current source to a voltage source: Et = Jt*R12 = 3A*1Ω = 3V. Then I combined R12 and R3 to get Rz = 5 Ω. Now I can create the equation, which is $$E_t=L*\frac{di_L(t)}{dt}+R_z i_L(t) $$ After substituting the values I can start solving the equation: $$10*\frac{di_L(t)}{dt}+5i_L(t)=3 $$ $$ u(t)=e^{\int5dt}=e^{5t} $$ $$ e^{5t}* \frac{di_L(t)}{dt}+ e^{5t}*i_L(t)=e^{5t}*3 $$ $$ \frac{di}{dt}(e^{5t}*i_L(t))=3e^{5t} $$ Then I integrate both sides: $$ u=5t $$ $$ \frac{du}{5}=dt $$ $$ e^{5t}*i_L(t)=\frac{3}{5}e^{5t} +C$$ Then I divide both sides by \$e^{5t}\$: $$ i_L(t)=\frac{3}{5}+ C*e^{-5t}$$ Now I need to figure out what iL(0) is so that I can get C, but I don't know how. AI: I do not give full solutions to homework, but when t<0 there was a steady state. All transients were dead. Consider that the inductor behaves as a zero-ohm wire until the switch operates. Calculate the initial value from that. Then calculate the new steady state value and let the difference between the initial value and the final value die off with time constant L/R, where R = 5 ohms. Your time constant is wrong. Your method of solving the differential equation needs repairing.
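Once you have those two values, the response takes the standard first-order form (which is also what the integrating-factor method yields, with the correct time constant):
$$i_L(t)=i_L(\infty)+\bigl[i_L(0^+)-i_L(\infty)\bigr]e^{-t/\tau},\qquad \tau=\frac{L}{R}=\frac{10\:\text{H}}{5\:\Omega}=2\:\text{s}$$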
H: What does radio frequency "potential" mean? Page 1-2 (Naoki Shinohara: Wireless Power Transfer via Radiowaves) During the same period, when Marchese G. Marconi and Reginald Fessenden pioneered communication via radiowaves, Nicola Tesla suggested the idea of wireless power transfer and carried out the first WPT experiments in 1899 [TES 04a, TES 04b]. He said “This energy will be collected all over the globe preferably in small amounts ranging from a fraction of one to a few horse-power. One of its chief uses will be the illumination of isolated homes”. Tesla actually built a gigantic coil that was connected to a 200 ft high mast with a 3 ft diameter ball at its top. The device was called the “Tesla Tower” (Figure 1.1). Tesla fed 300 kw of power to the coil that resonated at a frequency of 150 kHz. The radio frequency (RF) potential at the top sphere reached 100 MV. Unfortunately, the experiment failed because the transmitted power was diffused in all directions using 150 kHz radiowaves, whose wavelength was 21 km. After this first WPT trial, the history of radiowaves has been dominated by wireless communications and remote sensing. What does radio frequency potential mean here? AI: Potential means "voltage relative to ground" so they had RF AC voltages of 100 MV relative to the ground / earth potential which is defined as 0 V (zero Volt).
H: Identifying and specing an Inductor I have a circuit board with a surface-mounted inductor which I believe is a film inductor. The board is not working properly and there is a loud squealing sound coming from said inductor, so I am trying to replace it. I have located it in the data sheet for the board, but I am inexperienced at identifying and specifying inductors, so I am having trouble identifying a suitable replacement or testing the inductor to confirm that it is indeed the malfunctioning component. Could anyone here advise on how to determine the specs of the inductor, how to make sure I find a replacement that is suitable, or how to test the current inductor to make sure it is actually the malfunctioning component? Advice on any of these topics would be much appreciated; pictures are below for reference. The inductor says 2R2 135S as far as I can tell, and the schematic has two annotations for the component: FDA1055-2R2M=P3 and IND-2D2UH-100-GP. The only thing I have been able to find on my own is that the inductor must be 2.2 uH (that's what the 2R2 is, I think) but I'm not sure of any other specs. AI: This is NOT a "film inductor", it is a standard wirewound high-power inductor. A simple Google search says it is the FDA1055-H-2R2M=P3 made by Murata, "Fixed Inductors 2.2uH 4.8mOhms 15.5A +/-20%", and available at Mouser (and many other places). However, it is highly unlikely that the faulty inductor is the root cause of your problem. First, this is a 16-A power supply. It needs some diagnosis to fix it: it could be transistor problems, or snubber circuits deteriorated, or expired tantalum caps (one of the caps, TC8, seems a bit burned out). Or it could be nothing wrong at all. The board seems to be some high-density laptop mainboard. It is very likely that some very large chip has developed an INTERNAL problem with the +5V rail, and it would be impossible to fix it. As an exercise, you can try to identify the source of the overload by looking at excessive power dissipation; a guide can be found here, at SE.
H: In microprocessor 8085, how can I clear/reset all flags (S, Z, AC, P, CY) without affecting the contents of the accumulator? I only know the code in which the accumulator contents are affected. It is not a duplicate as I have asked for a solution which doesn't affect the accumulator. AI: You can reset all the flags by executing instructions which affect them appropriately. Here's a method that only uses one register and no stack (this example uses B):

MVI B,0   ; B = 0
INR B     ; B = 1 (resets Sign, Zero, Parity and Auxiliary Carry flags)
STC       ; set Carry flag
CMC       ; complement Carry flag

The INR (Increment Register) instruction sets the S, Z, P and AC flags according to the result of the increment. 1 is positive and not zero, so the S and Z flags are reset. It has an odd number of '1' bits so P is reset, and there is no carry from bit 3 so AC is reset. INR does not affect the Carry flag, so we must reset it separately. Unfortunately the 8085 does not have a 'reset carry' instruction, but it does have 'set carry' and 'complement carry'. By loading different numbers into the register and/or varying the other instructions you could set or reset flags at will. This is useful for returning status information that can be quickly responded to using conditional jump instructions. The flag values can represent whatever you want (e.g. P flag set does not have to mean 'even parity').
H: Binary number divisible by 3 I have to design a sequential circuit. There are 3 inputs x0x1x2 which represent a 3 digit binary number. There is one output y. y = 1 if and only if the current number multiplied by the previous number is divisible by 3. I am supposed to use T flip-flops. I was thinking that I could get away with using only one FF, so as to remember whether the previous number is divisible by 3. That way, if the current number or the previous one is divisible by 3 then so will their product be. When I tried designing the tables and the state tables using only one FF the logic was too much, something like 20 gates plus the FF. On the other hand, if I use 3 FFs to remember the whole previous number the table gets really big, like 64 rows, which I can probably reduce. Which is the ideal solution? Is there any trick or shortcut that I'm missing? Thanks for your help. AI: 3 is a prime, so only one number needs to be checked if the previous one was already checked and its divisibility by 3 is stored as one bit. There are not that many 3-bit numbers to be recognized; they are 0, 3 and 6 in decimal. You must get a 1 if your number is one of those, then Boolean OR that with the stored divisibility bit. Of course there's one tricky part for a beginner: how to store a bit in a T-FF. With a D-FF it would be simple. EDIT due to the comments: Obviously someone feeds the 3-bit input number A, B and C (=ABC) and a signal G which shows that ABC is valid to be used. As long as G=1 the input ABC is assumed to be stable. That G is normally called a "strobe". It's a little different from a clock pulse: if G were a clock pulse, only G's transition edge from 0 to 1 would be the signal "ABC is valid". So let G be the strobe "ABC is valid". That G was not mentioned in the problem text, but without it all else would be nonsense; without G (strobe or clock pulse) there would be no "current" or "previous" number. Your circuit should simply repeat G as an output. That was not asked for, but it's essential for the system which is interested in the results. Your other output, say W, tells if the product of the current ABC and the previous ABC is divisible by 3. You need a combinational circuit that has output X, where X=1 if input ABC is 0, 3 or 6 in decimal, otherwise X=0. Store X into a flip-flop when G drops from 1 to 0. Feed the output of that storage, together with the current X, into an OR gate. The output of the OR gate is your W.
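A quick way to sanity-check the combinational part before minimizing it by hand is to enumerate it in software. The sketch below is only a sanity check, assuming x0 is the most significant bit of the 3-bit input (your bit ordering may differ):

# Sanity check for the "divisible by 3" detector X and the product claim.
def X(x0, x1, x2):
    # Combinational detector: 1 if the 3-bit number is 0, 3 or 6.
    n = (x0 << 2) | (x1 << 1) | x2
    return 1 if n % 3 == 0 else 0

# Truth table of X, a starting point for a Karnaugh map.
for n in range(8):
    bits = ((n >> 2) & 1, (n >> 1) & 1, n & 1)
    print(bits, "-> X =", X(*bits))

# Verify: product divisible by 3  <=>  X(previous) OR X(current).
for prev in range(8):
    for cur in range(8):
        assert ((prev * cur) % 3 == 0) == (prev % 3 == 0 or cur % 3 == 0)
print("OR of the two divisibility bits matches the product test for all inputs.")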
H: What is the purpose of external terminals on a logic probe? I have an LP-3500 logic probe. In addition to the normal connectors to power and ground at the bottom, it comes with two external terminals (one red, one black), which can be connected on the side near the tip: What is the purpose of the external terminals, and how are they used? AI: The connector is just connected directly to the main probe, but some of the attachments are easier for certain things than others. There is one which is an alligator clip, and the other one is a hook probe. If you press the bottom of the hook clip, you should see a small metal hook slide out of the top. You can put a lead or wire through it, then release the bottom, and the hook will retract, clamping the lead or wire under it.
H: Controlling a latching solenoid with the DRV8838: Is this PCB design correct? I'm using the following Texas Instruments devices: the DRV8838 (Low-Voltage H-Bridge Driver) and the MSP430G2553 microcontroller. The purpose of the DRV8838 is to control a 9V latching solenoid. To test it, I have used the DRV8838 breakout board made by Pololu. As I only need to have 9V or -9V between OUT1 and OUT2 and to sleep or wake the DRV8838, the SLEEP and ENABLE pins are connected (see Table 2, page 11 of the datasheet). I have developed test code to open and close the solenoid and it works correctly. So far so good. The problem is that when designing my PCB, I made the mistake of not connecting pin 4 of the DRV8838 to GND so it did not work. I contacted the assembler, who de-soldered the DRV8838 and connected pin 4 to GND through a wire. However I cannot guarantee that everything is correct, because they had to do it by hand and the integrated circuit is very small. The problem is that it still does not work (I use the same test code which I tested with the Pololu breakout board). At this point, I don't know if my design is wrong, or if the re-soldering of the DRV8838 was not done correctly. So, I have included the design of the circuit in case you see some design flaw that I'm not seeing. In figure 1 is the schematic design of my PCB and in figure 2 is how I connected the Pololu development board. Do you think there is a design error? simulate this circuit – Schematic created using CircuitLab I've attached a screenshot of the layout in case it helps to determine if the layout is correct or not. (R1 is a resistor that is not fitted because the DRV8838 already has internal pull-down resistors) And this is an image with the 4th pin connected to GND through a wire: If necessary I can upload the code. Thank you very much in advance. AI: The TVS diodes are of little use here: they are rated at 12V and only clamp at around 18V, while you are running from a 9V supply, and the driver already has body diodes across Vds. The driver is rated for 1.8A, and 1.9A is the threshold for OCP. So if the solenoid resistance is less than 9V/1.8A = 5 ohms, the driver will go to sleep. With a leadless chip that has a coplanarity of 0.02mm, I would have a hard time verifying that a solder path exists even with a 10x magnifier. My final thoughts are that manually soldering this chip is not something that can be done without skill, and any rework must be done under a microscope. Next time increase your pad size for probing, and you are better off using a 3rd party driver board than DIY soldering these. Until you get up to speed on layout quality and solder quality, I think for 4 bucks you are better off buying this chip on a board. See any difference? See any gaps, or solder bridges? Is the heatsink pad underneath soldered? Catch my drift?
H: 4148ST diode marking? I ordered some (what I thought to be) 1N4148 diodes. However, the part number on the diodes I received reads: 4148ST. I have searched the realms of Google and DuckDuckGo, and have found no indications whatsoever of what the ST means. Are the diodes I received 1N4148s? Or are they some other kind of diode that I could not find a datasheet for? AI: Many manufacturers that produce the same or a compatible part use slightly different markings. For example the classic general-purpose NPN transistor BC548 is often marked as C548 as well. This practice is very common. Your 4148ST diode is for sure the same part as, or compatible with, the 1N4148. For more detail about what the manufacturer meant by 'ST', you need to get the datasheet from that exact manufacturer and look for a section typically named 'Marking'. Sometimes, particularly with SMD parts, the marking on the component differs from the model name it is known by, so this datasheet section becomes almost the only clue available for identifying the part and its characteristics. In case the brand is unknown because it's a cheap generic clone, the only hope is trying to find the manufacturer they tried to copy and checking its datasheet.
H: Radiated power by the antenna For this question, in my view the output power should be \$1 mW\$: since the antenna is lossless, no ohmic power should be dissipated and the radiated power should equal the power fed in. \$\eta_r=\frac{P_{rad}}{P_{in}}\$; here the antenna is lossless so \$\eta_r=1\$ and \$P_{rad}=P_{in}=1mW\$. But the answer is \$4mW\$. Why is that? Can anyone explain please? AI: As you've correctly surmised, the amount of power radiated from the antenna, physically, is 1 milliwatt. This makes sense: since the antenna is lossless (100% efficiency) and you've fed in 1 milliwatt, by conservation of energy the antenna is radiating 1 milliwatt. What I think the question means to ask for is the EIRP, or Effective Isotropic Radiated Power. The EIRP is how much power a hypothetical isotropic (perfectly omnidirectional) antenna would have to radiate so that the power density measured in the far field of the main lobe of the directional antenna equals the power density of the isotropic antenna measured at the same distance. This is a useful figure for doing a link budget. If the question is asking for EIRP, then 4 milliwatts is correct: $$10\log_{10}\left(\frac{1\ \text{mW}}{1\ \text{mW}}\right) = 0\ \text{dBm}$$ $$0\ \text{dBm} + 6\ \text{dB} = 6\ \text{dBm}$$ $$1\ \text{mW} \times 10^{6/10} \approx 4\ \text{mW}$$ If this is a homework assignment, you should write a note to your instructor. The question is ambiguous and should clearly state that it's looking for the EIRP.
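A quick numeric check of the dBm arithmetic (the 6 dB antenna gain is assumed from the problem statement):

import math

p_in_mw = 1.0    # power fed to the lossless antenna
gain_db = 6.0    # antenna gain assumed from the problem statement

p_in_dbm = 10 * math.log10(p_in_mw / 1.0)   # 0 dBm
eirp_dbm = p_in_dbm + gain_db               # 6 dBm
eirp_mw = 10 ** (eirp_dbm / 10)             # ~3.98 mW, i.e. about 4 mW
print(f"EIRP = {eirp_dbm:.1f} dBm = {eirp_mw:.2f} mW")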
H: How to account for input noise on a zero crossing detector op amp? To save potential confusion, let me start by saying I just started learning op-amp theory about a week ago. I'm currently familiar with inverting, non-inverting, summing, and difference op-amps. My (fairly new) understanding of zero crossing detectors is that when the input signal crosses from positive to negative or vice versa, the polarity of the output of the op-amp changes as well. Essentially it's useful for converting something like a sine wave into a square signal. If my understanding is correct, then what happens if noise is introduced on the input line? I understand better with examples and numbers, so let's say I know my noise oscillates about 0V at 100mV peak to peak. How would you account for that? EDIT: If possible a schematic drawing of a before and after case for handling the noise would be extremely helpful! AI: A low signal-to-noise ratio may cause output glitches if the device is fast enough to follow the noise transitions. Additive noise also causes phase noise (jitter) in zero-crossing limiters. Hysteresis, set by a positive-feedback resistor ratio, is generally added when the noise cannot otherwise be controlled. This doesn't eliminate the phase noise, but it moves the crossing threshold in the opposite direction, away from "zero", each time the output swings to a peak. Often 1% hysteresis is considered reasonable for some analog signals, 10% for others, and 33% for noisy logic interfaces. For single-supply amps, the "zero" crossing is set to Vcc/2 by some method. One can even use a CMOS logic inverter as a limiter (aka slicer, aka zero-crossing detector) if the signal is AC coupled, using a self-biasing high-value resistor (1M) as negative feedback. The other thing about op-amps is that, because they have so much gain and higher-order delay effects, an internal compensation capacitor is added to make the open-loop response essentially first order so the op-amp is stable at unity gain. Comparators, on the other hand, do not have this compensation cap, so they work much faster as zero-crossing detectors; ECL comparators work above 1 GHz, as does current-mode logic, which uses differential currents with a zero-crossing current and differential load resistors.
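To give a concrete (hypothetical) example of how the positive-feedback resistor ratio sets the hysteresis in a non-inverting Schmitt-trigger comparator - none of these values are from a specific circuit:

# Non-inverting Schmitt trigger: R1 from the input to the + pin,
# R2 from the output back to the + pin, reference on the - pin.
v_out_high, v_out_low = 5.0, 0.0   # comparator output swing (example)
v_ref = 2.5                        # "zero" reference, e.g. Vcc/2 on a single supply
r1, r2 = 10e3, 1e6                 # example feedback ratio

v_upper = v_ref + (v_ref - v_out_low) * r1 / r2    # input level that trips high
v_lower = v_ref - (v_out_high - v_ref) * r1 / r2   # input level that trips low
print(f"thresholds {v_lower:.3f} V / {v_upper:.3f} V, "
      f"hysteresis {(v_upper - v_lower) * 1e3:.0f} mV")
# This ratio gives 50 mV of hysteresis; for 100 mV p-p noise you would lower R2
# (or raise R1) until the hysteresis comfortably exceeds the noise.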
H: Twisted pair for mains AC load wiring? Working on an enclosure in which there will be low-voltage high-precision circuitry (my area of more experience) coexisting with fairly high current (~15A) 120VAC load wiring (my area of less experience). Would it be advisable to twist those AC wiring pairs which carry equal and opposite currents, to reduce the amount of field radiated from these wires? Or is it considered sufficient just to route them together side-by-side? Obviously I don't care about the susceptibility of the mains wiring, only its emission and the susceptibility of the low-voltage circuits. Never seen this done before, but I don't work with mains-powered devices much. Don't want to reinvent the wheel or do something dumb / non-standard, so anyone's experienced take would be appreciated. AI: Let's model that twisted pair, assuming a VERY SLOW RATE OF TWIST, as 2 wires of spacing 4mm (these are power wires, after all) at distances 100mm and 104mm from the vulnerable sensitive PCB loop of area 1cm by 4cm. We'll compute the induced voltage for distance 100mm and for 104mm, then subtract those for the presumed magnetically-induced trash in the sensitive circuit. We'll also need the slew rate of the power line currents; we'll assume the rectified diode peak currents, 15 amps * 10X with 1 microsecond turn-on time, to be the aggressor H-field. Math:
Vinduce = [MU0 * MUr * Area / (2 * Pi * Distance)] * dI/dT
and inserting MU0 = 4 * pi * 1e-7, MUr = 1, we get the form
Vinduce = 2e-7 * Area/Distance * dI/dT
Vinduce = 2e-7 Henry/meter * 1cm * 4cm / 100mm * 150 amps/1uS
Vinduce = 2e-7 * 0.0004 meter^2 / 0.1 meter * 150e+6 Amps/second
Vinduce = 2e-7 * 0.004 * 150e+6 = 2e-7 * 0.6e+6 = 1.2e-1 = 0.12 volts for the 100mm distance.
For the 104mm distance (the other wire of the twisted pair), the voltage is 4% lower; our residual voltage from the twisted pair is 4% of 0.12 volts, or 5 milliVolts. Can your precision sensitive circuits tolerate 5 milliVolts of trash, with a fundamental repetition of 120 Hz, a few microseconds duration and 1uS risetime? EDIT How to mitigate this 5,000 microVolts of trash? We have all the degrees of freedom specified in the math: loop area, distance, dI/dT, and the UNSPECIFIED variables of (1) how tightly the twisted pair is twisted and (2) how uniform the twists are. You can measure these effects in the lab. Make a loop of 1cm by 4cm (or your personal choice of loop area), and measure Vinduce for various sinusoidal drives (with a 50 ohm resistor to avoid shorting the function generator), with untwisted wires, human-twisted (non-uniform) wires, and machine-twisted wiring. Note the skin depth of copper at 60Hz is 8 millimeters; at 60MHz it is 1,000X smaller at 8 microns (1/3 mil or 0.0003 inches); and at 6MHz the skin depth is 8 microns * sqrt(10) = 25 microns, compared to 1 ounce/foot^2 foil of 35 micron thickness. Your 1 microsecond Trise has a period of 2uS, or 500,000Hz (if this is a valid way to model a quick Trise with very slow 120Hz repetition). The skin depth at 500kHz is about 80 microns of copper. You may want a steel tube around the power lines, or route the power lines through a steel trough. EDIT#2 Should you decide to NOT USE TWISTED PAIRS, but use separate (color-coded?) wiring for 117VAC, there is nothing to hold those wires at 4mm spacing, and your induced voltage can easily double or triple, to 10 or 15 milliVolts.
EDIT#3 April 2020 You will notice a big reduction in coupling, if the rate-of-twist is fast (many twists per inch) and the twisting is done by machine (so the magnetic field variations are very regular and thus mostly self-cancelling).
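For anyone who wants to rerun the estimate with their own geometry, here is the same arithmetic as a small script (the 10x diode peak current and 1 us rise time are the same assumptions used above):

import math

MU0 = 4 * math.pi * 1e-7    # H/m

def induced_voltage(area_m2, distance_m, di_dt):
    # Voltage induced in a loop by a long straight wire at the given distance.
    return MU0 * area_m2 / (2 * math.pi * distance_m) * di_dt

area = 0.01 * 0.04          # 1 cm x 4 cm victim loop
di_dt = 150 / 1e-6          # 15 A x 10 diode peak, 1 us turn-on (assumed)

v_near = induced_voltage(area, 0.100, di_dt)   # wire at 100 mm
v_far = induced_voltage(area, 0.104, di_dt)    # return wire at 104 mm
print(f"single wire: {v_near*1e3:.0f} mV, pair residual: {(v_near - v_far)*1e3:.1f} mV")
# ~120 mV from one wire, ~5 mV left over for the 4 mm-spaced pair, as computed above.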
H: Programming a new STM32 chip I am new to STM32 and recently trying to make a custom board with the STM32F103RB. But I am a bit confused by the available methods to program the chip. Let's say I use an ST-Link via SWD; does that mean I do not need to burn a bootloader to the chip on my own? That is, does it mean that I can program the brand new chip directly with the ST-Link without any other additional procedure? If yes, does it also mean that I do not need to use the BOOT0 and BOOT1 pins? What about using USB via the USBDP and USBDM pins on the STM32? Thank you all very much! AI: The bootloader is in ROM, it is not user-modifiable (i.e. you can't "burn" a new bootloader). This appnote describes the bootloader features for different STM32 parts. On all the STM32 parts that I've used, you pull BOOT0 to ground to bypass the bootloader and boot from flash (address 0x08000000 on most STM32s). With BOOT0 pulled low, the pin state of BOOT1 doesn't matter. I normally pull BOOT0 down with a 100 kOhm resistor on the boards I design at work. Don't leave it floating; it is not pulled down internally, so funny stuff will happen if you don't pull it down. You can use ST-Link SWD to flash your firmware to the chip and debug running code. Any IDE that supports the STM32 should know how to do this out of the box; I personally use Ac6 because it's free. As a minimum, you need to connect the SWDIO, SWCLK, and GND lines to the programmer, but you should also bring out NRST wherever possible because debugging without a hardware reset line can be a real pain. As for your last question - the bootloader for the F103 series does not support USB, although some other parts (like the F105 series) do. I would not recommend programming via USB unless you have a specific reason to: SWD will let you program and debug, and is vastly simpler to get working.
H: What does the 902-927MHz in UHF RFID mean? I have an ALN9662 UHF sticker with an operating frequency of 840-960MHz while my integrated reader is 902-927MHz. I'm just wondering: does a UHF tag send its data at a fixed frequency over the air, or are there instances where it might change while on air? I configured the reader to a single frequency of 902MHz, but it still reads the ALN9662 sticker. How is that possible? AI: ...How is it possible? Easy: it's not the tag which determines the frequency but the reader. The tag works between 840 - 960 MHz. Also, that tag is just some antennas and a chip. The tag has no battery; it gets its power from the reader via its antennas. The tag also does not have a way of generating a precise frequency. For that it would need a crystal, which would make the tag more expensive. And there's no need for that. The combination of chip + antennas is simply made such that it can work between 840 - 960 MHz. Now the reader is more complex; it also needs antennas to communicate with the tag. It needs a power source like a battery, adapter or USB connection. It will also have a crystal for generating a precise clock. This allows you to set it to a certain frequency. As long as that frequency is within the suitable range for the tag, the tag can be read. When the tag receives the signal from the reader, that signal is used to power the tag. To transfer the data some clock is needed; it is possible that the tag divides down the frequency it receives from the reader to a much lower frequency and uses that as a clock to transfer the data. It is also possible that the tag uses its own (less accurate) internal clock and that the reader simply derives the data clock from that signal.
H: Interaction of electromagnetic waves Is there any cap on the amplitude an electromagnetic wave of a certain frequency can carry? How does the amplitude affect wave interactions? AI: In free space or a homogeneous dielectric (like plastic), the interaction of electromagnetism is purely linear. This means that, classically, however much energy you put into a wave, it simply adds with any other wave. This is of course the same for any frequency. This is a property of the principle of superposition. Non-linearity can happen in other materials, such as ferrites. Then the superposition principle no longer applies. Here the material dissipation sets the upper limit, because the material will melt with sufficient energy. A typical example is magnetic saturation in transformers. Another example is the directivity of a ferrite RF circulator. If we go beyond Maxwell's equations, and into relativity and modern physics, there is something called a "Kugelblitz": a man-made black hole that could theoretically be made by getting enough energy into a small enough space. This also breaks down the superposition principle. It sets an upper limit to energy concentration in space, but this energy is extremely huge. To figure out the energy required you would solve the mass-energy equivalence together with the Schwarzschild radius, to find the energy required \$E\$. $$E=\frac{r_s\cdot c^4}{2G}$$ This comes out at about \$10^{42} \,\text{J}\$ (don't quote me on that) for a ping-pong-ball-sized black hole, which obviously is extreme. I cannot imagine what kind of laser you would need to produce this incredible amount of energy. The conclusion is that, in most standard situations, there is no interaction limiting the electromagnetic waves in the area/volume you measure.
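For the curious, a quick back-of-the-envelope check of that figure, taking a ping-pong-ball-sized Schwarzschild radius of about 2 cm:

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
r_s = 0.02       # assumed Schwarzschild radius, ~2 cm

E = r_s * c**4 / (2 * G)    # energy whose mass-equivalent has that radius
print(f"E ~ {E:.1e} J")     # ~1e42 J, consistent with the estimate above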
H: Cheap and easy way of coating breakout board/wires black? Part of my housing for an arduino based project leaves the back of a screen breakout board and several wires visible through a grill and I need to cover these in black to minimise visibility as the housing itself is black. The board itself is already conformal coated out of the box, and the only exposed electrical part would be my solder joints between a ribbon cable and the board. Is there any type of off the shelf paint (brushed or spray) I can safely use to disguise these? I have some Rust-Oleum Painter's Touch acrylic black from another project, would that be suitable? AI: What about using "brush on electrical tape"? https://www.amazon.com/Gardner-Bender-LTB-400-Electrical-Waterproof/dp/B000FPAN2K
H: A basic level question about grounds in a differential amplifier circuit As far as I know, voltage is defined with respect to a common ground in circuits. So if there is a source and a receiver/amplifier, then they should have a common ground if we want to talk meaningfully about the voltages at any node. Below is an illustration showing a differential amplifier used for both single-ended and differential inputs: I can see that in the single-ended input case the signal ground of the source is tied to one input of the diff. amplifier, so they share a common ground. But where is the common ground for the differential input case? Is the source's ground tied to the ground of the diff. amp? Where is the differential source's ground? Where is the differential amplifier's ground? Can someone draw a clearer example diagram which also shows how and where their grounds are tied? Edit: I tried to make the confusion more clear with this illustration below: We have on the left a differential signalling circuit and it has its own ground GNDA. We have on the right a differential amplifier and it has its own ground GNDB. Its rails are supplied with +15/-15V wrt GNDB. Imagine for a moment in time the signal source is outputting +1V and -1V with respect to its own ground GNDA. One might say here the differential amplifier will take the difference which is +1 - (-1) = 2V and this 2V will be the voltage with respect to GNDB. Now let's say we measured GNDA = GNDB + 100V. What does that mean? It doesn't matter or does it? And here I tried to simulate the situation with LTspice: It seems if GNDA - GNDB is greater than some voltage the output corrupts. What is this about? AI: But where is the common ground for the differential input case? It can be anywhere! Assume that the average voltage of the two input voltages is at common ground, so 0 Volt. Now what is the differential input voltage? It is: \$V_{in,diff} = (V_{inp} - 0) - (V_{inn} - 0) = V_{inp}-V_{inn}\$ Now I use a different common ground which is at +12.3 V; what do we get now: \$V_{in,diff} = (V_{inp} - 12.3) - (V_{inn} - 12.3) = V_{inp}-V_{inn}\$ See, no difference! Since it is the difference between \$V_{inp}\$ and \$V_{inn}\$ that counts, whatever the common ground is, it is added and subtracted so the net result is zero. The common ground is irrelevant. Also, the grounds do not need to be tied together, although in practice they often are. Ethernet (network) cables for example use differential-mode signals; the ground does not need to be connected. On both sides of the Ethernet cable small isolation transformers are used to re-define the ground level so that it suits the local circuit. Also, in practice the voltage between the inputs and the ground of the circuit will be limited, for example by the common-mode input voltage range of the amplifier, which is set by its supply rails. That is why your LTspice output corrupts once GNDA - GNDB exceeds some value: the inputs then fall outside the amplifier's allowed input range.
H: Is the '0V' on a non-isolated buck converter the same on the input as it is on the output? I'm working on a project where I want to switch a load (utility LED light, +- 20W) using a motion sensor. The PIR can only operate at a maximum of 12 volts, while the load can be operated at a voltage between 12-36 volts. I want to power this circuit using a lead acid battery, with the light on the 'unregulated' output and the PIR regulated (with a buck converter) to something like 5V. The PIR also outputs a 3,3 volt signal, so I made a circuit to boost this to the battery voltage. I was wondering if a circuit like this would work or if the '0V' line would be different between the battery negative and the buck converter negative output. Signal boost circuit Whole circuit simplified AI: According to the comments the ground is probably the same if some constraints are satisfied: Non-isolated buck converter Buck converter without current regulation, current regulated converters have a shunt at the ground line changing the voltage Buck-boost converters have a negative (afaik) 'ground' voltage and thus are not usable
H: IRLL024N logic level MOSFET + ATmega2560 problem We've designed a circuit based on an ATmega2560 and wanted to turn some parts of our PCB on or off. So we've installed a logic-level MOSFET (IRLL024N) that was supposed to act as a switch on our 5V line. The MOSFET is controlled by a digital pin of the ATmega set high or low to turn the rest of the circuit on or off. The problem is that we are experiencing a huge voltage drop on the MOSFET when it's turned on. The source voltage is 4.93V and the gate is 4.89V, but the drain voltage is only 3.02V. Why is this happening? Below is our schematic AI: You must use a p-channel MOSFET for this. The IRLL024N is an n-channel MOSFET, therefore there will be a drop of Vth + Vov between the gate and the source (Vov = overdrive above the threshold voltage, needed to allow the load current to flow through the MOSFET). Otherwise, you must use a VGS value much higher than the drain voltage (e.g. the gate must be driven to 10V).
H: Which of these diagrams should be in the copper side of pcb? I'm trying a DIY PCB using toner transfer method then etching but I am confused with the diagram below. I thought the top trace goes to the copper board but there's this bottom trace which shows solder side. Which of the two goes to the copper side? Any help would be greatly appreciated thanks. AI: This is a design for a two-sided board. You need to have copper on both sides of the board and etch each one according to the provided pattern to make this circuit. Edit: FWIW, I agree with the other answers and comments saying this board as drawn is not a great candidate for toner transfer. However it should not be difficult to re-design this as a one-sided design, probably with a few jumpers. Or simply pay a few dollars and wait a couple weeks to have it built for you.
H: Must the SPICE M= multiply parameter be an integer? M= is used as a multiplier of component values in SPICE to effectively create multiple parallel copies of SPICE components. I think of it as a current-controlled current source on every pin of a device. Does the M value have to be an integer? AI: No, it doesn't have to be an integer. LTspice: Combining Multiple Model Instances Into One Symbol A number of intrinsic devices support the M (parallel units) parameter, such as the capacitor, inductor, diode and MOSFET models. If you are not sure if the model supports the M (parallel units) parameter, try it, and if you do not get an error message, you should be good. The diode (including LED) model is the only intrinsic model that supports N (series units) parameter. To define multiple instances of a model in a device symbol: Ctrl + right-click the symbol to edit the component attributes. Insert “m=” or “n=” into the Value2 field. Note that non-integer values are allowed.
H: wire to board connector for 10 wires I was looking for a quick way of attaching 10 wires to a PCB Header. This is for a production job of 50 units at a time, the header style/ design is fixed. Three components are to be attached to the header, 1 x 2 wire solenoid and 2 x 4 wire sensors. From what I can tell the options are using barrel crimps with a connector or an IDC Receptacle. I would ideally like to use an IDC but have found that it is tricky to align each wire correctly, probably why they're mainly used for ribbon cables... Crimping 10 individual wires seems time consuming but is looking like the only viable way. Would appreciate some recommendations / alternatives. AI: Best practice is to crimp the ends of the wired onto barrel housings and insert them into appropriate housings. With the appropriate tools it's not as time consuming or as difficult as you may imagine with practice. Example shown is a Molex offering. House PN#:0022552101, 24-30AWG Crimp PN#:16-02-0069
H: Logic shutdown for CMOS oscillator I would like to add a !(SLEEP) line to this CMOS oscillator such that, in case of a low input on the !(SLEEP) line, the output PWM line is held constantly low. Two questions about the circuit: In this case do I need to use a Schmitt trigger for the AND gate? Is there a simpler, more professional way to achieve this? AI: I'd run the sleep line directly into pin 2 of IC2A instead of adding another AND gate. That way you retain the Schmitt trigger and don't add a race with the clock line at IC2A. NOTE: You could also feed the /SLEEP signal to pin 6 on IC2B instead, but then the clock is still free-running, and when you wake it you can never know where you are in the clock cycle. Addition: I'd also consider limiting the extents of R3 by adding another series resistor in there, to ensure you can never get zero ohms when the pot is at one end or the other.
H: Do I have to provide VCC to every VCC pin on the ATmega32U4 MCU? The ATmega32U4 has 7 VCC pins. Can I connect 1 of the 7 VCC pins to the power supply to power the MCU, and use the rest of the VCC pins (6 of them) on the MCU to power other peripherals, such as LEDs? AI: Not sure where you see 7. The datasheet shows 2 AVCC, 2 VCC, 1 UVCC and 1 VBUS. The 2 AVCC pins are used to power the analog circuitry, and not connecting them, and not filtering them, would mean poor analog-to-digital or digital-to-analog conversions. If you don't need the ADC or DAC features it's not mandatory. The VCC pins power the digital circuitry. You should connect both; YMMV if you don't, as drawing too much power can cause issues then. The UVCC pin is for powering the USB circuitry. Again, if you don't use it... VBUS is actually an input that connects to USB power, for sensing when a USB cable is connected. And then there are the GND pins. All should be connected. Technically one tends to be AGND, but still, connect it.
H: Single-supply amplifier design without output capacitor? I'm working to create a small amplifier using a single 12V supply. One (of several) limitations I have encountered is the necessary size of the output capacitors(1000uF to deliver 20Hz into 8ohm loads) and the associated power loss (1000uF @ 20Hz has nearly 8ohms impedance so it is using 50% of the output power). Is there any Single Supply amplifier topology that does not require an output capacitor? AI: Figure 1. A typical bridge mode configuration. Bridge mode amplifiers are popular in car audio systems. The output of each amplifier is biased to half-supply. (Since both sides of the speaker are at the same voltage in the quiescent state, no current will flow through the speaker.) One amplifier is fed directly with the signal while the other is driven with an inverted version. The output then "see-saws" about the mid-supply voltage.
H: RJ45 connector on breadboard For my project which for the next (many) months I want to make it on a breadboard. I ordered RJ45 connectors, but these do not fit on a breadboard. What would be a good way (or is there a way) to connect these on a breadboard? Of course I can solder 8 wires but than still the connector is hanging loose. Maybe there is a (not necessarily electric solution) how to keep these connectors in place somehow, but I don't have experience in it. AI: I have seen ready-made "breakout boards" being sold for RJ45 connectors (RJ45 connector + header pins on 0.1" pitch already fitted to PCB). Search the usual places. Alternatively, use hot glue to mount the RJ45 socket on a small piece of normal 0.1" "perf board" prototype PCB yourself (for mechanical stability). Then solder wires between the RJ45 pins and header pins which you have mounted in the "perf board".
H: What does a black band at the end of a resistor mean? I have a burned out, big, blueish-gray resistor (probably 1/2 or 1 watt) that I'm trying to find the value of, but there is a mysterious black band at the end of the resistor where the tolerance band would be - and obviously the tolerance/failure rate/temperature band cannot be black. What does it mean? 4.6mm x 15.5mm I have confirmed colors: Brown Black Gold Gold Black So I think it's safe to say it's a 1 ohm, 5% tolerance. But what is that last black band!? It's driving me insane. AI: (revised) brown = 1, black = 0, gold = x0.1 multiplier, gold = 5%, black = non-inductive (bifilar-wound wirewound). So it is a 10 x 0.1 = 1.0 Ohm 5% WW resistor. A body length of roughly 9 mm would suggest 1 W; at your 15.5 mm it is likely 2 W or more. On some non-wirewound types that last band is the rated tempco instead, and black would be 300 ppm, the highest for non-WW types - if yours is not wirewound, bet on that; otherwise... While I'm at it again: get a flame-proof type if you like. You can decide best from the colours that make sense in this example chart.
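As a cross-check on the value bands, here's a tiny 4-band decoder you can run against the colours you read (the extra black band is the non-inductive / tempco marker discussed above and is not decoded here):

DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
MULTIPLIERS = dict(DIGITS, gold=-1, silver=-2)     # exponent of 10
TOLERANCES = {"brown": 1, "red": 2, "gold": 5, "silver": 10}

def decode(b1, b2, mult, tol):
    value = (10 * DIGITS[b1] + DIGITS[b2]) * 10 ** MULTIPLIERS[mult]
    return value, TOLERANCES[tol]

ohms, tol = decode("brown", "black", "gold", "gold")
print(f"{ohms} ohm, {tol}%")    # 1.0 ohm, 5%, matching the reading above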
H: How is silicon for ICs produced? You hear in grade school that the input to silicon foundries is sand and ICs come out the other side, but you never much more about the refining process or what is actually involved in extracting silicon from the overall sand composition. How is silicon for foundries sourced and/or extracted from sand? Does it come from a special beach in an exotic location? From the nearest creek to a foundry? AI: They start with a seed crystal and grow it into a long ingot by pulling it out of molten silicon. Then they slice the ingot into thin wafers which move on to the IC production phase. Try reading this on how they process silicon from sand: https://www.pveducation.org/pvcdrom/manufacturing-si-cells/refining-silicon Metallurgical Grade Silicon Silica is the dioxide form of silicon (SiO2) and occurs naturally in the form of quartz. While beach sand is also largely quartz, the most common raw material for electronic grade is high purity quartz rock. Ideally the silica has low concentrations of iron, aluminum and other metals. The silica is reduced (oxygen removed) through a reaction with carbon in the form of coal, charcoal and heating to 1500-2000 °C in an electrode arc furnace. SiO2 + C → Si + CO2 The resulting silicon is metallurgical grade silicon (MG-Si). It is 98% pure and is used extensively in the metallurgical industry. An even greater production of silicon is in the form of ferrosilicon that is manufactured using a similar process to that described above but is done in the presence of iron. Ferrosilicon is used extensively in metals manufacturing. In 2013 the total production of silicon was 7.6 million tonnes and 80 % of that was in the form of ferrosilicon. Electronic Grade Silicon A small amount of the metallurgical grade silicon is further refined for the semiconductor industry. Powdered MG-Si is reacted with anhydrous HCl at 300 °C in a fluidized bed reactor to form SiHCl3 Si + 3HCl → SiHCl3 + H2 During this reaction impurities such as Fe, Al, and B react to form their halides (e.g. FeCl3, AlCl3, and BCl3). The SiHCl3 has a low boiling point of 31.8 °C and distillation is used to purify the SiHCl3 from the impurity halides. The resulting SiHCl3 now has electrically active impurities(such as Al, P, B, Fe, Cu or Au) of less than 1 ppba. Finally, the pure SiHCl3 is reacted with hydrogen at 1100°C for ~200 – 300 hours to produce a very pure form of silicon. SiHCl3 + H2 →Si + 3 HCl The reaction takes place inside large vacuum chambers and the silicon is deposited onto thin polysilicon rods (small grain size silicon) to produce high-purity polysilicon rods of diameter 150-200mm. The process was first developed by Siemens in the 60's and is often referred to as the Siemens process. The resulting rods of semiconductor grade silicon are broken up to form the feedstock for the crystallisation process. The production of semiconductor grade silicon requires a lot of energy. Solar cells can tolerate higher levels of impurity than integrated circuit fabrication and there are proposals for alternative processes to create a "solar-grade" silicon. I hope this answers your question.
H: Can I replace a NUP2105L voltage suppressor with a NUP3105L? Someone gifted me a device to read electronic control units from cars and trucks. They gave it to me because they don't use it, and they had also removed some components from it to fix another device. I am trying to rebuild it, but there's a missing component I am having trouble getting locally (and strict import rules where I live make importing it hard and expensive): a little SMD component marked 27E that looks like a transistor. After some googling I found it is a dual bidirectional voltage suppressor, NUP2105L. I have no idea what that means; my knowledge of electronics is very basic. The place where I went to buy it told me they don't have it, but that I could replace it with a NUP3105L that they do have. But I decided to do some research to make sure. So I tried looking at the datasheets (which I barely understand) and the first thing that makes me doubt it will be a good substitution is that the symbol of each component looks like 4 Schottky diodes, but they are pointing in opposite directions. My question is, would the NUP3105L be a good substitute for a NUP2105L? If not, is there another replacement I could use to just solder in place of the NUP2105L, without having to modify anything else? AI: You don't need it for basic functionality: it is an ESD / load dump / lightning protection device. The 3105 is rated for trucks with 24V systems, so it clamps at a higher voltage; the 2105 is standard for 12V systems. So the NUP3105L will work in its place - it just protects with a higher clamping level.
H: LED Strip Controller - Can someone check over my schematic? This is my first time designing a circuit with any kind of computer program, and before I ordered all the components and PCBs I just wanted to make sure I wasn't making any rookie mistakes, since the whole circuit is made up from datasheet examples and answered questions on forums. I've gone over it myself a couple of times, but since I haven't done this before I don't know what I'm looking for mistake-wise. My major concern is that the boost circuit on the left side isn't going to work, but I'm also worried that I don't have the right connections on the MOSFETs or the ICSP header. AI: Good news: you've picked MOSFETs that can work with a 5V gate voltage. It's not the best case - you might experience fairly high Rds(on) and they might become warm. I'm not sure of the purpose of the booster. Why do you need 24V LED strips when there are also lots of strips for 12V? However, this 2A boost converter could work. The board layout is very important at 1.2 MHz though. For a first design with switch-mode supplies I would recommend staying in the kHz switching range. I also have a few other remarks:
- Add an LC input filter to prevent noise upstream.
- Add more than 22uF of output capacitance.
- Make sure the gates and the on/off pin are at a known level during reset with some pull-up/down resistors.
- Add at least some 100nF and 1uF ceramics around the chip, and preferably a 10uF electrolytic near the connector.
I can't comment on the ISP connector; this will be entirely up to the symbol/footprint connections.
H: Switching pre-regulator and linear regulators I need to have a 5V supply with around an amp of current draw max, and a 6V supply with around a 4 amp current draw max. I was thinking: instead of having two switching regulator circuits, just have one 5-amp 9V switching regulator circuit feeding into an LM7805 and an LM338T (linear regulators) to generate my 5V and 6V rails, respectively. My Vin is around 18V. Pololu makes some nice switching regs that fit my needs. Are there any potential drawbacks to this arrangement? I heard it referred to as a switching pre-regulator. When answering please bear in mind I'm a hobbyist. AI: No, this sounds reasonable, if it is what you want to do. One problem is that you will drop 9-6=3 volts at 4 amperes; that's 12 watts. You need a hefty heat sink for that. A better option is perhaps to have a 6V output from a switching regulator, and a 5V low-dropout linear regulator converting 6V to 5V. The 5 volt regulator would only have to drop one volt, converting 1 watt into heat. This is still a non-trivial amount of heat for one IC, but as peufeu mentions in a comment, it is small enough to be dealt with using the copper on the PCB as a heat sink.
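To put rough numbers on the comparison, using the currents from the question:

# Heat dissipated in the linear regulators for the two arrangements.
i_5v, i_6v = 1.0, 4.0    # rail currents from the question (A)

# Option A: 9 V switching pre-regulator feeding both linear regulators.
p_a = (9 - 5) * i_5v + (9 - 6) * i_6v    # 4 W + 12 W = 16 W

# Option B: switcher set to 6 V for the 6 V rail, plus an LDO for 6 V -> 5 V.
p_b = (6 - 5) * i_5v                     # 1 W

print(f"option A: ~{p_a:.0f} W of heat in linear regs, option B: ~{p_b:.0f} W")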
H: Is it okay to use a device whose junction temperature exceeds its operating temperature? I am searching for a voltage regulator which gives an output of 3.5V at 2A with an input of 7.5V min. I have been looking at the L6932 HIGH PERFORMANCE 2A ULDO LINEAR REGULATOR. Using the junction-temperature calculation in the datasheet I get 220 degrees Celsius, but the maximum operating temperature is 150 C. Is it okay to use this regulator? And the thing is that the 2A current only flows for a certain period of time and then decreases. AI: To answer your title first: NO, you can not allow a device to exceed its max operating temperature. As to whether it is OK to use the device you indicated in your application, it gets more complicated. It really depends on the scale of "And the thing is that the 2A current only flows for a certain period of time and then decreases." If you only infrequently supply 2A for very short periods then the device will not heat up much and it will be fine. (Though I would not be using a 2A device to drive a 2A load anyway...) You need to calculate the worst-case thermal dissipation. That is not necessarily the 2A power value, though, nor is it the average power value. When it comes to power, you really need to look at how much power for how much time. There is a thermal gradient when power is applied, such that your device will heat up to a destructive temperature if the high load is on for more than a certain amount of time. Obviously you need to derate that, and never exceed that value. Further, there must be sufficient time between power bursts to permit the die to cool down to some base level. As others have mentioned in the comments, if you exceed those limits thermal runaway will likely kill the device, or worse. Note, however, that most modern regulators also have thermal protection that is intended to shut the device off should it get too hot. This should never be relied upon as a design mechanism, though; rather, it is a backup protection should the ambient temperature around the regulator or heat-sink get too high, exceeding what you used for your safe thermal calculations. NOTE: You CAN calculate using the worst-case power value and use large heat-sinks and even fans to ensure the thing never gets too hot no matter what, but that can be serious overkill if the high load is only for 250uS once every six months. However, when backing off the heat calculation, you also need to take failure scenarios into account. If something gets stuck in the on position, will the regulator catch fire? If the answer is yes, you should use additional heat sinking or protection circuitry.
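A rough way to put numbers on this is the usual Tj = Ta + P x theta_JA estimate; the thermal resistance and ambient used below are placeholders, not figures from the L6932 datasheet:

v_in, v_out, i_load = 7.5, 3.5, 2.0
theta_ja = 40.0     # K/W - depends heavily on package and copper area (assumed)
t_ambient = 40.0    # deg C (assumed)
tj_max = 150.0      # deg C limit from the question

p_diss = (v_in - v_out) * i_load            # 8 W of heat in the regulator
t_junction = t_ambient + p_diss * theta_ja
print(f"P = {p_diss:.1f} W, Tj ~ {t_junction:.0f} C (limit {tj_max:.0f} C)")
# 8 W at 40 K/W already lands near 360 C, hence the need for duty-cycling,
# a much lower effective theta_JA, or a switching regulator instead.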
H: Simple way to isolate video signal To avoid ground loops I want to isolate several video channels (base-band). I have a video source with 4 channels; they all share the same ground. I'm thinking about some transformer like those used in audio applications, with an impedance of 75 ohms. Two issues: I can't find a suitable product, and I will lose the DC offset. Is there a better way? AI: For composite video you'd need a wideband transformer, as the signal has quite a bit of low-frequency information. Well, I simply googled "composite video transformer" and came up with this. EDIT: Composite video bandwidth seems limited to less than 10MHz so I can reuse the research I did to select an SPDIF transformer. So, Murata DA102. Note transformers don't care about the 75R impedance. They do introduce a discontinuity though, so the length of transmission line from the transformer to the receiver should be kept short.
H: Figuring out the protocol of an SPI device without a datasheet I am pondering about interfacing with the iPad Mini 2 touch panel. It seems that no one has successfully done this and/or published anything about it, so I want to take a stab at it. As far as I understand the device uses a custom touch screen controller IC, meaning I can't get a datasheet for it. Luckily someone at least made a schematic for the device and we can see that the controller seems to talk to the uC through SPI. Given that I don't know the protocol for the IC would there be some way to find out the protocol by spamming the input and seeing what comes out of the output? Thank you! AI: I don't think you'll be able to get anywhere by just addressing registers at the input and seeing what comes out. For one, this doesn't answer how you are supposed to initialize the controller, which is vital to using the part. That's also a lot of data to analyze. You'll probably get a lot farther by sniffing the SPI lines of the touch panel in situ. Mechanically it may be challenging to get probes on those lines while it's connected to the rest of the iPad, but electrically it's worth it. Probe all four lines and record commands on startup (filter out anything where CSL is high). That'll show you how the uC initializes the controller, as well as the clock frequency. Next you can just look for patterns in the commands during operation and you should be able to figure out what registers it's polling to get touch info. This will be 1000% easier with a scope or logic analyzer that can decode and record SPI, because you can just export the list of commands to an excel doc and work from there.
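Once the decoded SPI traffic is exported (most logic-analyzer software can dump it as CSV), even a few lines of scripting help separate the polling commands from the initialization sequence. The column names below are hypothetical - adjust them to whatever your analyzer actually exports:

import csv
from collections import Counter

# Assumed CSV columns: frame_id, mosi_byte_hex (one row per byte).
frames = {}
with open("spi_capture.csv", newline="") as f:
    for row in csv.DictReader(f):
        frames.setdefault(row["frame_id"], []).append(row["mosi_byte_hex"])

counts = Counter(" ".join(mosi) for mosi in frames.values())
for command, n in counts.most_common(10):
    print(f"{n:6d}x  {command}")
# The most frequent frames are usually the touch-data polling commands;
# the rare ones near the start of the capture are the initialization sequence.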
H: Part identification: electret mic I'm currently trying to reverse engineer a remote in my headphones (one of these three button things). The remote seems to work fine, but the mic is intermittent - at times it will work perfectly, at other times it will go completely silent. In order to proceed further in trying to figure out what the mystery IC is doing, I need to figure out what microphone is being used so that I can identify the connections. All I know is it's an SMD mic with all pads underneath the chip so I can't trace them easily. Above is a photo of the microphone (apologies for the poor quality, I'll try for a better one later). I'm pretty sure that it must be an electret one as there are no inductors/filters anywhere that could otherwise create a supply for a MEMS microphone given that the whole controller is powered via a bias resistor embedded in the phone. (*) The markings are as follows: Top Line: 1005 Bottom Line: 7406 (I think. Might be 74C6) The overall dimensions are roughly 4 mm x 2.4 mm x 1.1 mm. The black dot is the microphone membrane itself (so entry is on top). The output of the mic is analog, not digital and seems to be in the order of a few millivolts. A cursory Google search returned nothing, so I'm opening the floor for suggestions. Again, the aim of this question is simply to identify the part. (*) This is for an Android phone typically a 2.2 V ish supply with a 2.2 kΩ or 5 kΩ series resistor (depends on which spec you read). For those that want some background: In terms of an overall picture of the circuit, as I say it's a 3-button remote in a set of headphones for controlling a mobile phone. The remote is supposedly both for Android and iPhone, but doesn't behave correctly. Initially on connection the mic works fine, but as soon as any button is pressed the mic will subsequently go silent until the whole thing is disconnected for a while. My best guess at this is that my phone is an edge case that doesn't work properly with the way they have designed the controls - iPhones and Android use different signalling schemes so they seem to have some extra circuitry involved to switch between them. I am trying to work out the various connections so that I can start modifications to suit the Android spec. Below is the schematic that I have thus far deduced. Again it is not possible to complete because I cannot physically see or measure what connections are beneath the microphone. In the schematic in the red box I've drawn the routing to match the physical layout as pictured in the photo. Of interesting note the voltage at the input is never more than 1.4 V. When it is working correctly (i.e. mic can be heard) the voltage is about 1.06 V. When the mic can't be heard it has jumped to 1.4 V. Pin 3 of the unknown 6-pin IC is sitting at 1.4 V when the mic is not working. When it is working this voltage drops to near 0 V. Beyond that it is hard to get an idea of what is going on from the voltages as the rest of the pins seem to be roughly the supply voltage with the exception of pin 1 which is sometimes 0 and other times floats up to 0.4 V ish. The voltage at the phone input when buttons are pressed are as expected - roughly 300 mV for VOL-, 150 mV for VOL+ and 0 V for CNTRL. From what I can assume the unknown IC is probably some form of multiplexer, but there is no information marked on the chip. 
But to be honest the aim is to simply identify the connections to the microphone after which I am planning to remove the mystery IC entirely and find a way to wire in the mic correctly. AI: Never mind. Simply removed the mystery IC and shorted pins 2 and 3 of it to ground (the mic was working previously when pin 3 was 0V). As if by magic everything works. I'm going to leave this as an unaccepted answer because it doesn't answer the main part of the original question which was to identify the microphone.
H: Complementary Transistor Pair I am wondering why there are no complementary transistor pairs to buy, since their use as individual devices is pretty common in audio amplifier applications. For a complementary transistor pair it is desirable to have properties as identical as possible (gain, bandwidth, breakdown voltages, etc.) - it is hard for an amateur like me to find two very closely matched transistors (one PNP and the other NPN; for BJTs, let's say). Are there any (power) complementary transistors available in any electronics shop? If there are, an example of them would be great! If there are no such transistors sealed together in one package, why is no one manufacturing such components? One more thing. I recently bought a lot of cheap NPN and PNP transistors of the same model (NPN -> BC337 & PNP -> BC327). I have found two almost identical transistors as far as gain is concerned, which is approx. 320 for both. Sadly these transistors are low power and cannot be used in the driver stage of a power amplifier, where currents get amplified to greater levels (1 amp and above). I kind of modified these two transistors so I can use them (two pieces as one) on the breadboard - if one starts heating, the other also starts heating by the same amount, making them work in conditions as identical as possible. What do you think of this "modified version" of two transistors as one transistor? Bad, dumb, nothing special, awesome? The transistors are held together with heat-shrink tube :D AI: Paired complementary transistors may come in separate packages (the old solution), or in a combined (e.g. 6-pin) package. Matching of NPN and PNP transistors may cover only a few parameters - switching times, capacitance, gain, etc. Depending on your design, some are more relevant than others. There are several choices:
BC846BPDW1, ON Semi, SOT-363, VCEmax=65V, ICcont=100mA
CPH5524-D, ON Semi, SC-74, VCEmax=50V (100V for VCBO of the NPN), but ICcont=3A
PBSS4112PANP, NXP Semiconductors, has maybe a difficult package (DFN2020-6), but VCEmax=120V, ICcont=1A
PBSS4160DS, NXP Semiconductors, SC-74, lower VCEmax (60V), ICcont=1A. Both NXP parts can be used for low-VCEsat applications.
SMBTA06UPN, Infineon, VCEmax=80V, ICcont=500mA
So, you see that low and medium collector currents can be accommodated, and voltages up to 100V. To my memory the maximum VCE voltage for matched pairs is around 150-200V. A power transistor pair with large VCEmax (200V) and ICcont=10A, sold as separate TO-3P packages for NPN and PNP, is the FJA4313 and FJA4210.
H: Best solder wire - Sn63Pb37 vs Sn60Pb40 vs ...? Usually my circuits are full of very fine-pitch SMD components. I solder the prototypes manually, which takes a lot of time. Good tools and high-quality solder can speed up the process. I prefer using leaded solder, as it flows better at relatively low temperatures. This way I can prevent my components from overheating. Leaded solder is not allowed for commercial products, but is okay for prototyping. There are several types of leaded solder wire on the market. I'm trying to find out which one is "best". Let's define "best" as follows: Low melting temperature (prevents overheating components). Good wetting of pads and pins. Preferably contains some flux, so one doesn't have to apply it all the time externally. Very fine diameter for soldering small components (like LFCSP package, 0402 or even 0201 resistors, ...) Price is no issue. I have several questions:     1. Tin - Lead alloys I read on Wikipedia that the Sn60Pb40 solder is very popular for electronics (I agree, I have used this one so far). Wikipedia also mentions that Sn63Pb37 is slightly more expensive but also gives slightly better joints. What do you think about Sn60Pb40 vs Sn63Pb37? What is actually the difference?   2. Exotic alloys But these are not the only solder alloys. More exotic combinations - containing tin + lead + silver and even with gold exist. Will these exotic combinations change the properties?   3. Bismuth and Indium alloys Some of you made me aware of Bismuth- and Indium- based alloys. I've dedicated a new question to cover them: Bismuth or Indium solder - what would you choose? NOTE: I use a solder-smoke extractor. AI: Sn63/Pb37 is better than 60/40 because it is a eutectic alloy. That means it has the lowest melting point of any Sn/Pb alloy, and it solidifies relatively abruptly at one temperature rather than over a range. Generally both are advantages or neutral. Combinations with small amounts of (say) gold tend to be for reducing the tendency of solder to dissolve the material (gold in this case). Many solders these days avoid the use of lead and are often mostly tin with other materials such as copper, bismuth, silver etc. This is done to reduce the toxicity of electronics that finds its way into the waste stream. In my experience it is worse in every way compared to tin/lead solder except perhaps in applications where high melting temperature is important. Flux another matter- there are a number of different types. If RoHS compliance (and toxicity) are of no concern, 63/37 Sn/Pb solder with RMA rosin flux is an excellent choice, and is good for high reliability applications. Fine for hand soldering or reflow. For production for world markets, it may be necessary to use lead-free solders with more finicky temperature profiles and inferior performance. Sometimes water soluble or no-clean fluxes are acceptable, depending on the product and how much it might affect the process (and possibly the product functionality).
H: Does a saturated inductor radiate more? Background: I am using a shielded inductor in a switching power supply, and am investigating an EMI issue. My hypothesis is that since the inductor is saturated, the magnetic field is no longer contained in the ferrite core, and the inductor becomes equivalent to an air core inductor. I know that air core inductors radiate their magnetic field much more than ferrite core inductors. Question: Once a ferrite core inductor saturates, does its magnetic field get radiated significantly more? AI: Once the core is saturated, it doesn't look like a magnetic core anymore when you try to increase the magnetic field. The inductor becomes essentially air-core for any additional current. While over-saturating a shielded inductor will cause the magnetic field to leak outside the shield more, this shouldn't cause much radiation by itself. More likely the sudden decrease in inductance causes large current spikes somewhere. These can cause radiation, depending on how they are routed, and therefore how much they act as antennas.
H: High-side switch vs complete MOSFET push-pull configuration to drive an inductive load In my application, a microcontroller with 5V logic needs to drive two 12V DC fans in voltage. The microcontroller will use a PWM drive signal with a carrier frequency of 32 kHz. For several reasons, I want to drive the fans with constant voltage sources, i.e. no unfiltered PWM should reach the fans. The fans draw a maximum of 0.3A per fan, which gives 3.6W per fan at maximum power. I need to evaluate two kinds of designs: The first one, which is a complete MOSFET-based push-pull configuration with an LC filter to linearize the PWM. The second one, which is a simpler high-side switch, directly driven from the microcontroller, with an LC filter at the output. I am aware that the first option is more efficient, but due to cost/complexity I would prefer the second option if viable. What are the benefits/drawbacks of one configuration over the other? The first option uses an LM27222 driver and a dual MOSFET IC: The second option uses an ISP452 (I should go for a faster-response IC): AI: I need to evaluate two kinds of designs: The first one, which is a complete mosfet based push-pull configuration with an LC filter to linearize the PWM. This isn't actually a push-pull design - the lower FET never pulls; current always flows through it from source to drain, not drain to source. This configuration is usually called an "active switch" configuration, because the lower FET is replacing the diode of your second design. What are the benefits/drawbacks of one configuration over the other? The active switch design can be more efficient. The diode-based design is likely to be less expensive. Exactly as you already determined. Deciding which is best for you will depend on the exact balance of the importance of cost and efficiency in your application. And how well you can optimize it will depend on all the detailed choices you make in designing either of the two options, and on your ability (or your organization's ability) to find good prices for the different types of components. Unfortunately (or fortunately) there are thousands of different controller chips, MOSFETs, and diodes out there to choose from, and pricing changes dynamically, so it's unlikely you'll ever find the absolute best choice. Edit: As another answer points out, either one of these is probably overkill (and excess cost) to solve your actual problem. The fan itself can be used as an inductive element to smooth a PWM signal. Or, if the reason you want "no unfiltered PWM" is to reduce audible artifacts, finding a controller that can provide a higher PWM frequency may be the most cost-effective solution.
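If you do keep the LC output filter, a quick way to size it is to put its corner frequency well below the 32 kHz PWM carrier. The L and C values below are purely illustrative, not a recommendation for your fans:

import math

L, C = 100e-6, 10e-6    # example filter values (100 uH, 10 uF)
f_pwm = 32e3            # PWM carrier from the question

f_cutoff = 1 / (2 * math.pi * math.sqrt(L * C))
atten_db = 40 * math.log10(f_pwm / f_cutoff)    # 2nd-order roll-off above cutoff
print(f"cutoff ~ {f_cutoff/1e3:.1f} kHz, PWM attenuation ~ {atten_db:.0f} dB")
# ~5 kHz cutoff here knocks roughly 32 dB off the 32 kHz carrier.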
H: Ground Plane for different GND types I am looking for recommendations or suggestions on ground planes for two different grounds. I am designing a battery-powered device with two voltage sources, one coming from the AC wall and the other from a Li-ion battery when no energy is available from the AC wall. In the PCB design stage I am concerned about the ground planes, because they are very important for the design (heat spreading, impedance control, EMI). I added a gas gauge to monitor my battery state, but it requires an Rsense from the battery to my load. My question: I now have "two" different grounds separated by a milliohm resistor, one for my DC-DC input converter stage and the other for the battery to the load. Is that right? How can I handle the ground planes if one is "isolated" from the other by a mOhm resistor? Should I keep my two different ground loops as small as possible? Details: input 9 V, load 5 V / 5 A, battery 3.7 V (high current, 15 A max). I added my block diagram for the design. Any recommendation for monitoring the current in a different way? Or any battery charger IC with a gas gauge feature? Thanks AI: Put the Rsense in the minus lead of the battery, i.e. so that it sees both the charging and the discharging current. This neatly does away with the split ground. Alternatively, you can use a gas gauge with high-side current sensing.
H: How do I lower the amps without tampering with the voltage? I have old speakers and decided to make them useful again by making them battery powered, but I am new to this so I'm not 100% sure what to do. In the speakers there is a small transformer which converts the 230 volts to 12 volts and 1.2 amps, but I have two 12 volt 4.5 amp batteries which I am going to use in parallel so that I get a longer battery life. This means I will have 9 amps coming from the batteries, but I don't know how to reduce the amps without interfering with the voltage. Thank you in advance. AI: A battery rating of 4.5A means that the maximum current it can supply shouldn't exceed 4.5A, or you risk damaging it (this can happen when the load resistance is too low, e.g. when the battery is shorted). As such, in this scenario two batteries connected in parallel and supplying 1.2A will be loaded at roughly 1.2/9 of their combined maximum rated current, without any negative side effects. That being said, before connecting anything, make sure you have read and interpreted the ratings correctly. Batteries are often labeled with their capacity in Ah, or ampere-hours. This is merely an indication of the charge stored in such a cell, not the current it can deliver. If the battery was actually rated 4.5Ah, it would mean it can operate for 4.5 hours providing a current of 1A. Such a discharge rate is rather low for most common batteries I personally know of.
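To make the answer's point concrete with the numbers in the question (assuming the ratings really are ampere-hours and the speakers really draw the full 1.2 A continuously):

$$Q_{total} = 2 \times 4.5\ \mathrm{Ah} = 9\ \mathrm{Ah}, \qquad t \approx \frac{9\ \mathrm{Ah}}{1.2\ \mathrm{A}} = 7.5\ \mathrm{h}$$

So the two batteries in parallel simply give roughly 7.5 hours of run time (somewhat less in practice, since real batteries don't deliver their full rated capacity at every discharge rate). Nothing needs to be done to "reduce the amps": the speakers only draw the current they need.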
H: Ceramic Capacitors What is the difference between general-purpose and automotive-grade ceramic capacitors if they have the same capacitance, temperature coefficient and operating temperature? Is there a manufacturing or design difference, or are they the same except that the automotive ones go through additional screening? AI: What it really means is that you are unlikely to find many low-cost Chinese suppliers qualified to AEC-Q200, due to the quality control and traceability it requires. The qualified parts tend to come from Japanese companies who take quality most seriously, like Panasonic, Taiyo Yuden and Rubycon for caps.
AEC-Q100: integrated circuits (ICs)
AEC-Q101: discrete semiconductor components (transistors, diodes, etc.)
AEC-Q200: passive components (capacitors, inductors, etc.)
They include temperature grades from 105 °C to 150 °C for interior or under-the-hood use. As a rule of thumb, MTBF drops by about 50% for every +10 °C rise, so life tests at elevated temperatures are mandatory, along with a bunch of other stress tests.
H: Why are there not more cores in modern CPUs? Many often confuse the meaning of Moore's Law... it refers to the number of transistors on a chip, not performance. A while back, it became apparent that the gains from increasing clock frequency on chips were not worth the expense, and chip makers started adding extra cores to CPUs. However, the increase in the number of cores on consumer chips has not matched the increase in transistors on each chip. I surmise that a lot of these transistors have gone into features such as prediction logic etc., because it is difficult for some workloads to be parallelized, or many programmers find parallelizing their programs too time intensive, or CPUs are optimized for existing programs. However, from my perspective, I would like to see transistors go into increasing core count and on-chip cache, as this would benefit my programs more than marginal increases in single-threaded performance, given that I have no trouble writing multi-threaded code for most of my particular goals. If I use the extra transistors for a really large cache, I will not have to make as many trips to memory, which can also be a big performance booster. Am I incorrect as to the reason core counts do not seem to be increasing at the same rate as the number of transistors? Or is there also some diminished return for increasing core count even for easily parallelized workloads, such as memory bandwidth? Why have core counts not increased at anywhere near the rate as the number of transistors on a chip? Edit: Just because a workload can be run in parallel does not mean it is an appropriate task for a GPU etc., which tends to deal with doing a lot of floating point calculations. CPUs have diverse general-purpose capabilities which more specialized chips lack. An example of this: let's say I have a set of 50 heuristic functions I need to run against a large set of data that is already in memory. This is easy to multi-thread: give each function its own thread, and you can multi-thread it further by dividing up subsets of the data for each function (if the data is not highly interdependent). You could easily saturate all the cores of even a top-end Xeon processor, but you won't be able to make much use of a GPU or SIMD. Or, just a common web application serving many different requests that do not need to be coordinated. Or, just several different applications running on the same server for political or administrative reasons. AI: There's a number of technical and business reasons, in no particular order:
1. Memory bandwidth becomes an issue with scaling of cores. Memory contention can actually decrease your performance.
2. Xeon Phi is the platform where core-hungry (and cash-loaded) customers can go.
3. Most software has been designed to run well single-threaded. This forms a chicken-and-egg problem. Why try and sell more cores when most customers can't use them? Most customers won't use them because hardware isn't built in a core-scaled fashion.
4. Many customers are more interested in IIO bandwidth. In that case, you just need enough cores to service IIO.
5. Intel Xeons do have many more cores as well, but you'll pay a pretty penny for them in general. In that regard, it's simply supply and demand.
6. Because transistor count continues to scale (although not really by Moore's law anymore either), single-threaded applications still dominate, and core processing power usually isn't the bottleneck, it's more effective to put those transistors to use making the cache larger and more efficient.
Basically, instead of parallelizing the workload by creating more cores, the cores are now getting fed better.
7. Lack of competition in the highly parallel compute segment prevents consumer-level pricing.
8. Most mainstream programming languages are ill-equipped to handle parallel code well. Even those that appear to handle it haven't found a way to make parallel code easy to debug. Potentially a new programming paradigm is necessary to overcome this.
9. Certain common OSes can actually suffer exponential performance loss the more forks you make, so even if you have the cores, the OS's handling of them ruins their usefulness. This is an extension of points 3 and 8.
H: How to get text in svg2poly for Eagle? I'm working with Eagle 7.7.0 and trying to import SVG files using svg2poly to put a logo on the silkscreen. To aid in the process, I've downloaded the current version of Inkscape, which is 0.92. I'm working from a combination of the instructions on the GitHub site and this tutorial from SparkFun. The short version of the problem is that I can import shapes just fine, but any text in the SVG file is either muddled beyond recognition or missing completely. I've tried different fonts, including the default sans-serif, and different sizes. I followed the instructions for breaking the closed-loop letters into multiple open shapes, and that had no impact. As an ultra-simple case, I tried to import an SVG that was just a lower-case 'l', which is essentially just a rectangle in sans-serif, and even that did not work. (Came out as a triangular shape.) The only things that I see that are different between what I'm doing and what's published: I'm using the current Inkscape (0.92) and the instructions are for "Inkscape 0.47 or newer". Are there any known problems with the newest version? Is there perhaps some trick to making text work that's not documented? I'm drawing / typing my examples directly in Inkscape, whereas some of the online guides talk about importing SVG that was made with a different program. More detailed description of steps for the simple case with just lower-case 'l':
Open Inkscape.
Make a text area and type lower-case 'l'.
Change font size to 72, but leave the font as sans-serif.
Select all layers (ctrl-alt-a).
Click the lock button to lock the height/width ratio.
Change units to mm and set height to 100.
Under File > Document Properties... select the Resize page to content drop-down and click Resize page to drawing or selection.
Convert object to path (shift-ctrl-c).
Ungroup (shift-ctrl-g). Repeat ungroup a few times. (For this simple case, nothing seems to ungroup. When I had a whole word there in an earlier trial, the individual letters each became their own entity.)
Select all nodes (F2 followed by ctrl-a).
Select Extensions > Modify Path > Add Nodes, leave defaults and click Apply.
Select Extensions > Modify Path > Flatten Beziers, leave defaults and click Apply. (Neither of these steps made any obvious change for the single 'l' or for larger cases with whole words.)
The single 'l' has no closed loops, so I skipped the related steps for that. (When I had whole words, I did follow these steps.)
Export as "Plain SVG" using Save As from the File menu.
In Eagle, type mark and click a location.
Type run svg2poly 0.1 and select the file created above.
Observe the result. In this case a triangular shape instead of an 'l'. (Again, it's sans-serif so the original was basically a rectangle.) In some cases with more text, I got apparently random polygons. Maybe a few looked vaguely like letters.
Are there known issues with svg2poly and newer versions of Inkscape? Are there undocumented tricks that I need to make this work? AI: I got this working by reverting to version 0.47 of Inkscape and then making a few additional changes to my file. I'd be curious to know which versions of Inkscape are compatible, but I'm not going to increment through them myself now that I have a working tool. FWIW, the last thing that held me back once I downgraded was realizing that the 'i' in my text was an "open shape" from the code's point of view. It looked like a square over a rectangle (two closed shapes) to me, but to the code it was a single shape with a break in it.
"Dividing" the dot from the rest ala the published instructions fixed that up.
H: Isolate UART Signal Pins I'm working with an embedded cell modem that will be living on a battery, hopefully for a long period. It only wakes up and sends data occasionally. To that end, my plan is to cut power to it when it's not needed to conserve power. However, I've noticed that if I put a MOSFET-based switch on the power supply to the modem, that seems to successfully shut it down, but there ends up being significant current leakage on the DIN and DOUT pins of the modem (which are connected to pins on the processor, of course). I'm hoping to add something so that I can send a high signal from a third pin of my processor, which would then allow signals to pass between the processor and the DIN and DOUT pins of the modem, but if I hold that pin low, it will isolate DIN and DOUT so that no current can flow. Any suggestions? AI: The TXD line of a UART is normally high when idle. Therefore if your modem is unpowered, since the ground is still connected, some current will flow from the TXD pin into the modem. In fact, the IC of the modem will have input protection diodes, which clamp any voltage above its VDD to the VDD rail. This could also potentially be dangerous, due to latch-up (and furthermore your TXD pin will be powering all the modem circuitry connected to its VDD rail...). My suggestion is: put a weak pull-down on the RXD line and:
a) When you need to turn off: disable the UART; put TXD as an output driven low (I just checked a random PIC18, and I don't see that they have a pull-down feature); turn off the modem.
b) When you need to turn on the modem: turn on the modem; enable the UART.
In theory, by outputting a low TXD while the modem is enabled, you might have some trouble, since a low TXD value is a start condition. But your MCU will be fast enough to complete all the steps in a few microseconds. Therefore: the steps will be faster than the modem initialization time at startup, and they will likely be faster than a "bit" time too, so that spurious "0" will just be ignored. Still, the stop bit will be missing, so the modem's UART should ignore any data.
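A sketch of that power-down / power-up sequence in C follows. The names modem_power_enable(), uart_init(), uart_deinit() and the TXD_TRIS/TXD_LAT pin macros are hypothetical placeholders for your MCU and board specifics, not real library calls:

    /* Power the modem down without back-feeding it through TXD */
    void modem_off(void)
    {
        uart_deinit();            /* disable the UART peripheral                    */
        TXD_TRIS = 0;             /* TXD pin as an output...                        */
        TXD_LAT  = 0;             /* ...driven low, so no current flows into the    */
                                  /* unpowered modem's DIN protection diodes        */
        modem_power_enable(0);    /* open the high-side power switch                */
    }

    /* Power the modem back up and restore the UART */
    void modem_on(void)
    {
        modem_power_enable(1);    /* apply power first                              */
        uart_init();              /* then re-enable the UART; the brief low on TXD  */
                                  /* is ignored because the modem is still booting  */
    }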
H: Back voltage on voltage regulators I am busy designing a custom board based on Arduino's pro mini design. In the official design, they have a MIC5205 voltage regulator supplying a constant 5V to the ATMEGA328 chip. However it is also possible to feed in 5V directly to the chip if you so wish, but when you do so, you are supplying 5V to the output of the MIC5205 chip. In my design, I want to replace the MIC5205 chip with a LM7805 voltage regulator. But based on a bit of research it seems like you cannot supply 5V to the output of the LM7805; based on a quick test, when you do so with 5V the chip doesn't seem to draw any current. After going through each datasheet (MIC5205 and LM7805) I couldn't find any information about the chips' ability to withstand voltage applied to the output pins. My question is finally: Can the MIC5205 actually handle voltage applied to its output? Is it bad to apply voltage to the output of the LM7805? How can I tell from looking at the datasheet if a chip can handle voltage applied to its output? AI: The problem with applying an external voltage to the output pin of an NPN-output type regulator is that this would apply a reverse bias to the emitter-base junction of the NPN. The emitter-base junction of a BJT typically has a very small breakdown voltage (the emitter is heavily doped). In the datasheet you might find this: However, two considerations should be made: You'll have a reverse current to the input, and this might cause some problems. The datasheet states that the limit voltage is 7V. In a 7808 (or a higher-voltage part, e.g. a 7812), this would actually be a problem, as you would likely be applying 8V or more. However, in your case, you're likely to externally apply only 5V to the output (otherwise a larger voltage would possibly destroy the circuitry the 7805 is normally powering). Therefore you might not need to insert the diode on a 7805.
H: Current distribution for parallel zeners Suppose you've got two 5V1 zeners in parallel. Due to slight differences between the two parts (tolerance on zener voltage), one of them will conduct more current. simulate this circuit – Schematic created using CircuitLab The one conducting the most current will heat up more. This will probably affect its properties. What kind of equilibrium will be reached? Will the current distribute evenly between them? I couldn't estimate myself what equilibrium would be reached because... I got confused by the temperature coefficients. Some zeners tend towards negative, others towards positive temperature coefficients. Negative temperature coefficients would be the worst-case scenario: the zener voltage would drop when the part heats up, drawing even more current. Nevertheless, while drawing more current the voltage over the diode will increase a little, because the zener I-V curve is not ideal. This would push some current to the other zener(s) mounted in parallel. But would that be enough to overcome the detrimental effects of a negative temperature coefficient? AI: Low voltage Zener diodes (about 5V) feature true Zener breakdown, i.e. band-to-band tunneling. The larger the temperature, the more likely the tunneling occurs, because the larger the carrier energy (and the smaller also the energy gap). Therefore, there could be thermal runaway. In fact, unlike normal diodes (or LEDs) in forward conduction, small mismatches can lead to large current variations, as the curve is very steep around the breakdown voltage. The Zener carrying more current will likely heat more (which will decrease the Zener voltage, etc.). High-voltage Zener diodes (e.g. 10V or more), instead, feature avalanche breakdown. The carriers are accelerated by an electric field, and if they reach enough energy, they will produce more electron-hole pairs, due to impact ionization (which are in turn accelerated by the electric field, etc.). However, the larger the temperature, the larger the lattice vibration and thus the smaller the probability that an electron can gain enough energy between impacts, therefore the less likely they can ionize (i.e. produce another electron-hole pair). In this way, there will be a negative feedback that will make the diode initially carrying more current less conductive (therefore the current will spread more equally). The datasheets in fact show different temperature coefficients for different Zener voltages.
H: How to implement OpAmp floating input protection I needed a VCCS, so I simulated the circuit below in LTspice and it worked great. I also designed the PCB, produced it, soldered it, and it still works great. But I have a big problem! When the input (positive input of the opamp) is left unconnected, and the load is connected, the opamp hugs the VPOS or VNEG rail (arbitrarily!) and the output current goes to 2.35A (I am lucky that I put that current limiting resistor in, so the current will never go above +/- 2.35 amperes). I think this is because of the input offset voltage of the MP38 and the fact that, with no input, it will be in open loop... so it hugs the supply rail (I do not understand why sometimes it goes to V+ and sometimes to V-). So my questions are: Is what I said above the problem, or is there more to it? How do I prevent this kind of behaviour (possibly with some schematics please!)? What could I do better for this circuit/PCB? Would this be a solution (R1, a 10k resistor pull-down in parallel with the input connector): AI: Without the input voltage applied the opamp is not in open loop. It still has feedback, but the feedback has become positive instead of negative. I do not think offset has anything to do with this. All 4 feedback resistors are 10 k, so the feedback to both inputs (inverting and non-inverting) is 1/2. If you disconnect the input source, then the feedback to the non-inverting input changes to 1, so it "wins" when compared to the (still 1/2) feedback of the inverting input. I disregard the fact that the negative feedback is taken from one side of the sense resistor and the positive feedback from the other side. But since the voltage across the sense resistor is small, I think I can safely do this. A possible solution might be to apply the input voltage on the inverting input side and ground the left side of Rin1. That way, when there is no input voltage source applied, the negative feedback would increase (which is OK, I think) but the positive feedback stays at 1/2. You might also have to swap the feedback signals around the sense resistor, since you would be introducing the input voltage at the negative side now. If possible, compare both situations in a simulator and see if that resolves the issue.
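A quick way to see the answer's point, assuming an ideal op-amp, a near-zero source impedance when the input is connected, and all four resistors equal to \$R\$ (as in the schematic): the fraction of the output fed back to each input is

$$\beta_- = \frac{R}{R+R} = \tfrac12, \qquad \beta_+^{\,source\ connected} = \frac{R}{R+R} = \tfrac12, \qquad \beta_+^{\,input\ floating} = 1$$

With the source connected, the positive and negative paths balance and the small sense-resistor voltage tips the net feedback negative, so the loop is stable. With the input floating, no current flows in Rin1, so the non-inverting input follows the fed-back node directly (\$\beta_+ = 1 > \beta_- = \tfrac12\$) and positive feedback wins: the output latches to whichever rail the initial offset or noise pushes it toward, which is also why the chosen rail looks arbitrary.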
H: Do I need a crystal for the Atmel SAM D20 MCU to run at 48 MHz? I am working on a very small device and would like to use the Atmel SAM D20 as MCU (this variant: ATSAMD20E16). Reading the data sheet, there is only one point I do not find clearly spelled out: Do I need a crystal to run this MCU at 48 MHz? I found there are various internal clock sources. Also the feature list states "Internal and external clock options with 48 MHz Digital Frequency Locked Loop". But interestingly every existing design I checked had a crystal added to the board. For my device, the clock can be imprecise. AI: From the SAMD20 datasheet, you have the internal 8MHz RC clock. This can be routed to the generic clock multiplexer 0, which is used as source for the DFLL48M. Please note that the RC oscillator could be imprecise (EDIT: precision is greatly improved by loading the factory calibration parameters), so the DFLL output will be too, but you stated that you can live with this. To increase accuracy, you can use the 32768 Hz oscillator (with a 32768 Hz external crystal) and the DFLL. On SAMD21 (i.e. not your case), you can also use the USB start of frame (1ms) as synchronization (in device mode), without having to use any crystal!
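For reference, a register-level sketch of the crystal-less route, using SAMD2x CMSIS register names. This is an illustrative outline rather than a verified init sequence; check the DFLL errata in the datasheet, and note that loading the factory coarse calibration from the NVM calibration row (omitted here) is what recovers most of the accuracy:

    /* One flash wait state is needed above roughly 24 MHz at 3.3 V */
    NVMCTRL->CTRLB.bit.RWS = 1;

    /* DFLL errata workaround: leave on-demand mode and wait before configuring */
    SYSCTRL->DFLLCTRL.bit.ONDEMAND = 0;
    while (!SYSCTRL->PCLKSR.bit.DFLLRDY) { }

    /* Enable the DFLL48M in open-loop mode (~48 MHz, accuracy limited by calibration) */
    SYSCTRL->DFLLCTRL.reg |= SYSCTRL_DFLLCTRL_ENABLE;
    while (!SYSCTRL->PCLKSR.bit.DFLLRDY) { }

    /* Route the DFLL output to generic clock generator 0, which feeds the CPU */
    GCLK->GENCTRL.reg = GCLK_GENCTRL_ID(0) | GCLK_GENCTRL_SRC_DFLL48M | GCLK_GENCTRL_GENEN;
    while (GCLK->STATUS.bit.SYNCBUSY) { }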
H: Low side N-Mosfet buck converter I often see schematics of basic buck converters. The majority of these schematics use a P-MOSFET as high side switch. Why is this design preferred over a low side, N-MOSFET buck converter? simulate this circuit – Schematic created using CircuitLab Would this design work? The load is a 12 V fan, 3.6 W. The MOSFET would be driven directly from the microcontroller in PWM, 5 V logic, 32 kHz carrier frequency. AI: Yes, that works. The advantage is that the low side switch is easier to control, since its input is ground-referenced. The downside is that the load is not ground referenced. If you are sure you have a floating load, then this is a very valid thing to do. I usually drive solenoids with a similar circuit, for example. This topology also works if the load is the primary of the transformer in a flyback switcher. For example, here is a snippet of a schematic I'm currently working on. The product is a piece of industrial equipment. Note that you can flip the inductor and the load. That fixes one side of the load to the positive supply, which reduces the common mode voltage swing of the load. In this case, I added a deliberate inductor, even though the load is sufficiently inductive to smooth out the individual pulses. The reason for L6 and C30 is to filter the voltage swings on the SolValve- wire. Without these two components, that wire would carry the full switching pulses. That would cause a lot of RF emissions. Note the Schottky diode to catch the flyback current pulses. Schottky diodes are good for this as long as the voltage isn't too high. 24 V is well within the range where a Schottky diode makes sense. You might wonder why I'm worried about pulses when the solenoid being driven is rated for 24 V, and that's also the available supply voltage. I could just turn on Q6 to turn on the solenoid valve. However, that takes a lot of power. I plan to turn on Q6 for about 500 ms to initially activate the solenoid, then fall back to a lower average current by using PWM. The PWM duty cycle will be chosen to ensure the holding current thru the solenoid, as opposed to the initial activation current. Many relays and solenoids are specified to require less current (or voltage) to keep them activated than it takes to initially activate them. The main advantage of this topology is how it's easy to control the low side switch. In this case, the VALVE signal is coming directly from a 0 to 3.3 V microcontroller digital output. This particular FET is rated for 37 mΩ maximum on-resistance with 2.5 V gate drive. At 285 mA, it will only dissipate 3 mW. That's not enough for you to notice the temperature increase by touching it with a finger.
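A quick check of the dissipation figure quoted above, plus the PWM holding idea in numbers (the 50% holding current used here is only an illustrative assumption):

$$P = I^2 R_{DS(on)} = (0.285\ \mathrm{A})^2 \times 0.037\ \Omega \approx 3\ \mathrm{mW}$$

And since the solenoid current at DC is set by its winding resistance, the average current scales with duty cycle: if the valve holds reliably at, say, half its 285 mA pull-in current, dropping to roughly 50% duty after the initial 500 ms full-on period cuts the coil's \$I^2R\$ heating to about a quarter while keeping it engaged.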
H: Electricity Energy/Power measure with Different power factor I am designing a circuit that measures the energy consumed from a 50 Hz AC supply (error must be < 1%), by measuring the voltage through a potential divider and the current from a shunt resistor. Then these two signals go to ADC inputs of a PIC18F46K80 microcontroller to be multiplied to get the power. The voltage signal enters a potential divider and an amplifier with DC offset and goes to the ADC input. In the current measurement circuit, the difference between the shunt terminals is amplified, and then an amplifier with two feedback resistors (an NMOS is used to select between them to modify the gain) amplifies it again. It then goes through a DC offset stage and on to the ADC input. The problem is that when the phase between current and voltage changes, the energy measurement error changes by 0.8% if the power factor of the measured energy changes from 1 to 0.8L. While working on this circuit I made these notes:
1- Capacitively coupled noise (at the OPAMP negative input) could affect the measurement if the power factor is changed. But in this case I am sure it is not the cause, because I measure the RMS value in the ADC to check whether there is capacitively coupled noise or not.
2- The OPAMP is not 100% inverting; it has some parasitic effects on the phase of the signal that comes into it. It depends on the gain and other factors.
3- I have a DC blocking capacitor that introduces a phase shift (time delay) between the voltage and current signals; this phase is very small but it affects the measurement accuracy.
I have addressed points 2 & 3 by adding a small software delay between acquiring the voltage signal and the current signal, to compensate for that phase shift/delay. I have two circuits: the old one works perfectly when the power factor changes from 1 to 0.5L, but it suffered from aging of a capacitor as described here (Ceramic Capacitor Aging Stack Exchange); the accuracy dropped after 2 months of operation. So I modified the circuit (NEW ONE) by increasing the capacitors and some resistors to minimize the effect of the capacitor aging. But now I can't even get an acceptable result, even though I have tried adding variable delays between reading the voltage and the current from the ADC. The Code:
    //================== voltage measurement ====================//
    set_adc_channel(VOLT_CHANNEL);    // voltage reading channel
    delay_us(3);                      // wait for ADC channel capacitor to charge
    voltage_binary = read_adc();
    delay_us(10);
    //================== current measurement =====================//
    set_adc_channel(Current_Channel); // current reading channel
    delay_us(3);                      // wait for ADC channel capacitor to charge
    current_binary = read_adc();
Differences between the two designs (new design / old design):
C16: 10uF / 4.7uF
C7: 100uF / 47uF
OPAMP: TL064ID / TL064INSR
D9: BAS70 / BAV99S
Resistor case code: 1608 / 2012
R38: 100K / 18.2K
R35: 470K / 82K
Sampling rate (samples/second): 3400 / 2700
The question is: what else could affect the measurement when the power factor changes? AI: Then these two signals go to ADC inputs of microcontroller PIC18F46K80 to be multiplied and get the power It is unlikely that simultaneous sampling will be used, so there will inherently be a phase error of one sample period. The problem is when the phase change between current and voltage, the energy measurement error changed by 0.8% if the power factor of measured energy changed from 1 to 0.8L.
Yes, that sounds like the problem of not sampling simultaneously - close to unity PF the error due to this will be small and, as the phase angle gets bigger (lower PF) the error will increase.
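To see how far a fixed sampling skew goes, here is a small-angle sketch; the ~35 µs figure is an assumption consistent with the delays and two ADC conversions in the posted code, not a measured value:

$$\frac{\Delta P}{P} \approx \theta \tan\varphi, \qquad \theta = 2\pi f\, t_{skew}$$

At 50 Hz, a skew of about 35 µs gives \$\theta \approx 0.011\$ rad. At PF = 1, \$\tan\varphi = 0\$ and the error is second-order, i.e. negligible; at PF = 0.8 (\$\varphi \approx 36.9^\circ\$, \$\tan\varphi = 0.75\$) the same skew already produces roughly \$0.011 \times 0.75 \approx 0.8\%\$ error, which matches the observed behaviour. This is also why the software-delay trick is so sensitive to any change in sampling rate or analog phase shift between the two designs.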
H: How does the voltage difference develop when MOSFET is biased with Current Source? I have a circuit in LTspice which performs as desired, but I cannot understand how it works. Basically I have an NMOS circuit shown below (fig a): Here are some values which I calculated previously for this specific transistor (by DC sweep of Vgs in the saturation condition, Uds > Ugs - Ut):
When Vgs = 0.8V, Id = 63uA
When Vgs = 1V, Id = 104uA
When Vgs = 1.2V, Id = 151uA
Now, if I wanted to do current biasing, I would set the current and take the corresponding Ugs from the same transistor, and then distribute this Ugs to a nearby transistor for biasing (voltage biasing). I am not sure if this links with the concept of current mirrors. The concept which I am not able to understand is: "How does the transistor develop the Ugs with a given current?" More specifically, in the given schematic, how does the Vgs (= 0.8V) get developed automatically with the given current of 63 uA? This seems obvious if I link the calculated value at 0.8V to 63 uA, but I am not able to understand how this voltage is developed by the MOSFET. Is it like, when we are sending a specific current into the transistor, the transistor then sets up a resistance (equal to Vgs/Id) and creates this voltage at the gate? So, in a MOSFET in the saturation region, the internal channel structure looks like: (fig b) In the above circuit, Vgd = 0. So, as per this post, Why MOSFET Pinchoff occurs, even though the channel is not formed at the gate-drain point, there is no restriction on current flow. So, is the voltage developed due to the shape of the channel (more at the source and less at the drain)? Now, the KVL should be: total voltage Vt = Vgd + Vgs. Since Vgd = 0, Vgs = Vt - (threshold voltage)? So, on the whole, I see that the MOSFET is acting like a resistor (whose value changes as per the current driven) or like a medium which allows current in a specific manner (the slope in the p-substrate in fig (b)) that creates a gate-source potential with the given Id. Is the above analysis correct, or am I missing something? (And out of curiosity, why does the source current have a negative direction?) AI: In a diode-connected MOSFET ("switched as a diode") there's local feedback going on. Imagine that the current source is 100 uA, so Id = 100 uA. Now what if the Vgs of that transistor was very high, much higher than Vt. What would happen? There would be almost no drain-source voltage (Vds), right? Since Vds = Vgs in this circuit, the above cannot be true. Vds cannot be very small. What will Vds be then? Well, equal to Vgs (obviously), so Vds must end up at a value which results in a Vgs which makes Ids = 100 uA flow. Suppose this goes wrong for some reason and we end up with a Vgs which is a little too low, resulting in the NMOS wanting to make 90 uA flow instead of 100 uA. So 100 uA is coming from above (current source), 90 uA is pulled from below (the NMOS). Now what happens to the voltage at the drain of the NMOS? The voltage will rise, because the current source insists on making 100 uA flow, so it raises the voltage, hoping to make the NMOS conduct more so that it will allow 100 uA to flow instead of only 90 uA. This rising voltage means Vds increases, so Vgs also increases. And AHA, our Vgs was a bit low. That's the feedback in action: Vgs is automatically increased if it is too low. The same is true for a Vgs that is too large; then the NMOS wants to conduct more current, for example 110 uA.
The NMOS would then pull the drain voltage down, which makes Vgs lower, which makes the NMOS pull less current, so the 110 uA is decreased back to 100 uA.
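To attach numbers to "Vds must end up at a value which results in the right Vgs", a simple long-channel square-law sketch can be used (the real transistor in the question clearly deviates from it, but the self-biasing argument is identical):

$$I_D = \tfrac12 k (V_{GS}-V_{th})^2 \;\Rightarrow\; V_{GS} = V_{th} + \sqrt{\frac{2 I_{bias}}{k}}$$

A diode-connected FET is always in saturation once it conducts, because \$V_{DS} = V_{GS} > V_{GS} - V_{th}\$, so the feedback described above simply walks \$V_{GS}\$ along this curve until the drain current equals the forced bias current; that \$V_{GS}\$ is then what gets copied to the neighbouring transistor in a current mirror.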
H: Controller to increase phase or gain margin I can't find any good sources on this one. The only textbook I have says nothing about it. I've been restricted to using some formulas given by our teacher, and I don't understand why they achieve the increase. He didn't want to work much on this kind of controller due to the little time we had left. So how do I go about designing a controller which increases the gain or phase margin? One thought: the gain margin is the gain for which the system is critically stable. Therefore moving the dominant pole further away from the origin could help increase the gain margin. As for increasing the phase margin to a specific value, I have no idea. Phase margin is my main concern, to be honest. AI: There are plenty of compensator types you can think of when attempting to compensate a given plant. First off, you need the plant dynamic response. This is what is called the control-to-output transfer function. From this response, you can infer what compensation strategy is needed to fulfill your goals. Basically, at least in power electronics, there are three compensator types: type 1, type 2 and type 3. They can be built around an active amplifier like an op-amp, an OTA or a shunt regulator (TL431), for instance. The below picture shows you what they can do in terms of dynamic response. You select the type by knowing the amount of "phase boost" you need to meet the phase margin criterion. The phase boost is the amount of positive phase lead you need to compensate the lag incurred by the power stage at the selected crossover frequency \$f_c\$. A type 1 is a simple integrator. It features a pole at the origin and lags the phase by 270° (including the op-amp inversion). There is no boost. A type 2 combines a pole at the origin and a pole-zero pair. By adjusting the distance between the pole and the zero, you adjust the boost up to 90° in theory. Finally, a type 3 adds another pole/zero pair to the original type 2 and lets you boost the phase up to 180°. I have a complete seminar on the subject of compensators that you can download here, and a book you could consider for closing the loop is this one. Good luck with your project!
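One common recipe for actually placing the type 2 pole/zero pair is the "k-factor" method (this is an assumption on my part; it may or may not correspond to the formulas your teacher gave):

$$k = \tan\!\left(45^\circ + \frac{\text{boost}}{2}\right), \qquad f_z = \frac{f_c}{k}, \qquad f_p = k\, f_c$$

This centres the phase boost geometrically at the crossover frequency. For example, if the plant lags enough that you need 60° of boost at \$f_c\$ to hit your phase-margin target, then \$k = \tan 75^\circ \approx 3.73\$, so the zero goes at \$f_c/3.73\$ and the pole at \$3.73 f_c\$; you can check that \$\arctan k - \arctan(1/k) = 60^\circ\$.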
H: STM32 HAL UART Transmit DMA problem After setting up my project for a custom STM32F7 board which includes a FT2232H UART<->USB converter I got multiple problems when sending (and receiving data). The code I use is mostly generated by CubeMX and is at the end of the post. First of all I can't get the stm to transmit at baud rates higher then the standard 115200, according to the datasheets both the FT2232H and the STM32F7 should be capable of at least 12M baud. For the FT2232H it's working as I send some characters from my terminal (USB side) and got the char back when I shorted the RX and TX pins on the FT2232H output side. Second is that I can't call the sendUART() function multiple times in a row, why isn't the DMA Fifo used to store the stuff I want to send? Also whats the right way to echo back all received data but make use of the fifo so no data is lost when it's not polled in time? Maybe these are dumb questions but I already tried to find a solution on here and the rest of the internet but can't find any. void MX_UART4_Init(void) { huart4.Instance = UART4; huart4.Init.BaudRate = 115200; huart4.Init.WordLength = UART_WORDLENGTH_8B; huart4.Init.StopBits = UART_STOPBITS_1; huart4.Init.Parity = UART_PARITY_NONE; huart4.Init.Mode = UART_MODE_TX_RX; huart4.Init.HwFlowCtl = UART_HWCONTROL_NONE; huart4.Init.OverSampling = UART_OVERSAMPLING_16; huart4.Init.OneBitSampling = UART_ONE_BIT_SAMPLE_DISABLE; huart4.AdvancedInit.AdvFeatureInit = UART_ADVFEATURE_NO_INIT; if (HAL_UART_Init(&huart4) != HAL_OK) { _Error_Handler(__FILE__, __LINE__); } } void HAL_UART_MspInit(UART_HandleTypeDef* uartHandle) { GPIO_InitTypeDef GPIO_InitStruct; if(uartHandle->Instance==UART4) { /* USER CODE BEGIN UART4_MspInit 0 */ /* USER CODE END UART4_MspInit 0 */ /* UART4 clock enable */ __HAL_RCC_UART4_CLK_ENABLE(); /**UART4 GPIO Configuration PA0/WKUP ------> UART4_TX PA1 ------> UART4_RX */ GPIO_InitStruct.Pin = GPIO_PIN_0|GPIO_PIN_1; GPIO_InitStruct.Mode = GPIO_MODE_AF_PP; GPIO_InitStruct.Pull = GPIO_PULLUP; GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_VERY_HIGH; GPIO_InitStruct.Alternate = GPIO_AF8_UART4; HAL_GPIO_Init(GPIOA, &GPIO_InitStruct); /* UART4 DMA Init */ /* UART4_TX Init */ hdma_uart4_tx.Instance = DMA1_Stream4; hdma_uart4_tx.Init.Channel = DMA_CHANNEL_4; hdma_uart4_tx.Init.Direction = DMA_MEMORY_TO_PERIPH; hdma_uart4_tx.Init.PeriphInc = DMA_PINC_DISABLE; hdma_uart4_tx.Init.MemInc = DMA_MINC_ENABLE; hdma_uart4_tx.Init.PeriphDataAlignment = DMA_PDATAALIGN_BYTE; hdma_uart4_tx.Init.MemDataAlignment = DMA_MDATAALIGN_BYTE; hdma_uart4_tx.Init.Mode = DMA_NORMAL; hdma_uart4_tx.Init.Priority = DMA_PRIORITY_MEDIUM; hdma_uart4_tx.Init.FIFOMode = DMA_FIFOMODE_ENABLE; hdma_uart4_tx.Init.FIFOThreshold = DMA_FIFO_THRESHOLD_FULL; hdma_uart4_tx.Init.MemBurst = DMA_MBURST_SINGLE; hdma_uart4_tx.Init.PeriphBurst = DMA_PBURST_SINGLE; if (HAL_DMA_Init(&hdma_uart4_tx) != HAL_OK) { _Error_Handler(__FILE__, __LINE__); } __HAL_LINKDMA(uartHandle,hdmatx,hdma_uart4_tx); /* UART4_RX Init */ hdma_uart4_rx.Instance = DMA1_Stream2; hdma_uart4_rx.Init.Channel = DMA_CHANNEL_4; hdma_uart4_rx.Init.Direction = DMA_PERIPH_TO_MEMORY; hdma_uart4_rx.Init.PeriphInc = DMA_PINC_DISABLE; hdma_uart4_rx.Init.MemInc = DMA_MINC_ENABLE; hdma_uart4_rx.Init.PeriphDataAlignment = DMA_PDATAALIGN_BYTE; hdma_uart4_rx.Init.MemDataAlignment = DMA_MDATAALIGN_BYTE; hdma_uart4_rx.Init.Mode = DMA_NORMAL; hdma_uart4_rx.Init.Priority = DMA_PRIORITY_MEDIUM; hdma_uart4_rx.Init.FIFOMode = DMA_FIFOMODE_ENABLE; hdma_uart4_rx.Init.FIFOThreshold = DMA_FIFO_THRESHOLD_FULL; 
hdma_uart4_rx.Init.MemBurst = DMA_MBURST_SINGLE; hdma_uart4_rx.Init.PeriphBurst = DMA_PBURST_SINGLE; if (HAL_DMA_Init(&hdma_uart4_rx) != HAL_OK) { _Error_Handler(__FILE__, __LINE__); } __HAL_LINKDMA(uartHandle,hdmarx,hdma_uart4_rx); /* USER CODE BEGIN UART4_MspInit 1 */ /* USER CODE END UART4_MspInit 1 */ } } void HAL_UART_MspDeInit(UART_HandleTypeDef* uartHandle) { if(uartHandle->Instance==UART4) { /* USER CODE BEGIN UART4_MspDeInit 0 */ /* USER CODE END UART4_MspDeInit 0 */ /* Peripheral clock disable */ __HAL_RCC_UART4_CLK_DISABLE(); /**UART4 GPIO Configuration PA0/WKUP ------> UART4_TX PA1 ------> UART4_RX */ HAL_GPIO_DeInit(GPIOA, GPIO_PIN_0|GPIO_PIN_1); /* UART4 DMA DeInit */ HAL_DMA_DeInit(uartHandle->hdmatx); HAL_DMA_DeInit(uartHandle->hdmarx); /* USER CODE BEGIN UART4_MspDeInit 1 */ /* USER CODE END UART4_MspDeInit 1 */ } } /* USER CODE BEGIN 1 */ void sendUART(char msg[]){ //HAL_UART_Transmit(&huart4,(uint8_t *) msg, strlen(msg),10000); HAL_UART_Transmit_DMA(&huart4,(uint8_t *) msg, strlen(msg)); } void echo(){ if(HAL_UART_Receive_DMA(&huart4, (uint8_t *)aRxBuffer, RXBUFFERSIZE) != HAL_OK){} else if(HAL_UART_Transmit_DMA(&huart4, (uint8_t*)aRxBuffer, RXBUFFERSIZE)!= HAL_OK){ } } AI: First of all I can't get the stm to transmit at baud rates higher then the standard 115200, according to the datasheets both the FT2232H and the STM32F7 should be capable of at least 12M baud. The hardware supports speeds up to 27 Mbit (well, you haven't told your part number, I'm looking at the F756 datasheet), but according to stm32f7xx_hal_uart.h, HAL won't accept a speed above 9M #define IS_UART_BAUDRATE(BAUDRATE) ((BAUDRATE) < 9000001) Moreover, it depends on the system clock speed, in the default configuration, when you don't touch the Clock Configuration tab in STM32CubeMX, everything operates on the 16 MHz internal HSI clock. That means at most 1 Mbit while you are using UART_OVERSAMPLING_16, or twice that if you switch to UART_OVERSAMPLING_8 (but then you'd lose noise detection). Second is that I can't call the sendUART() function multiple times in a row, why isn't the DMA Fifo used to store the stuff I want to send? Although there is a 16 byte DMA FIFO, it's not accessible to the software. There is no way to just append some more data to an ongoing DMA transfer. HAL does nothing but starts a DMA transfer from the buffer address supplied by the caller. You have to wait until the transfer is finished, or suspend DMA, and still wait until the FIFO is empty. You can of course allocate a buffer yourself, add data as it comes, and restart DMA whenever it finishes and there is new data in the buffer. Also whats the right way to echo back all received data but make use of the fifo so no data is lost when it's not polled in time? It looks like to me, that you can't have both DMA and interrupts on every received character. At least, the ISR status register value would be useless, and the interrupt handler won't be able to decide what to do. Reading it could even interfere with the DMA transfer. Therefore, you must choose one. Using DMA to put the data in a buffer (or two), you can then poll the transfer counter regularly in the idle loop or a timer interrupt. There won't be an immediate response, but it'd perhaps not matter at all, because the USB interface would incur some delay too.
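One common way to make repeated sendUART() calls work is to queue the data yourself and restart the DMA from the transfer-complete callback. The sketch below uses the real HAL functions HAL_UART_Transmit_DMA() and HAL_UART_TxCpltCallback(), but the buffer, variable names and sizes are mine, and interrupt masking around the queue bookkeeping is omitted for brevity; it assumes the HAL headers and the huart4 handle from the question are in scope:

    #include <string.h>

    #define TXBUF_SIZE 512
    static uint8_t  txbuf[TXBUF_SIZE];
    static volatile uint16_t tx_head = 0, tx_tail = 0;   /* head = write index, tail = read index */
    static volatile uint8_t  tx_busy = 0;
    static uint16_t dma_len = 0;

    static void start_tx_dma(void)
    {
        if (tx_head == tx_tail) { tx_busy = 0; return; }           /* nothing queued */
        tx_busy = 1;
        /* send the contiguous chunk up to the end of the circular buffer */
        dma_len = (tx_head > tx_tail) ? (tx_head - tx_tail) : (TXBUF_SIZE - tx_tail);
        HAL_UART_Transmit_DMA(&huart4, &txbuf[tx_tail], dma_len);
    }

    void sendUART(const char *msg)
    {
        for (size_t i = 0; i < strlen(msg); i++) {                  /* append to the queue */
            uint16_t next = (tx_head + 1) % TXBUF_SIZE;
            if (next == tx_tail) break;                             /* queue full: drop the rest */
            txbuf[tx_head] = (uint8_t)msg[i];
            tx_head = next;
        }
        if (!tx_busy) start_tx_dma();                               /* kick off DMA if idle */
    }

    /* Called by HAL from the DMA/UART interrupt when the current transfer finishes */
    void HAL_UART_TxCpltCallback(UART_HandleTypeDef *huart)
    {
        if (huart->Instance == UART4) {
            tx_tail = (tx_tail + dma_len) % TXBUF_SIZE;             /* release the sent chunk */
            start_tx_dma();                                         /* send the next chunk, if any */
        }
    }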
H: when given voltage, reverse voltage according to scale I am given a DC line with a variable voltage along a set scale: let's say 0-10V. I would like to send out a DC line with a voltage reversed along that scale. For example: 10 Volts in becomes 0 Volts out, 9 Volts in becomes 1 Volt out, 5 Volts in becomes 5 Volts out, 0 Volts in becomes 10 Volts out. Note that the polarity is not reversed. My question is: is there a device I can buy, or a relatively simple wiring scheme, that can get me there? My intention is to take a signal from a PID tension-controller that controls a brake, and instead power a motor when needed. Oof, good question on the current. Let's say it is one amp. AI: If you need the voltage just for indication or measurement, then a simple DC differential amplifier with unity gain will do the job, because the output voltage is 10V minus the input voltage: \$V_{out} = 10V - V_{in}\$: simulate this circuit – Schematic created using CircuitLab Note that this circuit is not as easy to build as it looks, even for an experienced person, because of the need for a negative supply (-12V). Also, generating the reference voltage ("10V" to the left-side leg of R2) can be problematic (e.g. a voltage divider or an external source).
H: Extending length of wire where the spark is produced on a mini tesla coil How can I extend / increase the length of wire where the spark comes from on my mini musical tesla coil? See my mini tesla coil below along with the quick connector I'm using to extend the wire with. When I try and extend the wire using a quick connector the spark goes out (I was only trying to extend the wire by about a foot). PS: I believe the circuit is a Slayer Exciter. AI: The circuit is not a slayer exciter, it is a PLL-driven SSTC. I expect your spark is going out because this is a PLL-driven coil. It is designed for a very specific frequency. When you extend the output wire you change the resonant frequency of the system and you will probably lose your phase lock. In order to extend the wire you would have to carefully adjust the capacitor and resistor values on the PLL chip (probably a 4046) in order to move the frequency capture/lock range back to where it needs to be. In other words, you can't simply extend the wire, and unless you're well-versed in phase-locked loops and how this particular circuit works, and how to modify it to meet your needs, you will not be able to make this change.
H: Long distance point to point microwave links - frequency and beam dispersion/angle? I occasionally see dishes on towers pointing horizontally, which I assume are long distance microwave links. Can anyone give me an example of frequency used and beam angle in radians of the directional beam? AI: Frequencies can be anything from a few GHz to a few tens of GHz, and 3dB angles depend strongly on frequency and dish size; usually you don't want to go much below a few degrees in the main beam, as that can get very hard to align over a long path. Antenna gains can be north of 30dB at each end of the link, so power input levels are often less than a watt, because you don't need more for most line-of-sight paths at the bandwidths most of these radios operate at. For example, Ubiquiti (which is the cheap stuff) does a 75cm dish for the 5GHz band that has 31 dB of gain and a beamwidth of maybe a few degrees or so; with maybe a watt up it and a 40 MHz channel bandwidth I would expect it to be workable for any tower-based path at basically constant altitude (curvature of the earth would likely be the limit if an intervening hill wasn't). For a given dish diameter, the beam tends to narrow as the frequency increases, so a 24GHz dish will usually be less than a foot or so in diameter just so the beam width remains something sane.
H: A question about differential measurement system Below represents a differential measurement system for data acquisition: As far as I understood, in a differential measurement system neither the outputs of the signals nor the inputs to the differential amplifier are tied to ground. That's why I have read in many places that this makes it immune to ground noise. But there is still a measurement ground (AI GND and G) for the system. My question is about the differential/instrumentation amplifier part. Does such an amplifier measure:
1-) the voltage difference between points A and B directly (without any reference)? Or (voltage difference between A and G) - (voltage difference between B and G)? I can already see both of the above as mathematically equivalent. But is this measurement ground G sometimes also tied to earth ground, which might be bouncing(?)
2-) Is the practice of connecting AI GND and G meant to prevent exceeding common-mode voltages? What happens if AI GND and G are not connected and neither the signal-side nor the measurement-side ground is connected to earth?
3-) Should the point G never be tied to the "earth ground" on either side? If it is, what would be the consequences? Or does it not matter?
AI: Consider the following differential amplifier circuit. simulate this circuit – Schematic created using CircuitLab Note the input to the op-amp is a voltage difference and is not referenced directly to the system ground. As such you are pushing/pulling current to/from the +/- pins. The op-amp uses that difference to operate regardless of ground. The key then is that both inputs must have their own common connection, whether that be through AI ground, or completely isolated. As for grounding. To answer your second point first, a connection to earth ground is beneficial for a number of reasons, including static build-up protection and EMI/EMC reasons. A single-point connection is preferred, such that the connection carries no current. Connection between AI Ground and GND, on the other hand, is more complicated. If they are close then they should normally be connected. If on the other hand the AI part is on the other side of the building, and/or there are large currents flowing through the ground bar, then differences in the ground level can introduce unwanted effects if you connect them together. It therefore gets complicated quickly, and a significant amount of time and effort needs to be put in to come up with the best grounding solution for each application. ADDITION: On re-examining your question I noticed you are using an instrumentation amplifier, not a differential amplifier. That changes things, since instrumentation amplifiers have a buffer stage on each input. For this type of circuit you may need some common reference.
H: Dielectric Transformer Oil Which commercial oil can I use for insulation of a high-voltage transformer + voltage doubler? The output voltage is about 60 kV max. AI: Nynas brand transformer oil is pretty common in this voltage range and is rated at 25kV/mm. When further "processed" in-house it can be up to 75kV/mm. Typically what happens is, when transformer factories get a truckload, they discard the first bucket's worth, fill up their tanks, and then sample-test it with a slow AC ramp using smooth electrodes of a predefined shape and gap, measuring the breakdown voltage and repeating 10 times. The results are often all over the map, from 25kV to 45kV. Test houses like Heismann will usually void the first 10 tests and then take 10 more. The reason for the variability is contaminants like a few particles of dust. Now consider that normal clean office air has 100k particles per cubic foot, and figure out how you can avoid air and dust. That may be impossible, even with a perfectly clean container; tote boxes are the best idea, not your own. Chances are your gaps are around 10mm, so you might think no sweat. Wrong. What happens is that the ionic "dust" or moisture particles, even in AC, are attracted to the electrodes (which freely release electrons under an electric field), and such particles may see a different localized charge level due to differences in conduction and dielectric constant. Air is 1, oil is 2 and H2O is 80. What happens best case? Nothing. Typically? A slow relaxation oscillator, over hours, minutes, or seconds, detonates the charged particle as it slams into the electrode and goes "tick." Worse condition: it ticks at a steady rate of once per second or once per cycle (like corona); it may still be only a quiet tick sound, but it starts generating H2 in the oil, which will try to rise and escape. Normally preventive or "Condition Based Monitoring" of HV assets means annual sample testing of the oil for dissolved combustible gases like H2, C2H2, C2H4 and CH4. The higher the detonation energy, the higher the activation potential and the greater the amount of gas released. If you are lucky or good, nothing will happen. But this ticking is called Partial Discharge: PD is the canary in the mine shaft, so if you are aware, no worries. Any AM radio off channel will pick up the tick noise like lightning. Many are now using processed vegetable oil, as it can absorb higher humidity. Transformers bigger than 10MVA usually have air-separator bladder bags to keep air out and release hydrogen. For a clean solution, use extremely clean high-grade isopropyl alcohol that dries on glass without any residue, bake in an oven at 180 degrees for a few hours, then test with a slow HV ramp for the PDIV threshold (Partial Discharge Inception Voltage). Then, if normal, quickly apply silicone coatings or high-quality silicone sprays and build up a layer in a dust-free environment (class 10k or better, like a HEPA laminar air flow booth). But for giggles, try anything and test for PD activity to get experience and become an expert in this field. PD rise times vary from picoseconds to nanoseconds, so the RF signature goes past UHF. P.S. I investigated one local transformer factory that had supplied many transformers to wind farms in the < 10MVA range, from 20kV to 50kV, and they were failing DGA sample tests for H2 levels rising above the lower explosive limit (LEL). It was a million dollar liability issue, so they invited me to find out where the problems were.
H: STM32F769 USB FS Pin PB13 Question Taken from the datasheet Pg. 186 "When VBUS sensing feature is enabled, PA9 and PB13 should be left at their default state (floating input), not as alternate function. A typical 200 μA current consumption of the sensing block (current to voltage conversion to determine the different sessions) can be observed on PA9 and PB13 when the feature is enabled" I'm not too sure what this means - will it draw 200 uA from PA9 and PB13 when the USB is plugged in? Right now I have PA9 connected to VUSB, but PB13 connected to an ethernet TXD signal (unfortunately I don't think I'll be able to move the signal to the other place where it's offered - PG14). I've made my design in STM32CubeMX and it seems perfectly fine (all green). Will it be ok to leave PB13 connected to the ethernet signal? Thanks in advance! AI: The note in the datasheet should concern you if you have VBUS detection enabled. PA9 and PB13 are used for that and thus they can't be connected to other peripherals in such a case. If you don't need to sense USB VBUS, just disable that feature. It can be done by clearing the appropriate bit in the OTG_GCCFG register (check the Reference Manual page 1587), although there may be easier, higher-level ways if you are using the standard or HAL libraries. In short, make sure that VBUS sensing is disabled and you're good. If you say Cube MX didn't complain, probably it didn't enable it, as it's disabled by default, but you never know with Cube MX, better check.
H: Yamaha DTX Multi-12 drum pad SW/HW hack I have a Yamaha DTX Multi-12 drum pad. It's great, but its fatal design flaw is it doesn't support samples over ~5 sec / ~100 KB. I would love to be able to import pop-song-length CD-quality samples (5+ min. @ 192 Kbps). I don't really know where to start with this. My first guess: find a compatible 16GB+ SSD 2.5" hard drive, attach it to the motherboard, then tweak the software so it can recognize and accept importing larger file sizes/longer sample lengths. How you actually do all that is beyond me. Anyone have any insight into a fairly advanced project like this? Thanks! AI: I don't think the software can be 'tweaked', since it's stored in some flash/EEPROM and cannot be edited. The easiest way is probably to use an external sampler, assuming your Yamaha DTX has MIDI output. Modern samplers (or even a laptop with a MIDI interface) can give very long samples. But of course this would 'spoil' the project.
H: two or more SDP610 Pressure sensor Interface with Arduino uno using I2C I have interfaced one SDP610 pressure sensor using I2C with an Arduino Uno R3. Now I want to interface more than one sensor with the same controller (Uno board). I can't find any alternative (higher or lower) address in my sensor datasheet. In my code the address of the sensor is typedef enum { eSDP6xAddress = 0x40, //(64) } PRES_SENSOR_P; The I2C address consists of a 7-digit binary value. By default, the I2C address is set to 64 (binary: 1000 000). The address is always followed by a write bit (0) or read bit (1). The default hexadecimal I2C header for read access to the sensor is therefore 0x81 (from section 3.2 in the SDP610 datasheet). Is there any way to change the address of the SDP610 sensor, and is it possible to interface several on the same I2C bus? AI: You can not change the I2C slave address in this sensor. It's hard-wired inside. If you want to have several of these sensors, you have these options:
Create separate I2C buses. One of them can be bit-banged. [Having said that, I don't know how well the Arduino libraries can support this.]
Multiplex the I2C bus. If you connect one SDP610 at a time, there will be no address collision.
Use an I2C address translator.
Related threads: i2C multiplexer to use multiple sensor of the same type; How to connect multiple of the same device to an Arduino using I2C?
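As an illustration of the multiplexing option, here is a minimal sketch assuming a TCA9548A-type I2C multiplexer (my assumption; any equivalent mux works the same way): you select a downstream channel by writing a one-hot mask to the mux's own address, then talk to the SDP610 at its fixed 0x40 address as usual.

    #include <Wire.h>

    #define MUX_ADDR    0x70   /* TCA9548A default address (A0..A2 tied low) */
    #define SDP610_ADDR 0x40   /* fixed SDP610 address, same on every channel */

    void mux_select(uint8_t channel)      /* channel = 0..7 */
    {
        Wire.beginTransmission(MUX_ADDR);
        Wire.write(1 << channel);         /* enable exactly one downstream bus */
        Wire.endTransmission();
    }

    void read_all_sensors(void)
    {
        for (uint8_t ch = 0; ch < 2; ch++) {
            mux_select(ch);
            /* ...existing SDP610 read code, unchanged, addressing 0x40... */
        }
    }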
H: Mobile Power Bank 18650 Battery Charger My question is about the below power bank charging module. We connect 3.7V 18650-type batteries to this module. Then the output voltage of this module via USB is 5V. My question is how it produces 5V from 3.7V batteries. Here is the product link: Mobile Power Bank Module AI: This is a very basic question and you really should have put a little more effort into finding the answer yourself. Anyway, it uses a boost converter. See the black/red copper-wound component at the back: that's an inductor. It is an essential component in any switched converter.
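For reference, the ideal continuous-conduction boost relationship shows why such a simple part can do this (a first-order sketch that ignores losses):

$$V_{out} = \frac{V_{in}}{1-D} \;\Rightarrow\; D = 1 - \frac{V_{in}}{V_{out}} \approx 1 - \frac{3.7}{5} \approx 0.26$$

So the converter only needs roughly a 26% switching duty cycle with a full battery, rising to about 40% as the cell sags toward 3.0 V, to hold the USB output at 5 V.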
H: beta of inverting opamp amplifier I have a doubt about the beta of the feedback loop of an inverting amplifier. The expression for the closed-loop gain is: $$A_{cl} =\frac{A}{1 + A \cdot \beta}$$ What is the value of beta here, and how do I reach the final expression $$A_{cl} =\frac{R_f}{R_{in}}$$ AI: The circuit is not exactly equivalent to the two blocks. Some steps must be taken before arriving at that formula. \$\beta\$ in fact is not \$R_{in}/R_{f}\$. It is \$R_{in}/(R_{in}+R_f)\$ instead, which is the same as for the non-inverting configuration. Removing the internal summing node (notice that the signs are brought to the external one!) you get: (Then if you want you can bring the first block inside the loop, but \$A_{OL}(s) \cdot \beta\$ does not change. And that's the term you'll be considering to analyze the stability of the closed loop system). If \$A_{OL} = \infty\$, then you get: $$ A_{CL}=-\frac {R_{f}}{R_{in}+R_{f}} \cdot \beta^{-1} = - \frac {R_{f}}{R_{in}} $$ In general (not in this case, assuming an ideal OA), you could also have another block (transfer function) at the output.
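The intermediate step that is easy to miss, keeping the open-loop gain finite for a moment (same notation as the block diagram above):

$$A_{CL} = -\frac{R_f}{R_{in}+R_f}\cdot\frac{A_{OL}}{1+A_{OL}\,\beta}, \qquad \beta = \frac{R_{in}}{R_{in}+R_f}$$

Letting \$A_{OL}\to\infty\$ makes the loop term tend to \$1/\beta = (R_{in}+R_f)/R_{in}\$, and multiplying that by the input attenuation \$-R_f/(R_{in}+R_f)\$ gives the familiar \$-R_f/R_{in}\$.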
H: How to combine and protect two 12V automotive signals into one? In the below circuit, the DC component (6V) of a bridged car head unit speaker wire turns on a LM53601 buck converter. The LM53601 accepts voltages up to 42V. The +12V input into Q2 is protected to withstand the pulses described in ISO 7637. I have used this circuit in my car for some months now, and it works fine. But now I want to extend the functionality so that the LM53601 is turned on either with the +6V speaker signal or when the ignition is turned on. The EN signal of the LM53601 accepts up to 42V, but I believe that I will have to protect this input against the pulses of ISO 7637 as well. My proposed solution is shown in blue, where I suggest using TVS diodes and a reverse-polarity diode. This seems like an overkill solution to me, but I have not found any other way to protect the input. My question is regarding Q2. Should I put a diode (STPS2200U) or a resistor (or both) at the point where I have drawn a red arrow? The 3906 PNP transistor specifications are listed below. Update: The transient protection circuitry was taken from a TI reference design using a similar buck converter that has the same voltage input range (see below). This passed ISO 7637 testing, so that is why I thought it was smart to use the same design. The design uses a smart diode controller (LM74610) for reverse polarity protection, but I have used a 200V Schottky instead, which I believe should be ok. AI: Q2 is effectively a diode already, so no additional diode is required. I'm not too convinced about the clamping circuit though, and I don't see a pull-down on that enable line, so it's effectively floating when off, which is a no-no according to the data sheet. I think I'd build it something like this instead. D1 ensures the enable can't wander above the power rail, and R1 limits the current through that wire if it does. R2 pulls the enable low when everything else is off. Not sure why you need C1 though. It will cause a significant delay on disable, especially at 10uF. simulate this circuit – Schematic created using CircuitLab ADDITION: For reverse battery protection too, this is probably better. simulate this circuit
H: ARM Cortex M0+ CoreMark Ratings Currently when I work with microcontrollers, I use Microchip PICs and I'm happy enough with them. However, I decided to take a look at ARM for a possible upcoming project. I wanted to pick the best (fastest at calculations at the cheap/low-power end) ARM. On the ARM website (here), the Cortex M0+ is listed at 2.46 CoreMark/MHz. I thought that CoreMark rating would apply to all microcontrollers with M0+ cores, but on the Atmel SAM D20 page the microcontroller is listed as having 2.14 CoreMark/MHz. I read on some websites that the compiler affects the CoreMark score. I have also seen websites list an M0+ as having 1.77 CoreMark/MHz without mentioning a compiler (element14). I also noticed ARM talks about the M0+ on a 40LP process while the element14 site talks about it on a 90LP process. Unfortunately I am not knowledgeable about chip-scale processor manufacturing. So my questions are: Do variants of the M0+ processor core exist? If yes, how do you spot which is which? If programmed in assembly language, would all microcontrollers with ARM Cortex M0+ cores have the same CoreMark rating? By the way, the micro I intend to use is of the MKL03Z family. Any more info would be appreciated. Thanks! AI: Short answer: yes, variants exist; and no, they would not all have the same CoreMark rating. Long answer: ARM cores have features that each manufacturer may or may not decide to implement (e.g. caches, bus fetch width, FPU, MPU, etc.; of course the availability depends on the type of core, e.g. 7xx, 9xx, M0, M0+, M3, M7, etc.). Having or not having a given feature will impact the CPU performance. The following image is taken from the SAMD21 datasheet. As you can see, they decided to implement a fast multiplier and a 32-bit fetch width. This probably allowed the SAMD21 to reach the 2.46 CoreMark/MHz figure. The datasheet states: "The SAM D21 devices operate at a maximum frequency of 48MHz and reach 2.46 CoreMark/MHz." (By the way, the SAM D20 datasheet also states that it can reach that figure, and not just 2.14: "The SAM D20 devices operate at a maximum frequency of 48MHz and reach 2.46 CoreMark®/MHz.") If you programmed two different Cortex M0+ parts in assembly, featuring different options (e.g. one with a slow multiplier and a 16-bit instruction fetch width, and the other with a fast multiplier and a 32-bit fetch width), then the results would be different. Results would also be different if the test ran on memories with different access times. Also, the CoreMark results found on the CoreMark website specify the compiler version (and flags used to compile the test). Therefore they are also compiler dependent.
H: Can anyone identify this 12V DC power connector found on a radio transmitter? This connector is used on the back of a QYT KT-8900D radio, in between the radio unit and a cigarette-lighter-type plug. The unit itself takes between 12V and 15V, and I think less than 2A of current. I'm looking for some sort of standard name. AI: I believe the Kenwood part number is PG-2N; the connector is actually made by 3M. You didn't say which gender you needed, but you can start here: https://www.amazon.com/Agile-Shop-T-Shape-Kenwood-TM-G707-TM-D700/dp/B01N0E7S8I Bare connector: https://powerwerx.com/oemt-power-connector-source-side BTW, consider just cutting this connector off and putting Anderson Powerpoles on instead.
H: Should I worry about EMC when building a PWM stepper motor driver with 1.5 meter cable? I am building a board that will drive a stepper motor with a PWM signal. Is it likely that it will pass the EMC test? Here are the details: I am using the Trinamic TMC5130 stepper motor driver chip. The PWM frequency is 78.0 kHz. The board and the stepper are connected by a 1.5 meter flat ribbon cable (not shielded, not twisted). The current through the stepper motor coils is up to 900mA. Stepper coil resistance = 1.1 ohm, inductance = 2.6mH, power supply voltage = 12V. AI: Yes, you should worry about it if you care about EMC. Home switch cables should be routed well away from the motor cable, or else you will have to add a 0.1uF capacitor and a 1k pull-up on each switch input. Any analog sensing will fare even worse. Use twisted pairs (UTP), or better STP cable, for the stepper and sensor cables. Otherwise, with current sensing you may even be able to hear PWM noise modulating in the motor as a high-pitched squeal that changes as you move your hand over the cable while the motor sits at idle holding torque, due to EMI ingress on the unbalanced, unshielded cables. Many OEMs put large common-mode chokes on the motor lines (several wraps through a toroidal core, or a high-permeability clamp-on core) to raise the common-mode impedance and balance the lines in the VHF/UHF range, where the nanosecond rise times put their energy. Fig. 3-10 also shows methods of reducing egress/ingress. Choice of SMPS supply and earth grounding is also important to avoid unintended ingress and egress. Cables carrying pulse voltages from dead-time currents act like good antennas unless they are balanced with common-mode chokes, shielded, and shunted with RF Y-capacitors.
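As a quick sanity check on that 0.1uF / 1k suggestion (simple first-order RC figures, not specific to any particular switch input): the pull-up and capacitor form a low-pass filter with a corner at
$$ f_c = \frac{1}{2\pi R C} = \frac{1}{2\pi \cdot 1\,\text{k}\Omega \cdot 0.1\,\mu\text{F}} \approx 1.6\,\text{kHz} $$
which sits roughly 34 dB below the 78 kHz PWM fundamental (\$20\log_{10}(78/1.6)\$), so switching noise coupled onto a slow switch line is strongly attenuated before it reaches the logic input.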
H: How to measure line voltage (220V) with an arduino? I'm an Electrical Engineering student and I want to sense and sample the voltage signal coming from a wall socket (110V - 220V). I came up with the following circuit using a voltage divider with high impedance and a differential amplifier. I would like to avoid using a transformer because of weight constraints. Is there a better way to solve this problem? Should I use a capacitor divider instead? AI: Regular transformers don't have to be heavy. Figure 1. Miniature transformers. Hammond Manufacturing, for example, make 0.5 VA transformers smaller than a 25 mm / 1" cube. This provides isolation from mains (which your circuit does not). simulate this circuit – Schematic created using CircuitLab Figure 2. Analog interface. To read AC with your micro you'll need to bias the transformer output to mid-DC supply as shown. R1, R2 and C2 provide this function. R3 and R4 provide a potential divider to attenuate the transformer signal into the range suitable for your ADC.
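Once the signal is biased to mid-supply as in Figure 2, reading it with the Arduino is straightforward. Below is a minimal Arduino-style sketch of the idea; the pin, the sample count and especially CAL_VOLTS_PER_COUNT are placeholders that depend on your transformer ratio and the R3/R4 divider, so treat them as assumptions and calibrate against a multimeter:

    // Assumptions: 10-bit ADC, signal biased near mid-scale, sense signal on A0.
    const int   SENSE_PIN           = A0;
    const int   SAMPLES_PER_WINDOW  = 400;     // ~2 cycles of 50 Hz mains at ~100 us/sample
    const float CAL_VOLTS_PER_COUNT = 0.65;    // made-up scale factor: calibrate it!

    float readMainsRms(void) {
      float sumSq = 0.0;
      for (int n = 0; n < SAMPLES_PER_WINDOW; n++) {
        float centered = analogRead(SENSE_PIN) - 511.5;  // remove the mid-supply bias
        sumSq += centered * centered;                    // accumulate squared samples
        delayMicroseconds(100);
      }
      return sqrt(sumSq / SAMPLES_PER_WINDOW) * CAL_VOLTS_PER_COUNT;
    }

    void setup(void) { Serial.begin(9600); }

    void loop(void) {
      Serial.println(readMainsRms());   // estimated RMS mains voltage
      delay(1000);
    }

In a real design you would measure the actual DC bias point (a long-term average of the ADC readings) rather than assuming exactly mid-scale, since R1/R2 tolerance will shift it slightly.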
H: Can you please help me identify these components and state their function? Can you please help me identify these components and state their function? These were taken from the inside of a digital thermometer. AI: It looks like the baby got your digital thermometer. Can you please help me identify these components and state their function? LCD (liquid crystal display) provides a human readable display of the temperature in adequate light. Metallic contacts provide connections - most likely from the circuit board to the battery (see below). Screw(s) skruː noun, a short, slender, metal pin with a raised helical thread running around it and a slotted head, used to join things together by being rotated so that it pierces wood or other material and is held tightly in place. Probably the push-button membrane which is designed to bridge out contacts on the PCB to turn on and off the thermometer. A 'coin' cell to power the electronics. A piezo buzzer used to generate an audible 'beep' when a stable measurement has been obtained. The battery cap. Check the baby's diaper / nappy for the rest of the parts including the sensor and PCB.
H: Switch case statement in MikroC for dsPIC? I have three digital decision variables, so I have to handle 8 possible combinations of them. My problem is in understanding the behaviour of the switch-case statement in MikroC. I have to make a decision when the three digital pins are 0 0 0, 0 0 1, 0 1 0, and so on. How do I enter these values for the switch-case variable?

    switch (i) {
        case 1: i++;
        case 2: i++;
        case 3: i++;
    }

Here my i is three digital pins, such as 0 1 0. How can I implement this? AI: Your question is how to enter the case numbers. The answer is to enter them like this:

    switch (i) {
        case 0x00:
            // do something if i == 000
            break;
        case 0x01:
            // do something different if i == 001
            break;
        case 0x02:
            // do something yet again if i == 010
            break;
        default:
            // do nothing (optional)
            break;
    }

If you don't use the break statements, the code 'falls through' to the next case, which may not be what you want (or maybe it is). You can leave out any case numbers that you don't need. Obviously, the code you want to execute goes between the case and break statements.
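As for building i from the three pins: read them and pack them into one 3-bit value so it lands in the 0..7 range the switch expects. A sketch, assuming the three inputs sit on the low three bits of PORTB; the register, mask and placeholder pin names must be adapted to your actual wiring and compiler:

    unsigned int i;

    i = PORTB & 0x07;       // pins on RB2..RB0 packed into one value, 0..7

    // If the pins are scattered across different ports, build the value bit
    // by bit instead (PIN_A/PIN_B/PIN_C are placeholders for your own reads):
    i = 0;
    if (PIN_A) i |= 0x04;   // most significant decision bit
    if (PIN_B) i |= 0x02;
    if (PIN_C) i |= 0x01;   // least significant decision bit

Reading the pins once into i before the switch also guarantees all three bits come from the same instant, so the decision can't straddle a pin change.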
H: What is behavior of this virtual ground circuit when sourcing/sinking current? simulate this circuit – Schematic created using CircuitLab This is the virtual ground circuit from a battery-powered headphone amplifier (direct link to schematic PDF). What is the behavior of the pseudo-ground as the opamp attempts to sink/source current to/from it? A suggestion for improving performance of the amplifier is to switch from this passive circuit to an active one (using opamps, rail-splitter ICs, etc.) to provide a "lower-impedance" virtual ground. Before going down that road, I'm curious what my starting point is, so I can know what sort of improvements I might see. EDIT TO ADD: Another way of asking the question is: does this circuit have any benefits over grounding the load to V- (or V+) and decoupling it from the opamp with a 470uF capacitor? AI: It's amazing what passes for circuit design in some of the DIY forums. Your link goes to this schematic: The center point created is limited by the charge on the 220 uF capacitors, so particularly at high amplitude/low frequencies (as the amp draws longer and perhaps asymmetrical current pulses) the center point will wander. This is just the same as if the amplifier output had been capacitor-coupled, which negates what was probably thought of as an advantage (a DC-coupled output). To fix up this amplifier, I'd simply use a 2-pole supply switch and ground the center point of the batteries... voila: dual supplies and proper DC coupling.
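To put a rough number on how stiff that passive centre point is (back-of-the-envelope, using only the 220uF value mentioned above and assuming the two capacitors run from the centre point to the two rails, so they appear in parallel for AC; the divider resistors are much higher impedance and are ignored): the impedance the return current sees is roughly
$$ |Z| \approx \frac{1}{2\pi f \cdot 440\,\mu\text{F}} \approx 18\,\Omega \quad\text{at } f = 20\,\text{Hz} $$
That is comparable to a 32 ohm headphone load, so a large low-frequency return current will move the centre point noticeably, which is exactly the wandering described above. An active rail splitter typically brings this down to a small fraction of an ohm within its bandwidth and current limits.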
H: Exponential amplifier thermal stabilisation I am currently trying to build a DIY analog synthesizer. I've looked at a lot of different designs and I am quite stuck on understanding how some exponential amplifier designs work. In analog synthesizers an exponential amplifier based on a BJT differential pair is used most commonly. The problem is its thermal stability. While most of the effects cancel by using a differential pair, an effect related to the thermal dependence in the Ebers-Moll equation will still be present. A solution that I understand completely is using a tempco resistor, just as shown in the first image. However, I've also seen another solution (in the Arturia MiniBrute, for example) that I do not understand at all. It is shown in the second image: the exponential amp without the tempco resistor next to R5 is on the left and a weird "thermal stabilisation" part is on the right. The question is: how does it work? I don't see any connections with the left part at all, besides the fact that all of the transistors shown are on the same CA3046 chip. And which solution will be more stable? AI: As far as I'm aware there are 3 ways of compensating for the temperature drift of an exponential converter:

1. Temperature control - using a 'heater' to bring the NPN pair to a stable (high) temperature.
2. Passive compensation - using an NTC (-3300 ppm/°C tempco) so that as the temperature rises, you supply less current.
3. Active compensation - using a semiconductor junction to create a voltage that varies with temperature, and using that to control the gain of a VCA.

Method 1 is the simplest design with the smallest part count, but it isn't energy efficient since you're heating. I suspect there will be some drift until it reaches the set temperature, so you would need to leave it on for some minutes before tuning. This is what the "weird" right-hand circuit in your second image is doing: it heats the CA3046 die and holds it at a constant temperature, so its only connection to the expo pair is thermal, through the shared chip. Method 2 is the most common design. It is used in most analogue synths, so it should give you the expected stability. Method 3 is the most complicated and somewhat experimental design; the version I've seen was made by Jim Patchel to try and work out how the Curtis CEM3340 works. Jim Patchel briefly compares these 3 methods of compensation here.
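For reference, the term all of these schemes are fighting is the thermal voltage in the (simplified) Ebers-Moll relation:
$$ I_C \approx I_S\, e^{V_{BE}/V_T}, \qquad V_T = \frac{kT}{q} \approx 26\,\text{mV at } 300\,\text{K} $$
The differential pair cancels the drift of \$I_S\$, but \$V_T\$ is proportional to absolute temperature, drifting by about \$1/300 \approx 0.33\%\$ per kelvin near room temperature (the 3300 ppm/°C figure above). Unless the pair is held at a fixed temperature (method 1) or the control voltage is scaled with a matching tempco (methods 2 and 3), the volts-per-octave scale factor drifts by the same amount.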
H: Bias current in the Amplifier branch In this common-source CMOS amplifier, the current is set by PMOS (3); it is 2·Ibias due to the 1:2 mirror. However, NMOS (1) is also a current source with its gate at Vi, so why is the current in the branch not set by this? AI: I've used this topology (a constant-current-source load, for bipolar amplifiers) to achieve gains of 10,000x (80 dB) in one stage. The output was buffered using a JFET to avoid bias currents. The "vi" was part of a negative feedback loop; thus "vi" was servo-loop adjusted to hold the gain node near the mid-swing for adequate linearity. Summary: FET #1 cannot be operated as a constant-current source.
H: Guard bands in OFDM Consider the use of OFDM in 802.11a (5GHz band) as shown in this picture taken from NI's White Paper: For a 20MHz channel with a frequency spacing of 0.3125MHz between the subcarriers, there are a total of 64 possible sub-channels. As per the standard, 48 subcarriers are used to transfer data, 4 subcarriers are used as pilot symbols, and the remaining 12 (the DC subcarrier plus the band edges) are 'null' subcarriers. This brings us to the crux of my confusion/doubt. Are these edge subcarriers just 'null' or actually used as guard bands? As per the above picture, and CWNP's article as well, these tones are used as guard bands. On the other hand, according to Rohde-Schwarz's White Paper and Revolution-Wifi's article, these bands are simply designated as null subcarriers. Furthermore, if these subcarriers are indeed used as guard bands, is an additional guard band required? There's no mention of the requisite guard bandwidth in the IEEE 802.11 official standard documents. AI: The 'null subcarriers' are not driven; they have nominally zero energy in them, and are used as guard bands. Guard bands have two uses. They protect users of adjacent systems from spillage, or spectrum spread, of our signal into their band. They protect us from the spillage of an adjacent system's signals into our band. To protect adjacent systems, they are generated with no energy in them. Radio signals always spread somewhat due to various imperfections in the generation and transmission process. In the case of OFDM, the 64 carriers are described in the frequency domain, and then inverse-DFT'd into the time domain. The IDFT process means that all the carriers always 'exist'. Pilot subcarriers get a pre-determined description, data subcarriers get a data-dependent description, and the null subcarriers get zero. However, the limited numerical precision of the IDFT means there is some energy in those carriers after conversion, and then signal-chain analogue distortion lets a little more energy leak into them. The system has a specification for the minimum ratio of data to guard-band energy that a compliant system will achieve. Protection from the spread of adjacent systems is done by simply not attempting to use those carrier positions for data. I think part of the confusion is that the word 'subcarrier' is used in slightly different senses in different sources. It's a definition of a frequency on the air interface, which is equivalent to an index into the signal vector prior to the IDFT. The use of the word 'tone' sounds a bit misleading, as it implies a frequency and a non-zero level, but in the context of a guard band the level is always nominally zero, that is, several tens of dB lower than data or pilot tones. The provision of guard bands is system specific. They are sufficient for adjacent 802.11a channels to co-exist. If, however, an operator of a system with a wider spread were to seek permission to operate adjacent to a block of 802.11 use, they would be required to keep out of the 802.11 allocation. How they did this would be up to them, but they would probably implement sufficiently wide guard bands within their own system to stay within their allocation.
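A quick arithmetic picture of how much guard band this leaves (rough figures, treating each subcarrier as occupying one subcarrier-spacing-wide slot): the spacing is
$$ \Delta f = \frac{20\,\text{MHz}}{64} = 312.5\,\text{kHz} $$
and the driven subcarriers run from index -26 to +26, an occupied width of roughly \$53 \times 312.5\,\text{kHz} \approx 16.6\,\text{MHz}\$. That leaves about 3.4 MHz of the 20 MHz channel, i.e. the 11 undriven edge subcarriers, split between the two band edges as guard band toward the adjacent channels.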
H: How to replace TTL PROMs in a design operating at 20MHz I am attempting to come up with a reasonable way of implementing a design that was intended for cheap computer applications in the 1980s and which was originally designed to use a pair of TTL PROMs (a 28S42 and a 24S10) to control a state machine and decode state to outputs. The usual recommendation I see for these devices (which are still available, but at a ludicrously high price -- the cheapest I see from a distributor is in excess of £30 per chip) is to replace them with a parallel EPROM, but the fastest EPROM I can find has a 45ns access time, which is cutting it very fine for the 50ns cycle time I'm working to. I'd really like to find a replacement part that can match the 35ns access time of the original parts. What kind of options do I have available? I've considered switching to an FPGA, but (1) this is a much more expensive solution than I'd ideally like and (2) FPGAs seem to only be available in surface-mount packages with narrow pin spacing, and I'm not really comfortable soldering those. I've looked at CPLDs, but I can't really tell whether they are suitable for this application: how would I encode the transition table of the state machine, for example? Are there parallel EPROMs or a similar technology out there that can match the access times of these old PROMs? Are there other possible solutions to this kind of problem that I'm missing? AI: Besides the solutions suggested by @Brian Drummond in the comments, you could use: (a) a 5V non-volatile SRAM (e.g. CY14E256LA-SZ25XI) -- some NVSRAMs have an internal battery (which typically lasts 10 years), while others (such as the CY14E256LA-SZ25XI) have a built-in non-volatile memory which is copied to the SRAM at power-up, although that recall procedure takes some time (20ms; can you live with this?); (b) a 5V FRAM such as the MSM5412222B-25TK-MTL; (c) any other 3.3V non-volatile memory (if cheaper and fast enough) or a 3.3V CPLD, together with two voltage level translators. Of course all these solutions are not pin-to-pin compatible, and you'll need a PCB to make an adaptor. In the end everything might well cost more than the £30, as Brian Drummond pointed out. But if your system uses more than one PROM, you can use only one ("large enough") NVSRAM in place of 2 or more of those PROMs.
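To make the timing margin concrete (back-of-the-envelope figures with assumed register timings, not numbers from the original design): in a 50 ns cycle the PROM only gets what is left after the state register's clock-to-output delay and the setup time of whatever latches its outputs. Taking, say, 8 ns clock-to-output and 2 ns setup for fast TTL registers gives
$$ t_{access,\,max} \approx 50\,\text{ns} - 8\,\text{ns} - 2\,\text{ns} = 40\,\text{ns} $$
so a 45 ns EPROM misses even before allowing any routing margin, the original 35 ns PROMs fit with about 5 ns to spare, and a 25 ns NVSRAM or FRAM restores a comfortable margin.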
H: Wireless security for repetitive messages? I'm designing myself a custom home automation system. Eventually, I want to expand the system to include security/alarm functions. The easiest way I know to do this is to include wireless capabilities. This got me thinking about wireless security. The messages from, let's say, motion detectors should be encrypted; that's a given. The motion detector should send a regular keep-alive type message so that when a sensor goes dead or is disabled the system would flag it. This is where I see a weakness. If a sensor is constantly transmitting the same message ("no motion detected", for example), the message can be recorded and spoofed. Even if the message is encrypted, because it is constantly transmitted the same encrypted packet would be seen over and over. Without knowing its content, the packet could be captured and replayed after disabling the sensor. I know it's paranoid, but I'm trying to cover all my bases. Is there a wireless protocol that automatically accounts for transmission of repetitive messages in its encryption? If not, how can the repetitive-transmission weakness be patched? AI: Firstly: wired is going to be more secure than wireless. It is much harder to tap into a wired system and add messages, and (with a bit of thought) it is possible to make it very hard to cut the wires without being detected first. Wireless can be jammed easily, and is more vulnerable to spoofed messages etc. That aside, to avoid a replay attack you need to make sure that each transmission from the remote sensor is different. There are many ways of doing that, offering different levels of security, and which one to use will depend on what you consider acceptable security, acceptable complexity, acceptable component cost, and acceptable energy use at the sensor if it is battery powered. Some ideas:

1. Timestamp each "I'm alive" transmission before encrypting it. This needs a reasonably accurate real-time clock on the sensor, and probably an occasional re-synchronisation process. The synchronisation process could be as simple as the base station keeping track of the drift as it receives timestamps.
2. Use a nonce. A nonce is a single-use random number. The base station generates it and sends it out with an "are you alive" packet. The sensor includes the nonce in the encrypted reply. Another way of using a nonce is for the sensor to generate one which is almost random but guaranteed not to repeat for a long time. The base station then checks that it hasn't seen that nonce before. The message needs to be signed such that the attacker can't forge it with a new nonce.
3. Both the sensor and base station share a secret, such as a long one-time pad, or a good pseudo-random number generator. Each transmission contains the next part of the secret. The base station should probably accept any one of the next three parts, in case a transmission gets garbled or lost. Even then, it's probably possible to get out of sync.

Roll-your-own encryption is not usually considered a good idea. There are probably some good, well-researched, and thoroughly checked-out systems out there. The folks over on this stack exchange site might be able to help you pick a good one for your needs.
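A minimal sketch of the second idea's "never-repeating nonce" variant, seen from the base-station side. The message layout and the 16-byte tag field are illustrative only; the actual authentication/encryption should come from a vetted crypto library and is deliberately not shown here, since the point is just the replay check:

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint32_t sensor_id;
        uint32_t counter;    /* strictly increasing nonce, never reused */
        uint8_t  status;     /* e.g. 0 = "no motion", 1 = "motion" */
        uint8_t  tag[16];    /* MAC over sensor_id, counter and status */
    } keepalive_msg_t;

    static uint32_t last_seen_counter = 0;

    /* Accept a keep-alive only if its MAC checked out (verified elsewhere)
       and its counter is newer than anything seen before, so a recorded
       packet can never be replayed. */
    bool accept_message(const keepalive_msg_t *msg, bool tag_is_valid)
    {
        if (!tag_is_valid)                      /* forged or corrupted */
            return false;
        if (msg->counter <= last_seen_counter)  /* replayed or stale */
            return false;
        last_seen_counter = msg->counter;
        return true;
    }

Note that the sensor has to keep its counter in non-volatile memory (or re-synchronise after power loss), and in a real system you would track one counter per sensor_id rather than a single global one.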