H: Limitations of When to Use Mesh Current Method For this circuit, is it possible to solve for i solely with the mesh current method? I know that this would be even easier to do with application of the fundamental laws or the node voltage method; I just want to better understand the limitations of the mesh current method, and when it's best to use it. By the way, if you wanted to know, this is where I got the question from: https://web.cecs.pdx.edu/~tymerski/ece241/Examples_Node_Mesh.pdf It's the second question, and they answer it using application of the fundamental laws on the 13th page. AI: Given that you provided the link and also pointed out where the solution steps are given, I'll assume I'm not doing homework for you but simply answering your question, directly. With part labels and current directions chosen (all of them counter-clockwise): simulate this circuit – Schematic created using CircuitLab From this, and always starting from the upper-right corner of each loop, we find: $$\begin{align*} 0\:\text{V}+4\:\text{V}-R_3\cdot\left(I_1-I_3\right)-R_4\cdot\left(I_1-I_2\right)&=0\:\text{V}\\\\ 0\:\text{V}-R_4\cdot\left(I_2-I_1\right)-R_1\cdot\left(I_2-I_3\right)-R_2\cdot I_2&=0\:\text{V}\\\\ 0\:\text{V}-R_3\cdot\left(I_3-I_1\right)-10\:\text{V}-R_1\cdot\left(I_3-I_2\right)&=0\:\text{V} \end{align*}$$ (Note that after solving the above set of linear mesh equations, \$i=-I_2\$.) Using sympy: var('r1 r2 r3 r4 i1 i2 i3 v1 v2') ans = solve( [ Eq(0+v2-r3*(i1-i3)-r4*(i1-i2),0), Eq(0-r4*(i2-i1)-r1*(i2-i3)-r2*i2,0), Eq(0-r3*(i3-i1)-v1-r1*(i3-i2),0) ], [i1,i2,i3] ) for x in ans: x, ans[x].subs({r1:2,r2:2,r3:2,r4:2,v1:10,v2:4}) (i1, -10/3) (i2, -3) (i3, -17/3) So \$i=3\:\text{A}\$. And yes, mesh can easily be used.
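For reference, here is a self-contained version of the answer's sympy calculation (same mesh equations and the same R = 2 ohm, V1 = 10 V, V2 = 4 V values) that can be run as an ordinary Python script:

from sympy import symbols, Eq, solve

r1, r2, r3, r4, i1, i2, i3, v1, v2 = symbols('r1 r2 r3 r4 i1 i2 i3 v1 v2')

# Mesh equations, one per loop, written exactly as in the answer
ans = solve(
    [Eq(0 + v2 - r3*(i1 - i3) - r4*(i1 - i2), 0),
     Eq(0 - r4*(i2 - i1) - r1*(i2 - i3) - r2*i2, 0),
     Eq(0 - r3*(i3 - i1) - v1 - r1*(i3 - i2), 0)],
    [i1, i2, i3])

# Substitute the numeric component values used in the answer
values = {r1: 2, r2: 2, r3: 2, r4: 2, v1: 10, v2: 4}
for mesh_current, expr in ans.items():
    print(mesh_current, expr.subs(values))   # i1 = -10/3, i2 = -3, i3 = -17/3

print('i =', -ans[i2].subs(values))          # i = -I2 = 3 A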
H: How do I control a 12V relay with a 1.8V microchip? Background: I have a 12V power supply connected to a switching regulator that produces a 1.8V rail. I have a microchip running at 1.8V and want to control a 12V relay via GPIO. The relay is used to open and close a lock that only works on 12V. I need help with: (1) must-operate voltage, (2) must-release voltage, (3) coil voltage, (4) switching voltage, and (5) current rating. What I have so far: I think I need an operate voltage of 1.8V because the chip outputs 1.8V GPIO signals, a release voltage of 12V because that's the supply the lock needs, a coil voltage and switching voltage of 12V because of the lock, and a current rating of 2A because the lock draws 2A. References for the lock: I need help figuring out the right relay to use and how to hook up the relay to the microchip and lock. AI: You could use a BJT + MOSFET or a very low Vth MOSFET. An example of the latter would be the SQA410, which will dissipate less than 1/4W when on. simulate this circuit – Schematic created using CircuitLab The 2A Schottky diode protects the MOSFET from the inductive kick when the coil is turned off. R1 is to ensure the MOSFET is off (and not partially on) if the MCU output is floating. R2 provides some measure of protection against ground bounce.
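As a rough sanity check on the low-threshold-MOSFET approach, the coil current and switch dissipation can be estimated as below. The coil resistance and the on-resistance at VGS = 1.8 V are assumed example values, not figures from the SQA410 or any particular relay datasheet:

# Rough check of relay-coil switching with a logic-level MOSFET.
# All component values below are assumed for illustration only.
V_COIL = 12.0        # V, relay coil supply
R_COIL = 360.0       # ohm, assumed coil resistance of a 12 V relay
RDS_ON = 0.10        # ohm, assumed MOSFET on-resistance at VGS = 1.8 V

i_coil = V_COIL / (R_COIL + RDS_ON)      # coil current when the MOSFET is on
p_fet = i_coil**2 * RDS_ON               # dissipation in the MOSFET
p_coil = i_coil**2 * R_COIL              # dissipation in the coil itself

print(f"coil current : {i_coil*1e3:.1f} mA")
print(f"MOSFET power : {p_fet*1e3:.3f} mW")
print(f"coil power   : {p_coil:.2f} W")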
H: Type-N vs SMA connector (relative merits) What are the relative merits of using a Type-N connector vs an SMA connector for an RF application up to \$18\,\text{GHz}\$? From the type-N wikipedia page (linked), it's stated that the type-N connector is favored when ruggedness is required. What does this mean specifically? In comparison with an SMA connector, does this mean that the type-N connector has a higher rated number of mating cycles? Some of my additional guesses are that it's more difficult to bend the inner conductor pin of a type-N and that the dielectric between the inner and outer conductors is less exposed to scratching during mating than the SMA connector. An additional advantage of type-N connectors is that they are meant to be hand-tightened, whereas SMA connectors should be tightened with a torque wrench to achieve the best performance. The main advantage of the SMA connector seems to be size, since it's quite a bit smaller. This means that it's easier and cheaper to mount to a PCB (requires less board space) and (as far as I can tell) there are a lot more options for SMA PCB connectors than for type-N. Is there an additional performance benefit to SMA connectors? Some manufacturers offer SMA connectors good up to \$26.5\,\text{GHz}\$, whereas I haven't seen the same for type-N connectors. Additionally, the higher performance connectors (3.5mm, 2.92mm, etc.) look a bit like SMA connectors (and indeed some can be mated with SMA). Does this mean that it's easier to achieve better performance (e.g., lower VSWR) with the structure of an SMA connector? Are there any other advantages/disadvantages to each that I haven't mentioned? Are my guesses correct? AI: You already worked out for yourself some of the important differences. One that you missed is power handling. The N-type connector can typically handle about 3x the signal power of the SMA without overheating: (Image source: Southwest Microwave) The peak power capability (governed by the dielectric breakdown limit) is similarly higher for N-type compared to SMA, due to the greater distance between inner and outer conductors. For example, Amphenol rates their SMA connectors for 1000 V rms (source) and their N-type connectors for 1500 V (source).
H: Help identify part: Schottky diode I have an electronic part marked MBR10100; I know it is a Schottky diode rated for 10 A, 100 V. The thing is, it looks like it has two diodes inside; at least that's what it shows when I test it with a multimeter. My question is: is each diode in the pair rated for 10 A, or will it only tolerate this voltage and current when I connect the two in parallel? I was not able to find a datasheet for my specific sample. AI: The datasheet says it is 10 A per package, so 5 A per diode.
H: how to recognize interupt handler on register level of efr32fg14 in the code bellow we have interrupt functions which play with the IF register. And there are functions called USART0_RX_IRQHandler functions which we dont call in the main() and in the function itself there is USART_IntDisable which does usart->IEN &= ~flags. But i cant see no where in the code bellow to activate USART0_RX_IRQHandler when ever transmition comes. Where is this mechanism? Thanks. #include "em_device.h" #include "em_chip.h" #include "em_emu.h" #include "em_cmu.h" #include "em_gpio.h" #include "em_usart.h" #include "bsp.h" // Size of the buffer for received data #define BUFLEN 80 // Receive data buffer uint8_t buffer[BUFLEN]; // Current position ins buffer uint32_t inpos = 0; uint32_t outpos = 0; // True while receiving data (waiting for CR or BUFLEN characters) bool receive = true; /**************************************************************************//** * @brief * GPIO initialization *****************************************************************************/ void initGpio(void) { CMU_ClockEnable(cmuClock_GPIO, true); // Configure VCOM transmit pin to board controller as an output GPIO_PinModeSet(BSP_BCC_TXPORT, BSP_BCC_TXPIN, gpioModePushPull, 1); // Configure VCOM reeive pin from board controller as an input GPIO_PinModeSet(BSP_BCC_RXPORT, BSP_BCC_RXPIN, gpioModeInput, 0); // Enable VCOM connection to board controller GPIO_PinModeSet(BSP_BCC_ENABLE_PORT, BSP_BCC_ENABLE_PIN, gpioModePushPull, 1); } /**************************************************************************//** * @brief * USART0 initialization (VCOM on xG1/xG12/xG13 boards) *****************************************************************************/ void initUsart0(void) { CMU_ClockEnable(cmuClock_USART0, true); // Default asynchronous initializer (115.2 Kbps, 8N1, no flow control) USART_InitAsync_TypeDef init = USART_INITASYNC_DEFAULT; // Configure and enable USART0 USART_InitAsync(USART0, &init); // Enable NVIC USART sources NVIC_ClearPendingIRQ(USART0_RX_IRQn); NVIC_EnableIRQ(USART0_RX_IRQn); NVIC_ClearPendingIRQ(USART0_TX_IRQn); NVIC_EnableIRQ(USART0_TX_IRQn); // Enable RX and TX for USART0 VCOM connection USART0->ROUTELOC0 = BSP_BCC_RX_LOCATION | BSP_BCC_TX_LOCATION; USART0->ROUTEPEN |= USART_ROUTEPEN_RXPEN | USART_ROUTEPEN_TXPEN; } /**************************************************************************//** * @brief * The USART0 receive interrupt saves incoming characters. *****************************************************************************/ void USART0_RX_IRQHandler(void) { // Get the character just received buffer[inpos] = USART0->RXDATA; // Exit loop on new line or buffer full if ((buffer[inpos] != '\r') && (inpos < BUFLEN)) inpos++; else receive = false; // Stop receiving on CR // Clear the requesting interrupt before exiting the handler USART_IntClear(USART0, USART_IF_RXDATAV); } /**************************************************************************//** * @brief * The USART0 transmit interrupt outputs characters. *****************************************************************************/ void USART0_TX_IRQHandler(void) { // Send a previously received character if (outpos < inpos) USART0->TXDATA = buffer[outpos++]; else /* * Need to disable the transmit buffer level interrupt in this IRQ * handler when done or it will immediately trigger again upon exit * even though there is no data left to send. 
*/ { receive = true; // Go back into receive when all is sent USART_IntDisable(USART0, USART_IEN_TXBL); } // Clear the requesting interrupt before exiting the handler USART_IntClear(USART0, USART_IF_TXBL); } /**************************************************************************//** * @brief * Main function *****************************************************************************/ int main(void) { uint32_t i; // Chip errata CHIP_Init(); // Initialize GPIO and USART0 initGpio(); initUsart0(); while (1) { // Zero out buffer for (i = 0; i < BUFLEN; i++) buffer[i] = 0; // Enable receive data valid interrupt USART_IntEnable(USART0, USART_IEN_RXDATAV); // Wait in EM1 while receiving to reduce current draw while (receive) EMU_EnterEM1(); // Disable receive data valid interrupt USART_IntDisable(USART0, USART_IEN_RXDATAV); // Enable transmit buffer level interrupt USART_IntEnable(USART0, USART_IEN_TXBL); // Wait in EM1 while transmitting to reduce current draw while (!receive) EMU_EnterEM1(); // Reset buffer indices inpos = outpos = 0; } } AI: It's the USART_IntEnable() line with the comment "Enable receive data valid interrupt".
H: IR2110 Simulation not working in LTSPICE I am trying to implement a half bridge topology as shown in the figure below: I have replaced the IRF450 with STGW40H60DLFB IGBT. I have also replaced the 11DF4 with UF4007. My LTSPICE schematic looks like: My VDS and VGS in the high side and low side is: When I run my LTSPICE simulation with a pulse with 100K frequency, I get: I can't spot a error in my LTSPICE, please let me know if you have any idea what it might be. MODEL Used for UF4007: .MODEL UF4007 D N=3.97671 IS=3.28772u RS=0.149734 EG=1.11 XTI=3 CJO=2.92655E-011 VJ=0.851862 M=0.334552 FC=0.5 TT=1.84973E-007 BV=1000 IBV=0.2 Iave=1 Vpk=1000 type=silicon Spice Model for IR2100 was downloaded from: https://www.infineon.com/cms/en/product/power/gate-driver-ics/ir2110/ Spice Model for STGW40H60DLFB was downloaded from: https://www.digikey.be/product-detail/en/stmicroelectronics/STGW40H60DLFB/497-14364-ND/4696260 AI: It looks like you used an autogenerated symbol for your IGBT, but you connected it wrong, because the order of the pins is D-G-S. But, instead of using an autogenerated symbol you can make things easier for you if you choose the [Misc]/nigbt symbol, which you can use directly as an IGBT. All you have to do is rename NIGBT with STGW40H60DLFB-V2, and you're done. You still have to add the correct prefix (right-click on the symbol, change Z to X), I thought that would be implied. Anyway, the IGBT subcircuit is full of behavioural expressions and it's not very convergence-friendly. I don't know if it will help with your particular schematic, but I poked around and managed to get it working in a simple test by making these changes inside the STGW40H60DLFB-V2.lib file: on line 70 change r_escusione 1z a1 500 to c_escusione 1z a1 10p Rpar=500 on line 73 change r_conv1 1y a1 10 to c_conv1 1y a1 10p Rpar=10 on line 114 there's a Grg1 ...; add this line, crg1 g2 g 10p rpar=1g right below it on line 168 add a ; at the beginning of the line, in front of E2 ..., then add these two lines below: g2 50 40 g d1k 1k r2 50 40 1m on line 204 add a ; at the beginning of the line, in front of E22 ..., and add these two lines below: G22 502 402 ss d1k 1k r22 502 402 1m You can help in your schematic by setting Rser=10...100m for the voltage sources (V3, V4, and V1; V2 can be ignored), adding Rser=1...10m to capacitors (all four), and adding Rpar=10...100k to the inductor. Also try changing V2, A1 and their connections like this: delete A1 and all the connections to the input pins HIN and LIN. V2 should be just sitting there in the schematic with no connections. add [Digital]/buf (not buf1) and connect its input to V2 and its outputs to HIN and LIN pins. There should be a new A1 in the schematic. add vhigh=6 tau=10n tripdt=30n to the new A1. These changes could help, too: add Vp=0.3 to the .model UF4007 card add this model for the 1N4148: .model 1N4148 D(Is=2.52n Rs=.568 N=1.752 Cjo=4p M=.4 tt=20n Iave=200m Vpk=75 Vp=0.3 mfg=OnSemi type=silicon) Try running your schematic with these changes. If you're religious, praying might help.
H: Differential instrumentation amplifier in LTspice not working I was trying to simulate an instrumentation amplifier in LTspice. The problem I am having is that the output, measured across C1, is wrong. Since I have an amplification factor of 11, I was expecting to get (V2-V3)*11, which should be 11 V, but I get 5.5 V instead. If I change the power supply for the op-amps from a single supply (15 V, GND) to a dual supply of +/-15 V, then it works perfectly. Any help? AI: A few quick calculations might make it a little clearer what is happening. When the supply voltages of the op-amps are not taken into account, the voltage across R1 should be 1 V. The resulting voltages at the output nodes would be 8 V and -3 V, as illustrated in this picture: The negative voltage at the output of U2 cannot be achieved without supplying a negative supply voltage. As a result, the output of U2 is forced to its negative supply rail, which is 0 V. This essentially means that the voltage drop across R1 and R4 is 3 V, and an output voltage of 5.5 V is the result, as can be seen in this picture: Notice that when the difference between the input voltages is small compared to the absolute voltages, the circuit operates with a gain of 11 even without differential supply voltages. So depending on the input voltage ranges, a differential supply might or might not be required.
H: Can I increase DC voltage using capacitors in series and then use a voltage regulator? I am (re)learning electronics and am currently playing around with simple circuits to practice a bit. For testing, I am building a simple circuit that can take 2 different input voltages (not at the same time): one from a 12 V supply and one from a simple Li-Po battery (3.7 V). As I want 5 V, I use a simple step-down voltage regulator for the 12 V input. Regarding the battery, I know I could use a step-up voltage regulator, but I would like to know if it is "OK" and "safe" to use two identical (non-polarized) capacitors to increase the output voltage of the battery, and then feed this "boosted" voltage into my 5 V regulator. That way I avoid using 2 ICs for a "simple" task. Also, to learn a bit more, what would be the drawbacks of this solution even if it is "OK" to do it? N.B.: I know that step-up/step-down voltage regulators exist, but I do not own one, and those ICs seem a bit more complex than a "simple" fixed step-down regulator. So for now, if I can still use my step-down regulator, that would be great. NB2: The Li-Po has an integrated circuit to protect it from overvoltage, so I suppose it's OK to boost the voltage. AI: No, you can't use just capacitors to do that. They will just sit there and do nothing, unless you use the caps as part of a charge pump, which can increase voltage. But it would be a bad idea to boost the battery voltage and then use a 7805, as that would increase losses and reduce battery life. If you don't want to bother designing a switching converter, you can buy cheap canned ones that will convert your battery voltage to 5V. For a one-off design, the choice is easy between spending lots of time selecting components for a DC-DC, and a canned solution... Note it is often the case that you think you need 5V but you don't. Most "5V" microcontrollers like the Atmega in Arduinos run just fine on one Li-Ion cell without voltage regulation.
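To put a number on why boosting and then linearly regulating is wasteful, here is a rough efficiency comparison. The lossless boost stage and the 80% efficient DC-DC are assumed figures for illustration, not measured values:

# Rough efficiency comparison for getting 5 V from a 3.7 V Li-Po cell.
# The converter efficiencies below are assumed for illustration only.
V_BATT = 3.7
V_BOOST = 12.0
V_OUT = 5.0
I_LOAD = 0.1            # A, assumed load current

# Option A: ideal boost to 12 V, then a 7805-style linear regulator to 5 V.
# The linear regulator throws away (V_BOOST - V_OUT) * I_LOAD as heat.
p_out = V_OUT * I_LOAD
p_in_linear = V_BOOST * I_LOAD          # boost assumed lossless (best case)
eff_a = p_out / p_in_linear             # ~0.42 even before boost losses

# Option B: a single canned buck/boost converter at an assumed 80 % efficiency.
eff_b = 0.80

print(f"boost + linear: {eff_a:.0%} efficient at best")
print(f"single DC-DC  : {eff_b:.0%} efficient (assumed)")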
H: Charging time of capacitor when used with this transistor configuration? I'm trying to calculate the charging time of the capacitor in this circuit. I know the PUT turns on when the voltage at the anode is higher than Vp: Va > Vp, i.e. Va > Vg + 0.7. For this circuit Vcc = 15 V. The gate voltage and Vp are: Vg = 10 V, Vp = 10.7 V. So the capacitor is going to charge to 10.7 V and then discharge through the Rk resistor. How do I find the charging time of the capacitor? AI: With the PUT off and Q1 operating in linear mode, the capacitor is charged with a constant current. We know that Ic = C * dv/dt. Ic is constant and has to be calculated, so the equation reduces to (V2 - V1)/T = Ic/C, in which V2 is the final capacitor voltage, V1 is the initial value, and T is the charging time to be found: Vb(Q1) = 20/25 * 15 = 12 V; V(R4k) = 15 - 12 - 0.7 = 2.3 V; Ic = V(R4k)/4k = 0.575 mA; then (10.7 - 0)/T = 0.575 mA / 1 uF, and solving for T gives T = 18.6 ms.
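The same arithmetic as a short script, using the values from the answer above:

import math

# Constant-current charging time of the timing capacitor (values from the answer).
VCC = 15.0
VBE = 0.7
C = 1e-6                      # F
R_EMITTER = 4e3               # ohm, the 4 k resistor in the answer's V(R4k)/4k term
VP = 10.7                     # V, peak-point voltage of the PUT

vb = 20.0 / 25.0 * VCC              # base voltage from the divider ratio in the answer: 12 V
ic = (VCC - vb - VBE) / R_EMITTER   # constant charging current: 0.575 mA

t = C * (VP - 0.0) / ic             # time to ramp from 0 V to 10.7 V
print(f"Ic = {ic*1e3:.3f} mA, T = {t*1e3:.1f} ms")   # ~18.6 ms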
H: Automatic power switching, grounds not connected I'm currently working on a project that requires a 24V DC input. That is no problem, as I have that power supply available. However, the project should also remain working when the supply fails or becomes unavailable. For that, I want to automatically switch to the backup supply. This will most likely just be a battery, but can also be another generator. There are a few ways to switch from power input automatically, but there is one important rule that I have to follow in this sector: the grounds may not be connected. This, for example, is no solution because of that rule. What would be the best way to tackle this? AI: the grounds may not be connected. I hope it's meant that the main and backup supplies' grounds may not be connected. Here's a concept diagram: simulate this circuit – Schematic created using CircuitLab Main and Backup supplies do not share the grounds here. CONTROL BLOCK is supplied from the Main supply. This block drives the relay and directs the SYSTEM's input to the Main supply by default. When the CONTROL BLOCK decides the Main supply is faulty (e.g. too low voltage) or the Main supply is not applied (i.e. unavailable) then the relay's contacts will switch to NC position and the SYSTEM will get the supply input from the Backup supply. hth.
H: In a NOR gate not connected to any power supply, how can the output be '1' if both of the inputs are '0'? I was reading about SR Latches and this very weird (and silly) question popped up in my head. Here, when A and B is 0, then Q is 1. Suppose that this NOR gate is lying in air, not connected to any power supply. Then as there is no potential or current passing through A or B, we can say that the inputs are 0 and 0. Then according to the truth table, Q must be 1! How is that even possible? What does it even mean? AI: The truth table is only true when there is power to the device. Without power, a NOR gate is a NON-gate, or NOT-A-gate.
H: What is this part in my Royal Enfield motorcycle speedometer unit? My bike meter got stuck after replacing my battery with a new one. I think due to new battery something burned in the circuit. I have images of the same. I have an electrician who can fix televisions etc. I thought he could fix the speedometer. He says the black chip (I think it's a diode, I am not sure though) got burnt. I am not able to find the part as I don't know what it is. I am not an electrical engineer (just a computer programmer.) I would like to know what the issue is, though. So please pardon me for my half-knowledge (rather zero-knowledge.) What does the symbol on it mean (one with the mirrored C and H without the horizontal line.) Is it a company symbol or scientific symbol - and what is this part? If I can get a replacement he says that he can fix it. AI: It's a unipolar TVS (Transient Voltage Suppression) diode. The package size and markings may be able to lead you to a replacement. It may be a 3.0SMCJ24A There is a non-zero chance that whatever killed the protective device (generically called a TVS) has killed more devices. If the proprietary microcontroller (MCU) U4 is fried, you're probably going to be out of luck as far as repair goes. If it's just the power supply parts or it was protecting an input line you may be okay. The battery replacement is probably related in some way. Either because removing the battery allowed the supply voltage to rise very high or because the battery was connected incorrectly for a moment.
H: Constant current LED driver using transistors I have this circuit: I understood the operation: On power-up Q1 and Q2 are off. There is no collector current so L1 is off. If the digital control input on the left is brought high (5 V) Q1 will turn on. Current will flow through L1, Q1 and R2. As the voltage drop across R2 increases to about 0.6 V Q2 will start to turn on and shunt some of the base current away from Q1. The result is that the circuit will settle at whatever Q1 emitter current will drop 0.6 V across R2. Questions: Are both the transistors (Q1 & Q2) in the active region at all times during operation? Will the load current through the load resistor and the LED will always remain constant and not exceed the limit regardless of the situation? Is there any scenario when either of the transistors will go into saturation or cut-off during powered condition? AI: Q2 is approaching saturation as the collector is one diode drop above the base. The saturation region begins where the collector base diode becomes forward biased. Just what constitutes forward bias in a transistor is somewhat subjective; the junction will start conducting at quite a low voltage. If the programmed current was high enough to cause Q1 to heat up significantly and lower the base emitter voltage significantly, (the device has a negative temperature coefficient of -2.1mV per degree C) then Q2 (which would not heat up as much unless it was in thermal contact with Q1) would more closely approach the saturation region. It is highly unlikely that would have any significant effect on the operation of the circuit unless Q1 got really hot (where you would choose a different transistor anyway).
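To make the "0.6 V across R2" settling point concrete, here is a quick calculation of the programmed LED current. The R2 value is an assumed example, since the original schematic values are not given in the question:

# Programmed current of the two-transistor constant-current sink.
# Q2 starts stealing Q1's base drive once R2 drops about one VBE.
VBE_Q2 = 0.6          # V, from the explanation above
R2 = 10.0             # ohm, assumed example value (not from the original schematic)

i_led = VBE_Q2 / R2   # the loop settles where R2 drops ~0.6 V
print(f"LED current ~= {i_led*1e3:.0f} mA")   # 60 mA for these values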
H: Confusion regarding the Q point of a transistor Many standard text books define the Q point as follows : "Q point or the operating point of a device, also known as a bias point, or quiescent point is the steady-state DC voltage or current at a specified terminal of an active device such as a diode or transistor with no input signal applied". Now let me explain why it is confusing to me. As we know the signal is supposed to apply at the base of the transistor, "with no input signal applied" means the base voltage is zero at some point, but that means the transistor is off at that point and as a consequence, there will not any collector current, missing something? AI: As we know the signal is supposed to apply at the base of the transistor, This is true for a common emitter or common collector stage, but for a common base stage, the signal will be applied at the emitter. "with no input signal applied" means the base voltage is zero at some point, This is not correct. "No signal" in this context means there are no variations about the Q point, or that the AC component of the input voltage is zero. It doesn't mean the instantaneous voltage is zero. but that means the transistor is off at that point and as a consequence, there will not any collector current, missing something? It doesn't. The collector current will be whatever is established by the DC bias at the Q point.
H: Attaching SDRAM to the STM32F469AG In the STM32F469-Disco schematics, the MT48LC4M32B2B5-6A is used, which features a 4M x32 (=1Meg x 32 x 4 banks) arrangement for a total of 128Mbits. Now I was wondering if I could attach an IS42S32160F to it, which features 512Mbits of memory in a 16M x 32 (=4Meg x 32 x 4 banks) configuration. The IS42S32160F has the same A[0:12], D[0:31] and control signals as the MT48LC4M32B2B5-6A. Can I use the IS42S32160F, obviously with its full memory range? During my search I came across this presentation, showing the FMC in a very abstract manner. On page 5 it shows that MCU Bank 1 can use a 64M x 4 configuration. Does that mean that I can use a single 1Gbit memory chip (64M x 32 x 4), or even two 1Gbit (64M x 32 x 4) memory chips, separated by CS signals? If yes, which signal needs to be routed to the CS pin of each chip? As an extension of the question above, is this also true for the IS42S32160F if we replace the Gbit chip with one of these? I want to use this RAM to later boot Linux from it. I did find this question on the site, but I can't seem to fit the answer into my questions. AI: Refer to the STM32F469 Reference Manual, Section 12.7 on page 357. The maximum SDRAM size supported on each of the STM32's SDRAM banks is 256MB, organized internally as 13 Rows, 11 Columns, 4 Banks, and with a 32-bit data bus. Your proposed IS42S32160 has 13 Rows, 9 Columns, 4 banks and a 32-bit bus - so there should be no trouble using it. You could use a 1Gb memory chip (since 1Gb == 128MB), but you'd need to ensure that the rows, columns & internal banks don't exceed what the STM32 can handle. You're likely to find many 1Gb ICs which have 8 internal banks (with fewer rows or columns), which would lead to only part of the memory being accessible. If you're prepared to do some manual bank-swapping you might be able to connect an 8-bank memory IC's BA2 pin to a GPIO on your micro. Or alternatively possibly use some external logic with both of the micro's CS outputs to both select the memory (MEMCS = !(!CS0 || !CS1)) and drive BA2 (BA2 = CS0 && !CS1). Of course this method means you can't use 2 8-bank memories, as you're using both CS outputs to drive one IC. Considering actual part availability of large SDR SDRAMs supported by the STM32 micro, probably the best way to max out the external RAM would be to connect multiple 8- or 16-bit SDRAMs in parallel to the STM's 32-bit data bus. For example: a pair of EM63B165TS 512Mb x16-bit on CS0 and another pair on CS1 for a total of 2Gbit over 4 devices; or 4 IS42S86400F 512Mbit x8-bit on CS0 and another 4 on CS1 for a total of 4Gbit over 8 devices.
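A quick way to check whether a given SDRAM organization fits the FMC limit quoted above is to compute its size from rows, columns, banks and bus width. The sketch below assumes plain SDR SDRAM addressing with one bus-wide word per column address:

# Size of an SDRAM from its addressing organization.
def sdram_bytes(rows, cols, banks, bus_bits):
    """Capacity in bytes: 2^rows * 2^cols words per bank, times banks."""
    return (2**rows) * (2**cols) * banks * (bus_bits // 8)

# Maximum the STM32F469 FMC can map per SDRAM bank (13 rows, 11 cols, 4 banks, 32-bit)
print(sdram_bytes(13, 11, 4, 32) // 2**20, "MB")   # 256 MB

# IS42S32160F: 13 rows, 9 columns, 4 banks, 32-bit bus
print(sdram_bytes(13, 9, 4, 32) // 2**20, "MB")    # 64 MB == 512 Mbit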
H: Simulation performance of bitwise operators and binary arithmetic operators In terms of performance, if you have, say, two inputs a and b with the same bit width, say 64, and you apply a bitwise operator versus a binary arithmetic operator (Verilog/SystemVerilog), which one takes longer to evaluate and why? For example, for a parity circuit I observed a big difference in simulation time when the parameters are changed, while for an adder the difference in simulation time isn't much. Code for the adder and parity circuits is shown below. module adder #(parameter width=64)(input logic [width-1 : 0] a, b, input cin, output logic cout, output logic [width-1:0] sum); always @(a, b, cin) begin {cout, sum} = a + b + cin; end endmodule module eparity #(parameter width=128)(input logic [width-1 : 0] data, output logic p); logic [width : 0] i_parity; genvar i; assign i_parity[0] = 0; for(i = 0; i < width; i = i + 1) begin assign i_parity[i+1] = data[i] ^ i_parity[i]; end assign p = i_parity[width]; endmodule AI: The key difference with a bitwise operator is that the number of bits has no effect on performance: each bit operation is independent of the other bits. But once you get to arithmetic operators, there is a dependency from LSB to MSB that creates a long timing arc. Updated answer to an almost entirely different question: What you are seeing is a difference in the level of coding abstraction. Simulators can add two 64-bit numbers on the host machine in one cycle. But if you wrote out the addition in terms of a sequence of Boolean equations, that would certainly take more simulation time. If you wrote the parity equation as p = ^data; that would certainly take less simulation time.
H: Confused with the formula for Power Factor Correction The formula provided in my lecture notes: $$C=\frac{Q_c}{\omega \:V_{RMS}^{\:2}}$$ Where $$Q_c=Q_{old}-Q_{new}=P\left(tan\left(\theta _{old}\right)-tan\left(\theta _{new}\:\right)\right)$$ Such that \$\theta\$ is the power factor angle, \$P\$ is the real power and \$Q\$ is the reactive power. I've tried starting with $$Q_c=I^{\:2}_{\:RMS}\left(X_L-\left(\frac{X_LX_C}{X_L+X_C}\right)\right)$$ and then solving for \$C\$ but that didn't help. Here is the circuit, \$C\$ is the capacitance of the parallel capacitor that should be added in order to get the desired power factor correction. AI: From comments - I'm not sure which formula you are confused with. It's the first one that I'm trying to derive, but can't seem to reach anything. – Essam Well real power (P) is \$\dfrac{V_{RMS}^2}{R}\$ and reactive power (Q) is \$\dfrac{V_{RMS}^2}{X_C}\$. And, given that \$X_C= \dfrac{1}{\omega C}\$ we can say this: - $$Q = \omega C \cdot V_{RMS}^2\hspace{2cm}\text{or}\hspace{2cm} C = \dfrac{Q}{\omega\cdot V_{RMS}^2}$$
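A worked numeric example of the formulas above, C = Qc/(w*Vrms^2) with Qc = P*(tan(theta_old) - tan(theta_new)). The load, supply and power-factor numbers are arbitrary example values for illustration, not taken from the lecture notes:

import math

# Power factor correction capacitor: C = Qc / (w * Vrms^2).
# Example values below are assumed for illustration.
P = 10e3            # W, real power of the load
pf_old = 0.70       # existing power factor (lagging)
pf_new = 0.95       # target power factor
V_RMS = 230.0       # V
f = 50.0            # Hz
w = 2 * math.pi * f

theta_old = math.acos(pf_old)
theta_new = math.acos(pf_new)
Qc = P * (math.tan(theta_old) - math.tan(theta_new))   # reactive power to cancel
C = Qc / (w * V_RMS**2)

print(f"Qc = {Qc/1e3:.2f} kvar, C = {C*1e6:.0f} uF")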
H: What is this MPU? I have a board model SWL-B70F, it's a wifi module used in Samsung AC. This board ship an "unknown" MPU, 256 Mbit DRAM (W948D6FBHX6E), 128Mbit Serial Flash memory (25Q128FVSG), a wifi module (BCM43143KMLG). The Ar18hsfnbwkneu Service Manual contains the block diagram below. On the top there is a row of 11 pin connected to the MPU, and on the bottom a double row of 14 pins also connected to the MPU (I suspect it to be JTAG port). As a starting point, I would like to be able to execute on this board a simple "Hello world" program, then in the future more complex things using the wifi. But I can't find documentation for this CPU. Does anyone knows about it? May I expect to find appropriate SDK, tool chain for it? Thanks AI: After some detective work, I can give you some steps towards an answer but not a full answer. Perhaps you (or someone else) can build on this. Does anyone knows about it? That logo is from "Core Logic" in South Korea (i.e. Corelogic Co. Ltd., who don't seem to be related to any other company of a similar name) so not a surprise that Samsung chose to use them. This also explains the "CL" prefix to the device part number CLM5510. The date code of the device shown in your photo is probably 23rd week of 2015, so that device is around 5 years old. A dead link (explained below) also mentions a 2012 date for that range of devices. The company "Core Logic Co. Ltd." merged with (took over?) the South Korean company "Comed Life Sciences Co. Ltd." in 2017 and "pivoted" into a Bio / Pharma company. The original company website - http://www.corelogic.co.kr - was abandoned in late 2017 (it now shows a "parking page" saying it is for sale by a domain name registrar) then the merged company rebranded to "Synergy Innovation" also in 2017. For a while, starting in 2018 and finishing some time later, the original Core Logic website redirected to http://www.synergyinno.com. That website redirection eventually lapsed. (http://www.synergyinno.com is still the live site for the merged Bio / Pharma company.) I found a distributor of the old Core Logic devices, including the CLM5510, and this webpage includes a (dead) link which claims to be to the datasheet of the CLM5510. From the (misspelled) name in the dead link - http://www.corelogic.co.kr/down_pb/Hwak_Feb2012.pdf - it seems that the CLM5510 was part of Core Logic's "Hawk" range of devices, which is described on that distributor's webpage as: Hawk is an ARM9 based processor optimised for Mobile TV and small multimedia players with full support for various worldwide Mobile TV standards (ISDB-T 1-seg, ATSC-M/H, CMMB). So that's it. In summary: The original manufacturer was "Core Logic" in South Korea, but that company doesn't seem to exist after 2017. If you can find any information about their "Hawk" range of devices, that distributor suggests your device is from that range. I couldn't find a working link to any SDK or other technical information for your device.
H: 3.5mm audio headphone jack I have a broken headphone audio jack from an electronic keyboard. With the jack broken, the speaker audio no longer functions. The manufacturer no longer supplies the part as it has been discontinued, and they claim they manufacture all of their own parts. They advised googling the part number and looking on eBay. I have done so, without luck. The audio jack functions in such a way that when the headphone plug is inserted, the audio to the speakers is shut off. I have the schematic of the jack from the service manual: I have 2 challenges: Find an audio jack that has the same arrangement with 9 terminals; If the above exists, the jack to have the same pin-out and spacing to fit into the PCB. I am unsure of the claim that the manufacturer indeed makes all of their parts rather than source parts from third parties that have already been designed and built, especially for parts of this type. I am assuming that the original manufacture may still produce it. What seems to make this part unusual is the additional of an extra switch. According to the circuit diagram, two of the switches disconnect the left and right speaker and allow the routing of the audio to the headphones. The third switch from my understanding is used as a signal to the piano's microcontroller to detect the insertion of a plug into the jack for the purposes of a headphone test function. I am, however, slightly confused with the schematic. Normally when I see headphone jack schematics the switches have an arrow head that is in contact with the pole or has a small gap, thus distinguishing between normally close or normally open, respectively. This symbol is void of those and I am assuming this a normally closed switch. The other symbols for jack plugs within the service manual do indeed show an arrow of the type I am used to seeing; this is the only one void of these. So my question here is whether others agree with my assumption? It is consistent with the symptoms of no audio through the speakers. My further questions relate to my challenges, and whether anyone knows of a headphone jack with the same arrangement, and a sign-post to such a part? I appreciate that my second challenge is somewhat more difficult, however if I can find a solder tab version, this will be most suitable as a few flying leads will be absolutely fine. I have a pic of the discontinued part obtained from a search using the manufacture part number. This isn't available from the picture source. AI: It is consistent with the symptoms of no audio through the speakers. You could probe with a scope to check if there's signal, or just short the pins that should be shorted by the dead switches and check if the loudspeaker works. I guess this is what the illustration means: When you insert the jack, the springs are pushed (red arrow) so pin 4 no longer makes contact with pin 5. I'm not sure whether pin 3 made contact with pin 5 before the jack was inserted, or if you need to insert the jack to make contact, but you get the idea. The same happens to the springs on the left on the picture, but I was too lazy to edit them with paint... Solutions if you can't find a replacement jack... You could buy any jack with a switch, use that switch to drive a relay, and have the relay switch the speakers and the other signal to the micro. You don't need to desolder the existing jack to do this unless you don't want to make another hole. 
Another solution: if one of the switches still works, you could cut the traces and use that switch to drive the relay. Or you could just use a manual switch labeled "headphones ON".
H: What's ground? Is it the point defined as 0 V of potential or voltage? If the latter, is it because we measure its voltage with respect to itself? I have some questions regarding the definition of ground that some people use. One question leads to the other so I've organized them below. Some people say that "ground is the point/node defined as 0 V". Is this definition correct? If no, what is your definition of ground? If it's correct, keep reading. If they mean 0 V of electric potential, then just clarify me this and I have no further questions. But if they mean 0 V of voltage, then I have two thoughts: The definition of ground ("the point defined as 0 V of voltage") is meaningless because voltage is the difference between the (electrostatic) potential of two points, yet in the definition of ground, they aren't specifying with respect to which point the ground has 0 volts. An example of this is like saying "I'm 10 cm taller" without comparing yourself to anyone; that phrase has no meaning. If you agree with me, do you have a more precise definition of ground? If you disagree, please explain why. Ground is 0 V not because we define it like so, but because we measure its voltage with respect to itself. If that's true, then the definition of ground (point defined as 0 V of voltage) should be changed, since any point/node has 0 V when you measure its voltage with respect to itself. So in a strict sense, according to the definition of ground, all points/nodes are grounds! If you agree with me, do you have a more precise definition of ground? If you disagree, please explain why. Note: this question is not the same as mine, because they asked about the ground in circuit simulators (which is just a reference node when performing nodal analysis to solve the circuit) and about ground symbols. My question is about the conceptual definition of ground. Edit Sep./22/2021 The following image helps clarify the structure of my questions. (I'm adding it in case someone reads these questions in the future.) I think that what The Photon defined as ground (the node with respect to which we measure the nodal voltages), should actually be the definition of reference node, and that we should define ground as the node/point/region in space defined as zero potential (not zero voltage because that doesn't make sense, as I said above), besides the other uses of ground in electricity (ground of planet Earth as in where we stand, to ground as a verb to connect something to Earth ground, ground busbar, ground rod, ground/grounding wire, neutral/grounded wire, ground loop, the different ground symbols, etc.) AI: Ground is 0 V not because we define it like so, but because we measure its voltage with respect to itself. Ground is the point we choose to measure all voltages in the circuit with reference to. So in a strict sense, according to the definition of ground, all points/nodes are grounds! No, because we didn't choose them as the reference for measuring other voltages. On a different day, we might choose some other node as the reference. That doesn't mean those nodes are the reference today. As a comparison, just because any player on a soccer team is capable of acting as captain on any given day doesn't mean that all of them are captain today.
H: Is there a way to plot differentiation of frequency in LTSpice? I'm simulating a circuit and I obtained the AC simulation (bode plot) of it. However, now I want to plot the group delay, meaning, the negative derivative of the phase in terms of frequency. Is there a way to do that? AI: The easiest answer is to move your mouse over the right Y-axis until it becomes a ruler, right-click on it, then select Group Delay from the dialog. The other, more complicated and not very useful way is to edit the waveform plot by right-clicking on the trace's label (e.g. V(out)), and then modify it like this: d( -atan( im(V(out)) / re(V(out)) ) )/ (2 * pi) Note that atan2() is not recognized, and d() is only recognized in the waveform arithmetic. You will also need to use a linear scale for the Y-axis if you want proper readings. See more in the help under LTspice > Waveform Viewer > Waveform Arithmetic. In case the expression above is too much, there is a shorter alternative, ph(), which calculates the phase, directly, without the need for atan(). It's useful for plotting the phase delay, but it won't work with d(ph()). Be careful, though, that the result will seem wrong, as if having a changed sign: You have to remember that LTspice plots magnitude, not amplitude, and so the phase will show 180o, which means the result has the correct polarity, it's just that the display is awkward. See this for a similar problem. Also, the value on the Y-axis will depend on whether Control Panel > Waveforms > Use radian measure in waveform expressions is checked or not. Default is unchecked, IIRC.
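For readers who want to check the result outside LTspice, group delay can also be computed numerically as the negative derivative of the unwrapped phase with respect to angular frequency. A minimal sketch for a first-order RC low-pass, whose analytic group delay at low frequency is simply RC:

import numpy as np

# Group delay = -d(phase)/d(omega), computed numerically for an RC low-pass.
R, C = 1e3, 1e-6                      # 1 kohm, 1 uF  ->  tau = 1 ms
f = np.logspace(0, 5, 2000)           # 1 Hz .. 100 kHz
w = 2 * np.pi * f

H = 1 / (1 + 1j * w * R * C)          # transfer function
phase = np.unwrap(np.angle(H))        # unwrapped phase in radians
group_delay = -np.gradient(phase, w)  # seconds

print(f"group delay at low frequency ~ {group_delay[0]*1e3:.3f} ms (expect ~1 ms)")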
H: Why don't we have a circular USB port? From what I understand, USB has 4 lines just like an audio port. So why can't we have a USB port which is circular? I feel it can be more convenient to plug in and also would have a smaller form factor. May be, we can have the same 3.5 mm jack port switch between analog audio output and a digital USB function. What am I missing? AI: Rotatability while the USB connection is already established is a feature you don't want: While for audio frequencies, the electric ringing caused by imperfect contact are benign enough to not matter, I wouldn't trust that to be true for 1 Mbd (low-speed USB1), and certainly not at 12 Mbd (Full-Speed USB1.1), 480 Mb (High Speed USB2), or the gigahertzes of bandwidth that USB3.0 and later generations use. In fact, for the latter, you need pretty well-matched connectors that are designed to conduct an RF signal instead of a current. Building such connectors even in rotatable form is, if at all possible (never saw a rotatable connector that has more than one waveguide, e.g. coax connection), extremely expensive. Think of > 20€ per connector, even in larger quantities, and, due to physics, probably not very compact. I was surprised to find out there's actually USB3 slip rings! They look like this:
H: Electret microphone and FET pre-amp My question is about designing a microphone pre-amplifier that uses an electret microphone. I saw a video of Doug Ford on the EEVBlog in which he shows some pre-amplifiers for microphones. My question is: does it make sense to build a FET pre-amp for an electret microphone, considering that a FET is already inside the microphone package? After all, the microphone noise is produced by the FET inside the microphone, so is it pointless to connect a FET pre-amp outside the microphone? AI: You might choose to use a JFET front end for a guitar amplifier in order to preserve the higher-frequency artefacts from significant attenuation when running into an amplifier with an impedance somewhat less than 1 Mohm but, for an electret microphone, I see no point (given that it might be powered by a 1 kohm to 10 kohm resistor). so is it pointless to connect a FET preamp outside the microphone package? I believe that to be a general truth but, I'm hesitant to be definite without a schematic.
H: 2-layer board design and grounding I am using EAGLE and have a 2-layer PCB design. I have signals on both sides and ground on both sides of the board. I am not able to connect my THS4509 op-amp's thermal pad (which is the op-amp's ground) to my PCB's ground. The thermal pad has its own island. I can't route the thermal pad to ground because the pins are too close to each other. The thermal pad is pad17; I need to somehow connect it to the ground polygon, but there is no room. How can I solve this problem? AI: You should really start off any two-layer design with the intention that the bottom side is reserved for ground and that the ground plane might occasionally need to be interrupted (now and then, and minimally) by signal and power tracks that need to cross each other. It sounds to me that you may not have started with this sensible intention/philosophy. If it means ripping up and starting from an earlier point, then this is likely to be better in the long run (all things considered), in my opinion. As for the thermal pad, what's wrong with placing a number of vias like this: - If they need to be a tad smaller than the other vias, that shouldn't be a cause for concern.
H: Audio Jack Safety 230v DC I'm using a 3.5mm male to female connection (images below) to a voltage divider network to measure voltage from a solar panel with an Arduino. 230V to roughly 4V. My question is whether these connections will be unable to handle the voltage and if so, why? They are rated for 1 amp. AI: This should absolutely not be used above ~20-30 volts, and even that's pushing it. One of the main problems with T(R*)S connectors is that, when inserting or removing the plug, the contacts are briefly shorted together. This isn't much of a problem in audio, but for power applications that means shorting the power supply, and for high voltage sources you can get huge current spikes during the short. And even if you had a workaround for that, these aren't insulated properly for that kind of voltage. You can just touch bare metal on them, and that's not good when that bare metal could be at 230 V with respect to ground! More appropriate for this task would be something along the lines of solar panel connectors, designed for this kind of voltage. Notice how they're large, and insulated all around, so you can't touch bare metal anywhere on them. Some fully insulated banana plugs are also suitable for this purpose, but do make sure you know what you're doing when terminating them.
H: Fan does not spin when controlled by PWM I am using the following fan: ME70201V1-000U-A99, which should be supplied with 12 V and consumes about 150 mA. My schematic is as follows: The fan is connected to X3. PWM_VENTILO is the PWM output of a microcontroller. If PWM_VENTILO is driven as a simple high GPIO output, the fan spins at full speed. But if I try to control its speed with PWM, it does not spin. I have tried different frequencies (a few Hz to 20 kHz) and different duty cycles, and I always get the same result: it gets stuck and never spins. At low frequency it kind of works, but the spinning is irregular. The MOSFET is rated for the required voltage and current. Is there something I am doing wrong? Thanks AI: The Mouser datasheet says: DO NOT use power or ground PWM to control the fan speed (page 11) This is because it's a BLDC fan which has a small onboard controller of its own. You need a PWM-able fan, which may be a three-pin design. (plus points for including the exact fan model, which makes this easy to answer!)
H: What logic gate matches the truth table below? I'm just wondering what logic gate has two inputs and only outputs HIGH if one of the inputs is HIGH. Here's a truth table: |-----------|-----------|-----------| | A | B | OUT | |-----------|-----------|-----------| | 0 | 0 | 0 | |-----------|-----------|-----------| | 1 | 0 | 1 | |-----------|-----------|-----------| | 0 | 1 | 1 | |-----------|-----------|-----------| | 1 | 1 | 0 | |-----------|-----------|-----------| A and B would be two inputs. Thanks in advance! P.S. If you're going to close this question, can you at least answer it before you do? Thanks! AI: This is an XOR gate. It outputs HIGH when an odd number of inputs are HIGH.
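The truth table can be checked directly with the exclusive-or operator available in most languages, e.g.:

# XOR truth table: output is HIGH only when exactly one input is HIGH.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, a ^ b)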
H: Generate HSPICE input signal I want to add this input signal in my hspice code. Here I am doing transient analysis. How can I do it? AI: Without seeing the schematic and some values it's impossible to tell. You could try to get around using either a behavioural source with int(), round(), floor(), or ceil() -- if a continuous voltage is applied at the input -- or a PWL() source, something like: PWL(1 0 +1m 1 2 1 +1m 3 ...) -- which will not need any input signal. There may be other, more complicated ways, by creating circuits that generate this kind of staircase signal, but since you didn't say anything else, there's not much else to talk about.
H: PCB specs, Meaning of "6-1" diameter I'm using a relay for a PCB, but have trouble understanding some of the specs for the recommended hole drill sizes. As shown below, the spec sheet indicates a "6-1" diameter hole. What is the meaning of this 6-1? It should be something with a diameter of less than 2.22 mm, but it would be helpful to know an exact dimension in case the drawing was not to scale. https://omronfs.omron.com/en_US/ecb/products/pdf/en-g5v_1.pdf AI: What is the meaning of this 6-1? It means there are 6 holes, and the dimension of 1 mm diameter applies to all of them.
H: How much current should a car draw from a battery when it is off? My SUV died last night, and it draws 540mA now, when the car is off and nothing (that I know of) is on. How much does a car typically draw from the battery when it is off? EDIT 1: I think you've already answered my question (half an amp is an unreasonable amount of current draw with the car off), but I would like to give you more information anyway. I narrowed it down to one fuse, which has a bunch of stuff on it: BCM, Auto Light & Photo Sensor, Room Lamp, Driver/Passenger Vanity Lamp Switch, Date Link Connector, Door Warning Switch, RF Receiver, A/C Control Module, Luggage Lamp, IPS Control Module, Instrument Cluster (IND.), Map Lamp, Electro Chromic Mirror. I am going to assume that "Date Link Connector" is really "Data Link Connector". "IPS Control Module" is also listed under multiple other fuses, and I don't know what that means. I tried to find out what IPS means, but there is no glossary or abbreviations table, and I'm not sure. "A/C Control Module" is listed under multiple other fuses as well. Not sure what this means either. EDIT 2: This fuse is part of a two-part special assembly called the "Memory Fuse", which apparently feeds a collection of devices, some having memory settings that are lost if the device is unpowered. Also, that's where the half amp is being drawn from (the second of the two fuses). Here are some pictures of the assembly (which has two 10-amp fuses) and relevant sections from the manual: The following portion of the manual explains that some things will not operate if you pull this fuse assembly, namely "warning chime, audio, clock and interior lamps, etc." I like the "etc" part. It also states that "Some items must be reset after replacement", indicating that those memory settings will be lost. I'm assuming audio (radio presets), but don't yet know what else that may mean (perhaps the A/V, which we don't have). Here is what the "Memory Fuse" protects, the first fuse for the first section, and the second fuse for the second section (the second fuse is the one drawing the half amp): AI: That's far too much current and will possibly kill the battery after extended unused periods, once the voltage drops below 11.5V and sulphation accelerates. A fresh 50Ah battery may be dead after about 100h at a 0.5A drain. Reduce that to <50mA, or add a trickle charger and plug it in if you cannot locate or eliminate the offending load.
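The 100-hour figure follows directly from the measured drain and a typical battery capacity; a quick check (the 50 Ah capacity is the example used in the answer):

# How long a parked-car drain takes to run down the battery.
# In practice the car will fail to start well before the battery is fully flat.
CAPACITY_AH = 50.0      # Ah, example battery capacity from the answer
I_DRAIN_A = 0.540       # A, measured key-off current
I_TARGET_A = 0.050      # A, the suggested acceptable drain

print(f"at 540 mA: flat in ~{CAPACITY_AH / I_DRAIN_A:.0f} h "
      f"({CAPACITY_AH / I_DRAIN_A / 24:.1f} days)")
print(f"at  50 mA: flat in ~{CAPACITY_AH / I_TARGET_A:.0f} h "
      f"({CAPACITY_AH / I_TARGET_A / 24:.0f} days)")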
H: Which semiconductors are good candidates for constructing a CPU? Wikipedia offers an excellent list of semiconductor materials but does not include which can be used to create integrated circuits or even logic transistors (PNP or NPN). The Integrated Circuit page mentions "transistors built from other materials: graphene transistors, molybdenite transistors, carbon nanotube field-effect transistor, gallium nitride transistor, transistor-like nanowire electronic devices, organic field-effect transistor, etc", which does not lay out other possibilities. If you took away the requirement of being smaller/faster/cheaper than modern Silicon transistors, which semiconductors besides Si, GaAs and GaN could be used to construct logic circuits? I am particularly interested in abundant non-toxic compounds that could be purified and crystallized with relatively few resources. In particular, are any of the following possible candidates? https://en.wikipedia.org/wiki/Tin(II)_sulfide https://en.wikipedia.org/wiki/Iron(II)_oxide https://en.wikipedia.org/wiki/Lead(II)_sulfide https://en.wikipedia.org/wiki/Copper(I)_oxide AI: You can in theory use any semiconductor junction to build nonlinear behaviour, and that's enough to implement logic, and that's enough to implement a CPU. Of course, since you might want to still be able to power them, a composition with a reliable and still low band gap is necessary. You'll find it's no coincidence that Si is the usual semiconductor, and that the available adjacent groups in the periodic systems limit your choice of elements to use as dopant. Realisitically, if any other simple anorganic materials would be easier to handle or cheaper to use to produce working semiconductors, even bad ones, then they would already be in use. You'll find, for example, that Copper oxide- and Selenium-based diodes used to be produced, but they were so bad that the harder-to-make silicon diodes have completely replaced them. Same for Germanium, and a few other materials. You can investigate these obsolete technologies, but you'll quickly find out that people tend to care more about materials not being toxic than they did 50 years ago... A field that might still be yielding new materials that can be used in semiconductor junctions are organic semiconductors, like you already find them in organic light-emitting diodes. That's not saying they are abundant, or can be processed "with relatively few resources", but really, what technical material is? In the end, I think you should be looking into the homebrew semiconductor community. There's people building ICs in their garages! They still have to use rather nasty chemicals, and have to buy raw wafer materials, mostly, but the truth is that a monocrystal of an any significant size is hard to produce without industrial style processes, and that applies to semiconductors as much as to e.g. artificial sapphire or diamonds for technical applications. All in all, I think as broad as your question is, you underestimate the complexity of all this. Building a CPU will take at least hundreds to thousands of transistors, and you need them to be reliable, and you want them to be reasonably effective at what they do, lest you get in real heat and power supply trouble. By the way, aside from maybe the iron(II) oxide, I don't think your substances could be described as non-toxic at all.
H: Discharge risk of an LED fade-in circuit The simplest incarnation of the fade-in circuit I have planned is simulate this circuit – Schematic created using CircuitLab This gets a time constant of ~2.7s before taking into account the dynamic resistance of the amber LED with Vf=2.1V. It's to only fade in once on power-up and doesn't need additional control. My question is: when the power is turned off and C1 discharges through D1, will I risk damage to the LED? This graph from the specsheet: suggests that if the capacitor had charged to 2.1V, on discharge it will momentarily be at the very top of the LED's rated current, 20mA. Adding a resistor inline with C1 would mitigate this risk but may interfere with the fade-in effect, as there would be an initial voltage over the diode at t=0. If I added this second resistor, I guess I could just choose it so that in voltage divider configuration it works out to well below the forward voltage drop of the diode? AI: There is no danger to the LED from removing the voltage source V1 or shorting it to ground. The LED itself limits the voltage across the capacitor, i.e. if it was safe while the voltage source was still connected, it's also at least as safe after it has been disconnected. However, if you allow C1 to charge up to greater than the LED forward voltage and then connect the LED across it you could quite well damage the LED.
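As a rough feel for the fade-in delay, the time for the capacitor to reach the LED forward voltage can be estimated from the simple RC charging law; before the LED starts conducting, the cap charges as a plain RC. The time constant and Vf below are from the question, while the supply voltage is an assumed example:

import math

# When does the fade-in LED start to conduct?  Simple RC estimate.
# tau and Vf are from the question; the supply voltage is an assumed example.
TAU = 2.7          # s, stated time constant
VF = 2.1           # V, amber LED forward voltage
V_SUPPLY = 5.0     # V, assumed supply

# Before the LED conducts, the cap charges as a plain RC: Vc = V*(1 - e^(-t/tau))
t_on = -TAU * math.log(1 - VF / V_SUPPLY)
print(f"LED starts conducting roughly {t_on:.2f} s after power-up")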
H: Notation in transistor circuits These two figures (below) are from the book Microelectronic Circuits by Sedra/Smith. I have two doubts regarding the notation used in this book (or maybe in general): 1. In figure (1), why are there two arrows pointing towards 10 V (most of the time there is only one arrow), and in general what do the arrows indicate (what is their significance)? 2. In figure (2), how do I judge (by what method?) whether the emitter terminal Re2 of transistor Q2 is connected to the 15 V terminal of Rc1 (transistor Q1) or to the grounded collector terminal Rc2 (transistor Q2)? AI: Publications like this use graphic drawing tools to make these illustrations. They don't necessarily use any kind of electronic CAD tool, so there's no checking other than human beings inspecting the illustration. In the first figure, the double-arrow appears to be... a mistake that got past the proofreader. A single arrow is enough to indicate the power connection. In the second figure, you can safely assume that all the upward-facing arrows are connected to the same voltage, that is, +15V. The illustrator either got lazy, or didn't want to clutter the figure with three +15V labels. In an actual schematic drawn in a CAD tool, each power connection would be associated with a specific network (power domain). Common power connection symbols include the arrow, bar, or circle. Ground symbols show the zero-volts reference for the drawing. These can have variations to distinguish between power, signal, and earth grounds.
H: Suitability of standard-gain pyramidal horn for UWB signal Objective I'm interested in evaluating the suitability of a standard-gain pyramidal horn for an ultrawideband (UWB) signal. However, I've had difficulty finding reliable information about the phase response over frequency that would be needed to make this determination. Background UWB Definition In this context, use the FCC's definition of UWB, the relevant parts of which are: Operates in the 3.1 GHz to 10.6 GHz range Bandwidth greater than 500 MHz The application I have in mind will not need to work over every frequency in that band, just a subset of it that is approx 500-2000 MHz wide. Considerations w/ Wideband Antennas There are many types of "wideband" antennas that can easily meet the bandwidth requirement (log periodic, horn, etc.). However, a good UWB antenna should also have these properties (according to [1]): Linear phase over the signal bandwidth Constant phase center over signal bandwidth These conditions are necessary to prevent the UWB pulse from being distorted during transmission/reception. This is relevant to point out because many "wideband" antennas violate these conditions. E.g. a log-periodic antenna has a phase center that changes with frequency (i.e. as the frequency changes the active region changes), and some wideband antennas may have a relatively flat gain response over frequency, but might not have a linear phase over the entire band of interest. Question Does a pyramidal horn antenna meet these requirements? References [1] Stutzman, Warren L., and Gary A. Thiele. Antenna theory and design. John Wiley & Sons, 2012. AI: Now that's a good, hard question, and well phrased! Disclaimer: my antenna engineering experience is 40 years old, but if physics hasn't changed, maybe I can help you some. First, you're right, for this application, avoid log-periodic antennas in favor of antenna designs with a single, broadband feed point, such as the horn you describe. You will have a constant "phase center" over the horn's operating bandwidth. But you can think of the horn as a special kind of waveguide whose dimension varies along its length. And the horn, like any waveguide, will exhibit "frequency dispersion" -- that is, waves propagate through the horn at a velocity that varies with frequency, slowing down especially as you get close to the horn/waveguide's lower cutoff frequency. (I'm skipping over the fine points of group velocity vs phase velocity here; you care about the UWB modulation on the wave, which is affected by group velocity.) Practical thoughts: If you don't need directivity, consider a broadband dipole instead of a horn. If you do need some directivity, choose the shortest, least directive horn available, as it will exhibit less dispersion. Operate as far from the lower-frequency cutoff of the horn as possible If possible, find a horn with an exponential taper, rather than a linear taper.
H: Does the 74LS09 have an adequate internal input pullup? I want to feed an output of the 74LS05 open collector inverter into one of the inputs of the 74LS09 AND gate. The internal schematic in the datasheet shows that there is an internal resistor and 2 diodes between VCC and the inputs: Can I treat the AND gate as having an internal pullup and thus avoid an extra pullup resistor for the 74LS05, or do I still need an external pullup resistor? AI: A floating TTL input can be considered a logic 1. Then again, one cannot rely on this. Neither the switching speed nor the noise immunity will be the same. Leaving a TTL input to pull itself high is good only for "try and see" and nothing more.
H: What are the components symbolized in these 4 highlighted areas of this schematic? I am a beginner and I understand only the most basic portions of this schematic. Could someone smarter please help me understand the highlighted areas? Source: Schematic was copied from page 4 of the manufacturer's product datasheet. Zhengzhou Winsen Electronics Technology Co., Ltd. The circuit design is for a gas detection sensor element. As various levels of the gas being detected are sensed, the output voltage should vary on one of the pins. Unfortunately I need this design converted to a PCB that the sensor element plugs into. I have the circuit half done, but I am confused about the highlighted areas. I will likely have a couple of follow-up questions because, as I said, I want to understand this circuit, not just get an answer. For example, the block at the top middle, is that a voltage divider circuit? Thanks everyone. :) AI: J177: P-channel JFET. R3: resistor, 20 kΩ. RT1: thermistor, 10 kΩ at 25°C. U1A/U1B: the halves of a dual op-amp. R3, RT1 and R5 form a temperature-dependent resistance to help temperature-compensate the gas sensor output. You should credit Zhengzhou Winsen with the graphic you copied.
H: Designing a compensator - Control system - Missing value I would like to determine the transfer function of the following circuit made of a TL431. According to the different application notes that I have read (and on this subject I really thank Christophe Basso; his work is so nice to read that the whole power electronics field becomes so simple to learn), the current-mode controller should contain a pull-up resistor at the input of the compensation pin. I drew the pull-up resistor on the following schematic. Nevertheless, for determining the transfer function, I need to know its value... Does anyone know how to determine its value? Here is the datasheet: http://www.farnell.com/datasheets/1782047.pdf Have a nice day! AI: I need to know its value You can estimate it from the data sheet: - So, it has an open voltage of typically 5.5 volts and it has a shorting current of 1.2 mA. This means it typically has an internal pull-up resistor of 5.5/0.0012 = 4583 ohms. This is a typical value and, as you can see, it might be 5.5/0.0022 (which equals 2500 ohms) or 6/0.0012 = 5000 ohms. I'd probably assume it was around 3500 ohms.
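If it helps, that estimate is easy to reproduce and bound with a quick Python sanity check; the only inputs are the open-pin voltage and pin-shorted current figures quoted above, and the min/max spread is purely illustrative:
# Rough bound on the internal pull-up at the compensation pin from the quoted figures
v_open_typ, i_short_typ = 5.5, 1.2e-3      # open-pin voltage (V), pin-shorted current (A)
r_typ  = v_open_typ / i_short_typ          # ~4.6 kohm typical
r_low  = 5.5 / 2.2e-3                      # ~2.5 kohm if the shorting current is at the high end
r_high = 6.0 / 1.2e-3                      # ~5.0 kohm if the open voltage is at the high end
print(round(r_typ), round(r_low), round(r_high))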
H: Why do passive components have better drift characteristics than active components? I came across this paragraph in the classic reference Op Amps For Everyone by Ron Mancini, where it says on the first page: Many attempts have been made to make a stable amplifier, but temperature changes and power supply voltage extremes experienced on phone lines caused uncontrollable gain drift. Passive components had much better drift characteristics than active components had, thus if an amplifier's gain could be made dependent on passive components, the problem would be solved. I don't quite get what that means. I'm going to take a few assumptions here and hope someone can confirm or correct my understanding. Say when there's a voltage applied across a passive resistor, its resistance changes within some specified tolerance as it heats up over time from the current. Is that why we see, or what we call, the drift in the input voltage vs. output current? Let's take a BJT as the active component. Be it configured as a common collector, emitter or base amplifier, the only variable in any of the gain equations that is not passive is the \$ \beta \$ or the Hfe. (e.g. \$ A_v = {R_E \over {\alpha r_m + R_E + {R_B \over {\beta + 1}} }} \$ for C.C.). So the volatility or stability of \$ \beta \$ is essentially the main indicator for the drift? Because \$ \beta \$ is more unpredictable than the effect of heating on a resistor's resistance, the drift characteristic of a passive component is better than that of an active component? AI: I think that the unseen context of the quote needs to be revealed to be able to adequately refer to it in your question. The quote appears to be talking about telephone lines and this suggests to me that the quote is meaningless without more context. Moving away from the quote.... In short, passive components don't suffer from anything like the problems that transistors and diodes suffer from. Diodes and transistors have very non-linear characteristics (e.g. forward diode conduction V/I ratio) and these "characteristics" are very much temperature affected at the atomic level and are uncompensatable other than by using high gain and feedback to linearize things (as a control system would). The diode equation: - At a very fundamental level, the temperature (in degrees above absolute zero) is involved at the very heart of things. Whereas, for a resistor at the fundamental level: - $$I_R = \dfrac{V_R}{R}$$ So, resistors are very linear and, if there is any temperature-related resistance change, that phenomenon is purely down to how the resistor material was mixed or produced (in simple terms). In other words, they are basically linear right from the start, whereas diodes are just simply highly non-linear right from the start and always.
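For reference, since the "diode equation" mentioned above appears only as a figure in the original answer, the standard Shockley form it refers to is $$I_D = I_S\left(e^{\frac{V_D}{n V_T}}-1\right),\qquad V_T=\frac{kT}{q}$$ where \$T\$ is the absolute temperature; this is why temperature sits at the very heart of a diode's V/I behaviour, in contrast to the simple \$I_R=V_R/R\$ relationship of a resistor.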
H: RFID system inductive coupling At low or high frequencies the communication between RFID reader and RFID tag is through inductive coupling. Why do we need a transponder? Don't we need only a receiver? And how is the message of the tag not confused with the RF signal of the reader? AI: transponder /tranˈspɒndə,trɑːnˈspɒndə/ noun a device for receiving a radio signal and automatically transmitting a different signal. Source: Definitions from Oxford Languages A transponder is required because it has to read the trigger / power signal and then send a response. A search such as rfid+how+it+works will give you plenty of reading material.
H: Which parts of an AC relay does the electrical current flow on? Below is a common relay that is used in Arduino projects. When the AC lines are connected, if I touch the screws that hold the AC lines (marked as "1" in the image) do I get an electric shock? Of course, I am not going to intentionally do that, but I just want to know. What if I touch the soldered pins at the bottom of the relay (marked as "2" in the image)? Do I get an electric shock? AI: (1) Yes, (2) Yes. They are all connected. If you are going to be switching AC power, then you need to take appropriate precautions to make sure that nobody touches it. That usually means putting the whole lot into an insulated or grounded box.
H: Implementing a polynomial voltage-controlled current source in LTSpice I want to implement a voltage-controlled current source in LTSpice. The output current should be $$I=a_{0}+a_{1}V+a_{2}V^{2}$$ I know I have to use the POLY element, but I can't figure out how to use it. I am using LTSpice in macOS. Any help please? Thanks in advance! AI: The POLY source is as old as SPICE itself (40+ years). The better choice is to use a behavioural source (voltage or current), not only because it makes things easier in terms of implementing a mathematical expression, but also because LTspice will replace it, behind the scenes, with a behavioural source anyway. This is how the expression would look: V = a0 + a1*V(x) + a2*V(x)**2 where I presumed the controlling voltage to be V(x) instead of V. It's not necessary to use voltages; currents and other time-dependent quantities are allowed, with some conditions. Read more in the help, under LTspice > Circuit Elements > B. Arbitrary Behavioral Voltage or Current Sources. Note the exponentiation, **, not ^ (which means XOR, unless it's a Laplace expression). If you're using a current source, then it would be I = ....
H: Can a general-purpose AA charger be used for rechargeable AA lithium-ion batteries? I recently learned that rechargeable lithium-ion AA batteries, 1.5VDC 3200mAh, have become available. For instance: https://www.amazon.com/EBL-USB-Rechargeable-Lithium-Batteries/dp/B0828KRQZ3 https://www.google.com/shopping/product/13543601031434520708 Can a charger supporting >3V Li-Ion and 1.2V NiMH batteries be used for these? And if not, what would be the necessary technical changes? AI: You can't charge such batteries via their +/- terminals. These must be charged through the micro USB port on the side. This is confirmed in the Q&A of the product page. Basically these are regular 3.7V Li-Ion chemistry inside with a step-down DC-DC module built in, and a charger circuit which actually charges them from 5V USB like a regular Li-Ion cell charger. Because of this, their discharge curves are pretty flat. You'll get an almost constant 1.5V until the actual Li-Ion cell voltage drops to somewhere around 3V, where the internal protection circuit just cuts it off and you'll get 0V output. So you can't charge it with such chargers, as it won't drop below 1.5V like normal NiCd/NiMH cells do. Even if you apply a higher voltage on the +/- terminals, you'll basically be applying it to the DC-DC output.
H: ATmega: Can one determine if an address is program memory or data memory? On an Atmel ATmega1284 (or any of its AVR cousins), is there any way to determine at compile time or at runtime whether an address that is passed to a function belongs to program memory space or data memory space? Specifically, I want to provide a debug trace where the input string may be stored in program memory space or in data memory space, but I prefer to use just one function for it so the caller doesn't need to bother: if the input string is stored in program memory, I can copy it to a stack-allocated buffer and then pretend it was always in data memory. For example, in C-like pseudocode: function my_trace(const char *string) { if (is_in_program_memory_space(string)) { strcpy_P(local_buffer, string); print_the_(local_buffer); } else { print_the_(string); } } const char data_memory_string[] = "data memory string"; const char program_memory_string[] PROGMEM = "program memory string"; my_trace(data_memory_string); my_trace(program_memory_string); I tried to put a breakpoint in the compiled code and it seems the addresses of a data-memory–allocated string and a program-memory–allocated string happen to be very close (like 0x800040 versus 0x800360) so I can't rely on address ranges. (And, doing that would probably be unsafe anyway, but that's a different discussion.) Edit: I'm using the toolchain that comes with Atmel Studio, which in this case is version AVR_8_bit_GNU_Toolchain_3.6.2_1778. Edit 2: The solution provided by user @NStorm indeed solves the problem. However, on my system it was necessary to typecast the return value of __builtin_avr_flash_segment() to a signed char. AI: As noted in the other answer, __memx might help. Check its description: This is a 24-bit address space that linearizes flash and RAM: If the high bit of the address is set, data is read from RAM using the lower two bytes as RAM address. If the high bit of the address is clear, data is read from flash with RAMPZ set according to the high byte of the address. See __builtin_avr_flash_segment. So it's possible to tell whether data are stored in flash or RAM by checking that high bit. But let's take a look at that __builtin_avr_flash_segment description: char __builtin_avr_flash_segment (const __memx void*) This built-in takes a byte address to the 24-bit address space __memx and returns the number of the flash segment (the 64 KiB chunk) where the address points to. Counting starts at 0. If the address does not point to flash memory, return -1. Seems to be just what you need. So basically you can go with this example (taken from here): void func (void * dest, const __memx void *source) { if (__builtin_avr_flash_segment (source) < 0) strcpy (dest, source); else strcpy_P (dest, source); } PS. I suggest reading the whole Named Address Spaces manual page. It has some caveats and limitations.
H: Help with calculating frequency offset A 10 MHz TCXO is driving a receiver's front end and has a frequency deviation of 10 ppm. It drives a direct downconversion RF front-end mixing the L5 frequency. What is the frequency offset at baseband due to the TCXO? L5 frequency = 1176.45MHz I don't quite understand this question; help please. Thanks in advance. AI: So, the TCXO is driving a direct downconverter (quadrature demodulator). Since the carrier is at 1176.45 MHz and the TCXO has a 10 MHz output, the TCXO output is most certainly being used as the reference for a frequency synthesizer such as a phase-locked loop (PLL) - otherwise, the problem would not make sense. Presumably, the PLL is configured to output the exact carrier frequency (1176.45 MHz). Since the PLL is referenced to the 10 MHz TCXO, the frequency error of 10 ppm propagates through. Thus, our local oscillator frequency range is: $$1176.45\textrm{ MHz}\pm 10\textrm{ ppm}=1176.45\textrm{ MHz}\pm(10\times 10^{-6}\times 1176.45\textrm{ MHz})$$ $$ = 1176.45\textrm{ MHz}\pm 11.8\textrm{ kHz}$$ Since our local oscillator can be off by up to 11.8 kHz, this means that the carrier will appear shifted in the quadrature baseband by up to 11.8 kHz in either direction. Assuming that the 1176.45 MHz signal is not already offset due to frequency error from the transmitter or Doppler shift, we would expect to see the received carrier offset by up to 11.8 kHz (positive or negative) in baseband.
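A minimal numeric check of the result above (the 10 ppm reference error simply scales with the carrier frequency through the synthesizer):
f_L5 = 1176.45e6            # L5 carrier, Hz
tcxo_error = 10e-6          # 10 ppm, propagated unchanged through the PLL
offset_hz = f_L5 * tcxo_error
print(offset_hz)            # ~11764 Hz, i.e. roughly +/-11.8 kHz at baseband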
H: confusion regarding bias voltage of a transistor "this dc voltage is known as bias voltage and it's ...regardless of the polarity of the signal source" Now I have this question: **1.** Do I manually need to choose the bias voltage so that the transistor remains on even when the negative half-cycle kicks in? For example, let us say my signal source has a value of 5 volts peak to peak; now if I choose 7 volts or 12 volts as the bias voltage it should work perfectly. This also means that the bias voltage is not unique; it depends on the signal we are applying, and we can change the bias voltage according to our source signal. Is that correct? (Assuming everything is a sine wave.) AI: The circuit you have is a highly non-linear one, even if you assume that the AC voltage itself is exactly zero volts, peak to peak. Let's look at it without regard to the AC, for a moment. It'll be easier to show what I mean, then. simulate this circuit – Schematic created using CircuitLab I've removed the batteries, as they really just add "noise" to the diagram without helping at all in understanding it. It's enough to simply label the voltages, as they are implicitly referred to the ground reference. Let's say that \$R_\text{C}=4.7\:\text{k}\Omega\$, \$V_\text{CC}=10\:\text{V}\$, \$V_\text{BB}=650\:\text{mV}\$ and that therefore \$I_\text{C}=1\:\text{mA}\$ (simplified Ebers-Moll model in active mode.) Then we'd expect to see a voltage drop of \$4.7\:\text{k}\Omega\cdot 1\:\text{mA}=4.7\:\text{V}\$ across \$R_\text{C}\$, meaning that the collector voltage sits at \$V_\text{C}=10\:\text{V}-4.7\:\text{V}=5.3\:\text{V}\$. All seems well, as the collector voltage is higher than the base voltage and so the BJT is really in active mode. What happens if we change things so that \$V_\text{BB}=750\:\text{mV}\$? In this case, temporarily ignoring the collector resistor and assuming that the BJT stays in active mode, I'd expect the collector current to change by a factor of almost 50, to about \$I_\text{C}=50\:\text{mA}\$. (This is because there will be a factor of 10 change in the collector current for each and every increase of \$60\:\text{mV}\$ in the base-emitter voltage.) But now the voltage drop across \$R_\text{C}\$ should be nearly \$235\:\text{V}\$!! And that's just not possible. So already we know that even a very tiny adjustment of the base voltage leads to an impossible result -- if we assume the BJT is still in active mode. Instead, what really happens is that the collector voltage will smash down close to the emitter voltage (about \$0\:\text{V}\$) and we will have a collector current nearing \$I_\text{C}=\frac{10\:\text{V}}{4.7\:\text{k}\Omega}\approx 2.13\:\text{mA}\$ and the BJT will be in saturation mode. By now, I think you may be able to see why I hate that diagram so much. The BJT is a non-linear device. It responds wildly with respect to minor base-emitter voltage changes. When your textbook says, "This d.c. voltage is known as bias voltage and its magnitude is such that it always keeps the emitter-base junction forward biased regardless of the signal source...", it is just begging you to do exactly what you thought about doing -- adjust \$V_\text{BB}\$ so that no matter what the input signal AC peak magnitude might be the base-emitter junction remains in forward bias. If you take their comment "literally" and apply it for any-sized AC input signal, then the entire idea becomes impossible! This is why you are asking a good question, right now.
When I was first trying to understand BJTs more than 50 years ago, these same stupid examples and very similar writing about them caused me similar confusion for a time. And I'm not happy to see that nothing has changed in the education about BJTs in all that time. Let's see why, using your own example of \$V_\text{AC}=5\:\text{V}_\text{PK}\$ for the AC source. You might then consider \$V_\text{BB}=6\:\text{V}\$, since that's large enough to keep the base-emitter junction forward-biased. But that means that the base voltage is \$1\:\text{V} \le V_\text{B}\le 11\:\text{V}\$! But the collector current is a non-linear function of the base voltage (the emitter is grounded.) And I mean highly non-linear. For every \$60\:\text{mV}\$ change, the collector current is multiplied (or divided) by 10!! Look at that voltage swing, \$\Delta V_\text{B}=10\:\text{V}\$! How many orders of 10 in collector current does that imply??? About 170 orders of magnitude change! Do you think there is any chance in this universe of that happening? The impossible implications from taking their comment as being true under all circumstances you can imagine for the AC signal source means that you must let go of the idea that their model works for any imaginable AC signal magnitude. In short, you must instead imagine that it only applies for a very tiny AC signal magnitude. One that does NOT materially change the "bias-point" of the BJT. Or, put still another way, doesn't have a noticeable impact on the magnitude of \$V_\text{BB}\$. In that case, the schematic they provide works. But then you probably have another good question. Why did they bother to write, "This d.c. voltage is known as bias voltage and its magnitude is such that it always keeps the emitter-base junction forward biased regardless of the signal source..."? I mean, if the AC signal is assumed to be so tiny, what's the point of writing it that way? Also, the way they write that almost makes you think that even if you keep the AC signal magnitude small, you might still be able to let the base-emitter junction swing widely if perhaps not as widely as you were imagining. For example, suppose \$V_\text{BB}=700\:\text{mV}\$. Then you might imagine that the peak AC voltage might be \$500\:\text{mV}\$. But that would NOT be valid for this circuit. Yet it meets with their statement, just fine, and you didn't even change \$V_\text{BB}\$! So I think they chose poor writing. Why not just say, "The AC peak voltage must not exceed a couple of millivolts in peak magnitude?" That's easy to read and directly addresses the situation and you'd still also know that the base-emitter junction is forward-biased, too. I don't know why they didn't just do that. You'd then know that you aren't free to make much adjustment to the AC voltage and things would be a lot better for the example at hand. (It would keep you nearer the much simpler "linearized" small-signal model of the BJT and well away from its global, highly non-linear behavior.)
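If you want to check the two numbers used in this answer (the factor of almost 50 for a 100 mV step, and the roughly 170 decades for a 10 V swing), the 60 mV-per-decade rule of thumb is enough; the little sketch below is just that rule, not a full Ebers-Moll model:
def current_ratio(delta_vbe):
    return 10 ** (delta_vbe / 0.060)   # ~10x collector current per 60 mV of V_BE
print(current_ratio(0.100))            # ~46x for a 100 mV increase ("a factor of almost 50")
print(10.0 / 0.060)                    # ~167 decades for a 10 V base swing - clearly impossible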
H: How to determine current flow direction in a circuit? I have this diagram: The datasheet specifies I should bias the COMM+ line around 1.5V. However, when I measure the voltage at TP1 with the oscilloscope probe connected to ground, the output from the oscilloscope is always 0V (i.e Ground) I want to understand which way the current flows in the given circuit and what voltage I should see for every division. The main current source is coming from 3V3. The other source is signal coming from the Coil attached to C23 and R18. I am trying to get 1.5V at COMM+ through a resistor voltage divider. AI: There is some missing information in your question. In any case, if you are wondering how you can calculate currents flowing through circuit branches and voltage differences across nodes you can use the constituent equations of the devices in your circuit and apply Kirchhoff's laws. Furthermore, in complex linear circuits you can apply mesh analysis.
H: 4-bit DAC has negative voltage at op amp output My school task was to simulate a 4-bit DAC in Multisim. I'm supposed to simulate the circuit when the switches are on/off and measure the output of the op amp. I think the circuit works fine, but I don't quite understand why the voltage is negative. I checked the polarities and tried inverting and re-inverting it, but nothing works. Is it standard for it to be negative? Or? If my circuit is built wrongly, please tell me how to fix it. Thank you very much. AI: As you're using an inverting amplifier, this is kind of expected – in fact, not only kind of. This is your design intent - inverting the voltage. If you don't want an inverted voltage, don't use an inverting amplifier configuration for your opamp, but a non-inverting one (there are gazillions of things on the internet about opamp configurations, do your own research. Even Wikipedia has a page on opamp applications).
H: Editing/Creating a Triangular Solder Bridge Footprint I wish to use a triangular solder jumper (solder-jumper) on my PCB and I am using KiCad. The only available solder jumper - the one pointing in one direction (as in the attached image) - is 2.45 x 1.50 mm. I wish to edit this and make it 1.25 x 1.25 mm. The issue is that I am unable to change the dimensions of the footprint in the footprint editor (I have a copy which I am editing, not the main one), nor am I able to create an entirely new one, as whenever you change the dimensions in the edit dialog box, the size remains the same. The triangular Cu area seems to have been specially edited in some way. Is it possible to make a custom F.Cu shape like the one used for the Solder Jumper Triangular in KiCad? If yes, how do I do it? Thank you. AI: Such footprints are made out of a graphic shape (a polygon) merged with a pad to provide the track anchor point. How to change it? Select the pad you want to modify and right-click to view the "explode pad" option. Now you will be able to see a small SMT pad and the polygon. The SMT pad is where KiCad will route tracks to. Resize the polygon to your desired shape. Finally, right-click on the polygon and select "Create pad from selected shapes". This should be easier in V6 as there is direct poly-to-copper flood.
H: What causes a sudden spike in the output voltage of a transformer? What issues can cause the output voltage of a transformer to suddenly increase from 230 volts to 400 volts (in the case of any transformer, and in the case of the delta-to-Y configuration of the transformer)? What would be the voltage between a phase and ground if a coil on the Y side (secondary) gets shorted because of heating or any other reason? AI: The cause could be a broken neutral wire shorting with a line.
H: Is it generally OK to ignore the current in and out of a MCU pin when analyzing circuit external to the MCU? From here, I see below illustration about the pull-up resistor. And the article says: With a pull-up resistor, the input pin will read a high state when the button is not pressed. In other words, a small amount of current is flowing between VCC and the input pin (not to ground), thus the input pin reads close to VCC. It gives me an impression that the current in and out of a MCU is so small that it can be ignored when analyzing the behavior of the circuit external to the MCU. Such assumption may simplify things a bit. But is it OK? AI: Maybe. For values in K ohms usually you can ignore it, unless you're doing something with unusual operating environment or high reliability. It never hurts to read the datasheet thoroughly when you are considering spec'ing a new part though. Of course if you enable internal pull-ups or pull-downs then you will generally have to take those into account. If you try to minimize switch-closed consumption by using a resistor value like 10M then you will find that the MCU maker won't necessarily guarantee that will work. For example, the Microchip ATMega328p has the following guaranteed characteristics: 1uA of leakage is enough to overcome the few hundred nA from the 10M resistor, so it might not work. Chances are the current will be much lower at Ta << 105°C so it might be okay for a toy or consumer device. There are reasons not to use too low a resistor value (wasted power, possibly switch wear in extreme situations) and too high a value (switch reliability, leakage due to the semiconductors or contamination on the PCB and possible pickup of EMI).
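As a rough worst-case check along the lines of the answer (the 5 V supply and the 1 µA worst-case leakage are assumptions taken from the discussion above, not from any particular datasheet row):
vcc    = 5.0            # assumed supply voltage
i_leak = 1e-6           # assumed worst-case input leakage current
for r_pullup in (10e3, 100e3, 1e6, 10e6):
    drop = i_leak * r_pullup         # voltage lost across the pull-up at worst-case leakage
    print(r_pullup, vcc - drop)      # at 10 Mohm the drop exceeds VCC, so a 'high' level isn't guaranteed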
H: Can I connect an inverter and a solar charge controller together in parallel with battery terminals? I have an inverter that works on AC mains and has a built-in battery charger. I have solar panels with a charge controller. Can I connect both power sources' wires to the battery terminals in parallel? So in case of a power interruption, the battery could charge using solar. AI: With your title, you just described the usual motorhome setup: solar, AC charger and inverter all connected to a battery. So yes, it is possible and a lot of people do it. Then again, it is up to you to match both chargers, the battery, the inverter and the load together. Failing to match and connect them properly can result in anything from poor performance to fire or personal injury. And, if I had solar power, I would like the battery charged by the solar first and to use AC only when the solar power is not available or not enough. AC power costs money; solar power is free.
H: How to properly wire a five color RGB LED strip? I've worked with common anode and common cathode RGB LEDs, either 12 volt or 5 volt, using PWM mostly to change color. These products that I work with usually have the typical four-color wiring, where the colors may differ a little bit depending on whether they are common anode or common cathode. I haven't yet worked with RGBW, but I think the white seems to make use of yellow wiring in most situations I've seen. I often get products given to me that I eventually put to use. I received a light bar that I believe is RGB, but I don't know if it's addressable. The person that gave it to me doesn't know where they got it from, so they cannot look the product up to identify it or get tech specs for it. I'm puzzled as to what it is, given the wire colors and markings inside the light bar. I did take a 12 volt battery and hooked up black to negative and white to positive and got a red LED color. Other than that I'm unable to get any other color to appear, but at this time I'm afraid I may burn out something through trial and error. I could not find any other markings on the strip's exterior or on the short wiring it came with either. I placed an AA battery next to the light strip just to give size perspective. The wires coming out have five colors: black, white, red, green, and blue. I have no idea what role each has. Inside the LED bar I see markings where the wire is soldered to the LED strip. It shows GND, L, 12, and +12 from top to bottom. There is potting covering the solder connections, so I can't make any positive identification of where the colored wiring is going. I've also never seen these labels before, so I'm stumped! AI: There are no resistors visible, and 2 types of LEDs: 2-pin with one chip, and 4-pin with 2 chips. The LED chips aren't the typical color of white LEDs (i.e., some tone of yellow phosphor), so they're not white LEDs. There's some red since you managed to light them, so it's probably RGB. All the LED pinouts support series connection, so you probably got series strings in parallel. If you don't want to take it apart, I'd recommend using a 12V or higher power supply with a 22k resistor in series to limit the current to a value that is safe for the LEDs both forward and reverse, then trying all the pin combinations.
H: Increase runtime: batteries in parallel -> buck || batteries in series -> boost I have 18650 batteries; I created 4S packs with 6P in each. I have a 12 V 2 A solar panel and I want to use this panel to charge them (I know I need a charge controller and I must balance-charge them). My main question is: I have two 5 V devices (an Arduino and a power bank charger) that I want to power up with these batteries. I have two 12 V LED strips, each drawing ca. 1 A. Should I put all these 24 batteries in parallel to get my 5 V and use a boost for 12 V, or use my current 4S6P and step down to 5 V? Which setup will give me more runtime? Having 24P will give an easier charging circuit. AI: You say that having 24 batteries in parallel gives you an easier charging circuit and that is true but, it's also a technical compromise in that you cannot ensure charge current is balanced to all 24 parallel batteries equally. Having the batteries in series means that you can make a much better job of charge-current balancing the batteries but, it does add more complication. Therefore, my preference is "series connected" because it brings an improved level of security in charging the batteries. Also, a buck regulator that is used to step down to a lower voltage provides slightly better efficiency than using a boost regulator to step up the voltage (for the same circuit complexity). Clearly, using a synchronous regulator is more efficient than using a non-synchronous circuit too (because you don't lose power in "the diode"). If you decided on a parallel arrangement I would prefer to have independent parallel strings of series-connected batteries so that you could properly charge-balance each cell in each string. This of course means a charge-balance circuit for each cell but, it's still my preference. Which setup will give me more runtime? Well, there won't be much difference either way I suspect so, it boils down to which set-up can produce the run-time but also, produce that run-time reliably time and time again.
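If you want to put rough numbers on the runtime question, it reduces to pack energy (identical in both arrangements) multiplied by converter efficiency and divided by load power; every figure below (cell capacity, the 5 V load current, and both efficiencies) is an assumption for illustration only, so only the small relative difference matters:
cells, cap_ah, v_cell = 24, 3.0, 3.6                 # assumed 3.0 Ah cells, 3.6 V nominal
pack_wh = cells * cap_ah * v_cell                    # same total energy for 4S6P or 24P
load_w  = 2 * 12 * 1.0 + 5 * 0.5                     # two 12 V / 1 A strips + ~0.5 A of 5 V loads
for name, eff in (("4S6P + buck", 0.92), ("24P + boost", 0.88)):   # assumed efficiencies
    print(name, round(pack_wh * eff / load_w, 1), "hours")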
H: Adjustable Capacitance Circuit I'm designing a circuit where I want to be able to adjust the effective capacitance between two points, A and B. To do this, I've essentially put a dip switch into a circuit with a bunch of caps on the other side. My goal is that I'd like to be able to turn off and on various combinations of capacitors to be able to achieve different levels of total capacitance. In the diagram below, ignore the part numbers and just assume the caps can take on any value. Am I doing this sort of thing right? Is there a better way? Also, is it OK to have the return path on the bottom and still expect the capacitor values will add up as a linear combination? Typically when I see this sort of thing the last capacitor in the parallel group is the return path. As some context, A and B will get connected to pin 8 and the junction on pin 1 in this circuit. AI: The setup may be okay for trials, to arrive at the optimum value of capacitance, after the short circuit is removed.
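To see what total values a given bank of capacitors can produce, it is easy to enumerate the DIP-switch combinations; the capacitor values below are placeholders, since the question says the caps can take on any value, and closed switches simply add in parallel:
from itertools import product
caps_nF = [1.0, 2.2, 4.7, 10.0]          # one capacitor behind each switch position
totals = sorted({sum(c for c, on in zip(caps_nF, state) if on)
                 for state in product((0, 1), repeat=len(caps_nF))})
print(totals)                            # every achievable total capacitance, in nF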
H: Effect of right half plane zero on bandwidth of control system It is known that the right half zero and the delay limit the achievable bandwidth of a control system. Can someone please explain the reason, how the right half plane zero limits the achievable bandwidth and how the time delay in a control system limits the achievable bandwidth of a control system? AI: As you know, poles and zeroes are the respective roots of the denominator and the numerator of a complex transfer function. Once they are known, they can be placed on a map - the \$s\$-plane - depending on their real and imaginary components. The below drawing shows an example for a transfer function featuring one zero and three poles: The left-side of the map is called the left-half-plane abbreviated LHP while the right-side is the right-half-plane or RHP. Depending on the real value of the roots, poles and zeroes can be in either side. Without entering into the details (the literature abounds on the subject), you can infer the position of the zero or the pole if the transfer function is written in the following way for a pole: In the above drawing, the "+" indicates a pole located in the left-half plane, leading to a damped response. The phase starts from 0° and asymptotically hits -90° as frequency approaches infinity. This is a classical pole also named stable pole. Take the same transfer function and replace the sign by a minus, and you propel the pole in the right-half plane to make it a RHPP. The magnitude response is unchanged but the phase response is now the opposite as before. It still starts from 0° but increases to 90° as frequency approaches infinity. Having a RHPP in the open-loop gain implies a robust compensation strategy bringing the pole back in the left-half plane once the loop is closed. A control system featuring a closed-loop transfer function with a RHPP cannot be operated. Similar observations apply to the zero whose phase response also changes depending where it stays in the \$s\$-plane: With a LHPZ, the phase starts from 0° and increases up to 90°. We say the zero boosts the phase when it appears in a transfer function. Now take the same zero and push it in the right-half plane then the phase response changes: the RHPZ no longer boosts the phase but lags it down to 90° as frequency approaches infinity. Let's have a look at a transfer function having a LHP zero and two poles. The response is shown below. The phase response of the process to be compensated nicely lands to -90° as the LHP zero response compensates the lag of the high-frequency LHP pole. Assume you want to crossover at 5 kHz or so. You think of a compensation strategy featuring some response to obtain a good phase margin at 5 kHz. I have arranged a pole and a zero to meet this goal as shown in the low side of the above drawing. Now, think of the same transfer function but having an extra RHP zero on top of the existing LHP zero. The plant response is no longer the same with the RHPZ arbitrarily placed at 8 kHz: The phase now hits -180° at high frequency, consequence of the RHP zero. Should you try to keep the same compensator, the phase margin would no longer be as high as before: To maintain a good phase margin despite the RHP zero, you will have to crossover at a lower frequency, where the effects of the phase lag brought by the RHP zero are less observable. In other words, stay away from the RHPZ and its phase stress. 
By doing this, you slow down your converter but enjoy an acceptable phase margin: Now let's see what it implies in terms of control systems. I will take the example of a switching converter as it is my field of expertise. Think of a converter transferring the energy in a two-step approach, like a boost or a buck-boost converter: first you store the energy in the inductor during the on-time, then transfer it to the load during the off-time. Should a sudden power demand occur, the converter can't immediately react and must first go through another energy-storing phase before answering the demand. This intermediate phase naturally introduces a delay in the response to a change: the current in the inductor has to grow cycle by cycle (it cannot instantaneously jump to the next current setpoint) but this current increase is hampered by the inductor value and the available volt-seconds. The mathematical model of this delay in the response is the familiar RHP zero which appears in the control-to-output transfer functions of the said converters. The only way to stabilize them is to select a crossover far before the RHPZ phase lag brings troubles. Usually, people adopt a crossover placed 20-30% below the worst-case RHPZ position. A pure delay also introduces a phase lag while its magnitude is constant at 1, or 0 dB. It can be inserted in the transfer function as shown below. This is excerpted from an APEC seminar I taught in 2012, The Dark Side of Control Theory: This delay \$\tau\$, which in Laplace can be expressed as \$e^{-s\tau}\$, will lag the phase and affect the plant response. The phase margin can suffer and you should account for its presence (and variability), especially if you shoot for a high bandwidth. Delays are caused by propagation times, conversion times etc. Here, you see that a 250-ns delay incurs a phase lag of 9° at 100 kHz. Not a big deal if you plan on closing the loop for a 1-kHz crossover but if you shoot for a much higher figure, you need to account for its contribution. Finally, keep in mind that the Bode stability argument is meant for minimum-phase transfer functions (no delay or RHP poles and zeroes in the expression). When delays are present, the Bode plot can mislead you in determining the system stability.
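The delay numbers above are easy to reproduce: a pure delay of \$\tau\$ seconds lags the phase by \$360^\circ f\tau\$ at frequency \$f\$ while its magnitude stays at 0 dB:
def delay_phase_deg(f_hz, tau_s):
    return 360.0 * f_hz * tau_s        # phase lag of exp(-s*tau), in degrees
print(delay_phase_deg(100e3, 250e-9))  # 9 degrees at 100 kHz, as in the slide above
print(delay_phase_deg(1e3, 250e-9))    # ~0.09 degrees - negligible at a 1 kHz crossover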
H: Where can I connect the unused input to preserve the logic expression? So there is an unused input for the bottom NAND gate (even though it is not shown in the picture) and the question wants us to connect it while preserving the logic expression. I know when we connect it to the ground, the lower NAND gate never outputs 0. So I assume we are not going to connect it to the ground. But when we connect it to the +5V supply or E, we can get 0 or 1 as an output. Connecting it to E makes it work like it is a two-input NAND gate. I know I have to choose one of them but which one. I would appreciate it if I can get an explanation of what I should do when I come across these kinds of problems. By the way I assumed we needed to connect it to E and made these answers (I would appreciate feedback to these also): WIRING DIAGRAM CIRCUIT SCHEMATIC TRUTH TABLE For the truth table I used this logic expression: !((A&B)&(C&D)&(!(D&E))) AI: when we connect it to the +5V supply or E, we can get 0 or 1 as an output. Connecting it to E makes it work like it is a two-input NAND gate. I know I have to choose one of them but which one. You could do it either way and the logic would work the same. So why choose one over the other? With 2 inputs connected together the signal loading is increased. This could be a problem if the signal driver is close to its fan-out limit. Unused inputs wired to Ground or V+ should take a short route directly to the IC's own supply pin, to avoid possible EMI and supply voltage differences. Many ICs have an input voltage limit of ~0.3V above and below the supply lines, which can easily occur on large boards with high current draw. If the IC is in a socket or breadboard then the supply pin might accidentally become disconnected, causing the IC to draw full supply current through the unused input's protection diode. Connecting adjacent pins usually makes PCB routing easier. In your case you have the obvious choice of connecting pin 13 to pin 14 for logic high and using pins 1 and 2 for inputs, or doing it according to your wiring diagram. Until you route the PCB you won't know which one is more convenient. In more complex circuits with dense layouts it is quite common to see signals routed to different pins than expected. For example a data bus might be wired 'randomly' to a bus buffer or RAM chip because it doesn't care which bit is which. PCB layout programs often have 'pin-swap' and 'gate-swap' functions which allow the PCB designer to easily optimize the layout.
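One quick way to double-check your truth table is to evaluate the exact expression you quoted over all 32 input combinations; note this only verifies the table against that expression, not the expression against the original circuit:
from itertools import product
f = lambda A, B, C, D, E: int(not ((A and B) and (C and D) and (not (D and E))))
for A, B, C, D, E in product((0, 1), repeat=5):
    print(A, B, C, D, E, f(A, B, C, D, E))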
H: Diode required for smoothing capacitor I never quite understood the finer details of why a diode is required for turning capacitors into "smoothing" capacitors instead of just low-pass filters. The simulations I have done in Multisim just seemed to add to the confusion. I realize that capacitors just add a phase shift to the current and do nothing to smooth out AC voltages by themselves unless we add a diode in series with the AC signal. However, the question comes when we try "smoothing" a DC voltage with added thermal noise. Below is a circuit with diode D1: and here is the oscilloscope plot: However, when I remove the diode D1: the plot becomes: Now the biggest issue comes when I change the thermal noise frequency to 100 Hz; now both designs, with or without the diode, are able to perfectly smooth out the noise. The 100 Hz is chosen at random and is not a fixed threshold. The R and C form a basic low-pass filter with a cutoff at around 98 kHz and are modeled as ideal components. My question is: why is the diode required for the capacitor to perform its "smoothing" operations, and why at 100 Hz does it no longer matter if the diode is there? Also, why is a diode needed in general in series with an AC signal for a capacitor to smooth AC into DC? Finally, does that mean I should add a diode to LDO or buck regulators if I want capacitors to "smooth" out ripples instead of bypassing high-frequency components? Thanks AI: The thermal noise of a 1 kΩ resistor with a bandwidth of 10 MHz at 27 °C has an RMS value of 12.87 μV. At 100 Hz of bandwidth, the value is 0.04 μV, i.e. over 300 times smaller. I can't explain your scope noise pictures but the numbers speak for themselves. also why a diode is needed in general in series with an AC signal for capacitor to smooth out AC into DC signals The diode is acting as a peak detector. Once the capacitor is charged up by the voltage peak, the subsequent falling voltage at the diode's anode reverse biases the diode and the capacitor remains charged to that peak. Finally does that mean i should add a diode to LDO or buck regulators No, you shouldn't.
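The two noise figures at the start of the answer come straight from the Johnson-Nyquist formula \$v_n=\sqrt{4kTRB}\$; a quick check:
import math
k = 1.380649e-23                     # Boltzmann constant, J/K
def v_noise_rms(R, B, T=300.15):     # 27 degC = 300.15 K
    return math.sqrt(4 * k * T * R * B)
print(v_noise_rms(1e3, 10e6) * 1e6)  # ~12.9 uV for 1 kohm over 10 MHz
print(v_noise_rms(1e3, 100) * 1e6)   # ~0.04 uV over 100 Hz, i.e. sqrt(1e5) ~ 316 times smaller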
H: What type of transformer is used in LLC converters? Are resonant and pulse transformers the same thing? Often in descriptions of LLC converters, I hear mention of a resonant transformer. On Wiki, I found both resonant and pulse transformers. In the photos of LLC converters, the transformer is very similar to the pulse one. Only there, next to it, there is also a magnetizing inductor (Lm). Questions: What kind of transformer is used in LLC converters? Are a resonant transformer and a pulse transformer the same? If not, what are their main differences (constructional / theoretical)? Thank you! AI: A transformer used in an LLC converter could be a pulse transformer or, in fact, any transformer suitable for use at the operating frequency. It's not the transformer itself that is resonant; it's the use of it along with the capacitor shown as \$C_r\$ that produces a resonant frequency. Providing that the magnetization inductance (\$L_m\$) and leakage inductance (\$L_l\$) of the transformer's primary are well defined and fairly stable, you get a defined resonant frequency. Using an extra inductance external to the transformer makes life easier of course, but I'm not sure in your diagram if that's the case. Only there, next to it, there is also a magnetizing inductor (Lm). All transformers have a primary magnetization inductance - it's basically the inductance of the primary winding (secondary ignored or not connected). It will usually dominate \$L_l\$.
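As a rough illustration of where the resonance actually comes from (all values below are placeholders, not from any particular design): the series inductance \$L_r\$ (external or leakage) together with \$C_r\$ sets one resonant frequency, and adding the magnetizing inductance gives a second, lower one:
import math
Lr, Lm, Cr = 20e-6, 100e-6, 47e-9                          # assumed values
f_series  = 1 / (2 * math.pi * math.sqrt(Lr * Cr))         # ~164 kHz
f_with_Lm = 1 / (2 * math.pi * math.sqrt((Lr + Lm) * Cr))  # ~67 kHz
print(round(f_series / 1e3), round(f_with_Lm / 1e3))       # in kHz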
H: How can single-pole switching of a circuit breaker improve the transient stability of the power system? There are numerous methods we can use to improve the transient stability of the power system. Visualizing methods like using a fast excitation system, high-speed governors or dynamic resistance switching is easy. But how can the implementation of a single-pole circuit breaker increase the transient stability? AI: The majority of faults in a grid transmission system are phase-to-ground. So, we can clear the fault by tripping only the faulted phase. This helps transient stability because you can still transmit power on the 2 un-faulted phases. The easiest way to think about it is to picture a machine connected to the grid via one transmission line. If the machine is operating at some electrical output P, and you open all three poles (phases) of the transmission line, then P will go into speeding up the machine - since it can't get out over the transmission line. Remember, at an operating equilibrium you have P mechanical input = P electrical output + losses. But, if you only open 1 pole, a significant amount of P can still exit the generator and the increase in machine speed will be much less. Read section II of this paper and you will see what I'm describing.
H: Why does an LDO regulator reject ripple better than a normal linear regulator? Both types contain a feedback loop but use a different type of output transistor. A linear regulator uses a BJT; an LDO uses a FET (which can operate at a lower dropout voltage). But what makes an LDO better at noise rejection? AI: HDO (high-dropout regulators) versus LDOs... I'll use positive regulators only for the sake of simplicity... Both can be made with BJTs or FETs, hell you could even use JFETs, vacuum tubes or whatever pass device is suitable for the job. Here's a PNP based LDO. Tradeoffs are what you'd expect: BJTs require base current, so a PNP-based LDO will throw away some current into ground whereas a PMOS LDO will not, which makes the PMOS option more efficient, especially at low dropout where the PNP begins to saturate and requires more base current. However, at low dropout voltages, MOSFETs are quite a bit slower than BJTs, and parasitic capacitance increases, so there's a compromise. Positive LDOs traditionally use P (PNP or P-channel) devices because that's simpler. But you can make a positive LDO with an N device too if you have a higher bias voltage available, or are willing to pack a charge pump. For example, say you have some 1.5V for your CPU, and you need some 1.2V at 1A for your FPGA or whatever. You could use a switcher, but you could also use an LDO; with that voltage drop it would still be pretty efficient. Problem: the chip needs a pass transistor which will fully turn on at 1.2V gate drive and pass the required current with a low dropout. For a PMOS this is a difficult combination. Even worse if you want 1V output. Also the whole drive circuit has to fit in the output voltage, and the internal error amp and reference have to work with the input voltage. Not easy with 1.5 volts. The solution is an NMOS-based LDO which uses an additional higher voltage to power itself and drive the pass transistor. But what makes an LDO better at noise rejection? Mostly the fact that LDOs are more modern, whereas oldskool high-dropout regulators are more ancient. PSRR depends on two things: At high frequency, how much parasitic capacitance there is across the pass device. More capacitance means more noise gets through. So a larger device, capable of more current, will be worse. P-channel will be worse. And since capacitance of transistors increases as Vds or Vce decreases, low dropout will be worse. At lower frequency where the feedback loop is regulating, PSRR depends on how the characteristics of the pass device depend on dropout voltage, and how much feedback is available to correct these variations. For example, at low Vgs a MOSFET begins to turn into a resistor so it has strong dependence of Id versus Vgs, so it will need a stronger gate drive to keep Id where it needs to be. BJTs also get slower close to saturation. On the other hand, at higher dropout voltage, the pass device behaves much better and is faster and easier to drive. All this means that the PSRR performance of any regulator will degrade at low dropout voltage, no matter the polarity of the pass device. This is not visible in high-dropout regulators, but in LDOs it is something the datasheets usually say... without saying it too clearly, because "more bigger numbers sell". So the answer to your question is: LM317 was developed in 1976. Sure, modern regulators can have higher PSRR. This is not due to being LDO or not, but to better IC technology, faster error amps, higher bandwidth, etc.
Note that many modern LDOs are much worse than the LM317, because they compromise to optimize other stuff, like low quiescent current. The LM317 uses a huge quiescent current. For any regulator, no matter the topology, PSRR is worse at low dropout. Modern LDOs get good performance in spite of low dropout because technology has advanced a lot. Example: the ADP151, a pretty good low-noise regulator. Graphs at 2.8V out, 3.3V or 3.5V input. So dropout is 0.5 or 0.7V. Notice the two curves at 10mA are quite similar. However, at 200mA, with the higher 3.5V input, PSRR begins to degrade above 1kHz. And at 3.3V input, with 0.5V dropout, at 200mA it loses about 15dB PSRR over the whole low-frequency range because it's running out of Vds for this current. That's because it's a 200mA regulator, so the internal PMOS is sized for that. Using a larger PMOS would make it more expensive and slower, thus worse at 100mA. So when you check PSRR numbers, always look at the curves: at what frequency, at what current, with what output caps, and at what dropout it's actually measured. If the datasheet proudly says "50mV dropout!!" but the PSRR curves are measured at 0.5V dropout, they'll look a lot worse at 50mV.
H: Possible to control a transistor by BC terminals? I'm thinking about a strange operating mode: the CB junction is forward biased and I am trying to control the EC current with the CB current. The EB junction is reverse biased. Will it work? The transistor is a BJT. AI: Yes, BJTs will work in reverse mode (collector and emitter switched). Most types will have very low gain (5 or 15 rather than 100 or 300) and the breakdown voltage is generally very low, less than 10V. A few are made to be symmetrical and don't much care which way you connect them.
H: Do I need USART1 to program the bootloader with STM32 - Or could I use SWDIO & SWCLK? I have heard that if I buy an empty, factory-made STM32, it will come with no bootloader at all. To program that STM32 with a bootloader, I need to do that through USART1. And later, when I want to load my program onto the STM32, I need to use SWDIO & SWCLK. Is that correct? Or could I program the bootloader with USART1 and SWDIO & SWCLK, the same way I can program the STM32 with USART1 and SWDIO & SWCLK as well? AI: The STM32 will come with a factory bootloader. That bootloader is the one that enables programming over various interfaces if SWD or JTAG is not used. Different models support different interfaces, but USART1 is always supported. You can always use the SWD and JTAG interfaces, as they do not even need to run bootloader code to work.
H: Is there a difference between these two circuits? I might not have drawn the second one right, but imagine if it was a live wire in a bathtub going through the water and the plumbing to ground instead of returning to neutral, except way more volts. Is there a difference between the voltages between the two sets of prongs, if the source voltage is the same and high? AI: Your first drawing is an isolated DC power supply such as a battery with no connection to ground. The voltage measured between the terminals will be the battery voltage. Your second supply has two breaks in the circuit. The supply is still isolated so no current will flow. The voltage measured between the terminals will be zero as the battery negative is not connected to anything. You could test this with a multimeter and a battery: connect one meter lead to an earthed point (cooker / fridge chassis) and the other lead to the positive of a battery (with no connection on the negative). Then how does my bathtub analogy work? Clearly there is current running there. So does that mean that that current comes back into the source through the ground? simulate this circuit – Schematic created using CircuitLab Figure 1. Your domestic supply is earthed at the supply transformer. Earthing the bath ensures that in a fault the voltage of the bath is kept close to earth potential. With a high-current earth fault the fuse will blow. RCD / GFCI devices add an additional level of protection. Also doesn't real current run from the negative, No. The convention is that current flows from positive to negative. It's true that in metals that the charge carriers are electrons and that they drift in the opposite direction to conventional current. In other circumstances current flows in positive ions. Don't get hung up on this. ... so if I connect the positive of the battery to the fridge, there would be 2 reasons for why the multimeter reads 0. No, we're reading voltages so it doesn't matter whether current is flowing or not. Links: https://electronics.stackexchange.com/a/213003/73158.
H: BJT Current Mirror Problem Problem: A simple current mirror is set up with two NPN transistors. \$V_{CC}=12\$ Volts \$+V_{in}= 30\$ Volts. Load resistance \$R_L\ =\ 1\ K\Omega \$. What is the resistance value interval for output transistor to stay in forward-active region and not to be deformed? ( \$V_{BE}=0.7\ V, V_{CE,SAT}=0.6\ V,\ V_{CE,MAX}=25\ V, I_0=I_R\$) simulate this circuit – Schematic created using CircuitLab What I have tried so far: $$ I_{R} = \dfrac{V_{CC}-V_{BE}}{R} $$ $$ I_{R}=I_C+\dfrac{2I_C}{\beta}=I_C \Big(1+\dfrac{2}{\beta}\Big) $$ $$ \therefore I_0=I_C=I_R \times \dfrac{1}{1 + \frac{2}{\beta}} $$ $$ I_0 = I_C = \dfrac{V_{CC}-0.7}{R}\times \dfrac{1}{1+\frac{2}{\beta}}$$ but I am stuck here since \$\beta\$ value is unkown could not take it further. PS: I am not asking for a full solution. AI: Your question is about the so-called "compliance voltage" - a very important property of current sources (despite the respectful name "current mirror", here Q2 is just a simple transistor current source... or, more correctly, a sink). When you begin increasing RL, the voltage drop VRL across it will begin increasing as well. To compensate this increase, Q2 will begin decreasing its collector voltage with the same rate. Figuratively speaking, you can think of the Q2 collector-emitter part as of a variable (dynamic) "resistor" that complements the RL resistance to a constant value; hence the constant output current. Finally, the Q2's Vc will reach VCEsat = 0.6 V. This will happen when VRL = +Vin - 0.6 V and, as they say, Q2 becomes saturated. After that, the "magic" of Q2 constant-current source ceases. It cannot decrease its "resistance" anymore... and the current will begin decreasing (according to Ohm's law) when you continue increasing RL.
H: Why multiple coils on a ferrite rod antenna? I'm trying to build some ferrite rod antennas to test some avalanche rescue technology and I've been looking at the internals of an old BCA Tracker avalanche beacon to figure out how I should go about designing my own. I have two ferrite rods (from Fair-Rite) that I plan to try out and test separately. I've done some hand calculations using the formulas: $$F = \frac{1}{2\pi\sqrt{LC}}$$ (from What do I need in order to build and test a ferrite rod antenna for a specific frequency?) $$L = K \cdot u_0 \cdot u_{rod} \cdot N^2 \cdot (\frac{Ae}{l}) \cdot 10^4$$ (from https://www.fair-rite.com/product/rods-4078377511/) After picking some arbitrary target values for L and C given F = 457kHz, I have a general idea of how many wraps of the wire it will take to get close to the target frequency. However, after looking at the antennas in the BCA Tracker and poking around with the oscilloscope I'm not entirely sure how to start. Here's a photo of one of the BCA antennas: Here are my questions: Why are there three coils and why is the center coil smaller than the outer ones? Why not just one coil centered on the rod? Is it necessary to tape or insulate the rod with something before wrapping it with wire? How would you transmit 457kHz from one of these antennas? Is it as simple as applying a voltage to one of these coils? AI: Here is a link to a bunch of AM radio schematics. A common theme is that the loopstick antenna is either a transformer or an autotransformer, tuned to resonate at the desired broadcast frequency by one-half of a dual-gang variable capacitor that also tunes the local oscillator. As above, this covers (in theory), two of the three coils in your image. https://images.search.yahoo.com/search/images;_ylt=A2KLfRtt.5VfaeQAktJXNyoA;_ylu=Y29sbwNiZjEEcG9zAzEEdnRpZANBMDYxNV8xBHNlYwNwaXZz?p=six+transistor+am+radio+schematic&fr2=piv-web#id=5&iurl=http%3A%2F%2Fwww.sw-em.com%2Fam_radio_schematic2.jpg&action=click
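To get a feel for the numbers before winding anything, the resonance formula from the question can be turned into a short script. The target inductance, the trial-coil measurement and its turn count below are illustrative assumptions, not values from the Fair-Rite datasheet; the useful trick is that L scales with N squared, so one measured test winding lets you scale to the turns you need:

import math

f0 = 457e3                 # avalanche beacon frequency, Hz
L_target = 1e-3            # assumed target inductance, H (pick to suit your capacitors)
C = 1 / ((2 * math.pi * f0) ** 2 * L_target)
print(C)                   # ~121 pF for 1 mH at 457 kHz

# L is proportional to N^2, so scale a measured trial winding:
L_measured, N_trial = 250e-6, 20     # hypothetical 20-turn test coil measuring 250 uH
N_needed = N_trial * math.sqrt(L_target / L_measured)
print(N_needed)            # ~40 turns for these example numbers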
H: Why is the output current not equal to the input current in this current mirroring circuit? Why is the output current not equal to the reference current in this current mirroring circuit? (I have been explicitly told that they are not equal). Both of the transistors are equal and in the saturation region. I thought that the voltage applied to the drain of the second MOSFET (Q4) did not alter the output current as long as it remained in saturation? AI: As a hint, what effects in a MOSFET cause the channel current to depend on the drain-source voltage, even in saturation mode?
H: How do you choose components for a GPS antenna I have to solder an active GPS antenna on a PCB. But I don’t know how to choose components. In my case the antenna will be used outdoors on a pet collar. First of all there are a lot of different ceramic antennas; what are their differences? Mainly they have various sizes but other parameters are the same. I’ve read on one forum that smaller ones are much worse. And amplifiers: I want to use some cheap circuit like here - https://www.arrow.com/en/categories/amplifiers/gps/gps-amplifiers But most of them have very poor datasheets. How do I choose one? Could they be connected to a passive ceramic antenna directly? Are other components needed, like filters? What would be the main differences between such an antenna and, e.g., a topgnss active one? AI: If I were doing this as a one-off or in small volume I'd choose an RF chain that had the most chance of working with the minimum of effort. That would almost certainly have antenna, RF amplifier if used and GPS IC integrated in a module that I connected power and basic signals to. Even trying to implement a reference design with separate antenna + amplifier + GPS IC would be liable to be one to quite possibly two orders of magnitude harder. (= 10 to 100 times harder.) It may be small and look trivial, but antenna implementation can be demanding beyond reasonable belief. If meeting an extraordinary challenge is not the aim then learning how to tame GHz-level RF is best left for another occasion - one for each time the need arises :-). As for specs - I'd read spec sheets. If a device did not have a good spec sheet I'd look for one that did - I'd buy a poor spec sheet device only if it was well known and recommended by a large community of users and if there were technically compelling reasons to do so.
H: How to read MOSFET datasheet to determine when the MOSFET is fully on I'm currently trying to find an N-channel MOSFET to act as a switch to be controlled by my Arduino, and I've come across two particular MOSFETs so far that seem like they would be a good fit to the application I am trying to do. The first one is this one: https://components101.com/mosfets/irfz44n-datasheet-pinout-features The second one is this one: https://www.infineon.com/cms/en/product/power/mosfet/12v-300v-n-channel-power-mosfet/irlz44n/. For the first MOSFET, in that link it says that the MOSFET will turn on starting at 4V and won't fully turn on until it exceeds 10V, so it's recommended to use the second MOSFET I've linked. I'm kinda new to this so I was wondering how do I know that the first MOSFET requires 10V to fully turn on? I looked at the datasheet and while I don't understand most of it, I did see the section labelled as Vgs(TH). The min voltage stated is 2V while the max voltage stated is 4V. Does the 2V not mean anything then? Why does the datasheet state 4V as the max Vgs(TH)? Looking at the absolute maximum ratings, it seems that Vgs should get nowhere near 20V, so how do I know that at 10V it will start to be fully on to use as a switch? In the link to the first MOSFET, it also actually recommends using the second MOSFET I've linked if I want to use a microcontroller to control the transistor. They've come to that conclusion because the second MOSFET can be fully turned on using the 5V from an Arduino. AI: In this specific case the way you can tell that the second one is a better choice is that Rds(on) is specified down to 4V, but in the first one, Rds(on) is specified only at 10V. In power switching applications I usually try to stick to MOSFETs that have Rds(on) specified at the Vgs I will be using (or lower). Vgs(th) is only marginally useful in figuring out power switching applications. The reason is that Vgs(th) is usually defined as the gate-source voltage when drain current = 250uA or 1mA or some other low current. In order to switch a high current with low Rds, the voltage will need to be much higher than Vgs(th). Vgs(th) may be a useful parameter in cases where the drain current you plan to switch is very low, comparable to the drain current used in defining and measuring Vgs(th). In cases where Rds is not specified at your voltage, you may be able to figure it out or at least constrain the possibilities by studying the curves in the datasheet. But it is easier to find another FET that has guaranteed Rds at or below the voltage you plan to use.
H: Australia–ASEAN Power Link - why not build the solar farm near Singapore? World's Largest Solar Farm to Be Built in Australia The 10GW, 120 square km solar farm in Australia is connected to Singapore via a 4,500-kilometre high-voltage direct current (HVDC) network. The Australian desert might be a good location for the solar farm, but how does that justify the 4,500-kilometre HVDC? Does Singapore have cloudy weather, perhaps? (It is closer to the equator.) Wikipedia, Guardian AI: The land area required is unavailable in Singapore. The land area is potentially available in Malaysia or Indonesia. Land in the Australian outback is available and generally of low cost relative to even Asian locations. Australia is liable to be politically reliable over the long term relative to other potentially viable countries. This is liable to be a significant factor. The cable has to transit Indonesia (or it has been chosen to do so for a small part of the route) but presumably this is considered to be an acceptable risk. For perspective. A Haifa, Israel to London England great circle path is 3500 km. (2200 miles). Add up to 1000 km for political expediency and you are up to the 4500 km (2800 mile) Australia-Singapore distance. Finding room in the Negev desert should 'not be too hard'. Whether security warrants such is moot. Tunis to Moscow is about 3300 km. (2050 mi.) Tunis to Tehran is about 3900 km. (2400 mi.) Nice great circle and driving distance calculator here Haifa - London - 3500 km Electrical technical: Technically the cable is a HVDC link. Brian correctly notes that the recently commissioned 3500 km Chinese 12 GW 1.1 MV HVDC link greatly exceeds the SUED link. I had said - I have been unable to find projected voltage but the previously longest proposed SuedLink in Germany will be 750 km (465 mi.) and 525 kV DC. Long distance submarine high voltage links are invariably HVDC due to the losses an AC system would suffer from the much higher cable capacitance when water rather than air surrounds the conductors. In New Zealand the HVDC Inter-island link has a 40 km (25 mi.) submarine section and 610 km (380 mi.) total length and operates at 1.2 GW. The whole link operates at DC despite the submarine portion being only about 7% of the length. (3 submarine cables for redundancy - very occasionally someone damages one despite VERY comprehensive precautions. That must be exciting! :-) ). Commissioned in 1965 with mercury arc tubes it was latterly upgraded to (large) thyristors in 1992. If the Australia-Singapore ASPL link operating at 10 GW operated at 1 MV differential it would require 10,000 A - not a vast amount IF length is ignored. To lose say 1% of energy resistively the cable resistance needs to be about 1 Ohm. Higher losses would allow a higher resistance approximately linearly with loss %. For 1% loss I estimate that a pair of copper cables would need to be about 155,000 mm^2 (~305,000 kcmil) ~= 450 mm (18") diameter each, for the actual conductor, to achieve 1 Ohm loop resistance over 4500 km. (E&OE). Higher loop resistance at the same current would see losses increase in proportion to the resistance, i.e. inversely with conductor area. Use of aluminium or other material would probably allow a lower cost if secondary factors were not overly significant. For comparison, the two main cables on the Golden Gate bridge have a diameter of slightly over 900 mm (36") diameter each. So it's BIG, but not beyond imagination. For many related websites see this Gargoyle search. Very nice technical cable guide - 92 pages Power cable catalog here - 222 pages.
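The back-of-envelope cable sizing above is easy to reproduce. A rough check, using the text's assumed 10 GW / 1 MV figures, a standard copper resistivity and nothing else (so only a sanity check, not a cable design):

import math

P, V = 10e9, 1e6                  # link power (W) and assumed pole-to-pole voltage (V)
I = P / V                         # 10 kA
loss_fraction = 0.01
R_loop = loss_fraction * P / I**2 # 1 ohm of loop resistance for 1% resistive loss

rho_cu = 1.72e-8                  # ohm-metre, annealed copper
loop_length = 2 * 4500e3          # go and return conductors, metres
A = rho_cu * loop_length / R_loop # cross-section of each conductor, m^2
d = 2 * math.sqrt(A / math.pi)

print(I, R_loop)                  # 10000.0 A, 1.0 ohm
print(A * 1e6, d * 1e3)           # ~155,000 mm^2 and ~444 mm diameter per conductor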
H: Can this capacitor circuit on the left be rewritten as the one on the right? In other words, it should be in parallel. AI: Figures 1a and 1b. What's important here is what nodes the components are connected to. It should be clear that in both (a) and (b) above C2 is connected to A and B in both configurations and that C3 is too. The circuits are equivalent. Because the ends of both components share common nodes they are deemed to be in parallel. The schematic on the right makes this more obvious and would generally be considered a better way to draw the circuit.
H: Seeking deeper understanding of reading a schematic with an op-amp IC (AD-8574) A couple of days ago I was seeking some education on reading a schematic; I had great answers in the replies and I feel like I am closer. What I am trying to get my head around is the use of a dual channel op-amp circuit. I think the circuit shown below is indicating the exact part number I need to use, as well as the pins I would use from the IC itself. Source: Schematic was copied from page 4 of the manufacturer's product datasheet. Zhengzhou Winsen Electronics Technology Co.,Ltd Looking at the part number's datasheet (the source of the below pin out diagram as taken from Mouser.com) it looks like I would just need one of these chips on the board, is that correct? Is that IC in the pinout a quad channel operational amplifier? And last question, in the schematic above, is the schematic indicating the actual pin numbers to run the traces to? Seems like a clearly defined blueprint of what exactly I should be doing or am I just being overly optimistic? AI: Is that IC in the pinout a quad channel operational amplifier? Figure 1. It's written so big that you missed it! Yes. It's a quad amplifier. ... is the schematic indicating the actual pin numbers to run the traces to? Yes. Seems like a clearly defined blueprint of what exactly I should be doing or am I just being overly optimistic? The schematic shows the schema and good ones make it clear what each section is supposed to do. At high frequency layout becomes important as stray capacitance and inductance can affect the signals. This circuit will probably run fine on breadboard or prototyping board if you're running, say, at audio frequencies or less.
H: Wheatstone bridge: Modifications for different temperature sensor resistance I am trying to build a Wheatstone circuit to detect temperature changes of a low resistance (about 10 ohm) wire. For a first try I would like to use a circuit proposed in this Maxim Application Note 6144 How to Linearize Wheatstone-Bridge Circuit for Better Performance, Figure 7: I am a beginner in using and understanding schematics. So in general I have two questions for the beginning: As stated, the circuit in figure 7 is an example to be used with a PT1000. I guess that's why R1 and R3 were chosen to be 1k Ohm? What needs to be changed if R3 is only 10 ohm for example? In the example, some capacitors don't have values. Some are names C_FLY, C_FELT, C_HOLD or don't have a name. Can you guys explain to me why they are named like that and what values would make sense in this circuit? AI: As stated, the circuit in figure 7 is an example to be used with a PT1000. I guess that's why R1 and R3 were chosen to be 1k Ohm? Exactly! And what needs to be changed if R3 is only 10 Ohm for example? In a Wheatstone bridge, you want the resistors on opposite sites to have the same product. So, in this circuit, $$R_1 \cdot R_4 \overset!= R_2 \cdot R_3.$$ So, if R3 is 10 Ω, you could keep R2 and R4 as 10 kΩ, and set R1 to 10 Ω. However, it's usually a good idea to keep the relative orders of magnitude, so, R3=R1=10 Ω, R2=R4=100 Ω would theoretically work better. You might be getting into current sourcing ability problems of your 2.5V source and your opamps with a 100Ω load only, but: I've never seen a 10 Ω temperature resistor (not a good idea to make the impedance of sensor so low), so I presume this remains a theoretical problem. In the example some capacitors don't have values. Some are names C_FLY, C_FELT, C_HOLD or don't have a name. Can you guys explain to me why they are named like that and what values would make sense in this circuit? Yes, but it makes no sense for us to explain them: they're explained in the MAX44267's datasheet quite explicitly, and better than we could do it here ad hoc in an answer. Watch out, though: the labels on the schematic in the Application Note are um, badly done. C_FELT is connected to pin 6, which is Vss according to the datasheet. So that's the "low-ESR bypass capacitor C_FILT" described in the table at datasheet page 11.
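A quick numeric check of the balance condition and of what the differential bridge output looks like for a small sensor change. The values below follow the 10 Ω / 100 Ω suggestion above and the 2.5 V excitation from the schematic; the exact arm arrangement is assumed to be two simple dividers, so treat this as an illustration rather than a transcription of Figure 7:

R1, R2, R4 = 10.0, 100.0, 100.0   # fixed arms for a nominally 10-ohm sensor (R3)
Vexc = 2.5

print(R1 * R4 == R2 * 10.0)       # True -> balanced when the sensor is at 10 ohm

def bridge_out(r3):
    # difference between the two divider midpoints
    return Vexc * (r3 / (r3 + R4) - R1 / (R1 + R2))

print(bridge_out(10.0))           # 0 V at balance
print(bridge_out(10.5))           # ~10 mV for a 5% change in the sensing wire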
H: Transistor module and its maximum allowable power dissipation I have a question regarding the selection of transistor modules. These modules often have a parallel diode. When calculating the losses and thus the expected heat development, I am not sure how these diodes have to be considered. The maximum allowed power dissipation is (according to this Infineon application note, page 12 equation 2): I know that both the transistor and the diode have a thermal resistance. Still, I don't see how it affects the temperature rise since, e.g. in a buck-boost converter, the diode and transistor in the module will never conduct at the same time. AI: Still, I don't see how it affects the temperature rise since, e.g. in a buck-boost converter, the diode and transistor in the module will never conduct at the same time. Just because two circuit elements are electrically connected in series or parallel doesn't mean their thermal paths are connected the same way. In fact, normally we'd treat the thermal paths as independent, unless the parts are actually physically stacked on top of each other or something. The concept of thermal resistance is independent of electrical resistance. We call it a resistance because the equation for thermal conduction (\$W=\frac{\Delta T}{\theta}\$) has the same form as the equation for electrical conduction (\$I=\frac{V}{R}\$). But that doesn't mean that the thermal path and the electrical path are the same. Generally the thermal path for a semiconductor device is from the junction area through the semiconductor, through the package leadframe, to the surrounding air. That is why your thermal resistance formula has two resistance terms, the first one, \$R_{thJC}\$, from junction to case and the second one, \$R_{thCA}\$, from case to "ambient". So the fact that the transistor and diode are not electrically conducting at the same time has no relation to the fact that they are both conducting heat from their active regions to the air around them pretty much all the time. These modules often have a parallel diode. In the device you provided a link to, it's not clear that there is any separate diode. The diode shown in the equivalent circuit diagram most likely is just the body diode formed between the gate and drain implant regions and the bulk or body of the transistor. More details about the behavior of the body diode of SiC MOSFETs can be found in a Rohm application note.
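A minimal sketch of how the two thermal resistances in that formula get used in practice when both dies sit in the same package on a shared heatsink. All numbers below are made up for illustration, not taken from the Infineon datasheet:

P_transistor = 15.0            # W, assumed conduction + switching loss in the MOSFET
P_diode = 5.0                  # W, assumed loss in the (body) diode during dead time
Rth_JC_fet, Rth_JC_diode = 0.5, 1.0   # K/W, junction-to-case (assumed)
Rth_CA = 2.0                   # K/W, shared case/heatsink-to-ambient (assumed)
T_amb = 40.0                   # deg C

# Both dies dump heat into the same case, so the case temperature rise is set
# by the *total* dissipation even though the two parts never conduct
# electrically at the same time.
T_case = T_amb + (P_transistor + P_diode) * Rth_CA
T_j_fet = T_case + P_transistor * Rth_JC_fet
T_j_diode = T_case + P_diode * Rth_JC_diode
print(T_case, T_j_fet, T_j_diode)     # 80.0, 87.5, 85.0 deg C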
H: Where to select the voltage from, from a voltage divider? I want to utilize 2.5V, from a 5V source. This is a diagram: My question is, where do I draw the 2.5 volts from? Is it from the wire I painted pink, or from the wire I painted green? Or it doesn't matter since the voltage in the sides of a resistor is the same? I have to make sure. AI: simulate this circuit – Schematic created using CircuitLab Figure 1. An adjustable potential divider and the fixed equivalent potential divider when the wiper is in the centre position. In both cases the voltage at the bottom is 0 V, at the top it's 5 V and in the middle it's 2.5 V. The difference with the potentiometer is that it's adjustable. Turn the wiper to the bottom and you get zero volts. Slide it up and you can get any voltage you want between 0 and 5 V. It adjusts the "potential" (voltage) between the wiper and the terminals, hence "potentiometer".
H: Understand an I2C clock line implementation I am trying to understand a C function from a legacy project which is supposed to manage the clock line for an I2C interface. It basically does as follows: Set pin to output: P0_1 |= 0b00000010; and then drives it low: P0_1 = 0;. Or set as input: P0_1 &= 0b11111101; and wait for the clock line to be high: while (!P0_1) { wait(1); }. What I don't understand is the second case, code comment says: Set as input with pull-up to wait for the slave to be ready before driving the clock line to high. How does this work? Is this how the master slows down its clock rate to let the slave follow it? AI: This is related to a feature of the IIC protocol called clock stretching. If a peripheral device is unable to process data from the bus master in time (or prepare its output to be transmitted back to the master), it will continue to hold the SCL line low (remember that it is open-drain, so there is no bus contention), until it is ready for transmission to continue. Once it releases the clock to indicate that it is ready for the next cycle, the master can then continue clocking the bus. An IIC-protocol-compliant master is required to support clock stretching after the ACK bit (in high-speed mode) and at all rising edge clock transitions (in low-speed mode). If it uses a delay to blindly clock the bus, it may transmit/issue edges before the peripheral is ready, causing communication failures. Famously, the Raspberry Pi had a defect in its drivers/firmware such that it did not wait for peripherals that were stretching the clock -- a delay-only realization of an I2C master would suffer from this exact defect unless it were only used with peripherals guaranteed not to stretch the clock.
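The same "drive low, release and wait" idea, expressed in Python with RPi.GPIO purely as an illustration of the open-drain trick in the legacy C code; the pin number and timeout are arbitrary choices, not part of the original project:

import time
import RPi.GPIO as GPIO

SCL = 3                          # arbitrary pin for illustration
GPIO.setmode(GPIO.BCM)

def scl_low():
    # actively drive the clock line low
    GPIO.setup(SCL, GPIO.OUT)
    GPIO.output(SCL, GPIO.LOW)

def scl_release(timeout=0.01):
    # release the line (input + pull-up) and wait for the slave to let it rise
    GPIO.setup(SCL, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    t0 = time.time()
    while GPIO.input(SCL) == 0:  # slave is stretching the clock
        if time.time() - t0 > timeout:
            raise TimeoutError("SCL stuck low")
    # line is now high; the master may generate the next clock edge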
H: Serial Communication with HV5623 I'm just trying to understand the communication protocol with the HV5623. Here is the Function table: I want to be able to hold the HV output while I shift in the new states into the shift register, then enable the HV output, then rinse and repeat. My understanding is that I should send the data with LE low, BL high, and POL high and then pulse LE in order to set the shift register values to the HV output. For sending data am I able to use I2C where Data->SDA and CLK->SCL? Does I2C continuously pulse the clock or does it only pulse the clock when there is a bit to send? My understanding is that if the clock only pulses when data is being sent then after the data is shifted in, this set up will work, but if the clock is always running, then the LE pulled high will have to be timed fairly precisely. Any confirmation (or explanation of why I'm wrong) that using I2C with a switching LE and BL/POL held high would work would be appreciated. AI: An I2C host would fail to communicate with this chip: the HV5623 is not an I2C device, so it does not speak the I2C protocol (addressing, ACK bits, open-drain signalling). The interface is a simple serial shift register, so your plan of shifting the data in and then pulsing LE is fine - just clock it with an SPI peripheral or by toggling GPIO pins in software, not with an I2C controller. You can leave BL and POL high.
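A GPIO bit-bang sketch of that sequence (shift with LE low, then pulse LE to latch), written with RPi.GPIO purely as an illustration. The pin numbers and the pattern length are arbitrary, BL and POL are assumed tied high in hardware, and the clock edge on which the HV5623 samples DATA should be confirmed against the datasheet timing diagram before relying on this:

import RPi.GPIO as GPIO

DATA, CLK, LE = 17, 27, 22            # arbitrary pin assignments
GPIO.setmode(GPIO.BCM)
for pin in (DATA, CLK, LE):
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def write_outputs(bits):
    # shift all bits in with LE low, then latch them onto the HV outputs
    for b in bits:
        GPIO.output(DATA, GPIO.HIGH if b else GPIO.LOW)
        GPIO.output(CLK, GPIO.HIGH)   # assumed: data clocked in on this edge
        GPIO.output(CLK, GPIO.LOW)
    GPIO.output(LE, GPIO.HIGH)        # latch: register contents appear on the outputs
    GPIO.output(LE, GPIO.LOW)

write_outputs([1, 0] * 16)            # example pattern; match the register length of your part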
H: Need Clarification on Choke Placement (Vintage PS) I came into possession of an old Electro D-612T variable DC power supply. Based on what I've read here and other places it was manufactured in the 1950s, and sold for a whopping $39.95 US at Radio Shack, which today would run around $390 US. These were apparently used quite often at car radio repair shops. I wanted to clean it up a bit so that it could be used 'safely' in the garage for odds and ends where I don't need super clean filtered DC, and might need higher currents. This meant removing the selenium rectifiers and cloth insulated wiring. I started tearing it apart last year, and as these things sometimes go I did not document the original wiring. After sorting out the windings for the variac portion, the rest is fairly simple to put back together. This brings me to my question. There is a reasonably large iron core coil, which I am fairly certain is meant as an AC line choke, but wanted to get some clarification in case someone else on here has come across one of theses supplies or similar. I have seen chokes in configurations both before and after the variac on various power supply circuits, but in this particular case I am leaning towards it being on the input due to the fact that the variac steps down the input mains voltages to a range of 0 to 12VAC. So I am thinking it would be more effective to filter harmonics / noise before the step down occurs. Would love to get feedback on this to make sure I do this correctly. AI: This site says this about the circuit (clues): - It has two ranges, 0-6 VDC and 0-16 VDC, and meters output voltage and current. It can deliver 10 amps at 12 volts continuously and 20 amps intermittently. It consists of a variable transformer, four large selenium rectifiers in a bridge, and an L-C filter consisting of an 8 millihenry choke and a 6,000 microfarad capacitor. Those of you used to dealing with chokes on the order of 10 henries in the high voltage supplies of radios and other tubed equipment may think that 8 millihenries is mighty small, but this choke is physically large to handle more than 20 amps of intermittent current, and the large filter capacitor makes up for its small inductance. The clue from above is that it talks about the choke (8 mH) being on the 20 amp side and that has to mean on the low voltage side of the variac. So I am thinking it would be more effective to filter harmonics / noise before the step down occurs. Not according to the website above if we're talking about the 8 mH choke (no other is mentioned). It should go on the AC feed to the rectifier and not on the DC output because it would more readily saturate the inductor core on the DC output and that is to be avoided. If in doubt, use a simulator to check the basic functionality. One more thing; it now being on the low voltage side means there is less danger of breakdown causing a fire.
H: Switching based on voltage I am trying to figure out the supply voltage for a 12 V induction probe. The probe requires 12 V and the supply could be either 12 or 24 V. Currently, I have a SPDT switch to manually choose the supply voltage. When the switch is set to 24 V the output goes through a 12 V voltage regulator and when the switch is set to 12 V it just passes through. I was thinking a buck-boost converter would work, but space is limited. I am looking at using a low-side transistor switch as a possible solution, but I am not sure. Probe LJ18A3-8-Z/BX http://www.zyltech.com/18mm-proximity-sensor-lj18a3-8-z-bx/ The probe's description says it supports DC 6-36V, but I have found that I get the most consistent measurements at 12 V. AI: That's a proximity sensor, not a "probe". Unfortunately there is no datasheet in the link you provided - just a basic specification so it may not be a good choice. The probe's description says it supports DC 6 - 36 V, but I have found that I get the most consistent measurements at 12 V. This may be true although it is unlikely. The sensor will (on a good one) regulate the voltage internally so that the sensitivity is independent of supply voltage. I am looking at using a low-side transistor switch as a possible solution, but I am not sure. A low-side transistor switch is not a voltage regulator. It's a switch. Check your hunch about voltage sensitivity by mounting it in a way that you can make repeatable measurements. Be aware that their recommended method of operation is to have the target move across the face rather than towards the face. Edit your measurements into your question.
H: What does this plot represent for 3 phase induction motor? I got a 3 phase induction motor with the following plot (please see below). On Y axis: Pm + Pfe represent the mechanical and iron losses, Pm being the mechanical losses, and PFe the iron losses. On X axis: That's the nominal electrical voltage U. In this case U10n = U1n. The 3 phase induction motor is supplied at voltages between 80% U1n and U1n, where U10 is the supply voltage. Can someone please explain why the plot behaves like that? Like what would be the mechanism behind that behaviour? Many thanks! AI: Mechanical losses are bearing friction and aerodynamic drag. Those losses are not influenced by voltage. Power lost to bearing friction is directly proportional to speed because the torque loss is not much influenced by speed, but power is torque multiplied by speed. Power lost to aerodynamic drag is proportional to the cube of speed because it is like the power to drive a fan. Some of the drag loss is literally due to driving an internal or external fan that cools the motor. The fact that mechanical losses are constant tells you that the change in voltage has no effect on speed. Not much speed change would be expected if the voltage changes without changing the frequency. However changing the voltage does change the available torque capability. For that reason, it is likely that the plot represents on-load motor operation. Iron loss in an induction motor is proportional to voltage. Increasing the voltage increases the motor's magnetizing current and the iron losses. That loss is represented in the equivalent circuit by a resistor that is nearly directly across the power line terminals. Only the small stator winding resistance and inductance is in series.
H: EAGLE - set part to a different layer without mirroring it I need to set a new layer for a part without actually mirroring it. I have many pin headers already set up on my board and wired up, but I need some of them to face the other way (Set them to bottom layer). This is just an aesthetic feature, but it would be nice to know if I should populate it on top or on the bottom layer. I can't just mirror it, because the pins would be mismatched and would not work. Is there some way this could be done? Creating a new library part would be a possible solution, but with how many pin headers (and different sizes of them) there are, it would be very time consuming. (But it seems this is the only way so far) The only thing I want is to print the outline on bottom layer of the silkscreen. On the attached picture, I need one of them to face the other way, without changing the wiring, if possible. Thanks. Edit: I only need the outline of my pin header (what is visible on the silkscreen after manufacturing) to be on the bottom layer. The routing will stay the same. AI: OK, I think I understand based on your comment. You want the silkscreen to move to the bottom for one of the two headers you've depicted. The easiest way to achieve that will be to make a duplicate library part, then reassign the layer of the silkcreen from tPlace to bPlace. Then in your schematic, replace the component(s) you want to be drawn on the back with your duplicate library component by right-clicking on it and choosing "Replace" and selecting it from library. Changing the layer of the silkscreen in the library editor is most easily achieved by turning off all the layers except tPlace, then use the "Wrench" tool to select Layer => bPlace and just click on all the tPlace segments. You can speed this up a little by using the select area tool first to select the whole group at once, then click the Wrench tool, then click to apply to the whole group at once.
H: MAX233 problem with receiving data I need to make a serial connection between regular PCs and a 3.3 V TTL device. I am using the Maxim MAX233, which has internal capacitors. I have the PC's TD (incoming data) connected to R1_IN and its RD connected to T1_OUT. The ground, pin 5, is connected to the frame ground as well, and it's connected to the MAX233's GND. What is unique in my setup is that I am running this chip from 3.3 V instead of 5 V, because if I run it from 5 V it will give out 5 V on its low-voltage TTL side and can hurt the micro. The outgoing data from the device to the PC works. The incoming data from the PC to the device does not. I can measure -10 V if I check between GND and the incoming wire. Any ideas how to debug this any further? All the connections are triple checked and if it works one way there must be some minor issue. I use the same 9600 baud data rate for all tests on both ends. I have tried replacing the chip just in case; that didn't solve anything. AI: The datasheet specifies an operating supply voltage between 4.5 V and 5.5 V, so the part is not rated to work at 3.3 V.
H: programmable timer / clock generator I need to generate a square wave / clock with configurable frequency between say 60 to 500Hz. There are not stringent requirements on the frequency stability. I need to be able to modulate this frequency at run time periodically (say once every few seconds) using an external uC using either GPIO or I2C. I have looked at programmable oscillators and programmable alarm ICs and have yet to find any that can do what I want. I am OK with having a roll your own type of circuit for this as long as it's not super complicated. The purpose of this is to provide timing control to some external power FETs which will switch a regulated DC voltage on and off. I don't want this switching to be halted if the software running on the uC dies for some reason. So I want an external IC to control the switching. Can you recommend a circuit that can meet my needs? AI: Most microcontrollers have a PWM/counter logic block available that runs independently of the CPU. Check the datasheet to see if that's available on your micro; it's free and easy.
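As one concrete illustration, if the micro happens to run MicroPython (an ESP32, say), the hardware PWM peripheral is a few lines to set up and retune; the pin number is an arbitrary choice here, and the output keeps toggling even while the CPU is busy with other code (though not, of course, through a full chip reset):

from machine import Pin, PWM

pwm = PWM(Pin(4))          # arbitrary output pin driving the FET gate driver
pwm.freq(60)               # anywhere in the 60-500 Hz range
pwm.duty_u16(32768)        # ~50% duty cycle (older ports use duty() instead)

# later, e.g. once every few seconds:
pwm.freq(250)              # retune at run time; the peripheral keeps running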
H: Can this battery power the motors? I'm trying to find a battery which can power two 24 volt motors. They each use 12 amps, so I think collectively they will use 24 amps. I want the battery to at least last an hour. This battery I'm considering supposedly has 42980 mAh but I'm not sure because it says 3.7/42980mAh. Is this normal or am I being misled into thinking it has 42980 mAh when outputting 24 volts? If you need more information, it can be found at the listing https://www.amazon.com/TalentCell-PB240B1-Rechargeable-Batteries-Smartphone/dp/B07SWBS55F/ref=cm_cr_arp_d_product_top?ie=UTF8. If it can provide 42980 mAh it should provide more than an hour because: Approximate Running Time(h)=Battery Capacity(mAh) / Operating Current(mA) =42980/24000 = 1.79; So it should provide roughly 1 hour 48 minutes. The two main things I am concerned about are whether two 24 volt motors drawing from the same 24 volt battery will cause a dramatic voltage drop, and whether I am correct in the assumption that the battery indeed has 42980 mAh. Here's the main question: Am I right in the assumption that this battery can power both motors? As you can probably tell I'm new to electronics and any help would be greatly appreciated! I'm not sure if it's helpful but here are the motor stats: AI: The first line in the "About this item" section says "output: 24V (29.4V-21V. Same Voltage Compare to 24V Lead Acid Battery) /5A Max.", so it is not suitable - it can't deliver enough current for one motor. From the specs, I suspect that the 42980 mAH is at 3.6 volts, so the capacity would only be about 6400 mAH at 24 volts.
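The capacity arithmetic is easy to redo once you convert the cell-level mAh rating into energy. This check uses the listing's 3.7 V figure (the answer assumed 3.6 V, which changes the result only slightly):

capacity_mAh_at_cell = 42980
cell_voltage = 3.7                  # the listing's "3.7/42980mAh"
energy_Wh = capacity_mAh_at_cell / 1000 * cell_voltage
capacity_Ah_at_24V = energy_Wh / 24
load_current_A = 24                 # two motors at 12 A each
runtime_h = capacity_Ah_at_24V / load_current_A

print(energy_Wh)                    # ~159 Wh
print(capacity_Ah_at_24V)           # ~6.6 Ah when delivered at 24 V
print(runtime_h * 60)               # ~17 minutes - and that ignores the 5 A output limit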
H: Modeling IF transformers in PSpice How can I create an "IF AMPLIFIER TRANSFORMER" in PSpice? Which one should I choose? "Forward" or "Two Winding"? And, if possible, based on this site (https://www.electronics-tutorials.com/filters/if-amplifier-transformers.htm#:~:text=IF%20amplifier%20transformers%20are%20simply,coupled%20by%20an%20active%20device.), tell me what values I should use for the inductor, resistance, and transformer ratio. Circuit: I need a "RED IF" in this circuit! Input: Carrier: 1.6 Vp-p | 200-400 kHz Message: 0.3 Vp-p | 1 kHz AI: Double-tuned IF transformers need sharp enough attenuation (a high enough quality factor) to adequately suppress equally strong signals that are only two channel spacings away. With such attenuation of nearby stations, even city listening to AM has some hope of succeeding. Assume you want 30 dB attenuation, double-tuned, for 20 kHz carrier spacing (twice the standard USA spacing of 10 kHz).
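As a starting point for picking simulation values, here is a sketch that chooses an inductance, computes the capacitance needed to resonate at an assumed centre frequency, and estimates the coupling coefficient for critical coupling between the two tuned windings. All numbers are illustrative only (substitute your own carrier frequency and bandwidth), not a finished transformer design:

import math

f0 = 455e3            # assumed centre frequency; the classic AM IF is used as an example
bw = 10e3             # assumed -3 dB bandwidth per winding, Hz
Q = f0 / bw           # loaded Q of each tuned winding, ~45 here
L = 200e-6            # chosen inductance, H (illustrative)
C = 1 / ((2 * math.pi * f0) ** 2 * L)
k_crit = 1 / Q        # critical coupling for two equal-Q tuned windings

print(Q, C, k_crit)   # ~45.5, ~612 pF, ~0.022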
H: How does this RC circuit in a fan regulator work? simulate this circuit – Schematic created using CircuitLab I came across this circuit in this link, but I am not able to understand how this RC circuit works. Could someone explain it to me? Should I consider these as parallel RC circuits? If the input is a 220V 50Hz sine wave, what will the waveform be? How are the capacitor values calculated? AI: The schematic is poorly drawn, if you would connect a fan to the red and black wires then the fan can never run as it receives no power. Assuming this circuit is in series with the fan and that series combination is connected to AC mains voltage, then the relays either make: a direct connection between the red and black wires, the fan will run at full speed. or connect the left circuit (R1, R2, C1) and/or the right circuit (R3, R4, C2) in series with the fan. R2 and R4 are only 2.2 ohms so these don't do much (you can consider them a short). Probably R2 and R4 are added to limit the current when the capacitors are discharged and the mains voltage is applied. R1 and R3 have a high value, they are there to discharge the capacitors C1 and C2 when everything is switched off. C1 has a value of 2.2 uF and C2 is 3.3 uF. These capacitors are connected in series with the fan. These capacitors have a certain impedance (similar to resistance) at the mains frequency. A larger capacitor (C2) will have a lower impedance. The impedance of these capacitors is in series with the fan and that makes the fan run slower. All waveforms remain sinusoids as these RC circuits are linear, they do not switch.
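To see why the different capacitor values give different fan speeds, compare their impedance magnitudes at 50 Hz (the 2.2 Ω resistors are ignored, as suggested above; the parallel case assumes both relays switch their capacitor in at the same time):

import math

f = 50.0
for C in (2.2e-6, 3.3e-6, 2.2e-6 + 3.3e-6):   # C1, C2, and C1 || C2 together
    Xc = 1 / (2 * math.pi * f * C)
    print(C, round(Xc))
# ~1447 ohm, ~965 ohm, ~579 ohm of impedance in series with the fan winding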
H: Charge Li-ion battery with protection directly through dev board I am buying some Li-ion pouch batteries (3.6 V) and charging circuits for a project. The batteries come with overheating, over-voltage, and over- and undercharging protection circuitry. In the reviews, someone mentions that you can use any source to charge it due to the protection. I googled it but it seems people make a difference between protection and charging circuitry, and I cannot find an example of someone using it without charger. I see why, but wonder if a protective circuit would work as a make-shift charging circuit, at least it won't explode, right? In short, can I just hook the battery on the Vin on my D1 mini, supply the board from the battery, and charge the battery when hooking the board onto USB? AI: No, don't. A BMS is not a charger. A BMS can't limit the charging current, let alone provide a constant current for the CC charging stage; it can only stop charging altogether when the current is too high; A BMS doesn't terminate the charging properly; some just keep the battery floating at its maximum voltage indefinitely; The maximum voltage on many BMSes is set at just over the maximum recommended voltage for charging, and this will overcharge the battery; Etc. Even if you get away with it (as in "no immediate fireworks"), it will seriously limit the cycle life of the battery, and possibly not charge it at all if you don't limit the charging current. Also, your ESP is 3.3 V, and you can't charge a Li-po with that, and simply connecting a 5 V voltage source from the USB to a Li-po battery is not a good idea, BMS or not. There are plenty of cheap ICs and modules (with USB on their input side) around for charging Li-po batteries; you should use one of those. The BMS is there as a last-ditch protection when things go wrong.
H: QC1602a Font Color I have an unbranded QC1602A V2.0 display connected to a Raspberry Pi over 4-pin interface mode. Backlight and contrast are set in hardware; neither is controlled by the Raspberry Pi. If I setup the display as it should be (16x2 characters of 5x8 pixels each), the text is displayed in a very hard-to-read dark color: But if I set the display to 16x1 characters, 5x8 pixels each, the font suddenly switches to a very clearly readable white font: The code I used for testing was this: import RPi.GPIO as GPIO from RPLCD.gpio import CharLCD lcd = CharLCD(rows=2,cols=16, pin_rs=15, pin_rw=18, pin_e=16, pins_data=[21,22,23,24], numbering_mode=GPIO.BOARD) lcd.write_string("###FIRST PAGE###") # Black font lcd.write_string("1234567890123456") lcd.command(0b00100000) # Set to 16x1 characters lcd.write_string("###SECOND PAGE##") # White font lcd.write_string("ABCDEFGHIJKLMNOP") And the same behaviour holds true if I do this: import RPi.GPIO as GPIO from RPLCD.gpio import CharLCD lcd = CharLCD(rows=1,cols=16, pin_rs=15, pin_rw=18, pin_e=16, pins_data=[21,22,23,24], numbering_mode=GPIO.BOARD) lcd.write_string("###FIRST PAGE###") # White font lcd.write_string("1234567890123456") Does anyone know what the reason could be for this strange behaviour? I would be especially interested to find out how I could use the white font in 16x2 characters mode. AI: When I see how this LCD displays the character I immediately think contrast setting. Most 1602 displays have a contrast setting pin, usually it is the 3rd pin. If you connect that pin to a potentiometer like so: You can adjust the contrast. I guess you might have connected the pin to a GPIO output and that would result in the switch between white and weird (black) font that you see. As a test, just disconnect the contrast pin and see what happens.
H: Charge 5,4V supercap with 12V solar panel I'm building a circuit to charge a series of two supercaps (500F, 2.7V each) with a solar panel (which outputs, not sc, [email protected] with direct sunlight at 12pm). Those supercaps will then power an esp8266 and a WS2812b strip of 5 LEDs during the night. simulate this circuit – Schematic created using CircuitLab I've drawn this circuit to charge the supercaps, with D1 in parallel to them to stop charging when they reach 5.1V and D2 to prevent the supercaps discharging during night time, when D1 should be a closed circuit, due to low voltage from the solar panel (is this right?). Because I'm using a 1N5231B (0.5 W power dissipation), do I need to put a series resistor to limit the current from the solar panel? Due to the voltage drop across D2, does the 1N5231B work well, or would something with a higher Vz (around 6 V maybe) be better? And last but not least, do I really need D2? P.S. I'm using the SB5100 and 1N5231B only because I already have them. AI: What current does the "TO CIRCUIT" load take from the supercaps? @Andyaka about 0.4A Here's the problem as I see it - your energy equation doesn't stack up - with a net capacitance of 250 farad and a terminal voltage of 5.4 volts, the maximum energy that can be held is 250 x 5.4\$^2\$/2 = 3645 joules. If your LEDs take 400 mA and require a terminal voltage of 2 volts (very minimal I suspect), the power is 0.8 watts and per hour, that is an energy of 2880 joules. In other words your SCAPs would be dead after about 75 minutes. But it's worse than that because as the SCAPs get discharged of energy their voltage level goes down and this means you can probably only use about 50% of the energy stored in them. Of course you could employ clever circuits to extract a bit more energy from the depleting SCAPs but you realistically might only get 40 to 60 minutes. Maybe if you considered a load of 60 mA you might get through the night but that then exposes the poor design of the charging circuit - using a zener like this means it will smoke in seconds when the solar panel is in full sunlight - you need a specialist SCAP charger and balancer.
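The energy bookkeeping above as a small script you can adapt to your own numbers. The 0.4 A / 2 V load figures are the answer's assumptions, and the 3.8 V cut-off below is just an illustrative guess at where downstream regulation gives up:

C_series = 500 / 2            # two 500 F caps in series -> 250 F
V_full = 5.4
E_full = 0.5 * C_series * V_full ** 2            # ~3645 J

P_load = 0.4 * 2.0            # 0.4 A at an assumed 2 V minimum -> 0.8 W
t_ideal_min = E_full / P_load / 60               # if every joule were usable

V_min = 3.8                   # assumed cut-off voltage of the downstream electronics
E_usable = 0.5 * C_series * (V_full ** 2 - V_min ** 2)
t_usable_min = E_usable / P_load / 60

print(E_full, t_ideal_min)    # 3645 J, ~76 min
print(E_usable, t_usable_min) # ~1840 J, ~38 min - roughly the 50% figure quoted above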
H: OP1678 running hot I have built a circuit that converts a 0-5 V signal to a +/- 9 volt signal using OPA1678 Op-Amps (original design from Here). The 0-5V signal is received from a 10k potentiometer across the +5V and GNDA Rails. The circuit operates exacly as desired, save one thing: the circuit even without a load attached runs hot, too hot to touch after about 1 minute. I chose the OPA1678 for its rail-to-rail operation as we desired max swing output from my limited supply. (my Vin is limited to +/- 9V) I'm wondering if I made a poor component choice or if something else is wrong. AI: The large capacitor on the output of U3B is more than likely causing your amplifier to oscillate, dissipating lots of excess power. Looking at the datasheet, you can see the following graphs: With just 600pF the phase margin is only about 15 degrees. With 0.1uF you're in big trouble. The datasheet has this to say also about capacitive loads: 8.1.1 Capacitive Loads The dynamic characteristics of the OPA167x series are optimized for commonly encountered gains, loads, and operating conditions. The combination of low closed-loop gain and high capacitive loads decreases the phase margin of the amplifier and can lead to gain peaking or oscillations. As a result, heavier capacitive loads must be isolated from the output. The simplest way to achieve this isolation is to add a small resistor (RS equal to 50 Ω, for example) in series with the output. This small series resistor also prevents excess power dissipation if the output of the device becomes shorted. So if you need a filter on the output of U3B, make it an RC filter, decoupling the capacitance from the amplifier.
H: What are the maintenance steps for PCBs? If I have a multimeter that can measure current, voltage, capacitors, resistors, ..etc., how can I detect the damaged component(s) in a PCB? I try to visually detect if there are any damaged components such as a blown fuse, broken resistor, black components (due to excessive heat), weak solder joints, or sockets that are not installed properly. I try to find if there are any short circuits using the multimeter. I test the terminals of every transistor, resistor, ...etc. The worst part is that I measure the values of each component and see if the measured value matches the written value or not. But "loading effect" here is very annoying because I might measure a damaged capacitor but the measurement is close to the right one due to loading effect. I might measure a healthy capacitor but the measurement is incorrect due to loading effect as well. I don't know how to test ICs. Sometimes, I meet new weird components that I've never seen before. Of course, I don't know how to test them. The three approaches that I mentioned are not effective at all, especially when it's a big PCB with a lot of components. So how can I do effective maintenance on different electrical devices? AI: The first thing I will say is that for a modern PCB, it will often be impossible to repair in this way. You really need a schematic, an oscilloscope, and a stereo microscope to really "see" what's going on. Even then it will be difficult unless you designed the board yourself and know exactly how it's supposed to work. One common exception to this is replacing electrolytic caps. You may be able to repair a PCB by replacing blown capacitors, especially if you're repairing something like an LCD TV where this is a well known procedure. One other failure that's fairly common that you may be able to fix is replacement of a shorted surface mount component (like an MLCC or diode). I have a special trick for this. You can use this method on boards that have a hard short between power and ground that you can't locate. First you need some kind of temperature sensor, an IR camera is best, but an IR temp gun or just your finger can work too. Hook up your board to an adjustable power supply, and slowly increase the voltage until you're flowing 1A or so through the short. Now use your IR camera (or finger) to look for hot components on the board. If you find one, then that will be the failed component.
H: What is the function of diode and capacitor in RS232 shifter circuit? as shown in the picture, can anyone explain the capacitor and diode in detail? Thank you! AI: RS232 logic levels are (from memory) high: -12 to -3 V. low: +3 to +12 V. Since your level shifter is interfacing with a device with only a single-ended, positive supply there is a problem sending out a high signal on RX-RS232. The trick here is to steal some power from the TX signal of the other device when it is high (negative). The diode charges the capacitor to a negative voltage and this is used to bias the RX line negative if the TX-TTL signal is high (which will turn the PNP transistor off). When the TX-TTL signal is high and the PNP turned off there is a potential divider formed between the three 10k resistors, 1, 2 & 3. The best that can be obtained from this arrangement is that the RX line will be a little more positive (due to the diode drop) than \$ \frac {2}{3}V_{TX} \$. If VTX is weak (close to -3 V) this circuit may not work as the voltage to the other device's RX will be out of specification.
H: How to protect H-bridge from burning I'm building an H-Bridge to drive a 5v DC motor. We made the circuit with SMD components and everything works properly. But the problem arises when the motor's output gear is held manually: the voltage across the motor terminals drops, the current passing through the transistors, especially the PNP transistors, increases, and those transistors smoke. I would appreciate opinions on current or voltage control, or any other method that prevents the transistors from burning. AI: First, understand what is occurring. You are commanding the H-Bridge at some duty cycle to move the gear, which will draw a certain current to move the motor. If you stall the gear, the stall current of the motor is now the load, which could exceed the current ratings of your bridges. Several ways to address: Beefier bridges Find some larger discrete devices that can handle the stall current of the motor, with some margin. Re-settable fuse Putting a re-settable fuse in series with the VCC line or with the Motor terminals will provide a cutoff mechanism. This can be tricky to size depending on the stall current and hold/trip currents. Current sensing H-bridge circuit An integrated H-bridge chip will typically include current sensing to prevent this exact scenario. These will sense the current and then cutoff the bridge drive. Motor feedback sensing If you have motor feedback, then you can sense a stall condition (e.g. the motor isn't moving!) and then stop giving a command to drive the bridge.
H: ESP32 AND CP2102 Serial data reading issue I have a PCB on which I am taking in a 5 V, 1 A DC power supply and providing 3.3 V to the ESP32 through an AMS1117-3.3 regulator. Basically, things are not working when I solder the ESP32 onto my PCB, but they do work when I check with the NodeMCU version of the ESP32 (a development board with a USB connection). My question is: for debugging purposes (to check the flow of my program), can I connect the CP2102 (USB-to-TTL converter) to the ESP32 (RX, TX connections) with a common ground (CP2102 ground to the PCB's ground) and get the data on the serial monitor of the laptop the CP2102 is connected to? I am attaching the image which will describe my idea more. Please guide me. AI: Yes, but only if the CP2102 is operating at 3.3V rather than 5V. You said "USB-to-TTL" which implies, to me, 5V logic levels. On the other hand, your block diagram shows the CP2102 running at 3.3V....make sure that it really is. And don't forget to make the common ground connection.
H: Input resistance We are learning about opamps. Currently the topic is about input/output impedances. It is stated that this op-amp circuit has an input resistance of 2k. https://www.radio-electronics.com/info/circuits/opamp_basics/operational-amplifier-input-impedance.php states that "input impedance for the circuit is at least the that between non-inverting and inverting inputs, which is typically 1 MΩ to 10 TΩ, plus the impedance of the path from the inverting input to ground" The opamp is considered ideal. Now, how can we get 2k, if we do not know the R4 value? It is parallel to R3, and is part of the - path to ground. AI: The input impedance is that seen by the circuit driving this circuit. Figure 1. Input impedance is that seen between points 1 and 2. In your case the driving circuit can only 'feel' the impedance between 1 and 2. As you correctly state the input impedance of the op-amp itself is extremely high in comparison with R1 and R2 and, as a result, doesn't load the input of the circuit to any significant extent. That leaves us with only R1 and R2 to worry about and they sum to 2k as is stated in the topic of study. R4 presents itself as a load on the output of the op-amp (3) and doesn't affect the driving circuit. Homework Now, you should be able to work out the divider ratio for the input easily enough. If you want the gain of the whole circuit to be x 5 then what value would you assign to R4? R4 should be 4k if I am not mistaken. The op-amp with R3 and R4 form a non-inverting amplifier. The gain of these is given by \$ \frac {R_3 + R_4}{R_3} = \frac {1k + 4k}{1k} = 5 \$ but you missed my gentle prompt that there is a 2:1 divider on the input so you need a gain of 10 on the amplifier. Oops! What does R5 do for the circuit? Nothing. It is, presumably, the load that the circuit is driving. And if R2 was not connected, would the input impedance be infinite or 1K? Since there is no path to ground but through the op-amp itself? Infinite, as you suspect.
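Following the "oops" above through to numbers, assuming the input divider is exactly 2:1 as stated and using the non-inverting gain formula from the answer:

R3 = 1e3
overall_gain = 5
divider = 0.5                        # 2:1 input divider (R1 = R2)
amp_gain = overall_gain / divider    # the op-amp stage must make up the divider loss
R4 = (amp_gain - 1) * R3             # from gain = 1 + R4/R3
print(amp_gain, R4)                  # 10.0, 9000.0 -> R4 should be 9k, not 4k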
H: P-Channel Mosfet simulate this circuit – Schematic created using CircuitLab I have a FQP27P06 P-Channel MOSFET that I am trying to drive. The source to my motor is 12v. From my understanding, I should be able to turn this "ON" with -10v (so I should be able to use 1-2V, or just put the gate to ground?). I have tried everything and I cannot get this thing to conduct. Here is my schematic. What do I need to do to get this to work? I've wasted two days on this. simulate this circuit AI: The above schematic does work. I don't know why I couldn't do it on a breadboard at all. I soldered it to a test board and it works fine. I guess I need to improve my breadboard skills or buy a better one instead of the cheap ones. Thanks for all the input.
H: Q: Slew Rate of MCP601 I am using the MCP601 for rectification of small AC signals. I got it working. simulate this circuit – Schematic created using CircuitLab However, at high frequencies (20 kHz) weird behavior starts to surface, such as this. I understand it has something to do with the chip spending too long in saturation and not being able to catch up. I also heard that you can get a faster op-amp in terms of slew rate and it would fix the problem as well. But shouldn't this op-amp already be suitable? The required slew rate is \$SR = 2\pi f V_{pk}\$, so for a 1 V peak signal at 20 kHz: \$SR = 2\pi \times 20\,\text{kHz} \times 1\,\text{V} = 0.1256\ \text{V}/\mu\text{s}\$, and the op-amp is rated at 2.3 V/µs. Can anyone explain to me why this is happening with this op-amp, and why its slew rate isn't sufficient for this application? AI: The op-amp output has to go from saturation at 0V to +0.6 instantly to keep the rectified output from swinging below zero. That's because of the diode forward drop. The op-amp recovery time from saturation is not specified by the slew rate, and in fact it's seldom specified at all. It can be much, much worse than you would think from the slew rate. Eg. LMV2011 with slew rate of 4V/us but \$t_{rec}\$ recovery time from saturation of 50ms typical.
H: Can SPI and I2C be implemented using GPIO pins? Can SPI and I2C be implemented using digital GPIO pins? I.e. can the SDA and SCL pins of I2C and the MISO, MOSI, SCK, and SS pins of SPI be emulated by setting GPIO pins as input and output accordingly? For example in the case of an Arduino/Atmega328p, certain pins have been dedicated for SPI and I2C use. Can the other digital GPIO pins perform the same function?1 Is there something fundamentally different in how they behave? Maybe an internal register or something to collect the incoming serial data? In the diagram, one can see that the SPI pins are shared with digital GPIO pins. Inside the Atmega328p die, are the two distinct elements/submodules with something like a buffer determining which will drive the pin? Or are they the same thing... 1 For the SCK emulation, assume that somehow you are able to pass the clock pulse to a GPIO pin set in output mode. Note: The Arduino is just an example. I am asking in a general sense. AI: Sometimes you can bit-bang these interfaces but it will take more resources and may not work in all cases. For example, SPI master can easily be bit-banged but SPI slave at 10MHz is probably impossible on a slow micro, and even at much lower speeds you would likely have to block interrupts. The specified pins are connected to dedicated hardware inside which makes it easier to use and it generally will work better, faster and will use less resources. For example, you can be off doing something else whilst the bus is sending or receiving data. Read the MCU datasheet to see which pins are used and if there are alternates available.
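To make "SPI master can easily be bit-banged" concrete, here is a minimal mode-0 SPI master done purely with GPIO, written with RPi.GPIO as an example platform. The pin numbers are arbitrary, and there is no speed guarantee: that lack of timing hardware is exactly the trade-off described above.

import RPi.GPIO as GPIO

SCK, MOSI, MISO, SS = 11, 10, 9, 8     # arbitrary pin choices
GPIO.setmode(GPIO.BCM)
GPIO.setup(SCK, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(MOSI, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(SS, GPIO.OUT, initial=GPIO.HIGH)
GPIO.setup(MISO, GPIO.IN)

def spi_transfer_byte(tx):
    # SPI mode 0: data set up while SCK is low, sampled on the rising edge
    rx = 0
    for i in range(7, -1, -1):
        GPIO.output(MOSI, (tx >> i) & 1)
        GPIO.output(SCK, GPIO.HIGH)
        rx = (rx << 1) | GPIO.input(MISO)
        GPIO.output(SCK, GPIO.LOW)
    return rx

GPIO.output(SS, GPIO.LOW)              # select the peripheral
reply = spi_transfer_byte(0xA5)
GPIO.output(SS, GPIO.HIGH)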
H: Via stitching removes solid ground plane I read in some places that having a solid ground plane is needed. Doesn't via stitching cause the solid ground plane to become broken? Is this okay? Also, when everyone talks about return current, are they talking about the current flowing underneath the trace itself or the plane beneath the trace? (Since skin effect makes a trace a 3D structure, can the return current flow on the underside of the same trace?) AI: Stitching vias are typically used to connect multiple ground planes on different layers together. Since they are connected to ground there is no break in the plane other than the drill hole which is negligible. Return current flows on the plane directly under the trace but spreads out some distance from trace with the strength of the field dropping off further away from the trace. Typically a distance of 3 to 5 times the width of the trace or dielectric height is used to determine spacing to an adjacent trace that will not result in significant cross talk. Although current flow is on the surface of the trace and ground plane (due to skin effect) the field is actually in the space between the two conductors.
H: Odd distortion in op-amp circuit for audio amplification I'm new to electronics, so I apologize if this is simple. I'm trying to create a simple amplifier mainly for a guitar input, but testing it with an output and input from my sound interface. The circuit produces a weird distortion that I can't identify, but it doesn't look like clipping. Here is the signal I'm sending And here is the signal I'm receiving And here is the circuit simulate this circuit – Schematic created using CircuitLab I would love to know why this is happening and how to fix it. Thanks a lot in advance AI: Actually, it is clipping. You are apparently driving a speaker, or perhaps a scope with the input set to 50 ohms input impedance, rather than an amplifier. Your complete circuit looks like simulate this circuit – Schematic created using CircuitLab You are feeding the circuit with a large signal, 1 V rms would be my guess, and this has a pk-pk of 2.8 volts. Multiplied by your gain of 10 gives about 28 V p-p, and this is clipped to about 9 V p-p. This (roughly) square wave then passes through a high-pass filter produced by the cap and speaker, and the speaker impedance is so low that the time constant is well below the period of the signal. This gives the characteristic steep rise followed by an exponential decay. Either reduce your input level by about a factor of 10, reduce your amplifier gain, or increase your load impedance. EDIT - In comment you say you're feeding a 2k impedance, and I believe I mis-calculated the high-pass effect. 2k/0.1 uF will give a corner at about 800 Hz (time constant of 0.2 msec). This will have a noticeable effect on any audio you play through your amplifier, and it accounts for the waveforms you show, assuming you used about a 1 kHz signal. You'll need to increase your C3 from 0.1 uF to about 10 uF to get the break point down to 8 Hz, and 100 uF would be even better in terms of audio quality. END EDIT
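The high-pass corner numbers in the edit are a two-line check; 2 kΩ is the load impedance mentioned in the comments, and you can substitute your own values:

import math

R = 2e3
for C in (0.1e-6, 1e-6, 10e-6):
    fc = 1 / (2 * math.pi * R * C)
    print(C, round(fc, 1))      # ~796 Hz, ~80 Hz, ~8 Hz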
H: Why does a Pull down resistor not affect AC I've been reading up and trying to learn about Guitar Pedal circuits. I have a very basic question about how a Pull Down resistor is used at the beginning and end of a circuit to avoid a popping noise. See Diagram below. Based on my understanding - A popping noise is usually caused when there is some leakage from the capacitor. When the circuit is activated - the leftover leakage is sent to the resistor rather than the circuit, therefore avoiding a "pop" coming out of your pedal. This makes sense to me. However, my question is more based around when the circuit is activated. Why does the pull down resistor not affect the guitar signal? Every description I've read about a pull-down resistor implies it has no effect when the circuit is switched ON. Suggesting it's not drawing current at all? For example " And that's where our unicorn pull-down resistor comes in. It sits there biding its time, drinking tea, eating crumpets and politely gives the potential bleeding somewhere to go, aside from our amplifiers, speakers and ultimately ears. Can someone help and explain why the resistor only really has an effect on the signal when there is leakage? OR, is the answer that it does affect the AC signal, but because it's in parallel with the circuit, or because of its value, it's not an issue? AI: Based on my understanding - A popping noise is usually caused when there is some leakage from the Capacitor. If the left side of the capacitor is not connected to anything, the capacitor can slowly discharge by itself. When it's discharged, that means the left side and the right side have the same voltage. In this case the left side will slowly go towards 4.5 volts. When the circuit is activated - the leftover leakage is sent to the resistor rather than the circuit, therefore avoiding a "pop" coming out of the your pedal. This makes sense to me. Not really - when the circuit is deactivated, the tiny amount of current that leaks through the capacitor will flow through the pulldown resistor, keeping the left side at 0 volts and stopping it from going up to 4.5 volts. I must stress that leakage is a tiny amount of current. That's why a 1M resistor isn't too big. However, my question is more based around when the circuit it is activated. Why does the pull down resistor not affect the guitar signal? Because it's such a big resistance, not much current flows through it. It only has a tiny effect. Every description I've read about a pull-down resistor implies it has no affect when the circuit is switched ON. Suggesting it's not drawing current at all? This is incorrect, it draws current like any other resistor. But as I just said, it's a big resistor (not size, I mean it has a lot of resistance), so it draws hardly any current.
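To put a number on "only has a tiny effect", treat the pickup as a source with some output impedance and the 1 MΩ pull-down as its load. The 10 kΩ source impedance below is just a typical guitar-pickup ballpark, not a measured value:

R_source = 10e3               # assumed pickup/source output impedance
R_pulldown = 1e6
attenuation = R_pulldown / (R_source + R_pulldown)
print(attenuation)            # ~0.99 -> the pull-down costs only about 1% of the signal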