H: Tantalum capacitors in synchronous DC/DC converter I'm designing a synchronous DC/DC converter with discrete components for a project. The overall power supply, which I'll refer to as the DC bus, is 5V. I wanted to know if it's advisable to use a tantalum capacitor on the DC bus of the circuit to mitigate the noise generated by the converter. The switching frequency is 100kHz. AI: It is not necessary to use a tantalum. Generally, for the input side, ceramic capacitors are sufficient, combined with bulk electrolytic or Al-polymer capacitors if needed to handle load transients. The DC-DC output capacitor type will depend on what the regulator expects for the capacitor's Equivalent Series Resistance (ESR). Check the data sheet. Still, this will be an electrolytic, Al-polymer or MLCC. These days there are very few cases where tantalum caps are the go-to choice, and they should be avoided where possible because tantalum is a conflict mineral. They do have better long-term stability than electrolytics, and are denser than MLCCs. That said, MLCCs are preferred in most applications, with advances in density making them viable tantalum replacements even in dense devices like mobile handsets. Al-polymer caps offer similar benefits to tantalum when larger values are needed.
H: Can you transmit a number stored on a digital counter over a network protocol without a micro-controller? TL;DR I am storing a 24-bit number that represents the number of pulses from a clock on a digital counter IC, and want to transmit that number to a more sophisticated device to perform a speed = distance/time calculation. Is there a simple way to do this over an internet protocol like HTTP, or utility like netcat, without a CPU? Conceptually the problem is very simple - I have a 24-bit number that I need to transmit to a hard-coded network address. I am making a speedometer using digital ICs and motion sensors - when an object moves past the first row of motion sensors, it starts the timer (LM555 IC), and when it moves past the second row it stops the timer. The timer is pulsing a 24-bit counter (74HC590's chained together). In order to calculate the speed of the object, we need to calculate speed = distance / time = distance_between_sensors / (num_pulses / sampling_frequency). This computation, though possible to perform using ICs, would be simpler to perform on a more sophisticated device because it involves floating point numbers up to 24 bits. I am trying to do this project without a CPU to keep costs lower and make it easier to reproduce. Most people already own multiple sophisticated computers and devices with network access that I would like to make use of. Is it possible to transmit the 24-bit number (representing the number of pulses from the clock) to such a device without a CPU on the speedometer? Ideally there would be a black-box device that would require a hard-coded network address for HTTP or other network protocol like netcat. This address would be serving requests of a fixed format from the device, and performing the computation. AI: If you are measuring the time for an object to move between two points using discrete ICs, then you have already spent as much on components as you would have spent on a microcontroller. Discrete component circuits are good learning tools. They teach you about what goes on inside of more complicated systems. The problem is that they become really expensive and complicated when you try to expand them. You've found that it is relatively simple to count the time period with a timer IC and some counter ICs. Now you'd like to do some math, and the project complexity is going to explode. You can't really "offload" the calculations to a PC or other full-sized computer. If you try to implement, for example, an Ethernet stack so that you can send the count to a PC you will end up implementing a tremendously complex pile of electronics that will (at some point) end up implementing all the things contained in a typical microcontroller. On top of that, you have to implement the hardware interface - Ethernet isn't simply some wires connected to a chip. There are transformers involved and line drivers to push a signal with the proper levels through long wires. The cheapest way to implement what you want to do is to use a microcontroller development board. Something like an Arduino, or any one of many alternatives. I can buy Arduino Nano knock-offs for about $4 US each. That's probably less than you spent on your 555 timer, the counter ICs, and whatever glue logic you needed to make it go. You could program a Nano to measure the time period easily, then adding the speed computation is trivial - and you can easily connect the Nano to a display of some sort to show the result.
You could also put an Ethernet adapter on the Nano and send the data to a server somewhere for logging or analysis. Even easier would be an ESP32: $5 US each in a pack of three, and it has WiFi built in. I'm all for using discrete components to learn, but it is usually cheaper and simpler to throw a microcontroller and some software at real-world tasks. I find your 74HC590 at $0.88 US at DigiKey, and the LM555 for $0.80. With the three chained counters you need for 24 bits, that's already $3.44 US - most of the price of an ESP32. You are headed for a really expensive project if you stick with discrete parts.
H: PWM signal gets disrupted when writing to a file (TMS320F28379D) I am trying to establish a form of communication between my DSP (TMS320F28379D launchpad) and a .py script on my laptop. I have a .csv file with 7 duty cycle values from 0.2 to 0.8. My objective is to output a PWM signal corresponding to a given duty cycle value and when the user enters an input value of '1' into a dedicated variable in CCstudio, the DSP should save '1' to the .txt file which when read by the python script enables saving the oscilloscope screen to another file. After saving successfully, the .py code should erase and save '0' to the same .txt file. This will be read by the DSP and then when it reads '0', it will move to the next duty cycle value and a corresponding PWM signal is generated. My issue: Saving the data and PWM signal generation works fine. The problem is while scanning or printing '0' or '1' by the DSP. Whenever, it enters a while loop containing fgetc or fputc, the PWM signal is disrupted for a second or so. It seems as if the CPU has halted. But, once this reading or writing operation is complete, everything goes back to normal. I tried increasing the heap and stack size within the linker file for the DSP but the issue persists. Removing this issue is important because I wish to interface this DSP with a gate driver circuit. Here is the C-code for the DSP: #include "F28x_Project.h" #include "F2837xD_device.h" #include "stdint.h" #include <stdio.h> #include <stdlib.h> #include <string.h> //#include "device.h" #define BufferSize 256 extern void Init_SysClk(void); extern void Disable_Watchdog(void); extern void Init_Buffer_ADCResult(void); extern void Config_ADC(void); extern void Config_GPIO(void); extern void Init_EPWM1(void); extern void Init_EPwm1GPIO(void); extern void Config_EPWMADC(void); extern void Read_D_and_freq(void); // The function to read the .csv file to get D and f values. extern void continue_flag(void); extern void Init_read_txt(void); extern Uint16 ADCA_Data[BufferSize]; // Buffer for results extern Uint16 index; // Index into result buffer extern volatile Uint16 buffer_Full; // Flag to indicate buffer is full extern long int f_BCLK; // The final PLL clock frequency which is 100 MHz. extern float duty[]; // The duty cycle of the PWM signal. extern double T_BPRD[]; // T_BPRD = f_BCLK/(2*f_PWM) (in counts). extern long int freq[]; // Frequency of the generated PWM signal. extern int conf[]; // The confirm signal array within the .csv file. extern double comp[]; // The comparator value to generate the PWM signal (comp = duty*T_BPRD). unsigned int to_save = 1; // Used to call the read_txt_isr1 interrupt in main(). extern int end_here; // The number of useful rows in the .csv file. extern char data_from_csv[100]; extern char *sp; // Declarations related to the .txt file that is used to communicate with the .py script... FILE *txt_file; // The .txt file. FILE *csv_file; char txt_filepath[100] = "C:\\Users\\J\\Desktop\\Work_related\\SIG.txt"; // Path char csv_filepath[100] = "C:\\Users\\J\\Desktop\\Work_related\\sig_data.csv"; int loop_count = 0; // Can be used for iterations. long int temp = 0;//discard after use! (just a dummy var). 
int c = 1; int temp2 = 0; char csv_data[100]; char *csv_entry_onebyone; char *one_to_txtfile = "1"; int j = 0; int skip_line = 0; int user_input = 0; Uint16 a; Uint16 ReadData; __interrupt void epwm1_isr1(void); __interrupt void adca1_isr1(void); __interrupt void sciarx_isr1(void); __interrupt void read_txt_isr1(void); // Creating an interrupt to read the .txt file to get confirmation signal from the .py script. char circ = '0'; void main(void) { DINT; Disable_Watchdog(); Init_SysClk(); Config_GPIO(); Init_EPwm1GPIO(); Read_D_and_freq(); // Function to read the D and freq values from the .csv file (Intentionally kept before the PWM and ADC functions). Init_EPWM1(); // Set the duty cycle for the EPWM1A generation here. Init_read_txt(); Config_EPWMADC(); Init_Buffer_ADCResult(); Config_ADC(); EALLOW; PieCtrlRegs.PIECTRL.bit.ENPIE = 1; // Enable vector fetching from ePIE block //PieCtrlRegs.PIEIER1.bit.INTx1 = 1; // Enabling ADCA1 at PIE level //PieCtrlRegs.PIEIER3.bit.INTx1 = 1; // Enabling EMPWM1 at PIE level //PieCtrlRegs.PIEIER9.bit.INTx1 = 1; // Enabling SCIA_RX at PIE level //PieCtrlRegs.PIEIER3.bit.INTx12 = 1; // Using EPWM12 interrupt to read the .txt file. //PieVectTable.ADCA1_INT = &adca1_isr1; // Mapping ADC1 ISR to PIE Vector Table. //PieVectTable.EPWM1_INT = &epwm1_isr1; // Mapping EPWM1 ISR to PIE Vector Table. //PieVectTable.SCIA_RX_INT = &sciarx_isr1; // Mapping SCIARX1 ISR to PIE Vector Table. //PieVectTable.EPWM12_INT = &read_txt_isr1; // Mapping Read_txt_ISR to PIE Vector Table. EDIS; IER = 0xFFFF; // Enabling group3 interrupts at CPU level EINT; // Enable global interrupt loop_count = 0; txt_file = fopen(txt_filepath, "w"); // Tells the python script to save the data from the scope. fprintf(txt_file, "%d", 1); fclose(txt_file); while(loop_count < end_here) { GpioDataRegs.GPASET.bit.GPIO18 = 1; EPwm1Regs.TBPRD = (int)T_BPRD[loop_count]; // Determines the frequency. EPwm1Regs.CMPA.bit.CMPA = (int)comp[loop_count]; // Duty ratio = 0.5 for(temp=0; temp<10000000; temp++); // A small delay. if(user_input == 1) // "user_input" = 1 tells whether to save the data. { // Wait until the save process is complete... while(1) { txt_file = txt_file = fopen(txt_filepath, "r"); // Tells the python script to save the data from the scope. if(txt_file == NULL) { fclose(txt_file); break; } } for(temp=0; temp<10000000; temp++); // A small delay. loop_count++; user_input = 0; // Resetting the user input to 0 upon save completion. txt_file = fopen(txt_filepath, "w"); // Tells the python script to save the data from the scope. fprintf(txt_file, "%d", 1); fclose(txt_file); } GpioDataRegs.GPACLEAR.bit.GPIO18 = 1; } } I tried everything mentioned on TI's forum, but nothing solves the issue. I am using .txt files to communicate here as I just need '0' or '1' to be sent between the two devices. AI: CCS uses C I/O to perform stdio.h functions. As per Stop-mode C I/O this involves the target stopping on a breakpoint which disrupts the target operation. Some possible alternatives are: Using Real-time Mode which can allow interrupts to still be serviced while the processor is halted at a breakpoint. Use C2000 GUI Composer to communicate with the target. Target Communication has some more information on the possible communication mechanisms. Changing to use GUI composer will require a change to the target code. Admittedly I haven't tried the above myself.
H: NFC antenna design - moving from rectangular to circular I'm working on a design using an NXP PN7150 NFC chip with PCB antenna. My initial prototype used their online tool for the antenna design, to generate the values for the matching network, however this is strictly for rectangular configurations. I wish to move to a circular (spiral) design due to some other design requirements. Is it simply a case of creating a spiral with the same overall length as the rectangle, with the same track size, spacing etc, or is there a tool I can use for spiral designs? AI: Is it simply a case of creating a spiral with the same overall length as the rectangle, with the same track size, spacing etc No, it would be mostly luck if that worked out, although you might get reasonably close to the frequencies you want that way. is there a tool I can use for spiral designs? I'm not a fan of closed-source/paid software, but you can use MATLAB; in recent years they have implemented PCB antenna design tools (watch tutorial here). Under the "Antenna design" app they feature spiral antennas. They are still updating their antenna design libraries, so make sure you have a recent MATLAB release. The good thing about MATLAB PCB antenna design is that it's fast(er) to design your antenna on the PCB. For example, other software usually asks you to configure the height offset of each layer because your design has three axes (X, Y, Z). In MATLAB no height offset is needed; the layers are already stacked on top of each other, as there is no third axis, only (X, Y). I used the PCB antenna design (in this ongoing project here) and the simulations are pretty accurate!
H: Why does the air gap between the electrode and the overlay of a capacitive sensor have a negative impact on the touch performance? I see in the design guidelines that the airgaps between the electrodes and the overlay of the capacitive sensor have a detrimental effect on the performance of the touch sensor and they should be filled with conductive foam. Wouldn't having air gaps reduce the base capacitance and improve the sensitivity of the sensor? AI: The problem is that air gaps not only influence the base capacitance, they also influence the capacitance of the touch sensor when someone actually touches it. The air gap essentially acts like another capacitor that you put in series with the touch sensor. This additional series capacitor worsens the coupling between the microcontroller (or other sensing circuit) and the actual sensor. Here's an example: Let's say you have a touch sensor that has 10pF capacitance when it's idle, and 100pF when someone touches it. That's a difference of 10:1, and easily detectable. Now let's add an air gap with an equivalent series capacitance of 10pF. This means that, in the idle state, you have a 10pF air gap in series with the 10pF of the sensor. That's 5pF when the sensor is idle. When someone touches the sensor, you still have the 10pF air gap, but in series with 100pF from the sensor. 10pF in series with 100pF is about 9pF. Now the difference between the "idle" and "touched" states is only 1.8:1! (9pF when touched, 5pF when idle.) This is much, much harder to detect. So, in this example, adding an air gap reduced the sensor's sensitivity by more than a factor of 5. This applies to mutual-capacitance touch sensors as well: If you have an air gap in one of these, you'll get a "gap-plates-gap" arrangement, which makes the problem even worse as there are now two air gaps in series that reduce the sensitivity. Let's run the numbers for an example mutual-capacitance sensor as well. We'll use the same numbers: 10pF mutual capacitance when not touched, 100pF when touched. (10:1 difference) Again, we'll assume that the air gaps are each equivalent to one 10pF capacitor in series. In the idle case (not touched), you'll get 10pF (gap) - 10pF (plate-to-plate) - 10pF (gap). That's three 10pF capacitors in series, which equals 3.3pF. In the touched case, you'll get 10pF (gap) - 100pF (plate-to-plate) - 10pF (gap). These three capacitors in series give you 4.8pF. Now your 10:1 difference decreased to just 1.45:1, which is even worse than the self-capacitance sensor with the same air gap. We can also look at it this way: If gaps increased the sensitivity of capacitive sensors, those sensors would be able to detect someone touching a plate somewhere on the moon with the sense electrodes being down on earth - the bigger the gap, the better the sensitivity, after all. That's quite obviously not the case.
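As a rough cross-check of the numbers above, here is a minimal C sketch; the 10pF/100pF figures are just the illustrative values used in this answer, not measurements from any real sensor:

#include <stdio.h>

/* Capacitors in series combine like resistors in parallel: 1/Ct = 1/C1 + 1/C2 + ... */
static double series2(double c1, double c2)            { return 1.0 / (1.0/c1 + 1.0/c2); }
static double series3(double c1, double c2, double c3) { return 1.0 / (1.0/c1 + 1.0/c2 + 1.0/c3); }

int main(void)
{
    double gap = 10e-12, idle = 10e-12, touched = 100e-12;   /* example values from the answer */

    /* Self-capacitance sensor: one air gap in series with the electrode capacitance */
    double self_idle    = series2(gap, idle);
    double self_touched = series2(gap, touched);
    printf("self-cap:   idle %.2f pF, touched %.2f pF, ratio %.2f:1\n",
           self_idle * 1e12, self_touched * 1e12, self_touched / self_idle);

    /* Mutual-capacitance sensor: gap - plate-to-plate - gap, i.e. two gaps in series */
    double mut_idle    = series3(gap, idle, gap);
    double mut_touched = series3(gap, touched, gap);
    printf("mutual-cap: idle %.2f pF, touched %.2f pF, ratio %.2f:1\n",
           mut_idle * 1e12, mut_touched * 1e12, mut_touched / mut_idle);

    return 0;
}

Running this reproduces the roughly 1.8:1 and 1.45:1 ratios quoted above.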
H: What makes some diode surge rated higher than others? The title explains it all. How various diodes can be assigned distinct surge current ratings? I understand that higher current implies greater power dissipation (don't know if its true for diode but true for resistors), leading to potential diode breakdown at a specific temperature threshold? I am looking at the LL4148 diode with a maximum surge current of 2 A: and S1MFL series with max current of 30 A: Any helps would be much appreciated. AI: You can classify component heating into two extremes. 1 ) Long term 2 ) Pulse 1 ) Long term heating is limited by the thermal resistance from the junction to the leads and body of the device, and ultimately to ambient. This is also called isothermal - the component stays at a constant high temperature. 2 ) Pulse heating happens so fast that the body of the component stays cold, and only the semiconductor junction heats up, with no time for the heat to spread into the contact areas, the leads or the body. This is also called adiabatic heating. The more mass there is in the junction itself, and the more uniform the heat-producing part of the junction is, the more energy it can absorb before it reaches a damaging temperature. Of course, real life falls somewhere between the two extremes, and many data sheets have graphs for how the component handles 'long pulses' as well, where some of the body starts to absorb heat. In a diode designed to tolerate surges, the junction will be big, and uniform. It might still be packaged in a small outline, with a low long term current handling. While the two ratings are not totally independent, there is scope to make a device that emphasises one or the other rating. A couple of examples are the 1N400x and 1N540x families, with 1 A continuous 30 A surge, and 3 A continuous 200 A surge ratings respectively. Do read the datasheet carefully about the surge rating. The rating will be highest for a single pulse, starting with the junction at a low temperature, usually 25 C. This is quite realistic for a rectifier diode, which will be charging up its power supply capacitors only on equipment switch-on, so once and from cold. The time duration often matches a half cycle of mains waveform for just this application. There will generally be a lower rating for 'repetitive pulses', especially if the junction starts from hot.
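To put rough numbers on the single-pulse idea, here is a small C sketch comparing the two example families using the common I²t figure of merit; the 8.3 ms half-cycle width and the 30 A / 200 A surge ratings are the ones quoted in the answer above, and I²t is only a crude adiabatic energy proxy, not a datasheet value:

#include <stdio.h>

/* Rough adiabatic comparison: the energy dumped into the junction during one surge
   scales with I^2 * t, assuming no heat escapes the junction during the pulse. */
int main(void)
{
    const double t_half_cycle = 8.3e-3;   /* one 60 Hz half cycle, a typical surge test width */
    const double ifsm_1n400x  = 30.0;     /* 1N400x single half-cycle surge rating, A */
    const double ifsm_1n540x  = 200.0;    /* 1N540x single half-cycle surge rating, A */

    double i2t_small = ifsm_1n400x * ifsm_1n400x * t_half_cycle;
    double i2t_big   = ifsm_1n540x * ifsm_1n540x * t_half_cycle;

    printf("1N400x: I^2*t = %.1f A^2s\n", i2t_small);   /* ~7.5 A^2s */
    printf("1N540x: I^2*t = %.1f A^2s\n", i2t_big);     /* ~332 A^2s */
    printf("ratio  = %.0fx\n", i2t_big / i2t_small);    /* ~44x, vs only 3x continuous rating */
    return 0;
}

The point of the comparison is that the surge (adiabatic) capability scales far more steeply than the continuous (isothermal) rating, which is exactly the design freedom described above.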
H: Pspice error ERROR(ORPSIM-16362): Name on .ENDS does not match .SUBCKT for LMV751 I downloaded the Texas Instruments' LMV751 (link is here) opamp in .cir format. Then, I made a .lib conversion and associated it with the model. However, I get an error like ERROR(ORPSIM-16362): Name on .ENDS does not match .SUBCKT. What is the problem? AI: Does the following information from [FAQ] PSPICE-FOR-TI: PSpice gave an error message about missing ".ENDS" in my model (or, PSpice says "Name on .ENDS does not match .SUBCKT"). How to fix this? help to resolve the issue? A simulation error often happens when using 3rd party models. There are two error messages: Missing .ENDS in SUBCKT or Name on .ENDS does not match .SUBCKT The root cause is the same. This problem is caused by the end-of-line character. The Windows system uses 2 characters, "CR" (carriage return) plus "LF" (line feed) to mark the end of a line. PSpice must delete the last 2 characters from each line to understand the input file. The above FAQ describes some options to correct the line endings to resolve the error. I'm not a user of PSpice, so I can't say whether this will fix the error; I found it by searching for the text of the error message. The FAQ also contains: If the model is produced by TI, please let us know and we'll get it fixed. Since the question gives a link to a model on the TI website, consider reporting the error to TI.
H: Resistor switch I'm exploring the idea of creating a resistor switch. The specific switch I plan to use is a 2-terminal pin (no common/GND pin) DIP switch SPST as I'm unable to source an SPDT DIP switch. Is there any resistor series/parallel hack method to achieve the desired value? I've been scratching my head for a week but haven't figured out how it can be done. My goal is to construct a switch that allows me to choose between a 33-ohm and a 249-ohm resistor. Unfortunately, I can't opt for a different switch due to space constraints on the PCB. Still, I'm open to using multiple resistors to achieve values closest to 33-ohm and 249-ohm. AI: Yes. Use this schematic: R1 has to be 249 Ω so that it reads as 249 Ω with the switch open. We need to calculate R2, however, which you can do as follows: $$R1 || R2 = 33\ Ω$$ $$249\ Ω || R2 = 33\ Ω$$ $$\frac{1}{249\ Ω}+\frac{1}{R2}=\frac{1}{33\ Ω}$$ $$\frac{1}{R2}=\frac{1}{33\ Ω}-\frac{1}{249\ Ω}$$ $$\frac{1}{R2}=30.303\ \mathrm{mS} - 4.016\ \mathrm{mS}$$ $$\frac{1}{R2}=26.287\ \mathrm{mS}$$ $$R2=38.042\ Ω$$ The closest standard value to this is 37.9 Ω, though if you want an E96 value you'll need to go with 38.3 Ω. Either of these will get you pretty close to your desired 33 Ω. Note that there is no simple way to do this if you want it to be 33 Ω with the switch open, and 249 Ω with the switch closed. This method requires that the switch-open state be the one with the higher resistance. I am also assuming that your switch has negligible on-state resistance. If it's a high-resistance switch (like a very low current photorelay, for instance) you can compensate by subtracting the switch's on resistance from the value for R2. I mention this only because at such low resistances, it's possible for even 1 Ω (not that much for a very small MOSFET-based switch) to be significant. If, on the other hand, we were talking about 249 kΩ and 33 kΩ, then your switch would need a kΩ of on resistance to have the same effect--which I would call a faulty switch. (I know you, the one asking the question, are using a mechanical switch, but this is important for anyone else who might have the same issue but need to use a MOSFET!)
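If you want to repeat this calculation for other target values (or account for a non-negligible switch on-resistance), a minimal C sketch of the same arithmetic might look like this; the 249 Ω / 33 Ω targets are from the question and the switch resistance is an assumed parameter:

#include <stdio.h>

/* R1 is permanently in circuit (the open-switch value). R2 is switched in parallel with R1.
   The switch's on-resistance r_sw appears in series with R2 when the switch is closed. */
int main(void)
{
    double r_open   = 249.0;   /* desired resistance with the switch open   */
    double r_closed = 33.0;    /* desired resistance with the switch closed */
    double r_sw     = 0.0;     /* switch on-resistance; set nonzero for e.g. a photorelay */

    /* Parallel combination: r_closed = r_open || (R2 + r_sw)  =>  solve for R2 */
    double branch = 1.0 / (1.0 / r_closed - 1.0 / r_open);   /* total switched branch */
    double r2 = branch - r_sw;

    printf("R1 = %.1f ohm, R2 = %.3f ohm (switched branch %.3f ohm incl. switch)\n",
           r_open, r2, branch);
    return 0;
}

With r_sw = 0 this prints R2 ≈ 38.04 Ω, matching the hand calculation above.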
H: Why should I route the feedback trace under the output capacitor? I'm looking at the TI TPS5430DDAR. In the reference circuit, it has a note to route the feedback trace under the output filter capacitor. It feels like it would be much more simple/direct to not do that, especially given the rest of the reference layout. What's going on here? (This excerpt comes from the TI datasheet for the part, found here https://www.ti.com/lit/ds/symlink/tps5430.pdf) AI: I doubt it matters. Appnotes love to go overboard with layout instructions, but without explaining what and why; let alone giving quantitative values, or comparing alternative layouts with measurements. Mind, so do I. It's fun to ponder layouts and optimize them to find what the limits are. The difference is, I do not fool myself into thinking I'm having a performance impact on the circuit; I do it strictly because it's fun. So then, What is the performance impact? The diagram in question, indicates a connection point mere mm away from a potential closer location. The maximum stray inductance corresponding to this routing decision is a couple nH. This is because permeability of free space is 1.257 nH/mm. Without stopping to give proof, we can apply dimensional analysis to this constant: simply multiply by the length of trace to get its [low-frequency equivalent] inductance. There is also a geometric constant involved (typically 0.2-0.5, depending on cross-section of the trace with respect to the current return path i.e. ground plane or other nearby conductors), but we can use an overestimate (say 0.5 to 1 nH/mm) when we just want to know a worst-case sort of figure. Thus, an extra couple nH have been added to the route. The VOUT trace, too, has some ~nH, as does the capacitor (inductance applies to component lead and body lengths, too!), thus we have some ESL in the path between inductor, capacitor and ground. This gives us an inductor divider equivalent circuit, where the PH waveform (up to 36V p-p) drops across the filter inductor first (~10s µH), and then the ground-return inductance (including this stray inductance, and the output capacitor's; say ~10 nH total). This gives a worst-case p-p ripple purely due to switching transients alone, on the order of 36mV, at the filter inductor's VOUT pad, and perhaps 10 or 20mV further down (where only the capacitor's ESL applies). Assuming the feedback trace has no mutual inductance to the VOUT route, the above gives the total and that's that. If not (say this is a 2-layer board), there will be some mutual inductance between these conductors, and perhaps it's closer to 25-30mV even with the distant connection point, or an improvement of maybe 5mV in that case. Indeed, we see partial confirmation of this analysis in the provided transient waveforms: The output voltage ripple can be expressed as the superposition of four different waveforms: A quadratic (near-sinusoid; a triple-integrated square wave, with harmonics going as 1/N3) component, due to triangular inductor current ripple filtered by the output capacitance A triangular component, due to triangular inductor current dropping across capacitor ESR A square component, due to capacitor ESL acting as an inductance divider with PH (square wave); or equivalently, the ESL acting as a differentiator against the triangular current ripple Random noise due to the oscilloscope, and perhaps other quirks of the system (external interference, other ringing content (particularly after the switching edges), etc.) 
The noise level looks typical of a contemporary TDS460 or similar family oscilloscope, so we can ignore (4). We're most interested in the step-like aspect, which is... probably on the order of 2mV here? Assuming the reference design, L = 15uH, and the capacitor 10TPB220M (still available today, impressive!) is 7mm long so we can expect at least 3.5nH ESL; out of 12V supply, we expect 2.8mVpp -- hey, not a bad estimate. So what? What difference does it make to the controller? Say we have 2.8 vs. 5.6mV applied to VSENSE, what might happen? The controller is voltage-mode type. Its bandwidth must be a sizable fraction of Fsw, indeed below \$F_c = \frac{1}{2 \pi \sqrt{L C_\textrm{out}}}\$, and evidently a zero due to capacitor ESR is required as well (a common requirement for voltage-mode controllers). (For these reasons, we see L and Cout constrained by several relations in the given in the application section; a lossy type is also required, or an alternative circuit is given for ceramic capacitors.) Thus, the controller will be insensitive to changes up here. Given the block diagram, we might guess, at worst, peak noise could cause additional switching edges; but it seems likely to me, a latching driver circuit is used, so will ignore additional edges through the PWM comparator. (At that, if the error amp passes such frequencies at all.) Due to the low bandwidth, I expect it is unlikely the controller responds to transients in this frequency range -- that is, Fsw and harmonics. Personally, I would have no problem using a straight-line connection. My layout might end up different, anyway (due to other constraints, like nearby parts), but what they have is definitely usable. Don't worry about the dog-leg connection, just make it wherever is nearby and convenient. Other Explanations Maybe the diagram is not to scale. Perhaps they drew it this way, to imply that the connection should be made, indeed perhaps quite distant from the regulator, to compensate for DC voltage drop inbetween. I don't see any reference to this offhand. And it's not like the voltage is very accurate: the tolerance of 2% is better than some, but hardly high-precision. Current isn't extreme either, at only 3A; I would gladly deal with DC resistance by simply making the layout resistance low enough not to care. It certainly can't be connected downstream of subsequent filter components, which would incur excessive phase shift and thus oscillation. (A "zero" cap could compensate for this, like used in the ceramic cap section, but then we're changing the circuit again.) Another possibility, is that the controller is sensitive to RF noise, and high frequencies (Fsw harmonics) should be minimized at the pin. This doesn't seem likely: the regulator is most likely of CMOS design, and the input structure is likely a CMOS op-amp that is relatively immune to RFI. But, it might also be BiCMOS, and exhibit input rectification due to a bipolar input structure. (The effect of such interference, would be to shift the voltage setpoint up or down, probably by a small amount, given we're talking single-digit mVs here.) It seems to me, optimizing for stray inductance, without concern towards performance, is the most likely explanation. Put another way: this effort is useless, and its uselessness wasn't realized, basically because appnotes tend to be very shallow and avoid analysis. About The Regulator As long as I'm here, I might as well add a few nickels reviewing the part choice itself. It's not a design win for me. 
I would recommend changing it to something more effective, if this is a new design. If an existing design, leave it alone. Rationale: Voltage-mode controls are bad. There are a variety of complex reasons for this -- some of which I've already touched on, such as compensation -- but the most prominent concern not yet mentioned, is the current limiting behavior. The description claims some manner of per-cycle current limiting, but stops short of explaining the function, nor of how it transitions to hiccup mode. (My guess is a count of cycles terminated by peak current limiting, which is decremented or reset by "normal" (non-current-limited) pulses, and when incremented to a limit, triggers hiccup; I have seen other devices documented in this way (by TI), and it's anyone's guess why they chose not to discuss it here. Alas, this is par for the course: the general quality of TI's documentation is poor.) It's noteworthy that the current-limiting functionality is tacked onto the control. Contrast with a peak- or average-current-mode control, where current is strictly defined by the very nature of the control itself; tolerance of startup and short-circuit conditions is natural, a given. Now, this behavior need not be a show-stopper for any particular application; likely it works fine for 95%+ of generic power converter applications. What is a concern, is if you have an application where you did want continuous current limiting, you can't get it, there's no way to disable the hiccup function (and not that you would want to operate a "voltage mode" converter in current limit for very long anyway; they implement hiccup for good reason!). It's inflexible, and if you have a problem with these functions (maybe your design becomes heavily loaded as elements are added onto it, and it starts triggering the hiccup mode), you have little recourse but to replace the entire chip -- and thus more design work putting in a new one, choosing new components, etc. Likewise, you have no choice over soft start (fixed ~8ms ramp time), and compensation is fixed internally. The other aspects are general dynamics: voltage-mode controls generally perform poorly, needing overly large inductor and capacitor. Newer parts of comparable ratings might use 4.7uH and 22uF for example, ceramic capacitor is no problem, and offer comparable or better load-step response. A minor downside is, due to the higher inductor current ripple (say 30-100% of DC out), and perhaps higher Fsw, core losses may be more important; but compact, high-quality inductors are commonplace these days (for exactly this reason), so it's not hard to meet or beat the efficiency. (Not to mention, synchronous converters are readily available, doing away with the schottky's losses as well. These ratings are available in a SOT23-6 now!) In conclusion, the most important take-away is this: traces, components, etc. (everything really, even distance itself) have inductance. We can apply hand-waved approximations to get a rough estimate (say order of magnitude or better) of stray inductance in a real layout. We can then apply expected signals to these inductances, and estimate the noise voltage due to them. Don't loose sight of when something is worth improving, versus when it makes no further difference. 
Layout questions tend to be very uninteresting: most often, they either make a trivial difference, if any at all (which I expect is the case here), or such a gross difference as to be non-functional as given (e.g., ground routed circuitously on thin traces, no local bypass, etc.). So don't worry about it. Keep the priority routes close (Cin, regulator, diode), and don't be careless with the rest (don't make a huge loop with the inductor and output capacitor, etc.). Most likely, that will be good enough, and anything additional is wasted time. "Wasted", of course, in the strict functional sense: we can still have fun thinking about these things, and better informing our intuition about how to lay things out, for those occasions when it does matter (higher current, higher speed).
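For anyone who wants to replay the hand-waved arithmetic from this answer, here is a minimal C sketch; the 1 nH/mm worst-case rule of thumb, 15 µH filter inductor, ~3.5 nH capacitor ESL and 12 V PH swing are the same assumptions used above, and the few extra millimetres of feedback routing is purely an illustrative guess:

#include <stdio.h>

/* Crude "inductance divider" estimate of the switching step seen at the output node:
   the PH square wave divides between the filter inductance and the stray/ESL inductance. */
int main(void)
{
    const double nH_per_mm = 1.0;      /* pessimistic per-length inductance of a PCB trace */
    const double extra_mm  = 3.0;      /* guessed extra length of the dog-leg feedback route */
    const double l_filter  = 15e-6;    /* reference-design filter inductor, H */
    const double esl_cap   = 3.5e-9;   /* ESL of the ~7 mm long output capacitor, H */
    const double v_step    = 12.0;     /* PH node swing at 12 V input, V */

    double l_trace    = nH_per_mm * extra_mm * 1e-9;
    double v_at_cap   = v_step * esl_cap / (l_filter + esl_cap);
    double v_at_trace = v_step * (esl_cap + l_trace) / (l_filter + esl_cap + l_trace);

    printf("step at capacitor pad:          %.1f mV p-p\n", v_at_cap * 1e3);    /* ~2.8 mV */
    printf("step including %.0f mm of trace:  %.1f mV p-p\n", extra_mm, v_at_trace * 1e3);
    return 0;
}

This reproduces the ~2.8 mV estimate above and shows that a few millimetres of extra routing only adds a couple of mV - the sort of number that falls well below the controller's concern.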
H: Sharing ground between circuit and audio input? I'm trying to build a VU meter circuit, and followed this guide/diagram (link) I understand how the circuit should work in theory (mainly the OP-amps and the "logic"), except for the ground for the input signal. In the version I sketched up in CAD I hooked up what would be AudioIn+ to R1 and AudioIn- to GND. Would that cause any problems? If not, would it even work as intended? Thanks. AI: If AudioIn+ AudioIn- are line outputs or headphone outputs then yes, you can connect them to this circuit as you described. However, this is a very basic circuit with a lot of issues. It won't handle negative voltages well, it will probably clamp them, so if you are listening on this same signal, it may be distorted. The resistor ladder is linear, human hearing is logarithmic, you really want a logarithmic resistor ladder. You could change this to a logarithmic ladder, but obtaining the proper resistor values is a challenge. A better circuit would have peak detectors with decay. If a working circuit is more important than learning about opamps, then consider an LM3915 chip instead. It has all of this functionality in one chip.
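To see why a linear ladder wastes most of its LEDs, here is a small C sketch comparing linear thresholds with dB-spaced (logarithmic) ones; the 10-step count, 3 dB spacing and 5 V full scale are purely illustrative assumptions, not values from the linked circuit:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const int    steps      = 10;    /* number of LEDs / comparators (assumed)   */
    const double full_scale = 5.0;   /* top-of-scale voltage, V (assumed)        */
    const double db_step    = 3.0;   /* spacing per LED for the logarithmic case */

    printf("LED  linear(V)  log(V)\n");
    for (int i = 0; i < steps; i++) {
        double vlin = full_scale * (i + 1) / steps;                              /* equal voltage steps */
        double vlog = full_scale * pow(10.0, -db_step * (steps - 1 - i) / 20.0); /* equal dB steps      */
        printf("%3d  %8.3f  %7.3f\n", i + 1, vlin, vlog);
    }
    /* A signal 20 dB below full scale (0.5 V here) only lights the first LED with
       linear spacing, but lights the first three with 3 dB spacing - which is much
       closer to how loudness is actually perceived. */
    return 0;
}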
H: 1 input signal, two edge detectors I have a simple rising edge detector on the top and a falling edge detector at the bottom. The switches represent low-side switched sensor. In fact, I only have one sensor. How do I use one sensor signal to trigger both rising and falling edge detector, separately? (I know I am wrong but it is like somehow the red line is connected.) Or maybe there is a more elegant way to achieve this with a simple IC chip? AI: How about something like this: simulate this circuit – Schematic created using CircuitLab The diodes protect the B-E junctions of Q2 and Q3 from reverse voltage, and make sure that the capacitors "reset" (discharge) properly.
H: Help identifying a component from a barcode scanner I have a broken Honeywell model 1902 barcode scanner that I want to repair. It is the component labelled "6409" as shown in the photo. AI: It could be this IC: Transistor: P-MOSFET; unipolar; -20V; -4A; 1.3W; TSOP6 Manufacturer part number: AO6409A TME Symbol: AO6409A Reference: https://www.tme.com/in/en/details/ao6409a/smd-p-channel-transistors/alpha-omega-semiconductor/ I can see that Pin 2 is not connected on your board, and in the IC schematic Pin 1 and Pin 2 are both connected to "D".
H: AOZ2261 buck switching regulator doesn't give reasonable output I am designing a brushed motor driver unit that is controlled with an STM32F410 microcontroller and a DRV8243S H-bridge driver. I am using an AOZ2261NQI-12 to supply the H-bridge driver. The IC is supplied with 24V. Correct input polarity is guaranteed by a smart diode and MOSFET. I couldn't get any reasonable output. The PGOOD pin always gives LO. When I activate the EN pin, the output is at millivolt level. Something is wrong with the device configuration. Maybe I damaged it while soldering at 310°C with a heat gun to melt the solder (Sn62, Pb36, Ag2). AI: The IC can't work with VCC = 3.3V because the UVLO threshold for rising VCC can be as high as 4.5V. Apply 5V to the VCC pin. The datasheet recommends VCC = 5V, which is the voltage that the IC is characterised at. Also make sure the correct solder bridge is shorted, i.e. the correct mode is selected.
H: Why does the current into a capacitor stop when it's fully charged? I am a newbie at electronics and I want to ask why the current stops when the capacitor is fully charged. AI: When there's a potential difference between two nodes there's a current flow. The amount of current is set by the resistance between these nodes. When there's no potential difference, no current flows. As the capacitor charges, its voltage rises towards the source voltage; once they are equal ("fully charged"), there is no potential difference left across the resistance feeding the capacitor, so the current stops.
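A short worked equation may help. This assumes the simplest case of a capacitor charged from a fixed voltage \$V_S\$ through a resistance \$R\$, which is implied but not stated in the question: $$i(t)=\frac{V_S-v_C(t)}{R}=\frac{V_S}{R}\,e^{-t/RC}$$ As \$v_C\$ rises towards \$V_S\$, the numerator shrinks and the current decays exponentially towards zero - "fully charged" is simply the point where \$v_C \approx V_S\$ and essentially no current flows any more.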
H: MOSFET trouble with Arduino I am new to electronics. I have a MOSFET that I'm gonna use to activate a solenoid, but for now I wanted to test it with a resistor. This is my design. simulate this circuit – Schematic created using CircuitLab When I hook up a power supply to the gate and source, it turns the MOSFET on and off as expected, causing the meter to read 5V when on and 0V when off. The problem is when I try to have an Arduino Uno take the place of the V_gs power supply. I first connect the circuit ground to the GND pin. Then I connect the gate to the 5V pin. It turns on normally. The problem is when I connect the gate to one of the other GND pins on the Arduino, it stays on. The only way I can turn it off is by physically touching the gate and source wires together. How is that possible? I thought all of the GND pins were internally connected. What should I do to fix this? I'm using an FQP30N06 MOSFET. Thanks. AI: It sounds like the GND pin you think you are connecting to is actually floating for some reason or another (bad jumper wire, bad solderless breadboard connection, bad solder joint, wrong pin selected, wrong markings, something like that). Also the FQP30N06 is inappropriate for 5V drive, it's specified with 10V drive. It will work with a relatively high value resistor, but may be marginal, overheat or whatever with a high current solenoid. The solution is to pick a better MOSFET (logic level drive) specified at 4.5V or less Vgs.
H: How to connect a 12V electromagnet to 220V power adaptor I'm not very familiar with electronics, but I want to create a basic automatic pet feeder for my outdoor dog house. My idea is simple: a wooden box with a floor that drops at a set time, controlled by an electromagnetic lock and a standard Plug-in Timer. Since there's no Wi-Fi, I'm keeping it simple without any 'smart' systems. Questions: Should I use a lock that releases on power-on or power-off? Power-on release: Amazon link Power-off release: Amazon link Can I just cut the connector and solder it onto a power supply adapter like this one? Amazon link AI: Should I use a lock that releases on power-on or power-off? You have a couple of considerations: The lock should last much longer if it is only powered for a short while. Energise-to-release would be more power efficient. What do you want to happen if the power fails? Keep the door locked (prison mode) or release the lock (emergency exit mode). Can I just cut the connector and solder it onto a power supply adapter like this one? That would be fine - but not with that power supply. Down low in the Amazon ad for the lock it states: Electrical Parameters: DC 12V 2A The PSU you have chosen is only rated at 12 V, 1 A. By the way, 12 V × 2 A = 24 W. If you have that powered for 24 hours (which I know you won't in your application) the power consumption would be 24 W × 24 h = 576 Wh = 0.576 kWh so it would cost you > half a unit per day. That's just another little bit to add to somebody's electricity bill!
H: What shapes can IC dies be? I know an IC die can be cut into a rectangular shape. Apparently triangles and rhombuses are also possible, but not used. What other tesselations can they be cut into? AI: The main restriction is that the easiest and cheapest way to cut a wafer is in straight lines that go all the way across the wafer. It's hard to start and stop cutting in the middle of the wafer, which would be required for tightly-packed hexagonal dice: (image source) Note how there are no lines here that go on indefinitely; they all stop in the middle of the wafer. If you really wanted hexagonal dice, and you can afford to sacrifice a bit of wafer area, you could use a trihexagonal tiling like so: (image source) Note how here, the extra triangles mean that all the cuts do go on indefinitely, so this could be made with a couple passes through an ordinary wafer sawing machine. You'd need three passes instead of two as for rectangular dice, but that's not a big deal. Triangles and parallelograms (which includes rectangles) tile with zero wasted space with straight-line cuts. There's never any advantage that I'm aware of to making triangular or non-rectangular parallelogram dice, though, so they're all rectangles. (image source) Note that in these triangular tilings, you make three sets of cuts. If you make just two sets of cuts (remove one set of parallel lines), you get parallelogram tilings. It would also be possible to cut a wafer into slices like a pizza, but there's little reason to do that either. That's a good metric for it though: Any shape you can cut pizza into is a shape you could cut a wafer into. The one other shape you sometimes see used is circular, where the entire wafer is used to make a single device with no cutting involved at all. This is used for extremely high-current diodes and thyristors, which can be recognised by their distinctive hockey-puck or coaster-like shape: (image source) (this device is rated for 103 kiloamps peak!) It is possible to cut dice into arbitrary complex shapes by using other cutting methods, but these are much more expensive than simply using diamond saws. The disadvantages outweigh the advantages, for most applications.
H: How to change the FPGA supply voltage (VCCint and VCCBRAM) beyond recommended operating conditions on Vivado? ​ I am new to the field of FPGAs. The FPGA I am using is Virtex-7 VC707 -2 speed grade. In my research project, I am required to reduce the supply voltage (BRAM's specifically) to a low value, say 0.7 Volt. The board datasheet says that the supply voltage has a minimum and a maximum of -0.5 volts and 1.1 volts respectively but Vivado is not letting me alter the supply voltage beyond the recommended operating conditions (between 0.97 volts and 1.03 volts). How do I alter the supply voltage beyond the recommended operating conditions? Thanks AI: The FPGA I am using is Virtex-7 VC707 -2 speed grade However, VC707 isn't the part number of a Virtex-7 but rather an evaluation kit - AMD Virtex 7 FPGA VC707 Evaluation Kit. The FPGA on the VC707 evaluation kit is XC7VX485T-2FFG1761C but Vivado is not letting me alter the supply voltage beyond the recommended operating conditions (between 0.97 volts and 1.03 volts). I'm not aware that Vivado allows the supply voltage to be adjusted, since VCCBRAM is provided by an external regulator external to the FPGA (and which can vary from board to board). The VC707 Evaluation Board for the Virtex-7 FPGA User Guide Figure 1-33 shows VCCBRAM is generated by U62 on the board: And Table 1-32: shows VCC_BRAM is controlled by a UCD9248 PMBus Controller: The VC707 user manual contains the following about using the PMBus on the VC707: The PMBus connector, J5, is provided for use with the TI USB Interface Adapter PMBus pod (TI part number EVM USB-TO-GPIO), which can be ordered from the TI website [Ref 25], and the associated TI Fusion Digital Power Designer GUI (downloadable from [Ref 25]). This is the simplest and most convenient way to monitor the voltage and current values for the power rail listed in Table 1-30, Table 1-31, and Table 1-32. From looking at the VC707 schematic the PMBus signals are also connected to the Virtex-7, so logic in the FPGA could also access the PMBus. How do I alter the supply voltage beyond the recommended operating conditions? While the VC707 user guide mentions using PMBus to monitor the voltage of the power rails, it could also potentially be used to adjust the VCCBRAM voltage. The UCD9248 used on the VC707 have an Arm-7 core, so you would need to check if as-delivered UCD9248 program supports adjusting the VCCBRAM rail via a PMBus command or not. I am required to reduce the supply voltage (BRAM's specifically) to a low value, say 0.7 Volt. What is the reason for reducing VCCBRAM to 0.7 V? That is below the PG On Threshold (V) of 0.9 V shown in Table 1-32 of the VC707 user guide. Power Good is normally used to sequence power-up of multiple rails. If you need to reduce the VCCBRAM below the PG On Threshold (V) you would need to make more changes to change the PG On Threshold (V). Power good sequencing is normally used to allow the board to power up correctly, and if modify that could potentially damage the board or FGPA. So, why do you have a requirement to operate the BRAM beyond the recommended limits? Is that for some security research?
H: FatFs. How to find the end address and start address of a file on a USB drive? USB drive connected to STM32F407VGT6 I need to find the end address and start address of text and non-text files. The file is located on a USB drive. Are there special functions for finding the end and beginning of a file? Like the fseek function. (SEEK_SET, SEEK_END) The fseek function does not suit me, because I need to work with a USB drive. I could find the address of the end of a text file like this: char line[2]; while(f_gets(line, 2, &USBHFile)) { } pos= f_tell(&USBHFile); printf("%d\n", pos); And I think the address of the beginning of the file is always zero. Non-text files do not have EOF and the end of the file cannot be found this way. AI: The fact that you are using a USB drive has nothing to do with what you want to do. What matters is that you seem to be using the FatFs library to access the file system. FatFs does not provide a standard C file API, so there is no matching fseek() function to seek using the end of file as origin and then read the file position indicator with ftell() to find the size of the file. Each file begins at offset 0 and ends at offset size-1. If you want to seek to the end, then there are two options. If the file is opened in read-only mode, you can try to seek to an offset that is larger than the file. Or, you can get the size of the file with f_size(), and then you know how long the file is, what the last offset is, and where to f_lseek() if you want to read a .BMP file one line at a time. Of course you could just trust the .BMP header's file size to be correct, since you anyway need multiple pieces of information such as how many lines there are in the file to read and what the length of each line is in bytes based on the pixel data format. Depending on what you want to do with the file, you could also just read the image data as it is in the file and write the data from end to beginning of your frame buffer.
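As a minimal sketch of the f_size()/f_lseek() approach (error handling trimmed, and "0:/image.bmp" in the usage comment is just a placeholder path on the USB drive's logical volume):

#include "ff.h"        /* FatFs */

/* Read the last 'len' bytes of a file by seeking relative to its size. */
FRESULT read_tail(const char *path, void *buf, UINT len, UINT *got)
{
    FIL f;
    FRESULT res = f_open(&f, path, FA_READ);
    if (res != FR_OK) return res;

    FSIZE_t size = f_size(&f);            /* file length in bytes; last valid offset is size-1 */
    if (len > size) len = (UINT)size;

    res = f_lseek(&f, size - len);        /* FatFs seeks from the start, so compute the offset */
    if (res == FR_OK)
        res = f_read(&f, buf, len, got);

    f_close(&f);
    return res;
}

/* Example: read_tail("0:/image.bmp", line_buf, bytes_per_line, &n); */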
H: How to supply a constant current to a load regardless of its resistance? I am tasked with building a circuit which generates PWM signals from a microcontroller with an amplitude of 3.3V and then amplifies them to an amplitude of 12V.The second requirement is that those signals "should be able to supply at least 0.8A of current". Since they come to the effect of powering a device, I find it the most straightforward to think that I have to drive such current regardless of the resistance of the device (about which I am not given information) and without changing the amplitude voltage. The voltage amplification part went smooth, but I still haven't found a way to satisfy the current condition. As of now, this is my work-in-progress circuit (right pane): I decided to approach the problem by building a constant current source driving the necessary current through the load whenever the PWM signal is high. For that I found a basic setup with an NPN transistor and a zener diode and built it such that the current flowing through the emitter resistor is fixed to the breakdown voltage of the diode and Re at 0.8A. And current reading across Re was indeed 800mA before I connected it to the load. After connecting it though, it showed the reading on the left pane, which of course makes sense in light of Ohm's law but that doesn't help me in my situation. I can't assume the load will be of conveniently low resistance (I think...) and that's why I am simulating it with a 1kΩ resistor. I hence have two questions: Do I understand correctly what I need to do from the initial requirements, and if I do, is there a current source configuration (or perhaps some alteration to my configuration) which would allow me to drive at least 0.8A provided I don't know what load resistance I am working with? I would appreciate any advice, especially since I am a novice in building circuits. Edit 1 The device being powered is a stepper motor. AI: If the load draws a maximum current of 0.8A from 12V then your driver should be able to supply at least 0.8A. That's why your requirements say should be able to supply at least 0.8A. If you want to buy a tape measure to measure things up to 2 metres then you want your measure to be at least 2 meters long. Your question is not a current-source-related one. No need to make it over-complicated. The title, a constant current to a load regardless of load resistance, is actually the definition of the ideal current source which has infinite output impedance and therefore infinite supply voltage. The current source you used in your simulation (which I totally ignored initially but should be mentioned anyway. Thanks to @V.V.T for pointing this out.) is not a current source. It's actually a strange voltage source configuration because the emitter will see one diode drop less than the Zener voltage at the base (neglecting the drop across the base resistance) thanks to the local feedback. This configuration would be a current source with a collector load. Using the op-amp's output as the main power source of the transistor-based configuration (even if it was a current source) is not a good idea because the constant current will be supplied from the op-amp's output and none of the ordinary op-amps will be able to source at least 0.8 Amps (except power op amp's which could be an overkill for your project). If you want to use an op-amp as a voltage translator (to convert 3.3V to 12V) you don't have to build it as a non-inverting amplifier. Instead, a comparator configuration would work as well. 
Another option is to use transistors: simulate this circuit – Schematic created using CircuitLab The load is a motor and draws a good amount of current, so your best bet is to make your driver have as low output impedance as possible. One of the ways is to make a totem-pole driver: simulate this circuit Important points for BJT totem pole: It is a voltage follower so VOUT = VIN. Make sure your 3.3V is translated to 12V before. Maximum collector-emitter voltage, VCE is a limiting factor (should be greater than 12V). Saturated collector-emitter voltage, VCE(sat), is a limiting factor, and the maximum allowed continuous collector current, IC, is a limiting factor (should be greater than 0.8A). The product of the two above, P = VCE(sat) · IC, is the power dissipation and it should be less than the maximum power dissipation of the device, Ptot. Consider heatsinking as well. Important points for MOSFET totem pole: This is an INVERTER i.e. if you apply 0V to the input the output will be 12V (If you have a simple single-transistor inverter to drive this section be aware that the input capacitance of the low-side MOSFET along with the pre-inverter's output impedance will basically slow down the switching action). Again, make sure the input voltage is translated to 12V. If you apply 3.3V directly and if it's above the threshold gate-source voltage (for both), VGS(th), then both MOSFETs will turn on. Be careful. The voltage and current considerations apply here as well. Choosing the MOSFETs with low enough on-resistance, RDS(on), is one of the key points. The high-side MOSFET will be supplying the load (by connecting 12V to the load) so make sure the conduction loss, P = I² RDS(on) is less than the MOSFET's maximum allowed power dissipation. Check the SOA graphs. Consider heatsinking. Unless you are building an H-bridge the low-side MOSFET/BJT doesn't matter and may not be needed. But it's good to keep it there to tie the OUT to the ground (to keep the load grounded) when there's no input signal.
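As a quick sanity check on the dissipation points listed above, here is a small C sketch; the 0.8 A load comes from the question, while the VCE(sat), RDS(on) and duty-cycle figures are made-up example numbers to be replaced with the values from your chosen parts' datasheets (switching losses are ignored):

#include <stdio.h>

int main(void)
{
    const double i_load  = 0.8;    /* worst-case load current from the question, A        */
    const double vce_sat = 0.3;    /* example BJT saturation voltage, V (check datasheet)  */
    const double rds_on  = 0.05;   /* example MOSFET on-resistance, ohm (check datasheet)  */
    const double duty    = 0.5;    /* PWM duty cycle; conduction loss scales with it       */

    double p_bjt = vce_sat * i_load * duty;           /* P = VCE(sat) * IC, averaged over PWM */
    double p_fet = i_load * i_load * rds_on * duty;   /* P = I^2 * RDS(on), averaged over PWM */

    printf("BJT high-side:    ~%.0f mW\n", p_bjt * 1e3);   /* ~120 mW at 50 %% duty */
    printf("MOSFET high-side: ~%.0f mW\n", p_fet * 1e3);   /* ~16 mW at 50 %% duty  */
    return 0;
}

Numbers in this range are easy to handle, but re-run the check with your real device parameters before deciding whether a heatsink is needed.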
H: IR emitter circuit not working I am struggling with an IR emitter circuit which should replace the remote control for a few devices. I designed a circuit and already made a PCB which uses P-channel MOSFETs for 12V device control. As I am using an additional gate driver it's quite fast. I would like to have the possibility to connect an IR emitting circuit instead of one device so it will work as a remote controller. Unfortunately it's not working. However, when I use the same code and connect the IR diode with a resistor directly to my microcontroller it works without issues, although the range is quite limited. Code is the same. Diode is also the same. Oscilloscope charts also look very similar. I would like to make this circuit work by changing only the right part of it. The idea is to have the possibility to use the MOSFET switch for different purposes. Not working circuit Working one I tried a pull-down resistor to make this work but it didn't help. I would really appreciate any help. AI: There are many reasons they don't work or work poorly. You are transmitting out only the baseband of the IR signal, which looks like the NEC protocol. More specifically, it looks like the Extended NEC protocol based on the address bits of the data. What is important is that the pulses should be modulated with a carrier wave. In the case of the NEC protocol, the carrier is 38 kHz. So instead of turning the LED simply on for 9ms for the start pulse or 560us for a bit pulse, it needs to be driven with a 38 kHz carrier wave. The carrier wave is usually not even a 50% duty square wave, but maybe 33% or 25% to allow higher pulsed currents. And based on the waveform of how the FET drives the LED, it is very slow to turn off, so the 10k pull-up cannot turn off the LED fast enough even for the baseband signal. So you need some software and hardware modifications anyway, to be able to output the 38 kHz carrier and be able to drive the LED with it.
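To illustrate what "driving the LED with the carrier" means in software, here is a hedged, bit-banged C sketch; gpio_led_on(), gpio_led_off() and delay_us() are placeholder functions for whatever your microcontroller provides, and in practice a hardware PWM timer running at 38 kHz is usually the better way to generate the carrier:

/* Send one NEC "mark" of the given length as a burst of 38 kHz carrier
   instead of a solid high level. ~26 us period, roughly 1/3 duty cycle. */
void ir_mark_us(unsigned int us)
{
    /* placeholder GPIO / delay helpers - replace with your MCU's own functions */
    extern void gpio_led_on(void);
    extern void gpio_led_off(void);
    extern void delay_us(unsigned int us);

    unsigned int cycles = us / 26;          /* number of 38 kHz carrier periods in this mark */
    while (cycles--) {
        gpio_led_on();
        delay_us(9);                        /* ~1/3 of the 26 us period on  */
        gpio_led_off();
        delay_us(17);                       /* ~2/3 of the 26 us period off */
    }
}

/* A "space" is simply the LED held off for the required time. For NEC:
   start = ir_mark_us(9000) followed by 4500 us of space,
   a '1' bit = ir_mark_us(560) + 1690 us space, a '0' bit = ir_mark_us(560) + 560 us space. */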
H: Flyback Transformer Winding Direction Looking at the basic Flyback Converter Topology, the coupled inductor polarity should be opposite in order for the circuit to operate properly. However, I am confused on how to implement the correct polarity in a physical sense when winding the primary and secondary windings on the bobbin itself. I've tried looking at some other answers to this question online, but the answer given seems to change from source to source. Therefore, I drew an image of what I believe is happening. The black rectangle represents the bobbin on which the windings are to be wound around. Lets say the red loop represents the primary winding and the blue loop represents the secondary winding. In the real world the two windings would be ontop one another, but for the sake of drawing a clear image, the windings have been separated. If we also assume that we are using some type of EE or E style core, the core will sit in the middle of the bobbin. Ok, so if we are looking on the top of the bobbin, the red (primary) wire is wound in a counter-clockwise direction. If a current were to flow into the primary as shown by the red arrows in the picture, this would create a flux in the upward direction from the right hand rule. Then if the secondary winding (in Blue) was wound in the same direction relative to the primary then, from my understanding of Lenz's Law, then the induced current in the secondary would flow in such a way as to create a flux which opposes that of the primary as shown by the blue arrows. If we accept the dot convention, where the dot on the primary represents the current entering the primary winding and the dot on the secondary represents the current leaving the secondary winding, then the dots should be placed as shown in the drawing. Thus, coils that are wound in the same direction relative to one another would be in phase and the polarity dots are placed at the start of each winding. Therefore, for a Flyback circuit, like the one shown above, I would connect pin 1 of the primary to the input voltage source, pin 2 of the primary winding to the top of the switch, pin 3 of the secondary to ground, and pin 4 of the secondary to the anode of the diode, correct? Similarly, If we consider the case in which the secondary is wound in the opposite direction than the primary, then the polarity dots would be opposite like the ones drawn in the picture below. In this case pins 1 and 2 of the primary would still be connected to the voltage source and switch respectively. However, pin 3 of the secondary would now connect to the anode of the diode and pin 4 would now connect to ground. In summary, windings that are wound in the same direction relative to another winding are in-phase and the polarity dots are placed on the starting ends of the wire, and windings that are wound opposite to one another are out-of-phase, and the polarity dots are place on opposite ends of the wire (starting end on the primary, and the finishing end of the secondary). Since, I've read so many conflicting answers online, I've gone and confused myself it seems, so my question really is, am I correct in the way that I have explained how the polarity is marked given the winding direction of the primary and secondary relative to each other, especially for a Flyback, or am I missing something? 
AI: I would connect pin 1 of the primary to the input voltage source, pin 2 of the primary winding to the top of the switch, pin 3 of the secondary to ground, and pin 4 of the secondary to the anode of the diode, correct? Correct. am I correct in the way that I have explained how the polarity is marked given the winding direction of the primary and secondary relative to each other You are correct. or am I missing something? I think you shouldn't really talk about an induced current in relation to Lenz's law: - then, from my understanding of Lenz's Law, then the induced current in the secondary would flow in such a way as to create a flux which opposes that of the primary as shown by the blue arrows. Consider an ideal 1:1 transformer with perfect coupling between primary and secondary. We know that if the primary is driven with a voltage we get an induced voltage at the secondary terminals and, we know that if the two windings are identically wound, that the voltages are in phase like this (from my basic website): - It then follows that if there is a load on the secondary, we get a secondary current like this: - However, that secondary current does not oppose the original field that induced the secondary voltage because, if it did, then we would have a significant change to that voltage and, this just isn't true for any ideal or practical transformer. What actually happens is that an additional primary current is taken. That current generates a magnetic field that entirely cancels the magnetic field from the secondary current. What remains (as always) is the original magnetic flux in the core (aka magnetization flux) as produced by the primary voltage: - Also, current is not induced and Lenz never said that current was induced. It's all about conservation of energy.
H: STM32 vector table with wrong addresses I tried to manually setup a STM32 project for my nucleo board, however the binary behaves strangely during compilation. The current project state is located in a github repository. I noted two problematic items, but I don't know if they are related: When I set a breakpoint in the entry point function, "Reset_Handler", with gdb-multiarch, the breakpoint is never reached, and the debugging session breaks in a random function: Checking the objdump of the binary, arm-none-eabi-objdump -D build/stepper-driver.elf > stepper_dump, I noticed a mismatch between the addresses of the handler functions and the values stored by the vector table: Vector table: 08000000 <g_pfnVectors>: 8000000: 20020000 andcs r0, r2, r0 # equivalent to .word _estack 8000004: 08000389 stmdaeq r0, {r0, r3, r7, r8, r9} # equivalent to .word Reset_Handler 8000008: 08000257 stmdaeq r0, {r0, r1, r2, r4, r6, r9} # equivalent to .word NMI_Handler 800000c: 080002b9 stmdaeq r0, {r0, r3, r4, r5, r7, r9} # equivalent to .word HardFault_Handler 8000010: 080002cf stmdaeq r0, {r0, r1, r2, r3, r6, r7, r9} # equivalent to .word MemManage_Handler 08000388 <Reset_Handler>: 8000388: f8df d034 ldr.w sp, [pc, #52] @ 80003c0 <LoopFillZerobss+0x12> 800038c: 480d ldr r0, [pc, #52] @ (80003c4 <LoopFillZerobss+0x16>) 800038e: 490e ldr r1, [pc, #56] @ (80003c8 <LoopFillZerobss+0x1a>) 8000390: 4a0e ldr r2, [pc, #56] @ (80003cc <LoopFillZerobss+0x1e>) 8000392: 2300 movs r3, #0 8000394: e002 b.n 800039c <LoopCopyDataInit> 08000256 <NMI_Handler>: 8000256: b480 push {r7} 8000258: af00 add r7, sp, #0 800025a: e7fe b.n 800025a <NMI_Handler+0x4> 080002b8 <HardFault_Handler>: 80002b8: f01e 0f04 tst.w lr, #4 80002bc: bf0c ite eq 80002be: f3ef 8008 mrseq r0, MSP 80002c2: f3ef 8009 mrsne r0, PSP 80002c6: 4671 mov r1, lr 80002c8: f7ff bfc8 b.w 800025c <HardFault_Handler_C> 80002cc: bf00 nop 080002ce <MemManage_Handler>: 80002ce: b480 push {r7} 80002d0: af00 add r7, sp, #0 80002d2: e7fe b.n 80002d2 <MemManage_Handler+0x4> The addresses of the vector table is incremented by one of the correct values. My current theory is that the execution of my binary starts from the beginning of the vector table, and the mismatch among the addresses caused the error described by the previous item (random function during the debugging session). What's happening in the first error I described? Does my theory make sense? (explained in first answer) What am I doing wrong that's causing the addresses mismatch and how can I solve it? (solved - not an error) Update The addresses mismatch is already explained by a comment, however the initial problem still happens. 
Update 2 Including the memory dump around Reset_Handler function memory dump: 0x8000370 <SystemInit+60>: lsls r1, r0, #12 0x8000372 <SystemInit+62>: str r3, [r4, #112] @ 0x70 0x8000374 <SystemInit+64>: ldr r3, [r4, #0] 0x8000376 <SystemInit+66>: bic.w r3, r3, #16777216 @ 0x1000000 0x800037a <SystemInit+70>: str r3, [r4, #0] 0x800037c <SystemInit+72>: ldr.w r3, [r4, #140] @ 0x8c 0x8000380 <SystemInit+76>: bic.w r3, r3, #16777216 @ 0x1000000 0x8000384 <SystemInit+80>: str.w r3, [r4, #140] @ 0x8c 0x8000388 <Reset_Handler>: ldr r3, [r4, #8] 0x800038a <Reset_Handler+1>: bic.w r3, r3, #3 0x800038e <Reset_Handler+5>: str r3, [r4, #8] 0x8000390 <Reset_Handler+7>: ldr r3, [r4, #8] 0x8000392 <Reset_Handler+9>: bic.w r3, r3, #240 @ 0xf0 0x8000396 <Reset_Handler+13>: str r3, [r4, #8] 0x8000398 <Reset_Handler+15>: ldr r3, [r4, #8] 0x800039a <Reset_Handler+17>: bic.w r3, r3, #7168 @ 0x1c00 0x800039e <Reset_Handler+21>: orr.w r3, r3, #6144 @ 0x1800 0x80003a2 <Reset_Handler+25>: str r3, [r4, #8] 0x80003a4 <Reset_Handler+27>: ldr r3, [r4, #112] @ 0x70 0x80003a6 <Reset_Handler+29>: bic.w r3, r3, #768 @ 0x300 0x80003aa <Reset_Handler+33>: orr.w r3, r3, #512 @ 0x200 0x80003ae <Reset_Handler+37>: str r3, [r4, #112] @ 0x70 0x80003b0 <Reset_Handler+39>: ldr r3, [r4, #112] @ 0x70 0x80003b2 <Reset_Handler+41>: orr.w r3, r3, #32768 @ 0x8000 0x80003b6 <Reset_Handler+45>: str r3, [r4, #112] @ 0x70 0x80003b8 <Reset_Handler+47>: ldr r3, [r4, #8] 0x80003ba <Reset_Handler+49>: bic.w r3, r3, #123731968 @ 0x7600000 0x80003be <Reset_Handler+53>: str r3, [r4, #8] 0x80003c0 <Reset_Handler+55>: mov r1, r8 0x80003c2 <LoopFillZerobss+20>: movw r0, #3326 @ 0xcfe 0x80003c6 <LoopFillZerobss+24>: bl 0x8000298 <HardFault_Handler_C+60> 0x80003ca <LoopFillZerobss+28>: ldr r3, [r5, #4] 0x80003cc <LoopFillZerobss+30>: and.w r3, r3, r9 0x80003d0 <LoopFillZerobss+34>: str r3, [r5, #4] 0x80003d2 <LoopFillZerobss+36>: ldr r2, [r5, #12] 0x80003d4 <LoopFillZerobss+38>: movw r1, #3326 @ 0xcfe 0x80003d8 <WWDG_IRQHandler>: rbit r3, r1 0x80003dc <__libc_init_array>: clz r3, r3 0x80003e0 <__libc_init_array+4>: lsl.w r3, r3, r8 0x80003e4 <__libc_init_array+8>: movs r7, #3 0x80003e6 <__libc_init_array+10>: lsl.w r3, r7, r3 0x80003ea <__libc_init_array+14>: bic.w r3, r2, r3 0x80003ee <__libc_init_array+18>: rbit r1, r1 0x80003f2 <__libc_init_array+22>: mov.w r0, #256 @ 0x100 0x80003f6 <__libc_init_array+26>: str r3, [r5, #12] 0x80003f8 <__libc_init_array+28>: mov r1, r6 0x80003fa <__libc_init_array+30>: bl 0x8000298 <HardFault_Handler_C+60> 0x80003fe <__libc_init_array+34>: ldr r1, [r5, #12] 0x8000400 <__libc_init_array+36>: mov.w r3, #256 @ 0x100 0x8000404 <__libc_init_array+40>: rbit r2, r3 objdump generated from elf file: 8000370: f8d3 3088 ldr.w r3, [r3, #136] @ 0x88 8000374: 4a03 ldr r2, [pc, #12] @ (8000384 <SystemInit+0x50>) 8000376: f443 0370 orr.w r3, r3, #15728640 @ 0xf00000 800037a: f8c2 3088 str.w r3, [r2, #136] @ 0x88 800037e: bf00 nop 8000380: bd80 pop {r7, pc} 8000382: bf00 nop 8000384: e000ed00 and lr, r0, r0, lsl #26 08000388 <Reset_Handler>: 8000388: f8df d034 ldr.w sp, [pc, #52] @ 80003c0 <LoopFillZerobss+0x12> 800038c: 480d ldr r0, [pc, #52] @ (80003c4 <LoopFillZerobss+0x16>) 800038e: 490e ldr r1, [pc, #56] @ (80003c8 <LoopFillZerobss+0x1a>) 8000390: 4a0e ldr r2, [pc, #56] @ (80003cc <LoopFillZerobss+0x1e>) 8000392: 2300 movs r3, #0 8000394: e002 b.n 800039c <LoopCopyDataInit> 08000396 <CopyDataInit>: 8000396: 58d4 ldr r4, [r2, r3] 8000398: 50c4 str r4, [r0, r3] 800039a: 3304 adds r3, #4 0800039c <LoopCopyDataInit>: 800039c: 18c4 
adds r4, r0, r3 800039e: 428c cmp r4, r1 80003a0: d3f9 bcc.n 8000396 <CopyDataInit> 80003a2: 4a0b ldr r2, [pc, #44] @ (80003d0 <LoopFillZerobss+0x22>) 80003a4: 4c0b ldr r4, [pc, #44] @ (80003d4 <LoopFillZerobss+0x26>) 80003a6: 2300 movs r3, #0 80003a8: e001 b.n 80003ae <LoopFillZerobss> 080003aa <FillZerobss>: 80003aa: 6013 str r3, [r2, #0] 80003ac: 3204 adds r2, #4 080003ae <LoopFillZerobss>: 80003ae: 42a2 cmp r2, r4 80003b0: d3fb bcc.n 80003aa <FillZerobss> 80003b2: f7ff ffbf bl 8000334 <SystemInit> 80003b6: f000 f811 bl 80003dc <__libc_init_array> 80003ba: f7ff ff49 bl 8000250 <main> 80003be: 4770 bx lr 80003c0: 20020000 andcs r0, r2, r0 80003c4: 20000000 andcs r0, r0, r0 80003c8: 20000000 andcs r0, r0, r0 80003cc: 08000444 stmdaeq r0, {r2, r6, sl} 80003d0: 20000000 andcs r0, r0, r0 80003d4: 2000001c andcs r0, r0, ip, lsl r0 080003d8 <ADC_IRQHandler>: 80003d8: e7fe b.n 80003d8 <ADC_IRQHandler> ... 080003dc <__libc_init_array>: 80003dc: b570 push {r4, r5, r6, lr} 80003de: 4d0d ldr r5, [pc, #52] @ (8000414 <__libc_init_array+0x38>) 80003e0: 4c0d ldr r4, [pc, #52] @ (8000418 <__libc_init_array+0x3c>) 80003e2: 1b64 subs r4, r4, r5 80003e4: 10a4 asrs r4, r4, #2 80003e6: 2600 movs r6, #0 80003e8: 42a6 cmp r6, r4 80003ea: d109 bne.n 8000400 <__libc_init_array+0x24> 80003ec: 4d0b ldr r5, [pc, #44] @ (800041c <__libc_init_array+0x40>) 80003ee: 4c0c ldr r4, [pc, #48] @ (8000420 <__libc_init_array+0x44>) 80003f0: f000 f818 bl 8000424 <_init> 80003f4: 1b64 subs r4, r4, r5 80003f6: 10a4 asrs r4, r4, #2 80003f8: 2600 movs r6, #0 80003fa: 42a6 cmp r6, r4 80003fc: d105 bne.n 800040a <__libc_init_array+0x2e> 80003fe: bd70 pop {r4, r5, r6, pc} 8000400: f855 3b04 ldr.w r3, [r5], #4 8000404: 4798 blx r3 AI: From the Arm Cortex-M3 vector table documentation: The least-significant bit of each vector must be 1, indicating that the exception handler is Thumb code, see Thumb state. Because the vector addresses will always have at least 16-bit alignment, the LSB is not required to be able to find the correct ISR (the address LSB will always be 0). The designers decided to make use of this LSB to indicate whether the vector handler is Thumb code. So in summary, you are doing nothing wrong; the addresses in the vector table are correct. The handler code is located at the address given in the vector table bitwise-ANDed with 0xFFFFFFFE.
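A small C illustration of the same point (a hedged sketch, not code from the project above — it only shows how the table entries relate to the addresses objdump prints):

```c
/* Illustrative sketch: how to interpret the first entries of the Cortex-M
 * vector table at the STM32 flash base address (as seen in the objdump). */
#include <stdint.h>

#define FLASH_BASE 0x08000000u          /* flash base, as in the dump above */

void inspect_vector_table(uint32_t *sp_out, uint32_t *reset_code_addr_out)
{
    const uint32_t *vectors = (const uint32_t *)FLASH_BASE;

    *sp_out = vectors[0];               /* word 0: initial SP (_estack)          */

    uint32_t entry = vectors[1];        /* word 1: Reset vector, e.g. 0x08000389 */
    /* Bit 0 is only the Thumb-state flag. Clearing it gives the address where
     * the code actually starts (0x08000388, matching the objdump). When the
     * core branches to the handler it uses the value with bit 0 still set.    */
    *reset_code_addr_out = entry & ~1u;
}
```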
H: AL8116 datasheet error? I am trying to understand Typical Application Circuit from AL8116 datasheet and I am struggling to make any sense of it. For example, what is the point of having transformer connected though diode to GND and Vcc if those pins will always be at the same potential? Am I missing something or it is an obvious mistake? AI: The diode isn't directly connected to VCC and GND; there is a diode and capacitor that form a half-wave rectifier. Presumably, the transformer has some primary winding connected to an AC supply (possibly mains or the flyback transformer of the AL1665/AL1666 if configured that way) which is not clearly shown in this image. As long as the capacitance is sufficient to hold up the voltage and avoid triggering the undervoltage-lockout in each AC cycle, the chip will be powered from that rectifier.
H: P-channel MOSFET as a switch, always open with voltage at the source and a digital HIGH at the gate I have a signal provided by an ATtiny (in this case I am using an Arduino to simulate the signal). It's a digital output, so HIGH and LOW. I pass this through a NOR gate to invert the signal. When I simulate this circuit, the drain of the MOSFET is always +18 V. It completely ignores the signal at the gate. I expected the drain to switch between +18 V and 0 V, but it remains at +18 V. Now, if I disconnect the gate (delete the connection and leave it unconnected), the MOSFET remains switched off. How come my digital signal at the gate is not switching the MOSFET on and off? I've looked at many sample circuits and I can't seem to see where I'm going wrong with it. The signal from the NOR gate outputs at +5 V so it should be enough. NOTE: The reason I am using this configuration with a P-channel MOSFET is that I will be filtering the output square wave to a sine wave, so the load needs to be after the MOSFET. AI: You have the NOR gate outputting 5V which, as Michal Podmanický said, is not high enough to turn off the transistor. Enhancement mode FETs need to have zero (or near-zero) voltage relative to the source to turn off; the +5V relative to ground that you are applying when the NOR gate's output is high is -13V relative to the source, so it is very much still on (Vth for the IRF5305 is -3V or so). Note also that you cannot power the 4001 off of 18V and drive the input with 5V; the Vih (input high signal voltage) threshold for a 15V supply is minimum 11V, so that just shifts the problem from the PFET's gate to the NOR gate's input. Fortunately the fix is easy: replace R5 with a 2N3904 or similar NPN transistor, and drive the base with the Arduino through, say, a 10k resistor. The PFET will be conducting when the Arduino's output is high, rather than the other way around as it looks like you were trying to do with the NOR gate, but that should be a simple change to your code. simulate this circuit – Schematic created using CircuitLab
H: 1950s era primary and secondary low voltage intercom system I have a 1959 low voltage panel right next to my 120 volt panel in my laundry room. This panel controlled an old intercom system as well as the lights for the house. Inside the panel there seem to be a number of rectifiers or relays; I'm not 100% sure how to tell. I've never seen a panel like this before because it appears that all the commons and all the grounds for both panels are daisy-chained together. I've lost power to half of my light fixtures, and multiple fixtures are turning on and off, or flickering constantly. I am planning on replacing the transformer in the secondary box with a 20V, 1A DC supply. My question is about a relay next to the transformer. I shut off power to the relay at the fuse panel on the primary box hours ago, but the relay remains hot to the touch. The relay is a BT 58 Class II 115V 60C .30A 3v DC What could be the cause of this, and would it be smarter to use a higher voltage power supply? AI: With a low voltage system you should have a box full of relays. The relay contacts control your 120 V lights. The relay coils are energized by a low voltage. So your light switches will switch the low voltage to energize a relay coil and the relay will then switch 120 V to whichever lights are connected to it. If you have shut off power to a relay at the 120 V fuse panel it could still have its coil energized by the low voltage, and that could cause it to feel hot. As for using a higher voltage supply, that would not be wise. The relay coils are designed to work at a certain voltage; increasing the voltage will likely cause them to overheat. From the relay description they appear to be 3 V relays, so going to 20 V is not going to be good; I'd expect smoke and destroyed relays. Keep in mind that the relays will have 120 V on them, so while it's called a low voltage system it can still kill you. You may want to consult a qualified electrician, and since for some reason you've decided to just throw a random power supply in there I'm going to say it's not even "may want to", it's "will want to".
H: How to convert an interrupt signal to toggle a continuous signal I am working on a project where a momentary contact micro switch is used to activate the Vdd pin of a 555 timer IC - [Datasheet Link]. Thus, I need to toggle the Vdd pin every time the micro switch is pressed. In essence, every switch press toggles the IC from ON to OFF or vice versa. The current requirement of the Vdd pin is less than 1 mA and the working voltage is 12V. The circuit looks somewhat as follows - There is no micro-controller or ADC on board, which makes things a little complex. Switch details - Link AI: A simple solution would be a DFF: simulate this circuit – Schematic created using CircuitLab Every time a rising edge (this is important, so the DFF should be a rising-edge or high-level triggered one) is seen on the CLK pin, the output will be the inverse (NOT) of the current output state, hence D = \$\overline{\text{Q}}\$. R1 keeps the CLK input at 0 when there's no input applied. The R1-R2-C1 network helps to filter out switch bounce. If you can find a DFF that works with 12V (I haven't checked) then all you need is a post totem-pole buffer to increase the current output capability (NOTE: you said the current requirement is less than 1 mA, but the presence of the 68n capacitor across the 555's supply rail increases the initial peak current demand): simulate this circuit If you can't, then you'll need a voltage level translator instead, like the one following (you don't need the post totem-pole driver above): simulate this circuit
H: How can I separate the switched-mode power supply from the RF PCB to avoid interference? I want to construct an autonomous RF PCB module powered by a solar panel. To achieve this, I have obtained solar harvester and battery charger integrated circuits (ICs) designed to charge a Li-Po battery with a voltage of 4.1 V. Following this, my intention is to establish a connection between the power section and the RF/microcontroller (MCU) section. To mitigate potential noise issues, it has been suggested to use a low dropout regulator (LDO.) The drawback of LDO lies in its lower efficiency compared to switching mode regulators, particularly concerning thermal power losses. This becomes especially critical in the context of solar autonomous power, where efficiency is a key consideration. Upon my understanding, the primary source of noise is the electromagnetic interference (EMI/EMF) generated by the high frequency of the switching mode regulator. To address this, I am contemplating splitting the entire project into two distinct parts. The first part would encompass the power components, including the harvester charger, battery, and a switching mode regulator providing a 3.3 V output. The second part would involve the RF/MCU components. Power would then be transmitted from the power PCB to the RF section using wires with a length of 10-30cm. This approach is aimed at creating sufficient distance between the switching mode and the RF components to prevent EMI. I seek your advice on whether this approach would effectively solve the noise problem, or if using an LDO remains the optimal solution. AI: That sounds like a broad question but I'll try my best to give some general info whilst keeping the answer not too long. The presence of a switching regulator is enough to get both conducted and radiated EMI problems. But LDO helps to solve a limited portion i.e. radiated EMI will still be there (partly or fully). If you are size-limited you may want to consider buck regulators running at a switching frequency of at least 1 MHz. In this case the harmonic content (switching noise) may contain higher frequencies with higher amplitudes. So if you run 30-centimetre long cables directly from the output of a switching regulator, depending on the switching frequency and frequency (noise) content of the output, these cables may radiate ~1/2/4 GHz (full/half/quarter wave). Good layout and filtering are required. The regulator's choke can be a good source of radiated EMI. Consider shielded chokes along with good layout and, if necessary, external shielding and filtering. Flat-wire chokes have one of their legs connected to a longer wire segment due to their construction (see below). This longer wire can radiate. So, if you use such a choke don't connect this long-wire leg to the switching node. A switching regulator tends to pollute the input line. So the supply cables can radiate. A good input filtering is required.
H: How much can throughput of high throughput fizzbuzz be exceeded with special processor? A popular answer on code golf stack exchange achieves 52 Gbps on fizzbuzz program. I am curious if the limits of throughput can be exceeded by using specialised hardware, processor, co-processor. How would one approach this problem and is there any prior work on something similar? AI: Your question can't be definitively answered. It's definitely possible to improve performance with specialized hardware. A case study here is Bitcoin mining, where specialized ASICs have been developed. The cost will simply exponentially increase with target performance. As each new bottleneck or physical barrier is encountered, the complexity and cost to remove the barrier will be higher and higher. For Bitcoin there is a moving equilibrium where implementation cost equals revenue. For fizzbuzz there's no revenue, so it just depends how much money you want to throw away at it. The author of the post you linked has indicated that the bottleneck in that implementation is L2 cache write speed. So the first thing to throw at this implementation would be a faster L2 cache.
H: Why do I still have latches even though I include everything in the sensitivity list? The following is the code snippet which generates the inferred latch warning. signal SDA_OUT : std_logic; signal SDA_OUT_data : std_logic; signal SDA_OUT_start : std_logic; signal SDA_OUT_stop : std_logic; signal SDA_OUT_ack : std_logic; select_SDA: process(start_stop, SDA_OUT_start, SDA_OUT_data, SDA_OUT_stop, SDA_OUT_ack) begin SDA_OUT <= SDA_OUT_start when start_stop = "00" else SDA_OUT_data when start_stop = "10" else SDA_OUT_stop when start_stop = "01" else SDA_OUT_ack when start_stop = "11"; end process; The warning from Vivado is here. [Synth 8-327] inferring latch for variable 'SDA_OUT_reg' AI: Generally, latches are inferred when an if-else or case type of statement has a missing if-else or case somewhere. In other words, it's incomplete. I think you might need to check what possible values a "std_logic" defined variable may take. A quick check will reveal: type std_logic has 9 possible values (‘U’,’X’,’0’,’1’,’Z’,’W’,’L’,’H’,’-’) In your case you are assuming that start_stop will never take any values other than those two-bit patterns. This is not what the compiler thinks. You could try to fix it by replacing the last check with: SDA_OUT_ack when start_stop = others; (or whichever way others is to be used in this context to imply a check for ALL other possible values). This would then take into account all other possible values that start_stop could take, and should probably remove the inferred latch... NOTE: as Dave Tweed mentioned below, one could essentially remove the last condition, so that ALL other values take the default assignment: select_SDA: process(start_stop, SDA_OUT_start, SDA_OUT_data, SDA_OUT_stop, SDA_OUT_ack) begin SDA_OUT <= SDA_OUT_start when start_stop = "00" else SDA_OUT_data when start_stop = "10" else SDA_OUT_stop when start_stop = "01" else SDA_OUT_ack; end process; Give that a try.
H: What are PIO asm pindirs and how do they work? What precisely does "pindirs" accomplish? I initially believed it defined the pin direction, indicating whether it's set as an input or an output. However, in a Raspberry Pi forum (link) it appears that "pindirs" can not only determine the pin direction but also concurrently set the outputs high and low. So, does it not just set the pin direction, but also simultaneously set the pin to a high state, based on the discussions in the forum? set pins 3 set pindirs 3 [31] set pindirs 1 [31] set pindirs 2 [31] set pindirs 0 [31] loop: jmp loop AI: This technique is not specific to the RP2040, so it is described generically. In brief The direction register is selecting between a high when dir is out (caused by 1 in the data output register) or a low when dir is input (caused by the pulldown). This technique is used so that another circuit (often the same at the other end) can pull the line up when we are weakly pulling it down. Thus we can make a bidirectional circuit that can be driven from either end. Detailed Explanation As with pretty much any GPIO pin, there are a few things to consider: The direction If it's an input then Is a pullup enabled? Is a pulldown enabled? If an output, then What is the output data bit set to? set pindirs X does exactly what you expect: it sets the pins to outputs where the bit is 1. If it's an output, the pullups don't do anything. If it's an input, the data bit (pins) doesn't do anything. We are considering the case where there is no external circuit on the physical pins, pulldowns are enabled, and the pins register is set to 3, i.e. binary 11. When an output, the voltage on the pin is the value of pins, which is 1, so high. When an input, the voltage on the pin is set by the pulldown, so low. The four phases in your example are:

Pindirs  Dir1  Dir0  Pin1  Pin0  Pull1  Pull0  Volt1  Volt0
3        Out   Out   1     1     x      x      High   High
1        In    Out   x     1     Down   x      Low    High
2        Out   In    1     x     x      Down   High   Low
0        In    In    x     x     Down   Down   Low    Low

x means "Don't Care", which is to say, it doesn't affect the output voltage. Pin1 means the value of bit 1 of pins; Pin0 is bit 0. Pull1 means the configuration of the pullups/pulldowns for that pin. Volt1 means what we'd measure. The Pico PIO system is vastly more sophisticated than this, but that's not relevant to this question because the individual GPIO pins are conventional except that pullups and pulldowns are both available. Many IO devices don't have configurable pulldowns. The PIO system of the RP2040 is much more sophisticated than most, as it has built-in state machines which execute tiny programs. Linking to the documentation as it's a little tricky to find. Good article about Pico's PIO system Admantium Definition documentation of RP2040 PIO system chapter 3
H: Soldering paste for SMD on a laptop motherboard I want to replace a SOIC8 SMD EEPROM on my laptop's motherboard. I have both a hot air gun and a soldering iron. I bought tin-bismuth soldering paste (tin 42% - bismuth 58%), but then I saw in most SMD soldering tutorials that clear rosin-based flux is used instead. From my research, I came to the conclusion that, application-wise, it comes down to selecting a soldering temperature. I have a few questions: Can they be used interchangeably for my purpose? Why is flux sufficient to resolder a component on a motherboard? From what I saw on the internet it is used alone to clean soldering surfaces, but it must be used in combination with metal to perform soldering. What will it change to solder at the bismuth-tin temperature instead of the one needed for rosin-only soldering? AI: When you remove an IC, there will be solder left on the pads. You can often simply apply a little flux and solder the new part in using the solder that was left on the pads. I'd remove the old part, remove the existing solder with some desoldering braid, place the part, and solder it in by hand. It's only an SOIC-8. That's about the easiest SMD part to solder by hand. You can remove the IC using hot air or by hand (link to one of my blog posts.) Here's how you put the IC back on (link to another of my blog posts.) The bismuth solder just means your hot air doesn't have to be as hot to solder the new part on. You can use it if you like, or just use regular solder with your soldering iron.
H: Unit of integral windup reset parameter? What is the unit of the integral windup reset parameter? Is it the same as the unit of time, seconds? AI: There is an integral (reset) time parameter \$k_i\$, with units of time (typically seconds or minutes). There is a reset-rate parameter \$\frac{1}{k_i}\$, commonly quoted in resets (repeats) per minute. There is no parameter called "integral windup reset".
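For reference, one common textbook form of the PI controller (notation assumed, not taken from the answer above):

$$u(t) = K_p\left(e(t) + \frac{1}{T_i}\int_0^t e(\tau)\,\mathrm{d}\tau\right)$$

Here \$T_i\$ is the integral (reset) time with units of time, and its reciprocal \$1/T_i\$ is the reset rate; for example, \$T_i = 30\text{ s}\$ corresponds to 2 resets per minute.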
H: Does "heat" affect signal integrity? I have a PCB that is having signal integrity issues. This is a four-layer board; the top and bottom layers carry signals and power, and the inner two layers are a solid ground plane with no breaks in it. My SMPS seems to be causing the PCB FR4 material to heat up to around 65 degrees Celsius (when it is loaded). If I touch anywhere on the PCB it causes one of the data lines to invert its signal. Sometimes even binding a THT electrolytic capacitor so that it lies directly on the board causes issues. When the PCB is not loaded, I have no signal integrity issues, even if I touch the PCB or lay the capacitor on the board. Does heat (in this case at 65 degrees Celsius) cause signal integrity issues? How does it cause it and what can I do to solve this issue? N.B. This PCB is heatsinked through an aluminum enclosure. AI: The temperature of the PCB material (if it's below the glass transition temperature of the material, typically above 160 C, and below the maximum ambient operating temperature of the chips involved, typically 85 C or higher) is not likely to affect signal integrity directly. If you want to test whether temperature is the actual culprit you could run the circuit with a light load on the power converter but with hot air around the board (either from an air gun or a thermal chamber) to raise the temperature to 65 C. However, a hot PCB indicates the DC-DC converter is handling higher power, and therefore higher currents. There are a few ways this could affect the data signals, including:

- Higher currents in the converter produce higher magnetic fields which are more likely to interfere with data signals.
- If the inductor in the converter is saturating due to high current it will both produce even greater magnetic fields around the inductor, and likely produce more energy in higher harmonics of the converter's switching frequency, which is more likely to produce harmful interference.
- If the converter is failing to produce sufficient power for the circuit, it is likely to produce high ripple voltage on its output. If this output is powering the data-transmitting circuit, that could cause failures.
- At high power, the converter might produce high ripple voltage or sag on its input power rail. This could cause failure even in circuits that aren't powered by the converter. (Saturation or partial saturation of the converter's inductor could make this much worse.)

Frankly, direct electromagnetic interference with data signals at the rates of SPI and I2C interfaces is fairly unlikely. A failure mechanism related to power integrity is much more common in my experience.
H: LED Control Using SPDT Switch I am designing a circuit to control 2 LEDs: green for one position on an SPDT switch and red for the other position. The LEDs are rated: 12 Vdc, 20 mA. The source is a 12 Vdc constant supply. I have chosen two 592 ohm resistors to limit the current, as shown in the figure below: My question is: is it better to place the resistors before or after the LEDs, and will it affect the light emitted? AI: If your LEDs are really rated for 12 Volts, they will include a current-limiting circuit, so you don't need to add an external resistor. A "bare" LED will have a rated forward voltage that depends on the LED colour - 1.8 to 3 volts, or so - you do need to add a current-limiting resistor with these. The order of LED and resistor doesn't matter in this case.
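For a bare LED, the usual resistor sizing looks like this (forward voltage and current assumed for illustration):

$$R = \frac{V_{supply} - V_f}{I_f} = \frac{12\ \text{V} - 2\ \text{V}}{20\ \text{mA}} = 500\ \Omega$$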
H: Why are PPTC based fuses not used in power sockets? I live in the UK. The fuses that I have always seen are the glass fuses which look like this: When they blow they need to be replaced. They are present in all AC plugs. They could also be present in the wall sockets. I recently came across "polymeric positive temperature coefficient device (PPTC) resettable fuse". The device is also known as a multifuse or polyfuse or polyswitch. Why isn't the PPTC based resettable fuse used in all applications since it does not need to be replaced? I have never seen one of these yet so it makes me think that perhaps, these PPTC based fuse devices are actually not often used. AI: The PTC devices are fine for low voltage (up to about 60V, with some exceptions), low breaking current (up to some tens of amperes) applications. They are not good for applications with potentially high fault current, high voltage, or applications where 'leakage' in the off-state could cause problems. They also have higher losses than fuses (roughly double, according to Littelfuse). The leakage current of a tripped PTC device might be >100mA. The equivalent would be less if a higher voltage device was available, but likely not a safe level for human contact. Fault currents in mains circuits can be perhaps <100A to 10,000A depending on whether it's an industrial circuit or a home circuit. A typical 5x20mm fuse might be able to break 100A at 250VAC and 10kA at 125VAC. They will literally explode if called upon to break an industrial 240VAC/60Hz direct short. Generally, the designer of a product does not include the fuse with the expectation that it will require frequent replacement. Often it's a safety device that prevents fire or other damage when something else fails permanently. Similar to the one-time thermal fuse in many appliances that will permanently disable the power in case there is overheating that exceeds whatever 'softer' means exist to limit it (eg. a temperature sensor or microcontroller circuit fails and the applicance begins to heat to the point where the enclosure is softening). As far as fusing wall sockets, all the characteristics of the PTC resettable fuses are show-stoppers or disadvantages for that application except the resettability. For that, we have per-circuit circuit breakers in the 'fuse panel', at least in North America we do. In really old houses you can still find actual screw-in fuses like this:
H: Phase difference between voltage across a capacitor and the source in a low pass filter Consider I have a low pass filter like that in the figure. simulate this circuit – Schematic created using CircuitLab I have considered the voltage source to be \$V\$ and the voltage across the capacitor to be \$V_c\$. Since this is an AC source, we know that the reactance of the capacitor is \$X_C\$. I want to establish an equation expressing the phase difference between the voltage drop across the capacitor and the voltage source. Here, I shall start by considering: $$V_C=i_CX_C=\frac {i_C}{\omega C}$$ In order to calculate \$i_C\$, we can use the equation: $$i_C=\frac {dq}{dt}=C\frac{dV_C}{dt}$$ I have tried putting \$V_C=V-V_R\$ but have no idea what to do next. I couldn't calculate any derivative since the function seems bizarre to me. AI: Well, the transfer function of the circuit is given by: $$\underline{\mathscr{H}}\left(\text{j}\omega\right):=\frac{\displaystyle\underline{\text{V}}_{\space\text{o}}}{\displaystyle\underline{\text{V}}_{\space\text{i}}}=\frac{\displaystyle\frac{\displaystyle1}{\displaystyle\text{j}\omega\text{C}}}{\displaystyle\frac{\displaystyle1}{\displaystyle\text{j}\omega\text{C}}+\text{R}}=\frac{\displaystyle\frac{\displaystyle\text{j}\omega\text{C}}{\displaystyle\text{j}\omega\text{C}}}{\displaystyle\frac{\displaystyle\text{j}\omega\text{C}}{\displaystyle\text{j}\omega\text{C}}+\text{j}\omega\text{CR}}=\frac{\displaystyle1}{\displaystyle1+\text{j}\omega\text{CR}}\tag1$$ So, for the argument we get: $$ \begin{alignat*}{1} \arg\left(\space\underline{\mathscr{H}}\left(\text{j}\omega\right)\right)&=\arg\left(\frac{\displaystyle1}{\displaystyle1+\text{j}\omega\text{CR}}\right)\\ \\ &=\arg\left(1\right)-\arg\left(1+\text{j}\omega\text{CR}\right)\\ \\ &=0-\arctan\left(\frac{\displaystyle\omega\text{CR}}{\displaystyle1}\right)\\ \\ &=-\arctan\left(\omega\text{CR}\right) \end{alignat*}\tag2 $$
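As a quick sanity check of that result (component values assumed for illustration):

$$\varphi(\omega) = -\arctan\left(\omega RC\right) \;\Rightarrow\; \varphi\to 0^\circ \text{ for } \omega \ll \tfrac{1}{RC}, \quad \varphi = -45^\circ \text{ at } \omega = \tfrac{1}{RC}, \quad \varphi \to -90^\circ \text{ for } \omega \gg \tfrac{1}{RC}$$

For example, with \$R = 1\text{ k}\Omega\$ and \$C = 1\ \mu\text{F}\$, the corner is at \$f = 1/(2\pi RC) \approx 159\text{ Hz}\$, where the capacitor voltage lags the source by 45°.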
H: Audio switch using NMOS (low level signal) A common-emitter amplifier can include a bypass capacitor from the emitter to ground, resulting in considerable extra gain (1000uF in the picture). What happens if the bypass capacitor is switched "ON" and "OFF" using an N-channel MOSFET, as shown in the picture? Will the MOSFET produce any distortions to the audio signal? I've read that MOSFETs alone are not good switches for audio signals, particularly when voltage goes above 0.6V and the FET is OFF. But voltage here going through the bypass capacitor would be max. 200mV. PD: Spice simulations and breadboarding seem to work fine, but I'm still unsure. AI: This will work perfectly fine. As long as you don't make the MOSFET's body diode conduct with a reverse voltage greater than a few hundred mV, there won't be any difference between a simple NMOS transistor and a proper transmission gate / analog switch. In other words: Just keep the peak-to-peak amplitude of the signal at the BC549's emitter below 600mV or so (300mV in either direction). If you want the circuit to function with slightly larger signals as well, you could do something like this: simulate this circuit – Schematic created using CircuitLab This makes the circuit's biasing more robust as the common-emitter amplifier's gain is now 1 at DC (instead of 10), and it also decreases the voltage across the MOSFET, giving you more headroom before the body diode begins to conduct. If you can fit an additional MOSFET, you can also use this configuration to avoid the body diode problem entirely: simulate this circuit This switch configuration only requires the gate voltage to get significantly higher than the maximum signal voltage in order to turn it on, and lower than (or equal to) the signal voltage to turn it off. For example, if you have a signal that swings between 0V and 2V at the BJT's emitter, a gate voltage swing of 0V / 12V would be more than sufficient to cleanly turn the two-MOSFET switch on or off. Note, however, that you might need a stronger gate driving signal to avoid charge coupling from the floating source node into the gate. You could, for example, drive the gates hard to GND / VCC with a SPDT switch. If you do this, you should still add a pull-down resistor (i.e. 100k) to prevent the gates from floating during the switch transition. Additionally, you'll need to use a break-before-make switch. It's also quite important for this configuration that the position of the capacitor is swapped as shown, as this prevents negative voltages from developing at the two-MOSFET switch's terminals. Otherwise you'd need a negative gate voltage to turn it off reliably.
H: Is there a way to tell if I have MLV or ELV transformers driving my 12V 50W LEDs? Is there a way to tell if I have MLV or ELV transformers driving my 12V 50W LEDs? I want to dim a small row of ceiling downlights with 3-Way Lutron Maestro dimmers. The actual fixture is a Nora NL-301 downlight can in the ceiling, but I don't know what type of transformer is driving the low voltage. AI: ELV = Electronic Low Voltage MLV = Magnetic Low Voltage. The latter will be much heavier and somewhat larger and will have almost no components inside, just the transformer. The former type of 'transformer' is actually a switching power supply (not what we'd call a transformer here). This particular fixture comes in both types. I would say if it has the part indicated below (with that appearance) it is of the magnetic type: If it looks more like this: It is of the electronic type.
H: In the circuit of Fig. 3.49, current i1 is: (a) 4 A (b) 3 A (c) 2 A (d) 1 A I am just starting this subject and I need help with this question. If anyone could solve it, please tell me. I don't understand what to do with the nodal analysis. I am thinking of adding the 2 ohm and 3 ohm resistors in series, but I don't know whether that is valid. If I add them in series I get an answer, but can I add them in series even if a battery is between them? AI: There are three unknown values in the circuit: v, i1 and i2. To solve this, you need to formulate three equations. The two loops would give two equations. Apply KVL in each of these loops to get the two equations. Isn't the 2A current source in the middle equal to (i2 - i1)? There you go. You now have three equations with three unknowns. Solve these to get the values for i1, i2 and v.
H: What is this part from an LCD monitor power supply? A Samsung monitor (S32D850T) has this axial glass component on the power supply PCB. It was identified with a reference designator of “SA” and represented with a schematic icon on the back of the PCB that I did not recognize. What is it? AI: As commented by Tim Williams, the component designator "SA" is commonly used for "Sparkgap Arrestor".
H: Defining vector in Verilog Given the following code: module add16 (input [15:0] a,b, output[15:0] sum, output court); Are both input a and input b 16 bits long, or is only input a 16 bits long? AI: The format is: <direction> <type> [<packed size>] list,of,var,names The same principle applies to all variables. So in your example, as b has no direction/type/size definitions, it will take the same definitions as the port that came before it, a, hence both should be the same size and direction. Personally I avoid this syntax and prefer to specify each port's direction, type and size individually. I find it clearer and it avoids any ambiguity.
H: Which is the more important switching parameter in a MOSFET datasheet? Turn on/off/delay times or input/output/reverse capacitances? I see both capacitance and timing details mentioned in MOSFET datasheets (Ref:). Which one should be considered for comparing the MOSFET switching performance between 2 MOSFET part numbers? Note: I understand turn on/off/delay times are based on test circuit conditions as well (Rg, VGS, etc.) and are not an intrinsic property. AI: Which one should be considered for comparing the MOSFET switching performance between 2 MOSFET part numbers? Neither, if you are interested in a real circuit that switches a real load powered from a power supply that is reasonably close to the normal limit at which you would use this device. So, this device is rated (absolute maximum) for a \$V_{DS}\$ of 150 volts and a current of 100 amps but, if you look at the data sheet on page 3, you will see a figure called total gate charge. For this device it is typically 70 nC. Alongside that figure are some words describing the test. Those words describe the scenario of 75 volts across the device and, when activated, a load that limits the current to 100 amps. The gate voltage is also mentioned as rising from 0 volts to 10 volts. So, with 70 nC needed to raise the gate to 10 volts we can say that the effective input capacitance is 7 nF (Q = CV). But it could be as high as 9.3 nF. That's almost double the stated input capacitance. You then have to consider what your gate driver circuit can deliver in terms of current and, you might decide that it is equivalent to a 10 Ω source impedance. That 10 Ω and the 9.3 nF produce an RC time of 93 ns and I would say that is a decent benchmark figure for making comparisons with other MOSFETs. But, you have to read the words carefully and decide if the drain voltage and drain current scenarios are similar enough in different data sheets to make a valid judgement.
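Restating the answer's benchmark numerically (the 9.3 nF worst case follows from the maximum total gate charge, and the 10 Ω driver impedance is an assumed example):

$$C_{eff(typ)} = \frac{Q_{g(typ)}}{\Delta V_{GS}} = \frac{70\ \text{nC}}{10\ \text{V}} = 7\ \text{nF}, \qquad C_{eff(max)} \approx 9.3\ \text{nF}, \qquad \tau \approx R_{drv}\,C_{eff(max)} = 10\ \Omega \times 9.3\ \text{nF} \approx 93\ \text{ns}$$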
H: Under which situation must a signal be defined as a reg? For example, in this code. The answer is none of the signals must be declared as reg. So, if this is truly the case, then what is the use of this code? I think the question's ambiguity lies in must. I think that in this case, the signals r1, out_reg and out_but should be declared as a reg and w1, out should be declared as wire. module temp(...); ... assign w1 = a ^ b; always @ * begin r1 = a & b; out_reg = a | w1; out_bus = 5'd25; end assign out = r1; endmodule Or, this is just a question for test. AI: Under the old version of Verilog (IEEE Std 1364), continuous assignments using the assign keyword would typically be to signals declared as wire, whereas procedural assignments, inside an always block, for example, would be to signals declared as reg. This compiles: module temp( input a, b, output out ); wire w1; reg r1, out_reg, out_bus; assign w1 = a ^ b; always @ * begin r1 = a & b; out_reg = a | w1; out_bus = 5'd25; end assign out = r1; endmodule However, this is no longer necessary under the current version of Verilog known as SystemVerilog (IEEE Std 1800). You may still use wire and reg as above if you like, or you can use reg for all in your code. This compiles: module temp( input a, b, output out ); reg w1, r1, out_reg, out_bus; assign w1 = a ^ b; always @ * begin r1 = a & b; out_reg = a | w1; out_bus = 5'd25; end assign out = r1; endmodule The logic keyword introduced in SystemVerilog can be used in place of reg. This all depends on how well your simulation tool adheres to the Std.
H: High-Side Protection IC: what is the Vgs when IC is disabled? I want to use the LM5060 High-Side Protection IC, where you can add an external N-channel enhancement type MOSFET to turn it on/off. I will bias it with 36 V that I need for my PCB. I try to understand what will the Vgs value be when the IC is off/not-enabled but the 36 V are there. This will be my hookup: And I don't see anywhere mentioning what the gate pin voltage will be when the EN pin is low. I am afraid when the EN=0V, then that gate pin will bias the MOSFET's gate with 0 V, so I will need to use a MOSFET with \$ V_{GS}~ \pm36V \$ (which are expensive and hard to find for large current draws). I see the plots show the gate voltage reaching 0 V: While I assume under no circumstances the GATE pin should bias voltage below the ~Vin voltage, because a) it doesn't have to, the N-FET will turn off at VGS=0 V and b) it might go below the N-FET's VGs limit. I think the whole document assumes that all the Vgate values are measured in reference with the Vsource of the FET and not the GND. After all, in their LM5060EVAL Evaluation Board they use a Mosfet SUM40N10-30 with \$ V_{GS}= \pm20V \$ and they allow Vin up to 37 V. At the end of the day I will try the IC and find out. But I would like to know why, where is it mentioned in the datasheet (and possibly how) is the gate pin biasing the gate with what voltage when EN=0 V and Vin is high. AI: Short answer: Zero. The external gates are driven with current, not voltage. A charge-pump-driven current source (IGATE = 24uA to the top-left of the block diagram below) drives the output, and an internal zener limits the gate-source voltage to 16.8V. Smaller turn-on current allows the external MOSFET to be used as an inrush limiter by keeping the ohmic-region active for a longer time. To turn the external MOSFET off, the IC uses two independent current "sinks": One for normal off (IGATE-OFF = 2.2mA for quicker turn-off), One for fault off (IGATE-FLT = 80mA for even quicker turn-off). These can be clearly seen from the block diagram: I didn't read the entire datasheet but probably Normal OFF and/or Fault OFF switch(es) remain closed, making the GATE and OUT voltages around zero, therefore VGS of the external MOSFET zero.
H: Are differential line drivers the same as FDAs, or similar to baluns? How are they used differently? I'm trying to understand the difference between differential line drivers, FDAs and baluns. I know that FDAs and baluns can both be used to drive ADCs, and that baluns are usually AC-coupled and add less noise. But I don't know much about differential line drivers, for example the SN65LVDS1 from TI. They are LVDS drivers, but in terms of impedance, speed (frequency), etc., it seems they can also be used in RF applications... so what's the difference between them? AI: LVDS is a digital signaling standard; it can only transmit two states. Fully differential amplifiers are analog, able to transmit a continuous range of output voltages. Baluns perform no amplification or buffering/isolation, being passive devices. They only serve for impedance matching and common/differential mode conversion.
H: What is this comparator in the Arduino Uno Revision 3 doing? I was looking into the schematic of the Arduino Uno Rev3 particularly the built-in on board LED (connected to pin 13) and was curious about what was happening. Here you can see pin 5 of port B being connected to what appears to be a comparator which is in feedback with its own output. Why not just connect it via a 1K resistor to yellow LED L? AI: That would load the pin significantly if it was configured as an input. Using half of a dual op-amp as a buffer is a reasonable design choice, when you need the other half elsewhere. Might not be optimal in all cases, but it's not unreasonable. Another choice might be to use a 2N7002, with or without a series gate resistor and an LMV321 single op-amp for the other requirement.
H: 5V Relay, triggered with power on/off I am new to this forum and came across it trying to find a solution for what I thought would be an easy problem to solve. I work in custom AV installations and I was hoping to find an off-the-shelf solution, but either it doesn't exist or I am simply using the wrong terminology whilst searching for a product. We have to install a TV with a motorized bracket. I am trying to avoid the need for an additional remote control to operate the bracket, so I am trying to automate the process so that the bracket, which has a contact closure input, is triggered when the TV is turned on. I spoke to the manufacturer of the TV and they have confirmed that the 5V 1A USB output on the TV turns on/off with the TV and does not remain on when the TV is in standby. Being clueless, I hastily went out and bought two different 5V relays, thinking either would do the trick. I started testing them and only then realized I will need a secondary low-level trigger. https://www.switchelectronics.co.uk/products/5v-1-channel-relay-board-module and https://www.switchelectronics.co.uk/products/5v-1-channel-solid-state-relay-board I was hoping that a relay exists which changes its output state when it is connected/disconnected from its own power source. If anyone knows of anything 'ready to buy' that will do the job, that would be greatly appreciated! AI: The first board at least (I didn't look at the second) is "active low," so if you want the relay to be actuated anytime it's powered, just tie the logic input to USB ground. (If you tie it to VBUS, it will always be unactuated, either because it's unpowered or because the active-low control input is high = off). From the photo, the relay itself appears to be an SRD-05VDC-SL-C and there's a link to a proper datasheet. The power consumption is worth taking note of. It will pull about 90 mA (0.45 W) from the TV's USB port, which is well within the base USB specification of < 500 mA. Another thing to watch out for is the flyback voltage spike from the relay coil. I don't see proper schematics for the board, but the photo shows a diode, so hopefully that's a snubber. If not, you may need to add one externally, otherwise the voltage spike on powerdown can be high enough to damage the USB port. You can check this with an oscilloscope if you have one (most multimeters are not fast enough to catch it).
H: Residual current relay continuity Even though the leakage current relay is rated at 300 mA, it cannot provide human protection on its own in such a case. But if I make a circuit like this, with 270 mA of leakage already present, an extra 30 mA of leakage will trip the relay. simulate this circuit – Schematic created using CircuitLab Even if this circuit works, will it cause any damage to the leakage current relay? NOTE: The rest of the electrical installation is ignored. Also, it is assumed that the device operates with 270 mA of current. AI: The RCD should not be damaged by triggering at its design leakage trip point, as long as the interrupted current is within the design range. Most devices will have durability limits of 1000s of trips. As to "protection", you absolutely can't assume that a 300mA RCD will provide "protection" at the 30mA level by ensuring there is 270mA of background leakage. This is partly because of variability in the background leakage, and partly because the % error in the trip point will mean you can't be sure of 30mA "remaining" sensitivity.
H: Am I connecting these MOSFETs in parallel correctly? I am designing an electronic load for a battery, and I don't want to use a single MOSFET that carries the entire current and has to be bolted to a large heat sink. So, I thought about connecting 3 MOSFETs in parallel to distribute the current and, consequently, the total power. However, despite searching for information on the correct connection of MOSFETs in parallel, I'm not very sure if I'm doing it correctly. Here, I show you the design I came up with. Notes: All values are the final values, meaning that these are the values I would solder into the circuit. The op-amp's power supply is 5V. I_SENSE is an analog signal to a microcontroller to measure the current in the electronic load (it seems obvious, but I mention it anyway). The SET_VOUT signal has the wrong name; it should be SET_IOUT. AI: The most important thing to do is to split your source degeneration resistors, R1-R3. That way, if one FET is running a bit hot, it'll be less able to steal current from the other two. In order to get your current feedback, though, you'll need to isolate those nodes from each other -- that's the function of R7-R9. Note that typical power FETs simply aren't designed for sustained operation with high \$\mathrm{V_{DS}}\$ -- so be careful about applying this circuit, and derate the transistors heavily (or use power BJTs, and a base driver that can supply enough current). simulate this circuit – Schematic created using CircuitLab
H: Problems with choosing resistors for a common emitter - voltage divider bias amplifier circuit I'm designing a common emitter - voltage divider bias amplifier circuit like below: Requirements: \$|A_v| = 5\$ \$Z_{in} \geqslant 10k\Omega\$ \$P_{consumption} \leqslant 0.2W\$ \$V_{CC} = 15V\$ Procedure: Assuming \$I_C \approx I_E\$ We have: \$|A_v| = \frac{R_C}{r_e} = \frac{R_C}{\frac{26mV}{I_E}} = 5\$ \$\Leftrightarrow R_C = \frac{130mV}{I_C}\$ On the other hand, we have the DC loadline equation: \$I_C = \frac{V_{CC} - V_{CE}}{R_E+R_C} (1)\$ For optimal operation, we assume that the Q-point will be in the middle of the load line, or: \$V_{CE} = \frac{1}{2}V_{CEmax} = \frac{1}{2}V_{CC} = \frac{1}{2} 15V = 7.5V(2)\$ \$I_C = \frac{1}{2}I_{Cmax} = \frac{1}{2}\frac{V_{CC}}{R_C+R_E} = \frac{7.5}{R_C+R_E}(3)\$ From \$(1)\$, \$(2)\$, and \$(3)\$: \$\Rightarrow \frac{130mV}{R_C} = \frac{7.5}{R_E+R_C}\$ \$\Leftrightarrow \frac{R_E}{R_C} = \frac{737}{13}\$ This is an unrealistic ratio for \$R_E\$ and \$R_C\$ as I always see \$R_E\$ smaller than \$R_C\$. Was my calculation wrong, or was it my approach? AI: There's nothing wrong with your calculations and approach. The issue is with the design because of a relatively low voltage gain without any emitter degeneration. If we were talking about higher (e.g. 3-digit) voltage gains or higher current-demanding loads (relatively smaller load resistors) then we'd consider bypassing the emitter degeneration resistor. The emitter degeneration resistor (RE) then appears only at DC, providing local feedback and therefore setting the quiescent current \$I_C\$ (as you can see, the entire circuit forms a constant-current sink at DC). So if RE weren't bypassed (presumably) by C2, the gain equation would be $$ A_V\approx -\frac{R_C}{R_E} $$ and the collector resistance would therefore be greater than the emitter resistance. Since the emitter resistance is bypassed at AC, it ends up much greater than the collector resistance, ~57 times greater for your circuit. I'd remove C2.
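For illustration, one possible set of values that meets the requirements once C2 is removed (all component values and \$\beta = 100\$ are assumed here, not taken from the question):

$$I_C \approx 1\ \text{mA},\quad R_E = 1\ \text{k}\Omega,\quad R_C = 5.1\ \text{k}\Omega \;\Rightarrow\; |A_v| \approx \frac{R_C}{R_E + r_e} = \frac{5100}{1026} \approx 5,\qquad V_{CE} \approx 15 - 6.1 \approx 8.9\ \text{V}$$

$$R_1 = 120\ \text{k}\Omega,\; R_2 = 15\ \text{k}\Omega \;\Rightarrow\; Z_{in} \approx R_1 \parallel R_2 \parallel \beta\,(r_e + R_E) \approx 13.3\ \text{k}\Omega \parallel 103\ \text{k}\Omega \approx 11.8\ \text{k}\Omega \ge 10\ \text{k}\Omega$$

$$P \approx V_{CC}\,(I_C + I_{divider}) \approx 15\ \text{V} \times 1.1\ \text{mA} \approx 16\ \text{mW} \ll 0.2\ \text{W}$$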
H: Do I need pullup resistors for a voltage level shifter driven by a push-pull GPIO output? I need to shift a 3.3V, push-pull SPI bus from my STM32F3 controller up to 5V to drive a MAX7219 LED controller. To do this, I aim to use an NVT2022 level shifter, datasheet here. The first paragraph of chapter 7 states the following: The NVT2001/02 is ideal for use in applications where an open-drain driver is connected to the data I/Os. The NVT2001/02 can also be used in applications where a push-pull driver is connected to the data I/Os. In the typical application circuit underneath, the device is connected with pull-up resistors on both inputs and outputs, implying open-drain functionality. In my use case, I am driving the input pins with a push-pull signal from GPIO, which is why I assume that I don't need pull-up resistors on the input lines (pins A1 and A2) of the level shifters, but do I need them on the outputs (pins B1 and B2)? AI: With push/pull signals, you do not need pull-ups at the inputs. The NVT2002 has open-drain outputs, so you need pull-ups at the outputs. The NVT2002 is a clone of TI's LSF0102, which has an application note that describes it in more detail.
H: Why is this NPN transistor AND gate not working? I am trying to form an AND gate using 2 NPN transistors, but the LED lights up when I turn on the right switch, and the left switch doesn't make any change. Can someone please point out the problem? AI: I believe this is what you're trying to achieve: falstad simulation Edit: A Tinkercad simulation showing the same circuit can be found here. The discussions in the comments are follow-ups on OP's question and go into how to design your own CPU like Ben Eater's 8-bit CPU from TTL ICs (which is how I learned about what really happens on a very low level) and the downsides of using TTL vs CMOS: https://www.youtube.com/watch?v=HyznrdDSSGM&list=PLowKtXNTBypGqImE405J2565dvjafglHU Other great resources for learning CPU design are NandGame and Turing Complete
H: What is the difference between voltage mode control and current mode control for SMPS? What is the difference between voltage mode control and current mode control for SMPS? I also wanted to know what kind of modelling techniques are used to model these controls (i.e. average models, etc.)? AI: The difference is in how the control loops are configured. An oversimplified picture follows: In voltage-mode control, the voltage is sensed and compared to a reference, and this difference is applied as the control signal to the converter, like so: simulate this circuit – Schematic created using CircuitLab In current-mode control, the voltage is still sensed, but this error signal is applied as the input to an inner current-control loop instead: simulate this circuit The use of this inner current loop improves stability, though it does require additional circuitry to prevent subharmonic oscillation. The improved stability means that many, if not most, modern switching controllers use current-mode control. (aside: the current measured for current-mode control is usually either the inductor current or the primary switch current, not the output current as shown here. Simplifying the converter to a block labelled "converter" limits my options for positioning the sense resistor.) Either control method can be modelled by a number of different models, depending on how accurate you need your model to be.
H: How to design an I/V Converter for Arduino I'm new to the world of electronics and could really use your expertise. I'm attempting to create an I/V converter for a NOSHOK series 653 pressure sensor, and you can find the datasheet at this link: https://www.farnell.com/datasheets/3932164.pdf My sensor operates on a 4 mA to 20 mA 2-wire setup and is powered by 18V. I'm aiming to capture pressure information using an Arduino Nano, converting it to a range of 0 to 4.5V. To achieve this, I believe I need an operational amplifier (OPAMP). In my research, I came across the INA122 OPAMP, but I'm uncertain if it's the right choice for this project. Frankly, I'm completely lost when it comes to the circuit design. As for the Arduino code I've developed: const int sensorPin = A0; // Conversor analog pin float voltage; float mappedValue; // Value mapped to 0-4.5V range unsigned long previousMillis = 0; const long interval = 1000; // Reading interval in milliseconds (1second) void setup() { Serial.begin(9600); // Initialization of serial communication } void loop() { unsigned long currentMillis = millis(); // Get the current time if (currentMillis - previousMillis >= interval) { // Check if the interval has passed previousMillis = currentMillis; // Update previous time int sensorValue = analogRead(sensorPin); // Analog pin reading // Convert the read value to voltage (0-5V) voltage = sensorValue * (5.0 / 1023.0); // Map the read value to the range of 0-4.5V mappedValue = map(voltage, 0, 5, 0, 4.5); //Print the values to the serial monitor Serial.print("Raw Sensor Value: "); Serial.print(sensorValue); Serial.print("\tVoltage: "); Serial.print(voltage); Serial.print("V\tMapped Voltage: "); Serial.print(mappedValue); Serial.println("V"); } } If anyone could offer additional information, considerations, or examples that I should take into account, I'd greatly appreciate it. I'm eager to learn more. AI: For a simple setup, i would use this approach: NOTE: I don't fully understand why you want a 0-4.5V signal range? simulate this circuit – Schematic created using CircuitLab NOTE: This circuit assumes, that one side of your current loop can be GND referenced and safely connected to the same potential as your PC-GND and therefore earth! The sensor you linked, can be used in such arrangement if you can connect your e.g. 24V supply to the same GND as your PC is on (e.g. in a Test-Setup)! (1) The 250Ohm resistor transform the 4-20mA signal into a 1-5V signal. This signal can then be safely sampled by your arduino, as its max range is 0-5V. (2) The code must (deduced from your example) print the current sensor value in a loop. I added a 0-100% and a 0-10.000psig output to reflect the 4-20mA range and the current pressure. #define PINMAP_ADC A0 /* use ADC0 to sample */ #define INTERVALL_MS 1000 /* 1s between samples */ #define SENSOR_FSR 100 /* Pressure in [psig] per % of 4-20mA Range. 
Example is 10,000 psig FSR */ unsigned long previousMillis = 0; void setup() { //Init Serial with 9k6 8n1 Serial.begin(9600); //Check if Sensor connected //Check time to ensure interval //If expired, update current time while( (float)(analogRead(PINMAP_ADC)* (5.0 / 1023.0)) < 1.0 ){ unsigned long currentMillis = millis(); if (currentMillis - previousMillis >= INTERVALL_MS) { previousMillis = currentMillis; Serial.println("Sensor not connected...."); } } //Print the CSV header if sensor is ready Serial.println("Raw_ADC_Value[cnts,0-1023];Voltage[V,0-5];Current_FSR[%,0-100];Pressure[psig,0-10.000];SampleValid[bool,0-1]"); } void loop() { //Check time to ensure interval //If expired, update current time unsigned long currentMillis = millis(); if (currentMillis - previousMillis >= INTERVALL_MS) { previousMillis = currentMillis; //Read in Sensor data ranging 0...1023 //Scale it to raw Voltage (0...5V) //Scale it to % (4-20mA/1-5V Range) //Scale it to pressure [psig] int rawAdcCnts = analogRead(PINMAP_ADC); //0...1023 float rawAdcVoltage = rawAdcCnts * (5.0 / 1023.0); //0...5 float scaledIPercen = ( rawAdcVoltage - 1 )*25.0; //0...100 float scaledIBar = scaledIPercen * SENSOR_FSR; //0...FSR //Check if sensor is connected //and does not have an error if ( rawAdcVoltage >= 1.0 ){ //Print the values to the serial monitor Serial.print(rawAdcCnts); Serial.print(";"); Serial.print(rawAdcVoltage); Serial.print(";"); Serial.print(scaledIPercen); Serial.print(";"); Serial.print(scaledIBar); Serial.print(";"); Serial.println("1"); } else { Serial.print(";"); Serial.print(rawAdcVoltage); Serial.print(";;;"); Serial.println("0"); } } //Idle } NOTE: Due to the big 1000 ms delay between samples, this code is not very useful for dynamic monitoring applications. You could add a min/max per sample interval. Also, due to the low communication speed and the use of ASCII string output, the current code is extremely limited in the maximum sample rate. TIPP: If you want to do dynamic analysis as well (e.g. log data with high frequency to a CSV and analyse it later on), you can increase the effective communication bandwidth by altering the baud rate and sending a structured data format, which can then be "interpreted" by a custom terminal application (have a look at C#; such applications with a minimal GUI are done in ~100 lines of code). As my morning lecture was boring, I designed a complete solution for your problem and simulated it. (1) The 5V rail provided by the Arduino is LC-filtered to provide a VCC rail clean of digital noise. (2) The input current is scaled using a 200 Ohm resistor (see the part number). After scaling, the input is TVS protected and buffered (U3). This makes the input safer for industrial applications. Note: The input is not reverse-polarity safe! (3) This now-buffered input is used to derive I>20mA and I<4mA digital outputs via U1/U2. These can be used for error detection but can also be removed from the design. Note: If you are using a quad op-amp, I would include it. (4) Then, U4 is used for the equation Vo=Nx(V1-V0). The 4-20mA input range is used to derive the output VOut = 0.3125V/mA x ([4-20mA]-4mA). Hence, the nominal 16mA FSR is projected onto a 5V FSR which can be sampled by the ADC directly. Note: You should play with the resistor values a little, as I just did some "in-my-head calculations". Also, you can add RC low-pass filters on the input or delays/hysteresis on the digital outputs.
As I said, here is a little guide for how to speed up your data transmission: //Declare a struct prototype and //the associated typedef to store the //sampling result. Also provide a memory //instance for use. struct DataStruct{ int RawAdcCnts; float AdcVoltage; float ScaledIPercent; float ScaledIBar; }; typedef struct DataStruct t_DataStruct; t_DataStruct dataStructToUse; void loop(){ //Instead of directly printing the data //populate the struct first. //Then send the raw struct via UART. dataStructToUse.RawAdcCnts = ?; dataStructToUse.AdcVoltage = ?; dataStructToUse.ScaledIPercent = ?; dataStructToUse.ScaledIBar = ?; //Send data struct by casting the pointer Serial.write((char*)(&dataStructToUse), sizeof(dataStructToUse)); //NOTE: You'll need a desktop application //of some sort to "recover" the binary data //and to write it to a CSV. } (1) Basically, you declare a struct (which is a "range of sequential memory slots" in your MCU) and populate it with the data sampled. Instead of sending "strings that say something: + value" you'll send only the binary representation of your data contained in the struct. (2) For the example above, this is 3x4 bytes (float) + 2 bytes (int) = 14 bytes. When you're sending a float with the value of 4.32458752 via Serial.print(), each digit is sent as a char - therefore as a single byte. So the number would use 10 bytes (9 digits plus the '.'). So it's far less efficient. (3) However, you'll need a desktop application (C# works well and is simple) to "decode" the binary data and print out a string into a CSV.
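If you would rather not write a C# tool, a minimal Python sketch of such a decoder could look like the one below. Treat it as an illustration only: the port name, baud rate and output file name are assumptions, it assumes the 14-byte little-endian AVR layout above (int16 plus three float32), and it does no frame synchronisation, so a dropped byte will misalign the stream.

# Minimal host-side decoder for the 14-byte struct above (sketch, not production code).
# Assumes pyserial is installed and that "COM3" / 9600 baud match your setup.
import struct
import serial  # pyserial

RECORD = struct.Struct("<hfff")  # int16 RawAdcCnts, float AdcVoltage, ScaledIPercent, ScaledIBar

with serial.Serial("COM3", 9600, timeout=2) as port, open("log.csv", "w") as out:
    out.write("RawAdcCnts;AdcVoltage;ScaledIPercent;ScaledIBar\n")
    while True:
        raw = port.read(RECORD.size)          # blocking read of one record
        if len(raw) < RECORD.size:
            continue                          # timeout / partial record, try again
        cnts, volts, percent, pressure = RECORD.unpack(raw)
        out.write(f"{cnts};{volts:.3f};{percent:.1f};{pressure:.0f}\n")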
H: Custom buck converter putting out 1V instead of 5V I "designed" a simple buck converter, which is basically a clone of the 5V reference design in the TPS6293x datasheet. Unfortunately, my converter is only putting out about 1V (closer to 0.95V). This voltage is independent of the input voltage, and I've reproduced it on two separate boards. I used a multimeter to verify the two resistors in the voltage divider are accurate. I'm unsure what else to check. The voltage being regulated, just at the wrong level, makes me think something is wrong with the resistor divider, but the values match the example reference design almost exactly. Here is my schematic and board layout: AI: C15 isn't connected properly. It should go between the BST (pin-6) and SW (pin-5) pins. This is the bootstrap capacitor which provides the level-shifted voltage necessary to drive the high-side transistor.
H: Output-Z (electret) higher than Input-Z (preamp) What happens if an electret microphone capsule (JFET in cartridge) is connected to a preamp with considerably lower input impedance than the capsule's output impedance itself? ECM output Z: 2.2K Preamp input Z: 680 ohm The immediate effect will be attenuation by overload obviously, which is not an issue in my case but rather desired – I'm trying to design a preamp with different gain stages and a wide range. But will the overload affect the capsule's frequency response, or produce predictable distortions? I've read this cannot be done with ribbon microphones for example, or passive capsules in general. But since (these) ECMs are buffered with a JFET, I'm in doubt whether attenuation is the only effect. AI: Your High-Z microphone may struggle to provide the current needed for the Low-Z preamp, which in turn results in a smaller signal (as you mentioned). Ideally, for good amplification you want a Low-Z output to a High-Z input (and matched impedances for best power transfer when hooking up a speaker at the final amp stage). Example for reference: simulate this circuit – Schematic created using CircuitLab To answer your question, in short: Check the ECM's datasheet for its capabilities. My personal opinion is that it shouldn't affect any frequency response characteristics. If you want to be extra sure, you can always match impedances using a T or PI matching network, or have a simple common emitter amplifier before your lower impedance amplifier.
H: What does "strap input" mean as applied to the DS90UB960WRTDRQ1? I am using a DS90UB960WRTDRQ1 in my design. In the pinout functions the definitions of the I/O pins are given as shown below: What does "strap input" mean? What is the difference between a strap input and a general input? AI: Don't focus on pin being called a strap. Look what the strap pins do in the table. IDX and MODE are the only pins called strap pins, and these select the I2C address and operation mode of the chip when it starts up. The word could be changed from strap pin to configuration pin. You can also use find feature of your PDF reader to search for the word "strap" and you find more info on what strap pins are, what they do, and how to use them. Many chips call these initial mode and configuration pins as straps. It may also be related to "bootstrapping" - there might be a MCU hidden inside. The "strap" might relate to the fact that you need to "tie" the pin to a certain logic state, or in this case, as they are analog input pins, to a certain voltage level (within tolerance) to start the chip in the operating mode you want.
H: CE certification for changing USB port I have a CE certified product that uses a micro-USB port. In line with EU requirements that all products must use a USB-C port, do we need to apply for a new CE certification if we change only the USB port? AI: Yes, you must. The certification is no longer valid if you modify anything on the product. CE regulations require you to keep a technical file on the product at the disposal of the inspection authorities in the EU. The product cannot be modified because it would be different from the technical file.
H: Why is the output of this LM324 op-amp circuit 0V? OrCAD/PSpice question I'm trying to make a function generator using three op-amps, specifically the LM324. Right now I'm trying to make the first square wave output using a DC input: My OrCAD simulation's output comes out as a constant zero, no matter what I do: What did I do wrong? This is the full circuit I am trying to make: This is my attempt at it. Since my op-amp outputs keep coming out as 0, I cannot seem to produce any wave function at all: I feel like if I can just get the first op-amp result going then the rest of my op-amps will produce the desired graphs as well. (Unless I'm missing something.) AI: It seems that the power supplies are not "wired" (no dots?). Your timebase is also too short. It should be at least 300 ms. With my simulator, it starts promptly (both ways). Here is the simulation of the circuit you are trying to make ... Change of one resistor: R4 (100k -> 10k).
H: Use nodal analysis to find Vo in the circuit of Fig. 3.72 I have been doing this problem with nodal analysis. As nodal analysis says, when a voltage source lies between two nodes, make a supernode. But I cannot do KVL for the supernode. Please help me with this question. My attempt: (30-3v0/60)+3 = (v0/30)+(v1/60), where I applied the supernode. AI: Call the bottom node ground. Then the node at the intersection of those three resistors is \$v_o\$. Call the node with the \$3\:\text{A}\$ current source entering it as \$v_x\$. Then in super-node form, you may write it as: $$\begin{align*} \frac{v_o-0\:\text{V}}{R_1}+\frac{v_o-V_1}{R_2}+\frac{v_o-\left(v_x+3\cdot v_o\right)}{R_3}&=0\:\text{A} \\\\ \frac{\left(v_x+3\cdot v_o\right)-v_o}{R_3}+\frac{v_x-0\:\text{V}}{R_4}&=I_1 \end{align*}$$ But you could just as well have called the (+) node of the dependent source as \$v_y\$ and then, recognizing that \$v_y=v_x+3\cdot v_o\$, you may instead have written: $$\begin{align*} \frac{v_o-0\:\text{V}}{R_1}+\frac{v_o-V_1}{R_2}+\frac{v_o-v_y}{R_3}&=0\:\text{A} \\\\ \frac{v_y-v_o}{R_3}+\frac{v_x-0\:\text{V}}{R_4}&=I_1 \end{align*}$$ Exact same thing. Or you could swap the series resistor and the dependent source, so that there would be a dependent source of \$v_o-3\cdot v_o=-2\cdot v_o\$ for the super-node. Then you'd write: $$\begin{align*} \frac{v_o-0\:\text{V}}{R_1}+\frac{v_o-V_1}{R_2}+\frac{\left(-2\cdot v_o\right)-v_x}{R_3}&=0\:\text{A} \\\\ \frac{v_x-\left(-2\cdot v_o\right)}{R_3}+\frac{v_x-0\:\text{V}}{R_4}&=I_1 \end{align*}$$ Again, same thing. Just different ways to see the problem. Note: I've used \$R_1=30\:\Omega\$, \$R_2=60\:\Omega\$, \$R_3=30\:\Omega\$, and \$R_4=60\:\Omega\$, with \$R_1\$ being the left-most resistor and \$R_4\$ being the right-most resistor. \$I_1=3\:\text{A}\$ is your current source.
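If you want to double-check the algebra numerically, the first formulation can be handed to a symbolic solver. A minimal SymPy sketch (illustrative only; \$V_1\$ is kept symbolic because its value is not restated above):

# Sketch: solving the first super-node formulation with SymPy.
# R1=30, R2=60, R3=30, R4=60 ohm and I1=3 A as stated; V1 stays symbolic.
from sympy import symbols, Eq, solve

vo, vx, V1 = symbols("v_o v_x V_1")
R1, R2, R3, R4, I1 = 30, 60, 30, 60, 3

eq1 = Eq(vo/R1 + (vo - V1)/R2 + (vo - (vx + 3*vo))/R3, 0)
eq2 = Eq(((vx + 3*vo) - vo)/R3 + vx/R4, I1)

print(solve([eq1, eq2], [vo, vx]))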
H: Why I am getting one clock cycle delay in Verilog case statement? I have a Verilog code here: module VendingMachine( input clk, input reset, input [1:0] coins , output reg serving, output reg [1:0] state ); parameter [1:0] WAIT = 3'b10; parameter [1:0] READY = 3'b01; parameter [1:0] SERVE = 3'b11; always @(posedge clk or posedge reset) begin if (reset) begin state <= READY; serving <= 0; end else begin case(state) READY: begin if (coins == 10) state <= SERVE; else if (coins == 01) state <= WAIT; end WAIT: begin if (coins == 01 || coins == 10) state <= SERVE; else state <= WAIT; end SERVE: begin serving <= 1; state <= READY; end default: state <= WAIT; endcase end end endmodule The output of the run is as follows: 0 clock=0 coins=xx state=01 serving=0 50 clock=1 coins=xx state=01 serving=0 100 clock=0 coins=xx state=01 serving=0 150 clock=1 coins=xx state=01 serving=0 200 clock=0 coins=01 state=01 serving=0 250 clock=1 coins=01 state=10 serving=0 300 clock=0 coins=01 state=10 serving=0 350 clock=1 coins=01 state=11 serving=0 400 clock=0 coins=01 state=11 serving=0 450 clock=1 coins=01 state=01 serving=1 500 clock=0 coins=01 state=01 serving=1 I was expecting to get the value of serving to change from zero to one when the value of state changed from 10 to 11, but there seems to be one complete clock cycle delay before the value of serving changes. AI: At the positive edge of clk you change from WAIT state to SERVE state. You don't change the serving signal. On the next clock cycle whilst in SERVE state, you change the serving signal to 1. Notice how these two things happen in different clock periods? Essentially the value of the state machine itself always changes one clock before anything in the corresponding case statement happens. As an aside you've written coins == 10 (decimal 10), instead of coins == 2'b10 (two bit binary number 10, decimal 2). This works by sheer chance, as decimal 10 truncated to two bits equals 2.
H: Is the Gray code unique? After googling, I found that a Gray code is not unique and the only requirement for a sequence of binary strings to be a Gray code is that two adjacent binary strings differ by only one bit. Can we consider 010 → 110 → 100 → 000 to be a Gray code sequence? If so, it seems that many sequences comply with the requirement to be a Gray code, so when we apply a Gray code in real life, such as for error correction, can we use our own Gray code (such as defining a sequence of my choosing that meets the requirement of being a Gray code)? AI: Your example is a code that only changes in a single position between successive elements. But: It's not a well-chosen code, especially not one found as a binary-reflected code, the method described in Gray's patent for constructing such codes, because it uses 3 bits to encode only four states. That's superfluous: 01, 11, 10, 00 would have done the same, and having more bits doesn't help. I'll go and say that if you want to Gray-encode a sequence of \$2^n\$ values, a Gray code must consist of words of exactly \$n\$ bits in length, otherwise the encoding is wasteful. Then, Gray refers to binary-reflected codes in his patent. Let us look at these: by design, binary-reflected codes have a Hamming distance of 1 not only between successive values, but also between the first and the last element. I'd require that property from any code you're calling Gray Code. Transistor's answer's rotational encoding disk nicely illustrates why: only these can encode things that happen in cycles (angles, phases) with the desired 1-bit difference between successive values. Therefore, any cyclic shift of a binary-reflected code still has the Hamming distance property you want – neighbors differing only in a single bit. I'm willing to call these shifts of binary-reflected codes "Gray Codes" as well. So, if you construct one \$2^n\$-length Gray code, you automatically get \$n\$ Gray codes, as there's inherently \$n\$ positions by which you can shift the code. So, yes, there are a lot of them – since we can always construct a binary-reflected code for any word length \$n\in\mathbb N\$, and due to the shifting, there are at least \$n\$ different Gray Codes. There are actually more! You can choose a Gray Code according to your needs. Due to shift invariance as described above, you can freely choose any sequence element from a Gray Code you find as a starting point – handy if you don't want to start with the all-0 code word, for example. But there are also opportunities to change properties. Binary-reflected codes have bad properties if you want, for example, to send them through a parallel data connection, where the first physical line transports the first bit, the second line the second bit, and so on. Look at the disk in Transistor's answer: the outermost bit toggles every two code elements, the next-inner every four, and so on. Therefore, the physical line to transport the outermost bit would need roughly \$2^n\$ times the bandwidth of the innermost one! Terrible! Different kinds of cables for different bits, with different delays? That is very annoying to implement. So, balanced Gray Codes spread the bit changes more evenly across the bit positions. There's also at least one set of \$n\$ such codes that you can find (again, finding one means finding \$n\$ that are just cyclic shifts of it) for \$n\$-bit words with \$n=2^m,\quad m\in\mathbb N\$, and because binary-reflected codes (and their shifts) are by definition not balanced and hence form a disjoint set, we've now already found at least \$2n\$ different Gray Codes for power-of-two word lengths. So, yes, you get to choose your Gray code.
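If you want to experiment, here is a minimal Python sketch (illustrative only) that builds a binary-reflected code with the classic k XOR (k >> 1) construction and checks the two properties argued for above:

# Build an n-bit binary-reflected Gray code and verify:
# adjacent words differ in one bit, and so do the first and last (cyclic closure).
def gray(n):
    # Word k of the code is k XOR (k >> 1).
    return [k ^ (k >> 1) for k in range(2 ** n)]

def hamming(a, b):
    return bin(a ^ b).count("1")

n = 3
code = gray(n)
print([format(w, f"0{n}b") for w in code])
assert all(hamming(code[i], code[i + 1]) == 1 for i in range(len(code) - 1))
assert hamming(code[0], code[-1]) == 1   # cyclic property needed for encoder disks
print("binary-reflected code of length", 2 ** n, "is a valid (cyclic) Gray code")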
H: I2S communication WS line function I was reading the official I2S documentation from Philips and I have some doubts about its functioning (I'm new to this protocol, coming from PDM). The WS line is used to select either the left or right audio channel. What does this mean in practice? Does it refer to the channels of a single microphone or of two microphones in stereo configuration? If the microphones are in stereo configuration, how can I assign the left channel to one of the two and the right to the other? AI: The I2S protocol can transfer 2 channels, so WS selects which one is currently being transmitted. It does not matter what the channels are – the left and right of a stereo signal, or two unrelated mono channels. Also, I2S generally uses PCM, so it is not PDM. If you have two mono microphones connected on the same I2S bus, then they must somehow know which one is which. They might have a select pin, or you may need to buy two different microphone parts that are factory-configured as left and right.
H: Buck converter has low frequency ripple with high amplitude I "designed" a simple buck converter, which is basically a clone of the 5V reference design in the TPS6293x datasheet. The converter is outputting the correct voltage on average, but has quite a lot of ripple. The ripple frequency and voltage depends on the input voltage to the buck converter, but seems to vary between 30-120Hz and 100-300mV. This is with the buck converter unloaded. If I add a load, the ripple voltage stays about the same, but the frequency shoots up into the dozens of kHz. There is no detectable ripple at the input to the buck converter. Below is the schematic, board layout, and o-scope readings at various input voltages. Note that the board has a bodge due to the schematic initially being wrong. AI: This is "PFM" mode, or pulse frequency modulation. At light loads, instead of keeping the frequency constant, the converter enters a hysteretic mode where it switches only when the output voltage falls below some threshold. Note fig 10-13 in the datasheet: If the load is very light it can take a long time for the output voltage to fall back below the PFM threshold and therefore the switching frequency can get very low. The frequency and amplitude of the output ripple are dependent on the load, inductor, and output capacitance. The output ripple is typically higher in this mode as well. The upside is that the efficiency is greatly improved at light loads. The downsides you have noticed- Variable switching frequency that can sometimes cause audible noise, and higher output ripple. As @TQQQ mentioned a feedforward cap in parallel with R12 may help reduce the ripple, as will adding some minimum load. If this operation mode is a problem for you, you can replace your controller with the TPS62933F which always operates in forced continuous conduction fixed-frequency mode.
H: Simple phototransistor circuit to turn on P-channel MOSFET I'm trying to fully understand how phototransistors work and was wondering if this circuit would work to turn on a P-channel MOSFET when the phototransistor isn't detecting light. This is powered by a 3.7 V lithium battery and will be driving ~125 mA of LEDs for a night light. Inspired by the circuit here. Phototransistor - KDT00030ATR P-channel MOSFET - NTA4151PT1G [Edited per comments below.] AI: The circuit looks correct in principle. But you missed some details: simulate this circuit – Schematic created using CircuitLab Your thinking is: When there's no light the Vgs of the MOSFET is -3.7 (the gate voltage is zero and the source voltage is 3.7V) which is sufficiently lower than the threshold (-0.5 ~ -1.2V). When the phototransistor gets light it'll draw current from 3V7 (sourcing towards R18), so the gate voltage will increase and finally turn the MOSFET off. However, since the threshold voltage can be as high as -0.5V, the gate voltage (the voltage across R18) should approach 3.0 ~ 3.3V (or above) to turn the MOSFET off. So the light current should be at least 0.6 mA. If you check the datasheet, you'll see the light current information: The light sensor seems to be colour-sensitive: The light current is relatively small for cool/daylight white light (0.25 mA for cool/daylight white, 1 mA for warm white). 6500K 1000 lux: If you can't imagine what 1000 lux of light intensity is, here's a reference: A T35 35W 6500K fluorescent tube's luminous flux is about 3000 lumens. About 1.7 metres of distance, you'll get ~1000 lux: $$ I_V = \frac{\Phi_V}{d^2}=\frac{3000 \ \text{lm}}{2.89 \ \text{m}^2}\approx 1040 \ \text{lx} $$ Now let's take 0.22 mA as a reference. Under 1000 lux of light intensity from a 6500K source, the voltage drop across R18 will be around Vg = 5k9 x 0.22m = 1.3V, making the Vgs of the MOSFET Vgs = 1.3 - 3.7 = -2.4V which is still more than enough to keep the MOSFET on because it's way beyond the threshold. So the MOSFET will NOT turn off. To get 0.6 mA to turn the MOSFET off the light intensity should be around 2000 lux. For the same light source (6500K T35 35W fluorescent tube), the distance between the tube and the sensor should be no longer than 1.2 metres to turn the MOSFET off. 3500K 1000 lux: If your light source's CCT (colour temperature) is about 3000K (warm white e.g. ordinary incandescent bulb) then the light current at 1000 lux of light intensity (60W incandescent bulb, ~1 metre distance) will be around 1 mA. This will be enough to turn the MOSFET off because the voltage across R18 will hit 3.7V. The requirement was at least 0.6 mA. To get 0.6 mA under a 3000K light source, the light intensity should be ~700 lux. For the same bulb (~900 lumens), the distance between the bulb and the sensor should be no longer than 1.1 metres to turn the MOSFET off. As you can see, using the phototransistor with the configuration in your question appears to be light-dependent. To decrease the light dependency: Increase R18. This is the simplest solution. Take the dark current (leakage) and your light threshold into account, and select a high resistance so the MOSFET remains on under the dark. Use a current amplifier to drive the MOSFET (This configuration can be useful for low-light environments and higher source-sensor light angles): simulate this circuit
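If you want to play with the numbers yourself, the arithmetic above reduces to a few lines. This is a rough sketch only: R18 and the battery voltage are taken from the circuit discussed above, and the light-current figures are read off the datasheet curves, so treat them as approximate.

# Rough numbers from the discussion: 3.7 V battery, R18 = 5.9 kohm,
# worst-case (least negative) MOSFET threshold of about -0.5 V.
VBAT, R18, VTH = 3.7, 5900.0, -0.5

v_gate_min = VBAT + VTH          # ~3.2 V across R18 just reaches the threshold
i_off = 3.3 / R18                # light current for ~3.3 V across R18 (small margin)
print(f"gate voltage needed: {v_gate_min:.1f} V, light current to switch off: {i_off*1e3:.2f} mA")
# -> ~0.56 mA, i.e. "at least 0.6 mA"

# The case worked in the answer: 0.22 mA light current (6500 K source, 1000 lux)
vgs = 0.22e-3 * R18 - VBAT
print(f"Vgs at 0.22 mA: {vgs:.2f} V")  # ~ -2.4 V: well past threshold, FET stays on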
H: How to measure flux in transformer in LTspice? I wish to check the flux through a transformer core in my LTspice simulation. I have set up a push-pull converter simulation using a transformer with a turns ratio of 1 : 1.1, a supply voltage of 12 V and two FETs pulling either side low for almost 1 µs in turn. There is a small dead time of 50 ns, where neither of the FETs is conducting. In order to determine the flux in the core, I know of two tentative approaches: Determine the product of inductance \$L\$ and current \$I\$ for each winding and sum them: $$\Phi_1(t)=\sum_i^\text{windings}{L_i\cdot I_i(t)}$$ Measure the volt-second integral normalized by the turns ratio \$N\$ of the windings and sum them: $$\Phi_2(t)=\sum_i^\text{windings}\int_0^t{\frac{V_i(\tau)}{N_i} \,\mathrm{d}\tau}$$ So, I set up two bv sources in LTspice to calculate each of the two expressions: I know that the absolute turns count is missing in the second expression, only the turns ratio is respected. So it should be wrong by a constant scale factor. However, what I see is that both expressions yield qualitatively different results, which are plotted below. Question Which of these two expressions (if any) is correct and why is the other one (or both) wrong? AI: The flux in the transformer core can be taken as the integral (over time) of the voltage at the inner winding, divided by the number of its turns. So the expression should use the voltage of only one of the windings, not a sum over all of them.
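If you prefer to post-process an exported waveform instead of adding another behavioural source, the same integral can be done numerically. A minimal sketch (assuming a two-column CSV export of time and primary voltage from LTspice; the file name and the turns count N_PRI are placeholders):

# Flux from the exported primary-winding voltage: phi(t) = (1/N) * integral(V dt).
import numpy as np

N_PRI = 10  # placeholder: your actual primary turns count
t, v = np.loadtxt("primary_voltage.csv", delimiter=",", unpack=True)

# cumulative trapezoidal integral of v over t, divided by the turns count
flux = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(t)))) / N_PRI
print("peak core flux [Wb]:", np.abs(flux).max())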
H: ESP32-WROOM-32 and WLED not communicating correctly to WS2812B LED strips I am trying to follow Youtube tutorials to run WLED (a program) on an ESP32-WROOM-32 board I purchased, to control addressable LED strips (WS2812B). I bought a board that does not exactly match the tutorials I've found, so I'm having to interpret the information appropriately, but I think I might be failing at this step. The problem: When I connect power and ESP32-WROOM-32 to the strip, the LEDs light up but does not respond to controls. With the strips length set to 30, 8 LEDs light up, 7 white and last one green. If I increase the length to 300 (60/m * 5m), all the LEDs light up, but they seem to just flicker randomly, as if the pin is just echoing the traffic on the COM port and the LEDs are displaying bits of information. Every time I send a change to the board the flicker activity increases briefly. Setup process: I had to download the virtual COM port drivers to get the ESP32 to show up in Win 11, but once I figured that out, I was able to flash the WLED app onto the ESP32. WiFi is setup, connected and shows up when I run the WLED app on my phone. The ESP32 is still being powered via the USB micro port, and the LED strip is being run off a 5 V PSU via two extra included power injection wires on the strip. Only the data line is hooked up to the ESP32, currently it's set up to communicate via pin 16, but I also tried it on pin 2 and got the same results (once WLED was updated with the matching pin). Here is the LED strip that I'm using, and this is the ESP32-WROOM-32 Photos of the setup: My ESP32-WROOM-32 https://photos.app.goo.gl/fhgxVe7dYariZrLh7 2102 chip I used to ID the driver https://photos.app.goo.gl/oW256YpDvMkrzt1K6 How it's wired https://photos.app.goo.gl/utnvGa2UMFpCBtFC7 A photo of jump lead on pin 16, showing the pinout on bottom of board https://photos.app.goo.gl/NZtb3RS9PfuXDZUs5 Really it's on pin 16, right? Am I doing this right? https://photos.app.goo.gl/HTNPdqBUXvL9ysm46 8 Lit up LEDs (Length set to 30) https://photos.app.goo.gl/NJnCC7Wm8yVNvMMw5 Short video of me turning the lights on and off in WLED android, and the lights flickering. Video What can be the problem here? AI: You have no ground connection between your strip and your microcontroller and thus no circuit connecting them. Add a return path for current back to the microcontroller ground.
H: Non inverting opamp with bias DC I'm doing a lab work on a circuit that takes a square signal in the input and first amplify it, then shape it and the amplify it again. So the circuit is actually composed by: an amplifier (inverting), a shaper, a second amplifier (non-inverting). The problem is in the last part, where our professors asked us to bias DC the second amplifier because our signal is between [0;0.256]V, then it is amplified to [0,1.8]V and then we want the bias to offset our signal so that it stays in [-1;+1] with a bias voltage of -0.9V. Our professor said that we should take our 5V generator (that's the only one we have left to use), use a split resistor to make it to 0.9V and then use a buffer to separate that part of the circuit from the last amplifier (so that the gain of the last amplifier depends only on the two resistors R1 and R2). When I do the simulation with LTspice without the bias part (putting to ground R1), the signal is correctly amplified to [0;1.8]V. But, when I try to insert the bias DC, the signal is shifted to [-5;-3]V (more or less) which is too much. I just want it to be shifted with a factor of 0.9 to the bottom (so that it stays between [-1;1]V). Can you help me to understand what I am doing wrong? I'll just link the simulation, the file and photo I get AI: You did nothing wrong. Just choose the value of your resistors R3 and R4 (My R9 and R10). Like this ...
H: Algorithm for substituting any resistance network to one resistance I am wondering what the common approach is to substituting any resistance network to one resistance only given that the a voltage is applied between two random nodes. For example this network took me ages to solve. I first combined the upper corner (two serial resistances and one more in parallel,) then I realized I could not make progress without transforming the rectangle connection into a star connection. Certainly, there must be a better way of doing it. This might be even harder, just in case. Edit: Some may say there is no common approach but there must at least be an algorithm since computers do the job all the time. AI: Some may say, there is no common approach but there must at least be algorithm since computers can do the job all the time. More than a few ways. But time for just one. Since you bring up that computers do this routinely, then: Graph Theory: Spice-Like We'll follow a concrete example from start to finish to help illustrate a method that is similar to what Spice programs use. It's not the exact same method. But your resistor case is a subset of the problems Spice handles and for these cases the method I'll discuss is quite close enough that the differences are negligible. If you want more specific details about how Spice programs approach this in more general cases, then please see "SPICE2: A Computer Program to Simulate Semiconductor Circuits" by Laurence W. Nagel, directly from Berkeley. This one is entirely free. Just click on the PDF link at that site. The details are addressed quite early in the text. So feel free. Concrete Example Here's a schematic in LTspice, including its netlist: This netlist is in simple ASCII text. You can parse it, trivially. And I mean trivially! There are 7 lines you care about, obviously. Just write up a routine to read any resistor netlist saved by LTspice (or any Spice program, as all of them support these netlist Spice cards.) Make sure to label the two nodes that you care about making your measurement from. I've called them A and B but obviously you can make up anything you want, there. Its Directed Graph: aka Incidence Matrix The code you write (or the method you would use by hand) just converts the above to this directed graph matrix: $$ \begin{matrix} \quad\fbox{A} & \\ & \begin{matrix} & & \text{nodes} \\ & & \overbrace{\begin{matrix}N_1 & N_2 & N_A & N_B\end{matrix}} \\ \text{edges}&\left\{\begin{matrix}R_1\\R_2\\R_3\\R_4\\R_5\\R_6\\R_7\end{matrix}\right.& \left[\quad\begin{matrix} \vphantom{R_1}\hfill 1 & \hfill 0 & \hfill 0 & \hfill -1 \\ \vphantom{R_1}\hfill 0 & \hfill 1 & \hfill -1 & \hfill 0 \\ \vphantom{R_1}\hfill 1 & \hfill -1 & \hfill 0 & \hfill 0 \\ \vphantom{R_1}\hfill 0 & \hfill 0 & \hfill -1 & \hfill 1 \\ \vphantom{R_1}\hfill 0 & \hfill -1 & \hfill 1 & \hfill 0 \\ \vphantom{R_1}\hfill 0 & \hfill 1 & \hfill 0 & \hfill -1 \\ \vphantom{R_1}\hfill 1 & \hfill 0 & \hfill -1 & \hfill 0 \end{matrix}\quad\right] \end{matrix} \end{matrix} $$ All I did was to read a line (by eye, obviously) and place a -1 for the first node and a +1 for the second node shown in the netlist for each part. Everything else gets 0. This is just falling off a log. Anyone can do this. And so can a very short piece of code. 
Using freely available SymPy and SageMath for this, running on a variety of environments and using the Python language, write: A = Matrix([[1,0,0,-1],[0,1,-1,0],[1,-1,0,0],[0,0,-1,1], [0,-1,1,0],[0,1,0,-1],[1,0,-1,0]]) You could, of course, generate that code automatically. And you may generate it as initializers for a matrix in any other language of your own choosing. Could not be easier. Its Conductance Matrix You will need a diagonal conductance matrix. This one is still easier (as if that were possible): $$ \begin{matrix} \quad\fbox{C} & \\ & \begin{matrix} & \begin{matrix}R_1 & R_2 & R_3 & R_4 & R_5 & R_6 & R_7\end{matrix} \\ \begin{matrix}R_1\vphantom{\frac1{R_1}}\\R_2\vphantom{\frac1{R_1}}\\R_3\vphantom{\frac1{R_1}}\\R_4\vphantom{\frac1{R_1}}\\R_5\vphantom{\frac1{R_1}}\\R_6\vphantom{\frac1{R_1}}\\R_7\vphantom{\frac1{R_1}}\end{matrix}& \left[\quad\begin{matrix} \frac1{R_1} \\ & \frac1{R_2} & & & & \large{0} \\ & & \frac1{R_3} \\ & & & \frac1{R_4} \\ & & & & \frac1{R_5} \\ & \large{0} & & & & \frac1{R_6} \\ & & & & & & \frac1{R_7} \end{matrix}\quad\right] \end{matrix} \end{matrix} $$ Coded up as: var('r1 r2 r3 r4 r5 r6 r7') C = Matrix([[1/r1,0,0,0,0,0,0],[0,1/r2,0,0,0,0,0],[0,0,1/r3,0,0,0,0], [0,0,0,1/r4,0,0,0],[0,0,0,0,1/r5,0,0],[0,0,0,0,0,1/r6,0], [0,0,0,0,0,0,1/r7]]) The above conductance matrix can also include capacitors and inductors, as well. But the details applied by Spice programs become more nuanced and it's better that you refer to "SPICE2: A Computer Program to Simulate Semiconductor Circuits" by Laurence W. Nagel for some expansion. For now, let's just move on with simple conductance matrices built only from resistor values. General Matrix Solution Equation and the Schur Complement It turns out that the product, \$A^T\,C\,A\$, always results in a square symmetric matrix seen as this block matrix: $$W=A^T\,C\,A=\left[\begin{matrix}P&Q^T\\Q&R\end{matrix}\right]$$ At this point, we've two choices. Either choice works just fine. One is that we set \$v_a=1\:\text{V}\$ and \$v_b=0\:\text{V}\$ and find out what currents we get. The other is that we set the KCL sum at \$v_a\$ to be \$1\:\text{A}\$ and at \$v_b\$ to be \$-1\:\text{A}\$ and find out what the resulting node voltages are for \$v_a\$ and \$v_b\$. If we set the currents and solve for all the node voltages, the matrix solution is harder since you need to invert a larger matrix. So it's better (my view) to set the node voltages and find out what the resulting currents will be. This allows the use of the Schur complement. So we will instead just set \$v_a=1\:\text{V}\$ and \$v_b=0\:\text{v}\$ and solve for the complementary currents (what enters must be the same as what leaves.) The value, \$R_{total}=\frac{1\:\text{V}}{I}\$, will be the resistance. The full-blown setup follows (made with MiKTeX and Latex code I learned about here): (Note that you would likely use numerical values for \$C\$ and so \$W\$ wouldn't look that complicated. It would just be some numbers there.) We want to solve for \$\hat{i}\$. We don't care about \$\hat{v}\$, as it really doesn't matter to us. So we use the Schur complement to avoid having to solve for \$\hat{v}\$ (since we don't care about it.) By ignoring what we don't care about, the computation problem is made simpler. The Schur complement is trivial to derive. Note that (1) \$P\hat{v}+Q^T\hat{e}=\hat{0}\$ and (2) \$Q\hat{v}+R\hat{e}=\hat{i}\$? 
From (1), solve as \$\hat{v}=-P^{-1}Q^T\hat{e}\$ and then substitute that into (2): $$\begin{align*} \hat{i}&=Q\hat{v}+R\hat{e} \\\\ &=Q\left(-P^{-1}Q^T\hat{e}\right)+R\hat{e} \\\\ &=-QP^{-1}Q^T\hat{e}+R\hat{e} \\\\ &=R\hat{e}-QP^{-1}Q^T\hat{e} \\\\ &=\left(R-QP^{-1}Q^T\right)\hat{e} \end{align*}$$ And that's the Schur complement. Do note that it reduces the size of the required matrix inversion, this way. Just a 2x2 rather than inverting an entire 4x4 (which would succeed at getting the right results, but is generally more work because it also solves for those two voltages, \$v_1\$ and \$v_2\$, which we really don't need to know.) Also, you can easily invert a 2x2 by hand. Not so much with a 4x4, where it starts to get more painful. Performing the Indicated Computations So here's the remaining code. I'll later then assign values to each of the resistors where their value is the same as their identification number and get some results. W = A.T * C * A # square symmetric block matrix P = W.extract([0,1],[0,1]) # P-block Q = W.extract([2,3],[0,1]) # Q-block R = W.extract([2,3],[2,3]) # R-block excite = Matrix([1,0]) # excitation of 1 V and 0 V for u in list(( R - Q * P.inv() * Q.T) * excite ): u.subs({ r1:1, r2:2, r3:3, r4:4, r5:5, r6:6, r7:7 }) 315/523 -315/523 So, we'd find that the total resistance is \$\frac{1\:\text{V}}{I=\frac{315}{523}\:\text{A}}=\frac{523}{315}\:\Omega\approx 1.66\:\Omega\$. That's it. Seriously. (Note for later that \$I=\frac{315}{523}\:\text{A} \approx 602.294\:\text{mA}\$.) Trust, but Verify! So, let's verify. Here's the initial schematic modified to include all the extra stuff needed to get a numerical solution: Note that the current leaving the voltage supply has the magnitude of \$602.294\:\text{mA}\$ and that \$\frac{1\:\text{V}}{602.294\:\text{mA}}\approx 1.66\:\Omega\$. And that is a match!!! (And the current is also seen to have matched, too.) Summarizing the Algorithm So, here's what you need to do: Define for yourself a line by line way of listing out the resistors and the nodes they connect up with. This is called a netlist. You can use the Spice format for this -- it is what I'd choose to do because then I could just use the schematic editor to create any kind of crazy schematic. And it would support other methods of generating netlists beyond the use of the Spice schematic editor. You don't have to do that. You can create your own format. Doesn't matter. You just need something simple to work with. Write code to read what you produce for step #1 and generate a directed graph matrix \$A\$ from it. Write extremely simple code to produce the conductance matrix \$C\$. Using the matrix math I've pointed out above, solve for the resulting current that is developed. Divide that current into the \$1\:\text{V}\$ excitation supplied to the network to get the resistance. That's all there is. There are other ways. But this one is straight forward and will handle any set of cases you present, so long as it is a fully connected graph. Fully Connected Graphs: What??? Well, there are methods you can apply to the incidence matrix, \$A\$, to verify this. For example, looking at its right nullspace will tell you if it is fully connected: pprint(A.nullspace()) ⎡⎡1⎤⎤ ⎢⎢ ⎥⎥ ⎢⎢1⎥⎥ ⎢⎢ ⎥⎥ ⎢⎢1⎥⎥ ⎢⎢ ⎥⎥ ⎣⎣1⎦⎦ Getting only one vector with all 1's tells you this is a fully connected graph. This vector says that the only way to get zero currents (the KCL null-space) is to have all of the existing nodes set to the same voltage. All of them. Not just some of them. 
(Otherwise, you'd see more vectors here.) (Oh, I should include a short synopsis. So here is a description of the four fundamental subspaces for a short summary.) Added Value for Graph Theory: Meshes! It's easy to find the independent meshes using the left null-space: pprint(A.T.nullspace()) ⎡⎡1 ⎤ ⎡0⎤ ⎡-1⎤ ⎡0 ⎤⎤ ⎢⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎥ ⎢⎢-1⎥ ⎢1⎥ ⎢0 ⎥ ⎢-1⎥⎥ ⎢⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎥ ⎢⎢-1⎥ ⎢0⎥ ⎢1 ⎥ ⎢-1⎥⎥ ⎢⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎥ ⎢⎢1 ⎥, ⎢0⎥, ⎢0 ⎥, ⎢0 ⎥⎥ ⎢⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎥ ⎢⎢0 ⎥ ⎢1⎥ ⎢0 ⎥ ⎢0 ⎥⎥ ⎢⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎥ ⎢⎢0 ⎥ ⎢0⎥ ⎢1 ⎥ ⎢0 ⎥⎥ ⎢⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎥ ⎣⎣0 ⎦ ⎣0⎦ ⎣0 ⎦ ⎣1 ⎦⎦ Shows you one set of independent meshes in the circuit's directed graph. There are four, in this case. These can be combined in various ways to find other meshes. But there will only be four independent ones, regardless. Looking at the circuit's directed graph (made with freely available Dia): The first column vector above tells you that if you follow edge \$R_1\$ with its arrow, then go oppositely against \$R_3\$, then go oppositely against \$R_2\$, and then go with \$R_4\$, that you will have one mesh there. The second column vector says with \$R_2\$ and with \$R_5\$ is another mesh. (Parallel resistor pair.) The third column vector says against \$R_1\$, with \$R_6\$, with \$R_3\$ is another mesh. And the fourth column vector says against \$R_2\$, with \$R_7\$, against \$R_3\$ is another mesh. These four meshes are colored in below: Oh! Wait. I added a 5th one, didn't I? The purple circle. What about that mesh? Well, it's just the sum of the yellow (1st), blue (2nd), and red (3rd) meshes. Let's double-check that: U = A.T.nullspace() pprint( U[0] + U[1] + U[2] ) ⎡0⎤ ⎢ ⎥ ⎢0⎥ ⎢ ⎥ ⎢0⎥ ⎢ ⎥ ⎢1⎥ ⎢ ⎥ ⎢1⎥ ⎢ ⎥ ⎢1⎥ ⎢ ⎥ ⎣0⎦ Yup! Perfect. So it seems that other meshes can be constructed from these independent ones. If we added this one to the list and deleted the red one, then we'd still have four independent meshes that span the same KVL nullspace. So you can add or subtract meshes to make other meshes. But it takes exactly four independent ones, in this case, to span the KVL (left) null-space. If more than four, then at least one mesh must be dependent on others in the list. This is because the rank of this incidence matrix \$A\$ is 3. Given that there are 7 rows, this means \$7-3=4\$ vectors required to span its KVL null-space. So long as the vectors are the minimum sufficient to span that null-space, you are good. This is a cool way to find meshes needed to solve, when using the mesh method of analysis, by the way! Final Note: Selecting Random Nodes only given that the a voltage is applied between two random nodes. The incidence matrix \$A\$ would keep all the rows just the same as before, since the rows correspond to resistors that are connected up into the circuit and you wouldn't be altering their topology. So that stays the same. But you would need to move the columns around so that the two nodes having the excitation applied to them are in the last two columns in order to avoid the need for horrible syntax. (Technically, we could write some insane syntax. But let's avoid going there as it moves away from understanding things and towards language syntax geekland.) For example, suppose we wanted to supply the excitation to the first and last nodes (\$v_1\$ and \$v_b\$) as shown in the following schematic: The schematic topology is unchanged. So the rows in \$A\$ don't change, either. But the excitation can now be applied to any two nodes at will with the above schematic. In the case shown above, it's the first and last nodes of \$A\$. 
The following code would take what has already been done above and apply it slightly differently (moving columns around with SymPy syntax) to get the desired result: A1 = A[:,[1,2,0,3]] W1 = A1.T * C * A1 P1 = W1.extract([0,1],[0,1]) Q1 = W1.extract([2,3],[0,1]) R1 = W1.extract([2,3],[2,3]) for u in list(( R1 - Q1 * P1.inv() * Q1.T) * excite ): u.subs({ r1:1, r2:2, r3:3, r4:4, r5:5, r6:6, r7:7 }) 28/23 -28/23 (The [1,2,0,3] bit above says to build a new \$A\$ from the old \$A\$'s 2nd col, 3rd col, 1st col, and 4th col, in that order.) So we find \$I=\frac{28}{23}\:\text{A}\approx 1.21739\:\text{A}\$. A run of the above schematic shows: I(V1): -1.21739 device_current Which is the expected result. (So the resistance in this case would be \$R_{total}\approx 821.43\:\text{m}\Omega\$. And given the value of \$R_1\$, we'd expect something less than 1 here.) This exposition not only shows you how to develop a solution for one case, but also how to adjust it to ask about pairing any two selected nodes. The topology of the resistor circuit is unchanged. It's just some syntax changes needed to construct the right Schur complement matrices for the solution. That's all. I'll provide a final demonstration that is more about the power of software like Python/Sympy than the power of graph theory. The power of graph theory is already shown. But this just puts a cap on the example resistor circuit I started with. So may as well. I can define a function that accepts a list of the four nodes, but ordered in such a way that we test every node pair for its effective resistance: def findR(L1): global A, C # access to our A and C matrices A1 = A[:, L1] # temp re-arranged A W1 = A1.T * C * A1 # temp symmetric block matrix P1 = W1.extract( [0,1], [0,1] ) # temp P-block Q1 = W1.extract( [2,3], [0,1] ) # temp Q-block R1 = W1.extract( [2,3], [2,3] ) # temp R-block return 1.0/list( ( R1 - Q1 * P1.inv() * Q1.T ) * Matrix([1,0]) )[0].subs({ r1:1, r2:2, r3:3, r4:4, r5:5, r6:6, r7:7 }) With that in hand I can now perform the following: for u in [ list(set([0,1,2,3])-set(i))+list(i) for i in itertools.combinations([0,1,2,3],2)]: u, findR(u).n() ([2, 3, 0, 1], 1.48571428571429) ([1, 3, 0, 2], 1.67222222222222) ([1, 2, 0, 3], 0.821428571428571) ([0, 3, 1, 2], 1.09126984126984) ([0, 2, 1, 3], 1.59285714285714) ([0, 1, 2, 3], 1.66031746031746) And there you have all of the allowable equivalent resistances for that circuit, using any pair of nodes. (The last two nodes listed are the ones being probed.) Summary of Graph Theory There's a lot you can do when you better understand how KCL, KVL, the column-space, row-space, and two null-spaces all play well together to describe circuits. I hope this should be sufficient to motivate you to experiment on your own, firming this up in mind. Further reading: Wai-Kai Chen's "Graph Theory and Its Engineering Applications". The author assumes some maturity. But by the 2nd chapter you are off and running with circuit graphs that are practical and meaningful. Gilbert Strang (anything by him is good and usually targeted at undergrads just learning this stuff), 5th edition, "Differential Equations and Linear Algebra". Gilbert Strang really can help you with understanding linear matrices, vector spaces, and their application to solutions. (He taught his last class, May of 2023, at the age of 88! A great many owe him so much for his dedication to those starting out on this kind of learning path. I owe him what I can never hope to pay back. My best to him!) I've been sniped!
H: TIA: Is an op-amp necessary at low speed and high gain? I've read here. Regarding this picture: If these conditions are satisfied: the gain resistor is very high, e.g. 10 Meg rather than 10k; the speed is low, close to DC; the circuit output is connected to an ADC input, and the input impedance of the ADC is about 300 ohm; 10 Meg || 280 does not seem to have a loading effect – then shouldn't the op-amp (TIA) be unnecessary? AI: This circuit has a rather large "input impedance" equal to R, while a TIA has essentially 0 input impedance. As a result, the voltage over the diode will not be constant, and this will degrade linearity. Moreover, the output impedance is also R, so connecting it to a low impedance ADC is not going to work at all. Better, but still without a TIA, would be to strap a basic current mirror between the diode and the I-to-V conversion resistor.
H: Ground loop with oscilloscope probes I am trying to better understand how ground loops affect a circuit and why it is important not to have ground loops in a circuit. So I am trying to look at systems with ground loops to further understand and see the effects. Here is a system with ground loops: As all the BNC connectors of the oscilloscope are connected to each other, there is a ground loop when we use two probes connected to the same ground of a board. Nevertheless, I never see problems in my measurements due to ground loops, or at least they were not impactful. So why does this kind of ground loop have so little effect? And the ground loop with oscilloscope probes is pretty large! So it should have a large effect. So is there something in the oscilloscope that prevents the ground loop effect? Does anyone know when ground loops become relevant in a system? AI: Try adding a noise source that represents the difference in potential between two points "supposedly" connected to the same earth ground point: The dotted line represents the actual ground/earth loop that you refer to. \$V_N\$ is the potential difference between ground/earth points that are connected. They may be connected, but earth is also used by other equipment in an electrical installation, and that other equipment is outside of your measurement control. This can drive a current (it can be amps of AC) that flows down your oscilloscope probe wires. It screws up the measurement because there will be a volt-drop down the cable due to its non-zero impedance.
H: Is it possible to connect a MIPI CSI-2 deserializer to SC206E with CSI-1 interface I am interfacing a deserializer [DS90UB960-Q1] with Quectel's smart module [SC206E]. The deserializer supports MIPI CSI-2 and the SC206E supports CSI-1. May I know whether CSI-2 is backward compatible or not? I have seen a question similar to this, but it is closed. Don't know why. EDIT 1: It was a mistake by the Quectel AE team when they told us the [SC206E] won't support CSI-2. Today we received information from the Quectel R&D team that the [SC206E] supports CSI-2. AI: May I know whether CSI-2 is backward compatible or not? There can be device-specific compatibility, but the standards/specs do not specify backwards compatibility (Tom Carpenter says exactly the same thing in his comment under the closed topic you linked). Even different versions of CSI-2 (e.g. CSI-2 v3.0 and CSI-2 v2.0) do not guarantee inter-compatibility, although there could be some devices supporting, for example, CSI-2 v3.0 and all previous versions. Some devices might be compatible, so it's better to ask the component manufacturers or their application engineers.
H: How to understand schematic circuit diagrams in functional safety I'm having a hard time understanding the kind of diagrams which frequently appear in literature relating to ISO 13849. Are there relevant instructions or guides on how to understand this kind of diagrams? What do the dashed lines mean? Why are there three different symbols with the Q1 label? Do the three different symbols have different meanings? Why is there a part (which I marked with a green rectangle) separate from others? How can I derive from Figure 2 and the context that the two channels are B1-Q2 and B2-K1-Q1 as stated in Figure 3? AI: What do the dashed lines mean? In the case of Figure 2, Q1 they show the mechanical connection between the coil and the contact. This is a bit redundant as both are labelled Q1. In the case of the Figure 2 cam switches B1 and B2 the contacts aren't named and so the dotted line indicates which contacts are actuated by the cam-following rollers. In the position shown, B1 not actuated, the left contact is closed and the right contact is open. Why are there three different symbols with Q1 label? Do the three different symbols have different meanings? Q1 is a relay which by definition will have a coil and at least one contact. The symbol on the bottom right is the coil, the bottom left is a normally-open contact and the upper one is a normally-closed contact. Why is there a part (which I marked with a green rectangle) separate from others? The three diagonal lines indicate that this is a three-phase power circuit for a three-phase motor. The relays are used to control (with low voltage / low current) a high power load. The relay isolates the high voltage circuit from the low voltage control circuit. How can I derive from Figure 2 and the context that the two channels are B1-Q2 and B2-K1-Q1 as stated in Figure 3? I don't think you can. The PLC is a programmable device and we can't see what the program is. I don't think that this is good material. It gives the impression that a standard PLC can be used as a safety control system. This is generally not true as the failure mode is not predictable. (The output transistor could fail as a short-circuit or open-circuit.) You might find my answer to this post useful: https://electronics.stackexchange.com/a/240352/73158
H: BD8306MUV and idle current I need to generate 3.3 V from a 3.6 V battery to power a sensor. I chose the Rohm buck-boost converter BD8306MUV for that, but I discovered that it consumes nearly 5 mA of current in idle mode without a load. The circuit is as recommended by the manufacturer. A novice question: Is this normal, or do I have an error somewhere? Schematic: AI: Is this normal, or do I have an error somewhere? Looks normal to me. Apparently, you copied the first application circuit given on p. 11 of the datasheet. There are some performance graphs on p. 13 and onwards. The efficiency graph shows an efficiency of 13~14% for 1 mA output current when the input is 3.7 VDC. This is close to your case for comparison. Now let's work out the input current at this "minimum" load: $$ \eta=0.13 = \frac{3.3 \cdot 0.001}{3.7 \cdot I_{in}} \Rightarrow I_{in}=6.9 \ \text{mA} $$ 1 mA and idle (zero load) are different things, but this should be enough to give an idea: For 3.3 mW of output power (3.3 V × 1 mA) the circuit consumes roughly 25 mW, according to the performance graphs. Now let's reason simply: If we assume the 3.3 mW is transferred to the output with no extra loss, then the input power at exactly zero load (idle) would be roughly 25 − 3.3 ≈ 21.7 mW, making the input current roughly 6 mA.
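The same estimate as a tiny script (numbers read from the datasheet curve, so treat it as a back-of-envelope check only):

# Back-of-envelope check of the idle-current estimate above.
VIN, VOUT, IOUT = 3.7, 3.3, 1e-3   # datasheet operating point: 3.7 V in, 3.3 V / 1 mA out
ETA = 0.13                          # efficiency read from the curve at that point

p_out = VOUT * IOUT                 # ~3.3 mW delivered
p_in = p_out / ETA                  # ~25 mW drawn from the supply
print(f"input current at 1 mA load: {p_in / VIN * 1e3:.1f} mA")     # ~6.9 mA

# Crudely subtracting the delivered power leaves the converter's own overhead,
# which is roughly what you see with no load at all:
i_idle = (p_in - p_out) / VIN
print(f"estimated idle current:     {i_idle * 1e3:.1f} mA")          # ~6 mA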
H: Identify antenna connector type Is anyone able to identify the type of connector shown below? It is for the FM antenna on a Sony HiFi, model CMT-SBT20. The pitch is 2.5 mm, and only one crimp is fitted. I've seen that the assembly shown below can be purchased on eBay/Amazon etc., but I'm really after the connector details. AI: My Identiconn Connector Identification Utility identifies that as a JST XHP-2. (Or a knock-off by some other company.) {Digikey}
H: Is it okay to call a UART “RS-232” if it doesn’t respect voltage levels specified in the RS-232 standard? I have been using this converter (call it converter A) to communicate with an MCU through its UART pins, RX and TX. I wouldn't call this RS-232, because RS-232 is about the physical layer and voltage levels, as explained below: But now there's a driver which, for communication, requires this RS-232 TTL module (call it converter B). Is this the same type as the one I have, or different? In other words, are converters A and B the same type? If so, why the confusing habit of naming them RS-232? AI: No, it is not OK to call a UART RS-232, because they are two completely different things, located at different layers of the OSI networking model. RS-232 would be the physical layer, and the UART would be the data link layer. This means a UART can be used with other physical layers too, such as RS-485, RS-422, etc. Some people mix up the RS-232 physical layer with the UART, because practically all RS-232 ports were implemented with UARTs, and even when you just want the UART, many chips are still named with 232, like the FT232, which is a USB UART that can be used to implement an RS-232 port. Both those modules are just 3.3V TTL/CMOS level UART modules. They have nothing to do with RS-232. Or rather, they can be thought of as typical computer RS-232 serial ports but with the RS-232 level-shifting part left off, which is what might explain why they are named incorrectly. RS-232 is a standard for the physical electrical interface. It defines things like voltage levels and the connector. What it does not define is the logical bit-serial protocol, line code, or encoding. Those are defined by the UART. The UART uses logic-level voltages, and these logic-level voltages are converted to RS-232 voltage levels with a level-conversion PHY when making an RS-232 interface. But in your case, you do not need RS-232; you only need the logic-level UART signaling.
H: Wurth transformer secondary pins on left and right side of transformer I am looking to use Wurth transformer 750310988 and I have noticed its pinout is very bizarre. My application is to float the secondary of this transformer up to 500 VDC and provide an isolated 50 V supply rail floating at 500 V. Normally transformers have one side as the primary and the other side as the secondary, so creepage and clearance run across the entire transformer body. However, the transformer I am looking at has a center pin (pin 5) on the primary side, as per the image below. Is this correct, and would this mean I would need to maintain creepage and clearance between pin 5 and pin 3 on the primary side? AI: Certainly looks that way. If you look at the chips it is designed to work with, they are low-voltage to low-voltage, so the manufacturer may not have considered a high primary-to-secondary voltage likely (though they do say they test to 1500 VAC). You could have a PCB slot milled through where pin 4 is (or should be), but I suspect the transformer itself won't have acceptable creepage and clearances.
H: DC Motor RPM Drops with Load in Direction of Rotation I am trying to understand the kinematics of a machine that uses a parallel indexing cam. As you can see, this mechanism has a dwell phase where there is minimal load on the cam, and a motion phase where the follower is quickly accelerated and decelerated to an indexed position by the cam. The machine is driven by a small 12V hobby motor. I've plotted the angular velocity of the cam and follower relative to the angle of the cam based on experimental results. The blue plot is the cam velocity and the red plot is the follower velocity. As you can see here, the RPM of the motor drops as the follower is accelerated. From what I understand, the drop in RPM is linearly proportional to the load torque, so this makes sense. What I don't understand is the second drop in the deceleration phase of the motion. At this point, there is a torque exerted on the motor in the direction of rotation. This is where my limited understanding of DC motors fails me. Would it ever make sense for the RPM of the motor to drop in this case? One theory I have here is that during this rapid deceleration, the pressure exerted by the bearings on the cam increases, which increases friction in the system. Is this a reasonable explanation, or could something else be going on here? AI: It looks like it's a friction thing, especially if the follower does not have a bearing and is just two parts sliding against each other. If that's the case, lubrication may have a significant effect. Keep in mind that the roughly inverse linear relationship of RPM to torque of a DC motor (between no-load or zero-torque RPM and zero-RPM stall torque) is only true in the steady state. Inertia is a factor in all the parts. You can see that once it has accelerated the follower, the motor begins to recover RPM (but does not get there in the available time). In steady state, if the no-load RPM is n0 and the stall torque is Mh, then the RPM with a torque M is just n0(1 - M/Mh). Image from here
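As a quick illustration of that steady-state line (not the motor in the question; the numbers below are made-up example values):

# Steady-state DC motor speed vs. torque: n = n0 * (1 - M/Mh)
n0 = 10000.0   # RPM, assumed no-load speed
Mh = 50.0      # mN*m, assumed stall torque

def speed(M):
    """Steady-state RPM at load torque M (mN*m)."""
    return n0 * (1 - M / Mh)

for M in (0, 10, 25, 40):
    print(f"M = {M:>2} mN*m -> n = {speed(M):6.0f} RPM")

During the cam's acceleration and deceleration phases the motor is far from steady state, so this line only tells you where the speed is headed, not where it is at any instant.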
H: Transmitting from PC via USART to MCU working but very slow I am sending a small 16-bit RGB BMP image file from my Linux PC via USART (with a loosely wired USB to serial converter) to an AVR ATmega MCU sitting on a breadboard, running at 8 MHz with the internal oscillator. It is working, but only very slowly. In the controller's main loop, I am checking if data was received:

if (bit_is_set(UCSR0A, RXC0)) { char data = UDR0; handle(data); }

For now, handle(data) just counts the bytes received, to be as inexpensive as possible while still letting me check that at least the correct number of bytes arrived. The main loop does nothing else. The USART is set to BAUD = 38400 and initialized like this:

void initUSART(void) {
  UBRR0H = UBRRH_VALUE;
  UBRR0L = UBRRL_VALUE;
  UCSR0B = (1 << TXEN0) | (1 << RXEN0);
  UCSR0C = (1 << UCSZ01) | (1 << UCSZ00);
}

So I set GTKTerm to 38400-8-N-1 and send the .bmp as a raw file. This fails, however: only about 50 of the total 650 bytes get through and then it just stops. The exact same happens if I just cat small.bmp > /dev/ttyUSB0 So I have come up with this super clumsy way of slowly sending byte by byte from the console:

count=0; while([ $count -lt 650 ]); do byte=$(xxd -p -l1 -s $count small.bmp); echo -en "\x$byte" > /dev/ttyUSB0; sleep 0.05; ((count=$count+1)); done

This works, but is obviously extremely slow, and if I reduce the sleep to below 0.05 seconds, the transfer becomes unreliable. Some info about the PC's USB:

Bus 002 Device 107: ID 10c4:ea60 Silicon Labs CP210x UART Bridge
Bus 002 Device 095: ID 03eb:2104 Atmel Corp. AVR ISP mkII
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 060: ID 0424:2514 Microchip Technology, Inc. (formerly SMSC) USB 2.0 Hub
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

What am I missing? Update: It seems that printing some debugging info back to the PC while sending data to the controller can also break communication. AI: an AVR ATmega MCU sitting on a breadboard, running at 8 MHz with the internal oscillator. This is likely to be your problem. This Atmel application note (source) suggests that an uncalibrated internal clock can have a tolerance of as much as ±10% (depending on the model), which is too loose for reliable serial communications. You can calibrate the internal oscillator, but temperature variations can still make it unsuitable for serial comms (see fig 1-2 of the app note). You can verify if this is the problem by looking for framing errors from your ATmega USART. You would be better off using an external crystal. This earlier question might help.
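To put numbers on the clock-accuracy argument, here is a small Python sketch (assuming an 8 MHz clock and normal-speed mode, U2X = 0): the UBRR rounding error at 38400 baud is tiny, so the internal oscillator's tolerance dominates and can easily exceed the couple of percent of total mismatch a UART frame can tolerate.

# Baud-rate error budget for an AVR USART in normal-speed mode (U2X = 0).
f_cpu = 8_000_000   # Hz, nominal internal-RC frequency (assumed)
baud = 38_400

ubrr = round(f_cpu / (16 * baud) - 1)      # nearest-integer UBRR value
actual_baud = f_cpu / (16 * (ubrr + 1))
rounding_error = (actual_baud - baud) / baud

print(f"UBRR = {ubrr}, actual baud = {actual_baud:.0f}")
print(f"rounding error ≈ {rounding_error:+.2%}")          # about +0.16 %

for osc_error in (-0.10, -0.02, 0.02, 0.10):              # oscillator tolerance cases
    total = (1 + osc_error) * (1 + rounding_error) - 1
    print(f"oscillator {osc_error:+.0%} -> total baud error {total:+.1%}")

With a crystal (tens of ppm) the oscillator term essentially disappears and only the 0.16 % rounding error remains.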
H: Simple inverting op amp not working I'm trying to amplify an electret microphone signal with an inverting op amp (LM358). I don't have access to an oscilloscope at the moment but with a voltmeter I read around 12mV at V1 and around 12mV as well at V2. For testing I am just playing a 1kHz test tone by the mic. Does this mean my op amp is not actually amplifying the signal. If my gain is -100 currently, then I should be getting 1.2V at the output of the op amp correct? On a side note, if the microphone is outputting a varying voltage centered at 0, shouldn't the voltmeter give me 0V? What is measured by the voltmeter, is it peak to peak? Average? Also I am a first year EE student so please take into account my limited technical knowledge. Yes the ground rails on the left and right are connected (not shown in pic) Thank you AI: Where is the voltage supposed to come from that ever could get the "-" input below the "+" input? You either need a -9V supply for the opamp, or you need to put its "+" pin on a higher operating point, like using a voltage divider (2×4.7k, for example) and a smoothing capacitor (22µF, say) to arrive at 4.5V. Of course, if you do that, you will have this as DC on the output and will want an output capacitor for removing it, too.
H: Is it theoretically possible to make an anti-jammer for WiFi? From the little I understand of WiFi jamming, it uses strong counterfeit signals that overwhelm legitimate signals. I thought perhaps a device that emitted the same frequency but offset so that it cancels the jammer out might be able to negate the negative effect to some extent. Would this idea work? AI: No, you cannot jam a jammer away. For two signals to cancel, they need to be perfectly 180° out of phase. But the phase changes with distance from both transmitters - so the only scenario where that could work is if the jammer, the anti-jammer, and the receiver all lay on one straight line. And there must be no reflections of the transmission off anything around them! Since you can neither guarantee that the receiver holds perfectly still nor that it stays on the exact line through the jammer and your anti-jamming device, that can't work. Furthermore, WiFi never takes place in outer space but practically always in multipath channels: there are always multiple paths by which the jamming signal reaches the receiver, which rules the idea out on its own. Then, the usual jammer for wideband communication systems will not be a nice predictable waveform that you could just somehow generate the "opposite" of. It will be perfectly random noise - and no anti-jamming device could cancel that out, because you can't invert what you don't know. If you want to make receivers less susceptible to interference, you need to learn a bit more about the fundamentals of RF propagation and channel modeling, as well as the probability theory needed to understand random signals and random channels. It's an interesting aspect of most communication system designs, but it takes a reasonably broad basis in communications engineering to follow what the designers of these systems are doing. Luckily, that's part of the modern EE curriculum in most universities!
H: Pull-down resistor with LED on a logic input Imagine I have the following circuit: A register (74LS377N) has the inputs connected to a bus. In the picture, BUS0 is pulled down with a 10k resistor, so if nobody outputs to it the register will read 0 in the first input. BUS7 is connected to an LED in series with a resistor, so I can visualize when that pin is high or low. The resistor of the LED is quite high because for lower values of R the LED is too bright, so I have empirically determined that 5k works well for the LED. Now, this is a green LED that has a forward voltage between 2-3V, enough to be interpreted as HIGH for the register. My question is: When the bus is in high impedance, would the LED + resistor correctly pull down the voltage to 0V, or because of the forward voltage of the LED the input might be higher and therefore cause garbage in the input of the register? Should I put a 10k resistor in parallel to the LED+5kR or is the LED+5kR enough to force a 0 in the input when the bus is in high impedance? According to the datasheet, a LOW level has a maximum input voltage < 0.8V and a high level has a minimum input voltage of 2V, so the forward voltage of the LED is what scares me. AI: The LED and resistor cannot reliably pull down the BUS7 input to the 74LS377 to a logic LOW. The LS family have a high input leakage current out of a LOW input. The TI 74LS377 datasheet states that an input driver must sink 400 uA max. from the input pin to GND to drive the input LOW. (It must source only 20 uA max. to drive it HIGH.) The LED will always have too high a voltage drop to pull your input LOW, especially sinking the relatively high current of an LS input. Instead, add a pull-down resistor from BUS7 to GND, in parallel with the LED circuit. The pull-down resistor value depends on (a) the leakage current from the 74LS377 input and (b) any current from the BUS7 driver, be it leakage or otherwise. Allowing for dropping 0.5 V when sinking 0.5 mA, you'd need a 1K pull-down resistor. This will load the circuit that normally drives BUS7 so you'll have to weigh that up if that's possible. Using values like 10K will just make the units unreliable if you're building a number of them. If you can use a newer family like a 74HC(T) gate, the input leakage currents are much lower and higher pull-down resistor values can be used.
H: Output stage filter of a buck regulator and ESR I am trying to design a two-LC-stage output filter for a buck converter. Because the required peak-to-peak ripple has to be pretty low (~1 mV) and the overshoot and undershoot also need to be pretty low, I have to be fairly precise in the design of this output filter. At the beginning, I thought I could get a pretty good estimate of the capacitors by looking at the datasheet for the "real" capacitance, accounting for its bias-voltage and temperature dependency. But actually this is not so easy! The data are not easy to find, and apparently each ceramic X7R capacitor has its own curves as a function of bias voltage and temperature. That makes estimating the peak-to-peak ripple not so easy. Also, and this is the subject of the question, every capacitor has a certain ESR. I thought the ESR was a constant value, but it appears not to be, as the graph below shows: It makes me confused about what ESR really is. Whatever ESR is, it seems to vary with frequency. So suppose I apply a step load to my output filter: how can I know what the undershoot will be, given that the ESR is not constant over the whole frequency spectrum of the step load? Also, sometimes the graph above is not provided on the KEMET website, and then it is even more difficult to estimate the ESR of a ceramic capacitor. It is important to know the ESR of my output stage filter, as it will have an impact on the undershoot/overshoot of the voltage after a step load. It will also have an impact on stability. I thought it might be possible to estimate the ESR from the dissipation factor (DF), but it is not that simple, because the dissipation factor is given at a certain frequency, generally 120 Hz. In my case, the step load has an infinite frequency content, so my ESR is not equal to the ESR I would have at 120 Hz; and in any case, for an SMPS switching at more than 100 kHz, a DF specified at 120 Hz makes no sense for a ceramic capacitor... So the question is: how do I estimate the ESR of capacitors for an output stage filter? What assumptions do I have to make? Thank you very much, AI: It makes me confused about what ESR really is. Whatever ESR is, it seems to vary with frequency. KEMET uses SPICE models to plot its ESR and impedance (Z) graphs. Following is the SPICE model for a 10u/50V 1206 X7R capacitor:

.SUBCKT C1206X106K3RAC 1 6
*Temp = 25°C, Bias = 0VDC, Center Frequency = 10000 Hz
*KEMET Model RLC Cerm
R1 3 4 0.0437875837087631
R2 2 5 0.810000002384186
R3 1 6 100000000
L1 1 2 3.70000002858362E-11
L2 2 3 7.03000005430887E-10
C1 4 6 9.63658385444433E-06
C2 5 6 6.00000023841858E-13
*ENDS

simulate this circuit – Schematic created using CircuitLab L1 is ESL, R3 represents self-discharge, C1 is the actual capacitance, etc. So, apparently, KEMET's ESR plots do not show a single fixed resistor: they show the effective series resistance of the whole parasitic network (ESR, ESL, and the dielectric-loss branch R2-C2), i.e. the real part of the model's total impedance at each frequency. That is why the ESR plot is frequency-dependent. As for your main question: normally, the overshoot/undershoot (OS/US) amounts depend on the output capacitance, the crossover frequency, and the size of the step change, i.e. \$\Delta I_O\$. If you want to get a very low OS/US as well as 1 mV of ripple, then: the crossover frequency should be very high (this will cause noise issues; practically, 1/5 to 1/10 of the switching frequency is a good starting point, which puts a limit on the achievable OS/US); the output capacitance should be very high (this will make the ripple very low but will cause startup/shutdown issues); and the total ESR should be very low (this may be possible with MLCCs, but you could end up with a ridiculous number of MLCCs connected in parallel). So the question is: how do I estimate the ESR of capacitors for an output stage filter? What assumptions do I have to make? That's a difficult question for me to answer, because such low ripple and OS/US figures have never been a requirement in my designs or in the designs I was involved in. I usually neglect ESR's effect on OS/US; instead, I take ESR more seriously for ripple. A general formula for the OS/US amount: $$ \Delta V = \frac{\Delta I_O}{2\pi \ f_C \ C_{OUT}} $$ This shows the relation between the OS/US amount, the crossover frequency, and the output capacitance. For very low ripple, maybe you should consider multi-phase converters or linear regulators instead.
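To get a feel for the numbers in that formula, here is a small Python sketch with assumed values (a 2 A load step and a 50 kHz crossover frequency); these are illustration numbers, not figures from the question:

import math

# Transient dip/overshoot estimate: dV = dI / (2*pi*fc*Cout), ESR neglected.
delta_i = 2.0   # A, assumed load step
fc = 50e3       # Hz, assumed crossover frequency

for cout in (100e-6, 470e-6, 1000e-6):
    dv = delta_i / (2 * math.pi * fc * cout)
    print(f"Cout = {cout*1e6:5.0f} uF -> dV ≈ {dv*1e3:.1f} mV")

Any real ESR adds an immediate step of roughly ΔI·ESR on top of this, which is one more reason the answer recommends keeping the total ESR very low.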
H: How to control unrelated circuit with a transistor? I recently hit this issue that seems to be too complicated for Google to find results for. Is it possible to control a different circuit with a transistor from a separate circuit? Below is a simplified schematic for what I mean. Let's say that the transistor is controlled by some sort of a complicated power source, not a battery, but a USB port or USB power bank or a switching power supply, and the transistor is supposed to switch a different, separated circuit with its own separate power source of a different voltage. Is that possible without connecting the two circuits together? I think it is not, but I do not know the theory behind it. If it is not possible, how is this problem usually solved without a relay, if there is a way? AI: Normally, it would be fine to connect the ground wires in the circuit you show. You have two battery-powered supplies, so nothing bad will happen if you connect the circuits together. If you really needed the two circuits to be electrically separated, then you could use an optocoupler. That could be needed if one of the power supplies is not isolated from the line voltage. Do not do this: simulate this circuit – Schematic created using CircuitLab That has the battery-powered side connected directly to the outlet. Neutral is supposed to be safe, but you cannot guarantee that the wire you intended to be on neutral is really neutral. The whole circuit has to be handled as if it were at 240VAC - including all parts of the battery and button. simulate this circuit With the optocoupler circuit, there's no electrical connection between the battery and the neutral. The battery side of the circuit is safe to handle. An optocoupler is an LED and a phototransistor in one package. They are separated from one another by a small gap such that the light from the LED can trigger the phototransistor, but no current can jump the gap. They are usually specified to withstand a thousand volts or so.
H: What is meant by active and passive in the context of transistors in ICs (Gray/Hurst)? At the start of Chapter 3 in their Analysis and Design of Analog Integrated Circuits, Gray, Hurst, Meyer, and Lewis remark that The next three chapters analyze various circuit configurations encountered in linear integrated circuits. In discrete-component circuits, the number of transistors is usually minimized. The best way to analyze such circuits is usually to regard each individual transistor as a stage and to analyze the circuit as a collection of single-transistor stages. A typical monolithic circuit, however, contains a large number of transistors that perform many functions, both passive and active. It is this last sentence which I'm not following. In what sense is a function "active" or "passive"? Is this a colloquialism for the fact that some transistors are just there to bias etc. others, and these others "actively" do the amplification or whatever the function of interest is? I ask because it seems the use is different than that alluded to in circuits textbooks, as discussed here for example. AI: A MOS transistor (a PMOS, generally) can be used as an on-chip weak pull-up (This is quite a common approach that can be seen in today's MCUs). So the transistor here is used as a passive device. An NMOS with gate and source shorted together can be used as an ESD protection diode which is a passive device (Here, not the MOSFET itself but its body diode is actually used but those two are not considered two independent components). This also is a common method in IC design. You can see a lot of BJTs with base and collector shorted together are used as diodes inside, for example, op-amp chips. That's another passive function There are those who claim that a diode is an active device, but let's ignore it for now. Active functions must be the functions that the transistors are used as active devices such as switches or amplifiers. PS: I don't have the book, so I can't check if it's explained somewhere. But maybe you can.
H: Simulation of pulse through transmission line with IC load with ringing There is a pulse going through a transmission line. There is a phenomenon where, if we put a pulse through a transmission line, then after it reaches the IC we can get ringing instead of a clean pulse. In the LTspice simulation the connector is represented as a 6 V pulse voltage source with a resistor in series. The transmission line is represented as an RLCG network. The load is 2 Ohm because 6V/3A = 2 Ohm. As you can see, I could not get oscillations from this transmission line. I was expecting to see a picture like the one shown at the end; instead, the plot shows saw-tooth shapes and a very stable response. Where did I go wrong with this simulation? Wikipedia Eric Bogatin book on signal integrity Edited: I removed the photos from the books and Wikipedia because I still need to learn how to properly give credit. AI: Where did I go wrong with this simulation? You lumped all the cable's distributed capacitance and inductance into two components, and that won't cut the mustard when looking at discontinuity reflections. They are called reflections, by the way, and although they look like ringing, they are not: - Also, you appear to be expecting a 50 Ω cable but, with the values you used, the characteristic impedance is more like 316 Ω. You can of course use LTspice's built-in transmission line but don't ask me where it is because I don't use LTspice. You also need to put links in your question for the images so you can credit them. Ringing or reflections Here is a simulated 1 metre, 50 Ω transmission line driven by a 5 MHz squarewave having a zero ohm source impedance. The cable has a 300 Ω termination: - Those effects on Vout are not ringing but reflections traveling back and forth down the cable, summing with each other positively and negatively. And this isn't some theoretical effect either. I was shown this at college in the 1980s, where a prof set up an experiment with a few metres of coax just so that we didn't confuse ringing (an LC filter, for example) with the reality of reflections. Here's what happens when the load is significantly lower (10 Ω) than the characteristic impedance of the cable: -
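A quick way to see where a figure like 316 Ω comes from is to check Z0 = sqrt(L/C) against the lumped values used. The values below are assumed purely for illustration (the asker's actual schematic values are not shown), but they give exactly that ratio:

import math

# Characteristic impedance of a (lossless) line: Z0 = sqrt(L/C).
# The same ratio applies to the total lumped L and C of a single-section model.
L = 1e-6     # H, assumed total series inductance
C = 10e-12   # F, assumed total shunt capacitance

z0 = math.sqrt(L / C)
print(f"Z0 ≈ {z0:.0f} ohms")   # ≈ 316 ohms, a long way from the intended 50 ohms

A single lumped LC section also has only one resonance, so it rings like a filter rather than showing the stepped, back-and-forth reflection pattern described in the answer; reproducing that needs a distributed transmission-line element or many cascaded LC sections.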
H: A circuit fulfilling equation I am curious about establishing a circuit that satisfies the equation below: −Vout = 3⋅Vsource + 2000 ⋅ Isource I tried to use an operational amplifier but could not get an output that satisfies the equation with the given voltage and current source values. I would appreciate your help. AI: I have a simulation of a working 2-op-amp design that will provide a function according to the formula you provided. Simulation Here Note: you will want a dual rail (+V/GND/-V) power supply for your op amp! What is happening here is I am converting the current source into a voltage. Using Ohm's law we can see that V = I*R, so choosing a resistance of 2000 ohms we can get the coefficient of 2000. I then buffer the voltage generated at the resistor to prevent any loading from the summing configuration of the next op amp circuit. The last op amp simultaneously sums, amplifies, and inverts the two voltages, V1 = V(source) and V2 = I(source)*2000. It amplifies the two separately, so I have an amplification of 1 (unity) on the voltage produced by the buffer, and an amplification of 3 on V(source) to provide the 3*V(source) portion of the formula. The op amp circuit follows the formula -V(out) = 3*V(source) + 2000*I(source) Check this resource on op amp design Here
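For reference, the transfer function of the inverting summing stage described above is written out here; the resistor names and example values are an assumed illustration, not taken from the linked simulation: $$ -V_{out} = \frac{R_f}{R_1}V_{source} + \frac{R_f}{R_2}V_{sense}, \qquad V_{sense} = 2000\ \Omega \cdot I_{source} $$ Choosing, for example, \$R_f = 30\,\text{k}\Omega\$ and \$R_1 = 10\,\text{k}\Omega\$ (gain of 3 on \$V_{source}\$) together with \$R_2 = 30\,\text{k}\Omega\$ (unity gain on the buffered sense voltage) gives exactly \$-V_{out} = 3V_{source} + 2000 \cdot I_{source}\$.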
H: Second-order Butterworth gain calculation at cut-off frequency The Butterworth second-order low-pass filter has the following transfer function: $$\frac{\omega_c^2}{s^2+\sqrt2s\omega_c+\omega_c^2}$$ We know that at \$s=\omega_c\$, the filter's attenuation is -3 dB, but mathematically, the only way to achieve that using the transfer function is to have a -1 coefficient in front of the \$\omega^2\$ term like so: $$\frac{\omega_c^2}{\omega_c^2+\sqrt2\omega_c\omega_c-\omega_c^2}=\\\frac{\omega_c^2}{\omega_c^2(1+\sqrt2-1)}=\\\frac{1}{\sqrt2}=0.707$$ Thus, the gain at the cut-off frequency is \$20\cdot \log(0.707)=-3\ \mathrm{dB}\$ But every text book I researched shows the last term is +1, making the gain at the cut-off frequency equal to \$20\cdot\log(\frac{1}{(2+\sqrt2)})=20\cdot \log(0.2928)=-10\ \mathrm{dB}.\$ What went wrong? AI: Your mistake is setting \$s=\omega_c\$. Recall that \$s=\sigma + j\omega\$. In the case of sinusoidal signals \$\sigma=0\$ and only \$j\omega\$ persists. In that case, the Butterworth lowpass filter transfer function becomes $$H(j\omega) = \frac{\omega_c^2}{(j\omega)^2+\sqrt{2}j\omega\omega_c+\omega_c^2} $$ When \$\omega=\omega_c\$ we have $$H(j\omega_c) = \frac{\omega_c^2}{-\omega_c^2+j\sqrt{2}\omega_c^2 + \omega_c^2} \Leftrightarrow$$ $$H(j\omega_c) = \frac{\omega_c^2}{j\sqrt{2}\omega_c^2}=- \frac{j}{\sqrt{2}} = \frac{1}{\sqrt{2}}e^{-j\frac{\pi}{2}} $$ which shows that at the cut-off frequency the gain is \$-3 \ \text{dB}\$ and the phase shift is \$-90^\circ\$.
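A quick numerical check of the result above (a minimal Python sketch; ωc is set to 1 rad/s, which is arbitrary because the gain at the cut-off frequency does not depend on it):

import math

# Evaluate the 2nd-order Butterworth response at s = j*wc and confirm -3 dB.
wc = 1.0                 # rad/s, arbitrary
s = complex(0, wc)       # s = j*wc (substituting s = wc is the mistake in the question)

H = wc**2 / (s**2 + math.sqrt(2) * wc * s + wc**2)
print(f"|H(j*wc)| = {abs(H):.4f}")                                    # 0.7071
print(f"gain = {20 * math.log10(abs(H)):.2f} dB")                     # -3.01 dB
print(f"phase = {math.degrees(math.atan2(H.imag, H.real)):.1f} deg")  # -90.0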
H: SNR simulation and "desired output signal" I'm simulating an amplifier and want to know if it is possible to define its SNR. A simulation in LTspice shows that the circuit generates 42 nV/√Hz of output noise density, but this value is independent of the input/output voltage. Obviously the signal-to-noise ratio will vary depending on the output signal. Is there maybe a standardized output voltage in relation to which SNR is usually expressed? I see that audio equipment, like preamplifiers and audio recorders, always indicates its SNR in dB. But in relation to what output voltage? Also, the output voltage will depend on the input voltage, and the overall noise of a circuit should vary according to it, I guess; i.e. thermal noise from resistors should change with input voltage. Does SPICE not take this into account? Probably I'm falling into misconceptions or I am not simulating this properly. Spice directive: .noise V(n001) V1 dec 10 30 15k AI: I see that audio equipment, like preamplifiers and audio recorders, always indicates its SNR in dB. But in relation to what output voltage? Well, the audio company marketing people will always try to ensure that the most favourable conditions apply and therefore quote the greatest value of SNR possible. However, SNR is a property of a signal and not a direct property of an amplifier. Look at it this way: if the input has a poor SNR then the output will have a slightly poorer SNR. Maybe you should look for the term noise factor: this tells you how much an amplifier degrades a signal and is based on the worsening SNR: - Noise factor = \$\dfrac{SNR_{INPUT}}{SNR_{OUTPUT}}\$ Maybe this wiki page might be useful to you: Noise figure (NF) and noise factor (F)
H: What is the point of signal amplification if noise is also amplified? Sorry for my ignorance, but there is immense effort in electrical engineering in the topic of amplification. I can understand the reason for circuits rejecting or filtering noise, but I have difficulty grasping the reason behind signal amplification, such as a single-ended input/output amplifier. If the signal is amplified, the noise is also amplified, so the SNR will remain the same. The SNR might even be worse because the amplifying active circuit will introduce extra noise. Can you give me a reason to amplify, let's say, a 0-50mV noisy signal to the 0-3.3V range with a single-ended amplifier which is coupled to a 0-3.3V 12-bit ADC? Or another example that makes linear amplification important. EDIT UPDATE: After reading the answers, especially Spehro Pefhany's answer, and the comments section, I tried to quantify the estimated SNRs for both cases. I ignored external interference and the ADC's inherent thermal noise, and I also ignored amplifier noise. I only used the 1mV rms signal noise and the uncorrelated ADC quantization error for the calculations. Please see the following equations in the Python code and let me know if there's any mistake:

import numpy as np

ADC_range = 3.3
ADC_resolution = 12
ADC_quant_error = ADC_range / (2**ADC_resolution - 1)

# Without amplification
V_input_sig_range = 0.05
V_input_sig_range_rms = 0.05 / np.sqrt(2)
V_noise_rms = 0.001
V_total_rms_noise = np.sqrt(V_noise_rms**2 + ADC_quant_error**2)
SNR = 20 * np.log10(V_input_sig_range_rms / V_total_rms_noise)

# With amplification
V_input_sig_range_amplified = 3.3
V_input_sig_range_amplified_rms = 3.3 / np.sqrt(2)
V_noise_rms_amplified = 0.066
V_total_rms_noise_amplified = np.sqrt(V_noise_rms_amplified**2 + ADC_quant_error**2)
SNR_amplified = 20 * np.log10(V_input_sig_range_amplified_rms / V_total_rms_noise_amplified)

print(SNR)
print(SNR_amplified)

This gives: SNR = 28.795 dB, SNR_amplified = 30.968 dB. The difference is 2.17 dB AI: In the case of the ADC you should consider the noise the ADC adds. If it's an ideal 12-bit converter and you are using 0-50mV out of a 0-3.3V full scale, you effectively have a 6-bit converter (5.95 bits). The quantization noise (even for an ideal converter) is quite large. Reality will be worse than this.
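As a rough cross-check of the "effectively a 6-bit converter" point, here is a short sketch using the ideal-ADC rule of thumb (6.02·N + 1.76 dB); it is an idealized estimate, not a model of the actual front end:

import math

full_scale = 3.3      # V, ADC input range
signal_span = 0.05    # V, unamplified signal range
bits = 12

# Bits actually exercised by a 0-50 mV signal in a 0-3.3 V converter:
effective_bits = bits - math.log2(full_scale / signal_span)
print(f"effective resolution ≈ {effective_bits:.2f} bits")   # ≈ 5.96 bits

# Ideal quantization-limited SNR for a full-scale sine: 6.02*N + 1.76 dB
for n in (bits, effective_bits):
    print(f"N = {n:5.2f} bits -> ideal SNR ≈ {6.02 * n + 1.76:5.1f} dB")

Amplifying the signal so it spans the full 0-3.3 V range recovers those lost bits of quantization-limited dynamic range; how much total SNR you actually gain still depends on the analog noise already riding on the signal, which is what the asker's ~2 dB estimate reflects.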
H: Small circuits schematics that can teach you something I’m looking for small simple circuits where one could learn something interesting and new about electronics from it. Not just solving circuit examples for solving sake, but small circuits with basic components that one could actually feel to be useful. Some examples: the voltage adder circuit with an op-amp the OR-Diode with two diodes AND/OR gates with mosfets voltage follower with an op amp Integrator with op amp and capacitor I’m looking for small circuit designs, with basic components (no sensors), like those that could teach something very important about circuits. Thank you! AI: You need a copy of the old National Semiconductor Quadzilla applications pamphlet. It has a great collection of circuits and descriptions. And the National Linear Applications data book. Both are online. And there is a masterclass in op-ampery from Analog Devices where they made a lot of stuff for analog computing and low noise instrumentation. Note: I can not find a download of Quadzilla and I have thought about putting them all in KiCAD and sharing. Or at least the LM324 section. https://ia601607.us.archive.org/23/items/fe_Op_Amp_Applications_Handbook_Newnes_Walt_Jung/Op_Amp_Applications_Handbook_Newnes_Walt_Jung.pdf Try this: https://archive.org/details/bitsavers_nationaldaLinearApplications_29136087
H: What is the theoretical maximum capacity of 72-pin RAM modules? I'm asking, because the information on Wikipedia is extremely lackluster, perhaps even incorrect. This is my current understanding: A 72-pin module has 12 address pins, 4 CAS, and 4 RAS pins. (For simplicity, I'm disregarding parity here). Data is 32 bit wide (4 bytes). A read cycle consists of applying a 12-bit row-address, pulling a CAS pin high, then applying a 12-bit column-address, and pulling one of the RAS pins high. In effect, a memory location address is 24 bit wide (12 bit for row, and 12 bit for column). So that's 2^24 = 16777216 = 16M memory locations per one combination of CAS/RAS selection. However each module allows up to four CAS and RAS pins. Each time, only one one of each must be strobed. So one out of 4 CAS and one out of 4 RAS, giving 16 combinations. With those, that's the 16M locations from above, times the 16 combinations of CAS/RAS strobes, resulting in 16M × 16 = 256M locations. Since each location applies a 32-bit word on the data bus, that results in 256M × 4 byte = 1024MB (or 1GB) of capacity. That's the theoretical maximum capacity I understand is possible. On the Wikipedia page, it's however mentioned that only a maximum of 128MB was ever possible (32M locations). I don't understand how they arrive at that conclusion, but that aside, I wonder if there's something wrong with the way I understand addressing. I am aware, that smaller modules either disregarded the MSBs of some of the address pins, or used only two RAS pins, but that doesn't answer the question for the theoretical maximum. I'm also aware, that many mainboards didn't support modules larger than 128MB (or even 64MB). As another reference I used the datasheet for one of those modules made by Micron Technology Inc. AI: Did you look at the functional block diagrams in that datasheet? The four CAS# lines are used to enable the four bytes of a 32-bit word.1 Furthermore, two RAS# lines are used for each 32-bit word. This is one "bank" of memory, and the other two RAS# lines can be used to enable a second bank.2 In that particular datasheet, they're only using 11 address bits, so they can address 222 words per bank for a total of 16 or 32 MB. If you have a module that uses all 12 address bits, then you can address 4× as much memory, which is 64 MB for one bank or 128 MB for two banks. Also, the basic timing sequence is: Apply row address. Assert RAS# low. Apply column address. If write cycle, also assert WE# low and drive the data. Assert CAS# low. If read cycle, capture data after appropriate delay. Negate RAS# and CAS# (high). 1 This is what makes the memory byte-addressable without having to do read-modify-write cycles. 2 I believe it was done this way so that each RAS# and each CAS# line would be connected to at most 4 physical chips. It also means that the module can be wired up directly to a 16-bit bus as two or four 16-bit banks.
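The capacity arithmetic in this answer can be condensed into a few lines (a sketch of the reasoning above, not of any specific module):

# 72-pin SIMM capacity under the addressing scheme described above:
# the CAS# lines select bytes within a 32-bit word, the RAS# pairs select banks.
word_bytes = 4          # 32-bit data bus; the 4 CAS# lines are byte enables

def capacity_mb(address_bits, banks):
    words_per_bank = 2 ** (2 * address_bits)   # row address + column address
    return words_per_bank * word_bytes * banks / 2**20

print(capacity_mb(11, 1), "MB")   # 16.0  - datasheet module, one bank
print(capacity_mb(11, 2), "MB")   # 32.0  - datasheet module, two banks
print(capacity_mb(12, 2), "MB")   # 128.0 - all 12 address bits, two banks

The 1 GB figure in the question came from treating the 4×4 RAS#/CAS# combinations as extra address bits; since the CAS# lines are byte enables within the 32-bit word and the RAS# pairs select one of two banks, only a single factor of 2 is available on top of the 24 row/column address bits.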
H: Can you replace ceramic or tantalum capacitors with aluminum electrolytic capacitors? I have a question about the hookup for my 12VDC to 5VDC step down module - the K7805JT-500R3. It states in the datasheet that ceramic capacitors are what can be used but it also states that tantalum capacitors can also be used. It never says anything about being able to use aluminum electrolytic capacitors. Are there any issues with me using aluminum electrolytic capacitors instead of the ceramic or tantalum capacitors? AI: "Can" yes, there are situations where that can be done. When, is the real question. That question in turn, is answered jointly by these: Ceramic (MLCC) versus Tantalum capacitors Why do some switching regulator require tantalum or electrolytic capacitors instead of ceramic? When replacing ceramic with electrolytic for power supply purposes, the value should probably be larger, so as to keep the ESR low. By how much, depends on the ripple current drawn, acceptable ripple voltage, and maximum capacitance limits of the supplies/converters used. The datasheet gives a maximum 680µF load, and your 12V source may have a maximum figure as well. The datasheet does not indicate whether ESR is required for stability or EMI, nor how applications might be concerned about ESR. It's probably safe to assume they're concerned about ripple, but beyond that, I don't know. The values probably should not be less than specified. Any value within these ranges should suffice. Do mind any other limitations not defined here (e.g. required PDN impedance range for downstream circuitry, etc.).
H: Measurement of resistance of the Earth From "Learning the Art of Electronics" by T. Hayes and P. Horowitz (Cambridge University Press, 2016): If we attempt to use a digital multimeter to measure the resistance between A and B, the measured value would be hopelessly skewed due to "loading" effects [internal resistance of the multimeter is much smaller than that of the Earth, assumed to be composed of dry sand (a good insulator)] Is it possible to measure the resistance in this example by using some instrument which has internal resistance which is much higher than the "true" resistance of the Earth ? AI: Yes. One can measure very large resistances in the TΩ range and above with suitable instruments, for example for insulator characterization. However, such materials usually do not have a constant resistance. It strongly (orders of magnitude) depends on temperature, humidity/chemical environment and on voltage. Therefore, the earth does not have a well-defined specific resistivity. Think of salty ocean, moist soil, granite rocks and dry desert sand.
H: Can someone explain to me why the math for PCIe bandwidth doesn't add up? Since PCIe g1 x1 is based on the PCI 32/66 bandwidth of 2133.33 Mb (1066.67 Mb * 2), theoretically PCIe g3 x1 should be about 8.533 Gb if we simply double the bandwidth every generation. However, this is not actually the case, as PCIe g3 x1 is recorded as 7.8768 Gb, which is only a 1.8x scaling from PCIe g2 x1. Can someone explain to me what I'm missing here? Is it a clock-rate thing or a physics thing that's the reason for this? AI: The Comparison table on the Wikipedia PCI Express page has the following entries relevant to one-lane operation from generation 1 to generation 3:

Version  Line code   Transfer rate per lane  Throughput (x1)
1.0      8b/10b      2.5 GT/s                0.250 GB/s
2.0      8b/10b      5.0 GT/s                0.500 GB/s
3.0      128b/130b   8.0 GT/s                0.985 GB/s

So, in going from generation 2 to generation 3: The line code changed from 8b/10b to 128b/130b, which is 1.2307 times more efficient (in terms of less overhead per bit of data transferred). The transfer rate per lane increases from 5.0 to 8.0 GT/s, which is 1.6 times faster. Multiplying the above means generation 2 to generation 3 gives 1.969 times more bandwidth. From the question: However, this is not actually the case, as PCIe g3 x1 is recorded as 7.8768 Gb, which is only a 1.8x scaling from PCIe g2 x1. I'm not sure how the 1.8x scaling from PCIe g2 x1 to PCIe g3 x1 in the question was determined, since from the above analysis I can't seem to reproduce it.
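The same arithmetic in a few lines of Python (raw per-lane figures only; packet and protocol overhead above the line code is ignored):

# Per-lane PCIe throughput = transfer rate * line-code efficiency.
gens = {
    "1.0": (2.5e9, 8 / 10),     # GT/s, 8b/10b
    "2.0": (5.0e9, 8 / 10),
    "3.0": (8.0e9, 128 / 130),  # 128b/130b
}

throughput = {}
for gen, (rate, eff) in gens.items():
    throughput[gen] = rate * eff / 8       # bytes per second
    print(f"gen {gen}: {throughput[gen] / 1e9:.3f} GB/s per lane")

print(f"gen2 -> gen3 scaling: {throughput['3.0'] / throughput['2.0']:.3f}x")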
H: Is it possible to drive a relay with 220VAC coil without a triac? Can I drive this relay RT314730 directly with MOC3041M? I want to save some space and money. AI: It will probably work. You may need an RC snubber to get it to commutate reliably with an inductive load like that. Keep in mind that 400V is a bit marginal with 220VAC mains anyway, so I would suggest MOC3163 series which has better dv/dt immunity and 600V breakdown rating. And I'd still use a snubber.
H: Balancing inductive and capacitive phase shift The following statements are results of my research: If a circuit contains an inductor or capacitor there is reactive power. The vector sum of reactive power and real power is the apparent power. To make an electric device work we need to transmit the apparent power via the grid. To minimize the need to install transmission infrastructure we want to reduce reactive power, because this also reduces apparent power. Inductive reactive power and capacitive reactive power cancel each other out. In a grid both kinds of reactive power are generally not balanced. Special stabilization service providers are paid to do the balancing. Question: Can we balance reactive power right within devices in order to relieve the grid and make the balancing service providers redundant? Why isn't this currently done? AI: Can we balance reactive power right within devices in order to relieve the grid and make the balancing service providers redundant? Why isn't this currently done? Because it costs money; people need to be motivated by financial incentives or laws. In many areas, factories are penalized by the electric company if their power factor is poor. For them, it pays to add capacitors to improve the power factor. For residential devices, the EU has passed laws that require manufacturers to meet certain power factor requirements for some devices; see the last paragraph in this section: https://en.wikipedia.org/wiki/Power_factor#Switched-mode_power_supplies
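A small numeric example of that power-triangle relation (made-up load values, purely to illustrate the vector sum and why local compensation relieves the grid):

import math

# Power triangle: S = sqrt(P^2 + Q^2), power factor = P / S.
p = 1000.0   # W,   real power (assumed example load)
q = 750.0    # var, reactive power before any compensation

s = math.hypot(p, q)
print(f"apparent power S = {s:.0f} VA, power factor = {p / s:.2f}")

# Adding capacitors that supply 750 var locally cancels the inductive var:
q_compensated = q - 750.0
s2 = math.hypot(p, q_compensated)
print(f"after compensation: S = {s2:.0f} VA, power factor = {p / s2:.2f}")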
H: Find Thevenin's voltage How could I find Thevenin's voltage in this circuit? I can't use KVL since i don't know the voltage rise of the current source after removing the load. AI: In comments you asked, Would it be possible to further simplify this circuit? You can further simplify this circuit by removing the voltage source and resistor, leaving just the current source. Any elements in series with an ideal current source have no effect on the behavior of the circuit. So, back to your main question, How could I find Thevenin's voltage in this circuit? You cannot. This circuit behaves exactly like an ideal current source, and therefore it does not have a Thevenin equivalent. Note: I am ignoring the ground symbol in your schematic, assuming it is only there to make the Falstad simulator happy. If it is an intentional part of the design then you can not form a thevenin equivalent for this circuit because it is not a 1-port network --- the ground symbol gives a third connection point that might affect any external circuit that also has a ground symbol.
H: Delta to Y with sources? I'm wondering if this transform is possible: I tried doing a source transformation to change this to a regular delta network with two current sources going to nodes a and b, but wasn't sure how to proceed, since we don't know what could be happening beyond nodes a and b. EDIT (important!): I forgot to mention that $v_a$ and $v_b$ can be connected to anything, so the transformation has to be valid whatever is happening at and beyond $v_a$ and $v_b$. In regular delta-to-y that is possible. Is it possible here? AI: You know a transform under which voltage sources with series resistors become current sources with parallel resistors. That transforms your left circuit into something you can already solve. Done! The end result is unambiguously impossible to transform into your right circuit.