H: What are the assumptions involved in the below rectifier output waveform? The below waveform shows that the peak current occurs at the instant the diode starts conducting. Why does the peak current not happen at the maximum input voltage point? Usually the rectifier output caps are large in value, making the output voltage more or less constant. Current = (Vin - Vc)/R1 (neglecting the diode drop) -> As Vin is increasing, shouldn't the current also keep increasing? What am I missing here? AI: When the diode starts conducting and dumping current into the capacitor, the rate of change of the AC voltage is much bigger than when the AC waveform hits the top of its peak. Given that capacitor current is proportional to \$\frac{dv}{dt}\$, it follows that the biggest peak in the capacitor current occurs when the diodes start conducting. Consider this simpler scenario: - Image from here showing how, when an AC voltage is applied to a capacitor, the maximum current occurs when the voltage waveform is at its maximum \$\frac{dv}{dt}\$ AND zero current flows when \$\frac{dv}{dt} = 0\$.
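To put a rough equation behind this (an illustrative sketch, not part of the original answer): for \$v(t) = V_p \sin(\omega t)\$ the capacitor charging current is \$i_C = C\frac{dv}{dt} = \omega C V_p \cos(\omega t)\$. The diode begins conducting at the point where \$V_{in}\$ first exceeds the capacitor voltage, which is before the crest of the sine wave, where \$\cos(\omega t)\$ is still relatively large; right at the crest, \$\cos(\omega t) = 0\$ and the charging component of the current falls away. So the current spike naturally sits at the start of conduction rather than at the maximum input voltage.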
H: Earthing requirement according to power supply I was wondering whether the requirement for earthing depends on the input power rating or not. If a device is powered using a 24V, 1.5A adaptor, is it mandatory to provide earthing in the product? Kindly provide the range of inputs for which earthing is required. AI: It's way more complex than that. Earthing requirements in fact depend on the local laws. In most parts of the world, if the equipment is not of a peculiar type (like medical), it is not required to provide an earth for this kind of load as long as you ensure that double insulation is provided (and that's a can of worms in itself). If you are unsure about your safety requirements, the best course is simply to buy a ready-to-use, pre-certified power supply.
H: Identify this cable connector Please help me identify this cable connector by part number/source. 4 pins crimped inline in series on the cable 4 mm pin spacing 18 AWG wire Has the letters: VEN AI: We used to call those "Panduit" IDC connectors, because that was the popular brand. IDC stands for Insulation Displacement Connector, which basically means the wire is pushed into a forked contact that cuts into the insulation, so you don't have to strip the wires and they can be easily daisy-chained by just punching them down along a set of wires. The one pictured appears to have a piece on the top that pushes the wires all down at once and then clips in place. They aren't as popular as they once were, but you can still find them made by a number of manufacturers. I've seen a couple of different sizes; probably the most popular pin spacing is specified as 0.156", which would be right about 4 mm.
H: Maximum current in a relay I have this datasheet for a Goodsky relay. It has 4 poles. The datasheet mentions that the contacts are rated for 5 A. Is this the rating of a single pole? Does this mean the relay can switch 20 A if all the poles are connected to a single load? In the banner it mentions 2P and 4P are "available with a switching current of 5 A". In the main text it mentions carrying current is 5 A. Does this mean that the carrying current and switching current are the same for this relay? Edit. The load to be switched will be mostly a combination of household appliances for lighting, cooling (refrigerator, air conditioning), etc. The relay would be ON all the time except in the case of a fault trip. AI: The rating is for a resistive load. You cannot parallel contacts to a single load to increase capacity. You should not exceed 5A, and de-rate for unpleasant (non-resistive) loads and/or if long life (in terms of number of operations) is required. For example, it's not unusual to run a "5A" relay at 0.5A or 1A so that it lasts for a few years rather than a few months of continuous switching operation. For example, if the relay switches 1.5x per minute (say a controller application) 24/7 it will last only about 6 weeks at 5A even with a benign electric heater load. It's rare to see relays rated for higher carrying current than switching current, even though some can handle higher current. This particular construction has all the load current flowing through a flexible wire, which may be the limitation. Basically, switching current is limited by the contact materials and construction as well as the power used to move the contacts, and the carrying current is limited typically more by the flexure(s) in a relay. Once gold-plated contacts have switched high power they are no longer useful for switching tiny currents. The relay is from a major and long-standing Taiwan maker, and has a valid UL file number. It's not really a "power" relay though, more of an automation relay. If you want to switch amperes, I suggest a power relay. The datasheet should have curves showing the (typical) life for different load types and load currents.
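To sanity-check that six-week figure (an illustrative calculation assuming a typical rated electrical life of around 100,000 operations at full load; that rating is not stated in the original answer): \$1.5 \times 60 \times 24 \approx 2160\$ operations per day, so \$100{,}000 / 2160 \approx 46\$ days, i.e. roughly 6 to 7 weeks. Substitute the life figure from your own relay's datasheet curves for the assumed 100k value.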
H: Which resistor/solution should I use? I have (had) a UPS which got wrecked during a flight. I disassembled it and found something broken. I don't really know which resistor I should install or how I could fix it. The other resistors are 1M ohm, the broken pad doesn't make contact with the pins, but there are multiple pads which make contact with the pins. Edit 2: New image of the crack. Should I go with the 1005 SMD? It looks more common than the 1004. Edit 3: I think they're the 1206 since the length is almost 3mm (around 2.6mm) and the width is around 1.5mm. Should I buy a 1206 SMD with 1M ohms and 1004 code, or something like that? AI: The circuit board has been fractured, probably due to something heavy soldered into it or some pressure or impact from outside if shipped without housing. You should fix that with some epoxy-glass splints and epoxy so the crack does not further propagate. The crack does not appear to go across the wide trace on the left of the board yet. The resistor, as jp314 says, is 1M, most likely 1% tolerance. You can determine the size by measuring one of the remaining parts while the part is missing (I would also double check by measuring one to verify it is 1M\$\Omega\$). Here are the common SMT resistor sizes, table from here. Resistor size is related to both power dissipation and voltage capability. A generic replacement will almost surely work, but I would keep to a similar size and the same value of part:
H: Use ESP8266 to power (HF165FD) relay I am working on a project where I need to remotely switch the power for my dryer. I am using an ESP8266 which outputs 3.3V to the LLC which switches the 5V relay to the dryer. I am having problems powering the relay. If I connect the 5V wire from the ESP8266 to the relay directly the relay works (audible click), but when I add in the LLC it does not work. The LLC should be taking the 3.3V signal D1 and outputting a 5V signal that can power the relay, but that is not happening. I checked that a signal is coming from D1 as that powers an LED. The wire from H1 also powers an LED so the LLC is connected. I'm more of a software guy and this is my first hardware project. Any suggestions appreciated. AI: Sorry, but you are taking the incorrect approach with the level converter. It is designed for logic, nothing more. Use a 1-channel relay module with optocoupler isolation; there are many varieties available. They can be purchased for under $6 US and some even support high- and/or low-level triggering. Use that relay to switch your power relay. Microprocessors and logic devices in general tend to fry if you switch inductive loads such as relays directly.
H: How is it possible for a MOSFET driver to increase both current and voltage? Where does the extra power come from? I am learning about MOSFET drivers and I came across this article here. In the article it says "A MOSFET driver IC translates TTL or CMOS logical signals, to a higher voltage and higher current". I don't understand how it is possible to drive both a higher voltage and higher current at the same time. If Power = Voltage * Current, how is it possible to increase voltage without decreasing current and vice versa? Shouldn't you only be able to increase one while decreasing the other? If someone could explain how we can acheive a higher voltage and current in a simple driver IC I would appreciate it. AI: I don't understand how it is possible to drive both a higher voltage and higher current at the same time. If Power = Voltage * Current, how is it possible to increase voltage without decreasing current and vice versa? Like any amplifier, the gate driver takes some power from another source (the power supply pin) and uses it to deliver more power to its output than is provided to its input.
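As a rough illustration (the numbers are chosen for the example, not taken from the article): a logic output might deliver a 3.3 V signal at a couple of milliamps, roughly 10 mW, while the driver's output stage, fed from its own 12 V supply pin, can momentarily push an amp or more into the gate, a peak of several watts. The extra power is drawn from the driver's supply rail; Power = Voltage × Current is only violated if you forget to count that third terminal.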
H: How is the input capacitance and output capacitance different in a MOSFET? I was reading this data sheet on a MOSFET. In this datasheet it says the "input capacitance" is 1470 pF, and the "output capacitance" is 360 pF. How is it possible to have a different input and output capacitance? Shouldn't it be the exact same capacitance for both the input and output because you have the exact same capacitor in both scenarios? AI: "because you have the exact same capacitor in both scenarios?" And which two terminals are you imagining each capacitor being across? Because the two terminals that are the input of a MOSFET are different than those that are the output. https://www.semanticscholar.org/paper/The-effects-of-MOSFET-output-capacitance-in-high-Gauen/6bb30165d5dc900c2c4fac43143558a31064f9c7
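For reference (these are the standard datasheet definitions, summarised here rather than quoted from that particular datasheet), the three inter-terminal capacitances \$C_{gs}\$, \$C_{gd}\$ and \$C_{ds}\$ combine differently depending on which port you look into: \$C_{iss} = C_{gs} + C_{gd}\$ (input capacitance, gate to source with the drain AC-shorted to the source), \$C_{oss} = C_{ds} + C_{gd}\$ (output capacitance, drain to source with the gate shorted to the source), and \$C_{rss} = C_{gd}\$ (reverse transfer capacitance). Since \$C_{gs}\$ and \$C_{ds}\$ are physically different capacitances, \$C_{iss}\$ and \$C_{oss}\$ are generally not equal.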
H: Would writing to the same flash address reduce the lifespan of an MCU? I am using an STM8 microcontroller. Since the vector table is in flash memory, I need to write to flash memory in order to create and use the vector table. Since addresses of vector tables are fixed, my code writes the same values to the same memory addresses of the flash memory on every startup. Since there is no flash memory erase call in the code, I think that after the first write process it won't do anything anymore. Would it reduce the life of the flash memory if I write the same values to the same flash memory addresses on every startup/reset, without erasing? AI: The physics, electronics, and programming of how flash memory really works English ambiguity: From this chat, I learned that English has some ambiguity. "Write 7 to byte #1", and "write byte #1 to 7", can mean the exact same thing in English despite having seemingly opposite directions. I am a programmer and think in code sometimes, so If you have a write operation to write a decimal 7 to address 0x01, I would write it like this in code: write(0x01, 7); // write(address, value) In my wording below I would describe that as: "write byte 1 to 7", although more-clear English might say "write a 7 to byte 1". So, when I say "write a binary 0b11111111 (all bits at HIGH voltage) to 0b00000000 (all bits at LOW voltage)", I mean this: // some address already has a `0b11111111` in it, meaning you previously did // this: erase(address); // erase 4 KiB starting at address // Now you are going to try to write this same address to a `0b00000000` // instead, like this: write8bits(address, 0b00000000); // write(address, value) Another wording might be: "try to change or program a byte which contains binary 0b11111111 (all bits at HIGH voltage) to 0b00000000 (all bits at LOW voltage)". As you read my descriptions below, just remember I'm thinking in code, and speaking kind of like I write code. Quick answers and summary: Since addresses of vector tables are fixed my code writes same values to same memory address of the flash on every startup. Study what I wrote below. No new damage occurs after the initial erase operation and first write operation. My question is would it reduce the life of flash if I write to same flash memory address on every startup/reset without erasing? If you're writing the same exact thing to the same exact flash memory every time, with no erase cycles in between, no new damage occurs after the first write. Only the first write would actually discharge any bits (capacitors), thereby causing damage just that one time. Here's the big take-away: write (frequently called "program" in datasheets) can only discharge. It has such a precise granularity that it can discharge single bits. Erase can only charge, and it has a horrible granularity. It can only charge sectors (ex: 4 KiB of flash memory in the Winbond W25Q128JV chip below), blocks (ex: 16 sectors, or 64 KiB in the Winbond W25Q128JV chip), or the whole chip (ex: 16 MiB in the Winbond W25Q128JV chip) at once. Sectors are generally many kilobytes. A "write" operation can never charge bits, only an "erase" operation can. Here are some useful quotes of mine from the comments below this answer. Remember too: whether 0b11111111 represents all bits at HIGH voltage or at LOW voltage is rather arbitrary. 
The Winbond W25Q128JV chip below says that all bits at HIGH voltage (freshly erased) will be represented as 0b11111111, but in most of my answer I use the opposite convention, so just pay attention to that too as I talk: Here is my exact quote: But, if you try to write a binary 0b11111111 (all bits at LOW voltage) to 0b00000000 (all bits at HIGH) voltage, nothing happens! Writing can NOT charge the cells, only an erase operation can! You end up with 0b11111111 still. No damage to your cells occurred. You got a no-op. That quote is correct as-written. But, whether a charged cell is a 0 or a 1 is rather arbitrary, and is up to the manufacturer of the flash memory [although, I now realize that erasing to a 0b11111111 is probably more common than erasing to a 0b00000000]. So, if your flash memory uses an opposite convention, then this would also be correct as-written: If you try to write a binary 0b00000000 (all bits at LOW voltage) to 0b11111111 (all bits at HIGH) voltage, nothing happens! Writing can NOT charge the cells, only an erase operation can! You end up with 0b00000000 still. No damage to your cells occurred. You got a no-op. That is also correct as-written. Details: My question is would it reduce the life of flash if I write to same flash memory address on every startup/reset without erasing? Let me add some insight here. My crude understanding below gets the general principles, usage, & behaviors across, but if you really want to learn more about the physics of it, you might start by studying this article here: https://en.wikipedia.org/wiki/Flash_memory. Here is my understanding of how flash memory really works: Erase: Flash memory is memory stored in microscopic capacitors. An "erase" charges all cells to a HIGH voltage, which, depending on your mcu, may be considered a binary zero (0) or a binary one (1). For the rest of my analysis below, however, let's assume a bit which is charged to a HIGH voltage is considered a binary zero (0). An "erase" cycle damages the atomic structure of the capacitor cell wall as electrons blast through it in the charging process. Erase cycles are done on entire flash memory "pages", or large chunks of memory usually many kilobytes in size, all at once. An erase is an expensive operation, both in terms of energy used and time, since a bunch of charge pumps must pump up the tiny mcu voltage to a high voltage which can charge all cells of an entire flash memory page at once in one large flood of energy to charge the capacitor bank. Write: A "write" can be done on a granular word (ex: 4 bytes) or byte level (granularity depends on the mcu), rather than on a large flash memory page level which is usually kilobytes. It is fast since all it has to do is short a single bit or several bits to ground to discharge the capacitors to change a HIGH voltage (0) to a LOW voltage (1). "Writing" a 1 to a bit simply discharges that bit from HIGH (0) to LOW (1). It only changes anything if the bit was "erased" previously to charge it to a HIGH (0) in the first place. Discharging a bit from HIGH to LOW is also a destructive process since it blasts electrons through the capacitor wall as they discharge since you shorted that cell to ground, removing matter at an atomic level as the electrons blast through. "Writing" a bit to a 0 (HIGH voltage) does absolutely nothing. It is a no-op (no operation). It is the erase cycle that sets that bit to a 0 (HIGH). 
It's like this: Writing a flash 0 (HIGH) --> 1 (LOW) = a charged bit is discharged by shorting it to ground; this is a molecularly destructive operation. Writing a flash 0 (HIGH) --> 0 (HIGH) = nothing happens (writing to HIGH is a no-op); the bit is already charged, and writing cannot charge a bit; only erase can. The bit remains HIGH (0). Writing a flash 1 (LOW) --> 0 (HIGH) = nothing happens (writing to HIGH is a no-op); writing cannot charge a bit; only erase can. The bit remains LOW (1). Writing a flash 1 (LOW) --> 1 (LOW) = the bit is shorted to ground to discharge it to LOW (1), but since it was already discharged, nothing happens. There are no electrons wanting to move, so this is a non-destructive operation, as the "damage" already occurred previously when this bit was discharged from a 0 (HIGH) to a 1 (LOW). Make sense? So, if you try to write a binary byte 0b00000000 (all bits at HIGH voltage) to 0b11111111 (all bits at LOW voltage) then it discharges all bits by shorting them to ground, bringing their voltage from HIGH to LOW and you end up with 0b11111111. This damages the capacitor cells. If you try to write that byte to 0b11111111 again, it shorts all of their cells to ground again, but nothing happens since the cells were already discharged! No new damage occurs. But, if you try to write a binary 0b11111111 (all bits at LOW voltage) to 0b00000000 (all bits at HIGH) voltage, nothing happens! Writing can NOT charge the cells, only an erase operation can! You end up with 0b11111111 still. No damage to your cells occurred. You got a no-op. Therefore, damage occurs when you discharge a bit, and when you charge a bit. Charging a bit is done through an erase cycle, so you can just count how many times you erased as an estimate of the number of "damage cycles" you've imposed upon the flash memory capacitor cells. Standard flash memory (erasable in groups of large "pages") is generally rated up to 10,000 "write-to-1 (LOW; via discharge) / erase-to-0 (HIGH, via charge)" cycles. Refer to your datasheet for your exact number. EEPROM (erasable at a very granular byte or word level) flash memory is generally more robust and can handle 100,000 write/erase cycles. Refer to your datasheet for your exact number. Now, with this knowledge, is your "writing" really doing anything at all without an erase first? Are you really causing any damage? If you cause it during the first write, will subsequent writes cause more damage? You should have the tools to know those answers now. Real-life example: Winbond W25Q128JV 16 MiB flash memory chip ($5 on a breakout board from Adafruit) I just consulted the datasheet for the Winbond W25Q128JV 16 MiB flash memory chip, and the terms it uses are "Page Program" (pg 36 as listed at the bottom of a PDF page), and "Sector Erase", "Block Erase", or "Chip Erase". The smallest quantity of bytes you can "program" at once is 1 byte, and the smallest quantity of bytes you can erase at once is 4 KiB. So, their terms are "program" and "erase". I simply chose to say "write" and "erase". Their erased state is all 1s (0b11111111 in each byte). Therefore, "programming" can only discharge bits from 1 (a HIGH voltage level) to 0 (a LOW voltage level), and "erasing" can only charge bits from 0 (a LOW voltage level) to 1 (a HIGH voltage level). "Programming" can never change a bit that is a 0 back to a 1, no matter how hard you try. :) Here is the "block diagram", or flash memory layout, of the Winbond W25Q128JV, from pg 10 of its datasheet.
You can see in it that this particular chip has 256 blocks (numbered 0 to 255), where each block is 16 sectors (numbered 0 to 15), and each sector is 4 KiB. So, that makes the whole chip 4 KiB x 16 x 256 = 16384 KiB. Divide that by 1024 and you get 16 MiB. The smallest granularity you can program ("write") to this chip is 1 byte via the "Page Program" (0x02) instruction on p36, and the smallest granularity you can erase is 1 sector of 4 KiB via the "Sector Erase" (0x20) instruction on p38. So, if you have 4 bytes that are erased to 0xFFFFFFFF (all 1s), and then you "program" them to 0xDEADBEEF (0b 11011110 10101101 10111110 11101111), they will now contain 0xDEADBEEF (0b 11011110 10101101 10111110 11101111). If you then "program" them to 0xBEEFDEAD (0b 10111110 11101111 11011110 10101101), however, those 4 bytes will NOT contain that. Rather, they will change to 0x9EAD9EAD (0b 10011110 10101101 10011110 10101101), as shown here: // look for all of the attempted changes from 0 to 1. // Programming can't do that. Only erasing can. So, they // remain 0s. // 1. Starting point after "erasing" is 0xffffffff. // 2. Attempt to program to 0xdeadbeef, and you get it! // 3. Attempt to then program to 0xbeefdead withOUT erasing again first, // and you do NOT get it! // 4. You end up with 0x9ead9ead instead. 11111111 11111111 11111111 11111111 ffffffff // 1. 11011110 10101101 10111110 11101111 deadbeef // 2. 10111110 11101111 11011110 10101101 beefdead // 3. -------- -------- -------- -------- -------- 10011110 10101101 10011110 10101101 9ead9ead // 4. The code in C to attempt the above might look like this: uint32_t addr = 0; erase(addr); // erase 4 KiB starting at address // `addr` to `0xffffffff` (all binary 1s) program32bits(addr, 0xdeadbeef); // addr now contains `0xdeadbeef` program32bits(addr, 0xbeefdead); // addr now contains `0x9ead9ead`
H: What does T_L mean in a diode datasheet? I am trying to compare different datasheets of the "same" component. I do not undersand the term \$\text{T}_\text{L}\$. Another datasheet: AI: TL is the Lead Temperature. This is sometimes but not always defined in the datasheet.
H: SPI level translator I'm planning to connect an IT8951 to an ESP32 through SPI. The issue is that the ESP32 module that I'm using has an output of 1.8V and the IT8951 works with 3.3V SPI. What kind of circuit should I use for boosting the voltage? Is a dedicated IC necessary? Thanks in advance. Edit 1: Removed wrong transfer speed, reformulated question AI: You don't need a level shifter. Just use SPI2 or SPI3 (not SPI0/SPI1, which are normally used for the main flash memory) with appropriate GPIO pins. These pins are 3.3V logic. ESP32 Technical Reference Manual p. 51 The green pins are the ones normally connected to the flash, which can run at 1.8V (VDD_SDIO) if the chip is strapped accordingly. This is the VDD_SDIO power domain. The blue and yellow pins are in the VDD3P3_CPU and VDD3P3_RTC power domains (both nominally 3.3V).
H: Can I hook up an LED to the power lines of a USB-A connector, in parallel, and expect the USB's function to be unimpaired? The title covers most of the question, but here's a simple schematic anyway: The specific LED that I have is a 5V, 12mA red diode. For this example, assume a USB 2.0 connection, which as per the datasheet provides a maximum of 500mA. So another way of looking at this is: what is the minimum amount of current necessary for a USB port to have normal, unimpeded functionality, i.e. not result in any data loss? Are there other types of interference that such an LED may introduce, which would impact USB functionality? AI: There should be no problem with that. The power supply won't know what is drawing the current. Any other USB device you attach won't know that something else is using some of the power budget. The only time you'd have trouble would be if the load drew close to 500 mA. The 500 mA is likely to have a wide tolerance and at current limit the voltage is likely to start to droop to 4.9 V, etc., rather than shut off suddenly at 500.00 mA. I wouldn't be surprised if you found that you could pull 600 mA without difficulty and I'm sure that many USB hubs that don't have external 5 V supplies are often overloaded and draw more power from the USB source than they should.
H: What is the impact of different values on the internal capacitor in the magnetics I am using this connector which has the internal magnetics. I am connecting it to an Ethernet PHY device. In the datasheet, we can see that it has internal capacitors of 100nF. I went through this connector (CIRCUIT E) which has internal capacitors of 10nF. Can someone tell me what would be the impact of the 10nF and 100nF capacitors? What would be affected if we increase or decrease the value of these capacitors? AI: The capacitors in question are decouplers for the centre-taps of the ethernet transformers and are not critical in value. Similarly, many designers use 100 nF decouplers on ICs on a PCB and there are many other designers who use 10 nF decouplers for each chip. The centre-taps do need to be decoupled though: - Image from here. What would be affected if we increase or decrease the value of these capacitors? It's a bit of a Goldilocks story really. Too high in value and the capacitor's self-resonant frequency will turn it into a net inductance at frequencies coincident with data frequencies and this could reduce data integrity. Too low in value and they are ineffective at centre-tap decoupling at data rate frequencies. Here's an example of different capacitors vs their self-resonant frequency: - Image from here - useful reading. So, roughly speaking, you want a capacitor that has its lowest impedance in the 10 MHz to 100 MHz range for normal ethernet data.
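As a rough illustrative calculation (the ~1 nH parasitic inductance is an assumed typical value for a small SMD capacitor, not a figure from either datasheet): the self-resonant frequency is \$f_{SRF} = \frac{1}{2\pi\sqrt{LC}}\$, so a 100 nF part with about 1 nH of ESL resonates near \$\frac{1}{2\pi\sqrt{10^{-9}\times 10^{-7}}} \approx 16\$ MHz, while a 10 nF part with the same ESL resonates near 50 MHz. Both land inside the 10 MHz to 100 MHz band mentioned above, which is why either value is commonly seen on centre-taps.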
H: Should I connect battery ground to the metal enclosure I'm aware in the automotive world the chassis of the vehicle is used as the return path to the battery negative however I am building an electric-powered cart running on 48V lithium batteries. The batteries come pre-built within a plastic enclosure with an Anderson connector. The batteries will be fitted on the cart which consists of a metal frame sheet metal body. I do not intend for the frame to carry current. My reasons for not are: By not connecting the frame to the negative, a short circuit would require both pos and neg to come into contact with the frame. A user should not receive a shock should they come into contact with a fault arising in an energized frame/body/accessory/cable as there is no path back to negative. Is there a benefit in this application to connecting the frame to the battery negative? AI: At 48 V, shock hazard is minimal. Your batteries can probably deliver an embarrassingly large current in the event of a short circuit. Having the battery and the rest of the drive electronics isolated from the chassis means that you would need two faults to chassis rather than one to get a short. You may find that an isolated chassis can enhance some frequencies of EMI coming from the electronics. If you are building a hobby one-off, you'll probably get away with this. If you're designing carts for a production run, then get this aspect checked out with a test house before you burn too much development money.
H: Presence of bandwidth in capacity formula I'm looking at two papers https://ieeexplore.ieee.org/abstract/document/9370130 and https://ieeexplore.ieee.org/abstract/document/8855846. In the first paper, achievable rate is w_n is bandwidth. p_n is transmit power. h is channel gain. And in the second paper, achievable rate is P is transmit power. H is channel gain. What I'm curious about is the difference between these two otherwise-identical formulas: the presence of bandwidth. In the first formula, there is a bandwidth term, but in the second formula there is no bandwidth in the denominator of the fraction. What's the difference? AI: \$N_0\$ is the noise density, something that gives power per bandwidth, and since power is energy per time, and time is the inverse quantity to bandwidth, \$N_0\$ is an energy. It makes no sense to divide a power \$P_i\lvert H_i\rvert^2\$ by an energy, and add it to 1, because you can't add a dimensionless number 1 to something that is a bandwidth. In other words, the second formula can't be correct. (If you want a hint: Look at where these two papers were published: One paper was published in IEEE Communications Letters, which is a highly renowned peer-reviewed journal, and the other has the word "international" in its name but the stress on "in China" marks it as a regional conference, not a peer-reviewed journal, which tries to attract a wider audience of authors with less focus on publishing the best works.)
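For reference, the dimensionally consistent form (a standard statement of the Shannon capacity for a channel of bandwidth \$w_n\$, not quoted from either paper) is \$r_n = w_n \log_2\!\left(1 + \frac{p_n\lvert h_n\rvert^2}{N_0\, w_n}\right)\$. The noise density \$N_0\$ (W/Hz, i.e. an energy) multiplied by the bandwidth gives a noise power, so the argument of the logarithm is a dimensionless SNR, and adding 1 to it is legitimate. Dropping the \$w_n\$ from the denominator breaks exactly that consistency, which is the point the answer makes.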
H: What type of SMD diodes are these? I'm looking into repairing a circuit board that was passed on to me damaged. There is a set of four diodes which are used in conjunction with a MAX3488 transceiver to provide an RS422/485 connection. Two of these diodes are visibly damaged, probably due to excessive voltage applied at the adjacent connector. I intend to replace the diodes, and potentially the MAX3488 too. As is seen above, the two outer diodes are toast - one is gone completely, showing the silkscreen diode symbol beneath. The remaining two are simply labelled 752. The nearby silkscreen describes these parts as DZ1 through DZ4, and as (RD7.5S)x4. Quickly using a multimeter in diode testing mode showed drops of 0.713V and 0.715V over the intact ones, and nothing in reverse bias. My current guess from these markings is that these are 7.5V zener diodes, based on the use of 'ZD' to describe them. I'm not sure what the 'RD' would mean though, or the 'S'. I've searched for part numbers / datasheets relating to the '752' marking, but found nothing. In terms of package, they appear to be roughly 1.6mm x 1.2mm in size (per the image below), and the best match I've found for their dimensions and general look is the SOD323 package. Am I correct in concluding that these are 7.5V Zener Diodes, in a SOD323 package? Is there any other information I'd need before I start to search for a supplier? AI: I'd say those are 4 RD7.5S, Zener diodes made by Renesas: https://www.renesas.com/us/en/general-parts/rd75s-diodes-constant-voltage My Google search term: "RD7.5S" You're quite lucky that it's printed right on the PCB.
H: Result is XXXXXXXXXXXX for Verilog I'm currently doing a project, and I can't find the reason when my result is XXXXXX which is error. The code run smoothly when I test, but when I want to see the result, it comes out something I can't understand. Here's my code : module cu_dpu_combine(clk,start,X,Y,W,Z,result); input clk, start ; input [7:0] X,Y,W,Z; output [13:0] result ; wire Sel_1,Sel_0, ld ; wire [1:0] op; cu u1(clk,start,Sel_1,Sel_0,ld,op); dpu u2(X,Y,W,Z,clk,ld,op,Sel_0,Sel_1,result); endmodule module cu(clk,start,Sel_1,Sel_0,ld,op); input clk, start ; output Sel_1,Sel_0 ; output ld ; output [1:0] op; reg Sel_1 , Sel_0; reg ld; reg [1:0] op; reg [3:0] next_state ; reg [3:0] present_state; parameter [3:0] s0=0, s1=1, s2=2, s3=3, s4=4, s5=5, s6=6, s7=7, s8=8, s9=9, s10=10 , s11=11; // next state logic always@(present_state, start) begin case(present_state) s0: if(start == 0) next_state = s0 ; else next_state = s1 ; s1: next_state = s2 ; //result = X s2: next_state = s3 ; //result = result + X s3: next_state = s4 ; //result = result - Y s4: next_state = s5 ; //result = result - Y s5: next_state = s6 ; //result = result + W s6: next_state = s7 ; //result = result + W s7: next_state = s8 ; //result = result + W s8: next_state = s9 ; //result = result + W s9: next_state = s10 ; //result = result - Z s10: next_state = s11 ; //result = result - Z s11: next_state = s0 ; //result = result - Z default : next_state = s0; endcase end /////////////////////////////// //the D flip-flops (register) /////////////////////////////// always@(posedge clk) begin present_state <= next_state; end /////////////////////////////// /////////////////////////////// //output logic /////////////////////////////// always @(present_state) begin case(present_state) s0: {Sel_1,Sel_0,ld,op} = 5'b00000 ; //initial condition s1: {Sel_1,Sel_0,ld,op} = 5'b00110 ; //form X s2: {Sel_1,Sel_0,ld,op} = 5'b00100 ; //add X s3: {Sel_1,Sel_0,ld,op} = 5'b01101 ; //subtract Y s4: {Sel_1,Sel_0,ld,op} = 5'b01101 ; //subtract Y s5: {Sel_1,Sel_0,ld,op} = 5'b10100 ; //add W s6: {Sel_1,Sel_0,ld,op} = 5'b10100 ; //add W s7: {Sel_1,Sel_0,ld,op} = 5'b10100 ; //add W s8: {Sel_1,Sel_0,ld,op} = 5'b10100 ; //add W s9: {Sel_1,Sel_0,ld,op} = 5'b11101 ; //subtract Z s10: {Sel_1,Sel_0,ld,op} = 5'b11101 ; //subtract Z s11: {Sel_1,Sel_0,ld,op} = 5'b11101 ; //subtract Z default : {Sel_1,Sel_0,ld,op} = 5'b00000; endcase end endmodule module dpu(X,Y,W,Z,clk,ld,op,Sel_0,Sel_1,result); input [7:0] X,Y,W,Z; input Sel_0,Sel_1; input clk; input ld; input [1:0] op; output [13:0] result; reg [13:0] result; //14bit reg [13:0] ar_out ; //14bit wire [13:0] mux_out ; //8bit //selection for output of multiplexer assign mux_out = (Sel_1==1) ? ((Sel_0==0) ? W : Z) : ((Sel_0==0) ? X : Y); //alu always @(*) begin case(op) 0: ar_out = mux_out + result ; 1: ar_out = result - mux_out ; 2: ar_out = mux_out ; 3: ar_out = 8'b0 ; endcase end //register always @ (posedge clk) begin if(ld) result <= ar_out ; end endmodule //2X-2Y+4W-3Z Here's my testbench code. 
module testbench; // Inputs reg clk; reg start; reg [7:0] X; reg [7:0] Y; reg [7:0] W; reg [7:0] Z; // Outputs wire [13:0] result; // Instantiate the Unit Under Test (UUT) cu_dpu_combine uut ( .clk(clk), .start(start), .X(X), .Y(Y), .W(W), .Z(Z), .result(result) ); initial begin // Initialize Inputs clk = 0; start = 0; X = 0; Y = 0; W = 0; Z = 0; // Wait 100 ns for global reset to finish #100; // Add stimulus here start=0 ; X =64 ; Y=64 ; W =64 ; Z= 64 ; repeat(2) @(posedge clk) ; start=1 ; @(posedge clk) ; start = 0 ; repeat(12) @(posedge clk) ; X =128 ; Y=128 ; W =128 ; Z= 128 ; repeat(1) @(posedge clk) ; start=1 ; @(posedge clk) ; start = 0 ; repeat(12) @(posedge clk) ; X =255 ; Y=255 ; W =255 ; Z= 255 ; repeat(1) @(posedge clk) ; start=1 ; @(posedge clk) ; start = 0 ; repeat(12) @(posedge clk) ; $stop ; end endmodule AI: You need to toggle the clock signal in the testbench. Currently, clk is always 0. Add this to the testbench module after the end of the initial block: always #5 clk = ~clk; With that change, I see the results signal become known (the X's disappear). Running demo on edaplayground. Here is the complete testbench code: module testbench; // Inputs reg clk; reg start; reg [7:0] X; reg [7:0] Y; reg [7:0] W; reg [7:0] Z; // Outputs wire [13:0] result; // Instantiate the Unit Under Test (UUT) cu_dpu_combine uut ( .clk(clk), .start(start), .X(X), .Y(Y), .W(W), .Z(Z), .result(result) ); initial begin // Initialize Inputs clk = 0; start = 0; X = 0; Y = 0; W = 0; Z = 0; // Wait 100 ns for global reset to finish #100; // Add stimulus here start=0 ; X =64 ; Y=64 ; W =64 ; Z= 64 ; repeat(2) @(posedge clk) ; start=1 ; @(posedge clk) ; start = 0 ; repeat(12) @(posedge clk) ; X =128 ; Y=128 ; W =128 ; Z= 128 ; repeat(1) @(posedge clk) ; start=1 ; @(posedge clk) ; start = 0 ; repeat(12) @(posedge clk) ; X =255 ; Y=255 ; W =255 ; Z= 255 ; repeat(1) @(posedge clk) ; start=1 ; @(posedge clk) ; start = 0 ; repeat(12) @(posedge clk) ; $finish; end always #5 clk = ~clk; endmodule
H: Capacitors on center tap This is a follow up to a previous question that I asked here Why do we add capacitors on center taps to the magnetics when connecting to an Ethernet PHY? Also what is the reason on why we terminate the other side of the magnetics with a resistor and a capacitor? AI: Why do we add capacitors on center taps to the magnetics when connecting to an Ethernet PHY? I would say that protection of victim electronics (blue below) from capacitively coupled line surges is going to have some part to play here: - Despite the magnetics being an isolating transformer capable of substantially dealing with several kV surges, there is still capacitive coupling between PHY and line sides. The 100 nF capacitor is certainly capable of "containing" what is left from a line surge after passing through the CM filter and across the parasitic capacitance of the magnetics. I did have a quick look on the internet but I was surprised to find that there wasn't clear-cut documentation about this. I'd like to be more confident about this but, it could certainly be there for diminishing the last remnant of surge current thus protecting the "victim circuits". Also what is the reason on why we terminate the other side of the magnetics with a resistor and a capacitor? Stack exchange Q and A on the Bob Smith termination.
H: Role of NTC thermistor I have this circuit: simulate this circuit – Schematic created using CircuitLab In this schematic I don’t understand what the NTC does. When the resistors and the NTC get heated, R1 increases and RNTC decreases, and because R1 increases, VBQ1 decreases. But what does the NTC do? I know the collector current is temperature dependent: if we increase temperature, the collector current increases. So is the NTC in some way regulating IC1? AI: The thermistor is for bias temperature compensation. In a transistor amplifier, changes in temperature will affect the bias point: the higher the temperature, the higher the collector current for a particular base current. One way to compensate for this is to put a thermistor in the bias circuit to lower the bias with increased temperature. This was more of a problem with Germanium transistors, so in early transistor radios you would often see a thermistor to control the bias of the class B output stage. See this article for an example.
H: Voltage mode Ethernet PHY and Current mode Ethernet PHY In the Ethernet PHY, we have two types: voltage-mode PHY and current-mode PHY. Can anyone tell me the reason why the center taps of the magnetics are shorted when connecting to a current-mode PHY, whereas the center taps of the magnetics are individually decoupled in the case of a voltage-mode PHY? What is the principle behind this? AI: Can anyone tell me the reason why the center taps of the magnetics are shorted when connecting to a current-mode PHY They aren't shorted but connected to Vcc. This is because the PHY driver ports are open-collector (current sources) and the centre-tap is a really great place to supply current to the driver. If you have PHY drivers that are push-pull they will not need the transformer centre connection to enable them to work but, remember, that centre connection still needs to be connected to Vcc or GND to deal with surges.
H: What is the purpose of a "monitor diode" inside a laser diode case? I was looking for laser diodes to buy and I noticed many of them have a "monitor diode" included. What is the purpose of a monitor diode? Would not the current (of the laser diode) be a better way to monitor it? AI: Would not the current (of the laser diode) be a better way to monitor it? No, the monitoring diode is an accurate and fundamental method of ensuring the calibration of many laser diodes. For instance, if you buy a laser diode from a reputable supplier, you can pay a little more and get a precise figure for the light power output based on the photodiode output current. The supplier may typically produce information like: - output power = 4.70 mW at a PD current of 250 uA when reverse biased at -5 volts If you buy and use a lot of laser diodes, this information is crucial else, you need to setup your own test rig and calibrate it yourself. Many folk don't want to do that. Then, the photodiode can be used in a simple closed loop system to ensure that in critical applications, the laser diode light output remains constant across a wide range of operational conditions. Operating current versus temperature versus light output is too complex and unreliable for accurate control of the laser diode light output. In other words, the photodiode output current is a much more reliable method of determining laser diode output power than any other method.
H: What happens to a traveling wave tube amplifier when the amount of available power drops? The two Voyager spacecraft use traveling wave tube amplifiers to send data back to us. These have 2 power settings: 12W and 18W. The power supply on board the Voyagers (an RTG) produces less power each year. I'm trying to figure out what will happen when the available power drops below the level where the RTG can supply 12 W to the transmitter. Could this type of amplifier be driven at lower power levels, or is this an all-or-nothing proposition? AI: A TWTA uses a helix wound around the electron beam. The microwave signal applied to the helix interacts with the electron beam and modulates the electron density. For good interaction the traveling wave on the helix and the electrons within the beam should have similar speeds. If the acceleration voltage of the beam is too low, the electrons are too slow for the interaction with the traveling wave. So if there is too little power available to accelerate the electron beam, the amplification of the tube decreases to zero. See https://en.wikipedia.org/wiki/Traveling-wave_tube
H: Using a 5V voltage regulator with a 5V input I am building an application for the TMS3705 transponder base station IC with a PCB. The datasheet specifies it needs 5 volts as input power in order for the system to work: I want to supply this voltage via USB using my computer. Should I input the 5 volts my laptop outputs directly to this circuit, or should I use a 5V voltage regulator? I remember one time in an electronics course I took, the professor stated that, even if the input will be the same as the output, one should normally use a voltage regulator regardless. This is what the USB connector and voltage regulator circuit I plan to build looks like: AI: the professor stated that, even if the input will be the same as the output, one should normally use a voltage regulator regardless. Nope. A linear voltage regulator can't produce an output voltage higher than the input. In fact, output will always be lower, and the minimum input-to-output voltage where the regulator still regulates is called "dropout voltage". Here's a simplified schematic. (image link) A low dropout regulator is basically a PMOS or PNP transistor acting as a controlled current source. The error amplifier adjusts current to keep the output voltage stable. Since the PMOS has non-zero resistance when fully on, there is always a dropout voltage. If the input voltage is too low, for example 5V input on a 5V regulator, then the error amp will turn on the transistor fully in an attempt to bring the output voltage up. Since that's not possible in this case, it just sits there, being useless and wasting a bit of voltage. There is a type of switching regulator that can produce an output voltage equal to, lower than, or higher than the input: the buck-boost converter. That's what you'd use if you had a battery voltage going from 4.2V to 3V as it discharges, and you absolutely needed 3V3, for example. But in this case, it's not necessary, because the datasheet of your chip says: This means with a normal USB port delivering close to 5V, you get quite a bit of margin to accommodate voltage drop in the cable, ferrite bead, etc. What you do need is a decoupling cap on your board, to provide a low impedance power supply at high frequency, but your schematic already has those. Note that your ferrite bead can ring with the low ESR ceramic capacitors and create a voltage spike when you plug in the USB cable. That can fry your chip. One solution is to add a low-cost general-purpose aluminium electrolytic cap on the right side of the ferrite bead, 100µF or so, in parallel with the ceramic decoupling caps. Also, if you need a regulator for something else, AMS1117 is not the best choice. It has a very slow transient response, so if you use it with a load that draws chopped current, like a micro that goes to sleep, the output voltage will not recover quickly enough when the micro wakes up.
H: Converting 3.3V logic to 5V logic with two common emitter transistors in series I am working with digital signals. I would like to know if there is any alternative to boosting a 3.3V logic signal to 5V logic only using transistors. I am currently using two common emitter amplifiers in series in order to do this. Would other methods work better than this? If not, why is this a good method? AI: Your question as stated isn't really a great fit for EE.SE since you're really asking for opinions, but here's a brief commentary on your circuit as well as some alternatives. Firstly, your circuit: R2 as you have it placed between Q1 & Q2 is unnecessary. A resistor in that position is typically used to limit the base current of the transistor, but in your circuit that current is already limited by R1. You should have a resistor on the base of Q1 to limit base current there. If you apply a 3.3V logic signal to the input without a base resistor you'll overload both Q1 and whatever is attempting to drive it. Since you're using this circuit to drive a micro's GPIO input, you might be able to eliminate R3 by enabling the micro's internal port pin pullup resistor on that input. Some alternatives: The common "I2C Level Translator" circuit Similarly to your circuit, this one's R2 could potentially be eliminated by enabling your micro's internal input pullup. It has the advantage over yours of requiring fewer parts. R1 could probably also be eliminated if the device which drives 'Input' actively drives it high (instead of just letting it float high as is the case with I2C). simulate this circuit – Schematic created using CircuitLab Truncated version of your circuit Since you're driving a micro's input, you could use a single transistor and then invert the logic level in your micro's firmware, and you also have the option of eliminating the 5V pullup resistor if you enable your micro's internal input pin pullup. simulate this circuit Use a 74HCT-series buffer like a 74HCT244 for 8 lines in one package. Even though it runs on a 5V supply, it'll quite happily interpret 3.3V logic inputs since its VIH and VIL are 2V and 0.8V respectively. If you don't need 8 lines and want to save space, then there's a single-line 74HCT1G125 in a little 5-pin package, or a 2-in-one 74HCT2G125, and remember that these options do not require any external resistors. If the non-inverting 74HCT244 is hard to come by, then you could consider its inverting cousin the 74HCT240 and invert the logic level in your micro's firmware. The easiest option of all: Do nothing. Look at the datasheet of your microcontroller for its VIH (High-level Input Voltage) and VIL (Low-level Input Voltage) specifications. There's a very good chance that your micro will have similar specs to those 74HCT logic gates in option 3, and will quite happily accept 3.3V logic input levels with no buffering or level-shifting needed.
H: Step-Up Converter Design I am trying to design a step-up converter with the following specifications: Vin: 3.0 - 3.7 V Vout: 30 V Iout(max): 50 mA With such a high boost ratio, my calculated duty cycle would be ~92% which is near the maximum for most controller ICs. Is there a topology that uses a transformer to achieve such a high boost ratio? AI: With such a high boost ratio, my calculated duty cycle would be ~92% which is near the maximum for most controller ICs It's ~90% duty when operating in continuous conduction mode (CCM) but, if you lower the value of the inductor, you could operate in DCM (discontinuous conduction mode). If your controller permits DCM, it will be at a lower duty cycle. Here's a sample of what I mean from my basic website with a 33 μH inductor in CCM operating at 100 kHz and 90% duty: - If you reduce the inductor to 22 μH you get an 80% duty cycle and DCM operation: - The calculator is for ideal components so the percentage duty will be a tad higher. If you wanted to run at closer to 60% duty, you could choose an inductor of about 12 μH. Is there a topology that uses a transformer to achieve such a high boost ratio? Consider the above modifications first before taking the step to go to a flyback controller (more complex and requires careful selection of the MOSFET and/or leakage flyback snubbing). If you decide that you need a CCM flyback circuit (1:4 step-up transformer) you can use this basic "ideal" calculator (also from my website): -
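For reference, the duty-cycle figures come from the standard ideal CCM boost relationship (textbook expression, ignoring the diode drop and losses): \$D = 1 - \frac{V_{in}}{V_{out}}\$, so at \$V_{in} = 3.0\$ V and \$V_{out} = 30\$ V, \$D = 1 - 3/30 = 0.9\$. The ~92% figure in the question presumably includes the output diode drop and converter losses, which push the required duty a little higher than the ideal 90%.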
H: What resistance should be used with a phototransistor to have fast rise/fall time? I am trying to accurately capture the video latency and response time curve of an LCD display from a black image to a white image for the purpose of reviewing the accuracy of a couple of video latency measurement tools. I am using a digital storage oscilloscope and a phototransistor. This is the spec sheet for the phototransistor. I would like the values of the curve captured by the oscilloscope to be delayed no more than 100 microseconds or so. What resistance should I use to capture a luminosity curve that has this accuracy of timings? Here is more background on my progress so far: I have created a circuit that is similar to the "Testing principle diagram" on page 3: I have a Vcc of 4.5 V (3 AA batteries) and the oscilloscope is wired to measure the voltage across the resistor that connects to the phototransistor and ground. When using the LCD screen that I would like to measure, I get a maximum of about 60 µA through the circuit. I have noticed that the phototransistor's rise and fall times are affected by the resistor that I use in the circuit. To demonstrate this effect clearly in a controlled manner, I have placed the phototransistor in front of an LED that is switched on using a debounce circuit. Here are some screenshots that show the voltage change across the resistor when different resistors are used: 1.3 kΩ 7.8 kΩ 140 kΩ It seems that as resistance increases, the rise time of the phototransistor also increases. Does this mean that a resistance that approaches 0 is the most accurate? I have noticed that the signal to noise ratio decreases as I reduce the resistance. What is a good resistance that will keep the curve accurate to within 100 microseconds while also keeping the voltage across the resistor high enough to not be problematically noisy? UPDATE: After reading Andy's answer, I took the advice to consider using a photodiode instead of a phototransistor for this project. I actually had a couple on hand, but when I first tried them out I didn't understand anything I was doing and disregarded them after the phototransistor seemed to be "easier" for me to get visible on my oscilloscope. To be honest, I'm not sure why I initially struggled with the photodiodes. Here is the voltage across a 194 kΩ resistor that is in a short circuit with a BPX 61 photodiode when turning on the same LED that was used in the previous examples: This shows the LED reaching full luminosity in only 90 microseconds. When I reduced the resistance to 37 kΩ, the time to reach full luminosity decreased to 19 microseconds. That's one fast LED! I also realized that I have an audio device with S/PDIF coax and optical toslink output that I have previously measured using an optical receiver to find that the coax and optical outputs were aligned to within less than 100 nanoseconds of each other. With this in mind, I thought it would be good to compare the voltage from my photodiode circuit when measuring the optical output of this audio device with the coax electrical output of the audio device. Here are my findings (blue is optical through my photodiode circuit and yellow is electrical S/PDIF output from the audio device): I can see from this that the upwards and downwards changes of the optical signal are aligned with the upwards and downward changes of the electrical signal. 
This seems to show that the photodiode circuit's rise/fall time is not fast enough to keep up with the S/PDIF optical signal, but it does respond immediately to a change, which is the most important thing for me. Here's a 100Ω with the same optical and electrical S/PDIF signal (I switched channels, so this time blue is electrical and yellow is optical through my photodiode): This "works" for this optical S/PDIF signal, but is way too noisy for my use, so I'll go with a higher resistance. I suspect I would be able to get an even better result with a transimpedance amplifier, but at this point I might just play around a bit with the resistance I am using with this specific photodiode and that might be good enough for my purposes. AI: It's the self-capacitance of the phototransistor that is the problem. It's a problem with photodiodes too but to a significantly smaller degree. So, consider using a photodiode instead of the "notably slow" phototransistor. However, if you want to stay with the phototransistor, you can use a common base NPN BJT circuit where the phototransistor is connected from emitter to 0 volts. This keeps the voltage across the phototransistor at an almost constant level thus, its internal capacitance cannot produce negative feedback inside the device that "hits" rise and fall times so badly. An alternative circuit is the TIA (transimpedance amplifier). These are often used with photodiodes for very fast response times (sub 10 ns) but, there's no reason why a phototransistor cannot also benefit from the same circuit (with a volt or so of bias): - The above picture came from Pullup vs. transimpedance amplifier and it explains the mechanisms behind why a phototransistor with pull-up resistor is basically slow.
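As a rough rule of thumb (an illustrative estimate; the capacitance value below is an assumption, not a figure from the phototransistor's datasheet): the load resistor and the device's effective, Miller-multiplied capacitance form a low-pass with \$f_{-3\,\text{dB}} \approx \frac{1}{2\pi R C_{eff}}\$ and a 10-90% rise time of roughly \$2.2\,R\,C_{eff}\$. If \$C_{eff}\$ were on the order of 1 nF, a 140 k\$\Omega\$ load would give a rise time of several hundred microseconds, while roughly 1 k\$\Omega\$ would bring it down to a few microseconds, which is consistent with the trend seen in the scope captures above.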
H: DC voltage attenuation I need to attenuate 0-5V to 0-1 mV. The output of simple voltage dividers (normal op amps or resistor divider) can't be distinguished from baseline noise (baseline noise is 10-20mV.) How can I attenuate the input voltage (0-5V) to get a 0-1mV low noise output? AI: A resistive divider is low noise compared to 1mV for reasonable resistor values and bandwidth. For example, 49.9K\$\Omega\$/10\$\Omega\$ divider will have J-N noise of less than 0.5nV/\$\sqrt{\text {Hz}}\$ at room temperature. 1mV of signal is huge compared to that. If you can't distinguish the signal from noise, given my above assumptions, then your measurement method may be the issue. Most oscilloscopes have a fair bit of noise, as Andy has suggested. Try using a 1:1 probe setting rather than 10:1, and limit the bandwidth if your scope has such a setting- and if that's not enough then you can use a low-noise op-amp as a preamplifier.
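For reference, the thermal (Johnson-Nyquist) noise density of a resistor is \$e_n = \sqrt{4kTR}\$ (a standard formula, with \$k \approx 1.38\times10^{-23}\$ J/K). For the 10 \$\Omega\$ bottom leg at \$T \approx 300\$ K that gives \$\sqrt{4 \times 1.38\times10^{-23} \times 300 \times 10} \approx 0.41\$ nV/\$\sqrt{\text{Hz}}\$, matching the sub-0.5 nV/\$\sqrt{\text{Hz}}\$ figure above and far below a 1 mV full-scale signal over any reasonable bandwidth.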
H: How to set this switched capacitor filter's cut off frequency? I have never used one and if I'm not wrong, this low pass switched capacitor filter is supposed to have tunable roll-off frequency. If we use the above configuration with a particular clock frequency, what is the formula leading to cut-off frequency? AI: According to the datasheet: For 293 and 294, clock is 100x corner frequency. For 297, clock is 50x.
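As a worked example (the 250 kHz clock value is just an illustration): with the 293/294 parts the corner frequency is \$f_c = f_{clk}/100\$, so a 250 kHz clock places the cut-off at 2.5 kHz; for the 297 it is \$f_c = f_{clk}/50\$, so the same clock would give a 5 kHz corner. Tuning the filter is simply a matter of changing the applied clock frequency.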
H: Can telescopic FM/DAB antenna be replaced by Coax? I tried to search for this prior to asking but with no luck... I have an FM/DAB+ antenna like this: from which a standard 75-Ohm Coax is coming down and I would like to connect it to the radio (Panasonic RX-D70BT) like this: as you can see there is only an AC-IN plug and a typical telescopic antenna (no other slots) so the idea is to unsolder that telescopic antenna and solder the Coax instead of it. will this work (?) or am I completely on the wrong track? for some reason I have a feeling this won't be that easy, but I am happy to learn it. AI: Yes, there's a good chance of success. But you need to find an internal ground point in the radio, besides the antenna connection. Open the radio up and look for a large copper area on the PCB, or some large metal shields soldered to the PCB. The closer to the antenna lead, the better. If you have a soldering iron, you can solder short wires to the antenna connection and ground, then connect them to your incoming coaxial cable. It would be best to install an actual coaxial connector. Disconnect it if there's any risk of an electrical storm. If you don't have a soldering setup, you can still "hack" your way forward. Unscrew the pivot point of that telescoping antenna, remove the telescoping part, and put the coax center conductor under the screw and retighten. For ground, slip a wire under the negative end of one of the batteries, or just wrap it around the battery clip. Because that antenna has the center conductor at DC ground, there's little chance of lightning or static coming down the center conductor, rather than the shield. But you must have the coax shield connected to a good ground where it enters the house.
H: Why does voltage drop 50% only when using a small coin cell battery? I recently made a small coin-cell-powered Bluetooth board. I get full voltage when using a dev kit's 3.0V output but only 1.5V when using a 3V coin cell battery. Circuit summed up pic below. To be exact: Using a Nordic NRF52 Dev Kit's 3.0v VDD/GND output pins attached to battery holder + / -, I am getting correct voltage (2.93V) as expected. Using a CR927 coin cell in the battery holder I am getting 2.93V to the RTC while the switch is off (expected), but switched on I only get 1.5V everywhere (including the RTC). I am a hobbyist and don't know much about power/load/draw, but I am speculating/wondering if these small coin cell batteries must have some sort of maximum load that they can handle at once. All components used are advertised as low power consumption. AI: Batteries have an output impedance, and when a battery is not designed to provide high currents it has a high internal impedance. That means there is basically a resistor in series with the output of the ideal voltage source that is the battery. So the more current you try to draw from it, the more voltage is dropped across the resistance and lost.
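To put illustrative numbers on it (assumed values, not measurements from this board): a tiny lithium coin cell like the CR927 can have an internal resistance of several tens of ohms. If the board drew, say, 30 mA during a radio burst and the cell's internal resistance were 50 \$\Omega\$, the drop would be \$30\text{ mA} \times 50\ \Omega = 1.5\$ V, which is exactly the kind of sag observed. A bulk capacitor across the supply, or a cell rated for higher pulse currents, is the usual remedy.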
H: Types of air gaps I read the Transformer and Inductor Design Handbook and I have some questions I would like to ask you guys. Why are the EE or EC cores shown in Figure 1-30B the best suited? Why does only Figure 1-30B use a single gap of length lg, while Figures (A), (C) and (E) use two gaps of lg/2? AI: The cores with the gap in the center minimize fringing-flux-related EMC issues. The loop the magnetic flux travels crosses only one air gap in the cores with a gap in the center leg. The other types have two gaps in series, which behaves similarly to one gap of twice the width provided the core cross-section dimensions are much larger than the gap. Using a double gap in a core is sometimes useful in testing because you can create various effective gap widths with plastic shim stock and verify the operation, even if it's not good enough for production. In some cases it is good enough, and there are productionized ways of creating a controlled-width gap.
H: Generated sound I got these schematics from an arcade game released in 1983. These two circuits are responsible for generating two different sounds even though they look very similar. First one (sounds like a whistle) Second one (sounds like a fart, sorry for the lack of better word) These were manually designed so I am trying to "reverse engineer" the logic. The second one seems like a comparator that will trigger after a delay and then generate a series of distinct peaks in the output. The input triggers the charging of the capacitor, the output voltage split more or less in two, compared, and boom. The first one has 2 JFETs in Darlington(?) which should just amplify the other sound. I tried simulating them in CircuitLab but the chart doesn't seem to be meaningful at all. What is going on? How do I learn to "read" this? My plan is to add these to the game emulation available in MAME. Circuitlab AI: made with FREE microcap v12. Don't know if "sounds like a whistle" ... Low-frequency period : ~ 5.8 ms , high-frequency period : ~ 1.07 ms. And a little "zoom" inside ... And here is the behavior of the second circuit ... which can also be "cut" "logically" at 0.4 s for a "shorter" wave. Added link wave1 wave2
H: I2C short pulse during SCL==low Somewhere around the end of transmitting a byte in Master Transmit mode the following is observed (SCL - yellow, SDA - blue). I am mainly interested in: what is the tiny pulse on the right? STOP? START? ACK? A glitch? But these clarifications would also help: Is the non-square clock a reason for concern? What is the left pulse? It's too early for a STOP so I guess it's a data bit 1. But is it usual to keep the level during SCL==high? The master is configured for Normal mode, 100 kbit/s, but the frequency of the clock it emits is 87.4 kHz (clock stretching is enabled but not observed). Is that normal? AI: The short pulse occurs after the 8th data bit and just before the ACK. It's the point at which the data line changes direction. Whichever device was driving out the data (master for write, slave for read) releases the data line, so it starts to go high. The device at the other end then begins driving the line low to acknowledge (ACK) the data transfer. If there was to be a NACK, the other device would not drive the line low, so it would continue up to the full line voltage and be clocked as a 1 (NACK). Because this switchover happens when the clock is low, and in plenty of time before the next rising clock edge, it is completely invisible to the devices on each end, so it can be ignored. The width of the pulse depends largely on the speed of the rising edge and on how quickly the responding device reacts to the direction change. I've seen devices which ACK quickly enough that there is no spurious pulse, and other cases where the line reaches full voltage before the ACK is made. The non-square nature of the signals is due to it being an open-drain bus. The bus is only pulled high by a resistor which has to overcome the parasitic capacitance of the bus lines. This R-C circuit leads to a curving of the rising edge. You can make the lines more square by using a lower-value pull-up resistor, or alternatively you could use an I2C buffer device to break the line into sections to reduce the parasitic capacitance of each part of the bus. Having said that, it is quite normal to have some curvature of I2C lines (the resistors can only be so low before devices struggle to overcome them when driving low). If all of the devices are working normally then there is probably no issue with the slow edges. The left-most pulse will be a data bit. Yes, it is completely normal and indeed required for the data line to hold its value while the clock is high. With the exception of a STOP, START, or RESTART condition, the data line must never be changed while the clock is high. As such it will stay at the same value until just after the falling edge of the clock. This can be seen in Figure 4 of the I2C Spec, copied below for posterity: It depends on your master - what clock source it is using and how it is generating the SCL signal. It may use a simple clock divider that cannot generate exactly 100 kHz from the MCU clock frequency. In practice it doesn't really matter as long as it is in-spec for the devices you are clocking. 88 kHz is perfectly reasonable for a 100 kHz I2C bus.
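To get a feel for the rise-time effect described above, here is a small C sketch; the 4.7 kOhm pull-up and 150 pF of total bus capacitance are assumed values, not taken from your board:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double r_pullup = 4.7e3;    /* assumed pull-up resistance, ohms         */
    const double c_bus    = 150e-12;  /* assumed total bus capacitance, farads    */

    double tau    = r_pullup * c_bus;           /* RC time constant                */
    double t_rise = tau * log(0.7 / 0.3);       /* 30%..70% rise time (I2C definition) */

    printf("Time constant     : %.0f ns\n", tau * 1e9);
    printf("30%%-70%% rise time : %.0f ns (standard-mode limit is 1000 ns)\n",
           t_rise * 1e9);
    return 0;
}

A lower pull-up value or less bus capacitance shrinks both numbers and squares up the edges.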
H: Multimeter shows voltage on metal cage of my new Standing Fan! A few days ago I bought an electrical fan, with 5x aluminum blades. After I installed and put it to work, I noticed, using a simple indicator screwdriver, that the metal cage/fence was in contact with the voltage! :( I decided to take a measurement with a digital multimeter (EMOS-MD220), and the result was 55V. Metal cage with engine chassis, joined with 4 metal screws. The front of the engine box is black metal. The chassis shows no contact with the voltage, but the screw sockets indicate that it is in contact! Apparently the chassis is painted with some insulating material, and only the holes where the screws are fixed, are without insulation. Please can you give me some help on this situation? Is it normal for an standing fan to show such voltage? AI: I'm assuming you measured voltage between the chassis and Earth from a nearby socket. If you put one probe on the fan and the other somewhere else that isn't Earth, then you're not measuring voltage to Earth, which is what matters for safety. There's always a bit of capacitance between the wires and the metal chassis, and that lets a tiny bit of current through. Combined with the multimeter's very high resistance, this means the multimeter will measure a voltage. The same thing happens when you take the lightbulb out and stick the multimeter probes into the bulb socket to check if the problem is the bulb or a lack of power. Even if the switch is off, the multimeter will usually measure some voltage due to capacitance between wires in the same conduit. Multimeters marketed to electricians usually include a "Low Z" setting that uses a much lower input resistance, which fixes these issues. Now, when you installed the fan, I'm sure you checked if it was Class 2 double-insulated (does not require protection earth) or Class 1 which requires Earth. If it is Class 1, then obviously there should not be any voltage between Earth from a socket and the fan chassis, because you connected the Earth wire to the corresponding terminal in the fan. If there is voltage, then you connected it wrong, or there's a problem with the Earth wire, or some other cause that needs to be fixed. Class 1 appliances are not designed to be safe without Earth. If it is Class 2, your measurement is pretty much normal.
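To see how little stray capacitance it takes to produce such a reading, here is a rough C sketch; the 230 V mains, 100 pF of winding-to-chassis capacitance and the 10 Mohm meter input are assumed round numbers:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi      = 3.141592653589793;
    const double v_mains = 230.0;     /* assumed mains voltage, V rms                */
    const double f       = 50.0;      /* mains frequency, Hz                         */
    const double c_stray = 100e-12;   /* assumed winding-to-chassis capacitance, F   */
    const double r_meter = 10e6;      /* typical DMM input resistance, ohms          */

    double xc = 1.0 / (2.0 * pi * f * c_stray);              /* capacitive reactance */
    double v  = v_mains * r_meter / sqrt(xc * xc + r_meter * r_meter);

    printf("Reactance of the stray capacitance: %.1f Mohm\n", xc / 1e6);
    printf("Phantom reading on a 10 Mohm meter: about %.0f V\n", v);
    printf("A 'Low Z' setting or any real load collapses this to roughly zero.\n");
    return 0;
}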
H: System blocks of switching power supplies I want to design my own power supply. I know it can be dangerous for someone with not much experience, but at least I want to learn it on paper! I am aiming for a 12 V/10 A (120 watts) DC switching power supply. What I think the system block should look like:
1- 220 V AC to some DC value - this will be mainly a transformer, plus the usual filter caps, etc...
2- A switching IC to convert the secondary side of the transformer to 12 V
3- Some safety ideas: over-current -> monitor the output current with a shunt resistor; over-voltage -> ???; over-temperature -> a microcontroller with some SMD temperature sensors around hot areas (switching FET, etc.)
The questions are: Would I need to search for a transformer which is rated for 12 A? 12 A being my output, so maybe the transformer must be rated higher because some energy will get lost? Are there any standard schematics (open-source hardware, etc.) that I can take as a reference? Where can I find any reference? In general, what should I be careful about - dos and don'ts?
AI: I am aiming for a 12 V/10 A (120 watts) DC switching power supply.
OK
What I think the system block should look like: 1- 220 V AC to some DC value - this will be mainly a transformer, plus the usual filter caps, etc... 2- A switching IC to convert the secondary side of the transformer to 12 V
Here's where you are going wrong. A switching supply or converter powered from the AC mains will:
- Rectify the AC directly
- Smooth the rectified mains to produce a fairly ripple-free DC voltage of several hundred volts
- Use a switching circuit (including a switching transformer) to convert to a low voltage, high current DC output.
For instance, here is a circuit of a 15 volt, 7.33 amp DC output flyback converter that incorporates what I mentioned above: - That circuit came from this data sheet by Premier Magnetics and, in turn, that data sheet was found by referring to this general guide on various switching power supplies.
3- Some safety ideas
All the ideas are good but the basic designs you will find won't usually include those features. You have to shoe-horn those features in yourself.
Would I need to search for a transformer which is rated for 12 A? 12 A being my output, so maybe the transformer must be rated higher because some energy will get lost?
You will be searching for a switching transformer and not a regular AC mains transformer so, use the Premier Magnetics link above to get a feel for what they are all about.
Are there any standard schematics (open-source hardware, etc.) that I can take as a reference?
Premier Magnetics appear to be the most complete that I have come across.
In general, what should I be careful about - dos and don'ts?
"Off-line" converters might be too big of a hurdle for you at the moment but going down the route you suggest in your question misses the point about converting AC mains to low voltage DC. I have no affiliation with Premier Magnetics; I just came across their website several years ago and find it a useful reference at times like this.
H: What's inside this RF absorptive modulator? I'm trying to understand how a vintage(?) HP 33000 series absorptive modulator works. Unfortunately, I don't have access to the detailed datasheets, so all I've been able to find is this document, which has a small amount of information on page 10 (http://hparchive.com/PARTS/HP-Catalog-1973-74-Comps-RF-Microwave.pdf): In the same document, the 33600 series coaxial switching modules alluded to in the image above are also briefly mentioned (p.6). Is it possible to infer the topology of this unit from the information immediately available? AI: The clue is in those four arrows forming an X. That's a common way to draw a quadrature hybrid coupler. A signal incident on one input splits its power equally between the two outputs, but with 90 degrees phase difference between them. If you put the same type of load on both outputs of a quadrature hybrid, then the reflections from those loads will add up in anti-phase to the input port, and arrive in-phase at the other input port. This means that regardless of the loads, as long as they are equal, the input to the hybrid will stay matched and reflection-free. You can see that there is a dummy load on the other hybrid 'input' to absorb these reflections. Now the stage is set for any type of PIN diode attenuator to be placed in the two positions between the two hybrids. Without the hybrids, you would have to carefully design the resistance of an attenuator formed from series and shunt elements to try to keep it matched. With the hybrids, you can use simple shunt diodes (the A,B,C examples) or more complicated series/shunt diodes (D,E,F examples) as the attenuators, and the hybrids route reflected power into the dummy loads, and not back to the user. The diagram suggests it's a shunt PIN diode arrangement, and suggesting several are used in parallel. In practice, a common way to use shunt diodes is to place them along a transmission line, spaced λ/4 apart at the highest operating frequency. This spacing does an effective shunt->series conversion on each diode, improving the depth of attenuation that can be obtained, at the higher frequency where the shunt diodes' residual inductance is limiting the maximum attenuation that each diode produces.
H: 800V to 250V pulses conversion I need to convert the 800 V output from a Pockels cell driver, with 52 kHz of bandwidth. The load is capacitive, 11 pF (drawing a surge current of 100 mA or more), specifically an electro-optical modulator that requires 250 V pulses at 52 kHz or lower. What would be the best solution to transfer 800 V differential to 250 V single-ended? The best idea that comes to my mind is a transformer. AI: If the load is 11 pF, the easiest solution is to use a capacitive dropper. So, if you used 11 pF in series with the 11 pF Pockels cell, the voltage on the cell would be halved from 800 volts to 400 volts, but you would have to add an extra 5.5 pF in parallel with the series Pockels cell and 11 pF to ensure that the overall load seen by the generator remains at 11 pF. That's the general idea and, to get 250 volts across the cell, you need a series capacitance of 5 pF in series with the cell. This pair has a net capacitance of about 3.44 pF and, overall, you will need a further 7.56 pF in parallel with the series pair to maintain the correct load (11 pF) to the generator. You could use a transformer but it might alter the characteristic shape of the applied pulse, making the cell operate incorrectly. simulate this circuit – Schematic created using CircuitLab
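The same arithmetic in a minimal C sketch, assuming the driver behaves as a stiff voltage source so the simple capacitive-divider formula applies:

#include <stdio.h>

int main(void)
{
    const double v_in   = 800.0;   /* driver output, V          */
    const double v_cell = 250.0;   /* wanted across the EOM, V  */
    const double c_cell = 11e-12;  /* modulator capacitance, F  */

    /* Capacitive divider: Vcell = Vin * Cs / (Cs + Ccell)       */
    double c_series   = c_cell * v_cell / (v_in - v_cell);
    double c_combined = c_series * c_cell / (c_series + c_cell);
    double c_padding  = c_cell - c_combined;  /* restores an 11 pF load to the generator */

    printf("Series capacitor   : %.2f pF\n", c_series * 1e12);
    printf("Series + cell combo: %.2f pF\n", c_combined * 1e12);
    printf("Parallel padding   : %.2f pF\n", c_padding * 1e12);
    return 0;
}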
H: Weird comparison between MOSFETs for ESC I am designing an ESC (electronic speed controller) for a BLDC motor which is 3500 W, 85 A and 44 V. The first thing is to choose the MOSFET, which should be easy. I first found the IRFS7534, which has a 1.6 mΩ RDS(on) and 255 amps. I viewed its family and found the IRFS7530, which has a 1.15 mΩ RDS(on) and 338 amps. But for some reason, most sellers on AliExpress claim that the IRFS7534 handles more current than the IRFS7530, which is the opposite of what the datasheets say. I moved on searching, saw some ESCs from manufacturers, and saw that they use the NTMFS5C612L (5C612 for short). And again, I viewed its family and saw that its brother (5C604) is better in everything from RDS(on) to gate charge. But here is the most perplexing part: Mouser says that the 5C604 is not recommended for new designs while the 5C612 is new and recommended. Also, I found that the 5C604 is half the price of the 5C612. I don't know now which I should go for: the 5C612, which is more trusted, or the 5C604, which is better on paper?
AI: I moved on searching, saw some ESCs from manufacturers, and saw that they use the NTMFS5C612L (5C612 for short). And again, I viewed its family and saw that its brother (5C604) is better in everything from RDS(on) to gate charge. But here is the most perplexing part: Mouser says that the 5C604 is not recommended for new designs while the 5C612 is new and recommended. Also, I found that the 5C604 is half the price of the 5C612.
The manufacturer probably chose whichever FET they could get cheapest that would do the job. Perhaps they got a good deal on an older part, or they still have stocks from years ago. When they run out you might see a newer part in the next production run. When selecting a part for your own design, consider what characteristics you need, and (particularly important these days) availability of the part and possible substitutes.
I am designing an ESC (electronic speed controller) for a BLDC motor which is 3500 W, 85 A and 44 V... I first found the IRFS7534, which has a 1.6 mΩ RDS(on) and 255 amps. I viewed its family and found the IRFS7530, which has a 1.15 mΩ RDS(on) and 338 amps.
85 A² × 1.15 mΩ ≈ 8.3 W of conduction loss. But at 125 °C the resistance is ~2.2 mΩ, so the power loss is ~16 W. At a duty cycle of 1/3 you have ~5 W loss per FET (more when switching loss is included). You might need to put 2 or more FETs in parallel to keep the temperature down.
most sellers on AliExpress claim that the IRFS7534 handles more current than the IRFS7530
Don't trust anything you read on AliExpress, and don't buy from there unless you are willing to risk getting fake parts.
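A small C sketch of the conduction-loss estimate above (the 2.2 mΩ hot resistance and 1/3 duty cycle follow the reasoning given here; switching loss is deliberately ignored):

#include <stdio.h>

int main(void)
{
    const double i_motor = 85.0;       /* motor phase current, A                        */
    const double rds_25c = 1.15e-3;    /* IRFS7530 typical Rds(on) near 25 C, ohms      */
    const double rds_hot = 2.2e-3;     /* approx. Rds(on) at 125 C (datasheet curve)    */
    const double duty    = 1.0 / 3.0;  /* each leg conducts roughly 1/3 of the time     */

    double p_cold = i_motor * i_motor * rds_25c;   /* I^2 * R conduction loss, cold     */
    double p_hot  = i_motor * i_motor * rds_hot;   /* same loss with the hot resistance */

    printf("Conduction loss, cold: %.1f W\n", p_cold);
    printf("Conduction loss, hot : %.1f W\n", p_hot);
    printf("Per FET at 1/3 duty  : %.1f W (plus switching loss)\n", p_hot * duty);
    return 0;
}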
H: Effective way to generate low frequency square wave My goal is to generate a 40 Hz square wave in a reasonably cheap and effective manner with a precision of around 1%. I considered a few approaches:
555 chip: it seems they are themselves only around 1% accurate, and I found it really hard to get reasonably accurate capacitors that do not cost a fortune.
Oscillator: they all seem very high frequency and I couldn't find ones that could easily be divided down to ~40 Hz without needing a crazy chain of logic ICs.
555 chip (2): with a variable capacitor, manually calibrating it with a scope. Aside from the tediousness of the procedure, it also seems that variable capacitors are quite expensive and also have very low values.
555 chip (3): with a potentiometer and hand calibration.
Am I missing an easy solution here? AI: One problem with RC oscillators that concerns you here is drift with age. It may be possible to find components that will drift slowly enough to stay within 1% of their rated value for a few years, but it's not a trivial task to find them. You won't be able to use an electrolytic capacitor as a timing element, for example, since it will dry out with age, changing its capacitance. Sometimes that's not a big issue (i.e. power supply decoupling), but for this application, that's a problem. So you'll have to stick with something like polyester or ceramic. Changes in temperature may be the biggest hurdle to overcome. I think it's reasonable to expect a change in frequency of well over 1% over a temperature range of 0°C to 40°C. To obtain 40 Hz with a 555, the main timing capacitor will need to be of the order of 100 nF, to be able to use reasonable values for the resistors (in the tens of kilohms). One consequence of this is that the trimmer capacitor you place in parallel with it will have to be roughly 1% of that value, and you may have trouble finding that. Therefore, calibrate using a variable resistor, not a variable capacitance. You can find trim-pots of any value quite easily. How square do you want this signal? A 555 can't easily produce a square wave. The output of a 555, in the standard astable configuration, is rectangular at best. It will not have 50% duty cycle. You can modify the configuration with a diode to produce something quite close to 50%, but I've struggled in the past with this dilemma. You can solve that particular problem by producing 80 Hz, and dividing by two using a counter or flip-flop IC, like a 74HC74. If you want precision, you need a crystal oscillator, which you can build easily with a crystal, a couple of inverters and a couple of tiny capacitors. Unfortunately you will also need a clock divider IC, since the oscillator will be at least several kilohertz. These days I almost always go the microcontroller route, using something like an ATtiny. This IC has a built-in RC oscillator, but it's heavily temperature and power-supply dependent, like your DIY version. For 1% accuracy, regardless of power supply voltage and temperature, a crystal seems to be the way to go, and you can use one with a microcontroller. With a microcontroller, you need only two external capacitors and the crystal. The result will be a stable clock well within 1% of its rated value. The microcontroller will have the resources to divide by any factor you choose, to obtain 40 Hz. The output can be perfectly square, too. All of this configuration is done in software.
If you still require some means of calibration, it's trivial to connect an external trimmer potentiometer to the microcontroller's ADC, and in software read its value and adjust the frequency accordingly. This way, you avoid having to write software unique to each production unit. Lastly, using a microcontroller permits you to do anything you like to the signal, such as start the oscillator 500ms after power-on, or stop it if some input goes low, without the need for additional specialised hardware to implement these things. Microcontrollers are cheap, predictable and flexible. RC oscillators are cheap, unpredictable and inflexible. I think this is a no-brainer. Your investment is time, and a programming setup, but the rewards are always worth it.
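To make the crystal-plus-divider route concrete, here is a minimal sketch; it assumes an ATmega328-class AVR running from a 16 MHz crystal purely for illustration (ATtiny parts have slightly different register names). Timer1 in CTC mode toggles its output pin in hardware, so the result is an exact 50% duty square wave whose accuracy is that of the crystal:

#include <avr/io.h>

int main(void)
{
    DDRB  |= (1 << DDB1);          /* OC1A (PB1) as output                        */
    TCCR1A = (1 << COM1A0);        /* toggle OC1A on each compare match           */
    TCCR1B = (1 << WGM12)          /* CTC mode, TOP = OCR1A                       */
           | (1 << CS11);          /* timer clock = F_CPU / 8                     */
    OCR1A  = 24999;                /* 16 MHz / (2 * 8 * (24999 + 1)) = 40.000 Hz  */

    for (;;) {
        /* Nothing to do - the waveform is generated entirely in hardware. */
    }
}

Retuning the frequency, or implementing the ADC-based trimmer calibration mentioned above, is then just a matter of changing OCR1A at run time.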
H: Using dual output LDO I want to use this LDO: AP7346D-1828FS6-7. This LDO has two outputs and it has "Enable" controls. I want to connect these outputs 1.8V and 2.8V for one input. I can control this with EN pins, but I didn't see any information about this usage in datasheet. Is it okay or not? What I want is something like this: AI: No, you cannot connect the outputs of the AP7346D directly even if you only enable one at a time. They each have an internal discharge transistor that is enabled whenever the output is disabled, and that would heavily load the enabled regulator (about 50\$\Omega\$) .
H: How does the Arduino 5VDC relay module work? Using SRD-05VDC-SL-C I was wondering how this 5 V relay module works exactly. I understand how the actual relay circuit (SRD-05VDC-SL-C) works but I'm not totally sure how the signal and 5 V inputs work. My guess would be that the 5 V is somehow dropped down to, let's say, 4 V, and then is combined with the signal voltage (let's say 3.3 V from a 3.3 V pin of an Arduino). If the signal input is HIGH then the relay (SRD-05VDC-SL-C) receives a voltage greater than 5 V and closes, but if the signal input is LOW then the relay is receiving just the 4 V from VCC and stays open. If that is the case, how does the relay module do this using diodes, resistors, etc.? If that isn't the case, then does anyone know how this thing works? AI: The Arduino controls an NPN transistor. The transistor controls the relay. There is no mixing of voltages as you describe. The transistor is current controlled. Looking only at on/off operation, the relay coil either gets 5 V across it or is disconnected, like a switch. The basic schematic is essentially this: the transistor acts like a switch.
H: Why is a pentode tube more power efficient than a triode tube? As title. I read some article and it says something like this: Tetrode was invented to reduce grid-plate capacitance. Pentode was invented to reduce secondary emission. I also heard that a pentode has greater power efficiency than a triode. I would like to know the principle and mechanism behind this. AI: The tetrode was a remarkable invention, because its screen grid removed a great part of the next weakness of the triode (a Wikipedia image): You see that the plate current Ia depends on grid voltage and also heavily on the voltage between the plate and the cathode. The latter is not wanted: tube users want the grid voltage to control the plate current like in a theoretical dependent current source. A tetrode is closer to the desired operation, but secondary emission partially spoiled the function. The third (=the suppressor) grid in the pentode made the curves much more ideal: The curves are captured from the EL84 datasheet (see NOTE1). The full explanation is available for ex. here: https://www.electroniclinic.com/pentode-construction-and-characteristics-in-electronics/ About the power efficiency: the non-horizontal Ia vs Ua curves mean effectively there's a parallel resistance inside the triode. It dissipates a part of the AC signal power like a real resistor. The pentode is not fully free of it, but with sufficient anode voltage Ua the Ia curves are much more horizontal than for the triode. That makes the pentode dissipate much less power for a certain AC output signal power. NOTE1: There's still a little left of the same weakness as in tetrodes. When Ug1 =-10V and Ua is 10...20V Ia becomes lower as Ua increases. That's negative resistance. It can cause spurious oscillations in amplifiers if the operation is not kept out of that area. The tetrode has much more kinky Ia vs Ua curves at low Ua.
H: ESP32 boot strapping pins problem (GPIO15 and GPIO5) I am designing a PCB that includes an ESP32, and I have a question about the strapping pins of the ESP32-WROOM-32E. My design matches the picture below only for the GPIO0, GPIO12 and GPIO2 pins. Unfortunately, I have to connect GPIO15 (MTDO) to GND with a pull-down resistor (30 kOhm) to keep a few elements disabled at power-up, for power consumption reasons. Also, all the other pins are already used. Will this be a problem for booting? I read the strapping-pins section of the datasheet, but I did not understand it; in my opinion there is not enough information. Is silencing the boot message an obstacle to booting? The boot message is not important to me. Can I connect this pin with a pull-down resistor? The same situation applies to GPIO5, so I'm asking the same question for GPIO5 too. Do you expect any problem when booting? AI: Today I tried connecting GPIO15 and GPIO5 to GND with 26.7 kOhm pull-down resistors, and I had no problems booting. During boot, the voltage on GPIO15 and GPIO5 dropped to about 1.7 V; in my opinion it behaves as a basic voltage divider, which also shows that these pins really do have internal pull-up resistors enabled during boot. After boot, i.e. after reset, I was able to configure them as logic 1 or logic 0 with no problem. But when I don't configure these pins, they stay at the default level of about 1.7 V. Later, during deep sleep, I checked these pins: the ESP32 really does pull them down, and I had no problem. Now I still have a question mark about deep sleep: does the ESP32 draw unexpected current due to the pull-up resistors when it sleeps - for example, a small sink current out of GPIO15 or GPIO5? I read the datasheet further and found the passage I highlighted in yellow below, and I am confused. I just wanted to put my mind at ease and ask you; I want to clarify this before I send the board to manufacturing. @jayben @bobflux
H: Is this an I2C ACK? Else why are there 9 bits? Since my previous question I've reduced the clock to 10 kHz and am now trying to interpret what the pulses mean. My reading of the image is 0b00110010 being transmitted, which is exactly what I would have expected from this code:

const uint8_t address = 0x49 << 1;
const uint8_t tempr_reg = 0x00;
static char buf[12];

// Read 2 bytes, MSB first.
HAL_I2C_Master_Transmit(&hi2c1, address, &tempr_reg, sizeof(tempr_reg), HAL_MAX_DELAY);
HAL_I2C_Master_Receive(&hi2c1, address, buf, 2, HAL_MAX_DELAY);

Here is a zoom-in of the pulse. Why is it 9 bits later instead of 8? Is the SDA low (between the red line and the SDA pulse) a slave ACK? If so, who holds SDA low during CLK==high after the 8th bit? Yet the pulse occurs after the ACK - I'm really confused. AI: Why is it 9 bits later instead of 8? Is the SDA low (between the red line and the SDA pulse) a slave ACK? Yes. Check the first half of this figure, from the I2C standard: If so, who holds SDA low during CLK high after the 8th bit? The explanation follows the figure on the same page of the document: Edit following the comment below: The device holding SDA low switches over twice (controller -> receiver -> controller) during the ACK procedure. I agree with @Justme that the spike you see is caused by the receiver releasing the SDA line at the end of the ACK.
H: BJT saturation region problem I have a graph and a definition like below: Saturation is the state of a BJT in which the collector current has reached a maximum and is independent of the base current. Assume that \$V_{BE}=1V\$, \$I_{B1}=5\cdot 10^{-6}A\$ and \$V_{CE}=0.6V\$. If I maintain \$V_{CE}\$ constant and change the value of \$I_B\$, why can't I make \$I_C\$ jump up to somewhere like the \$I_{B2}\$ line, according to the definition above? This is the common-emitter configuration. AI: I don't think you're interpreting the graph correctly. There are some conditions implied by the graph. First, the independent variable (the X axis) is \$V_{CE}\$, which suggests that you provide \$V_{CE}\$ by directly applying a potential difference between the collector and emitter. This means there's no collector resistor or emitter resistor, or any other components in those places, just a voltage source between collector and emitter. This would be the setup used to determine the characteristics of the transistor as shown in the graph. This does not mean you must do this in an actual application, just that these were the test conditions used to establish the behaviour of the device. Second, the dependent (Y) axis is collector current \$I_C\$, and is not under your control in this experiment. It's the transistor that will decide what current flows, not you. The author of this graph will simply have placed an ammeter in that current path (between the voltage source I mentioned earlier, and collector), and used it to determine what collector current flows as he varies \$V_{CE}\$. Thirdly, \$I_B\$ is under the author's control, but it's kept constant while \$V_{CE}\$ is varied. The entire process of varying \$V_{CE}\$ and plotting the consequent \$I_C\$ is repeated several times, once for each plot, each time with a different value chosen for \$I_B\$. The manner in which you set \$I_B\$ to a constant value, while you plot \$I_C\$ vs. \$V_{CE}\$, is some kind of current source. This whole setup is simple, and easy to simulate: simulate this circuit – Schematic created using CircuitLab This is the plot of a DC sweep simulation, where I've set \$I_B = 1mA\$, and I sweep \$0V \le V_{CE} \le 2V\$: Notice that I've not specified \$V_{BE}\$ anywhere, and this is because it's almost never up to us to decide this value. When the base-emitter junction (a P-N junction, just like a silicon diode) is sinking current from some source, it's effectively a silicon diode, and will develop its own potential difference in the vicinity of 0.7V. The exact value is decided by the transistor itself. \$V_{BE}\$ is almost never imposed externally upon the transistor, and can never exceed 0.7V. Any more than that, and damage can occur. Perhaps the most meaningful interpretation of this graph (at least to me) is as follows. In the active region, the curve is pretty straight. It is slightly sloped, even though that slope is barely discernible. This is a graph of current through a thing vs. voltage across the thing, and if that thing was a resistor, the slope of that graph would be its resistance, in ohms. This is no different; essentially, the slope of the active "linear" region is directly related to the effective resistance of the collector-emitter path, for some given \$I_B\$. Technically, the rise-over-run slope is the reciprocal of resistance, i.e. conductance, but in the parlance of transistors we call it "admittance". Probably because the straight lines of I/V graphs for ohmic components pass through the origin.
A transistor is not ohmic, and the straight sections in the active region of your graph clearly all miss the origin by a mile.
H: Do I need a GPS simulator to design a device that contains a GPS receiver and a pre-designed antenna? I have designed a device using an STM32 Nucleo board, a GPS module, a GSM module, some discrete components, and a bread board. It is a sort of a car tracker. I have decided to design my own custom board which includes all the modules that I have bought separately. I have designed some boards before but I haven't gotten anything fabricated yet. There isn't much info on the topic and what little is written on the internet does nothing but confuse me more. I came across an article on this website, which says that I need to either buy a GPS simulator which costs around 10 to 15 thousand dollars or add an entire pre-built module to my PCB. What confuses me is: How can I add an entire module to my PCB? Should I add round pin holes to my PCB and connect them to my MCU using traces and then solder the entire module to those pin holes? That sounds a bit silly to me but that's what I understand when they say "use a module instead of a chipset" because I can sort of "reverse engineer" the module that I have and discover which components are present but then I still don't know the stack up. Given that I am going to buy a GPS antenna as well as a GPS receiver chip, I thought all I needed to design were the transmission lines. Why would I need a simulator? (Isn't that used to test your GPS antenna? I'm buying a pre designed one.) I am planning to use a SIM808 chip and there is a hardware design guide for it on the internet but it doesn't say anything about a simulator there, only that some traces must have their impedance limited to 50 ohms. When they say a GPS module which one of these do they mean? I guess I could just wing it and design something and get it manufactured to see if it works or not but making sure that I understand everything correctly before I do so could save some time and money. AI: RF design is not trivial. "It looks exactly the same" doesn't mean it will work exactly the same. The very material of the PCB makes a difference to how the signals behave at the frequencies used for GPS signals. The traces form inductors, and the traces together with the PCB material form capacitors. You can normally ignore that kind of thing when working with microcontrollers or audio signals. When you get to gigahertz signals, that kind of stuff matters. The thickness and composition of the PCB (fiberglass layers and epoxy) can change the capacitances enough to cause problems - and you have no control over those variations. People who do that kind of thing work together with the PCB manufacturer to be sure that variations don't cause problems. A hobbyist doesn't have the equipment to measure the effects - or much ability to get the PCB manufacturer to do anything to fix problems. Your mention that for the SIM808 "some traces must have their impedance limited to 50 ohms" says you don't understand impedance enough to design a PCB that will work at the required frequencies. GPS signals are extremely weak. Receiving them requires getting a lot of things right. If I were doing your project, I'd use complete modules with the PCB taking the place of the bread board. Yes, pads on the PCB to solder the modules to, or pin headers or sockets to connect the modules. The SIM808 isn't just a GPS receiver. It is a GSM cellphone module with a GPS receiver. Besides the difficulties in getting the GPS part to work correctly, you'd also have to deal with the GSM parts. 
The GSM parts are subject to FCC regulations (or whatever your country's equivalent is.) That means that if you get it wrong and cause problems for other people, the FCC can come and take your toys away. For that reason alone you should stick to the modules.
H: Why use N-channel and P-channel MOSFETS with an ESC instead of just using N-ch MOSFETS? I am following this build for an ESC. I don't understand why this build uses both n-channel and p-channel MOSFETS instead of just using N-channel MOSFETS because they are less expensive and easier to find. Why go through the trouble of inverting the signal with a 74HC14 and then applying it to the gate p-channel MOSFET instead of just directly applying it to the gate N-channel MOSFET instead? What's the advantage of doing this? AI: Several things: As winny commented, it can be easier to do. And at low-ish power levels and low frequencies (a motor's going to run max ~kHz?), the poorer performance of PMOS isn't so significant. Bootstrap NMOS drivers are fine, but they do have a power requirement: either periodically refreshing (low side turns on, recharging the bootstrap capacitor), or with a charge pump or isolated supply (which allows static (down to DC) operation). Hence the frequency limitation. For a motor, maybe this wouldn't be a problem, but then again maybe it would. (If the high side doesn't have sufficient voltage, it simply doesn't turn on at all -- possible result, a motor that never spins.) The circuit is faulty. I have seen very few actually-good designs on Instructables, in general, and this appears to be no exception. It's almost as if it's made to fail, which seems to clash with the claim that it "works" later on. The problem is this: the TC4427 gate driver outputs a 0/15V level. The 15V supply is derived from a 7815 regulator, which drops about 2V minimum, and more when VCC > 17V. The PMOS senses a voltage between gate and source, and when gate is less than Vgs(th) below source, it's on. This is typically 2V or so. So already we see it's impossible for the PMOS to fully turn off, basically ever. This is only worsened at higher VCC (say 24V, where the PMOS is always turned fully on!). Most likely the transistors destroy themselves; hopefully the user has fused the supply, or used a current limited supply, to limit damage here. There's also the issue that, when the gate driver itself is disabled (low supply voltage, or when disabled for certain types), the output pulls to zero, forcing the PMOS on once again. There are very few PMOS-specific gate drivers out there; at the point you need them, you're better off using a bootstrap or other type NMOS driver and reaping the performance benefits. The best case for complementary power transistors is at low voltages, where gate drive can be taken directly from logic pin outputs -- 74HC/LVC for example, or an MCU or controller ASIC. The same supply is used for logic and motor supply (say 3.3 or 5V), and so there's no issue with insufficient or indeterminate gate voltages. These are of course quite low power levels, like you'd have in a hard drive spindle motor say, not for real motive power like a bicycle.
H: Can you damage LCD backlight when you frequently turn it off and on? I have this small project with PIC microcontroller and LCD display WH1602A-TTI-ET. I am controlling the backlight via software, so I attached it to +5V via PNP transistor that is controlled by one of the pins of the PIC microcontroller. The problem I have is that when I am flashing new program into the microcontroller it seems to randomly send out some "noise" on its pins (which is probably normal) and the LCD backlight keeps going on and off incredibly fast and I am worried this might damage it. Since I am no expert in LED electronics my simple question is - can this wear it out or damage it? Should I physically disconnect it in times when I am flashing new program, or it doesn't matter? AI: It's quite common to use PWM (on/off switching) control of the LED backlights of displays so there shouldn't be any problem if you have some occasional flicker during programming. But it's good practice to keep this in mind when doing circuits which are sensitive to this. Usually this is solved with some kind of buffering that inhibits the signals to propagate during programming.
H: Convert mains voltage to analog input level with the least amount of parts For a future product I will utilize analog-input PCB-mounted modules from Beckhoff that can measure either 0-10 V single-ended, -10/+10 V differential, 0-20 mA or 4-20 mA signals. I would like to measure the instantaneous voltage level of mains line power potentials (110-240 V AC) in order to gain the required data on the current phase of the alternating voltage. For my application, minimum complexity has priority. All-in-one elements are preferred over general-purpose components. What are some good ways to convert the mains line voltage level to any of these possible analog inputs with the least amount of additional components while retaining adequate protection? I would also appreciate some suggestions concerning possible suitable components. I did some research on the (getting harder to search) web and came across the known options of optocouplers and transformers. The former needs considerable additional parts around it to operate linearly, and for the latter it is hard to find a suitable part. I also looked for integrated circuit components that are specialized for this task but didn't really find anything. Considering fault protection, the analog input modules have plenty of protection integrated inside themselves; the line side less so. Beckhoff does offer modules for this purpose but those are not PCB-mount and would require additional overhead due to that. AI: If you want to measure the instantaneous AC mains voltage, something like an isolation amplifier should do the job. Often they require an isolated supply so you would add a DC-DC converter rated appropriately. If they don't, they're typically larger and much more expensive. The AMC1350 is the first one I happened to look up, so it may not be optimal on any particular account, but it should be illustrative: Omitted from the above diagram is the DC-DC converter for the high-side supply (or you could derive a supply from the mains voltage using a power supply module of some kind). The voltage divider needs to be rated for mains voltage operation (including transients) and the ratio should reduce the mains voltage to the +/-5V range of this particular chip. The bandwidth of this part is several hundred kHz so it should not affect the phase much. If you want to proceed in this manner, determine your actual requirements a bit better, and do a parametric search for this kind of part and evaluate the various possibilities.
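As an illustration of sizing that divider, here is a C sketch; the 240 V mains, the ±5 V input range mentioned above and the 1 MOhm / 13 kOhm values are assumptions for a rough sanity check, not a vetted design (the top resistor string also has to be rated for mains transients):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double v_rms  = 240.0;            /* worst-case mains voltage, V rms        */
    const double v_peak = v_rms * sqrt(2.0);
    const double r_top  = 1.0e6;            /* assumed series (high-side) resistance  */
    const double r_bot  = 13.0e3;           /* assumed bottom resistor                */

    double ratio = r_bot / (r_top + r_bot);
    double v_out = v_peak * ratio;

    printf("Mains peak             : %.0f V\n", v_peak);
    printf("Divider ratio          : 1/%.0f\n", 1.0 / ratio);
    printf("Peak at amplifier input: %.2f V (must stay inside +/-5 V, with headroom\n", v_out);
    printf("for mains tolerance)\n");
    return 0;
}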
H: How do I wire a transistor to a dpdt relay I have TQ2-L2-3v DPDT relay that is getting activated by an ESP8266-12e pin. The problem is the pin doesn’t always output enough current to activate the relay so I want to wire a transistor (2n2222) to it. My question is since the only relays I have thus far connected to the 2N2222 have been non latching with the positive and ground interchangeable The TQ2-L2-3v has a designated + and - so I am not sure which way to wire it up. I have made two schematics with the two different way I can think of the wiring. Can someone tell me which way is the correct A or B, or are these two incorrect and what is the correct wiring? AI: L2 designates a two coil latching relay, so you would need another transistor and GPIO pin. The + pin will go to Vcc (it will be more positive when the transistor is conducting), but I'm not sure of your diagram- it appears to be rotated 180° for the markings to be readable, assuming it's the top view. For future reference, please link to the datasheet PDF in your question text.
H: What kind of motor is needed to make this work? Not exactly sure what kind of AC or DC motor would be needed to make this. Is it even a realistic idea to make electricity by using a motor as a generator? https://www.youtube.com/watch?v=V3NtSp6aAbs AI: Almost any permanent magnet, brush-type DC motor (one having a commutator) should work as a generator. Don't use a brushless motor, as found in computer fans. Motors from an inexpensive battery-powered toy would likely work. That said, the power output depends on the speed of rotation, the number of poles, permanent magnet strength, etc. Almost any small motor, when spun by hand, could light an LED, for example, but not a small incandescent flashlight (torch, for the UK) lamp.
H: Resistor surge current rating: Why is the pulse energy withstand capability lower for short duration? As per an online surge capability calculator, the pulse energy withstand capability is low initially and increases with higher pulse width. In short, we are not allowed to pass a narrow high-current pulse and simply hope that the energy content is within the short-term overload capability. AFAIK this is the sequence of events when current flows through a resistor:
Heat is generated in the resistor (instantaneous) -> based on \$E = i^2 R t\$.
The heat gradually increases the temperature of the resistor -> based on \$Q_{absorbed} = mc\Delta T\$ and the conduction rate \$Q = \dfrac{KA\Delta T}{d}\$, where c is the specific heat capacity, m is the mass, Q is the transfer of heat per unit time, K is the thermal conductivity of the body, A is the area of heat transfer, and d is the thickness of the body.
What is meant by the statement "This positive sloping section of the curve is when a sufficient amount of time has passed to allow for the heat energy to transfer from the wire into the surrounding"? Wouldn't heat transfer start to happen the moment there is a temperature difference between 2 surfaces? The higher the body temperature, the faster the rate of transfer. I don't understand the mechanism of the resistor heating up 'differently' for a large narrow pulse. I am not sure whether the thermal time constant comes into the picture anywhere. AI: I don't understand the mechanism of the resistor heating up 'differently' for a large narrow pulse Let's take this to the extreme. Imagine I have a 1/4 W resistor, and I apply 1/4 W to it for 1 year. In that case I have applied 1/4 W * 1 year = 7.88 megajoules of energy to the part, but because I applied it over the course of one year the heat had time to radiate out of the part. If I tried to apply that much energy in one second, I would likely vaporize the part and anything near it. A simple model of heat transfer usually looks like $$ \frac{dE}{dt} = \frac{T_2-T_1}{R}$$ That is to say, the rate of energy transfer is equal to the difference in temperature between two points, divided by the "thermal resistance" separating those points. So, heat does indeed begin to flow immediately, but the energy is not transferred instantly. Instead, it flows proportional to the temperature difference. Compared to other things heat moves relatively slowly. In many cases it might take many hours of operation for the temperature of a circuit card and all its components to stabilize to their final operating temperature. Another thing to note is that most materials have a "heat capacity" measured in joules per kelvin, which is the amount of heat (J) required to raise the temperature of the material by 1 K. In most parts, the heat is not generated uniformly in the whole bulk of the part, but rather in "hot spots". Those points where the peak temperatures occur are often the failure points. A thick-film SMD chip resistor is typically a non-conductive ceramic substrate with a serpentine resistive pattern printed onto that substrate. https://www.mouser.com/datasheet/2/427/patt-1764178.pdf Like all materials the substrate has some heat capacity, and therefore it takes a certain amount of energy to raise its temperature. All of the heat energy is going to originate from that serpentine pattern. That pattern is the "hot spot". If the energy is applied fast enough (ms or us) then there will not be much time for the heat to spread into the ceramic substrate (some heat will go into the substrate but very little, like microjoules or millijoules).
But given enough time (a significant fraction of a second) several hundred millijoules or even joules of heat will have moved into the substrate, and in doing so help to lower the temperature of the hot spot. Ultimately nearly all the heat will move into the PCB or the surrounding air, supporting long term operation of the part while keeping the temperature at reasonable levels.
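To put rough numbers on the hot-spot argument, here is a C sketch with invented round figures (a 0.1 J pulse, ~0.2 mJ/K for the resistive film alone and ~5 mJ/K for the film plus ceramic body - placeholders chosen only to show the scaling, not datasheet values):

#include <stdio.h>

int main(void)
{
    const double e_pulse     = 0.1;      /* pulse energy, J - assumed                       */
    const double c_film      = 0.0002;   /* heat capacity of the film alone, J/K - assumed  */
    const double c_film_body = 0.005;    /* film plus ceramic substrate, J/K - assumed      */

    /* Very short pulse: the heat has no time to spread, only the film heats up.            */
    double dt_fast = e_pulse / c_film;

    /* Slower pulse: the substrate shares the energy, so the hot spot stays far cooler.     */
    double dt_slow = e_pulse / c_film_body;

    printf("Hot-spot rise, sub-millisecond pulse: ~%.0f K\n", dt_fast);
    printf("Hot-spot rise, slower pulse         : ~%.0f K\n", dt_slow);
    return 0;
}

The same pulse energy that barely warms the whole part can take a fast-pulsed hot spot past its survivable temperature, which is why the withstand curve rises with pulse width.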
H: Differential Microstrip Impedance Calculators Giving Conflicting Answers I am attempting to determine the trace width needed for routing a differential signal of 100 Ω on a PCB I am designing. This is a hobby project and the differential traces are for gigabit Ethernet. I'm very tempted to just forget about it and go with the manufacturer's numbers, but I am curious why different calculators spit out such wildly different numbers. I found some EE exchange posts which provided unsatisfactory answers: Link1 Link2 The second link seems to hint that some calculators take some extra parameters into consideration. Having never routed differential signals, I find the size difference these calculators are coming up with a little alarming, but perhaps this is normal? Parameters:
Trace separation: 4 mil
Trace thickness: 1.4 mil
Dielectric thickness: 3.5 mil
Er: 4.05
The manufacturer provides a stack-up as well as a 2D calculator for determining trace width: here The manufacturer's calculator computes a 3.49 mil trace width. When I attempted to validate this against other online calculators I found that some agree and some differ substantially. EEWeb: says the same numbers from my manufacturer would give 160 ohms! Everythingrf: agrees with the manufacturer. Can someone explain why they are different and what I need to consider when deciding trace width using these tools? AI: JLCPCB and EverythingRF use the same formula which only applies to circumstances where the trace-width-to-dielectric-thickness ratio is less than 1 (W/H < 1). (EEWeb has a good explanation on their webpage but I didn't check if they use the same formula). In your case the ratio is quite close to 1: trace width (W) is 3.49 mil, and dielectric thickness (H) is 3.5 mil. The ratio goes beyond 1 as the separation gets higher. So if you find that the dielectric thickness is less than or equal to the required trace width then you should review the stack-up. JLCPCB's first stack-up recommendation for a 1.6 mm PCB seems correct to me but the 2nd one doesn't.
H: What are the markings on this PCB? I have an E322822 circuit board with the markings A.F.054.5/04 08_03. What do those markings mean? How do I get the Gerber file for this board? AI: It appears to be an industrial control board made by Siemens, which may include a Modbus interface. You'd have to ask Siemens what the markings mean, or find a circuit diagram or device guide that explains it. Good luck. Ask Siemens. They are unlikely to provide you with Gerber files though (unless you can cart up enough wheelbarrows full of cash to make it worth their while). Failing enough wheelbarrows full of cash, you could recreate the layout using any of the freely available PCB design packages. Now that you've added photos, it is clear that it is not the Siemens board I linked to. The one you have is relatively simple. If you were an experienced user of an electronics design package, you could draw up a copy and generate the needed Gerber file(s) easily.
H: Crystal oscillators of an STM32 microcontroller board I have the STM32F302R8 MCU board. Its user manual is here. There are two crystal oscillators on the board. X3 is missing and X1 seems to be used for the small debugger microcontroller (circled in red below): In STM32CubeMX I can set the external high-speed oscillator as the main clock, but it seems X1 is used only for the debugger. If so, how come I can set the clock as HSE? Does that mean this X1 can be used for the main microcontroller? Is there a way to figure this out from the manual? For the clock configuration I can select either: or: In the above cases, is LSE connected to X3, which is missing? In the second case, is HSE the main system clock, connected to X1? AI: The answer is in the manual on p.24. Excerpt from the respective page: There are two possible default configurations of the HSE pins, depending on the version of the STM32 Nucleo board hardware. The board version MB1136 C-01 or MB1136 C-02 is mentioned on the sticker, placed on the bottom side of the PCB. The board marking MB1136 C-01 corresponds to a board, configured as HSE not used. The board marking MB1136 C-02 (or higher) corresponds to a board, configured to use ST-LINK MCO as the clock input.
H: Maximum DC power available over specific distances from source I'm working on a small LVDC project using 60 VDC to distribute power over varying distances to varying loads such as network equipment. I decided to work out the maths and see exactly how much power I can get (maximum) over specific distances. For this specific case I am assuming AWG14 (\$2.5\rm\, mm^2\$) cable is being used. The voltage drop over the cable will be: $$V_{drop} = I_{wire} \cdot R_{wire}.$$ The voltage available at the load will be: $$V_{load} = V_{supply} - V_{drop},$$ which is equivalent to: $$V_{load} = V_{supply} - \left( I_{wire} \cdot R_{wire} \right).$$ Resistance of a wire is $$R_{wire} = {2 \cdot L_{[\rm m]} R_{wire[\rm\Omega/km]} \over 1000\,\rm m}. $$ Re-writing: $$V_{load} = V_{supply} - I_{wire} {2 \cdot L_{[\rm m]} \cdot R_{wire[\rm\Omega/km]} \over 1000\,\rm m}.$$ Power will be given by voltage at the load times current in the wire. Voltage supplied in this case is 60 V so the resulting expression for power at the load can be written: $$\begin{aligned} P_{load} &= \left( 60{\,\rm V} - I_{wire} {2 \cdot L_{[\rm m]} R_{wire[\rm\Omega/km]} \over 1000\,\rm m} \right) \cdot I_{wire} \\ &= 60{\,\rm V} \cdot I_{wire} - I_{wire}^2 {2 \cdot L_{[\rm m]} \cdot R_{wire[\rm\Omega/km]} \over 1000\,\rm m}. \\ \end{aligned}$$ I am looking to maximize the power available at the load with respect to the current in the wire - so using partial differentiation: $$\begin{aligned} {\partial \over \partial I_{wire}} P_{load} &= 60{\,\rm V} - 2\ I_{wire} \cdot {2 \cdot L_{[\rm m]} \cdot R_{wire[\rm\Omega/km]} \over 1000\,\rm m} \\ &= 60{\,\rm V} - 4\ I_{wire} {L_{[\rm m]} \cdot R_{wire[\rm\Omega/km]} \over 1000\,\rm m}. \\ \end{aligned}$$ To maximize, set the derivative to zero and solve - yielding: $$ I_{wire} = {15000 \over L_{[\rm m]} \cdot R_{wire[\rm\Omega/km]} }. $$ So for example, at 50 m with a wire resistance of 14 AWG as \$ 8.286 \, \Omega/{\rm km} \$: $$ I_{wire} = {15000 \over 50 \cdot 8.286 } = 36.3 {\,\rm A}. $$ At this current, the maximum power is achieved. The voltage drop with this current can be calculated to be approx. 30 V, so the power delivered would be just over 1 kW at 1086 W. Obviously 36.26 A is a lot of current especially for 14AWG so I would cap this at a much lower level or use 3 cables to divide the current by 3. 30 V at the end of the cable can then be stepped-up or down depending on device. So taking around 10-15 % losses in efficiency for the DC/DC converter - around 900 W can be transmitted at 60 VDC over 50 m. I want to expand this to longer distances such as 200 m, 300 m and maybe further. Just want to check my thinking/maths is correct as it may be useful for someone else who may come across the same question. AI: I'm working on a small LVDC project using 60VDC to distribute power over varying distances OK, 60 volts is the input voltage. The voltage drop with this current can be calculated to be approx. 30V For maximum power transfer, the volt drop of the cable series resistance will always be 50% of the applied voltage hence it will be exactly 30 volts. Maximum power available at the load will be: - $$\dfrac{V_{SUPPLY}^2}{4\cdot R_{CABLE}}$$ Just want to check my thinking/maths is correct Well, I think you went the long way round and proved the well-known maximum power transfer theorem so good work.
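For reference, the same calculation in a short C sketch (the cable length and resistance per km are the values used in the question):

#include <stdio.h>

int main(void)
{
    const double v_supply = 60.0;    /* supply voltage, V                       */
    const double length_m = 50.0;    /* one-way cable run, m                    */
    const double r_per_km = 8.286;   /* AWG14 resistance, ohm/km                */

    double r_cable = 2.0 * length_m * r_per_km / 1000.0;     /* out-and-back loop resistance */
    double i_mp    = v_supply / (2.0 * r_cable);             /* current at maximum power     */
    double p_max   = v_supply * v_supply / (4.0 * r_cable);  /* maximum power transfer       */

    printf("Loop resistance : %.3f ohm\n", r_cable);
    printf("Current at Pmax : %.1f A\n", i_mp);
    printf("Max load power  : %.0f W (with %.0f V dropped in the cable)\n",
           p_max, v_supply / 2.0);
    return 0;
}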
H: Why does a full-wave bridge rectifier have noisy output with no load? Purely academic question: I have a little bridge rectifier made out of four 1N4001 diodes, being fed by a transformer putting out 12VAC. When I run it without anything else attached to it, the waveform looks like this: If I stick a 10K resistor on there as a dummy load, that junk in every other cycle goes away: What gives? Is there some capacitance in the diodes or something? And why is it every other cycle? (I know my scope can take screenshots; I just can't find my USB drive.) AI: ...And why is it every other cycle? The asymmetry is likely due to transformer construction. First, be aware that these high impedance loads (like an oscilloscope) make coupling capacitances overly important - with realistic low-resistance loads on the bridge, capacitance influences voltage far less. The step-down transformer likely has secondary winding laid over primary winding, so that there is some capacitance between windings. A LTspice circuit was set up with transformer windings 10:1 turns ratio...a capacitance (C1) between windings of 100pf was estimated. Not shown is some resistance in series with each winding (else LTspice fails because voltage sources driving pure inductance is forbidden):
H: Fixed vs adjustable DC-DC converter When choosing a DC-DC converter like the LM2596, does it matter if the output voltage is fixed or adjustable? Until now, if I need a fixed 5V output, I've chosen to do it using a constant 5V regulator. I did this not to use fewer components, but because I thought it would be more stable. Is this correct or is it a superstition? If we are operating at a constant voltage, are there valid reasons to prefer either one? AI: If you use adjustable regulators you can get whatever voltage you want. For example, you might want some odd voltage to feed into an LDO for a low-noise analog supply (high enough to be well away from dropout and where PSRR is high). You can also stock only one regulator type instead of a different part number for every voltage. Since there are literally thousands of LDO and switching voltage regulators available this could help a bit. That said, there are a lot of cases where 5.0, 3.3 and 1.8 are necessary and maybe even sufficient. Your main question is whether it is more stable. You'd have to read the datasheet to determine that. Probably it won't make a lot of difference- the on-chip voltage divider will use matched and thermally close resistors so the ratio should be relatively stable. You could degrade it with your external resistors if you use less stable or matched resistors (eg. a carbon film resistor for one and a metal film for the other). But usually for something like a LM2596 the on-chip reference tolerance and tempco will be more limiting (but it depends on what they guarantee, as we will see). Let's take a look at the original (NS/TI) LM2596 (not the many clones etc.) So we have an iron-clad guarantee of output voltage 5.0V +/-5% over the full temperature range and a range of input voltage and load. Pretty sweet, well maybe not the 100mA minimum load, but still good. Okay, so here we have a guarantee of the feedback voltage which we can translate into output voltage via the resistor ratio (and the feedback bias current which is typically low but could be +/-100nA). The tolerance on the feedback voltage alone over temperature is +/-5% which means that you'll have a worst-case tolerance that is always worse than using the fixed voltage regulator, since any resistor pair has some tolerance. Using +/-1% resistors, the worst-case error due to the resistors for a 5V output will be about +/-1.5%, plus some error if you use two E96 series resistors since you won't likely find values that exactly match the required ratio, plus some error due to bias current. So worst-case error is probably closer to +/-6.5~7% than +/-5%. Temperature drift similarly will be worse. A 100ppm/°C 1% resistor could drift 1% for 100°C change so, in theory anyway, you could see another 1.5% over temperature, making the error more than +/-8%.
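To see how the tolerances stack up, here is a small C sketch; the 1.23 V nominal feedback voltage and the ±5% over-temperature figure are the ones used in the discussion above, and the ±1% resistors are the assumed external parts (E96 value-matching error and feedback bias current are left out):

#include <stdio.h>

int main(void)
{
    const double v_fb_nom = 1.23;   /* LM2596-ADJ nominal feedback voltage, V            */
    const double v_out    = 5.0;    /* target output, V                                  */
    const double tol_fb   = 0.05;   /* feedback tolerance over temperature (figure above)*/
    const double tol_r    = 0.01;   /* external resistor tolerance, +/-1%                */

    double ratio     = v_out / v_fb_nom - 1.0;                 /* required R2/R1          */
    /* Worst case: R2 high by 1% while R1 is low by 1% (or vice versa)                    */
    double ratio_err = (1.0 + tol_r) / (1.0 - tol_r) - 1.0;    /* about +2% on the ratio  */
    double vout_err  = ratio_err * ratio / (1.0 + ratio);      /* scaled to output error  */

    printf("Required R2/R1          : %.3f\n", ratio);
    printf("Resistor contribution   : +/-%.1f %%\n", vout_err * 100.0);
    printf("Worst-case output error : about +/-%.1f %% (vs. +/-5%% for the fixed part)\n",
           (tol_fb + vout_err) * 100.0);
    return 0;
}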
H: Trouble finding LDM for EMI filter I am trying to use this DC-DC converter and I am having trouble recreating the recommended circuit as I'm not entirely sure what an LDM is or where to find one: From basic google searches it looks like it's some type of inductor used in differential mode filtering, but when I try to find the component itself on websites like Digikey or Mouser, I'm coming up empty. It'll either be some massive part that costs hundreds of dollars or it'll be the full filter as a tiny SMD part, which I don't want. Can I use other types of inductors? Am I just searching for the wrong thing? (I've tried differential inductor and differential choke) For context I'll be using 24 V for the input voltage and dropping it down to 5 V. AI: You're making it seem more complicated than it is. All two-port inductors are "differential mode" by definition. They can't be anything else: for common mode you need 4 ports at least. LDM is just an inductor. The "LDM" label is entirely arbitrary, although it likely means \$L_{DM}\$: Differential Mode inductance. Any inductor with a suitably low series resistance, suitably high saturation current, and a matching nominal inductance will do. Basically, any properly rated \$6.8\,\mu{\rm H}\$ choke will do. You need to know the operating current for the 24V input, and choose a choke with a saturation current \$I_{SAT}\$ larger than the operating current, as well as a maximum continuous operating current larger than the operating current. The former has to do with core saturation, the latter has to do with thermal heating of the coil winding. When you find such a choke, the equivalent series impedance will likely be suitable for the application, so no worries there.
H: Do static generators maintain a constant voltage at the terminals? Do static generators maintain the terminal voltage constant like a synchronous generator does? If not, are they at constant current? By "static generator" I mean photovoltaic systems connected to the AC grid through an inverter. AI: Grid-tied inverters will always attempt to match the grid voltage and frequency. As the grid goes up and down in voltage, so will the inverter. Assuming constant insolation, a photovoltaic system will act approximately as a constant power source. The inverter will pull as much power as it can from the solar panels, and push that out to the grid at whatever the grid voltage is at that moment. In reality, constant insolation isn't going to be true over a period of hours, and the inverter will always output as much power as it can.
H: How to check the functionality of all the pins on an STM32? What happened: I was carrying my STM32 F401RE inside a bag, and there were a few droplets of water inside the bag. When I took it out, the MCU was wet. I wiped it gently on my clothes before connecting it to the PC, and it worked with the LEDs and the buttons. AI: Where I work, I design and build circuit boards, usually based on MCUs. When I finish a batch of boards I put them into a literal dish-washer. Like this: As long as they are dry before applying power, no problem at all. Circuit boards and ICs are made to be washed. If it works, it works. I wouldn't bother trying to test the pins individually.
H: Implementing the dot convention In the following topology, a boost inductor with a secondary winding is used, in which the two windings are wound in opposite directions. Implementing this structure in reality made me confused. Based on my efforts to find a solution, I have come to the conclusion that CM and CCM windings are not structurally different, and the distinction depends on which end we consider as the beginning of the winding. For the same coil, if we look at it from one end, the direction of winding is CCM, while if we look at the same coil from the other end, the direction of winding is CM. Therefore, by changing the terminals, we can make the coil CM or CCM, which is shown in the following figures. In the first figure, both windings are wound CM, but we have changed the terminals of the second winding. Is it the same as in the second figure, which is one CM and one CCM without changing the polarities? Question 2: The choice of dotted ends is a convention, and we can consider them on the other ends as shown below. In other words, there is no practical difference between which ends of a winding we connect in a real circuit. Is this statement correct? AI: As far as I know, you are correct. In power transformers, occasionally, one of the ends may have less insulation than the other and, in this case, it is important to observe the correct connection. There are also some considerations in three-phase transformers, which lead to angular displacement. However, your statement is entirely valid for the cases you cited.
H: Single phase voltage (230V) without using a transformer? Something keeps bugging me. Let's assume we have a machine electric cabinet connected to 3x400V with no neutral wire. Only ground is supplied. How do you generate 230V? Almost every machine I come across has a voltage transformer (400V/230V) with a grounded secondary for supplying control circuits and single phase fans, the cabinet air conditioning unit..., thereby creating a pseudo TN-C-S grounding system. Why is it not allowed to omit the transformer and connect the neutral directly to the ground busbar? And then use an RCB or RCBO downstream for protection? Besides the uneven phase loading, I don't see any drawbacks to using the second approach. Am I in the wrong? AI: Even if neutral and earth/ground are connected somewhere upstream in the electrical system, under normal conditions, the neutral wire is the conductor intended for carrying return currents and the ground/earth wire is for safety and carries current only during fault conditions. So what you propose would technically work but it would break electrical code/regulations for safety. If the earth/ground wire upstream ever breaks then effectively every part in the whole machine that should be earthed/grounded becomes live with mains voltage and the 230V control parts would stop working while the machine still has working 3 phase 400V.
H: Does the integral of a sine halve if the frequency doubles? I am learning control theory for school and I came across the basic transfer elements such as the integrator. We learned that the integrator has the transfer function F(s) = 1/s or, considering only the magnitude over frequency, |F(jω)| = 1/ω, so if the frequency doubles, the gain drops to a half and so on, as in this example: Example of the transfer function of an integrator: Inductor I have also researched this and people wrote that the amplitude decreases with a higher frequency: https://electronics.stackexchange.com/a/270517/311486 How can I adjust the size of an integrated signal? Also, I plotted the functions, where it seems like for every period, the function with the higher frequency (green) fits two times into the function with the lower frequency (red): Visualisation of sine of a frequency and double the frequency So, I also tried to replicate these observations by calculating by myself, but I get the result that for double the frequency, the integral does not change. I would have expected the integral to become half of the integral at the lower frequency, because I am still integrating over the same amount of time. But when I calculate for $$2\pi f= 1$$ I get: $$\int_{0}^{2\pi} \lvert \sin(t) \rvert\, dt$$ $$= 2 \int_{0}^{\pi} \sin(t)\, dt$$ $$= 2\,(-\cos(\pi)+\cos(0))$$ $$= 4$$ and for $$2\pi f= 2$$ the same: $$\int_{0}^{2\pi} \lvert \sin(2t) \rvert\, dt$$ $$= 4 \int_{0}^{\pi/2} \sin(2t)\, dt$$ $$= 4\cdot\tfrac{1}{2}\,(-\cos(\pi)+\cos(0))$$ $$= 4$$ Does somebody maybe know whether I made a mistake in my calculation, or if the integral actually behaves differently from what I described? Thanks a lot for any ideas and help, please feel welcome to leave a comment so that I can provide additional information! AI: You don't need equations to work this out. You can just think about it. You already know that the area under the sine, from \$0\le\theta\le\pi\$, is 2. This is the sum of all the little tiny rectangular areas defined, on their vertical side, by the value of the sine function itself (the shape of which is invariant over a half-cycle), times the infinitesimal on the horizontal that is simply \$\text{d}\theta\$. The integral is just the sum of an infinite number of infinitely thin rectangles stacked side to side. Now, you can rescale the infinitesimal itself in any way you like. So, instead of an infinite number of \$\text{d}\theta\$ wide rectangles stacked side by side to span between \$0\$ and \$\pi\$, you could just as well use an infinite number of \$\text{d}t\$ wide rectangles stacked side by side to span between \$0\$ and \$\frac1{f}\$. The height of each is the same (defined by the sine.) It's just the width of each slice that varies. But since the relative width between them is some constant you create, \$\frac{\text{d}\theta}{\text{d}t}\$ (or \$\frac{\text{d}t}{\text{d}\theta}\$ if you prefer), the area obviously scales accordingly. So it is no surprise to anyone that the sum across two half-cycles of a "squeezed" sine wave integral would be equal to the sum across one half-cycle of an "unsqueezed" sine wave integral.
You could just as well take the two "squeezed" half-cycles, unpacking the tiny rectangles that are \$\text{d}t^{\,'}\$ wide, and then restack them so that two \$\text{d}t^{\,'}\$ wide rectangles (one from each of the two half-cycles) are combined to create a single rectangle that is \$\text{d}t\$ wide (where \$\text{d}t=2\:\text{d}t^{\,'}\$ or \$\frac{\text{d}t}{\text{d}t^{\,'}}=2\$) and use those new rectangles to add the area back up for the resulting "unsqueezed" half-cycle. The result isn't even hard to visualize.
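If it helps to see the two statements side by side numerically, here is a small Python sketch (my addition, not part of the original answer). It integrates \$\lvert\sin(t)\rvert\$ and \$\lvert\sin(2t)\rvert\$ over the same fixed time span, and then over one period of the faster sine: the fixed-span areas match, while the per-cycle area halves - and the per-cycle behaviour is what the integrator's \$1/\omega\$ gain describes.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 1_000_001)

# Same time span, same total area:
area_f  = np.trapz(np.abs(np.sin(t)), t)       # ~4.0
area_2f = np.trapz(np.abs(np.sin(2 * t)), t)   # ~4.0
print(area_f, area_2f)

# Per cycle of the faster sine, the area halves -- this 1/omega scaling is
# what the integrator's falling gain reflects:
mask = t <= np.pi
print(np.trapz(np.abs(np.sin(2 * t[mask])), t[mask]))   # ~2.0
```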
H: Calculating resonant frequency Here is my attempt: Can someone explain to me how to proceed? How do I deal with the negative sign here? If I get the resonant frequency now, it will be complex. Am I not supposed to use the negative sign for the capacitive reactance? AI: Their reactances are equal in magnitude but opposite in sign: - $$j\omega L = -\dfrac{1}{j\omega C}$$ The minus sign allows us to mathematically equate them as equal reactances: - $$j^2\omega^2 = -\dfrac{1}{LC}$$ And, because \$j^2 = -1\$: - $$\omega^2 = \dfrac{1}{LC}$$
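To put a number on it, here is a minimal helper (my own illustration, with made-up component values rather than the ones from the original exercise):

```python
import math

def resonant_frequency(L, C):
    """LC resonant frequency: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Example values only (not from the original problem):
L, C = 10e-3, 100e-9                 # 10 mH, 100 nF
f0 = resonant_frequency(L, C)
w0 = 2 * math.pi * f0
print(f"f0 ~ {f0:.0f} Hz")
print("XL =", w0 * L, "ohm, XC =", 1 / (w0 * C), "ohm")   # equal magnitudes at f0
```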
H: Why does the voltage not change when the resistance of the resistor changes? Why does the voltage not change when the resistance of the resistor changes? Why is the voltage after the resistor always 0 V? I am new to electronics and don't understand it. AI: The circuit is taking the battery negative as the 0 V reference. Many simulators would force you to add a ground (GND) symbol somewhere on the bottom line of your schematic to make this clear and so that the simulator knows (rather than assumes) that battery negative is to be taken as the reference. Since your power supply is 5 V then there will always be 5 V across the resistor and that won't change. What will change is the current. The current will increase when there is less resistance (as you would expect). Lesson: all voltage readings are differential. They are taken at one point with reference to another. Ground is the most useful reference in many cases.
H: Reverse recovery time of diodes and its effect in buck converters I'm currently working on a buck converter topology with a low side switched inductor. The goal is to achieve a constant current in the inductor; therefore only the mean voltage is important and output voltage filtering isn't necessary. The following picture shows a simplified circuit diagram. Q1 represents the switch used, L1 the load inductor (which additionally has some series resistance) and V1 the gate driving voltage source. The diode D1 is a standard silicon diode. For the following, the starting point is as follows: It is assumed that a current is already flowing in the freewheeling circuitry through D1 and L1 because of previous switching events of the MOSFET. This current shall be named I1. The MOSFET is currently not conducting. My understanding of this circuit is as follows: When the MOSFET is switched on, it has to carry the current I1 and also the reverse recovery current of D1 before the voltage across drain and source of the MOSFET can fall. Then, D1 is reverse biased and the input voltage drops across the load L1 and D1. My question is as follows: The datasheet of a silicon diode specifies its reverse recovery time. When the MOSFET is switched fast, which results in a high di/dt in this case, the peak current (I1 + reverse recovery current of D1) could be reached fast and the voltage drop across the drain-source path can be lowered quickly. If the reverse recovery time of D1 is too long for the application, meaning that it can't block the reverse voltage fast enough, can there be some kind of shoot-through event where the diode still conducts while the voltage across the MOSFET decreases, resulting in current spikes? Or did I just not fully understand the circuit's behaviour? AI: Your general understanding of the diode behaviour is correct. When the FET turns on, the voltage at its drain doesn't begin to fall until the FET's current exceeds the inductor current plus the diode reverse recovery current. Generally the reverse diode current immediately after switching could be more than 2x the original forward current (which was the inductor current). When switching fast, it is easier to consider the diode's reverse recovery charge. This charge causes an energy loss of roughly \$Q_{rr}\cdot V_{cc}\$ on every switching cycle (i.e. an average power loss of \$Q_{rr}\cdot V_{cc}\cdot f_{sw}\$). Depending on your current, voltage and efficiency needs, a Schottky diode might be more suitable than a standard rectifier. Note your circuit doesn't have an output -- unless your goal is just to have a (triangular waveform) constant current in the inductor.
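To get a feel for how much the recovery charge matters, here is a rough Python estimate (my own sketch; the numbers are placeholders, not values from the schematic above - take \$Q_{rr}\$ from the actual diode's datasheet at the relevant forward current and di/dt):

```python
# Rough reverse-recovery loss estimate for the freewheeling diode.
Q_rr = 2e-6      # reverse recovery charge, C (placeholder for a slow rectifier)
V_in = 48.0      # voltage the diode blocks after recovery, V (placeholder)
f_sw = 100e3     # switching frequency, Hz (placeholder)

energy_per_cycle = Q_rr * V_in            # joules lost at each MOSFET turn-on
avg_power_loss   = energy_per_cycle * f_sw
print(f"{energy_per_cycle * 1e6:.0f} uJ per cycle -> {avg_power_loss:.1f} W average")
```

Swapping in a Schottky or fast-recovery diode with a far smaller \$Q_{rr}\$ shrinks this term correspondingly, which is the point made above.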
H: Series RLC - how does resonance amplify? A series RLC circuit has a resonant frequency f0. If I apply a sinusoid at frequency f0 to this series RLC circuit, the output of the RLC should be an oscillation at frequency f0 with an amplitude higher than the sinusoidal input amplitude. However, a property of a series RLC circuit is that, at resonance, the RLC appears as purely resistive; the reactive components cancel. So, how is it possible that the output of the RLC can have a higher amplitude than the input sinusoid if the circuit appears purely resistive? simulate this circuit – Schematic created using CircuitLab AI: Suppose a high-quality LC series circuit is driven by a function generator having an internal 50 ohm resistance: simulate this circuit – Schematic created using CircuitLab At resonance (1000 Hz) the capacitive reactance cancels the inductive reactance so that the function generator sees a load very near zero ohms. If the function generator amplitude is 1V, then only R1 limits the current, to 20 mA. This current is in phase with V1. But this 20 mA must flow through the inductor, whose reactance is 159.153 ohms. So a voltage appears across the inductor whose amplitude is 3.183 volts, over three times higher than the function generator's open-circuit voltage. Seems like you're getting some kind of gain, but you should recognize that current and voltage at L1 have a 90-degree phase relationship, so the only real power is dissipated in R1.
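Re-running those numbers in a short script (the inductance is back-calculated from the quoted 159.153 Ω at 1 kHz; everything else is as stated above):

```python
import math

Vs = 1.0                              # generator open-circuit amplitude, V
Rs = 50.0                             # generator internal resistance, ohm
f0 = 1000.0                           # resonant frequency, Hz
L  = 159.153 / (2 * math.pi * f0)     # ~25.33 mH, back-calculated from XL

XL = 2 * math.pi * f0 * L             # inductive reactance (cancelled by XC at f0)
I  = Vs / Rs                          # only Rs limits the current at resonance
VL = I * XL                           # voltage developed across the inductor

print(f"I = {I*1e3:.1f} mA, |V_L| = |V_C| = {VL:.3f} V  (Q = {XL/Rs:.2f})")
```

The ratio \$X_L/R\$ is the loaded Q of the circuit, which is exactly the voltage-magnification factor seen at resonance.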
H: Which electronic circuit simulator is working wrong? I ran a simulation of the same circuit on three simulators and one gave me different values. Which simulator is working right and which is wrong? Falstad: EveryCircuit: PartSim: AI: Try looking at things in detail: -
H: How can I understand amplifier types? I am studying for my final exam. I am struggling a bit with the frequency response formulas. For instance, in this circuit the formulas are generally the same, but I don't know what type of amplifier it is: On the other hand, the answer says that this circuit is a cascode amplifier, and the formulas are completely different. How can I tell whether it is a cascode or not, or whether it is CC-CE, CC-CB, etc.? AI: The point of the CE, CC, CB nomenclature is which pin is common to both input and output. In the first schematic note that the emitter is connected to ground, so it is one terminal of both the input signal and the output load, making it a common emitter amplifier. Any capacitors in the path are assumed to be a short circuit at AC. If, for example, there was an emitter resistor that was bypassed with a capacitor it would still be a common emitter amplifier. It is not a cascode, which you correctly show in the second diagram. Be careful of the distinction between cascode and cascade (letter 'a' in the word). A cascade just means two amplifiers in succession rather than a specific circuit configuration.
H: How to measure voltage with a floating supply? In the example below, the ground of a dual supply is referenced to +Vout. Assuming we have an MCU connected to VCC and +Vout, if we then try to measure the voltage at +Vout with the ADC, we're basically measuring the ground of the MCU. The actual measurement should be the voltage between +Vout and -Vout, which we can assume to have a maximum value of 20V. How do you measure the floating voltage in this situation? AI: Since the voltage being measured is not floating, but only negative with respect to the ADC, these are the 2 simpler options I can think of. If you can trust the 0V reference for measurement (e.g. low current), it is doable with a simple inverting amplifier connected to -Vout with the magnitude of the gain less than one. Below I've assumed you can use the voltage reference in your circuit also as the ADC reference and I've also assumed it is 1.25V (modify the resistors otherwise): the gain is \$-\frac{5k}{80k}=\frac{1.25}{-20}\$. No accuracy requirements were mentioned and we also don't know the load current. You can have considerable error with the circuit above by assuming that +Vout is 0V (with respect to the ADC) at the load. To avoid this, you could use a differential amplifier (which brings other problems, like resistor mismatch). Both circuits have the same simulated output since wire resistance is zero: 1.25V for -20V at -Vout and 0V for 0V at -Vout.
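A quick sanity check of the inverting-amplifier scaling described above (ideal op-amp assumed, resistor values as quoted; this is just the arithmetic, not a full error analysis):

```python
def adc_voltage(v_neg_out, r_in=80e3, r_f=5e3):
    """Ideal inverting amplifier: v_adc = -(r_f / r_in) * v_neg_out,
    so -20 V .. 0 V at -Vout maps to +1.25 V .. 0 V at the ADC."""
    return -(r_f / r_in) * v_neg_out

for v in (0.0, -5.0, -10.0, -20.0):
    print(f"-Vout = {v:6.1f} V  ->  ADC input = {adc_voltage(v):.4f} V")
```

With a ratiometric ADC using the same 1.25 V reference, recovering the measured supply voltage in firmware is then just the inverse scaling: `v_neg_out = -(adc_code / full_scale) * 1.25 * 16`.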
H: Add current limiter to this circuit I made and built the following circuit. It measures the current of an AC square wave of roughly ~19 V. The uController calculates a running average of 5 samples every millisecond. If the average exceeds the tolerance (which you can set with the potentiometer) the power is cut off for 5 seconds and a red LED will light up. It will automatically re-apply power after 5 seconds. The square wave is led through the diodes so that both negative and positive cycles flow through the same MOSFET and through the 2 shunt resistors. The purpose is to split a large circuit which can suffer from physical short circuits into smaller circuits. If one of the circuits suffers from a short, the others remain unaffected (the 5 ms is fast enough). Some loads may draw peak currents of ~1.8 A which are to be ignored. That is what the average calculation is for. This circuit lacks current limiting. It is designed with AC IN being 3 A or lower. I now have the problem that somebody wants to use this circuit with a 7 A power supply. I am not sure if my 2 A diodes will survive 7 A for 5 ms. I can alter the software of the prototype to react instantly if more than 3 A is measured in a single sample. And I will increase the sample rate to whatever maximum frequency is possible. I believe I can reduce the short time to 0.1 ms. But I have yet to dive into the ATtiny's datasheet to see what is achievable. I figured it may be better to limit the current in the first place. I would like to know how I can incorporate a current limiting circuit of ~1.5 A in my circuit. (However, if a short time of 0.1 ms will protect the 2 A diodes and the MOSFET, that would suffice as well.) AI: You can add a simple current overload detector as shown here: With the existing two 1 ohm shunt resistors (net 0.5 ohm) the NPN will start to shut off the MOSFET gate when the current through the shunt becomes approximately 0.7V / 0.5 ohm = 1.4A. In practice it will be somewhat higher than that due to the base bias resistor of the NPN transistor. It is possible to replace the NPN base bias resistor with a pair of resistors (one to GND) acting as a voltage divider to increase the amount of current allowed through the shunt resistors before the shutdown NPN will start to turn on. You can connect the collector connection of the NPN back to the microcontroller to be able to monitor the circuit as a shutdown detection. Update I made a simulation of the current detect circuit using LTSpice. I adjusted the component types to parts rated for higher current and voltage. The circuit below adds the voltage divider to the NPN base to raise the shutdown current level to about 2.1A. I added the C1 capacitor to get rid of some of the switching spikes that make the simulation plots hard to read. Here is a plot of the current being supplied by the AC input square wave that has 20Vp-p levels at a frequency of 250Hz. As you can see the circuit limits the + and - current levels to 2.1A even when the load fixture has tried to raise the load current to a higher level. (click on image to enlarge) This waveform shows the voltage at the current sense node (green) and the voltage of the shutdown detect node. (click on image to enlarge)
H: MOSFET Driver input frequency Can I send a 1 µs pulse (1 MHz) to the VOD3120AB MOSFET driver? AI: Looks like you can get away with it, though there will be some distortion; refer to the datasheet, page 4: You can see that they promise a "pulse width distortion" of 5 to 70 ns, however they also indicate that the propagation delay difference could be up to +/- 100ns. You will get at least 90% of your pulse and no more than 110%, in my understanding.
H: Can transformer secondary side noise couple back to the primary side? As shown in this picture: basically, only the noise of the primary side is coupled to the secondary side. I read this wiki and know about leakage flux on the secondary side. If the secondary side carries noise current, such as switching-supply noise, can that secondary-side noise be coupled back to the primary side by leakage flux? AI: Transformer coupling between primary and secondary windings is mutual and given by a coupling constant (between 0 and 1), which tells you how much of the flux going through one winding will also go through the other winding. Transformers are built such as to maximize the coupling constant at the working frequency, e.g. 10 kHz to a few MHz for SMPS transformers. Depending on the core material, the coupling constant can change dramatically with frequency, because: (a) the real permeability of the core material can deteriorate at high frequencies, and (b) the imaginary permeability (loss) can rise at high frequencies, which is exploited in components such as common-mode chokes or ferrite beads. Therefore, it is possible and even likely that high frequency switching noise - far beyond the working frequency - is coupled inefficiently between the windings by design. But this depends on the core material. A 50 Hz iron transformer will have quite strong attenuation at very high frequencies, while a small SMPS transformer might show rather little core loss even at 100 MHz. In general, there will be some noise feedthrough, if not magnetically through the core, then at least capacitively through the windings. So you still need proper conducted emission filtering, but you can save on magnetic filtering components if you have the right transformer.
H: Stopping reverse flow of current The above circuit has two sections. Section A is a simple transistor as a switch circuit using BC547 NPN transistor which triggers a LED1 as well as section B. Section B is a NOT gate circuit using BC557 PNP transistor which triggers another LED2. Target: when push button is on LED1 should glow while if push button is off LED2 should glow. Issue: when push button is off LED2 is glowing as expected but LED1 is also glowing with lesser brightness. I understand the reason that the current is flowing through the pnp transistor's B-E and completing the circuit. But this is not expected in the circuit. Can anyone suggest what is the solution to this? AI: I don't see any problems. You likely chose the resistor values incorrectly. Always show all details of your work. We can't tell you what you did wrong if you don't tell us what you did! simulate this circuit – Schematic created using CircuitLab To see this in action, run the simulation, highlight SW1 by clicking on it, then press Space to toggle it. Observe the current meter readings. The LEDs will not be visibly glowing much when the current is <20uA. The only disadvantage of this circuit is that it wastes energy by shunting the LED, i.e. when D1 is on, it uses more current than when D2 is on. I understand the reason that the current is flowing through the pnp transistor's B-E That's not the reason. This current can not light up D1: it flows the wrong way. It also can't turn on Q1. The only way what you claim would be true is if you had a circuit like below: simulate this circuit Here, you can clearly see that D1 gets driven by the B-E diode of Q2. But this doesn't happen in the circuit you've shown! The circuit below uses a long-tailed pair and works quite well. The main disadvantage is that the input is not differential, and thus there's residual imbalance. About 0.5mA less flows through D2 than D1. There is a tradeoff between dissipation in R1,R4 and the balance: the lower the values of R1,R4, the better the balance of the LED currents. As it stands, the circuit would be acceptable for many uses as an indicator. simulate this circuit
H: Dimmer that takes LED from 0 to 50 W over a defined time-period I have installed a 50 W LED lamp next to my bed and plugged it into a timer, so it works as a light supplement to my alarm clock. I love the intensity of the light, but I would like to have it fade from darkness to full brightness over 30-60 seconds to let my eyes adjust to the light slowly. Is there a product I can buy that lets me do this? Or could someone explain to me in simple terms how to build it? P.S. I live in Denmark with a 230 V wall outlet. AI: First you should find out whether this lamp has any brightness control or is dimmable. If yes, then you should get or build such a control device. If not, you should modify its internal power supply circuit to add a soft-start feature to it.
H: D flip-flop not working in LTspice I am trying to simulate a D flip-flop in LTspice and it does not work. I looked at the other posts on this subject and I didn't get what the problem was. Here is the simulation. It is as if there is no clock; the IC is not taking the clock into account. AI: First of all, the preset and clear pins are not the power supply. They are active high (they should be held at 0V for the FF to work normally) and the logic-high voltage should be set via the SpiceLine (set it to e.g. "Vhigh 5 Trise 1n" by right-clicking the component):
H: How to calculate the receive noise figure of an antenna? I am designing a phased array antenna in S-Band frequency running on RHCP for a project. One of my requirements is to have a receive noise figure better than 2 dB. I cannot find any resources on the receive noise figure, and was wondering if anyone is aware of any resources. AI: How to calculate the receive noise figure of an antenna? Antennas and passive components don't have a noise figure (see last sentence of wiki quote below)... Noise figure is: - the difference in decibels (dB) between the noise output of the actual receiver to the noise output of an “ideal” receiver with the same overall gain and bandwidth when the receivers are connected to matched sources at the standard noise temperature T0 (usually 290 K). The noise power from a simple load is equal to kTB, where k is Boltzmann's constant, T is the absolute temperature of the load (for example a resistor), and B is the measurement bandwidth. So, if your antenna (a passive device) has an output resistance (ignoring the complex part of the impedance) of 50 Ω, then the noise it produces is equivalent to a 50 Ω resistor. See also Antenna Temperature for more information. You then need to ensure that your receiver doesn't produce a NF of greater than 2 dB. This means that your receiver doesn't degrade the signal to noise ratio from input to output by more than 2 dB.
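If it helps to put numbers on the receive chain, here is a small Python sketch (my own illustration - the bandwidth and the amplifier line-up are assumptions, not values from the question). It computes the kT0B noise floor and the cascaded receiver noise figure via the Friis formula, which is what the ≤2 dB requirement really constrains:

```python
import math

k  = 1.380649e-23   # Boltzmann constant, J/K
T0 = 290.0          # standard noise temperature, K
B  = 10e6           # example measurement bandwidth, Hz (assumption)

noise_dbm = 10 * math.log10(k * T0 * B / 1e-3)
print(f"kT0B floor = {noise_dbm:.1f} dBm in {B/1e6:.0f} MHz")   # ~ -104 dBm

def friis_nf(stages):
    """stages: list of (noise_figure_dB, gain_dB); returns overall NF in dB."""
    f_total, g_running = 0.0, 1.0
    for i, (nf_db, g_db) in enumerate(stages):
        f = 10 ** (nf_db / 10)
        f_total += (f - 1) / g_running if i else f
        g_running *= 10 ** (g_db / 10)
    return 10 * math.log10(f_total)

# Hypothetical line-up: LNA (0.9 dB NF, 18 dB gain) followed by a mixer/SDR (8 dB NF)
print(f"cascade NF ~ {friis_nf([(0.9, 18), (8, 0)]):.2f} dB")   # ~1.2 dB, under 2 dB
```

The first low-noise amplifier dominates the cascade when its gain is reasonably high, so meeting a 2 dB receive NF is mostly about what sits right behind the antenna port.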
H: Is the UPDI pin on AVR DA/AVR Tiny0/... 5V tolerant when VCC@3V3? I have a project incorporating an AVR128DA28 which is used with 3V3 peripherals and should be programmed in circuit. For this I built a jtag2updi (GitHub) adapter using an Arduino Nano. Since VCC_target = 3V3 and the "programmer" (Arduino) works with 5V I need a level shifter; a simple resistor divider (after the 4K7) does not work (I tested it). But with setting V_target = 3V3 and UPDI directly connected (through the 4K7) it does indeed work. I'm wondering if I am damaging the microcontroller or my circuit (components) this way, since the supply voltage is 3V3 but the UPDI pin is driven from the 5V IC with 5V high signals etc.? I could not find any documentation from Atmel regarding the allowed max. voltage ratings for the UPDI pin. Any ideas where to find it? AI: I don't have an AVR128DA28 to hand, but I have the AVR64DAxx datasheet open: s.37.2 Absolute Maximum Ratings gives all pins a positive limit relative to VDD, i.e. there's an ESD clamp diode to VDD. Testing an AVR64DA64, I indeed measure a diode junction from UPDI to VDD (VDD = 3.3V, it's currently active). HV programming I think doesn't apply to AVR-Dx, anyway. Let's see... ah here we go, tinyAVR do: https://microchipdeveloper.com/mplabx:avr-updi-info with a pretty obvious explanation -- they're short on pins, so need the extra function offered by HVP. The rest with dedicated UPDI pins, it seems, don't.
H: How does a switching regulator behave if its input is equal to its output? I want a switching regulator (e.g. LM2596) set to a 5V output voltage to work with a 5V - 36V input voltage. I know regulators have a minimum dropout voltage. So how does it behave in such a case (Vin = 5V when Vout = 5V)? What is the best practice for this application? AI: It depends on the specific regulator. Some buck regulators have a 100% duty cycle mode. In that case, the top FET turns on, and it's just like having a resistor between Vin and Vout with a value equal to the RDSon of the FET. Some buck regulators have a maximum duty cycle (often so they can charge the bootstrap cap). In that case, the part would not be able to regulate or even keep the output close to 5V. Using a 4-switch buck-boost regulator would allow you to regulate the output voltage when Vin is above and below Vout, so that could be a solution for your application.
H: How does current flow when you short base and collector in a current mirror? I am studying analog electronics and I don't understand how current flows in a current mirror when you short base and collector together. Electrons are being injected from the emitter into the base when we forward bias the base-emitter junction, but there isn't any reverse bias voltage nor forward bias voltage across the base-collector junction; only base and collector are connected through a wired connection. So, how does current reach the collector junction and enter the collector in the current mirror? Also, there isn't any reverse bias in the base-collector junction so there will be a lot of recombination when electrons enter from the emitter into the base. Won't that make the current mirror less efficient? AI: Electrons are being injected from the emitter into the base Just to be clear, this implies we are talking about NPN transistors here. there isn't any reverse bias voltage nor forward bias voltage between base-collector junction The electric fields in the base and collector regions are not zero. Electrons that have just crossed the emitter-base junction from the emitter side are inclined to move toward the (metal) electrodes contacting the semiconductor material of the base and the collector, both by the more positive potential of those electrodes relative to the potential of the silicon just past the emitter-base junction, and by the higher (net) negative charge density just past the emitter-base junction relative to the metal-semiconductor junctions of the electrodes. The electric field causes drift current, and the charge density gradient causes diffusion current. In this particular case, the drift and diffusion currents are working in concert, rather than in opposition. only base and collector are connected through a wired connection. Well, the base and collector in a diode connected transistor are definitely connected via the wired connection. But they are also connected via the base-collector junction. So, how does current reach the collector junction and enter the collector in the current mirror? Electrons in an NPN transistor, or minority carriers generally, easily pass across the base-collector junction. A significant difference between the base-collector junction and a reverse biased or zero biased normal diode junction is that the base (when properly biased) has many minority carriers, whereas a reverse/zero biased diode has (compared to the transistor) few minority carriers. If (when) the P region of a reverse biased junction does have lots of minority carriers, these pass through the junction easily. (A case where a diode is reverse biased, but does have "many" minority carriers, occurs when the bias of a diode is suddenly reversed and there is reverse recovery current. In that case, carriers that crossed the junction while the diode was forward biased but have not recombined easily pass back across the junction when the voltage is reversed.) Also, there isn't any reverse bias in the base-collector junction so there will be a lot of recombination when electrons enter from the emitter into base. BJTs are constructed with two properties which favor minority carriers in the base continuing on a journey to the collector. The first factor is that the base is relatively narrow, so that the minority carriers do not have far to travel to reach the collector. The second is that the base is more lightly doped than the emitter.
Because of the unequal doping, the density of minority carriers in the base (which have come across the emitter-base junction) is higher than the density of majority carriers in the base (which come from the base's metal-semiconductor contact). This lop-sided ratio means that many of the minority carriers will not recombine in the base, although some will.
H: Best low-cost microcontroller (that can be programmed using Python) for computer vision? I have been researching for many days and haven't found a microcontroller that fits my needs. I want a microcontroller that: Is cheap (I want to make this into a product that can be around $20-40) Can be programmed using Python because I have made a desktop app that uses Python that I want to run on a microcontroller A camera can be used with the microcontroller Is good at image processing and can handle live video streaming because that is used in my Python program I am asking for a lot and there is probably not a microcontroller out there that meets these criteria but anything close to this would be significant. AI: Generally speaking, computer vision is too computationally intensive to run on a microcontroller. The vast majority probably won't even have enough ram to store 1 frame. However, there are special cameras that have a processor built in, specifically for computer vision. Something like this will probably fit the bill: OpenMV Cam H7 And another one: Pixy2
H: 14VAC to 11VAC (~200mA) Using a small transformer I have an application that requires a transformer with 120VAC 60Hz on the primary and 7VAC/11VAC (200mA) on the secondary. Unfortunately, while this part is available in some countries I have not been able to get one shipped to the US (so far). This eBay listing shows a transformer which converts 115VAC to 14VAC and 7VAC (250mA), which is very close (and the current rating is appropriate), so in theory I just need an additional step-down to get from 14 to 11VAC. I don't know the input tolerance of the board that uses the 11VAC, otherwise I would try this directly, since I assume it's just regulated to some DC value. Unfortunately, the step-down turn ratio (~1.27:1) is not so easy to find, and the closest I've located so far is this Digi-Key component, but it's intended for DC-DC SMPS. I'm guessing that this would not have the right properties to work for my AC application, so I was hoping someone would please comment to confirm, and if there is a good alternative for what I'm trying to do. Edit: Sorry I wasn't clear before, but this is a one-off repair, so I was looking for a close replacement for the part I couldn't source. Both the 11VAC and 7VAC outputs are needed on the secondary, or an equivalent. AI: What you want is two transformers connected in a buck or boost circuit on the secondary. Take a 14V output and subtract the 3V output from another transformer, by connecting the windings anti-phase in series. That's a buck circuit. Onto that output add another anti-phase 3V winding in series and you get 7V. If you got a 14V output, then two small Nokia cellphone wall-wart transformers like the ACP-7U, producing about 3.7V rectified, would do it I think. You'd step down from 14V twice. These transformers can be tweaked for a lower output voltage by unwinding a couple of turns from the accessible secondary if needed, and the rectifier can be bypassed as well. ACP-7Us are handy little things; they can put out 350mA and nicely power 3.3V MCU projects through an LDO (an aside).
H: Is there a name for this vintage 3-pin connector configuration? I am trying to find a suitable replacement for this connector for a vintage radio, but I'm not sure how to search for it or if they even sell them anymore. Simply "3 pin male connector" isn't helping very much, and the part has no markings. I've seen this before on an old stereo amplifier I had as well, with one vertical pin and two horizontal. Is there a name for this particular pin layout or type of connector? AI: Jones plug, 303 format, as made by Cinch (which acquired Jones in the 1940s): https://belfuse.com/resources/datasheets/cinchconnectivitysolutions/industrialcinch/ds-ccs-ind-jones-plugs-300-series.pdf Jones plugs are designed for use as a normal power plug: cheap, medium current, frequent insertion/removal.
H: Can a circuit without memory give output depending on the order in which input pins were activated, for non-commutative operations? I am going through making a small NAND-based computer. I have two input pins: zy (zero the 16-bit array y) and ny (negate the 16-bit array y bitwise). Implementing each one separately or in combination is no problem, but there is a difference in output that depends on the order of activation - because these operations are not commutative. My ALU is made without flip-flops etc., i.e. no memory elements, so can it still distinguish the order of inputs without them, using only 1 pin per operation? I feel that it cannot, because once two pins are active the circuit can't distinguish in which order they were activated. Is that so? How is that problem addressed in real circuits? Do you need (log n)2 pins for describing all permutations of non-commutative operations in your circuit? AI: A purely combinatorial circuit, i.e. one without ANY feedback, when it settles after an input change, will have an output that depends only on the inputs, and not on the history of the inputs. However, if even a single feedback line is added to a combinatorial circuit, i.e. some output is fed back into an input, then the circuit will (or at least can) have memory. Explicit memory elements, such as flip-flops, are not needed. All that is required for the outputs to depend upon the history of inputs is one or more feedback lines.
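To make the feedback point concrete, here is a tiny Python sketch (my own illustration) of two cross-coupled NAND gates - the classic SR latch. No flip-flop primitive is used, only NANDs plus one feedback connection, yet the output for the input combination (1, 1) depends on which input was pulled low last:

```python
def nand(a, b):
    return 0 if (a and b) else 1

def sr_latch(n_set, n_reset, q_prev, settle_steps=4):
    """Two cross-coupled NAND gates with active-low set/reset.
    Iterates the feedback loop until it settles and returns Q."""
    q, qb = q_prev, nand(n_reset, q_prev)
    for _ in range(settle_steps):
        q, qb = nand(n_set, qb), nand(n_reset, q)
    return q

q = 1
for n_set, n_reset in [(0, 1), (1, 1), (1, 0), (1, 1)]:
    q = sr_latch(n_set, n_reset, q)
    print(f"nSET={n_set} nRESET={n_reset} -> Q={q}")
# The two (1, 1) rows print different Q values: same inputs, different history.
```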
H: What is the current through the resistor given in the figure? The circuit has a short circuit across the resistor. What will be the current flow through the 30 ohm resistor? AI: If you assume the 3A current flows through the short, then the voltage across the 30 ohm resistor is 0V, and since the current source is ideal it can have any voltage across it (even 0V), hence the solution is valid. On the other hand, if you assume the 3A current flows through the 30 ohm resistor then you have (3 x 30 = 90V) developed across the current source. But then again, you would have 90V developed across an ideal short (which violates KVL), and hence the only valid solution is that the entire 3A current flows through the short.
H: How does selecting a higher permittivity substrate in a patch antenna reduce its size? By selecting a higher permittivity substrate do we reduce the height of the antenna, or do we reduce the length of the patch? We cannot reduce the length of the antenna as it is dependent on the wavelength. AI: Here's an easier explanation: We know the 2-plate capacitor formula: $$ C=\epsilon_0\ \epsilon_r \ \frac{A}{d} $$ Where \$\epsilon_r\$ is the relative permittivity of the substrate, \$A\$ is the area of the plates and \$d\$ is the distance between them, or the thickness of the substrate. So, using a high-permittivity material will make \$C\$ greater. We know the LC network resonance frequency: $$ f_r=\frac{1}{2\pi \sqrt{LC}} $$ So, greater \$C\$ makes the resonance frequency lower. To bring the resonance frequency back to its original (design) value, \$C\$ should be decreased. A reduced plate area (\$A\$) will make \$C\$ lower (first formula), and therefore the patch smaller. But there are some drawbacks, of course: Since the effective electric field between the plates is inversely proportional to the permittivity, using a high permittivity material will reduce the effective electric field. This means less radiation and therefore less gain. Q will increase and this makes the bandwidth narrower -- this can be an advantage or a disadvantage, depending on the application (and perspective).
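As a rough numerical illustration of the size reduction (my own sketch using the common first-order half-wavelength estimate; it ignores the fringing-field length correction and uses \$\epsilon_r\$ in place of the effective permittivity):

```python
import math

C0 = 299_792_458.0   # speed of light, m/s

def approx_patch_length(f_res, eps_r):
    """First-order patch length: half a wavelength in the dielectric."""
    return C0 / (2.0 * f_res * math.sqrt(eps_r))

f_res = 2.4e9   # example frequency (an assumption, roughly S-band)
for eps_r in (1.0, 2.2, 4.4, 10.2):
    L_mm = approx_patch_length(f_res, eps_r) * 1e3
    print(f"eps_r = {eps_r:5.1f}  ->  patch length ~ {L_mm:5.1f} mm")
```

The resonant length still tracks the wavelength, but it is the wavelength inside the dielectric, which shrinks by \$1/\sqrt{\epsilon_r}\$ - that is the whole size-reduction mechanism described above.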
H: High side n-MOS driver without charge pump I couldn't find details of this circuit configuration in an internet search. Most online results are about P-MOSFET in high side path or N-channel MOSFET with charge-pump/IC based driver. simulate this circuit – Schematic created using CircuitLab I understand the high level working of above circuit. It is part of an inrush limiter circuit and would hold the current through M1 constant during C1 charging. R6 provides the initial path to build up output voltage. Due to voltage drop across R2, Q1 Vbe crosses the threshold and starts conducting. This increases the Vgs of M1 and it starts conducting, bypassing R6. Current through R2 is maintained constant by the feedback effect on Vgs Why is it not very popular? Any drawbacks for this circuit? What is the purpose of R5? I could find a similar circuit in another SE answer- but it was with P-channel MOS. AI: Any drawbacks for this circuit? The circuit will limit the current to around 7V/3R = 2.2A during start up (i.e. charging the output cap). Once the output capacitor charges enough the flowing current will be reduced. However... The output voltage of the circuit will be load-dependent: The lower the load resistance (i.e. higher current demand) the lower the output voltage. The load current plus ripple current will continuously flow through R2 which is a needed element to drop some voltage so that Vgs of the NMOS becomes positive. This continuous flow will bring continuous dissipation. An NMOS is normally used to replace diodes or resistances to reduce the total losses but that R2 on its own zeroes this benefit. This gets worse if the load current demand is high because the network will continuously try to limit the peak currents to 2.2A. With the help of R2 the Vgs of the NMOS will be positive, because the gate sees full rectified voltage via R5 (100k). But the developed Vgs will be relatively low (in your case less than VZ + VBE = 7V). Driving a power NMOS with very low Vgs will not bring the full low Rds-on advantage due to the lower Vgs. To make Vgs higher R2 should be higher. The higher R2 the higher the loss. And I'll not mention the continuous loss across R6. Apart from everything I've talked about above, there's one obvious thing: Without all the circuitry and complexity, the R2 on its own can work as an inrush limiter here - and actually, does most of the heavy lifting. Using a relay which shorts that R2 after a couple of milliseconds after energising the system would be even less expensive and less complex, and easier to implement.
H: What's the function of this schematic? - Waveshare E-Paper driver I am currently designing my own PCB which combines a microcontroller board and a Waveshare e-Paper driver board (Schematics). I wonder what the functionality of this part here is. Can anyone help me understand its purpose? AI: The circuit turns on VCC and 3V3 (on a part of the schematic that you have not posted, regulator U2 connected to the VCC_EN signal) when EPD_RST is high. Simple as that.
H: Wouldn't charging a lithium battery to 80% only defeat the purpose of putting the battery into longer use in the first place? I see most smart devices (laptops/tablets/phones) now have the option to optimize your battery by charging it to 80% only. But wouldn't this reduce your capacity, i.e. your brand new battery now only has 80% of its rated capacity, which means more frequent recharging and all that? Even if you do charge it to 100% for the occasional road trips or whatever reason that you need the battery to last longer, I still don't see the real benefit of capping your battery charge at 80%. This basically comes down to: I'm trying to make my battery last longer, but then I have to lose 20% of its capacity first, so what the heck, I'd rather just charge it to full every time and let the battery deteriorate annually. AI: When you run, you cover less distance by sprinting for a short time compared to running at marathon pace. Same goes for batteries. The longevity of Li-ion batteries is greatly affected by a) how deep you discharge them and b) how full you charge them. Not in a linear way. The quantitative benefits of shallower discharging are out of the scope of this question (although if that is of interest, you can check out this other question [disclaimer: I wrote the accepted answer]), but they are universally recognized, and it seems like the benefits of under-charging your battery are actually at least as important in the long run: Using a reputable battery management application I use on my phone, I have observed how "cycles wear" varies vs "charging setpoint" and calculated the relative energy gain at the end of the battery life compared to 100% charging and discharging*: As you can see, charging to 80% instead of 100% multiplies by 4 the amount of energy the battery will have transferred to you over its life - the only tradeoff being to compromise on how much energy you can get out of a full charge (big slices, small cake VS small slices, large cake). This also means you can use your battery for 4 times longer before it gets to the end of its rated life. Note the use of "rated life", because in practice batteries lose capacity as they age and get used, to the point that they become unusable. If the above plot were flat, you'd be right: charging to 80% would be pointless, because we would charge the battery more often (less convenient) and it would not live longer. But it's not flat, not even close. So, just like in a race, you have a choice: either run smart, or run fast. In practice "smart" would actually mean here that at times, you would have to charge to 100% to survive an intense day, but that in general you should charge to a lower level. It is generally recognized that 80% is a good compromise between convenience and life span. I think I remember that in satellites, the battery is generally designed to charge/discharge between a third and two thirds of its rated capacity to maximize its lifetime to 15+ years. My general advice for maximizing battery life is to only use your battery as much as you need, i.e. charge it as often as possible, and as little as possible. If you need a rule of thumb, stay within 30%-80%. *: This particular calculation assumes deep discharging, and although the benefits may be altered by (safer) shallower discharging, the qualitative takeaway is the same. Note, also, that obviously this chart is not at all accurate below 50% (my app does not even give estimates below that) and probably not that accurate close to 50%.
I believe it is reasonably accurate above 75%, though. For an example of scientific support, see Battery University (e.g. Table 3).
H: What gauge wire is acceptable from 15 amp power supply with low current demands? I am looking to directly power a Raspberry Pi for my 3D printer from its existing power supply. The terminals on the power supply appear to be ~15 amps max output at 24V. I already have a buck converter wired up to the USB on the Pi to bring the voltage down to ~5V that the Pi wants. What I don't know how to handle is the proper size wires to use in this situation. I have a lot of 20 gauge wire that is generally used on 3D printers, but I am worried about wiring the RPi directly from the power supply with this wire. I also have 14 gauge wire, but it is too large to fit into the terminals on my buck converter, and even if it did, the buck converters are 5 amp max so that would just move the "problem" to the buck converter. Any help in how to properly wire this up would be greatly appreciated. I would prefer crimp connections where possible (my solder work is sub par at best), and would classify myself as very much "hobbyist" and basically assume I know nothing about electrical, so as specific an answer as possible would really help. To be clear, from my understanding, because the 14 gauge wire can handle the max output of the power supply, it should be used. But I also know that the max actual draw of the RPi is far lower than that, and is within the acceptable range of say, a 20 gauge wire. My concern is running 20 gauge for this circuit back to the power supply could have the potential of a fire in a short, but I have no idea how I'm supposed to remedy that. As running 14 gauge wire (even if I could) would just offload the short risk to the buck converter or maybe even the Pi. AI: I linked to other questions that should cover the basics of your question (in comments, but I'll list them below as well). Per your additional information, regarding the concern of a fire resulting from a short, you are correct in addressing this concern. If you use a wire gauge that is too small for the power supply, but adequate for the load, there is potential that during a short condition that wire would become quite hot and potentially start a fire. The simple answer is to use a fuse or circuit breaker. This is exactly what is done in home wiring, for example. The circuit breakers are limiting current for each circuit based on its intended usage and how it has been installed. For example, in the US, 15 or 20 ampere breakers are common for circuits that use 14 and 12 gauge wire, respectively. The upstream "power supply" (the mains grid) can easily supply more current which would happily start a fire during a short were it not for the breakers or fuses. Just remember that wherever you install a fuse, the wire should be sized according to the max current that the power supply can deliver on the upstream or input side. Additional resources: How do I tell what gauge wire I need? Choosing power supply, how to get the voltage and current ratings? Fuse position in relation to load and switch
H: What is the purpose of the middle wire on the ribbon? I am new to circuits and am trying to repair a device. What is the purpose of the middle wire on the ribbon? It is the one that circles around 4 of the buttons on the top of the device. I am trying to replace the button panel with a breadboard and buttons - what would the wiring for this be like? AI: Just follow the connections: "Green" is common, in that it goes to each switch. Pressing any button connects "green" to another color.
H: Synchronous Buck Converter - PWM modulation Let's say that our Vin = 9V, Vout = 5V. Hence our duty cycle D = 5/9. Let's say that our switching frequency is 100kHz. Consider T_on as the time the HS FET is on. So our T_on in this case is: T_on = (1/100kHz)*(5/9) = 5.5us If I_Load increases, we will need more time that the HS FET is kept on. That would mean that T_on increases, so this 5.5us will increase. My question: Does the equation I used to calculate 5.5us only apply for no-load conditions? I assume I need to have an extra scaling factor which represents the PWM reacting to increased load and hence increasing T_on? AI: My question : Does the equation I used to calculate 5.5us only apply for no load conditions? When you have a synchronous buck regulator (as per your schematic), the duty cycle multiplied by the input voltage equals the output voltage irrespective of loading conditions (within reason of course). In other words, the volt drop in both MOSFETs will eventually eat into the above equality but that will be on very heavy loads. For light loads and medium loads the equality holds. For normal heavy loads it nearly holds but on really heavy loads you see several percent discrepancies. Maybe you are thinking of a regular buck converter where the bottom MOSFET is replaced by a diode and you are not operating in continuous conduction mode? A synchronous buck converter always operates as if in CCM (forced continuous conduction); it can never operate in DCM (discontinuous mode). I assume I need to have an extra scaling factor which represents the PWM reacting to increased load and hence increasing Ton? No, not when you have a push-pull MOSFET output circuit aka a synchronous buck converter.
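A small sketch of the ideal relationship, plus the usual first-order correction for resistive drops (my own illustration; the 50 mΩ total loss resistance is an assumption, not a value from the question):

```python
def buck_on_time(vin, vout, f_sw):
    """Ideal synchronous buck in CCM: D = Vout / Vin, independent of load."""
    duty = vout / vin
    return duty / f_sw, duty

def duty_with_losses(vin, vout, i_load, r_loss):
    """First-order correction: series resistive drops (FET Rds_on + inductor
    DCR, lumped into r_loss) make the duty rise slightly with load current."""
    return (vout + i_load * r_loss) / vin

t_on, d = buck_on_time(9.0, 5.0, 100e3)
print(f"ideal: D = {d:.3f}, T_on = {t_on * 1e6:.2f} us")      # ~5.56 us
print(f"at 3 A with 50 mohm of losses: D = {duty_with_losses(9.0, 5.0, 3.0, 0.05):.3f}")
```

So the on-time is essentially fixed by the voltage ratio; load current only nudges it through these parasitic drops, which is the "several percent on really heavy loads" effect described above.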
H: Installing a relay and its ratings I'm planning to install this DC-input relay, the G5RL-1A-E-HR-5DC, on the AC line of an SMPS. The SMPS will draw a nominal AC line current of less than 1A, but its inrush is 45A at cold start. The manual of the relay is given here. The model I plan to use is the high-inrush type; the HR in its name means high-inrush. I also found the following table from the manufacturer site here: But given all this info I have the following questions: I cannot find the coil resistance for DC. What is the DC coil resistance of this relay (the G5RL-1A-E-HR-5DC, so the 5VDC one)? It says high inrush but gives no value. What is the inrush current this relay can handle? As a flyback diode, can a 1N4148 handle the 45A inrush when used with this relay? From what plot in the diode datasheet can we conclude that? AI: That datasheet is a bit dubious, maybe it's been superseded. For example, I don't see the TV rating for a DC coil. Here is the current datasheet from the manufacturer. You don't mention the mains voltage, but if I have to guess, I'd guess it could be more than the 120VAC the TV ratings apply to, and the life of 25,000 operations may not apply. There is a CSA/UL motor load rating of 60 LRA/10 FLA, 250 VAC, 40°C, so it should be fine for the 45A inrush (since 60A is more than 45A), but the life is given as only 6,000 operations. So, I suggest caution if your mains can exceed 120VAC and you need much more than 6,000 operations life. Coil ratings are given in a table, so 80mA under nominal conditions for the 5V model. You should ensure it gets as close as possible to the full 5V for long life and reliable operation at high temperature: The flyback diode across the coil just sees a brief pulse of the coil current (80mA in this case, rapidly decaying to nothing), so almost any silicon or Schottky diode will work. For example 1N4005, 1N4148, 1N5819 etc. Since even the 1N4148 can easily handle the full coil current continuously and also easily handle the 5V reverse voltage (typically a 75V or more rating), it will work fine. As far as knowing it will work fine, the voltage rating is straightforward. The current rating is given as 300mA at 25°C absolute maximum, so we have to derate that for proper design, but 80mA is around 25% of the absolute maximum at 25°C and it's only briefly pulsed, so it won't get a chance to heat up much and it's fine. About 1V is near the top of the scattering limit of forward voltage at 80mA in Fig 2, which means 80mW peak dissipation, which would heat it 28°C (assuming 4mm lead length in the RthJA spec). So at an ambient of 70°C the junction would only be ~100°C, which is acceptable for reliability and much less than the 175°C absolute maximum. And again, we don't have 80mA continuous, just a brief pulse when the relay drops out, so we're really good.
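Re-running that diode arithmetic in a couple of lines (same figures as quoted above; remember the real dissipation is only a brief pulse at drop-out, so the steady-state number below is deliberately pessimistic):

```python
i_coil  = 0.080    # relay coil current for the 5 V model, A
v_f     = 1.0      # 1N4148 forward voltage near 80 mA (upper scatter limit), V
rth_ja  = 350.0    # 1N4148 junction-to-ambient, K/W (short-lead figure)
t_amb   = 70.0     # assumed worst-case ambient, deg C

p_peak  = i_coil * v_f             # ~80 mW right after the coil is de-energised
t_j     = t_amb + p_peak * rth_ja  # junction temp IF that power were continuous
print(f"peak dissipation {p_peak * 1e3:.0f} mW, "
      f"pessimistic junction ~{t_j:.0f} C (abs max 175 C)")
```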
H: The reason why Thevenin voltage source is in SERIES with Thevenin resistor and why Norton current source is in PARALLEL with Norton resistor As you know, in Thevenin's theorem the Thevenin voltage source is in series with the Thevenin resistor and in Norton's theorem the Norton current source is in parallel with the Norton resistor. COULDN'T the Thevenin voltage source be in PARALLEL with the Thevenin resistor and the Norton current source be in SERIES with the Norton resistor? AI: If the Thevenin source were in parallel with the resistor, the output voltage would be constant; similarly, if the Norton source were in series with the resistor, the output current would be constant. A two-terminal linear network containing only resistances, current sources, and/or voltage sources has a single open-circuit voltage and a single short-circuit current, and the two are linearly related. Putting a voltage source in parallel with a resistance would give you a constant output voltage regardless of current and would also imply infinite short-circuit current when the output is shorted. Putting a current source in series with a resistance would give you a constant output current and would imply an infinite open-circuit voltage.
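To make the equivalence concrete, here is a small sketch (with hypothetical example values, not taken from the answer) showing that the standard series-Thevenin and parallel-Norton forms produce the same terminal voltage for every load, which is exactly what a parallel voltage source or a series current source could not do:

```python
# Minimal sketch with assumed example values: a Thevenin model and its Norton
# conversion share one Voc/Isc pair and agree into any load resistance.

V_TH = 10.0   # V, assumed Thevenin (open-circuit) voltage
R_TH = 5.0    # ohm, assumed Thevenin/Norton resistance
I_N  = V_TH / R_TH   # Norton (short-circuit) current = 2 A

def v_out_thevenin(r_load: float) -> float:
    """Terminal voltage of the series Thevenin model (voltage divider)."""
    return V_TH * r_load / (R_TH + r_load)

def v_out_norton(r_load: float) -> float:
    """Same terminal voltage from the parallel Norton model (current divider)."""
    i_load = I_N * R_TH / (R_TH + r_load)
    return i_load * r_load

for r in (1.0, 5.0, 50.0):
    assert abs(v_out_thevenin(r) - v_out_norton(r)) < 1e-9  # identical for every load
print("Thevenin and Norton forms agree for all loads")
```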
H: How could you determine the point in time a Radar antenna is directly facing a stationary RF receiver station? I am currently doing experiments involving Software Defined Radio and Radar systems. Since I am new in this field and am currently trying to refresh my physics/math skills, I have encountered some problems along the way. The imaginary setup of my experiment is the following: A radar transmitter is placed at one location. An RF receiver (SDR) is stationed at another location. The radar is transmitting, while the only job of the RF receiver is to receive the transmitted signals of the radar. At the same time, the radar is rotating, and a difference in measured signal strength should be observed every time the radar antenna is not facing the RF receiver. My main concern is to measure the moments in time at which the radar antenna is directly facing the RF receiver. However, I am rather confused about how to measure signal strength. Since I don't have access to a real radar system, I tried to implement one via Software-Defined Radio. The signal is generated by sending pulsed waves, and to simulate a rotation of the antenna the amplitude increases for each pulse. There is one pulse with the peak amplitude in the middle; afterwards the amplitude of each pulse declines. I am not sure if this is even remotely correct. My question is whether it is possible to measure the time the radar antenna is directly facing the RF receiver and whether there are methods that can help me there. AI: Finding the moment at which the signal strength reaches its peak is equivalent to finding local maxima of a mathematical function. If you continually calculate the first derivative of the signal strength with respect to time, then any time the derivative equals zero, that's probably going to be either a maximum or a minimum. In your application, there's probably only going to be one of each, so when the derivative is zero you have found either the maximum or the minimum. To decide which, you probably can just look at whether the signal strength is greater than or less than some threshold. But if you want to get fancy and eliminate the need to choose a threshold, then you could also compute the second derivative: if the second derivative is negative when the first derivative is zero, then you've found the maximum. See also: https://en.wikipedia.org/wiki/Fermat%27s_theorem_(stationary_points) However, I am rather confused about how to measure signal strength. Sorry, I can't help with that bit. I have not written code to interface with an SDR module.
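Here is a minimal sketch of that idea on synthetic data (the sample rate, beam shape, and all names are assumptions purely for illustration, not something from the answer): smooth the received-strength estimate, take the discrete first derivative, and mark the sample where it crosses from positive to negative as the boresight instant:

```python
# Minimal sketch: locating the boresight moment as the local maximum of the
# received signal strength, using a synthetic antenna sweep.
import numpy as np

fs = 10_000.0                              # Hz, assumed sample rate of the strength estimate
t = np.arange(0, 1.0, 1.0 / fs)            # 1 s of data
# Synthetic sweep: Gaussian beam pattern centred at t = 0.4 s plus noise
rssi = np.exp(-((t - 0.4) / 0.05) ** 2) + 0.02 * np.random.randn(t.size)

# Smooth first so the derivative is not dominated by noise
window = 101
smoothed = np.convolve(rssi, np.ones(window) / window, mode="same")

# First derivative; a sign change from + to - marks a local maximum
d = np.diff(smoothed)
peaks = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
t_boresight = t[peaks[np.argmax(smoothed[peaks])]]
print(f"Estimated boresight crossing at t = {t_boresight:.3f} s")
```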
H: Thermocouple circuit not outputting expected values My application requires 11 thermocouples to be sampled and logged. Below is the circuit diagram, containing a differential and common-mode filter, a non-inverting amplifier (gain 111) and a second-order LPF. The Vbuff voltage is a buffer voltage to ensure the output of the non-inverting amplifier is always positive, as the thermocouple needs to be able to read from -40C to 1000C. This Vbuff is shared between all 11 thermocouple circuits. Circuit diagram U1 is a LMV844QMA/NOPB U2 is a TSX7191IYLT Temp sense IC is a TMP236AQDBZTQ1 The location of the CJC temp-sensing IC is directly below the thermocouple connector plug, as shown in the images below. It is stitched to the ground plane for more thermal mass. At the moment the cables to the thermocouple are around 15-20cm. CJC location When holding four thermocouples in a hot water bath (I only have 4 on hand at the moment) I expect them all to read similar temperatures, but the plot below shows they vary from each other by over 10 C. They also do not match the expected temperature well, as read by a thermometer. This plot includes cold junction compensation using a temperature-sensing IC. I have also tested them using a high-resolution power supply to simulate the thermocouple voltage. This showed similar discrepancies across the full input range, both between the thermocouple channels and against the expected value. The expected value in the plot below is the straight line; all other curves are the 11 thermocouples simulated using a power supply. Power supply input for all 11 channels vs expected value When calculating the difference between the highest and lowest channels, it can be seen to approximate a linear slope with an offset component. The linear slope makes sense and would be due to tolerances on the resistors, which are all 1%. This was confirmed by swapping the resistors between the channels, after which the slope changed sign, as seen below. Difference between highest and lowest channels Difference between highest and lowest channels after swapping resistors However, I am unsure what creates the offset, as this is the larger component of error. I understand that thermocouples can only achieve around +-2.5 C accuracy, but I don't know where the larger errors between channels and compared to the expected value are coming from, especially when simulating them using a power supply, which means the errors are caused by my circuit, not the thermocouple itself. I am aiming for around 2-3 C of error between each thermocouple channel and the expected value. Does anyone have any ideas for other things I could test to figure this out? I have plenty of other data and tests if you need any extra information. Any help would be greatly appreciated! AI: At more-or-less room temperatures, the voltage coefficient of a K-type thermocouple is about 40 uV/deg C. The LMV844 input offset voltage is typically +/- 50 uV, but is worst-case +/- 500 uV. That worst-case offset corresponds to 500 uV / 40 uV per deg C = +/- 12.5 deg C per channel, so your potential channel-to-channel offset range is about 25 degrees C.
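As a quick sanity check of that arithmetic, here is a minimal sketch (the numbers simply restate the answer's figures; the Seebeck coefficient is the approximate room-temperature value for type K):

```python
# Minimal sketch: input-referred amplifier offset mapped to an apparent
# temperature error for a K-type thermocouple near room temperature.

SEEBECK_K = 40e-6        # V/degC, approximate K-type sensitivity (from the answer)
VOS_TYP   = 50e-6        # V, LMV844 typical input offset voltage
VOS_MAX   = 500e-6       # V, LMV844 worst-case input offset voltage

def offset_error_degC(v_os: float) -> float:
    """Apparent temperature error caused by an input-referred offset voltage."""
    return v_os / SEEBECK_K

print(f"typical : +/-{offset_error_degC(VOS_TYP):.1f} degC per channel")
print(f"worst   : +/-{offset_error_degC(VOS_MAX):.1f} degC per channel "
      f"(up to {2*offset_error_degC(VOS_MAX):.0f} degC channel-to-channel)")
```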
H: How can I use a serial port to listen to if a toggle is switched? Using a PC and writing some software, I want to detect whether a user toggles a small electric switch, with as little external hardware as possible. I was thinking that maybe I could use a USB-to-serial cable? Here are the pins of the serial cable: Before I open the COM port in my software I measure 0 V from pin 4 to ground. After I open it I measure -13 V from pin 4 to ground. I read that: "On many operating systems, including Windows, the DTR line is held low while the serial port is unused and not being controlled by any applications." So it seems that 0 V is low and -13 V is high, at least on the pins sending info out from the PC. I was thinking that I could connect this -13 V via the toggle switch to an input pin and thereby detect whether the switch has been toggled. Should this work? Which pin can I use? I was thinking of maybe using pin 6, Data Set Ready. This sounds like a pin used by e.g. a modem to signal to the PC (that it has data that can be read). What voltage do I need to apply to pin 6 to turn it high? -13 V or +13 V? Do I need to place a resistor in between? AI: The basic idea is correct, even though the details of how you think it works are not entirely correct. Yes, you can connect DTR to DSR via a switch and detect the switch state by reading the port status. It only requires that DTR is set to the "active" state so that the DSR pin either detects the active state (switch closed) or defaults to the inactive state (switch open). Also remember that what "active" means depends on context, as some signals are active low; on RS-232 the data lines use a positive voltage for logic 0 and a negative voltage for logic 1, while the control lines (DTR, DSR, RTS, CTS) are asserted with a positive voltage. The DTR output should never read 0 V, unless the USB adapter implements some kind of power-saving mode which shuts down the charge pumps while the port is unused. Using the handshake pins is not the only way, though; you can use the data pins too: if you loop TX back to RX through the switch and send a character, you either receive the character back or not, depending on the switch position. Depending on the programming language and the libraries you use to access the serial port, you can choose how you want to do it.
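As an illustration on the PC side, here is a minimal sketch using the Python pyserial package (the package choice and the "COM3" port name are assumptions, not something from the question): it asserts DTR and polls DSR, so with DTR wired to DSR through the toggle switch the DSR flag follows the switch position:

```python
# Minimal sketch (assuming the 'pyserial' package and a Windows-style port name):
# assert DTR, then poll DSR; with DTR looped to DSR through the toggle switch,
# DSR reads back the switch position.
import time
import serial  # pip install pyserial

PORT = "COM3"  # hypothetical port name - adjust for your adapter

with serial.Serial(PORT) as ser:
    ser.dtr = True                 # drive DTR to its asserted (positive) level
    last = None
    while True:
        state = ser.dsr            # True when DTR is looped back through the closed switch
        if state != last:
            print("switch closed" if state else "switch open")
            last = state
        time.sleep(0.05)           # simple 50 ms polling loop
```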
H: How to make DHT22 shut up while I talk to another device on the same bus I have a project where multiple devices have to be connected to the same bus. One of them is a DHT22, another is another controller. With the second controller I can communicate whichever way I want as far as timing goes; that doesn't really matter. The first problem is the confusing info about the DHT22 in different sources. According to a Sparkfun pdf the reset pulse "must beyond at least 1ms". ocfreaks.com mentions 18ms. In order not to trigger the DHT22 I decided to use a 500us reset pulse when communicating with the second controller. I find that the DHT22 still tries to send data and doing so ruins the communication. I have reduced the reset pulse between the two controllers to 200us, then 100us, then finally to 50us, but the presence of the DHT22 still messes up the communication. As part of another attempt I have tried to control the DHT22's VCC separately, but as it turns out, when it's not powered it pulls the bus low. Quite a barbaric behaviour. If somebody has more experience taming the DHT22, please help. PS: I don't know of any datasheet for the DHT22. There are some badly translated pdf documents around, but nothing decent with real info in it. Edit: I didn't explain it well enough. I don't have any problems communicating with the DHT22. I want to communicate with another controller using the same bus and I need the DHT22 to stay quiet in order not to ruin the communications. My problem is that even though some datasheets say that the reset pulse should be at least 1ms, it responds to way shorter pulses as well, when it really shouldn't. I can't find an Aslong datasheet for the DHT22. AI: If I interpret the DHT22 protocol description correctly, communicating with another device on the same bus may not be possible. You seem to expect that the pulse must be longer than 1ms before the sensor starts communicating, but that does not mean it will not respond to shorter pulses. At least the protocol description never indicates that the low-pulse length is validated by the sensor, so if the bus just happens to be low when the sensor happens to check it, it may start communicating. When the DHT22 is in sleep mode and waiting for the start pulse, the protocol documentation only requires that the start pulse be longer than 1ms to guarantee the DHT22 detects it; this is most likely because the sensor wakes up roughly every 1ms (to save power) and checks whether the data pin is low. It does not mean the sensor validates that the start pulse was long enough before it starts communicating; it only means the bus must happen to be low at the moment the sensor samples it, so it can respond to a low pulse of any length. And turning off the power supply to a chip whose data pin needs to stay high is an extremely bad idea in general. Normally I/O pins have protection diodes to VCC and GND, so by disconnecting the power supply, the chip will try to power itself via the pull-up resistor and the protection diode of the data pin, which is why the unpowered sensor pulls the bus low.
H: Why does a flyback converter in DCM mode have a different oscillation waveform at the start and end of MOSFET turn-off? As seen from the waveform, when the MOSFET turns off it has some high-frequency oscillations (shown in red), whereas before the MOSFET turns on it has low-frequency oscillations. I have found that the oscillation during turn-off is due to parasitic Cgd and the transformer's leakage inductance, and that the oscillation before turn-on is due to parasitic Cgd and the primary inductance. Why does this primary inductance not participate during turn-off, or why does the leakage inductance not participate in the oscillation before turn-on? There is only one coupled inductor, right? AI: Why does this primary inductance not participate during turn-off, or why does the leakage inductance not participate in the oscillation before turn-on? There is only one coupled inductor, right? Image from here shown for clarity. When the MOSFET turns off, most of the energy stored in the primary winding transfers into the secondary winding, but some of it does not, due to the imperfect magnetic coupling between the primary and secondary coils. So we talk about the primary inductance being made from two series inductors: the main part that couples to the secondary and the lesser part that we call "leakage inductance". That leakage inductance might be 2 or 3 percent of the overall primary inductance. So, when the MOSFET turns off, a small amount of uncoupled energy is rapidly released and resonates with the MOSFET drain-source capacitance. And, because the leakage inductance is only a small fraction of the overall primary inductance, the resonant frequency is high. But, you might say, if the leakage inductance is in series with the coupled inductance, then how come the net inductance is so small at that moment in time? The answer is that the coupled inductance acts like a voltage source while energy is transferring: by regular transformer action the secondary is producing a DC voltage (as we want), and that DC voltage reflects back to the primary via the coupled part of the primary inductance. Hence the net inductance seen at the primary during this period is only the leakage inductance. When there is insufficient energy left to push current through the output diode into the DC storage capacitor, a resonant circuit is formed again, this time involving the drain-source capacitance and the whole primary inductance; hence the frequency is much lower. There are a few subtleties that paint a slightly more complex picture, but I find there's no benefit in discussing or evaluating them for most flyback circuits.
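To see why the two ring frequencies differ so much, here is a minimal sketch with hypothetical component values (none of these numbers come from the question or answer); it just evaluates f = 1/(2*pi*sqrt(L*C)) once with the leakage inductance and once with the full primary inductance:

```python
# Minimal sketch (assumed component values): the two ring frequencies seen across
# the MOSFET of a DCM flyback, from the same resonance formula with different L.
import math

C_DS   = 200e-12         # F, assumed effective drain-node capacitance
L_PRI  = 500e-6          # H, assumed primary magnetizing inductance
L_LEAK = 0.02 * L_PRI    # H, leakage taken as ~2 % of the primary inductance

def ring_freq(l: float, c: float) -> float:
    return 1.0 / (2.0 * math.pi * math.sqrt(l * c))

f_turn_off = ring_freq(L_LEAK, C_DS)   # high-frequency ring right after turn-off
f_dcm      = ring_freq(L_PRI, C_DS)    # low-frequency ring once the diode stops conducting
print(f"leakage ring = {f_turn_off/1e6:.2f} MHz, DCM ring = {f_dcm/1e3:.0f} kHz")
```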