H: Why is the soldering surface of this pogo pin slightly rounded? I was curious as to why the bottom of these SMT pogo pins is slightly rounded. Wouldn't it cause them to fall over before they are reflowed? It seems that a completely flat bottom would make sense, like most SMD pads. AI: The curved bottom allows for better wetting when using solder paste, as it gives more surface area on the bottom where the solder paste actually hits. This is less important when hand soldering, as you generally use far more solder and it will wet up the sides of the pin, but in large-scale manufacturing much less solder paste is used and you want to maximize the surface area exposed to it. If the bottom were flat, it would lie flat against the pad when reflowed and give far less surface area. In addition, the solder paste is very viscous and will hold the pin in place, so the pin doesn't actually need to balance on a flat bottom.
H: Disagreement between theoretical calculation and measurement of upper cutoff frequency of a circuit based on RG58 coaxial cable For a 100 metre RG58 coaxial cable, according to the datasheet, the capacitance of the cable is approximately 10 nF and the inductance approximately 25 µH. A sinusoidal voltage is applied at the input of the line with a 50 Ohm source, and the end of the line is connected to an oscilloscope, HZ mode (1 MOhm//10 pF), AC coupling. This gives, in my view, the following schematic: The RG58 coaxial cable is replaced by the lumped elements R, L, G, C; G is neglected. The simulation of this circuit with LTspice, and theoretical calculation, give an upper cutoff frequency of approximately 300 kHz. However, I personally made the measurement with a 100 metre RG58 coaxial cable and the measured -3 dB bandwidth is approximately 5 MHz. (I don't recall the exact measurement value, sorry.) The coax cable model (picture below) gives a value in perfect agreement with the measurement. Once again, I don't have a screenshot of the simulation result, sorry. I do not understand why the measurement and the simulation of the coaxial cable give a cutoff frequency greater than the one given by taking into account only the theoretical effect of the parasitic capacitance. Every physical phenomenon I neglected, such as skin effect or conductance of the dielectric, should lower the upper cutoff frequency. AI: You need to model your lumped line in much smaller quantities like "per metre". At the moment you are modelling it as "per 100 metres" and that will give you severe problems above a fairly low frequency. Consider this: 25 µH and 10 nF have a resonant frequency of $$\dfrac{1}{2\pi\sqrt{LC}} = 318 \text{ kHz}$$ And clearly that is the problem with using a lump of line equal to 100 metres. If you were only interested in frequencies of a few tens of kHz then a 100 metre model would be fine. So, take the maximum frequency you are interested in and convert it to a wavelength; 5 MHz has a wavelength of 60 metres. Now make sure that the lumped model length is no longer than one twentieth of this distance, i.e. 3 metres.
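A quick way to see the effect is to compute the self-resonant frequency of a lumped L-C segment for different segment lengths. This is only a sketch of the arithmetic above; the 250 nH/m and 100 pF/m figures are simply the 25 µH / 10 nF datasheet totals from the question divided by 100 m.

```c
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Self-resonance of one lumped L-C segment of RG58, using ~250 nH/m and
 * ~100 pF/m (the 25 uH / 10 nF totals from the question divided by 100 m). */
static double segment_resonance_hz(double length_m)
{
    const double L_per_m = 250e-9;   /* H/m */
    const double C_per_m = 100e-12;  /* F/m */
    double L = L_per_m * length_m;
    double C = C_per_m * length_m;
    return 1.0 / (2.0 * M_PI * sqrt(L * C));
}

int main(void)
{
    /* One 100 m lump resonates near 318 kHz -> the model is useless above that.
     * A 3 m lump resonates near 10.6 MHz, comfortably above the 5 MHz of interest. */
    printf("100 m lump : %.0f kHz\n", segment_resonance_hz(100.0) / 1e3);
    printf("  3 m lump : %.1f MHz\n", segment_resonance_hz(3.0) / 1e6);
    return 0;
}
```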
H: 1 A load with 3 1.5V@2950mAh AA alkaline batteries I have a system that is powered with 3 AA alkaline batteries, Varta Industrial Pro (1.5V@2950mAh). The batteries are connected to this 3V3 LDO that supplies power to an MCU. I have taken a minimum dropout of 150 mV (so the LDO would work down to 3V45). The MCU system will activate (using a FET) an electronic lock of this type, which consumes a maximum of 1000 mA for 1 second and accepts a voltage from 3 V to 5 V. As my LDO only supports up to 250 mA, I would like to know if there would be any problem connecting the lock directly to the three-battery supply, since the batteries are not a fixed-voltage power supply. In case there's no problem, would these three batteries in series be capable of maintaining the required voltage level when the lock is activated? AI: It is a much better design to connect the lock directly to the batteries; this way the lock operation will not inject current spikes or other disturbances downstream of the LDO where the MCU is connected. When the batteries are new they will maintain the appropriate voltage level (3-5 V, so there is ample margin). At some point, when they become depleted, the lock will not operate anymore, and the MCU may reset if the voltage goes too low during the lock operation. On this site you can compare the voltage vs. current of AA batteries: https://lygte-info.dk/review/batteries2012/CommonAAcomparator.php (set the current to 1 A instead of the default 2 A). They list some Varta models but not your specific model; Varta should be able to supply the curves. When the battery is new enough, the voltage per cell drops with 1 A drawn but stays above what you need, as at least 1.15 V per cell is required to keep the LDO working (3 x 1.15 = 3.45). At some point during the discharge, the 1 A curve crosses below the 1.15 V level; the batteries will then have to be replaced to maintain proper operation of the LDO + MCU.
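The per-cell threshold quoted in the answer follows directly from the LDO numbers in the question; a minimal sketch of that arithmetic (cell count and dropout are taken from the question):

```c
#include <stdio.h>

int main(void)
{
    const double v_ldo_out = 3.3;   /* regulated output, V        */
    const double v_dropout = 0.15;  /* worst-case LDO dropout, V  */
    const int    cells     = 3;     /* series AA cells            */

    double v_in_min   = v_ldo_out + v_dropout;  /* 3.45 V at the LDO input    */
    double v_cell_min = v_in_min / cells;       /* 1.15 V per cell under load */

    printf("LDO needs >= %.2f V in, i.e. >= %.2f V per cell at 1 A\n",
           v_in_min, v_cell_min);
    return 0;
}
```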
H: Current source schematics. What's the difference? I have two current source schematics. They are nearly identical and one could be transformed into the other by moving components and voltage sources around. Both are set to 1 mA output. But the left one doesn't work. The right one works perfectly. Why? AI: This is really far from being a practical circuit - too sensitive to resistor ratios, and it's also not quite constant as it neglects the loading effect of the resistors on the "output". Very slight offset voltages on the op-amp will swamp your feedback. But your proximate problem with almost ideal parts, as the comment from @Marko indicates, is the gain of the LTspice op-amp, which defaults to 100,000, so with the left circuit it takes only 100 µV of error to drive the output from 0 V to 10 V, which is basically your entire feedback signal with that 0.1 ohm sense resistor. The other circuit has an extra gain of 100, so that source of error is reduced.
H: What is this white box in a battery pack? I have an O2 Hurricane canless air system that seems not to charge anymore. I was a bit surprised because it hasn't been through much use or many charging cycles, and customer support was non-responsive, so I decided to take it apart and see if I could at least make it work with direct DC power. Here's the inside circuit with the battery pack taken out: The black box is the on switch. Circles 1 and 2 are the metal tabs that were connected to the battery pack, and circle 3 is the white piece I'm wondering about. Because when I plug this circuit straight into the wall (via a 12V DC adaptor), the motor twitches but doesn't spin up. Here's a close-up of the white box: I tried for 5 minutes to get a picture where I could make out exactly what the writing above 55°C says, but I couldn't do it, even under a 4x magnifying glass. I'm hoping someone recognizes it just by shape or place in the circuit. The other sides of the box are blank. AI: That looks like a thermal fuse or switch. Without the exact details, its purpose would be to limit the temperature of the motor or battery pack. One way of testing is to see if the device works with it shorted, but I would not advise using the device without that part working properly.
H: Troubleshooting Switch Mode Power Supply I am trying to troubleshoot a switch mode power supply from an HP workstation. I have done a visual inspection and didn't find any anomalies on the capacitors. I reckon that the culprit might be the fuse, as there is absolutely no light nor fan movement when the power supply is connected. Before proceeding I have two questions: Is it safe to permanently remove heat shrink (without replacing it) on various components? Is the fuse located in the area circled in yellow in the image? AI: No, and no. The circled component isn't a fuse, it's a line filter, and it is at line potential so it needs the heat shrink around it to protect the rest of the system. The fuse is somewhere else.
H: Understanding voltage drop Meanwhile, in quarantine, I have been trying to learn about electronics. I am still not able to understand the voltage drop when it comes to long wires. My goal is to be able to send a 5 V signal over a very long wire. I have read a lot about Ohm's law and done a lot of exercises, and I am still confused. Before conducting the real experiment with the long wire I would like to simulate it. Here is the simulation of what I am doing: According to Google, the resistance of the cable I am using is 0.05 ohms per meter. If the distance is 200 meters that means there will be a resistance of 0.05 * 200 = 10 ohms. If I intend to send a 5 V signal over a long wire this is my simulation: (The long 200 meter wires represent the resistors in the diagram.) According to Ohm's law the current of this circuit will be V=I*R -> 5=I*20 -> I=0.25 amps. Knowing the current, I can calculate that the voltage drop across each resistor will be V=I*R -> V=0.25*10 -> V=2.5. Therefore the voltages of the circuit will be as follows: (The first part of the circuit has 5 volts. After the long wire there will be 2.5 volts, and at the end 0 volts.) To prove this is correct, this is a circuit in real life: (The yellow cable comes from the power supply with 5 V. Then that is connected to the red cable. The red cable is connected to the 10 ohm resistor. Then there is a blue cable connected to the other resistor. And finally that is connected to the black cable, ground.) If I then use a voltmeter I can see that the voltage drop between the red wire and blue wire is 2.5 volts: Now here is my question. Why do I still read 5 volts if I place the probes like this? On the internet everyone says that running 5 V signals over long wires is not a good idea. Maybe it is not a good idea if you are planning to send current. I can understand that the current will drop a lot. But the voltage will not, correct? On the other end of the 200 meter wire I just want to read a voltage in order to get a signal. What am I doing wrong? Why can't I send a PWM signal using this approach? Questions A. When reading a PWM signal with an Arduino, does the Arduino read the voltage or the current? I think it reads the voltage and does not care about current, just like the voltmeter? I just want to send a PWM signal over the long wires to trigger an event at the other end. B. Why do people on the internet say there will be a voltage drop when using long cables? I just simulated a very long cable and I am still able to read 5 volts. Solution I know the solution will be to use an optocoupler with a higher voltage, because I have been researching on the internet. But I still want to understand why my approach is not correct and will probably not work. Edit Sorry, I broke the circuit. If I place a 10K resistor in order to complete the circuit I still read 5 V. The circuit is complete and I am still reading 5 V over the 200 meter wire. AI: A) The Arduino analog input reads voltages. B) People on the internet send current over the wire, so wire resistance causes a drop. When your circuit has no current flowing, there is also no drop.
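To see how the receiver's input resistance decides how much of the 5 V survives the wire, here is a minimal sketch of the divider formed by the round-trip wire resistance and the load; the 10 kΩ load mirrors the edit in the question, and the 100 Ω case is just an illustrative heavier load.

```c
#include <stdio.h>

/* Voltage seen by a load at the far end of a wire pair:
 * V_load = V_source * R_load / (R_load + R_wire_roundtrip) */
static double v_at_load(double v_source, double r_wire_roundtrip, double r_load)
{
    return v_source * r_load / (r_load + r_wire_roundtrip);
}

int main(void)
{
    const double v_source = 5.0;    /* V                                   */
    const double r_wire   = 20.0;   /* 2 x 200 m x 0.05 ohm/m (out + back) */

    /* A 10 kOhm receiver (like the edit in the question) barely loads the wire,
     * so essentially the full 5 V is read. A 100 ohm load loses a sixth of it. */
    printf("10 kOhm load: %.3f V\n", v_at_load(v_source, r_wire, 10e3));
    printf("100 Ohm load: %.3f V\n", v_at_load(v_source, r_wire, 100.0));
    return 0;
}
```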
H: What microcontrollers are suitable for a closed-loop DC motor? I am working on a small experiment on a closed-loop DC motor for speed control. The DC motor is separately excited and fed through a DC chopper. I would like to know which microcontrollers are best suited for this project. AI: Assuming a brushed DC motor, if you're planning on tachometer feedback, a quadrature encoder peripheral would be a good thing to have. On-chip ADC may or may not be adequate for the analog inputs - current and maybe rail voltage for feed-forward. Speed control, as opposed to position control, is generally pretty undemanding, and I suspect most 8-bit, 16-bit or 32-bit MCUs with PWM and a quadrature encoder peripheral would be acceptable in most situations. Using a 32-bit micro would allow more sophisticated control algorithms or easy implementation of simpler ones. You can get a Cortex M4 with a single-precision FPU for only a few dollars these days, so that might be a logical way to proceed - though the complexity of firmware development is an order of magnitude above something like a PIC, you don't need to worry much about execution time and could probably use an RTOS if that's your bent. Although it's possible to buy processors with (relatively) prototyping-friendly 64-lead LQFP packages, an evaluation board would be a lot easier if you're not comfortable with that kind of thing; then only the power devices would need a PCB.
H: Hall sensor confusion with unipolar and bipolar as well as magnetic induction sensitivity I want to create a simple bicycle computer that shows my speed and RPM. I'm thinking of using a TLE4905L or TLE4906L (which I think is more sensitive) - both unipolar. Now I'm a bit confused about unipolar vs bipolar. When my wheel spins, the magnet will pass in front of the sensor (sensor is static, magnet is rotating with the wheel on a spoke) with the same orientation, as it passes, the magnetic field increases, then decreases. With unipolar sensors this should count as a rotation (a rising and decreasing edge at the output pin - of course I'll use just one of the edges at the controller level). On the other hand with bipolar sensors I think I'd need two magnets, one near the other oriented with opposing poles towards the sensor, one would make it go high and the opposing pole from the second magnet passing by would make the signal go low, am I understanding this correctly? Now for the sensitivity of the sensors. Let's assume, for the sake of the argument, a typical fridge magnet at 5mT as per Wikipedia. With the unipolar versions of the sensors with the TLE4905 (page 8 of the data sheet linked above) the strength of the magnet would not be enough to trigger the sensor which has a Bop=7.5mT. The TLE4906 is a bit more sensitive at Bop=6.5mT, but still not enough. Is this right? Now I either need a stronger magnet or a more sensitive unipolar sensor, right? If I got the unipolar vs bipolar part right, then the answer should be yes, otherwise I need some explanation on why not. AI: with bipolar sensors I think I'd need two magnets, one near the other oriented with opposing poles towards the sensor, one would make it go high and the opposing pole from the second magnet passing by would make the signal go low, No. A single magnet has both north and south poles, so with correct orientation it can do both. Two magnets would only be required if you wanted the output to go high and low at widely separated positions on the wheel. Let's assume, for the sake of the argument, a typical fridge magnet at 5mT as per Wikipedia. With the unipolar versions of the sensors with the TLE4905 (page 8 of the data sheet linked above) the strength of the magnet would not be enough to trigger the sensor which has a Bop=7.5mT. The TLE4906 is a bit more sensitive at Bop=6.5mT, but still not enough. Is this right? Now I either need a stronger magnet or a more sensitive unipolar sensor, right? Yes. Best solution may be to use a Neodymium magnet, which is much stronger than a fridge magnet.
H: Understanding TVS diodes in this application From this Reference Design you can see they use two TVS diodes at the gate of the first FET to clamp voltages greater than 500V. They do the same in this other Reference Design. So basically my question is: TVS diodes do not regulate compared to Zeners, so can this approach continuously clamp high voltages without having a negative impact on the TVS? To my understanding, TVS diodes are mainly used to clamp overvoltages, but only for a short time. AI: What makes you assume that TVS diodes don't regulate compared to Zeners? They are avalanche devices just like "Zeners" and you can see the breakdown characteristics in the datasheet. TVS diodes are just more rugged, to handle high peak power events. The TVS diodes shown form a crude reference for the gate of the upper cascode FET, and clamp the Vgs to a safe value, respectively. The cascode device allows the switching device to be used in an application where it is switching a higher voltage than its rating, but it's not necessary to have a precise reference voltage for the gate, so a TVS works fine.
H: Can you turn on a light bulb using only one of the two wires of the wall outlet? Instead of trying crazy things I would rather ask. I am asking this for purposes of learning. So I have learned that if the voltage is high you can get electrocuted by touching only one side of the wall outlet. That is because the potential difference is high enough for the electrons to start flowing. Taking that into consideration, if I were to make this connection, will the light bulb turn on? Will this also happen with a higher voltage 120V DC battery? If this were to also be true with the battery, would this work by connecting only the positive side of the battery? If you were to touch the negative side of the battery nothing would happen, correct? AI: If it's a very tiny lightbulb that doesn't require much power at all, then yes, it can work. It works because the 2nd wire from the lamp is connected by parasitic capacitance to electrical ground, completing the circuit. There's a type of tester used to check if an outlet is "hot" that works on this principle. They work best when the user touches the 2nd wire to increase the capacitance to ground. Will this also happen with a high 120V DC battery? No, because a capacitive connection can only pass AC current. If this were to also be true with the battery, would this only work by connecting the positive side of the battery? If you were to touch the negative side of the battery nothing would happen, correct? If you want to power a circuit from a battery you have to connect one side of the circuit to the positive terminal of the battery and the other side of the circuit to the negative terminal of the battery. You can't power a circuit by connecting either one of the terminals on its own to the circuit.
H: Am I using the right resistors with these MOSFETs for an ESP8266 controlled LED strip? This is my first question here. I hope I don't mess it up. I made a WiFi controlled LED strip about a year ago. The LED strip is one of the "dumb" (no individual pixel control) 12V ones and I used a Wemos Mini to control it. Now, I want to make another one, but I can't remember why I chose the resistor values I did. Does the following look right? The MOSFETs are 40T03GP. I tried googling, but I can't find a setup similar to mine to compare it to. AI: If you are using MOSFETs you can get rid of those 220 ohm resistors. And if you are sure that your ESP outputs are always going to be in a defined logic state (0 or 1), you can even remove the 10k pulldown resistors. Remember that FET transistors are controlled by voltage, not current.
H: How much power can a 7805 voltage regulator dissipate without a heatsink? I am using an LM7805 linear voltage regulator. Vin is 12 V, Vout is 5 V (regulated). My load is currently 100 mA. Without a heatsink, it's pretty hot. Could I use a large heatsink to regulate 5 A? Or what is the maximum output current with a heatsink for a 7805 linear voltage regulator? AI: No. Even if the output transistor did not saturate, and even if you had a PERFECT heatsink of infinite size and negligible thermal resistance, you could only manage 2.85 A due to the junction-to-case thermal resistance being 5 K/W at room temperature. So in reality even 2.85 A is unobtainable. The datasheet lists the peak current as 2.2 A, which is just that: a peak current, not continuous. Since there is no input voltage given, that figure is also probably right at dropout (i.e. minimum heating), and 12 V to 5 V is well beyond that, so you would get significantly less than even 2.2 A.
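The arithmetic behind those numbers can be sketched as below; the 65 K/W junction-to-ambient figure (TO-220, no heatsink) and the 125 °C junction limit are typical datasheet values assumed here, not taken from the question.

```c
#include <stdio.h>

int main(void)
{
    const double v_in = 12.0, v_out = 5.0;
    const double t_amb = 25.0, t_j_max = 125.0;  /* assumed typical limits   */
    const double theta_ja = 65.0;                /* K/W, TO-220, no heatsink */
    const double theta_jc = 5.0;                 /* K/W, junction to case    */

    /* Dissipation and junction temperature at the 100 mA load in the question. */
    double p_100mA  = (v_in - v_out) * 0.100;
    double tj_100mA = t_amb + p_100mA * theta_ja;
    printf("100 mA, no heatsink: P = %.2f W, Tj ~ %.1f C\n", p_100mA, tj_100mA);

    /* Current limit with a hypothetical perfect (zero thermal resistance)
     * heatsink: all 100 K of headroom is dropped across theta_jc alone. */
    double p_max = (t_j_max - t_amb) / theta_jc;   /* 20 W            */
    double i_max = p_max / (v_in - v_out);         /* ~2.85 A ceiling */
    printf("Perfect heatsink limit: P = %.0f W -> I = %.2f A\n", p_max, i_max);
    return 0;
}
```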
H: Parallel Resonance circuit Q factor formula Clarification I'm confused by the formula for the Q factor in parallel resonance, because in other books it's different. Is it okay for it to be like this? A. This is from the Electronics Tutorials website B. This is from a book AI: I'm confused by the formula for the Q factor in parallel resonance because in other books it's different. In the first case, they are talking about the Q factor of a perfect inductor, L, in parallel with a resistor. In the second case they are talking about the Q factor of a resistor in series with the inductor, L. I know it can be confusing and some websites don't make this explicitly clear.
H: STM32F030P6 MCU returning high even when no voltage is measured on the pin I am trying to make a custom PCB based on the STM32F030P6 MCU. One of the requirements is to read a pin's value (digitally, i.e. high or low). For this purpose I am using STM32CubeIDE, and to read the pin's value, it's first initialised as below: GPIO_InitStruct.Pin = GPIO_PIN_6; GPIO_InitStruct.Mode = GPIO_MODE_INPUT; GPIO_InitStruct.Pull = GPIO_NOPULL; HAL_GPIO_Init(GPIOA, &GPIO_InitStruct); and the value is recorded like this: int cc = 0; while (1) { cc = HAL_GPIO_ReadPin(GPIOA,6); if(cc) HAL_GPIO_WritePin(GPIOA, GPIO_PIN_4, GPIO_PIN_SET); else HAL_GPIO_WritePin(GPIOA, GPIO_PIN_4, GPIO_PIN_SET); /* USER CODE END WHILE */ /* USER CODE BEGIN 3 */ } Above is the complete piece of simple code I wrote to test if the pin reads correctly, by examining the state of pin 4 to see if it's high, meaning that the MCU has recorded pin 6 as high or low. Pin 4 is initialised as follows: GPIO_InitStruct.Pin = GPIO_PIN_4; GPIO_InitStruct.Mode = GPIO_MODE_OUTPUT_PP; GPIO_InitStruct.Pull = GPIO_NOPULL; GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW; HAL_GPIO_Init(GPIOA, &GPIO_InitStruct); Even when there is no voltage, it still returns the pin's status as 1 (HIGH). I did this again after enabling the internal pull-down resistor: GPIO_InitStruct.Pin = GPIO_PIN_6; GPIO_InitStruct.Mode = GPIO_MODE_INPUT; GPIO_InitStruct.Pull = GPIO_PULLDOWN; HAL_GPIO_Init(GPIOA, &GPIO_InitStruct); still to no avail. I made sure this is not a hardware problem by basically converting the pin to an analog input and doing a simple if statement: if( the_measured_voltage_on_pin < 0.3V){ do sth; } and it worked fine. I confirmed this by applying a 2.5V voltage to the pin and it worked fine, so it's more likely than not that it's a pure software problem, but I am getting nowhere in figuring out where this issue is arising from. Any help or hints would be highly appreciated. Thank you in advance for your time, Ali AI: while (1) { cc = HAL_GPIO_ReadPin(GPIOA,6); if(cc) HAL_GPIO_WritePin(GPIOA, GPIO_PIN_4, GPIO_PIN_SET); else HAL_GPIO_WritePin(GPIOA, GPIO_PIN_4, GPIO_PIN_SET); /* USER CODE END WHILE */ /* USER CODE BEGIN 3 */ } This bit of code is writing GPIO_PIN_SET to pin 4 whether cc, pin 6, is true or not. So pin 4 will always be high.
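A corrected version of that loop, as a sketch of the fix the answer describes, drives pin 4 low in the else branch. Note also that with the ST HAL, HAL_GPIO_ReadPin expects a pin mask such as GPIO_PIN_6; passing the raw number 6 actually tests pins 1 and 2.

```c
/* Sketch of the corrected loop (replaces the while-loop in the question):
 * read PA6 with the GPIO_PIN_6 mask and mirror its state onto PA4. */
while (1)
{
    if (HAL_GPIO_ReadPin(GPIOA, GPIO_PIN_6) == GPIO_PIN_SET)
        HAL_GPIO_WritePin(GPIOA, GPIO_PIN_4, GPIO_PIN_SET);
    else
        HAL_GPIO_WritePin(GPIOA, GPIO_PIN_4, GPIO_PIN_RESET); /* was SET in both branches */

    /* USER CODE END WHILE */
    /* USER CODE BEGIN 3 */
}
```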
H: Where to connect GND on keyboard multiplexer circuit? Hi all, I'm making a project to read my old calculator's keyboard with an Arduino, and I've decided to design a new PCB and swap it for the original one, as it's easier than trying to hook into the old circuit. I'm working in EasyEDA and it's nearly done, but I'm confused about how and where to connect the Arduino's GND pin to the circuit. I know it has to be connected to the input pins (see below), but I'm not sure if directly... or just how in general. I'm completely new to PCB design and electronics, so if you see anything off, please point it out. I'd be more than happy to hear your thoughts on the design, feel free to share them. The output pins are where 5 V will be applied and the inputs are going to be read. I haven't decided on the diodes' type yet, so the ones you see are the default diodes EasyEDA has in the schematic designer. My design: (the yellow circles are the exact positions of the keys on the original PCB) And one more thing. I saw people use the "Copper Area" option in tutorials many times, but I have no idea what that does; could someone provide any insight into what it is used for? P.S.: I've had complaints about how I'm not asking a specific question, but please understand that I'm new to this. I'm not lazy-googling, but I can't search for something I don't even know about, and I have no other platform to ask these questions. Thanks in advance! AI: The keyboard matrix does not use a ground connection to operate. The MCU sets the output pins to VCC or GND to scan a single row (or column) at a time, and then reads the input pins to see which buttons are activated on the selected row (or column). And no, the need for a ground connection does not change based on whether you use a ready-made library or write your own code.
H: Power recirculation rig using rectifiers and BLDC motors I am currently trying to set up a rig to test my 3D printed gears under different torques and motor speeds. The power input side is going to be the BLDC motor, driven by a hobby ESC (BLHeli_32), powered by a generic AC to 12V PSU. On the braking side of the gears I am reluctant to wire up three PWM-controlled resistors across the "braking motor", as it would be very wasteful of energy and produce heat, since I intend to run the motor at around 100 W for many hours or days. I am instead considering using a full-wave three-phase bridge rectifier and connecting the DC outputs back into the DC-in circuit (still with PWM MOSFETs to control torque). What starts confusing me is that, under the assumption of ideal, identical, lossless motors, the output voltage would be sqrt(3)*V_in (minus diode losses). Something feels confusing about it, as the voltage would exceed the supply voltage, and I can't figure out how the system would reach equilibrium. Is this system possible? Would I need to use a significantly lower kV motor on the output side? Would I need to put in a diode or flyback diode to protect the PSU against backflow of electricity when the input motor throttles down? I've sketched loosely what I think this circuit should look like purely electrically (bar the PWM on the output), but I may have made mistakes as I have very little electronics knowledge (aerospace engineer). simulate this circuit – Schematic created using CircuitLab AI: 1st note: The BLHeli ESC won't last for long, since it is meant for RC planes, cars, etc.; they don't work at nominal power all the time, i.e. S1 duty. 2nd note: with this circuit you won't be able to control the torque. Only when the generator voltage becomes higher than the battery voltage will the generator produce a negative torque. As said, this would reach equilibrium when the generator produces a higher voltage than the supply voltage: over a very short window it will run freely, and then a very small rpm increase will prevent further speed increase. Of course the two motors may not have the same characteristics to achieve this.
H: What is the thermistor sensitivity? Here it is written: "Some references use the negative temperature coefficient (NTC) α to describe the sensitivity of a thermistor: α = -B/T²." And here it is written: "The B constant expresses the degree of thermistor sensitivity (the rate of change of its resistance) to temperature changes." Which is in common use now? AI: I don't use either one. Both are approximations that vary with the temperature being measured. I use this equation (or a variation thereof), from the eFunda page, put it into an Excel worksheet or Matlab code, and compute the sensitivity for any temperature I desire. Note that you should also consider how you're going to measure the resistance of the thermistor (current source, voltage divider, etc.). In the end, you probably care about the sensitivity of the thermistor + measuring circuit, in mV per deg C, or something similar. Added example below: This is an example. The thermistor is a Betatherm 10K3CG3 (10 kOhms @ 25 C), with a parallel resistor, excited with a 250 uA current source. The curve is the sensitivity of the circuit.
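As a concrete illustration of computing the sensitivity at any temperature rather than relying on a single coefficient, here is a minimal sketch using the simple Beta model. The 10 kΩ and 250 µA values roughly mirror the example in the answer, but the B constant and the parallel resistor value are assumptions, not the actual Betatherm data.

```c
#include <stdio.h>
#include <math.h>

/* Beta-model NTC resistance: R(T) = R25 * exp(B*(1/T - 1/T25)), T in kelvin. */
static double ntc_r(double t_k, double r25, double beta)
{
    const double t25 = 298.15;
    return r25 * exp(beta * (1.0 / t_k - 1.0 / t25));
}

int main(void)
{
    const double r25   = 10e3;     /* 10 kOhm at 25 C (from the answer)        */
    const double beta  = 3892.0;   /* K, assumed typical value for a 10K3 part */
    const double r_par = 10e3;     /* assumed parallel (linearising) resistor  */
    const double i_src = 250e-6;   /* current source from the answer, A        */

    for (double t_c = 0.0; t_c <= 50.0; t_c += 10.0) {
        double t_k = t_c + 273.15;
        double r   = ntc_r(t_k, r25, beta);
        double a   = -beta / (t_k * t_k);        /* alpha = -B/T^2, per kelvin */

        /* Circuit sensitivity: V = I * (R || Rpar); differentiate numerically. */
        double r2   = ntc_r(t_k + 0.01, r25, beta);
        double v1   = i_src * (r  * r_par) / (r  + r_par);
        double v2   = i_src * (r2 * r_par) / (r2 + r_par);
        double dvdt = (v2 - v1) / 0.01;          /* V per deg C */

        printf("%5.1f C: R = %7.0f ohm, alpha = %+.2f %%/K, dV/dT = %+.2f mV/C\n",
               t_c, r, 100.0 * a, 1e3 * dvdt);
    }
    return 0;
}
```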
H: How to wire a two pin bridge rectifier I have this bridge rectifier (Image below). https://www.deantechnology.com/products/hvhp10k How do I wire this? I know how to wire four pin bridge rectifiers but how do I wire this two pin rectifier? AI: A rectifier doesn't necessarily have to be of the bridge type. In fact, "rectifier" is basically synonymous with "diode", which is what I believe you've linked here - A diode packaged in a heatsink assembly. From the same website, scroll down and go to "Full Wave Bridge Rectifiers". Sounds like those are what you're after.
H: How can I design the BJT based inverter question given in the description? QUES: Design a BJT based inverter operated from a 1.5 V supply. With the input connected to the 1.5 V supply through a resistance equal to RC, the total power dissipation should be 1 mW and the forced β should be 10. Use VBE = 0.7 V and VCEsat = 0.2 V. I tried solving this but I am not sure whether the total dissipation will be due to Rc only or to both Rc and Rb. Please help. Below is my attempted design. AI: It's a bit of a tricky problem. You seem to understand that you can consider the base and collector circuits independently, so that's a good start. The phrase "the input connected to the 1.5 V supply through a resistance equal to RC" means that the base current flows through the series combination of the previous stage's Rc plus this stage's Rb. That's the only way you can meet the forced beta constraint. Here's the circuit: simulate this circuit – Schematic created using CircuitLab The total power dissipation is specified as 1 mW, and the supply voltage is 1.5 V. This means that the total current consumption of both circuits can only be 0.6667 mA. The "forced β" is specified as 10, which means that 10× the base current flows through the collector. This means that Ib must be 0.0606 mA and Ic must be 0.6061 mA. This should be enough of a hint to let you complete the problem.
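Following the answer's hint, one way the numbers could be carried through is sketched below; it simply assumes the driving stage's Rc equals the Rc being solved for, as the problem statement says, so both resistor values fall out of the two loop equations.

```c
#include <stdio.h>

int main(void)
{
    const double vcc = 1.5, vbe = 0.7, vce_sat = 0.2;
    const double p_total = 1e-3;    /* W                  */
    const double beta_f  = 10.0;    /* forced beta, Ic/Ib */

    /* Total supply current with the transistor saturated: Ib + Ic = P / Vcc. */
    double i_total = p_total / vcc;           /* 0.6667 mA */
    double ib = i_total / (beta_f + 1.0);     /* 0.0606 mA */
    double ic = beta_f * ib;                  /* 0.6061 mA */

    /* Collector loop: Vcc = Ic*Rc + Vce(sat). */
    double rc = (vcc - vce_sat) / ic;

    /* Base loop: the input is driven from Vcc through an identical Rc in
     * series with Rb, so Vcc = Ib*(Rc + Rb) + Vbe. */
    double rb = (vcc - vbe) / ib - rc;

    printf("Ib = %.4f mA, Ic = %.4f mA\n", ib * 1e3, ic * 1e3);
    printf("Rc = %.2f kOhm, Rb = %.2f kOhm\n", rc / 1e3, rb / 1e3);
    return 0;
}
```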
H: Why use bigger resistors for an LED? I have a diode whose forward voltage is 3.5 V and forward current is 25 mA. I supply it with 5 V, so I calculate the value of the resistor in the following way: R = (5V - 3.5V)/25mA = 60 Ohm. However, I see that most times bigger resistors are used, like 220 Ohm or 330 Ohm. Why is it better to use bigger resistors than to stick with the calculated values? AI: Because your LED has an unusually high forward voltage. It is a blue or white LED, while a "normal" LED is red or green. A red or green LED has a forward voltage of roughly 2 volts and a standard forward current of about 10 mA. Another reason is that you simply don't want to run the LED at maximum brightness. Maybe it's annoyingly bright, and you will get a longer life out of it if you run it at a lower current.
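A short sketch of the same series-resistor arithmetic, comparing the blue/white LED in the question with a typical red LED (the red-LED figures are the rough 2 V / 10 mA values from the answer):

```c
#include <stdio.h>

/* Series resistor for an LED: R = (Vsupply - Vf) / If. */
static double led_resistor(double v_supply, double v_f, double i_f)
{
    return (v_supply - v_f) / i_f;
}

int main(void)
{
    const double v_supply = 5.0;

    /* Blue/white LED from the question: Vf = 3.5 V, If = 25 mA -> 60 ohm. */
    printf("blue/white: %.0f ohm\n", led_resistor(v_supply, 3.5, 0.025));

    /* Typical red LED per the answer: Vf ~ 2 V, If ~ 10 mA -> 300 ohm,
     * which is why values like 220-330 ohm are so common. */
    printf("red:        %.0f ohm\n", led_resistor(v_supply, 2.0, 0.010));
    return 0;
}
```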
H: Design R C snubber for flyback converter I have a DC/DC flyback converter and I need to design a snubber. I already know the diode to use; I just need to design the R and C values. Now, I have simulated my circuit, and by tweaking the values of C and R I came out with C=680uF and R=2.2kOhm; the problem is that they are simulated values and I've not really applied any formula. I know that the MOSFET will see $$V_{DS}^{max}=V_{IN}+V_{C_{SN}}+\frac{\Delta V_{C_{SN}}}{2}$$ and $$R_{SN}=\frac{2V_{C_{SN}}(V_{C_{SN}}-\frac{n_1}{n_2}V_{out})}{L_df_ci_1^2}$$ and $$C_{SN}=\frac{V_{C_{SN}}}{R_{SN}f_C\Delta V_{C_{SN}}}$$ Now my Vin=12V, Fc=50kHz, the V_DS max on the transistor should be 17V and the max I_DS on the same transistor is 2.1A (n1/n2=1 and k=0.996588). I really can't figure out how to put the data together and actually calculate R and C for the snubber. I put a random diode in the schematic, as the one I use in the project is an MBR1045. simulate this circuit – Schematic created using CircuitLab This is V on the primary and V on the secondary... Now that I'm checking the secondary, it seems bad, really. This is VDS And this is Vout AI: The design of this \$RCD\$ clamp requires the knowledge of the maximum peak current your controller can set up as well as the maximum voltage you tolerate across the MOSFET. I have presented all these equations in a seminar I taught at APEC in 2011, entitled The Dark Side of the Flyback Converter. The equations to determine the component values are there: The principle of operation is to create a low-impedance voltage source hooked to the upper high-voltage rail which will clamp the maximum excursion of the MOSFET at turn-off. However, be aware that the peak current can be much higher than the limit set by the controller, considering the propagation time \$t_{prop}\$. That duration is the time needed by the controller to effectively turn the MOSFET off when the current-sense pin has detected a maximum. Depending on the drive scheme, the MOSFET size, the input voltage and the primary inductance, there can be some significant overshoots destroying the MOSFET quite quickly. The capacitor value is more in the vicinity of 1 to 47 nF perhaps, as a very rough figure, while the resistance cannot be too low considering the dissipated power. People usually believe that the diode should be ultra-fast, but it is little known that the turn-on time of a 1N4007 nicely competes with that of a MUR160, for instance. It is the turn-off (recovery) time that is much longer, but this lazy diode is often used in \$RCD\$ clamps of cheap adapters below 30 W because it nicely damps the oscillations at turn-off and reduces radiated EMI. Also, surprisingly, the peak current going into the \$RCD\$ is often less than the power switch peak at the opening event. This is because part of the energy stored in the leakage inductance is used to charge the parasitic capacitance lumped at the drain until the diode conducts. By doing so, there is less current circulating in the \$RCD\$ network during the reset time. Adding a bit of capacitance across the drain-source of the MOSFET clearly helps with the \$RCD\$ power dissipation, as long as the saved power is not lost in switching losses because of too big a capacitor. Typical values of 47-100 pF are often seen in commercial adapters and they offer some snubbing advantage too.
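A sketch of how the data could be plugged into those two formulas is below. Several inputs are not in the question, so they are placeholders to be replaced with real numbers: the measured primary leakage inductance, the output voltage, and the chosen clamp ripple; the clamp voltage is derived from the 17 V drain limit and 12 V input using the first equation.

```c
#include <stdio.h>

int main(void)
{
    /* Values taken from the question */
    const double v_in     = 12.0;    /* V                        */
    const double f_c      = 50e3;    /* switching frequency, Hz  */
    const double v_ds_max = 17.0;    /* allowed drain voltage, V */
    const double i1_pk    = 2.1;     /* peak primary current, A  */
    const double n1_n2    = 1.0;     /* turns ratio              */

    /* Placeholders -- replace with your own numbers */
    const double v_out  = 3.3;       /* output voltage, V (assumed)             */
    const double l_leak = 1e-6;      /* primary leakage inductance, H (assumed) */
    const double dv_csn = 1.0;       /* chosen clamp ripple, V (assumed)        */

    /* From Vds_max = Vin + Vcsn + dVcsn/2 */
    double v_csn = v_ds_max - v_in - dv_csn / 2.0;

    /* Vcsn must stay above the reflected output voltage (n1/n2)*Vout,
     * otherwise the clamp conducts continuously. */
    double r_sn = 2.0 * v_csn * (v_csn - n1_n2 * v_out)
                  / (l_leak * f_c * i1_pk * i1_pk);
    double c_sn = v_csn / (r_sn * f_c * dv_csn);

    printf("Vcsn = %.2f V, Rsn = %.1f ohm, Csn = %.2f uF\n",
           v_csn, r_sn, c_sn * 1e6);
    return 0;
}
```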
H: Does an MCP23S17 output pin keep its value after being unselected? I'm designing an SPI-based communications circuit between a Raspberry Pi 4 B as master and (n x 8) GPIO expander modules as slaves. Concretely, we have selected an MCP23S17 GPIO expander: Datasheet For the moment I will test whether the system is able to write 0 or 1 to the GPIO outputs. After this, I will be testing reading pin values from GPIO inputs. I would like to know if someone has worked with this, and I will also ask about some things, for example: I access one of the expander modules and I write some output pins (1 or 0). After the write operation we access another expander module to carry out an operation like the previous one. What happens to the output pin values from the first operation? Does the expander device keep the previous pin states even while the expander module is not being accessed (unselected)? I have read the datasheet and application notes. I don't find any information there that makes me think the state is "erased" after being unselected. I have another question about something read in the datasheet: "each pin can be configured emulating open drain configuration" --> what does it mean, and what is this option useful for? From the information that I've seen, it's similar to using pull-up/down resistors, but handling more current. It can also be used for translating one signal level to another, between ICs. But when is it better than using only pull-up/down resistors? When can I take advantage of using a pin with this configuration? Any help will be appreciated. AI: Any GPIO expander will keep the state it is set to until it is set to a new state. It would be a pretty lousy expander if the pins changed to some other state while you access another device over the same bus. The open-drain emulation means that the IO pin can be set to push-pull output, where it outputs high or low, or it can be used like an open-drain output, where it never outputs high but can be set to output low or to a high-impedance state.
H: An equivalent resistance with a short simulate this circuit – Schematic created using CircuitLab This schematic is giving me trouble because of the connecting wire shown in red. What is the equivalent resistance of the circuit, as seen from the blue terminals? AI: Since R1 is short-circuited by that red wire, the equivalent resistance will be the parallel combination of R2, R3, R4, ...
H: Second order all-pass filter input impedance I have a question about the second-order all-pass filter input impedance when ω = ∞ and ω = 0, and I would appreciate it if you could help me to understand the logic. When ω = 0, all capacitors can be seen as open circuits, because we are at DC. In this case I expect Zin to be R3+R4, which would be 3 kOhms, because the voltage at the op-amp inputs would be 0 V. From the simulation I see that the input impedance is 1.5 kOhms. Why? When ω = ∞, capacitors can be seen as short circuits, and therefore the current will travel through C with no impedance, and R3, R4 are the only impedances that separate V1 from ground. In this case I do not understand why the input impedance is 3 kOhms. Thank you in advance! AI: At DC \$\omega = 0\$ (all capacitors can be seen as open circuit) we have this situation: simulate this circuit – Schematic created using CircuitLab And $$R_{IN_{DC}} = R_3 + R_4 = 3k\Omega $$ But at high frequency (\$\omega = ∞\$), when all capacitors can be seen as short circuits, we have this situation: simulate this circuit Therefore the input resistance is now equal to: $$R_{IN_{HF}} = \left[R_1 \times\left(1 + \frac{R_4}{R_3}\right)\right]||(R_3+R_4) = 3k\Omega||3k\Omega = 1.5k\Omega$$ Why? Because now the voltage across the \$R_1\$ resistor is no longer equal to \$V_{IN}\$ but to the difference between \$V_{IN}\$ and the op-amp output voltage. And the op-amp is working as a voltage follower, meaning that the op-amp output voltage is the same as the input voltage (at the non-inverting input). And the input voltage is the output voltage produced by the voltage divider built around \$R_3, R_4\$.
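A quick numeric check of the answer's two limits is sketched below; the individual resistor values are assumptions (R1 = R3 = R4 = 1.5 kΩ) chosen only so that the totals match the 3 kΩ and 1.5 kΩ figures quoted in the answer.

```c
#include <stdio.h>

static double parallel(double a, double b) { return a * b / (a + b); }

int main(void)
{
    /* Assumed values consistent with the answer's totals. */
    const double r1 = 1.5e3, r3 = 1.5e3, r4 = 1.5e3;

    /* DC: capacitors open, the source only sees R3 + R4. */
    double r_in_dc = r3 + r4;

    /* High frequency: capacitors short, the follower bootstraps R1,
     * so R1*(1 + R4/R3) appears in parallel with R3 + R4. */
    double r_in_hf = parallel(r1 * (1.0 + r4 / r3), r3 + r4);

    printf("Rin(DC) = %.1f kOhm, Rin(HF) = %.1f kOhm\n",
           r_in_dc / 1e3, r_in_hf / 1e3);
    return 0;
}
```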
H: Is a 2N3906 an appropriate replacement for a BC158(B) in an audio amp circuit? I'm on a long but educational journey repairing a vintage German stereo. I have identified a failed (shorted) PNP transistor in the power amp section. (Schematic with highlight is below.) The part on the actual board is a BC158B; the schematic specifies only "BC158". The only difference between the BC158 and the B variant is the hFE gain value (higher in the B). Here's a datasheet showing the BC158 series. I have asked before here and gotten good advice on how to think about transistor equivalence in audio when going from vintage euro parts to current US replacements. It appears to me that a standard general-purpose 2N3906 would be appropriate relative to the base BC158 (it has the same or higher max voltage, higher max current, and similar hFE). I'm not knowledgeable enough to look at the circuit and see why the schematic shows the BC158 while the actual part used was a BC158B -- they appear likely to differ in amplification, but maybe (presumably?) it doesn't matter here. Can anyone offer a theory for why the original part used could be a higher-gain model without messing anything up? Do I need to seek out a higher-gain replacement part? I'll use a 2N3906 if it would seem to work. If not, there are some random parts I scrounged up on Mouser based on the filters that seem like they might be appropriate, but for those I'm casting around in the dark a bit. Thanks! Schematic -- T601 is the one that needs replacing: AI: This transistor is biased at a DC collector current of around 1-2 mA. Suppose hFE is at the low end, 125... then the DC base current is about 9.7 uA. Suppose hFE is at the high end, 500... then the DC base current is about 2.4 uA. This base current flows through a base-bias equivalent resistance of about 85k ohms. So at one hFE extreme compared to the other, the DC bias would change by only 0.625 V at T601. That's trivial compared to the 19 V at the transistor's emitter. Since the open-loop gain is very much higher than the closed-loop gain, almost all transistor non-linearities are removed. So gain variations are removed too, because the overall amplifier gain is set by the resistor ratio R608/R607. T601 is not pushed anywhere near its operating envelope limits. The 2N3906 is a decent substitute in this circuit.
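The answer's bias-shift estimate can be reproduced with a few lines; the ~1.2 mA collector current is inferred from the base-current figures quoted (it is not stated explicitly), and the 85 kΩ is the answer's equivalent base-bias resistance.

```c
#include <stdio.h>

int main(void)
{
    const double ic     = 1.2e-3;  /* collector bias current, A (inferred)  */
    const double hfe_lo = 125.0;   /* BC158 low-gain extreme                */
    const double hfe_hi = 500.0;   /* BC158B-ish high-gain extreme          */
    const double r_bias = 85e3;    /* equivalent base-bias resistance, ohms */

    double ib_lo = ic / hfe_lo;    /* ~9.6 uA */
    double ib_hi = ic / hfe_hi;    /* ~2.4 uA */

    /* Shift in base-node voltage between the two gain extremes. */
    double dv = (ib_lo - ib_hi) * r_bias;

    printf("Ib: %.1f uA vs %.1f uA -> bias shift ~ %.2f V (vs ~19 V at the emitter)\n",
           ib_lo * 1e6, ib_hi * 1e6, dv);
    return 0;
}
```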
H: Hardware-software synchronization with the clock in a microcontroller From what I understand, there is a clock in a microcontroller, and every little task is executed at a clock pulse. Can two or more actions be performed simultaneously (in the same clock pulse)? For example, assume two lines of code. In line 1 I send some data to the UART data register (assume assembly code so there is a 1-1 correspondence between clock and command; it is not the point here). Now, what happens on the next clock? Is line 2 executed and the first bit of the data register moved to the FIFO at the same clock pulse? Is line 2 executed and all 8 bits moved from the data register to the FIFO at the same clock pulse? Or is execution of line 2 postponed for some clock pulses until all the data are in the FIFO? AI: From what I understand, there is a clock in a microcontroller, and every little task is executed at a clock pulse. At least one clock. Modern microcontrollers may have more than one, and numerous clocks derived from the one master clock. Question: Can two or more actions be performed simultaneously (in the same clock pulse)? Can they? Yes. Are they? Often, but not always. For example, assume two lines of code. In line 1 I send some data to the UART data register (assume assembly code so there is a 1-1 correspondence between clock and command; it is not the point here). Now, what happens on the next clock? Slow down! If your "lines of code" are in C, Python, or some other higher-level language, then each line of code usually represents multiple machine instructions. One line of code could compile to anything from zero to hundreds of machine instructions. Modern processors (like the ARM Cortex cores) are pipelined, so you can have multiple instructions in play at once (do a web search). So what happens on the "next clock" is generally that the next machine instruction generated by the compiler from that line of code (or kicked off by the interpreter, if it's Python or similar) executes. Eventually that line finishes. Is line 2 executed and the first bit of the data register moved to the FIFO at the same clock pulse? Is line 2 executed and all 8 bits moved from the data register to the FIFO at the same clock pulse? In that particular processor, the core "thinks" it's just writing out to memory. The UART is connected in a manner called "memory mapped", which means that all the UART registers appear to be plain old memory -- it's up to you to tell the compiler that they're not (in C you'd use various directives like volatile and const; if you're using a vendor's library that ought to be taken care of for you). When you write a word to the UART's transmit register, the UART takes care of the rest, independently of the core. When it's done, there are various mechanisms for notifying the core (the simplest and most intrusive being that you poll a bit in your code; the hardest and least intrusive being that you set up the UART and your code so that it interrupts the core when a byte has finished transmitting). Note that an alternative method for doing this job involves the processor "manually" sending out bits on a general-purpose I/O pin. This is called "bit banging"; it uses a lot of processor resources for the intelligence used, but it can make sense if you have a teeny processor doing an even teenier job.
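A minimal sketch of what "memory-mapped with a polled status flag" looks like in C is below; the register addresses, bit position and names are entirely hypothetical placeholders, not those of any particular UART.

```c
#include <stdint.h>

/* Hypothetical memory-mapped UART registers -- addresses and bit layout are
 * made up for illustration; a real part's reference manual defines them. */
#define UART_DATA    (*(volatile uint32_t *)0x40001000u)  /* transmit data     */
#define UART_STATUS  (*(volatile uint32_t *)0x40001004u)  /* status flags      */
#define UART_TX_BUSY (1u << 0)                            /* shifting out bits */

static void uart_send_byte(uint8_t byte)
{
    /* Writing the data register looks like an ordinary memory store to the
     * core; the UART hardware then shifts the bits out on its own clock. */
    UART_DATA = byte;

    /* Polling: the CPU keeps executing instructions (this loop) while the
     * UART works in parallel, and only moves on once the flag clears. */
    while (UART_STATUS & UART_TX_BUSY) {
        /* busy-wait; an interrupt-driven design would do useful work here */
    }
}
```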
H: Design of class AB output stage I'm trying to design an audio amplifier output stage to drive a speaker load. I'd like to start with the design of this stage in order to calculate the input impedance of the output stage, so that I can proceed to designing a buffer stage with an appropriate output impedance, and a gain stage before that. My speaker impedance is 8 Ω and I'd like to deliver a minimum of 1 W to the load and keep total harmonic distortion below 1%. One possible class AB output stage configuration is given below. I looked at a few different configurations, but this one is nice in that it doesn't require a constant current source for the biasing, which makes it simpler: I first began by calculating the peak voltage over the load: \$Vp = \sqrt{2R_LP_l} = \sqrt{2(8)(1)}=4\ \textrm{V}\$ From this I calculated the maximum output current through the load to be 0.5 A using Ohm's law, assuming R3 and R4 to be 0 for the time being (I can add these values back if I notice any thermal runaway effects later). Since the voltage gain of this output stage is approximately unity, the output current, which I calculated as 0.5 A, is related to the input current through the current gain given by: $$A_i = \frac{i_o}{i_i}=\frac{(1+\beta)R}{2R_L}$$ The textbook I'm using has made the assumption that all of the transistors, NPN and PNP, are perfectly matched when they derive the current gain. This is obviously not true in reality. The 2N3904 NPN transistors and the 2N3906 transistors I'd like to use have very different current gains. How can I solve for my bias resistors R1 and R2 such that I keep the total harmonic distortion low, while still supplying the output current that satisfies the power requirement? What is a more practical/real-world design approach for designing this output stage? I find that the theory in the textbook is often quite useless when it comes to designing real applications. AI: Thumbs up - discretes-based audio amps used to be my favourite toy... Size doesn't matter, the fun comes from trying to understand how these things work. Looking back, this is where I cut my teeth on feedback control :-) Class D (PWM) takes all the analog fun out of the journey (but still leaves you with some mildly annoying analog side effects). Speaking of class AB, there are hardly any simple solutions / low-hanging fruit / shortcuts. If you use plain resistors to drive the transistor bases in the output stage (common emitter followers), the power transistors will run out of base drive current just where it's needed most (close to the power rails). Constant current sources work better - they're still not perfect though, because they do not pull all the way rail-to-rail... If you'd like to see a more complex and proven class AB output stage (still not quite rail to rail), google DPA220. While now historical, in its day it was a fairly decent design all around. Namely, the adjustable biasing stage just before the output power totem contains two transistors that were thermocoupled to the main heatsink (if memory serves) to achieve a negative dependency of quiescent current on temperature...
H: Why don't I need a resistance when testing a light bulb circuit in a breadboard? I'm making some simulations in Tinkercad and trying different circuits to turn on a light bulb. Why does an LED need some resistance and a lightbulb does not? I'm only using a breadboard and a 9V battery. AI: With LED's, a small increase in voltage will result in a large increase in current. So it is really hard to get just the right voltage to keep an LED at the right brightness. If you let the voltage just get a tiny bit too high it may destroy the LED. What makes it even harder is that as the LED gets hot, the current will also increase. Naturally when you power it up it will tend to get hot. As a result it is just too much trouble to drive an LED with a voltage. Some form of current limiting usually has to be put in place. It doesn't have to be a resistor, but that is probably the most simple way to do it. LED light bulbs have circuitry integrated into them that overcomes all these problems. Old-fashioned incandescent light bulbs (including halogen bulbs) are different. The part that lights up is made from a thin tungsten wire that glows when it gets hot. The wire has resistance which limits the current automatically. This resistance is also what causes it to heat up. And, icing on the cake, the resistance goes up with temperature, so incandescent light bulbs are really stable when powered from a voltage source.
H: Replace everything in the circuit except the 3 A source by its Thevenin equivalent circuit and use the result to find V1 Replace everything in the circuit except the 3 A source by its Thevenin equivalent circuit and use the result to find V1. This is my solution, but the teacher said it is wrong and I do not know how to fix it. AI: In the original schematic, is that a 24V source or a 24A source? The symbol does not match the label. I would interpret that symbol as a current source, but it is possible that someone might use it as a voltage source, considering you have a different symbol for the 3A current source. Some people use an arrow to represent a rise in potential between two points on a schematic, but I've never seen someone try to stick the arrow into a component symbol and call it a voltage source, because it is confusing. You interpreted it as a current source in your work, but if it is supposed to be a voltage source then the mistake is right there. Not that it's your fault. That's a really bad symbol.
H: How are the neat sinc function shaped carriers produced in OFDM? I am learning about OFDM and QAM. Every explanation I have read illustrates how the separate carriers are sinc functions that are arranged to overlap in a nice orthogonal way so as to cancel out interference from neighbouring carriers. The part I don't understand is how it is possible to generate these neat, perfectly overlapping sinc functions when they contain constantly changing symbols. Each transition from one symbol to the next is, in the time domain, a random change in phase and amplitude. This must correspond to a random change in the sideband pattern of the carrier in the frequency domain, and therefore disrupt the orthogonal arrangement. AI: These sinc functions, as you've noticed, have zeros at a spacing equal to the subcarrier spacing \$\Delta f\$. Remember how these sincs come to be (the texts you've been reading most definitely mention that!): The sinc function is the Fourier transform of the rectangle function. Scaled to yield zeros in the frequency domain every \$\Delta f\$, the width \$T\$ of that rectangle must be \$T=\frac1{\Delta f}\$. So, that answers your question: all your sincs are just the result of having a rectangle in the time domain, and multiplying it by \$e^{j2\pi\frac{n\cdot\Delta f}{f_\text{sample}}t}\$, so as to shift it in frequency to yield the \$n\$th subcarrier. The QAM symbol is just a complex factor you multiply the result with – that is just a constant factor and doesn't change the shape, neither in the time nor the frequency domain. Now, what's \$\Delta f\$, when you think about it? In OFDM, you use the \$N\$-point DFT to divide your Nyquist bandwidth (complex!) \$f_\text{sample}\$ into \$N\$ equally large subcarriers, so \$\Delta f = \frac{f_\text{sample}}{N}\$. Therefore, the width of the rectangle is \$T=\frac1{\Delta f}=N\cdot\frac{1}{f_\text{sample}} = N\cdot T_\text{sample}\$. That very simply means that the sinc shapes are just the effect of turning on a (complex) oscillation of frequency \$n\cdot\frac{f_\text{sample}}N\$ for exactly \$N\$ samples. Each transition from 1 symbol to the next, ... Such a transition simply doesn't happen within one OFDM symbol: for the duration of one of these rectangles, the symbol on each subcarrier is constant. So, you use \$N\$ samples to send a single symbol, but you gain the ability to send \$N\$ symbols at once. So, nothing lost, nothing gained here!
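A tiny sketch of this view of OFDM is below: each subcarrier is one constant QAM value multiplied by a complex oscillation that is simply switched on for N samples (the rectangle), and the transmitted symbol is their sum, i.e. an inverse DFT. N and the QPSK mapping are arbitrary illustrative choices.

```c
#include <stdio.h>
#include <math.h>
#include <complex.h>

#define N 64   /* DFT size = samples per OFDM symbol (illustrative) */

int main(void)
{
    const double pi = acos(-1.0);
    double complex qam[N];   /* one constant QAM value per subcarrier */
    double complex x[N];     /* resulting time-domain OFDM symbol     */

    /* Arbitrary QPSK points: the data stays fixed for the whole symbol. */
    for (int k = 0; k < N; k++) {
        double re = (k % 2) ? 1.0 : -1.0;
        double im = (k % 3) ? 1.0 : -1.0;
        qam[k] = (re + im * I) / sqrt(2.0);
    }

    /* Inverse DFT: x[n] = (1/N) * sum_k qam[k] * exp(j*2*pi*k*n/N).
     * Each term is "a rectangle N samples long" times an oscillation at
     * k * delta_f, so its spectrum is a sinc centred on subcarrier k. */
    for (int n = 0; n < N; n++) {
        double complex acc = 0.0;
        for (int k = 0; k < N; k++)
            acc += qam[k] * cexp(I * 2.0 * pi * k * n / N);
        x[n] = acc / N;
        printf("%2d  %+8.5f %+8.5f j\n", n, creal(x[n]), cimag(x[n]));
    }
    return 0;
}
```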
H: I don't understand the outcome of the solution for a circuit with two voltage sources and an inductor I have solved this circuit problem and I don't really understand the result in terms of power for the two sources. Source one is clearly generating real and reactive power. Source two is consuming, based on the current going out of the source; it is consuming reactive power. The results show that the net power in the circuit is almost zero. But the reactive power is not zero: the reactive power consumed is way more than that generated, and that does not make sense. Is there a problem with my math? Thanks in advance for your help. AI: First, you drew your current I with a reference direction to the right. So this current I is (V1-V2)/j5. That will correct your phase angle. Second, to calculate the power quantities coming from source V2 you need to use -I, which is just I with its angle shifted by +/- 180 degrees. See if that clears it up for you.
H: Why Wouldn't I Buy A Diode With The Highest Reverse Voltage? I'm only just beginning to read about diodes. I'm looking at the 1N400x series, where all specs are the same except the reverse voltage (link: https://www.mouser.com/ProductDetail/Diodes-Incorporated/1N4003-T?qs=rGAXPo9uwV0nfQ36LZW%252BLg%3D%3D for example). The reverse voltage increases as x increases; all else is the same (even price). From what I understand, the reverse voltage is the maximum voltage drop the diode can have (in reverse) before it fails. My question is, is there a reason I shouldn't just buy the highest reverse voltage I can find? Is the 1N4001 not the same in every way as the 1N4003, except that the 1N4003 can work with higher voltages without failing? AI: You can see my answer linked by DKNguyen about the real differences based on voltage. Realistically, for mains-frequency normal applications you may as well buy the 1N4007 or the SMT equivalents, as the price difference is pretty small, at least for non-consumer quantities. In a million quantity the difference between a 1N4002 and a 1N4007 might be a total of $1,000, so it's worth spec'ing the cheaper one if it works and nobody wants to throw away a kilobuck. The situation is rather different for Schottky diodes - a higher rated voltage generally leads to a higher Vf at a given current. Compare the differences between the 1N5817, 1N5818 and 1N5819, rated at 20, 30 or 40V. Vf at 1A is max 0.45, 0.55 and 0.6V. So using a 1N5819 where a 1N5817 will do means about 1/3 more power dissipation.
H: Assortment of electrical components for a beginner As a newbie in electronics, when building projects with an Arduino, sensors, etc. on a PCB I often have a problem with electrical components like capacitors, resistors, diodes, switches, etc. Different projects need different components in different values. I am looking for something like a minimal set, or a most-needed component catalog/assortment for beginners (e.g. capacitors 1 pF ~ 10 µF, resistors 1 ohm ~ 10 M ohm, etc.) which I can order from eBay in larger amounts in advance, in order not to buy them one by one with higher costs and waiting times. Questions: What is the most needed component catalog/assortment for beginners (capacitors, resistors, etc.)? What are the most commonly used values (not sure, e.g. for capacitors maybe 1 pF ~ 10 µF)? So, I need a list of components for beginners with the most commonly used values (which I can order in advance). AI: As a former beginner myself (and still a beginner in many ways) I could write this to-buy list: Resistors: 100 ohm, 1K, 4.7K, 10K, 22K, 47K, 100K, 1M. Capacitors: 10pF, 100pF, 1000pF, 0.01µF, 0.1µF, 1µF, 10µF, 100µF Schottky diodes: single, common anode, common cathode N-MOSFET, P-MOSFET SOD-323 Logic gates (AND, NAND etc.) Schmitt trigger Some indicator LEDs, red, green, blue Zener diodes: 3.3V, 5V, 12V Terminal connectors, headers male and female, single-strand wires, glass fuse holder etc. 3V and 5V voltage regulators (LDO and non-LDO) (10 pcs each) Rotary or sliding potentiometers 1K, 10K and 100K (10 pcs each) Tact switches, DIP switches, slide switches. Buy them by 100 or 200 pcs, both SMD and THT. Better to buy electronic components from reputable vendors like Digikey or Mouser. Terminal blocks and other hardware can be bought much more cheaply anywhere else.
H: AC coupling of a transistor So we assume that proper bias currents are flowing in both transistors. What is claimed by books is that capacitive coupling will not change the bias parameters. But we know that a capacitor will charge and hence develop a voltage across it. Now this capacitor voltage is applied back to the transistors; in our case the voltage will be applied to the collector of the first and the base of the second transistor, and hence will change the bias parameters! What is wrong with my reasoning? Thank you in advance. AI: To find an answer to your question, you should draw the bias circuits. Then you will see that, after the power is turned on, the capacitor will begin charging and, until it is fully charged, it will indeed affect the output voltage Vc1 of the first stage (Q1) and the input voltage Vb2 of the second stage (Q2). It is preferable for the input voltage to be zero until this initial process finishes. Also, it would be very useful to draw the path where the current flows. After the capacitor is charged, the current through it becomes zero and the capacitor stops affecting the voltages. Figuratively speaking, it has copied the difference Vc1 - Vb2 = VC across itself, so you can think of it as a charged "rechargeable battery". You probably understand why the capacitor cannot have too high a capacitance - since during this initial transition process, the amplifier does not work correctly. Now increase the input voltage and you will see how the capacitor conveys Q1's collector voltage variations to Q2's base, since the voltage across the capacitor cannot change significantly. Again you can use a metaphor to visualise this "AC voltage shifting" technique. Think of the capacitor as a stiff bar firmly joining Q1's collector and Q2's base. When Q1's collector "moves", it "pulls" Q2's base up and down...
H: Electrical Motors and Frequency I was musing about electric motors after seeing one randomly on TV, and there's something I must be remembering incorrectly. It's just a thought experiment, so I'm imagining some theoretical DC electric motor, with a constant RMS current/voltage feeding in, and some constant load on it causing it to spin at some constant rpm and produce some constant torque. Since voltage x current = electrical power = motor's mechanical power, that's the cap on how much you can get out of the machine. But I remember that the greater the difference in rotational speeds between the rotor and stator fields (slip?), the more force/torque will be exerted on the rotor. So in the hypothetical, if the motor is doing its thing and is stable with the load, and you suddenly increased the frequency of the input voltage/current, that in my mind would result in suddenly increasing the difference in relative rotational speeds of the stator/rotor, meaning the rotor would feel more force/torque on it, which would make it want to speed up and catch up to the stator field, speeding up the load. But only the frequency changed and not how much electrical energy is entering the device, so there's no way that's possible! So I MUST be wrong, I just don't see where the error in my thinking is... What's wrong with my thinking, and what would actually happen if you increased the frequency like that? (I'm not an electrical person by trade, so there's a limit to my knowledge of concepts/terminology.) [EDIT: I failed to mention that by "frequency", I'm referring to how DC motors can be controlled with pulse width modulation, and those pulses go in at some frequency. Otherwise it wouldn't make sense to talk about "frequency" with DC motors] AI: The type of problem you are talking about might be explained by an induction motor. In a DC motor, increasing the number of pulses per second (with the same pulse width) only increases the average voltage, which makes the motor enter a transient state until a new steady state is reached (if one is achievable). If a steady state is reached that satisfies both the load and the motor torque, the motor will run at a new speed, and hence the frequency (the actual frequency of the internal AC currents and voltages, not the PWM frequency) may increase. As for an induction motor, when you increase the supply frequency there are multiple factors (load characteristics and motor characteristics at that frequency) that govern what the new steady state will be, and all of them are in agreement with conservation of energy. If you keep the voltage constant, the current is governed by what the motor load requires; under normal operation the motor will draw more current if the load speed is increased at the same torque. Things start getting a little tricky if you change parameters such that the input power remains constant (and that is not done by just changing the frequency): then, to increase the speed, the torque has to decrease, and the new steady state (if the load allows it) has the same torque-speed product as before. It's really good to see people trying to think deeply about motors. Normally people are just interested in the applications. You should study the topic seriously if you like it - it's really engaging. Thank you for taking the time to read the answer.
H: How to use MOSFET as an active load resistor? In the Albert Malvino's "Electronic Principle" book, an active load switch is done by using a MOSFET as an active resistor by connecting the gate to drain(two-terminal device)(figure a) Two terminal MOSFET act as a pretty big resistor (15V/3mA=5kohm, 10V/1.6mA=6.25kohm and etc.).(figure C ) BUT when I tried to plot in proteus the same graph using a 2n7000 EMOSFET: The current is too big, thus the resulting resistance is too low that's why it cannot act as a resistor to properly saturate my circuit. (5V/280mA=18ohm, 10V/1.65A=6ohm, 15V/3.6A=4ohm) RESISTANCE TOO LOW!!! My circuit diagram is just the same as figure a. Why does the 2n7000 cannot act as an active load in my simulation? Or are there only a specific MOSFET that can act as an active resistor? AI: You probably missed a small but crucial detail in this schematic. The crucial detail is that Q2 needs to have a larger W/L than Q1. In the books example the ratios of the geometries (W/L) is about: 5 kohm / 667 ohm = 7.5 So the W/L of Q2 needs to be 7.5 times larger than the W/L of Q1. So for example if Q2 = 100um/1um then Q1 = 750um/1um. In that way the voltage (\$V_{GS}\$) to current (\$I_D\$) relation will be different for Q1 and Q2. You could say that Q2 will be "more sensitive". Also this circuit is usually only used on a chip because the only way that you can get a predictable behavior from this circuit is when the transistors have identical behavior (except from the W/L ratios between them). Also the transistors need to have nearly identical temperatures. So in practice these conditions can only be met on a chip. Another issue is that on a chip you can make a MOSFET in almost any size and geometry. You cannot do that when using discrete components like a 2N7000. The 2N7000 is an NMOS designed for switching purposes and it will have a very large W/L so that \$R_{DS,on}\$ becomes small. You can get around the W/L ratio that you need by connecting MOSFETs in parallel, then you essentially double W but L remains the same. But as then \$R_{DS,on}\$ will go down even further, this will not help. As a consequence it is extremely difficult to make a 2N7000 operate such that it has an \$R_{DS,on}\$ of a few hundred ohms or even a couple of kilo ohms. The 2N7000 simply isn't made for that. So that's why you get very large currents and the MOSFET will heat up because of that. The only way I can see this work using discrete MOSFETs is if you use MOSFETs that are not designed for switching but instead are designed to amplify signals. There are some MOSFETs for amplification of RF signals. However these are rare and not easy to use. Another option is to use a MOSFET array, that's a chip with some MOSFETs on it. One example is the HEF4007 but it is limited in what you can do with it due to the way that the MOSFETs are connected. If you want to experiment with this "MOSFET as an active load" circuit then I recommend that you start using a circuit simulator (Like LTSPice or QUCS) as there you can use a "generic MOSFET" and freely choose W/L.
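To see why the geometry dominates here, below is a rough sketch using the simple square-law model ID = K(VGS - Vth)^2 with the gate tied to the drain (so VGS = VDS). The K and Vth values are illustrative assumptions only - one is scaled to roughly reproduce the book's ~5 kOhm device, the other is scaled like a low-RDS(on) switching FET - they are not datasheet figures for any real part.

    # Minimal sketch (Python): diode-connected MOSFET current with the square-law
    # model ID = K*(VGS - Vth)**2. K and Vth below are assumed, illustrative values.
    def diode_connected_id(v, k, vth):
        """Drain current of a diode-connected MOSFET at supply voltage v (VGS = VDS = v)."""
        return 0.0 if v <= vth else k * (v - vth) ** 2

    for v in (5.0, 10.0, 15.0):
        id_small = diode_connected_id(v, k=18e-6, vth=2.0)   # small W/L, "active resistor" sized device
        id_switch = diode_connected_id(v, k=20e-3, vth=2.0)  # large W/L, switching-FET sized device
        print(f"V = {v:4.1f} V: small W/L -> {id_small*1e3:6.2f} mA "
              f"({v/id_small:7.0f} ohm), switching FET -> {id_switch*1e3:8.1f} mA "
              f"({v/id_switch:5.1f} ohm)")

The small-geometry device lands in the milliamp/kilo-ohm region like the book's figure, while the large-W/L device draws amps, which is essentially what the 2N7000 simulation showed.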
H: Isolation Capacitance (Cio) How I can find a practical value of CIO barrier capacitance (primary to secondary side) in digital isolators? AI: Actually i want to measure it in a practical way to verify the value in the datasheet. How and which equipment i should use to measure the isolation capacitance of digital isolator? Use an oscilloscope with known input capacitance (that is quite a common thing to know for a decent oscilloscope) and place the device under test in series with the input to the o-scope and feed an input signal to isolated port of the device and compare the AC amplitudes of input and output and get a decent measure for the attenuation: - Use a frequency of around 1 MHz so that the scope's input capacitance dominates the input resistance (usually 1 Mohm or 10 Mohm). Basically you are forming a potential divider and using input and output amplitudes to calculate the "unknown" device capacitance.
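If it helps to see the arithmetic, here is a sketch of the capacitive-divider calculation implied above. It assumes the test frequency is high enough that the scope's 1 Mohm input resistance can be ignored, so only Cio (in series) and the scope's input capacitance (shunting the input) matter; the voltage readings below are made-up examples, not real measurements.

    # Rough sketch (Python) of the capacitive-divider arithmetic.
    c_scope = 15e-12          # scope input capacitance from its datasheet (assumed 15 pF)
    v_in = 1.0                # AC amplitude driven into the isolated pin (V)
    v_out = 0.08              # AC amplitude seen on the scope (V)

    a = v_out / v_in                      # attenuation of the C-divider: a = Cio/(Cio + C_scope)
    c_io = c_scope * a / (1.0 - a)        # solve for the unknown barrier capacitance
    print(f"Estimated barrier capacitance Cio = {c_io*1e12:.2f} pF")

With those example numbers the estimate comes out around 1.3 pF, which is the order of magnitude typically quoted for digital isolators.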
H: Are there any functional differences between xc16 and C30? I've recently installed MPLAB X (free version) and tried to build an existing project which uses C30. C30 seems to have been replaced with xc16, which does indeed build the project, but the resulting .hex file cannot be flashed onto a dspic30f6013a. Where can I get C30? The error message shown when updating (by ds30Loader) was that the bootloader would be overwritten. AI: The XC16 toolchain is the upgrade for the C30 toolchain. The C24 and dsPIC toolchains are subsets of the C30. Have a look here. The difference between XC16 and C30 is quite small. Maybe your controller is not supported by the C30 compiler. For a download of old compilers have a look here. The C30 was renamed to PIC C Compiler for PIC24.
H: Power In= Power Out in transformers In transformers, we prove that the induced voltage per turn is the same in both the primary and the secondary coils of the transformer, i.e. (Ep/Np) = (Es/Ns). But when it comes to currents, we show that (Np x Ip = - Ns x Is), which should yield (Ep x Ip = - Es x Is). This equation seems to say that power in is opposite to power out, or Pin = -Pout. Besides this, we say that the power supplied to the primary coil is equal to the power supplied to the secondary coil. What did I miss from this concept of power transfer in transformers? How should I understand or explain the existence of the minus sign in the currents and the equality of the power in and out? AI: But when it comes to currents, we show that \$Np \times Ip = -Ns \times Is\$ Yes, that relationship is true: - But it doesn't mean that the power out to the load is negative (see the above picture). It just means that as far as the transformer is concerned, the current is flowing outward with respect to the primary winding. The 1 ohm load still receives positive power as if it were fed from a voltage source. The secondary produces a current opposite to the primary (in that the current is flowing away from the transformer) but it is still flowing in the correct direction for positive power into the resistor load. Besides this, we say that power supplied to the primary coil is equal to the power supplied to the secondary coil. An ideal transformer doesn't deal with watts; it deals with volts and amps, so we talk about \$VA\$ and that makes it clearer in my book. Power is supplied to the load; it isn't supplied to the secondary and it isn't really supplied to the primary. Both primary and secondary deal in volts and amps. That's why we talk about transformers having a \$VA\$ rating.
H: Is it possible to replace the ref pin of AD620A with a fixed voltage? I am trying to implement the below circuit (from) without ADuC842, I will use Arduino mega2560 ADC. the circuit generates -5V to 5V rail to rail voltages for ad620a and they control ref voltage via DAC to adjust offset dynamically. from the link, To avoid output saturation, the usable gain is limited by the output swing and the maximum input voltage to the IA. With a +-5 V power supply, the output swing of the AD620 is about +-3.8 V; and the maximum input is +-5 mV plus a variable normal-mode dc offset of up to +-300 mV, allowing a maximum gain of 12.45. Here, the gain is conservatively set to 8 (+-1%), using RG = 8.45 kOhm. I need to change how Ref pin voltage is supplied, I have the following doubts? if the voltage at ref pin is 5V the out swing between 0-5? how can I get the output of ad620 to swing between 0-5V (pin 6 ad620) what voltage should I apply to ref pin dummy doubts is it possible to measure the negative voltage from the microcontroller ADC? can aduc842 generate a negative voltage from DAC? AI: Is it possible to replace the ref pin of AD620A with a fixed voltage? Probably not is the simple answer. The DAC output (under software control) will examine the ADC digital output and make an estimate of whether the signal it is receiving needs slightly lowering or slightly raising in its DC offset level. It does this to avoid signal clipping. Clipping the signal means that the top or bottom section of the signal is "out of range" and causing the ADC to end-stop. This needs to be avoided to achieve maximum dynamic range and so, the DAC will constantly tweak the offset to centralize the signal fed to the ADC. If your proposed MCU doesn't have a DAC output, you should be able to make an effective DAC output using PWM and a low pass filter. This can then be used to feed pin 6 of the AD620.
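If your MCU has no DAC, the PWM-plus-RC-filter approach mentioned above can be sized roughly as in the sketch below. The PWM frequency, resistor and capacitor values are assumptions chosen only for illustration; the ripple estimate keeps just the PWM fundamental (higher harmonics are attenuated even more by the single-pole filter).

    # Minimal sketch (Python) for sizing a PWM "DAC" RC low-pass filter.
    import math

    f_pwm = 31.25e3      # PWM frequency (Hz), assumed
    r = 10e3             # filter resistor (ohm), assumed
    c = 1e-6             # filter capacitor (F), assumed
    vcc = 5.0            # PWM high level (V)
    duty = 0.5           # 50% duty is the worst case for ripple

    f_c = 1.0 / (2 * math.pi * r * c)                                   # filter corner frequency
    # peak-to-peak ripple estimate from the PWM fundamental only
    ripple_pp = (4 * vcc / math.pi) * math.sin(math.pi * duty) / math.sqrt(1 + (f_pwm / f_c) ** 2)
    print(f"corner = {f_c:.1f} Hz, settling ~ {5*r*c*1e3:.0f} ms, ripple ~ {ripple_pp*1e3:.1f} mVpp")

With these assumed values the ripple is a few millivolts with roughly 50 ms settling. Note that the AD620 REF pin should ideally see a low source impedance, so in practice you would buffer the filtered output with an op-amp rather than drive pin 5 straight from the RC node.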
H: Can FPGAs be emulated on a CPU? Can FPGAs be emulated on general purpose computers? As per Church-Turing thesis all Turing machines can be emulated on the universal Turing machine. So if the FPGA can emulate processors, can the processors emulate FPGAs? I am asking this question so I could know if instead of using FPGAs for Digital ASIC emulation, can we use a computer instead? For this question, I am considering only the digital applications of FPGAs. AI: They can be simulated, but not in real time. There's no way that a sequential computer can truly emulate the parallelism going on in most FPGA designs. Of course, there are exceptions — the FPGA design could be a general-purpose computer! In theory, a GPU could be used to make a simulation run faster, but in fact, the kinds of things that GPUs can accelerate are not a good match for general-purpose logic simulation, so as far as I know, no one has put any significant effort into making it work robustly. Instead, advanced simulators take advantage of multiple CPU threads when they are available. Still nowhere near real-time, however.
H: How can thermal electrical noise be uncorrelated yet appearing at a certain frequency? Say we look at the thermal noise spectrum of an ideal resistor. Now we know it will behave like $$ \langle U^2 \rangle = 4 \, k_B \, T \,R \,\Delta f $$ We also know that two subsequent voltages are uncorrelated: $$ \langle U(t) U(t+\tau)\rangle = c\, \delta(\tau) $$ where c is a proportional constant. Now my question is: How are these two relations compatible with each other? Phrased in words of the experimentalist, what causes a certain \$ U^2 \$ to appear at frequency bin \$f\$? Shouldn't it have some component \$ ~\propto e^{iwt} \$? AI: If you convolve your U(t) with \$e^{i\omega t}\$ then you'll typically get a non-zero result. This means there is typically some component at \$\omega\$. Note that if you convolve a delta function \$\delta (t)\$ with \$e^{i\omega t}\$, you'll also get a non-zero result, for all values of \$\omega\$. The difference between noise and a delta function is the variance in the measured power. With a delta function, the spectrum is flat and consistent. With noise, the spectrum is only flat and consistent after sufficient averaging of multiple convolution results.
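A quick numerical sketch of that last point: a single periodogram of simulated white noise fluctuates strongly from bin to bin, while the average of many periodograms is nearly flat. The record length and number of averages below are arbitrary choices made just for the illustration.

    # Sketch (Python/NumPy): one periodogram of white noise vs. the average of many.
    import numpy as np
    rng = np.random.default_rng(0)
    n, n_avg = 1024, 200
    single = np.abs(np.fft.rfft(rng.standard_normal(n)))**2 / n
    averaged = np.mean(
        [np.abs(np.fft.rfft(rng.standard_normal(n)))**2 / n for _ in range(n_avg)],
        axis=0)
    for name, p in (("single", single), ("averaged", averaged)):
        print(f"{name:8s}: mean = {p[1:].mean():.2f}, relative std dev = {p[1:].std()/p[1:].mean():.2f}")

The mean level is the same in both cases (the spectrum is flat on average), but the bin-to-bin spread shrinks roughly as one over the square root of the number of averages.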
H: The choice of method for constructing hardware monitoring and diagnostics created a weight scale device. I chose the software method for diagnosing my system. How can I implement it? X1 is a load cell 5000 grams DA2 HX711 D1 Atmeaga328 D2 shift register H1 lcd display What is the best way to test these items before each run? How to test the system every time before starting. AI: As others will tell you, this is not a "how do I do something" site. If you have a problem, you need to make an effort to solve it yourself, then ask specific questions pertaining to your proposed solution.
H: New soldering iron turns black as soon as it reaches 350C I can't even tin it as it turns black before I reach melting temperature. I tried 6 different tips and the result is the same - it turns black and solder doesn't stick. I've no idea what to do at this point; it also smokes when it reaches about 350C. Is it just a broken soldering iron and I need to buy a better one, or do I not understand something? I only tried eutectic solder; 5 tips were from the same source, 1 from another. I don't know the brands. The soldering iron is one of those cheap Chinese variable-temperature ones. Here's a photo of it. AI: I can't even tin it as it turns black before I reach melting temperature. What kind of solder is that?! The highest melting point of the eutectic solder I have ever seen was around 300°C. Most of the commonly used solders melt way under 250°C. No wonder your iron tip turns black. This is not to say there are no solders that need higher temperature, only that it is highly unlikely you'd ever need one. Another possible reason for the tip turning black is using plumbing solder or flux.
H: Can I "sum" internal ADC channels to achieve extra bits resolution? First of all I am a newbie in electronics. I am building some weight scale applications using (as seen everywhere) load cells wired to a HX711 ADC and a ESP8266. I am now giving a try to the Texas Instruments CC1312R MCU (datasheet), still work in progress :) The CC1312R features include a 8 channels 12-Bit ADC. My needs require at least a 16-Bit ADC. Since this internal ADC provides 8 channels, I was wondering if I could somehow serialize them to achieve some extra bits resolution, thus not having to use any external ADC, is that possible? Thanks for your help! AI: If you dither the signal and average more than 256 samples you can get some apparent resolution (not accuracy) increase, but frankly you’d be far better off ignoring the on-chip ADC and using an external ADC designed for load cell applications. The one you mention (HX711, made by Avia in Xiamen, China) is only about 40 cents (24-bit delta-sigma with on-board PGA and bandgap reference) so it's not like it's going to break the bank!
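For what it's worth, the usual oversample-and-decimate recipe looks roughly like the sketch below: to gain n extra bits you sum 4^n readings and right-shift the sum by n. The fake ADC and noise level are assumptions purely for illustration; on a real CC1312R you would replace adc12() with the actual ADC read call, and as noted above this buys apparent resolution, not accuracy.

    # Sketch (Python) of oversampling + decimation for extra apparent resolution.
    import random
    random.seed(1)

    def adc12(v, vref=3.3, noise=0.002):
        """Fake 12-bit ADC reading of voltage v with a little input noise/dither (assumed)."""
        v_noisy = min(max(v + random.gauss(0, noise), 0.0), vref)
        return int(v_noisy / vref * 4095 + 0.5)

    def read_16bit(v, extra_bits=4):
        n = 4 ** extra_bits                      # 256 samples for 4 extra bits
        total = sum(adc12(v) for _ in range(n))
        return total >> extra_bits               # scale the sum back to a 16-bit result

    v = 1.234567
    print("single 12-bit code:", adc12(v))
    print("oversampled 16-bit code:", read_16bit(v))

This only works when real noise or deliberate dither toggles the ADC's lowest bits; with a perfectly quiet input every sample returns the same code and the averaging gains nothing.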
H: What is the maximum voltage that the circuit can withstand Below is my circuit, ESD Capacitors Specifications : C0001, C0002, C0003, C0004 = 47nF, 100V, 10%, 0805 I have 2 basic questions : When a +4kV pulse & 330pF is applied, Does the 12.77V (Q=CV ; Q =4kV * 150pF = 600nC. Voltage at the top node = 600nC / 47nF = 12.77V) is shared by each capacitor separately (C0001 to C0004)? So each capacitor will hold 3.1925V? And what is the maximum voltage can this circuit section withstand in the case of positive pulse? Is it 4 x 100V = 400V ? Or only 2 x 100V = 200V (since the 2 sets of series capacitors are in parallel). 100V is the voltage rating of the capacitor. AI: There will be 6.38 volts across each capacitor (as previously answered here in comments): - The 12.77 volts will appear where you have an arrow called "ESD pulse". If you ignore the small forward volt drop of the diode you can say that there is 6.38 volts across C0002 and C0004 and 6.38 volts across C0001 and C0003. If the rating is 100 volts then the capacitors can withstand 100 volts on each meaning 200 volts at the point marked "ESD pulse". This assumes the capacitors are perfectly matched. If they are mismatched by 10% i.e. one is 51.7 nF and the lower one is 42.7 nF then there will be proportionately more voltage developed across the capacitor with lower capacitance. This is because the 12.77 volt seen during the pulse (as previously derived in my answer here) is shared equally between two series capacitors. Given that the capacitors have a voltage rating of 100 volts, if they have the same value then the peak voltage withstand for two in series is 200 volts. If one capacitor is low in value by 10% and one is high by 10% there will be 20% more voltage seen across the lower value capacitor hence, you can't really assume a 200 volt pulse withstand capability and it will be more like 160 volts.
H: Current division with a short simulate this circuit – Schematic created using CircuitLab This schematic is a bit difficult for me to understand due to the short (red). What is I3, the current over R3 (indicated with open terminals, in blue)? AI: Your answer is correct but, in a comment, you write that you're unsure why. I've redrawn the schematic in a way that I hope will make it more clear why R4 is not in parallel with R1. Can you take it from here? simulate this circuit – Schematic created using CircuitLab
H: Bjt connected as diode I have a dumb question. My textbook says \$\beta I_B\$ current flows into the collector terminal. Isn't Collector-Base path shorted by the external wire? Since base and collector regions are semiconductors, they offer very high resistance compared to the external wire path. How can more current flow into the collector terminal of the bjt compared to the external shorted wire? In below two diagrams I put resistors. Which connection is correct and why? simulate this circuit – Schematic created using CircuitLab AI: At collector terminal, the current coming from the right side has two options: take the high resistive semiconductor path or the least resistive wire path. Why more current takes the semiconductor path is still confusing me ;( But the wire is in series with the base so I'm not clear on what your reasoning is here. Looking into that wire from the collector node, there isn't a low resistance path around the collector to the emitter terminal, there is a low resistance path to the base terminal. So, both paths are through the semiconductor - one path is into the collector, the other path is into the base.
H: Do metals with different electrical conductivities have different susceptibilities to induction? If you have an inductive levitation coil and you gradually increase the strength of the AC current to the coils, could you fraction off metal powders by levitating each of them separately due to their differing electrical conductivities and thus different susceptibilities to electromagnetic induction? I.e. will certain elements levitate before the others at differing induction levitation coil strengths due to their differing electrical conductivities and thus susceptibilities to induction? AI: While the different conductivities will mean different currents flow in the materials, it's not going to help you separate an aluminium powder from a copper powder. Unfortunately the force the grains experience in an AC field will be strongly affected by the grain size. You'll end up fractionating the mixed powder by grain size rather than by element.
H: how to tune wave to obtain a specific harmonic seriers I wanted to know what is the relationship between a square wave and its harmonics, in other words can i change a square wave features (frequency, duty cycle etc) to obtained a desired fundamental frequency (and its corresponding harmonics)? Any help would be appreciated thanks. AI: The feature of a square wave that you change to obtain a desired fundamental frequency, is the frequency. At a duty cycle of 50%, there are no even (2n) harmonics. At a duty cycle of 33.3%, there are no 3n harmonics, and so on.
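To put numbers on that: for an ideal 0/1 rectangular wave with duty cycle D, the magnitude of the nth harmonic is 2|sin(n·pi·D)|/(n·pi), so the fundamental frequency is set purely by the repetition rate and the nulls are set by the duty cycle. A small sketch of that standard Fourier-series result:

    # Sketch (Python): harmonic amplitudes of an ideal rectangular wave vs. duty cycle.
    import math

    def harmonic_amplitude(n, duty):
        """|c_n| of a unit-amplitude pulse train with the given duty cycle."""
        return 2.0 * abs(math.sin(math.pi * n * duty)) / (math.pi * n)

    for duty in (0.5, 1.0 / 3.0):
        amps = [harmonic_amplitude(n, duty) for n in range(1, 10)]
        print(f"duty = {duty:.3f}: " + ", ".join(f"h{n}={a:.3f}" for n, a in enumerate(amps, start=1)))

Running it shows the even harmonics vanish at 50% duty and the 3rd, 6th, 9th... vanish at 33.3% duty, exactly as stated above.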
H: Operation of Induction Motors and their Back EMF I'm trying to reconcile the physical steps that go into the operation of an AC induction motor. Looking online, I either find broad and general explanations that I already know and understand, or very specific and math heavy explanations that get a bit overly detailed. I understand synchronous motor operation and describe it in my head like this: "From standstill, voltage is applied to the rotor. This creates a current in the rotor which generates a magnetic field which starts to turn the rotor due to attraction to the magnets of the stator. At the start, the rotor has little movement relative to the stator's field, so little back-emf is created, allowing for maximum current to flow, and so the machine has max torque at the start. As it gets up to speed, the rotor's quick movement cuts through more of the stator's flux, and generates more intense back-emf, which fights the input voltage, limiting current into the rotor, and torque drops off until some balance of speed/torque is achieved with the load. But when I try to create the same "story" of what happens in an induction motor, I must have some fundamental misunderstanding (probably about back-emf), because I run into this: "At standstill, an AC voltage is applied to the stator windings, quickly manifesting a rotating magnetic field. This induces a voltage in the rotor coils, creating a current that manifests its own magnetic field such that the induced voltage/field opposes what created it. Now the relative motion of the rotating stator field and rotor coil would be maximum at startup, creating maximum possible back-emf.... and that's where my thinking fails. If the back-emf is biggest at the start, it'll fight the input voltage the most at the start, which will result in the least current and least torque at the start... and the thing will never get up to speed." Obviously that's wrong, but I don't see where. In a DC motor, it makes sense easily because input voltage, back-emf, and rotation, are all in the rotor. So when the rotor speed is low, it cuts less flux, and creates low back-emf. When the rotor speed is high, it cuts more flux, and creates higher back-emf. A nice feedback loop. But it seems reversed for the induction case the way I'm thinking of it. At a low rotor speed, the rotor would cut through more flux, because the stator field is quickly rotating around it. This would create a higher back-emf in the rotor, fighting the input voltage, limiting the input current, reducing torque, and slowing the machine or keeping it from moving. Then at a high rotor speed, the rotor would cut through less flux, because it's catching up to the rotational speed of the stator's field. This would generate less back-emf in the rotor, which would mean the input voltage isn't being fought as much, increasing current, increasing torque, speeding the device up. So my thinking ends up with the opposite of a self stabilizing device. If it slowed down, it'd want to keep slowing to standstill, and if it sped up, it'd want to keep speeding up... None of that is right, so I must be messing something up about how back-emf operates in an induction motor. And I think the inverse nature of how the rotor speeding up leads to amount of flux being cut to be lower is also tripping me up... Thank you so much for any help with a layman's quest for understanding! AI: What you're missing is that the rotor has inductance and resistance. 
So the initial rotor current is limited (the rotor sees a high-frequency field that can't excite much current against the inductance). Also, the initial phase relationship between the rotor and stator is such that the lower the rotor resistance, the less torque you have at startup. At the same time, the lower the rotor resistance, the lower the losses, and the closer the running speed is to the motor's synchronous speed. Here's a picture (from this page) that shows the torque vs. speed curve for an induction machine.
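If you prefer numbers to the picture, here is a rough sketch of the simplified per-phase torque expression, T proportional to s·Rr/(Rr^2 + (s·Xr)^2), which neglects the stator impedance. The Rr and Xr values are made up purely to show the trend described above: a low-resistance rotor has little starting torque but peaks near synchronous speed, a high-resistance rotor has more starting torque.

    # Illustrative sketch (Python) of torque vs. slip for two rotor resistances.
    def torque(slip, rr, xr=1.0):
        """Normalized torque, stator impedance neglected; rr, xr are assumed per-unit values."""
        return slip * rr / (rr**2 + (slip * xr)**2)

    slips = [1.0, 0.8, 0.6, 0.4, 0.2, 0.1, 0.05, 0.02]
    for rr in (0.05, 0.5):
        row = ", ".join(f"s={s:.2f}:{torque(s, rr):.2f}" for s in slips)
        print(f"Rr = {rr:4.2f} -> {row}")

The low-Rr row shows a small torque at slip = 1 (standstill) and a peak at very small slip, while the high-Rr row shows much more torque at standstill, which is the behaviour the questioner's reasoning was missing.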
H: How to wire AC motor for old fan? I have a AC motor and it's belong to an old desk fan, unfortunately I missed it's circuit diagram, so it would be great if you could help. My AC outlet supplies 220V, and the motor comes with a 1.5uF capacitor. As you can see in this photo, I have 6 wire Gray and Red are together! In this photo gray and green are shorter than others, If I'm not mistaken, these wires are related to the capacitor?! please let me know how to connect the wires to AC power and speed switch. AI: Try measuring the resistances on the lowest ohms scale of your multimeter. This website has common color codes, you may get lucky:
H: H Bridge not working on HV I'm trying to make a 300V H Bridge for an electrostimulator. The circuitry is as posted here, using MOSFET P20NM60GP: When I tested it with a low voltage supply (just for the sake of safe testing) and a 15 W, 1 kOhm resistor, I get the output waveform I desire, that is: However, when I change the supply voltage to 300V (galvanically isolated, safety first!), with the same 15 W, 1 kOhm resistor, I get this output: I didn't take a picture of what that output looks like at 250us, but the spikes shown in the image above were not square waves like the ones that showed up at low voltage. The ground for the source of the low-side MOSFETs is isolated from the one used by the IR2110 and the rest of the electronic devices (microcontroller, ICs, etc). Any ideas as to why this H Bridge is misbehaving at high voltage? Thanks in advance! AI: The GND on the MCU, IR2110, and H-bridge all need to be shared. If any one of those is disconnected from any other, then things won't work. So if you disconnected your H-bridge GND (which in your schematic would be the negative terminal of your smoothing capacitor after the rectifier) from the GND on your MCU and gate driver, reconnect them. The gate driver and MCU need to share GNDs so they have a common reference with which to communicate with each other, and your gate driver needs to share GND with your H-bridge since it assumes the source is at GND when it tries to drive the low-side MOSFETs (it uses a GND-referenced voltage to produce a gate-source voltage difference).
H: Voltage source placement simulate this circuit – Schematic created using CircuitLab simulate this circuit I would like to better understand the meaning of voltage source placement. To me, these two subcircuits seem the same, but I am not sure. Are they the same? AI: Figure 1 and 2. OP's original circuits. Technically the first one isn't a circuit as there is no circular current path and no current can flow. The second one is just R1 in parallel with V. The other components are only connected at one end so no current flows in them so they can be omitted without changing the circuit. simulate this circuit – Schematic created using CircuitLab Figure 3. Simplification of revised first circuit.
H: How to combine two resistors with a voltage source simulate this circuit – Schematic created using CircuitLab simulate this circuit Are these two circuits the same? If not, could you please indicate why as well? AI: When starting out, your brain is easily tricked by the arrangement in which things are drawn rather than what they actually are. Side-by-side doesn't automatically mean parallel. What if I took your "parallel" circuit: simulate this circuit – Schematic created using CircuitLab and just changed it to this. Does it still look like it is in parallel to you? Or series for that matter? Remember, the current can flow into the circuit on those stubs from outside. simulate this circuit Parallel = voltage across all components are the same Series = current through all components is the same Don't be tricked by how things are arranged on a page.
H: Precautionary Discharging Capacitor went Terribly Wrong | SMPS I wanted to check the fuse of the SMPS pictured. In order to conduct the procedure safely, I followed the discharging procedure shown in many tutorial videos, i.e. shorting the capacitor terminals with nose pliers. The fuse seemed alright, but while discharging the capacitor a loud bang occurred (no power cable was attached)! It was a very loud bang. Does it mean the capacitor is bad? None of the videos warned about such a bang! The capacitor was 330 mF. I cleaned up the soot caused by the bang and checked the capacitance and conductance of the capacitor; it seemed fine. The fuse, after rechecking, was also fine, and pretty much everything else I had checked earlier also seemed fine (the power supply is still bad though). Question: Shouldn't the capacitor go bad if a huge bang had occurred followed by soot on the PCB? AI: Shouldn't the capacitor go bad if a huge bang had occurred followed by soot on the PCB? It appears that you short-circuited high-voltage, high-capacity capacitors. These store a lot of energy and you discharged it all in an instant, so you generated a very high current and an arc due to poor contact. The soot on the PCB was generated externally to the capacitor. Your capacitor is probably fine. In future you should discharge the capacitor in a controlled fashion by using a resistor. Work out the power that would be dissipated in the resistor if it was permanently wired across the capacitor while powered. (You could probably use half the calculated wattage for a brief discharge.) Make up some insulated probes attached to the resistor.
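For sizing that discharge resistor, a back-of-the-envelope sketch is below. It assumes a roughly 325 V bus (rectified 230 V mains) and the 330 uF value from the question; plug in your own numbers.

    # Back-of-the-envelope sketch (Python) for choosing a capacitor discharge resistor.
    v_bus = 325.0        # assumed worst-case capacitor voltage (V)
    c = 330e-6           # capacitance (F), taken from the question
    r = 10e3             # candidate discharge resistor (ohm)

    p_continuous = v_bus**2 / r          # dissipation if left across the cap while powered
    tau = r * c                          # discharge time constant
    e_stored = 0.5 * c * v_bus**2        # energy dumped into the resistor
    print(f"continuous dissipation: {p_continuous:.1f} W")
    print(f"time constant: {tau:.2f} s -> ~{5*tau:.1f} s to discharge")
    print(f"stored energy: {e_stored:.1f} J")

With these assumptions a 10 kOhm resistor would see about 10 W continuously, so per the half-wattage rule of thumb above a 5 W part is a reasonable choice for brief discharges, and the capacitor is essentially empty after 15-20 seconds.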
H: Value of resistor and the current in branch with the capacitor I am trying to learn electronics basics. I have a question asking for the current in the capacitor branch and the value of the resistor that is before the ground I have used a circuit simulation software to find out that the value of the resistor is 4.2ish and this circuit is at time = 0. I have added the original question below. Can anyone explain to me how the current and the resistance value is found out? AI: Slim Intro to Differentials Take a falling rock and imagine that it starts in your hand with zero speed and you release it over the edge of a high cliff. Before it hits the bottom, how far will the rock fall given any particular amount of time after it was released? In algebra, you readily learn how to solve distance (\$x\$), time (\$t\$), and speed (\$v\$) problems with a simple relationship: $$x=v\cdot t$$ But the above equation doesn't really help in the falling-rock case above. That's because the speed is continually changing as it falls. The above equation is great if you already know the average speed and the time over which that average was computed. But then, you already must know the distance, by then, too. So the whole thing presumes an answer you don't have. This is a failing of algebra. It works great when you already have averages. But it works more poorly when everything is always-changing. You could use the above equation over shorter times, though. So, for example, you could start by saying that \$v=0\$ for the first second and compute \$x_{t=0}=0\$. Then, knowing the idea that \$v=g\cdot t\$ (the speed increases with time, using the constant \$g\$ as a factor to get a value), you could estimate \$x_{t=1}=g\cdot 1\:\text{s}=g\$ for the 2nd second. So by the first two seconds you estimate \$x_\text{total}=x_{t=0}+x_{t=1}\$. You could continue on this way, adding up all the pieces to see where that gets you. But you already know that by the first second you already got a wrong starting answer. And, in fact, you will always have an estimated difference ever smaller (behind) where the falling rock is actually at. There are lots of other ways to go with this process. You could assume the speed at the end of the period rather than at the beginning of it, or you could work out the average midpoint speed and use that. You could also use a \$\frac12\$ second, instead, to reduce the errors. Or a \$\frac14\$ second. But for all the added work you do, the upshot is that algebra is about averages. And these presume you already know the end of the story. So what do you do? The differential viewpoint is to create a new type of variable that algebra doesn't have. This variable can only hold infinitely small values, which below I'll call infinitesimals. An infinitesimal is as \$\frac1{\infty}\$, \$\frac2{\infty}\$, \$\frac{4.5}{\infty}\$, or \$\frac{-\sqrt{2}}{\infty}\$. They are smaller than any possible finite number. All of these values are very, very close to zero but they are not zero. The new kind of variable gets a special "mark" that is used so that anyone else reading your writing will understand the meaning. This mark, today, is to place the letter d in front of the usual variable name. Before I return to the above falling-rock situation, imagine that you divide up some finite distance, \$x\$, into an infinite number of equally tiny values (infinitesimal fragments of \$x\$) that can be re-added back up to \$x\$. Then each of these tiny values might be called \$\text{d}x\$. 
All we've really done is take some length and divide it up into extremely small conceptual pieces. Adding them back must result in the original length, so it follows that \$x=\int\text{d}x\$. (I've added a new symbol here, \$\int\$, to indicate a special kind of sum -- an infinite sum -- which you need if you want to get back a finite number from a bunch of infinitely small values.) This just says, "If I break up some length \$x\$ into an infinite number of tiny segments of length \$\text{d}x\$, then when I add them back together I must get back my original length, \$x\$," by definition. There's something subtle I missed above, but is worth adding now. We should really think of \$\int \text{d}x\$ as adding up a lot of tiny bits back into some distance traveled by adding those bits up. Not as an exact \$x\$. By this, I mean, that all those bits of distance add up to some traveled distance. But it leaves out the place where it all started, \$x_0\$, let's say. So actually, we also need to know \$x_0\$ before we can work out the final value of \$x\$. So in calculus we note that fact by instead writing \$x=\int\text{d}x+x_0\$. This is just another way of saying that if we start at \$x_0\$ and then advance from there through the infinite sum of infinitely (but not zero) many tiny distance motions away from \$x_0\$, then the ending place will be the sum of the starting point plus all the tiny distances added up. We can now re-phase the falling-rock question this way: $$ \text{d} x=v_{_t}\cdot \text{d} t $$ It's still actually the same equation. But it now reads a little differently. It says that the infinitely small distance traveled by the rock's fall is equal to the speed the rock has, at that moment in time, multiplied by the infinitely small bit of time we've allowed for it to travel. Note that if you shrink down the time the rock falls to an infinitely small moment, then the velocity of the rock, \$ v_{_t} = g \cdot t \$, is an exact value and no longer an average. We can always get a precision value for \$v_{_t}\$ because of the way gravity works. But we can only get a precision value for a very, very tiny moment of time. The new equation above expresses this, perfectly. Of course, it's still useless. We can't use it for anything practical. To do that, we need to sum up all those bits. So, to get that we just sum both sides like this: $$\begin{align*} \int \text{d}x&=\int \left(v_{_t}\cdot \text{d}t\right)\\ \therefore x&=\int \left(v_{_t}\cdot \text{d}t\right)\\ &=\int \left(g\cdot t\cdot \text{d}t\right)\\ &=g\cdot \int \left(t\cdot \text{d}t\right) \end{align*}$$ I was able to pull \$g\$ outside the infinite sum (integral), because it is a constant. And you already know that \$k\cdot a_1+k\cdot a_2 + ... + k \cdot a_n= k \cdot (a_1+a_2+...+a_n)\$. So that's why I was able to do that here. Also, I've applied a multiplication to the variable representing just an infinitesimal of time. Usually, in textbooks, this multiplication is left out by some unexplained magic. This is because Dedekind and Weierstrass, the authors of what is taught as calculus today, refused to accept the idea of infinitesimal variables. They felt that, despite how well they worked for physicists for the prior century in solving some of the more difficult problems, such ideas were simply "non-mathematical." (Not well-grounded.) It would wait for Abraham Robinson's non-standard analysis to bring these ideas back into the field of mathematics in the early 1960's and ground them in already-accepted ideas. 
But by then, the damage had been done. So, while it is true that today infinitesimal variables are well-grounded mathematics. They aren't broadly taught. Now, we've one problem to resolve. How do we figure out the mess, \$\int \left(t\cdot \text{d} t\right)\$? It's really not that hard. Pause for a moment and think. This is a product, which "looks like" an area calculation, right? Let's say a "rectangle" that is \$t\$ high and \$\text{d}t\$ wide? It could be, couldn't it? Maybe? Let's assume we start at \$0\$ and proceed towards a final end-point at \$t\$ and see where that takes us: On the left, I've started from \$t=0\$ and have started to place each successive rectangle adjacently, with the first one placed left-most, the next one just beside it to the right, and so on. I've shown only the first four of these rectangles in Figure 1.A. There are an infinite number of them, though. On the right side, in FIgure 1.B, you can see the finished result. You have to imagine it, but I've completed the entire "shape" on the left side, with the infinite number of very thin rectangles placed side by side until you finally reach the end, at \$t\$. As you can see, the total area of all of these tiny rectangles added up makes up a triangle. And you can see that the area of this triangle must be \$\frac12 t^2\$. The result is that we can say: $$\begin{align*} x&=g\cdot \int_0^t \left(t\cdot \text{d}t\right)=\frac12\:g\cdot t^2 \end{align*}$$ This is a good moment to say that the above is a definite integral, which means that I knew the range over which the sum was performed (\$0\$ to \$t\$, here.) There is also an indefinite integral, which is similar but doesn't include a known range. The indefinite integral focuses on the overall shape (in this case, the triangular shape which could also be extended backwards, too, before \$0\$ if you continued the line slope with a ruler) of the result, but leaves out the details of exactly where to start and end for a later time. So the indefinite integral here would be: $$\begin{align*} x&=g\cdot \int \left(t\cdot \text{d}t\right)=g\cdot \left(\frac12 t^2 + C_0\right)=\frac12\:g\cdot t^2+C_1 \end{align*}$$ \$C_0\$ represents an unknown constant to be determined later (where also \$C_1=g\cdot C_0\$, above.) For example, if the rock had started with some initial speed, then \$C_1\$ would be set to that starting speed (and can be positive if in the direction gravity will accelerate the object or else negative if in a direction opposite the direction of gravity's acceleration.) But in our example case from the beginning where the initial speed was zero, we can just say \$C_1=0\$ and find the common answer to the problem as: $$\begin{align*} x&=\frac12\:g\cdot t^2 \end{align*}$$ Obviously, situations can be more complex than this. It's just an introduction to the idea of infinitesimals and the creation of a new algebraic variable which can only hold infinitely small, but non-zero, values. The only case where the infinitesimal variable can be thought of as exactly zero is when the range over which the integral to be performed is also zero -- meaning that the starting and ending point are both the same place. Otherwise, the infinitesimal variable should be thought of as an infinitely small, uniform step-wise piece, while going from the starting point of a range to the ending point of that range; a value that is infinitely small, while not quite zero. Once you grasp this concept, you face the problem of being able to sum up lots of small bits. 
Much of calculus is about how to perform those sums. Or, put another way, how to imagine stacking up tiny areas side by side as indicated above and looking more closely at the area within the resulting geometric shape that describes that final sum. As you can well imagine, some of those shapes won't be as simple as a triangle! And that's where your imagination begins. I want to return to this earlier equation: $$ \text{d} x=v_{_t}\cdot \text{d} t $$ We should be able to divide both sides by \$\text{d} t\$, yes? It's just a variable, after all, even if it is a weird variable for algebra. So let's do that: $$ \frac{\text{d} x}{\text{d} t}=v_{_t} $$ Or, put another way, \$v_{_t}=\frac{\text{d} x}{\text{d} t}=\frac{\text{d}}{\text{d} t} x\$. Now here's what's kind of interesting. I've divided one infinitesimal variable by a different infinitesimal value and the result is a finite value, the speed! We can do that? Yes, we can. If it helps, just think of this like dividing \$\frac1{\infty}\$ by \$\frac4{\infty}\$ to get \$\frac14\$. The \$\infty\$ parts cancel out and you can get a finite value from that. A better way to look at the meaning here is this: "How do the tiny changes made in one of these variables compare with the tiny changes made in another one of them?" It's just a ratio. And it is quite possible for one change to be 4 times another's change. That's not a conundrum. It actually makes sense. So feel free to do this. These variables are, in fact, variables just like those you are used to in algebra. You can also cancel them, similarly, since the ratio of an infinitesimal variable, taken relative to itself, must be 1. So, they do cancel out. One final thing before I try and answer your question. There is another idea, probably far too early to introduce to you, but worth pointing towards anyway. Let's go back to the original algebraic equation and the subsequent differential equation for the falling-rock situation: $$\begin{align*} x&=v\cdot t\\ \text{d}x &= v_{_t}\cdot \text{d}t \end{align*}$$ The first equation can only be approximate. The second above is exact (so long as time itself is independent of the speed of the object.) And I think you may be beginning to see a difference in perspective arriving, just in the very way I speak about things here. Note that I added the caveat of "so long as time itself is independent of the speed of the object," as a note. Once you acquire this viewpoint, these kinds of caveats will be ever in your mind, as you read and think about equations and expressions and problems you are considering. It is the beginning of a new way to see the world, itself. And it is life-changing. By the way, isn't there a hint of something "simple" there? Like some kind of easy transformation of one into the other? There is, in fact. And it will be worth learning about, at some point. This process has a broad term for it, called implicit differentiation. I'll use that process, below, in the answer area. I would have written more, or less, both better I think, but I didn't have the time for either. Towards an Answer to Your Question I happen to know (because I've spent time to learn it) that implicit differentiation can be used to turn \$y=e^x\$ into \$\text{d} y = e^x\:\text{d}x\$. You could go look that up in a book, in fact. Though it will only be in a small subset of textbooks that you will see it exactly that way. Usually, it is written as \$\frac{\text{d}}{\text{d}x}\, e^x=e^x\$. 
But you can just multiply both sides by \$\text{d}x\$ (and substitute \$y\$ for \$e^x\$ on the left side) in order to get my result at the beginning of this paragraph. I often use the capital D as an "algebraic operator" that can consume algebra and turn it into new algebra. So the way this works is something like this: $$\begin{align*} D\left[y\right.&=\left. e^{_{x}}\right]\\\\ D\left[y\right]&=D\left[ e^{_{x}}\right]\\\\ \text{d}\,y&=e^{_{x}}\cdot D\left[ x\right]\\\\ \text{d}\,y&=e^{_{x}}\cdot \text{d} \, x \end{align*}$$ This explains the production I suggested at the start of the first paragraph in this section. It's an algorithmic process for chewing up algebra equations and turning them into differential equations. It is a powerful machine and it just works. It will take time for you to follow why, as it requires some additional calculus: the product and chain rules, for example. So now I have to apologize, but I'm going to speed ahead a bit towards an answer for you, using this D-operator algorithm: $$\begin{align*} D\left[q\right.&=\left.3\left(1-e^{_{-t}}\right)\right]\\\\ D\left[q\right] &= D\left[3\left(1-e^{_{-t}}\right)\right]\\\\ \text{d} q&= 3\cdot D\left[1-e^{_{-t}}\right]\\\\ &= 3\cdot \left(D\left[1\right]-D\left[e^{_{-t}}\right]\right)\\\\ &= 3\cdot \left(0-e^{_{-t}}\cdot D\left[-t\right]\right)\\\\ &= 3\cdot \left(0-e^{_{-t}}\cdot -D\left[t\right]\right)\\\\ &= 3\cdot \left(-e^{_{-t}}\cdot -\text{d} t\right)\\\\ &\therefore\\\\ \text{d} q &= 3\cdot e^{_{-t}}\cdot \text{d}t\\\\ &\text{dividing both sides by d}t,\\\\ \frac{\text{d} q}{\text{d}t} &= 3\cdot e^{_{-t}} \end{align*}$$ But that's current! The rate of change of charge with respect to time is, in fact, just current. Since \$I=\frac{\text{d} q}{\text{d}t}\$, it follows that \$I_\text{C}=3\cdot e^{_{-t}}\$ in your case. And at \$t=0\$, this means the current is \$3\:\text{A}\$. Which means that (A) in your question is true! And from there, the rest can be examined.
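As a sanity check on that result, you can also differentiate q(t) numerically with a small finite step and compare it against 3·e^-t; a quick sketch:

    # Quick numerical cross-check (Python) of dq/dt for q(t) = 3*(1 - e**-t).
    import math

    def q(t):
        return 3.0 * (1.0 - math.exp(-t))

    dt = 1e-6
    for t in (0.0, 0.5, 1.0, 2.0):
        i_numeric = (q(t + dt) - q(t)) / dt     # finite-difference approximation of dq/dt
        i_exact = 3.0 * math.exp(-t)
        print(f"t = {t:3.1f} s: dq/dt ~ {i_numeric:.4f} A, 3*e^-t = {i_exact:.4f} A")

At t = 0 both columns give 3 A, in agreement with the derivation above.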
H: Minimum output current of DC-DC This question is a result of several bad decisions compounded by rush implementation. It is similar to this question, unfortunately the answers there provide only explanation, not the solution. I had to add isolation to the UART port on already assembled boards. Luckily, the boards had small patch of unused holes just for this kind of contingency. I ordered and installed cheapest isolators (SI8422AB-D-IS) and DC-DC converters (PDSE1-S5-S3-S). It is only when I found 3.75V on the isolator I realized that the converter has unregulated output. It needs at least 150 mA load to drop to rated voltage, and isolator draws no more than 5 mA. So, I added LDOs (MCP1700-3302E) to the mix. Now there is a stable 3.3V with insignificant ripple. However I also found "minimum output current 30 mA" in the converter datasheet. Question: Should I leave it be as it is? Seems to be working fine. Is adding 150 Ohm load resistor before or after LDO a bad idea? simulate this circuit – Schematic created using CircuitLab Note that these boards will be used mostly for software development. The final product will have a single ISOW7821 chip. AI: The typical reason for a minimum load is that the leakage inductance in the magnetics causes the output voltage to be significantly higher than the nominal output voltage when lightly loaded. That could exceed the maximum input voltage of your LDO or (conceivably, but not likely) some maximum inside the DC-DC converter. I think the latter is quite unlikely, and they even specify the no-load input current, so if you have plenty of margin on the LDO, I think you’ll be okay.
H: Substituting a capacitor of different capacitance If a circuit calls for a capacitor with a specific capacitance, what happens if a capacitor with a different capacitance is substituted? I have a circuit that calls for a 250uF capacitor to power a small speaker. What would happen if I substituted it with a capacitor of less/more capacitance? Why can't I substitute it with a 220uF capacitor? Why can't I substitute it with a 300uF capacitor? AI: The capacitor is there to block DC from blowing up the speaker. Its value is not that important. However, it does affect the bass response, which most likely is not that great to begin with, since you describe it as a small speaker.
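To put a number on "affects the bass response": the series capacitor and the speaker's impedance form a high-pass filter with corner frequency 1/(2·pi·R·C). The 8 ohm impedance below is an assumption (small speakers are commonly 4, 8 or 16 ohm).

    # Rough sketch (Python): high-pass corner frequency for a few coupling capacitor values.
    import math
    r_speaker = 8.0                      # assumed speaker impedance (ohm)
    for c_uf in (220, 250, 300):
        f_corner = 1.0 / (2 * math.pi * r_speaker * c_uf * 1e-6)
        print(f"{c_uf} uF -> bass roll-off below about {f_corner:.0f} Hz")

All three values put the roll-off somewhere around 65-90 Hz, which is why swapping 220 uF or 300 uF for the 250 uF part makes little practical difference with a small speaker.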
H: How to find out which process triggered join_any in verilog I have three tasks running in parallel and the fork joins when one of them gets completed. I would like to know which one was completed first and caused the fork to join. Is there a way to know it?

    fork
      task1();
      task2();
      task3();
    join_any

AI: A good interactive debugging tool will let you set a breakpoint on the join_any or the statement that follows it. Then you will be able to see which processes are still running. If you are looking for a programmatic way, one of many I can think of is pushing a unique ID onto a queue; popping the first one off the queue after the join_any will tell you which task finished first.

    int Qid[$];
    fork
      begin task1(); Qid.push_back(1); end
      begin task2(); Qid.push_back(2); end
      begin task3(); Qid.push_back(3); end
    join_any
    $display("task%0d finished first", Qid.pop_front() );
H: Centroid from calculated root locus is different to the one from MATLAB Hi all, hope someone can help me here. My problem involves the transfer function G(s) = (s+32.8)/((s+2.8)(s+5)(s+27.8)). As far as I know, the centroid is calculated using σ_a = (Ʃpoles - Ʃzeros) / (# of poles - # of zeros). From my calculations σ_a = ((-2.8 - 5 - 27.8) - (-32.8))/(3 - 1) = -1.4. However, when I input the following code into MATLAB, the graph suggests the centroid is at about -1.55:

    s=tf('s');
    G=(s+32.8)/((s+2.8)*(s+5)*(s+27.8));
    rlocus(G)

I'm not 100% sure what the issue is, but if I'm doing something wrong could someone please correct me and help me understand. Thank you for taking the time to read this. AI: I used Octave and I got that the asymptotes meet at -1.4, just as you calculated. You should double-check the MATLAB result. I think you might be misunderstanding the meaning of the centroid: it is the point where the asymptote lines meet on the real axis, it is not the breakaway point (where the poles leave the real line). Check slide 3 of these notes.
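For completeness, the centroid can be computed directly from the pole and zero locations; this tiny sketch just redoes the hand arithmetic and agrees with -1.4.

    # Sketch (Python): asymptote centroid from the poles and zeros of G(s).
    poles = [-2.8, -5.0, -27.8]
    zeros = [-32.8]
    centroid = (sum(poles) - sum(zeros)) / (len(poles) - len(zeros))
    print(f"asymptote centroid = {centroid:.2f}")   # -> -1.40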
H: Software identification of a DIY (guitarist) CAD application I know there is a reasonably popular tag for component identification, however I have a question about software identification. Before I used KiCad (when I started with electronics), I used for a while a very simplistic 'CAD' program for perfboard layouts. However, I have reinstalled my PC since, and I cannot find the name of the application anymore. I need it because I want to open the (complete) layout I made for an old project to see if I can 'revive' it. I only have a screenshot and some info about it... it was meant mostly for guitarists, and (afaik) it was only possible to make some kind of perfboard layouts with it. The internal components were very basic, plus some guitarist-specific ones. The application's file extension is .diy and the name also had DIY in it (I think), but DIY is not the easiest keyword to search for on Google. I would like to know the name of (or a link to) this application, so I can install it (at least temporarily) to see my old design. AI: It happened again: after searching for at least half an hour, I found it within 5 minutes of posting the question. Anyway, I found the application: it's called DIY Layout Creator. Link: http://diy-fever.com/software/diylc/ (In case you think this is advertisement: if you are serious about electronics, use a 'real' CAD application like KiCad, Altium etc., as most here do).
H: Compatibility of devices with different voltage thresholds/noise margins/static disciplines tl;dr included at the bottom. Suppose we have two logical buffers from different logic families, where buffer A drives buffer B. Buffer A has the following voltage thresholds: \$V_{OH}=8\ V\$, \$V_{IH}=6\ V\$, \$V_{IL}=4\ V\$, \$V_{OL}=2\ V\$; thus buffer A interprets all received voltages up to 4 V as a logical 0 and all received voltages above 6 B as a logical 1. Suppose that the thresholds for buffer B are the same as that of A with the exception that B's \$V_{IL}=4.5\ V\$. This means that for all received voltages which A would interpret as valid logical 0, B also interprets as logical 0. According to my textbook (Foundations of Analog and Digital Circuits by Agarwal and Lang) this means that devices from B's logic family can be used with devices from A's logic family, since B guarantees that whenever A sends a valid signal, B will send a valid output; and that B will interpret as valid all the signals which A would interpret as valid. So then there is this range of voltages \$ 4 < V_{received} < 4.5 \$ which is interpreted as valid logical 0 by B but would be considered invalid by A. I can see that since B's noise margin for logical 0s is bigger than A's, that B is in this respect an improvement over A, since if A sends a 0 which is received as 4.25 volts, B will still be able to interpret it as 0 whereas an A receiver would consider it invalid. But it also seems that B's higher \$V_{IL}\$ means it could make more mistakes of the kind where A sends a logical 1 which through (a lot of) noise is received as 4.25 volts. An A receiver would consider this invalid but B will consider it a valid logical 0, when really it was a valid logical 1, so B propagates an incorrect bit. So the bottom line is, do we just have to accept that fact that some types of errors will be more common when mixing logic families as long as valid signals will be handled correctly, or am I misinterpreting something? AI: tl; dr: you design for the worst-case noise margin as a composite of all your logic types, so that all the receivers will interpret logic low and high correctly. To arrive at a set of levels that will work system-wide, you take your highest Vi(h) and lowest Vi(l), add some margin on top of each, and that defines your levels your driver must support. Example: 5V HCMOS vs. 5V LSTTL: HCMOS Vi(h)/Vi(l): 3.5 / 1.0V (0.7 / 0.2 VCC) LSTTL Vi(h)/Vi(l): 2.0 / 0.8V So taking the min Vi(l) and max Vi(h), we have: composite system Vi(h)/Vi(l) = 3.5 / 0.8V Add 100mV noise margin: margined system Vi(h)/Vi(l) = 3.6 / 0.7V So your system signal driver must be able to swing 0.7V for low and 3.6V for high to achieve 100mV of noise margin. In this case, HCMOS output can do that, while an LSTTL output cannot, at least not without the help of an external pull-up resistor as its Vo(h)(min) is only about 2.4V. The threshold analysis is similar regardless of the logic types being mixed - or even if they're not being mixed at all but nevertheless have large fan-out loading to be considered (a big problem with TTL, less so with CMOS.) If, for some reason, a mixed-logic system can't find a workable composite set of thresholds, a technique called level translation can be used to convert the output levels of one logic type to a set of reliable input thresholds for another. 
Between using Vi(h)/Vi(l) analysis as shown, and level translation where necessary, there is never a cause to have a system where a signal's level is interpreted incorrectly by a receiver, ever. This is basic to a reliable logic design.
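The composite-threshold bookkeeping above is easy to automate; here is a minimal sketch using the same HCMOS/LSTTL input thresholds and 100 mV design margin from the worked example.

    # Minimal sketch (Python) of the worst-case composite threshold calculation.
    families = {
        "HCMOS (5 V)": {"Vih": 3.5, "Vil": 1.0},
        "LSTTL":       {"Vih": 2.0, "Vil": 0.8},
    }
    margin = 0.1   # desired noise margin (V)
    vih_sys = max(f["Vih"] for f in families.values()) + margin   # driver must exceed this for a '1'
    vil_sys = min(f["Vil"] for f in families.values()) - margin   # driver must stay below this for a '0'
    print(f"driver must reach at least {vih_sys:.1f} V for high and at most {vil_sys:.1f} V for low")

It reproduces the 3.6 V / 0.7 V figures from the example; adding more logic families to the dictionary simply tightens the worst case further.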
H: Displays used in clocks - use with Arduino/ESP32/PIC I'm working on a project with a micrcontroller and I need a low-power display similar to something shown above. Does anyone know what these particular displays are called and whether or not there exists ESP32/Arduino compatible versions of these? I thought of e-paper, but refreshing every minute and those things are expensive. AI: It's a custom-made LCD. They're cost effective if you're making 1000 of something expensive, or 10\$^6\$ of something cheap. Us mere mortals need to find other solutions.
H: Switching 240V A/C source between two loads with relays I need to switch a single US 240V (hot-hot-ground) power source between two A/C motors, 20 amps max, with an option of disconnecting both. For this project, design aesthetics is very important and I want to use a 3-position selector switch with 90 degree index. I looked all over the net and could not find a fitting selector (rotary) switch that would be rated at 20+ amps 250+V, so I'm looking to use a selector switch that cannot carry the full load, just for switching. The switch I'm looking to use is a Grayhill 50K90-01-1-03N that is rated for 75 mA at 220Vac. This necessitates using a DPST power relay; I'm looking at an Omron G7L-2A with a 240Vac coil. The wiring diagram is roughly as follows (sorry, newb here): ...with the following caveats: a) I couldn't find a 3-position switch widget so the "off" position is missing in the diagram, b) the load schematic is simplified, c) double pole relays are intended specifically to break both hot legs from the power source to the load. I couldn't find a way to reflect that in the wiring diagram. My question is, are the switch and 2 relays the only parts that I need to wire, or do I need something else to limit the current through the relay coils? AI: If the relay coil is rated for 240 volts, you just apply 240 volts to it, and it will draw whatever current it requires - no need for any extra components (well, maybe a fuse).
H: How critical is winding a solenoid/electromagnet neatly? We've all seen the expensive toroidal winding machines on YouTube, and manufactured solenoids are always so neat in construction - as in: Each winding is neatly packed to its previous, without overlapping itself: like "-/////////-" There's no explicit numerical answer here - I'm just after an indication of significance - if a winding was to be done in a factory - perfectly rolled, no overlaps, each coil immediately next to the previous one vs. one of the same specifications done at home with a few gaps, a few overlaps, and other imperfections here and there - what is the "ballpark" performance hit of that? Are we talking 0.1% or 10%? If it's actually possible to perform some sort of calculation (with some assumptions put in there) - let’s use the example of: winding a ⌀35 mm annealed soft iron rod with ⌀1.2 mm magnet wire, wound 200 times across a 40 mm length. AI: Compared to close-packed coils, how much space is inside a random scramble-wound coil? Not 10%, I bet it's more like 50%, and the total volume of the coil is nearly twice that of a close-packed version. For random "scramble-wound" coils, the wire's turns/kg and the turns/km values are low, but also the average skin-depth for the entire inductor is large. That's what we want, so, for low-loss, high-freq RF coils we must avoid the close-packed windings. Buy a fancy scramble-type coil winder, and perhaps also wind your coils "pie-wound," as a stack of pancakes. But for DC or 60Hz, a close-packed coil is much smaller, but with the same gauss/watt value as a big mushy scramble-wound coil. If physical size is an issue (motors for example, also solenoid actuators,) then those close-packed windings produce strong, miniature devices capable of high-wattage drive. Also: vibration. Cheap, poorly-made motors will fail because the windings weren't tight enough. Some of the turns were vibrating, and this chewed through the wire's insulating varnish. Eventually a short-circuit developed. A motor with internally shorted turns will experience drag and heating, and may even "run away" into internal charring, fires. With motor coils, we want the coil to behave like a solid object, with nothing inside that ever wiggles. Also: cooling! We can buy specialized coil-winding wire with square or rectangular section, which lets the windings pack together with minimal gaps. (Usually this wire size is well above 10AWG, intended for large transformers.) Thermally, the resulting rectangle-wire coil acts like a solid metal block, with high thermal conductivity. Fan-cool the outside, and the interior is cooled as well. On the other hand, a scramble-wound coil is full of insulating air: more like a hunk of styrofoam than a hunk of metal. It will have a smaller maximum wattage than a dense, non-scramble coil. Below, as scrambley as possible? A few-mH value, HF tube-amp anode choke . See also: WP: basket wound https://en.wikipedia.org/wiki/Basket_winding crystal radio coils https://www.google.com/search?q="basket+wound"+"coil" rectangular magnet wire .
H: RFID Logger & Battery I'm a complete novice when it comes to circuitry & electrical engineering. I'm more of a coder. So I really appreciate your input, even if my questions are simple for some. That being said, I need to power an RFIDLOG & auxiliary logger from Priority 1 (http://www.priority1design.com.au/rfidlog_rfid_data_logger.pdf) from battery power. I'd like the battery to last at least 24 hours. The specs state the 'maximum rate' for the set-up I am using is 88mA. I plan on using a 12V battery. Is it really as simple as 0.088 x 24 = 2.1, i.e. I need at least 2.1 Ah in a battery? So with a 6Ah battery, I should get ~68 hours of use? My colleague seems to think otherwise, that I might need a series of 6Ah batteries to operate for at least 24 hours, but I can't find any information online that suggests otherwise. Also, the unit specs say it can run off a 6V-13.2V input. Would there be any benefit to using a step-down buck converter to reduce the voltage, or should I just connect the battery as-is? I've had problems in the past with regulators interfering with radio signals, and the goal is to maximize the read-range of the antennas for 134 kHz RFID tags. Finally, is the best way to connect it just to solder wires to the V+ and V- inputs? Or is it better to use some kind of socket-like adaptor? AI: The specs state the 'maximum rate' for the set-up I am using is 88mA. I plan on using a 12V battery. Is it really as simple as 0.088 x 24 = 2.1, i.e. I need at least 2.1 Ah in a battery? So with a 6Ah battery, I should get ~68 hours of use? Theoretically that's correct, but in practice the battery cannot be expected to deliver its full rated capacity. Firstly the capacity declines as the battery ages, and secondly over-discharging will damage it. You must ensure that the battery is recharged before it becomes completely discharged. Depending on the installation and access, the realizable capacity could be significantly less than 6 Ah. Cycle life also reduces with greater depth of discharge (DOD). A battery with double the capacity could provide more than twice as many cycles, as well as being less vulnerable to over-discharge. Finally, shelf life reduces at lower state of charge. Shallower discharges keep the battery at a higher average charge level. The usual recommendation is to never go below 50% DOD, so you should specify double the required capacity. However this has to be balanced against cost and required lifespan. At 100% DOD the battery might do 150-200 cycles, but a few slight over-discharges might reduce that to single digits. Also, the unit specs say it can run off a 6V-13.2V input. Would there be any benefit to using a step-down buck converter to reduce the voltage? The unit appears to have linear on-board regulators - so yes, using a step-down converter would be of benefit since it would waste less power and provide longer run time. Assuming 12 V in and 6 V out, the battery could last nearly twice as long. A 6 Ah battery should be able to do ~60 hours with 50% reserve capacity. The buck converter will have a quiescent current draw that reduces efficiency at low current. You should choose a converter that has high efficiency at around 88 mA. I've had problems in the past with regulators interfering with radio signals, and the goal is to maximize the read-range of the antennas for 134 kHz RFID tags. It might be an issue, but probably not if the regulator is positioned away from the RFID coils. Preferably use a regulator which switches at above 134 kHz.
Finally, is the best way to connect it just to solder wires to the V+ and V- inputs? Or is it better to use some kind of socket-like adaptor? Soldered wires are more reliable (if properly soldered), but a connector is more convenient if you ever need to disconnect the cable for some reason. If the environment introduces significant vibration or moisture then soldering is recommended, though it would be better to eliminate those effects.
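A rough runtime estimate in Python, following the reasoning in the answer above; the 85% converter efficiency and the 50% usable-capacity derating are assumed illustrative figures, not datasheet values:

capacity_Ah    = 6.0
load_V, load_A = 6.0, 0.088   # run the logger at 6 V through a buck converter, as suggested
batt_V         = 12.0
efficiency     = 0.85         # assumed buck efficiency at ~88 mA
usable_frac    = 0.5          # stay at or above ~50% depth of discharge

batt_A  = load_V * load_A / (batt_V * efficiency)   # current drawn from the 12 V battery
runtime = capacity_Ah * usable_frac / batt_A
print(f"battery current ~{batt_A*1000:.0f} mA, runtime ~{runtime:.0f} h")

This lands close to the "~60 hours with 50% reserve capacity" figure quoted above; connecting the battery directly (no buck) roughly halves that.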
H: Opamp driving 24V signal through long wire I have an opamp working as a comparator that creates a 24V pulse signal (approx. 400ms duration) that needs to go through a wire of 10-20m length. Is a normal opamp capable of driving this or is there a certain common circuit to achieve this? AI: Most op-amps will effectively drive a capacitive load providing there's a series output resistor used. Given also that your application doesn't involve negative feedback (you are operating as a comparator) there should be realistically no problems of instability leading to oscillations to worry about. Depending on the op-amp and the type of load at the end of the cable and what sort of attenuation you can tolerate, I'd use a 220 ohm resistor in series with the op-amp output. It would help if the op-amp is known though. It's a 24V digital input of a controller so I'd assume there's not much load. I just have no experience with this kind of signal transmission. OK, a couple of things you need to be aware of here: - if the op-amp power supply is 24 volts then the maximum op-amp output (without a more specialist choice) will be about 22 volts for a regular non-rail-to-rail device Assuming above, the minimum output might only be 2 volts above ground You need to decide whether this is acceptable and if not either choose slightly different power rails for the op-amp or choose a rail-to-rail device I'd also be tempted to put a unidirectional TVS diode across the output that feeds the cable (output to local 0 volts) You also want to avoid ground loops so if your op-amp power supply is grounded and the digital controller 0 volts is also grounded, you might need to take precautions to avoid ground loops circulating any ground fault currents from machines locally through the interconnecting cable. This might mean opto-isolating the op-amp output.
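A back-of-envelope Python check that the suggested series resistor is still fast enough for a 400 ms pulse; the ~100 pF/m cable capacitance is an assumed typical figure, not a measured one:

R_series = 220.0            # ohms, series resistor suggested in the answer
C_cable  = 20 * 100e-12     # 20 m of cable at an assumed ~100 pF/m
tau = R_series * C_cable
print(f"tau ~ {tau*1e9:.0f} ns, edge settles in roughly {5*tau*1e6:.1f} us (negligible versus 400 ms)")

Even with a ten-times-larger resistor or a higher-capacitance cable the edge would still settle in tens of microseconds, so the 220 ohm resistor costs essentially nothing here.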
H: Help identifying an encryption accelerator I believe this was released around 2010. Pin configuration appears to be 8x8. I was able to get it hooked up. Hardware ID: PCI\VEN_104A&DEV_7007&SUBSYS_00000000&REV_21 Vendor 104A is indeed STMicroelectronics. Thank you! AI: Some help narrowing down the possible supplier: if it isn't a fake device then you should look at STMicroelectronics as the likely supplier. If you know the number of pins (unclear in your picture) you might also narrow the search down, because it looks like a QFN package.
H: Mosfet load control - different design compared I've seen this design (Circuit 1) in a forum for driving a load using a MOSFET, and I'm wondering how it differs from the "well-known" MCU-driven MOSFET loads like the second one in the image below (Circuit 2). What are the advantages/drawbacks of using the first design over the second? AI: Answered after the OP modified the question. In both designs, the object is to apply sufficient voltage between gate and source to properly turn the MOSFET on and then, when turning the MOSFET off, have 0 volts between gate and source. Both designs can do this providing the load impedance is quite high. This is because of the choice of MOSFET being a 2N7002: - With only 3.3 volts as the maximum gate-source drive voltage, the 2N7002 has a pretty poor on-characteristic for loads that might take in excess of 100 mA. As can be seen in the characteristic above with a load current of about 100 mA, the voltage dropped will be at best around 0.7 volts (orange circle) and possibly 1 or 2 volts for non-typical devices. However, if the 2N7002 were assumed to be "suitable" (not proven by any means), the only thing to consider is that circuit 1 has to be able to "sink" the load current when its signal goes low - this might be a showstopper in many situations so, how much current can the "DIGITAL_IN" signal sink? If it can sink 100 mA without much volt drop then it's probably OK.
H: What will be the effect of adding a 10k load resistor to this circuit? Background When ignoring the 10k resistor: When Vsupply = 28V, I have worked out that the current through ZD1 = 4.95mA. When Vsupply = 14V, I have worked out that the current through ZD1 = 4.9mA. Question When the 10k resistor is included: How will things change when the current in R4, which is a 10k resistor, is included? I am trying to figure out the current variations in ZD1. Therefore, I want to know if: 1 : There will be no current variations through ZD1. 2 : The current variations through ZD1 will be bigger. 3 : The current variations through ZD1 will be smaller. My Answer I think that the current variations will be smaller because the 10k resistor will take away some of the stress that the diode ZD1 is taking. Please let me know if this is correct, and if it isn't please help me understand why. Thank you. AI: The circuit attempts to deliver 5 mA of current into whatever is connected between the collector and ground (presumed to be the lower node in the circuit). This current will vary a bit with a changing power rail (Vsupply) but I have assumed 5 mA. ZD2 (a 5.6 volt zener) ensures that there will be about 5 volts across the emitter resistor R1 (1 kohm). This means that with a fairly decent accuracy, the current into ZD1 (the collector current) will be about 5 mA. Given that ZD1 is a 5.6 volt zener and that a 10 k resistor is then placed across it, the current into the 10 k resistor will be 0.56 mA. So, now the ZD1 current will be 5 mA - 0.56 mA = 4.44 mA.
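The arithmetic from the answer as a few lines of Python, using only the values stated above:

I_total = 5.0e-3          # collector current set by ZD2 and R1, from the answer
Vz, R4  = 5.6, 10e3       # ZD1 zener voltage and the added load resistor
I_R4  = Vz / R4           # current diverted into the 10 k resistor
I_ZD1 = I_total - I_R4
print(f"I_R4 = {I_R4*1e3:.2f} mA, I_ZD1 = {I_ZD1*1e3:.2f} mA")

Because the roughly 5 mA total is set by the current source rather than by the supply, the 10 k resistor mostly just steals a fixed 0.56 mA from ZD1; the variation of ZD1 current with supply voltage stays essentially the same as before, since the 10 k branch current is fixed by the zener voltage.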
H: The potential drop in a pn junction diode In the picture shown above, I was trying to learn about diode equivalent circuits. I am not able to understand the direction of Vb taught by the instructor here. The question is: if the n region has more potential than the p-region, shouldn't the direction of the battery be reversed? AI: The question is: if the n region has more potential than the p-region, shouldn't the direction of the battery be reversed? The diode will only conduct one way (ignoring reverse leakage). The diode in your schematic is an 'ideal' diode but real ones require some forward voltage to 'crack' them open. This is represented by the voltage source in your model. Here's a model discussing an LED rather than your silicon diode but the principle is the same. Figure 1. An LED can be approximated as a resistor with a fixed voltage source. Image mine: 'Resistance' of an LED. LEDs do not have a linear relationship between current and voltage so they cannot be modeled as simply as a resistor using Ohm’s Law, V=IR. We can, however, make a simplification and model them over a range of currents as a combination of a resistor and a voltage source. If we look at a typical LED IV curve we can see that it is approximately linear over much of its useful range. This allows us to model the LED as a resistor and voltage source. You might also find the analogy below useful in understanding the diode voltage drop. Figure 3. The diode check-valve analogy. See the linked article for more.
H: Difference between always @ block and @ statement in Verilog I have not come across the statement @(posedge clk) before. I would like to know how this statement is different from the always @(posedge clk) block which I am familiar with. If I use a sensitivity list in a task for Finite State Machine synthesis without an always block, how is this different from a task with an always block and a sensitivity list? As I understand it, both @ and always @ have the same functionality, as both of them are always sensitive to the changes in the sensitivity list. Are there any differences with respect to simulation or synthesis? AI: The always @(posedge clk) statement is actually a combination of two statements: The always procedural block: always ... begin //Body of 'always' block end And a sensitivity list: @(posedge clk) - At the positive edge of clk @(signal or signal) - Any change in listed signals @* - Any change to any signal used as an input to the block When using Verilog for testbenches/simulation (this is not just SystemVerilog behaviour), you can use the always block on its own, for example always begin #5 clk = !clk; //Create a clock of period 10 units end You can also use the sensitivity list on its own, in the form of a Procedural Timing Control: initial begin @(posedge clk); //Wait for a pos-edge of the clock reset = 1; //Assert reset repeat(2) @(posedge clk); //Wait for two pos-edges of the clock reset = 0; //De-assert reset end Essentially this allows you to perform a sequence of test events based on, say, a clock edge. As far as I'm aware you can't use the procedural timing control for synthesis - instead a state machine or similar would be built within an edge-sensitive always block.
H: How to determine regulator's component values? The final goal is to have an output (u0) of 22 V. The only information given is the uBE value of 0.7 V and the maximum load current of 1 A. I'm having a hard time determining basically every value involved in this circuit. It would be nice if someone could at least tell me where to start analyzing or something like that, since I can't find a circuit that looks like this anywhere. AI: I will get you started with a procedure but you will need to choose appropriate parts and do the calculations. 1. Choose a zener. Perhaps we might like to use one of these parts. Applying my engineer's eye, I would personally choose the 6.2V part (but that is simply because of the specifications - anything below your target output voltage would do). The current through \$R_{REF}\$ must be sufficient to keep the zener in proper regulation plus the current through P1. That current (for the zener) is specified in the datasheet. 2. Choose P1. A 10k trimpot would work nicely here (but as we are looking at an ideal amplifier it could easily be much larger). As the datasheet linked shows the current for which Vz is specified, and the current through P1 will simply be Vz / P1, you can now calculate the maximum value of \$R_{REF}\$ if we know the minimum input voltage. 3. Find the minimum input voltage. The minimum input voltage will be \$V_{out} + V_{BE(Q1)} + V_{RCC} + V_{CE(sat)(Q1)}\$. I would assume \$V_{CE(sat)}\$ to be 0.2V and \$V_{RCC}\$ will be a Vbe bias at current limit, so you can now find the minimum input voltage. 4. Set the output voltage. Assuming that the value at the non-inverting input of the error amplifier is Vz when at the maximum output voltage, you need to choose R4 and R5 such that the voltage across R4 is equal to Vz. So from the voltage divider equation, \$ Vz = \frac {{V_o} \cdot {R4}} {R4 + R5}\$; note that the ratio of R4 and R5 is what is important. From that, you should now be able to solve; keep in mind that there are multiple solutions given what you have shown us.
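A short Python sketch of step 4, using the 6.2 V zener suggested above; the 10 k value for R4 is an arbitrary starting point (only the ratio matters), and the 22 V target comes from the question:

Vout, Vz = 22.0, 6.2
R4 = 10e3                      # chosen arbitrarily
R5 = R4 * (Vout / Vz - 1)      # from Vz = Vout*R4/(R4+R5)
print(f"R5/R4 = {R5/R4:.2f}, so R5 ~ {R5/1e3:.1f} k for R4 = 10 k")

This gives a ratio of about 2.5, i.e. roughly 25.5 k for R5 if R4 is 10 k; fine adjustment of the output then comes from P1 on the reference side, as described above.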
H: Systemverilog size attributes Suppose we have a 2d Systemverilog array declared as: logic x [0:3][7:0] ; How can we use an attribute to get the width of the first dimension ? How can we use an attribute to get the width of the second dimension ? Is there an attribute equivalent to VHDL's "range" attribute ? I.E: one that'll return not the size but the actual range ( 0 to 3 ) or ( 7 down to 0 ) ? AI: The IEEE Std 1800-2017, section 20.7 Array query functions, describes all that you need: module tb; logic x [0:3][7:0] ; initial begin for (int i=1; i<=$dimensions(x); i++) begin $display; $display($size (x, i)); $display($left (x, i)); $display($right(x, i)); $display($low (x, i)); $display($high (x, i)); $display; end end endmodule Outputs: 4 0 3 0 3 8 7 0 0 7 See also System Tasks And Functions Part-II
H: Why do x amp rated consumer electronics often come with >> x A rated BS1363 fused plugs? I often find that consumer electronics from reputable brands come with fused plugs that are rated significantly higher than the actual product. I would have thought that the ideal fuse would be \$x+\epsilon\$ where \$x\$ is the product's expected maximum draw. Obviously, that's not practical, but frequently I find the fuses are larger than they 'have to be', that there's a standard size between what they shipped and \$x\$. Is it just being cheap, using whatever's available (even if it's actually more expensive than the better choice, I suppose it may be 'cheaper' by availability)? Surely the greater difference between the fuse and \$x\$ the less point there is in having it at all. Or is it more complex than I realise, should I prefer whatever was shipped over one closer in rating even if I happen to have it? AI: The purpose of the fuse is not to protect the product, it's to protect the wiring — to keep it from getting hot enough to start a fire. The fuse is rated according to the gauge of the wire. The only connection with the actual power consumption of the product is that the wire gauge must be selected to support the expected power draw. The same thing is true for the circuit breakers in your house. They exist in order to keep the branch circuit wires from overheating in the walls. In most jurisdictions, the minimum permitted wire size is established by regulations. For example, in the USA, the minimum gauge for house wiring is AWG14, and the minimum gauge for product line cords is AWG18.
H: Obtaining 4-4.2V rail from 5V output of LM7805 voltage regulator I'm creating a numitron clock with 4x IV-9 tubes (seven-segment filament displays) and have decided to illuminate them using shift registers controlled by an Arduino. Each segment takes a voltage of 4-4.2V and around 17-20mA of current. The voltage I have available is the 5V output from an LM7805 voltage regulator IC. I need to power 4 high-current shift registers (TPIC6B595), each with an expected maximum output current of 200mA. Owing to this, the 4-4.2V rail needs to be able to handle a maximum current draw of around 800mA. I was considering using a simple potential divider, however I calculated that the required power losses greatly exceed what the standard resistors I have available can handle. I was also considering using a diode to drop the voltage, however the forward voltage is a function of the current; therefore as the current can vary from the extremes of 0mA to 800mA I did not think a diode would be appropriate. Any suggestions on how to obtain a 4-4.2ish V rail would be much appreciated. Many thanks. AI: Replace the 7805 with a 1 A or more adjustable regulator and set it to 4.2 volts. Run both the Arduino and everything else from that instead of running it off the 5V. The Arduino will be fine with 4.2 volts without any issue, and it makes it easier since you don't have to worry about level translation for the input lines either.
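If the adjustable regulator ends up being an LM317-type part (an assumption - any adjustable linear regulator with enough current rating works), the output is set by the usual two-resistor formula, sketched here in Python:

Vout, Vref = 4.2, 1.25         # LM317 reference is nominally 1.25 V
R1 = 240.0                     # conventional value for the resistor from OUT to ADJ
R2 = R1 * (Vout / Vref - 1)    # Vout = Vref * (1 + R2/R1), ignoring the small ADJ current
print(f"R2 ~ {R2:.0f} ohm (use ~560 ohm and trim, or a small pot)")

With 800 mA of load the pass-element dissipation is (Vin - 4.2 V) x 0.8 A, where Vin is whatever currently feeds the 7805, so check whether a heatsink is needed.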
H: TIVA TM4C123G check clock frequency with oscilloscope I'm trying to configure system clock in the TIVA TM4C123g board. I'm aware that this clock signal goes to a pin called OSC0 which, according to this, is the pin number 41 (see page 22). According to the same document, that pin is supposed to be in the PCB's U1 module, which can be found in Component Locations. I have found the module in my board: . However, there are no debug pins connected to it. Am I supposed to measure directly the pins that come out from the chip? Isn't it dangerous? (you could touch two of them with the probe, causing a short). How can I measure the system clock frequency so I can be sure I have configured it correctly? AI: You have misread the schematic which you linked. The part you have circled is actually U2, the debug interface comprised of an additional MCU, with its own clock circuit, while U1 is the target MCU placed in diagonal orientation in the middle which runs your program. While some similar boards save a little money by feeding a clock output from the debug MCU to the target, the schematic you linked shows that yours does not. Rather, on your board, the target U1 is clocked from its own external 16 MHz crystal, or a lower frequency watch crystal, or likely internal oscillators, likely starting from the latter and then shifting to another option under software control. More importantly, in common usage it would actually not be directly clocked by these, but rather by the output of an internal PLL multiplier configured by your code. Generally speaking, if you perform the right configuration options (ie using a vendor library) and do not do something unexpected (like call the configuration routine when it is already running on the PLL) these "just work" and you typically would verify that they are operating by reading status bits with software. If you really wanted to look at the external clock frequency with a scope, try probing C32 or maybe C31 with a 10x probe to minimize disruption of the circuit. Generally though it would be better to configure some clock-dependent output and measure the frequency of that - a very obvious case being a serial UART you want to use for debug messages anyway, or perhaps a timer peripheral. Typically this is also downstream of the PLL, though clock trees on modern MCUs can be rather complex.
H: How does this old fan regulator circuit work? I was on the lookout for an old fan regulator. I had heard that its power consumption was constant at all speeds. I could lay my hands on this 1970 regulator. It appears to be a transformer with rotary switch tap selection. I am curious to know about its function, especially with respect to the power consumption at different speeds. AI: That is either an autotransformer or a series inductor used to reduce the motor speed by reducing the voltage, as explained in this question: Speed control for PSC induction motor. The mechanical torque required to turn a fan is proportional to the square of the fan speed, as shown by the red curve in my answer to the above question. Power is torque multiplied by speed, so the power required by a fan is proportional to the cube of the fan speed. So if the speed is reduced to 70%, the power is 0.7 cubed or 34 percent of the power required at full speed. However the efficiency of the motor is lower when the speed is reduced, so the electrical power saved is quite a bit less. However even the least efficient method of reducing fan speed is likely to save some power. Using an autotransformer or inductor to reduce the voltage is not the least efficient method.
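The fan affinity laws used in the answer, as a two-line Python check:

for speed_fraction in (1.0, 0.7, 0.5):
    print(f"{speed_fraction:.0%} speed -> ~{speed_fraction**2:.0%} torque, ~{speed_fraction**3:.0%} shaft power")

So the claim that this style of regulator draws constant power at all speeds does not hold for the fan itself: at reduced speed the shaft power falls steeply, and only part of that saving is eaten by the lower motor efficiency and the losses in the autotransformer or inductor.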
H: Purpose of 1000uF capacitor in input The board is connected to the power supply with a 50-100m cable. There is C1, a 1000uF 50V capacitor, at the input of the board, next to the power connector. The schematic of the power circuit is below. I think it is for bypass or decoupling. It is physically really big, so I am wondering about its purpose and whether there is a different solution instead of the 1000uF cap. Also, 20-30 boards can sometimes be connected on the same power line. All boards are the same. Does that have any impact on the purpose of that cap? Thanks. AI: One of the purposes of C1 is to help damp out input voltage overshoot. The 50 to 100m cable has a lot of series inductance. If you were to put, for example, a 10uF ceramic cap in place of C1 you will probably have a very large voltage overshoot when you first connect the regulator. Very possible to blow up the regulator the first time you plug it in. I do not know if 1000uF is needed. But if you make any changes, make sure you test for overshoot. Electrolytic caps generally don't overshoot as much as ceramic because their effective series resistance provides damping. It is also possible to add a resistor in series with a ceramic cap to get the same effect (but you may have trouble finding a 1000uF ceramic cap!). Also, if you add a fuse or PTC to the circuit, that may provide some damping. The amount of resistance needed for damping may be less than an ohm. Simulation can be done if you have very accurate models. Otherwise you can test it and monitor the overshoot with an oscilloscope. Application Note 88 by Goran Perica of Linear Technology goes into a lot of detail about input voltage overshoot.
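A very rough Python estimate of the resistance scale involved, in the spirit of the application note mentioned above; the ~1 uH/m cable inductance is an assumed typical figure for a long two-wire run, not a measured value:

import math
L_cable = 100 * 1e-6             # 100 m of cable at an assumed ~1 uH/m
C_in    = 1000e-6                # the existing 1000 uF input capacitor
Z0 = math.sqrt(L_cable / C_in)   # characteristic impedance of the input L-C loop
print(f"Z0 ~ {Z0:.2f} ohm")

Damping resistance on the order of Z0 (a few hundred milliohms here) is what tames the hot-plug overshoot, which is why the ESR of a large electrolytic does the job while a low-ESR ceramic of the same value might not.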
H: What causes my computer fan to stop briefly before resuming spinning? I'm building a small fume extractor for soldering. I have a leftover desktop fan I'll use. I was unsure about the voltage needed, so I hooked it up to my power supply and stepped up the voltage until it started spinning (around 6V but I think most of these fans are 9-12V nominal if I recall). It's drawing 30mA at 9V right now. If I place my finger on it to arrest the movement the current draw drops to 0. When I release it the fan rotates quickly to some location which I assume is the armature aligning with the magnet. Now, though, it sits stationary for about 2 seconds before resuming spinning. What's happening in those 2 seconds before it resumes spinning? AI: Compare a typical BLDC fan controller IC. The delay before a restart attempt after 'mechanical lock' is detected and coil current is shut off (Toff) is nominally 2.8 seconds.
H: Sensor data transmission without using a microcontroller? What is the most traditional approach to getting sensor data onto a radio from an embedded device pcb that doesn’t utilize a microcontroller? Up until now, we’ve been utilizing a convenient firmware on the radio that accesses discrete GPIO data directly wired to the radio’s IO and it automatically constructs everything on the UART for us. Now we need to load information that is not within the sampling capability of the old radio, but utilizes the same traditional UART for sending commands and streaming data. I know there are probably standalone UART IC’s that I assume are serial setting configurable. Is this recommendable, or should I not waste my time? The benefits to no uC would be no need for new code and less strain on the PCB layout redesign, if we even have the space. I would likely need bidirectional capability later. Thanks AI: The only remotely sensible solution is to use an MCU. That was true last week when someone asked a remarkably similar question, and it remains true today. MCUs are cheaper and more flexible than all but the simplest combinations of building block logic, and have the added benefit of drastically expanding the sorts of radios and radio protocols which you can consider, letting you add checksums or cryptographic security, letting you go to low power modes in between readings, letting you build in self-test diagnostics, readily tag transmissions with a unique ID, even giving you the option of storing sensor calibration in the device rather than a database... An MCU would also typically make the PCB layout easier rather than harder, because at least if your layout and firmware people are on speaking terms, you can often put functions on any one of a number of pins in order to make the layout simple; conversely with building block logic or special function ICs you often have to use one particular pin without a choice.
H: Thevenin voltage (Voc) with a resistor in the way? simulate this circuit – Schematic created using CircuitLab I'm still having trouble with how Thevenin voltages are obtained. Above is an example with designated terminals (red). What if there is a resistor in the way (R3)? Is Voc only R2/(R1+R2)? How can R3 not matter? How can a voltage be obtained, even if "open", without closing the circuit? AI: Your calculation of the Thevenin voltage is correct. R3 does not affect the result since no current is flowing in R3 and hence there is no voltage across it. However, it will affect the Thevenin resistance, so it will affect the voltage across a load connected to the red terminals. As for your second question, the Thevenin voltage is defined as the open circuit voltage at the load terminals so it will appear there in series with the Thevenin resistance. Voltages do not require closed circuits, currents do.
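A tiny Python sketch of the two quantities discussed above; the component values are placeholders (the actual schematic values aren't reproduced in the text), so substitute your own:

V1, R1, R2, R3 = 10.0, 1e3, 2e3, 470.0   # placeholder values
Vth = V1 * R2 / (R1 + R2)         # R3 carries no current with the terminals open
Rth = R3 + (R1 * R2) / (R1 + R2)  # but R3 does add to the Thevenin resistance
print(f"Vth = {Vth:.2f} V, Rth = {Rth:.0f} ohm")

The split mirrors the answer: R3 drops out of the open-circuit voltage but reappears, in series with R1||R2, as soon as a load draws current.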
H: 555 timer as a voltage controlled oscillator, duty cycle query I'm trying to create a square wave generator using a 555 timer, but I noticed that due to the pulse width modulation as I vary the control voltage, the duty cycle changes, so I was wondering if there is a way to ensure the duty cycle stays at 50% regardless of the control voltage input. If anyone could point me to the right website or book that would be great. This is the basic circuit I'm using; source: http://www.learningaboutelectronics.com/Articles/Voltage-controlled-oscillator-VCO-circuit-with-a-555-timer.php AI: To get 50% duty cycle you need to use a flip-flop. The idea is that you run your 555 at twice the frequency and use its output as the clock for the flip-flop. The flip-flop, if it is a D-type, would have its Q-not connected back to the D input so that it toggles state on each clock input, thus effectively dividing the 555 output by 2. If the flip-flop is a JK type then just tie both J and K inputs to a high level and it will then toggle state on each 'clock' from the 555 chip. This again results in a frequency at half what the 555 is running at. Another thing to consider: since flip-flops often come two to a package, you can cascade two of them to do a net divide by four. You could then run your 555 at four times the needed frequency. This may allow you to use smaller capacitor values at the 555 and may actually improve the control voltage behavior.
H: Degradation in performance of pump I recently purchased a DC pump rated 3 to 6 V. On connecting it to a 9 V battery (a cheap one) it works well for about 5 minutes of continuous running, after which its performance degrades to a level where it's useless. I don't understand: is there a fault with my 9 V battery or with the pump? I did check the voltage of my battery after 5 minutes and it gave me a reading close to 7.8 V, but my pump barely worked. AI: Are you talking about a pump like this in the photo? It's rated (as advertised on AliExpress) to use 130-220 mA, which is too much for a 9V battery. At 9V the pump will draw even more current than this. A 9V battery can supply about 450 mAh, but it will be more comfortable with a lower current. This behaviour that you are describing (~5 min until the performance degrades) seems to be normal when using this battery. Probably the 7.8V reading you talk about was taken with the pump off, but with the pump on the voltage could be even lower. There's no fault with the pump or the battery, maybe it's just that they're not compatible. I suggest you try another power source for your project. A wall wart from an old cell phone could be perfect. If you really need to use batteries, try using a batch of three or four AA cells. By the way, those pumps are rated with a very low lifespan, just about 500 hours.
H: Transfer function of a second order capacitive accelerometer In order to find the transfer function of the accelerometer MMA7361L based on the information given in the data sheet, I first made the hypothesis that it is a second-order sensor, since it has a resonance frequency, so the frequency response would look like this. But when I came to express the natural frequency and the damping factor, I expressed them in terms of \$F_c\$, the cut-off frequency, and the resonance frequency \$F_r\$. However, in the data sheet the cut-off frequency is 400 Hz and the resonant frequency is 6 kHz, which confuses me because the cut-off frequency should come after the resonance frequency. Did I start from a wrong idea? Is there other information that could help me get to the natural frequency and the damping factor of my accelerometer? AI: Your approximation of the sensing element might work (although there is not enough data in the datasheet to back it up), but you are missing the filtering that is being done (the cutoff frequency you mention is for the filter, not for the sensing element). The capacitive filter filters the output of the sensing element and removes those high frequency components, but since the resonant frequency might have a really high peak (the datasheet doesn't mention the peak value) it might still show up fairly well in the output of the whole accelerometer. Here is a quick sketch of what I mean by that.
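A small numpy sketch of that cascade - a second-order sensing element at 6 kHz followed by a first-order 400 Hz output filter; the damping ratio is not given in the datasheet, so the value below is purely an assumption for illustration:

import numpy as np

fr, fc = 6e3, 400.0      # element resonance and filter cutoff, from the datasheet
zeta   = 0.05            # assumed damping ratio of the MEMS element (not a datasheet value)
wn, wc = 2*np.pi*fr, 2*np.pi*fc

f = np.array([50, 200, 400, 1000, 3000, 6000, 12000], dtype=float)
s = 1j * 2*np.pi * f
H_sense  = wn**2 / (s**2 + 2*zeta*wn*s + wn**2)   # 2nd-order sensing element
H_filter = 1 / (1 + s/wc)                         # 1st-order low-pass at the output
for fi, h in zip(f, H_sense * H_filter):
    print(f"{fi:7.0f} Hz : {20*np.log10(abs(h)):7.1f} dB")

With a small assumed damping ratio the 6 kHz resonance still appears as a local bump even though the 400 Hz filter has already rolled the response off, which is the point the answer is making.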
H: Do PMOS Reverse Input protect on ORed power rail As the title says, I have a power mux IC and two power inputs. I need them to be protected from reverse voltage input. The power mux I have already comes with reverse voltage input protection, but I need the second input voltage for another IC and it's not always the case that the second input is selected by the PMIC. So I must protect it beforehand. Typically a PMOS does the trick, but I do not know if it will work when there are two inputs that share the same ground. If one of the inputs is powered and the other is wired correctly, what would happen? AI: Yes, a PMOS can provide high-side reverse polarity protection. However, the way you have drawn your circuit is a bit backwards. You need to swap source and drain. See this question for a schematic. Also, note that the gate-to-source voltage on the PMOS will be equal to the input voltage. So you need to select a PMOS where Vgs max is greater than Vinput max and Vfused max. Otherwise the PMOS may fail.
H: Using a DS3231 RTC alarm + MOSFETs to turn on an MCU I'm using a DS3231 in conjunction with two IRLZ44N MOSFETs to switch power to an MCU when the alarm on the DS3231 triggers. The alarm on the DS3231 pulls the SQW/INT pin to ground and turns on the MCU using the IRLZ44N FETs. When the MCU resets the alarm on the DS3231, the power to the MCU is killed. I'm using this to turn on an Arduino or an ESP32 at a precise time, schedule the next wakeup time, and then shut down. I don't anticipate pulling more than 1A with my projects that would use this circuit (probably less than 1A). I want to be able to use this circuit with 3.3V and 5V MCUs. I've tested with both 3.3V and 5V and it seems to function fine. The datasheet for the IRLZ44N looks like it supports both voltages for controlling the gates on the FETs. I'm a beginner and I would really appreciate feedback on this circuit. Hopefully I'm not way off! Here are my questions: Is this design reasonable? Or are there better solutions? Any glaring issues? Are the IRLZ44N MOSFETs a good choice for this application? Are my 100k pullup resistors reasonable values? Is this a common pattern for converting something being pulled to ground into switching something else on? Does this sort of thing have a name? Here is my schematic: AI: This is a less than desirable design because it switches the GND line of the MCU subsystem. A far more suitable design would switch the VCC of the MCU section and keep all the GND reference connections tied together. Switching of the Vcc in the situation shown for a 3V setup could also be a bit simpler in that the job could be done with just one P-channel MOSFET. The P-channel MOSFET would need its GATE pulled to GND to turn it on, and that is just what the RTC chip does on the SQW/INT pin.
H: Can I replace a 32 A switch with a 32 A MCB? A switch for a water heater in my apartment is broken (image attached). It is an "Anchor Roma 32 Amp 1-Way Switch with indicator". Can I replace it with this or this switch? Would the installation be similar or the same? Background: I am a software engineer; I am not an electrical engineer but I have replaced switches 8-10 times in the past. However I have always purchased the exact same switch. Should I be paying attention to anything other than the amperes? I am in a highly affected (by COVID-19) red quarantine zone in Mumbai, India and the switch I am replacing is unavailable. I am also unable to get an electrician because my block has been cordoned off. AI: It won't fit in the box. This means it will be difficult to operate, and will leave parts exposed that should not be. The 'D. P.' likely means that it is a double pole switch - you should replace it with a double pole switch. It looks like you can buy an exact replacement from the same place for similar cost. You should do this instead.
H: Oscillator made from an RC network I would like to understand how the oscillator of this PWM controller works, and what would happen if I do not apply an RC network but instead directly apply a PWM signal generated from a microcontroller, for example. It is indicated in the datasheet (https://www.ti.com/lit/ds/symlink/uc3845.pdf?ts=1590914073928) that the capacitor is charged from the reference voltage Vref until the voltage across the capacitor reaches an upper threshold, and then the capacitor is discharged via an internal current source. Nevertheless the oscillator as shown above generates pulses. When the capacitor is discharging, we can think that a pulse is generated. It could be the inverse... I would like to know exactly how it works, i.e. when is the pulse generated? Is it when the capacitor is discharging or when it is charging? Now, suppose I directly inject a PWM signal into the RT/CT pin, like this: simulate this circuit – Schematic created using CircuitLab When the PWM signal rises (very rapidly, as it is a rectangular waveform) to the upper threshold, the controller will draw current (to discharge the capacitor), which will be provided by the microcontroller; then the PWM signal will be low, and the PWM controller will stop drawing current and wait for the PWM signal to rise again before drawing current again. So it seems necessary not to insert a resistance, as it could introduce a voltage drop when the PWM controller draws current, and then the PWM controller might not be able to see the different voltage levels of the PWM signal. Does anyone know how it is built internally, and whether what I want to do is possible? Thank you very much and have a nice day. AI: If you are trying to control the oscillator externally, use a sync pulse as shown in the data sheet. This is also described here. There are several ways that the oscillator can work and this is probably the basis (picture from here). If it has an offset duty cycle, i.e. it produces more of a pulse than a square wave, it may have an internal diode and resistor to forcibly shorten the time taken to charge or discharge the capacitor, like this (picture from here). \$R_d\$ and its series diode may be inside the chip. There isn't a requirement for \$R_c\$'s diode if \$R_c\$ is >> \$R_d\$. There are a few other ways this can be done too.
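For the normal RC connection (rather than external injection), the UC384x datasheet gives an approximate free-running frequency of f ≈ 1.72/(Rt·Ct); a quick Python check with assumed example values, not values from the question:

Rt, Ct = 10e3, 4.7e-9      # example timing components
f = 1.72 / (Rt * Ct)
print(f"oscillator frequency ~ {f/1e3:.0f} kHz")

If you do want a microcontroller to set the frequency, the sync-pulse approach the answer points to - short pulses superimposed on the normal RC ramp so the ramp reaches the upper threshold early - is the usual way to lock the oscillator; replacing the whole RC network with a stiff logic-level square wave fights the internal discharge current source, as the question suspects.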
H: What is the purpose of this Schottky diode reversed across the BJT base resistor? In a design for the popular "gotek" floppy drive emulator, as per the standard, the outputs from the floppy emulator are open-collector. An STM32 is used as the processor. This drives a 74HC04, which then drives MMBT2222A NPN transistors through a 1k resistor as the open-collector outputs. On just one of these six drivers (the serial DATA_OUT from the floppy emulator), there is a Schottky diode across the base resistor. What is the purpose of it? My guess is that it causes the transistor's base capacitance to discharge quicker when the HC04 goes LOW, so this driver will switch off quicker than the others? (I'd think this level of timing precision wouldn't be needed for floppy drives.) On a side-note: I am developing an enhanced version for vintage computers (the original is very basic without modifications). I am trying to optimize the design. Based on looking at what other floppy drives use to drive the open-collector outputs (often a 7438), I should be able to use a 74LVC07, or a 74HC07 with 3.3V powering it, and do away with the transistors? Or am I missing something important? Or the safe option: use the original design (but use an HCT04 instead). A 5.25" floppy drive on the same open-collector "bus" can have 150-ohm pull-ups. This would cause significant current that would be pushing a logic IC, so I wonder why floppy drives get away with using a 7438 as the driver. AI: Your guess is correct, it is there to turn off the transistor faster. The speed does matter, as the bit rate is 250 to 500 kbps, and typically the low pulse length is approximately 500ns, at least in the 150ns to 800ns range. You can't use a 74HC07 with a 3.3V supply to interface a 5V bus. The LVC07 seems like a better choice. Anyway, a design with a 5V-powered 74HC04 driven with a 3.3V signal from the MCU is a bad design, but as many STM32 devices have 5V-tolerant IO pins, the MCU pins might be pulled up to 5V with a resistor so they can drive a 74HC04 input just fine.
H: How independent is the output voltage from the load resistor in this linear regulator? Consider the circuit below: So we know that the op-amp will try to keep \$V_{+}\$ equal to \$V_{-}\$ (the non-inverting and inverting input voltages of the op-amp). Knowing this we can calculate the output voltage: $$ V_{+}=15\times \frac {1k}{9.1k+1k}=1.48\; V $$ $$ V_{-}=V_{out}\times \frac {R_{2}}{R_{1}+R_{2}}=V_{+} $$ $$ \Longrightarrow V_{out}= V_{+}\times (1+\frac {R_{1}}{R_{2}})= 1.48\;V\times(1+\frac {R_{1}}{R_{2}}) $$ So as we can see, the output voltage is not dependent on the load resistor. But in practice we know that our circuit is not perfect, and if we make a significant change in the value of the load resistor, there will be a small change in the output voltage. Why does that happen? AI: There are two types of regulation specified in data sheets for regulators ... Line regulation - change of output when the input changes Load regulation - change of output when the load changes You are talking about load regulation. Your equations for the output voltage assume infinite gain in the opamp. Once you've written down the full equations including a finite gain, you'll find that the voltage at the junction of R1 and R2 will have to vary to provide the varying current output, and there's your change of output voltage.
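A minimal Python sketch of the finite-gain point made in the answer; the open-loop gain and the open-loop output resistance are assumed illustrative numbers, not values for a specific op-amp or pass transistor:

A_ol    = 1e5        # assumed open-loop gain of the error amplifier + pass device
Rout_ol = 1.0        # assumed open-loop output resistance, ohms
beta_fb = 0.1        # feedback fraction R2/(R1+R2); depends on the actual R1, R2

Rout_cl = Rout_ol / (1 + A_ol * beta_fb)    # closed-loop output resistance
dI = 0.1                                    # 100 mA change in load current
print(f"closed-loop Rout ~ {Rout_cl*1e3:.2f} mohm, so dVout ~ {dI*Rout_cl*1e6:.0f} uV per 100 mA")

The feedback divides the output resistance by the loop gain rather than eliminating it, so a change in load resistor still produces a small - but nonzero - change in output voltage, exactly as observed.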
H: Superposition of energy simulate this circuit – Schematic created using CircuitLab My current understanding is that, for superposition (V1 and V2 acting alone) to be used in the circuit above to calculate the energy dissipated in a given resistor, the following must apply: $$\int_{T_1}^{T_2} v_1\cdot v_2 \cdot dt \equiv 0$$ Is there an equivalent for current sources, i.e. replacing V1(t) and V2(t) by I1(t) and I2(t)? P.S. Is that last dot product (with dt) a typo? AI: Yes. A couple of things: generally, at every moment the power dissipated in a resistor R is \$i(t)^2 R\$ where \$i\$ is the current. If \$i\$ happens to be the sum of two components \$a\$ and \$b\$ caused by 2 separate sources in a linear circuit, the power is \$(a+b)^2 R\$. Expanding the square of the sum gives the power \$= (a^2+2ab+b^2)R\$. Dissipated energy in a certain time interval can be obtained by integrating. The integral really gives the sum of the separate energies of \$a\$ and \$b\$ if the product \$ab\$, as integrated, happens to give zero. In communication signal calculations we often sum the powers of independent sources or the squares of the RMS voltages just for this reason. The independence means zero correlation, which is the same as your "integral of the product must be zero" rule. The independence must hold for the summed current components that are caused by the different sources. There is a well-known case where the independence of the sources doesn't imply the independence of the current components in a load. That happens when 2 sources have a common frequency component which happens to have a 90 degree phase shift at the sources. At the load the phase difference can be different and that destroys the independence. BTW, the dot in the integral in the question means multiplication of scalar quantities. It's not an error. But often it's left out because we are used to writing multiplication with no operator. In exact rigorous math the dot in front of dt is meaningless. The integration symbol doesn't represent any multiplication by some infinitely small dt; the t after d only reminds us that t is the integration variable. Mr Gauss, Mr Cauchy and Mr Riemann removed all mystical infinitely small quantities from integration and presented it as a logically acceptable limit process. Many famous mathematicians before them really used infinitely small quantities in their work.
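A quick numerical illustration in Python of the cross-term argument above: two sinusoidal current components at different frequencies, integrated over a window containing a whole number of cycles of each:

import numpy as np

R = 1.0
t  = np.linspace(0.0, 1.0, 200001)      # 1 s window
i1 = 2.0 * np.sin(2*np.pi*50*t)         # component due to source 1 acting alone
i2 = 1.0 * np.sin(2*np.pi*60*t)         # component due to source 2 acting alone

cross   = np.trapz(i1*i2*R, t)          # integral of the a*b cross product - essentially zero here
E_total = np.trapz((i1 + i2)**2 * R, t)
E_sep   = np.trapz(i1**2 * R, t) + np.trapz(i2**2 * R, t)
print(f"cross term {cross:.2e} J, combined {E_total:.3f} J, sum of parts {E_sep:.3f} J")

The cross term collapses to (numerically) zero, so the energies add; repeating the experiment with both sources at the same frequency and a phase offset makes the cross term, and therefore the error in naive superposition of energy, reappear.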
H: Error while implemeting D_FF from JK_FF in Verilog I'm designing a D_FF from JK_FF. The circuit I'm implementing is , as shown in the figure :- Verilog Module Code :- `timescale 1ns / 1ps /* Conversion of a JK Flip Flop to a D Flip Flop Connections are "J = D" and "K = ~D" */ module d_ff_from_jk_ff (q,qbar,rst,clk,d); output reg q; output qbar; input rst, clk; input d; wire d_n; // Inverted version of D input // assign d_n = ~d ; jk_ff d_converted(q,qbar,clk,rst,d,d_n); endmodule /******************************************* JK Filp FLop Module ************************************/ module jk_ff(q,qbar,clk,rst,j,k); output reg q; output qbar; input clk, rst; input j,k; assign qbar = ~q; always @(posedge clk) begin if (rst) q<=1'b0; // Synchronous Reset // else case ({j, k}) 2'b00: q<=q; // Previous Value // 2'b01: q<=1'b0; 2'b10: q<=1'b1; 2'b11: q<=~q; // Toggle // endcase end endmodule Testbench Code :- `timescale 1ns / 1ps module testbench; // Inputs reg rst; reg clk; reg d; // Outputs wire q; wire qbar; // Instantiate the Unit Under Test (UUT) d_ff_from_jk_ff uut ( .q(q), .qbar(qbar), .rst(rst), .clk(clk), .d(d) ); initial begin clk = 1'b0; rst = 1'b1; #10 rst=1'b0; end always #5 clk=~clk; initial begin d = 1; #50; d = 0; #50; d = 1; #50; d = 0; #50; #100 $finish; end endmodule In the simulation output, the value of Qbar is changing as expected, but the waveform of Q is constantly held at a "X" (don't care) condition. Can anyone tell me where I've gone wrong and what correction is to be made ? AI: I get a compile error with Cadence. In d_ff_from_jk_ff, change: output reg q; to: output q; After I fix that, q is only X for the first 5ns, then I see it toggle between 0 and 1 thereafter. Here is the error I got: jk_ff d_converted(q,qbar,clk,rst,d,d_n); | xmelab: *E,RANOTL : A reg is not a legal lvalue in this context [6.1.2(IEEE)].
H: How do I go about finding the transfer function of this system with two inputs? This was how I tried to formulate the answer, but I've been told it's incorrect. $$R(t)=u(t)$$ $$D(t)=u(t)=1/s$$ $$K=10$$ $$H=1$$ $$G=\frac{K}{s(s+1)(s+2)} =\frac{K}{s^3+3s^2+2s} $$ If someone could help me understand how to tackle this system, that would be appreciated. AI: Considering your block diagram with the signs shown as in the figure below: The principle of superposition can be applied in this linear system in order to investigate the effects that the two inputs (regulation \$r(t)\$ and disturbance \$d(t)\$) have on the output. In this way, we have the two transfer functions in the \$s\$ domain: Doing \$d(t) = 0\$: $$ G_1(s) = \frac{Y(s)}{R(s)}$$ and doing \$r(t)=0\$: $$ G_2(s) = \frac{Y(s)}{D(s)} $$ Then, using the standard feedback formula: $$ G_1(s) = \frac{K}{s^2+3s+2+K} $$ With \$K=10\$: $$ G_1(s) = \frac{10}{s^2+3s+12} $$ For the \$G_2(s)\$ determination, note the negative sign on the input of block \$K\$: $$ Y(s) = -\frac{K}{(s+1)(s+2)}Y(s) + D(s) $$ Or: $$ G_2(s) = \frac{s^2+3s+2}{s^2+3s+2+K} $$ With \$K=10\$: $$ G_2(s) = \frac{s^2+3s+2}{s^2+3s+12} $$ The expression for the output is: $$ Y(s) = G_1(s)R(s) + G_2(s)D(s) $$ Finally, replace both \$R(s)\$ and \$D(s)\$ by \$1/s\$ (the Laplace transform of the unit step input \$u(t)\$). $$Y(s) = \frac{1}{s}$$ A simulation graph:
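A numerical check of the result above using scipy, stepping both inputs and summing the two contributions (valid because the system is linear); the transfer functions are the ones derived in the answer:

import numpy as np
from scipy import signal

G1 = signal.TransferFunction([10], [1, 3, 12])        # Y/R with K = 10
G2 = signal.TransferFunction([1, 3, 2], [1, 3, 12])   # Y/D with K = 10
t  = np.linspace(0, 6, 600)
_, y1 = signal.step(G1, T=t)
_, y2 = signal.step(G2, T=t)
y = y1 + y2
print(f"y just after t=0: {y[1]:.3f}, y at t=6 s: {y[-1]:.3f}")

Both values come out at essentially 1, consistent with the closed-form result Y(s) = 1/s: with both inputs being unit steps, the transients of the two contributions cancel each other.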
H: Removing the phosphor coating in a fluorescent lamp Any ideas on how to remove the phosphor coating on the inner surface of a fluorescent tube lamp, turning it into a UV lamp? AI: You'd have to open it, clean off the phosphor and then reseal it with the right pressure of mercury vapour intact. You'd also have to replace the envelope with a different kind of glass if you wanted UVC (germicidal) 254 nm light. Basically, forget about it.
H: Building a T flip-flop with enable and reset using only a JK flip-flop that has no enable or reset, and some necessary logic gates I am confused about whether I should use don't cares to fill in the J and K columns on the state table, or just use keep and change. If I use don't cares I will not get any Q inputs (as seen on the K-maps), thus I am not sure if I can design the flip-flop correctly or not without the Q. AI: You should not use don't cares for J and K. The reason for this is that J and K are not inputs in your circuit. Furthermore, as J and K are connected together, J = K. Don't cares are only for inputs. Reference: Wikipedia: Don't care term simulate this circuit – Schematic created using CircuitLab The above circuit is a T flip-flop using a J-K flip-flop and logic gates. It has an Active High Enable and an Active High Reset. The Reset is asynchronous.
H: Connecting Solenoid valve in the emitter of the transistor We are working on a circuit in which we are controlling a two-way solenoid valve (24V, 10W). We are using a 3.3V-based controller. The solenoid valve is connected to the controller via a TIP122 (NPN Darlington) transistor. When the solenoid valve was connected to the collector end of the TIP122, it was working fine. When the valve was connected to the emitter of the transistor, it was not switching ON. Why is there a discrepancy when connecting the valve on the emitter end? In the end the transistor has to turn ON and allow current to pass between the collector and emitter. Answer: When connecting the load between the emitter and ground, the VBE required to maintain forward bias is eaten away as voltage develops across the load. With roughly 0.7 V needed for VBE, as soon as the voltage across the load rises to within about 0.7 V of the base voltage there is no longer enough base-emitter voltage and the transistor stops acting as a closed switch. AI: Let's consider what the difference between the two topologies is... 1. Transistor switch (common-emitter stage). When you connect the load (solenoid valve in this case) between the collector and positive rail, and apply input voltage (through a resistor) to the base, the base-emitter voltage will increase because the emitter is firmly fixed (grounded). The transistor begins decreasing its collector-emitter "resistance"... and accordingly, its collector-emitter voltage. Finally both its "resistance" and voltage become (almost) zero (as they say, the transistor saturates or, more figuratively, behaves as a piece of wire)... and the whole supply voltage is applied to the load. 2. Emitter follower (common-collector stage). If you connect the load between the emitter and ground, when applying the input voltage the transistor begins conducting and its collector current will create an increasing voltage drop across the emitter resistor. Figuratively speaking, the transistor will "lift" its own emitter voltage until it makes it (almost) equal to the input voltage... and stops. So the maximum final voltage applied to the load is (almost) equal to the input voltage. The name of this phenomenon is "following negative feedback". Now you can continue to think in this direction to decide what to choose... Just a note: I have written "almost" but in the case of the TIP (a Darlington transistor) there is a significant voltage drop (roughly 1.4 V) across the base and emitter since there are actually two junctions in series. You can estimate how much voltage will remain for the solenoid...
H: Help with output impedance of a circuit We have to find the output impedance of the following circuit, where the output is the upper pin of R4. If I try to compute it with LTspice, I get about 55 Ohm, but whenever I try to work on it theoretically, I get 77 Ohm, and I don't know what I'm doing wrong. For the theoretic approach, I replaced the BJT with the T-model. I've asked a few other people, and we all get about 75 Ohm, if we replace with the T-model. Edit: For the LTspice simulation I used the following circuit, to compute output impedance: AI: We can use a small-signal analysis to find the output impedance. We also recognize the bootstrap capacitor \$C_2\$. But we are now interested in finding the output resistance. After some further simplification, we get this equivalent circuit. simulate this circuit – Schematic created using CircuitLab We can see that: $$V_{O} = (\frac{I_E}{\beta +1}+I_{R_3})R_{sig} + I_E\: r_e$$ $$R_{OUT} = \frac{r_e (R_3 + R_{sig}) + \frac{R_3 R_{sig}}{\beta +1}}{R_B + r_e}||R_E$$ Now for your circuit, we need to find the quiescent current. $$I_E = \frac{4.5V - 0.7V}{2k\Omega + \frac{20k\Omega}{\beta +1}} \approx 1.728mA...1.8mA $$ if we assume that \$\beta = 100..200\$ Thus now we can find \$r_e\$ we have \$r_e \approx \frac{26mV}{I_E} \approx 15\Omega...14\Omega \$ So, we have all we need to find output resistance. \$R_E = R_1||R_2||R_4 \approx 1.67k\Omega\$ And \$R_{out} \approx 117\Omega\$ for \$\beta = 100\$ Or \$R_{out} \approx 74.2\Omega\$ for \$ \beta = 200\$
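Evaluating the bias-point expressions from the answer in Python shows where the spread between the two Rout figures comes from; the rest of the Rout formula needs the schematic's resistor values, so only the part fully specified in the text is computed here:

for beta in (100, 200):
    IE = (4.5 - 0.7) / (2e3 + 20e3 / (beta + 1))   # quiescent emitter current
    re = 26e-3 / IE                                # small-signal emitter resistance
    print(f"beta = {beta}: IE ~ {IE*1e3:.2f} mA, re ~ {re:.1f} ohm")

The factor-of-two spread in beta barely moves re, so most of the difference between the 117 ohm and 74 ohm results appears to come from the explicit beta dependence in the Rout expression itself - which is also why a simulated value (about 55 ohm in LTspice) depends strongly on the transistor model used.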